hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9dadcf359a8cb4d146fb46a1fbc9d8cd9138f7b0 | 9,257 | py | Python | Machine-Learning-in-Production/00-Course-Overview.py | databricks-academy/ml-in-production | 1fd6713e18cfc36357f3a98d75fedc8ffbf9eedc | [
"CC0-1.0"
] | 14 | 2021-09-21T19:48:02.000Z | 2022-03-09T19:22:39.000Z | Machine-Learning-in-Production/00-Course-Overview.py | databricks-academy/ml-in-production | 1fd6713e18cfc36357f3a98d75fedc8ffbf9eedc | [
"CC0-1.0"
] | null | null | null | Machine-Learning-in-Production/00-Course-Overview.py | databricks-academy/ml-in-production | 1fd6713e18cfc36357f3a98d75fedc8ffbf9eedc | [
"CC0-1.0"
] | 5 | 2021-08-22T12:12:49.000Z | 2022-02-28T15:47:43.000Z | # Databricks notebook source
# MAGIC %md-sandbox
# MAGIC
# MAGIC <div style="text-align: center; line-height: 0; padding-top: 9px;">
# MAGIC <img src="https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png" alt="Databricks Learning" style="width: 600px">
# MAGIC </div>
# COMMAND ----------
# MAGIC %md
# MAGIC # Machine Learning in Production
# MAGIC ### Managing the Complete Machine Learning Lifecycle with MLflow, Deployment and CI/CD
# MAGIC
# MAGIC In this 1-day course, machine learning engineers, data engineers, and data scientists learn the best practices for managing the complete machine learning lifecycle, from experimentation and model management through various deployment modalities and production issues. Students begin with end-to-end reproducibility of machine learning models using MLflow, including data management, experiment tracking, and model management, before deploying models in batch, streaming, and real-time settings and addressing the related monitoring, alerting, and CI/CD issues. Sample code accompanies all modules and theoretical concepts.
# MAGIC
# MAGIC First, this course explores managing the experimentation process using MLflow, with a focus on end-to-end reproducibility including data, model, and experiment tracking. Second, students operationalize their models by integrating with various downstream deployment tools, including saving models to the MLflow Model Registry, managing artifacts and environments, and automating the testing of their models. Third, students implement batch, streaming, and real-time deployment options. Finally, additional production issues, including continuous integration and continuous deployment, are covered, as well as monitoring and alerting.
# MAGIC
# MAGIC By the end of this course, you will have built an end-to-end pipeline to log, deploy, and monitor machine learning models. This course is taught entirely in Python.
# MAGIC
# MAGIC ## Lessons
# MAGIC
# MAGIC | Time | Lesson | Description |
# MAGIC |:----:|-------|-------------|
# MAGIC | 30m | **Introductions & Setup** | *Registration, Courseware & Q&As* |
# MAGIC | 30m | **ML in Production Overview** | Introducing the full end-to-end ML lifecycle |
# MAGIC | 10m | **Break** ||
# MAGIC | 20m | **[Experimentation - Data Management]($./01-Experimentation)** | [Manage data with Delta & Databricks Feature Store]($./01-Experimentation/01-Data-Management) |
# MAGIC | 40m | **[Experimentation - Experiment Tracking & Lab]($./01-Experimentation)** | [Track ML experiments with MLflow]($./01-Experimentation/02-Experiment-Tracking) </br> [Experiment Tracking Lab]($./01-Experimentation/Labs/02-Experiment-Tracking-Lab) |
# MAGIC | 10m | **Break** ||
# MAGIC | 30m | **[Experimentation - Advanced Experiment Tracking & Lab]($./01-Experimentation)** | [Advanced Experiment Tracking]($./01-Experimentation/03-Advanced-Experiment-Tracking) </br> [Advanced Experiment Tracking Lab (Optional)]($./01-Experimentation/Labs/03-Advanced-Experiment-Tracking-Lab) |
# MAGIC | 30m | **[Model Management - MLflow Models & Lab]($./02-Model-Management)** | [Model management with MLflow]($./02-Model-Management/01-Model-Management) </br> [Model management lab]($./02-Model-Management/Labs/01-Model-Management-Lab) |
# MAGIC | | **Break** ||
# MAGIC | 35m | **[Model Management - Model Registry]($./02-Model-Management)** | [Register, version, and deploy models with MLflow]($./02-Model-Management/02-Model-Registry) |
# MAGIC | 25m | **[Model Management - Webhooks]($./02-Model-Management)** | [Create a testing job and a webhook for a registered model]($./02-Model-Management/03a-Webhooks-and-Testing) </br> [Automated Testing]($./02-Model-Management/03b-Webhooks-Job-Demo)|
# MAGIC | 10m | **Break** ||
# MAGIC | 60m |**[Deployment Paradigms]($./03-Deployment-Paradigms)** | [Batch]($./03-Deployment-Paradigms/01-Batch)</br> [Real time]($./03-Deployment-Paradigms/02-Real-Time)</br> [Streaming (Reference)]($./Reference/03-Streaming-Deployment)</br> [Labs]($./03-Deployment-Paradigms/Labs)|
# MAGIC | 10m | **Break** ||
# MAGIC | 60m | **[Production]($./04-Production)** | [Monitoring]($./04-Production/01-Monitoring)</br> [Monitoring Lab]($./04-Production/Labs/01-Monitoring-Lab)</br>[Alerting (Reference)]($./Reference/02-Alerting) </br>[Pipeline Example (Reference)]($./Reference/04-Pipeline-Example/00-Orchestrate)|
# MAGIC
# MAGIC
# MAGIC ## Prerequisites
# MAGIC - Experience with Python (`pandas`, `sklearn`, `numpy`)
# MAGIC - Background in machine learning and data science
# MAGIC
# MAGIC ## Cluster Requirements
# MAGIC - See your instructor for specific requirements
# MAGIC
# MAGIC <img src="https://files.training.databricks.com/images/icon_warn_24.png"/> **Certain features used in this course, such as the notebooks API and model registry, are only available to paid or trial subscription users of Databricks.**
# MAGIC If you are using the Databricks Community Edition, click the `Upgrade` button on the landing page <a href="https://accounts.cloud.databricks.com/registration.html#login" target="_blank">or navigate here</a> to start a free trial.
# COMMAND ----------
# MAGIC %md
# MAGIC ##  Classroom-Setup
# MAGIC
# MAGIC For each lesson to execute correctly, please make sure to run the **`Classroom-Setup`** cell at the start of each lesson (see the next cell).
# COMMAND ----------
# MAGIC %run ./Includes/Classroom-Setup
# COMMAND ----------
# MAGIC %md ### Agile Data Science
# MAGIC
# MAGIC Deploying machine learning models into production comes with a wide array of challenges, distinct from those data scientists face when initially training models. Teams often solve these challenges with custom, in-house solutions that are brittle, monolithic, time-consuming, and difficult to maintain.
# MAGIC
# MAGIC A systematic approach to the deployment of machine learning models results in an agile solution that minimizes developer time and maximizes the business value derived from data science. To achieve this, data scientists and data engineers need to navigate various deployment solutions as well as have a system in place for monitoring and alerting once a model is out in production.
# MAGIC
# MAGIC The main deployment paradigms are as follows:<br><br>
# MAGIC
# MAGIC 1. **Batch:** predictions are created and stored for later use, such as a database that can be queried in real time in a web application
# MAGIC 2. **Streaming:** data streams are transformed continuously, with each prediction needed soon after the data arrives in a pipeline but not immediately
# MAGIC 3. **Real time:** a prediction is needed on the fly with low latency; this is normally implemented with a REST endpoint
# MAGIC 4. **Mobile/Embedded:** entails embedding machine learning solutions in mobile or IoT devices and is outside the scope of this course
# MAGIC
# MAGIC Once a model is deployed in one of these paradigms, it needs to be monitored for performance with regard to the quality of predictions, latency, throughput, and other production considerations. When performance starts to slip, this indicates that the model needs to be retrained, that more resources need to be allocated to serving the model, or that some other improvement is needed. An alerting infrastructure needs to be in place to capture these performance issues.
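The batch paradigm above — compute predictions ahead of time and store them for cheap lookup — can be sketched in a few lines of plain Python. The callable model and dict store below are illustrative stand-ins, not the MLflow or Spark APIs used in the lessons:

```python
def batch_score(model, records, store):
    # Batch deployment: score every record offline and persist the results,
    # so the serving path becomes a fast key lookup instead of a model call.
    for record_id, features in records.items():
        store[record_id] = model(features)
    return store

# Stand-in "model": predicts the sum of a feature vector.
model = lambda features: sum(features)

# Precompute predictions once, then serve them by key at request time.
predictions = batch_score(model, {"user-1": [1.0, 2.0], "user-2": [3.0, 4.0]}, {})
```

A real pipeline would rerun the scoring job on a schedule and write to a queryable store such as a database or Delta table; the sketch only captures the precompute-then-lookup shape.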
# COMMAND ----------
# MAGIC %md-sandbox
# MAGIC © 2021 Databricks, Inc. All rights reserved.<br/>
# MAGIC Apache, Apache Spark, Spark and the Spark logo are trademarks of the <a href="http://www.apache.org/">Apache Software Foundation</a>.<br/>
# MAGIC <br/>
# MAGIC <a href="https://databricks.com/privacy-policy">Privacy Policy</a> | <a href="https://databricks.com/terms-of-use">Terms of Use</a> | <a href="http://help.databricks.com/">Support</a>
| 107.639535 | 1,131 | 0.702279 | 1,235 | 9,257 | 5.259919 | 0.309312 | 0.189655 | 0.280788 | 0.369458 | 0.167796 | 0.12069 | 0.096059 | 0.096059 | 0.096059 | 0.096059 | 0 | 0.017919 | 0.16204 | 9,257 | 85 | 1,132 | 108.905882 | 0.819518 | 0.980879 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9dbea510be3fd7da5da79fec7785b43b6c5f528d | 171 | bzl | Python | dotnet/private/deps/gen/base.bzl | nolen777/rules_mono | b49c210478c2240fcc7be655c9fc37d751610fb1 | [
"Apache-2.0"
] | null | null | null | dotnet/private/deps/gen/base.bzl | nolen777/rules_mono | b49c210478c2240fcc7be655c9fc37d751610fb1 | [
"Apache-2.0"
] | null | null | null | dotnet/private/deps/gen/base.bzl | nolen777/rules_mono | b49c210478c2240fcc7be655c9fc37d751610fb1 | [
"Apache-2.0"
] | null | null | null | load("@rules_mono//dotnet/private:rules/nuget.bzl", "nuget_package")
def dotnet_repositories_nuget():
### Generated by the tool
    ### End of generated by the tool
    pass
| 24.428571 | 68 | 0.71345 | 24 | 171 | 4.916667 | 0.666667 | 0.186441 | 0.237288 | 0.305085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152047 | 171 | 6 | 69 | 28.5 | 0.813793 | 0.292398 | 0 | 0 | 1 | 0 | 0.495575 | 0.380531 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9dc041e78253ad1023c15f44d5de6959ac905fd1 | 2,898 | py | Python | tests/TestGoods.py | DaveTCode/tradingsim | 4e7fe5389d9af9a0a34ca23b9e42e7e366a71966 | [
"MIT"
] | null | null | null | tests/TestGoods.py | DaveTCode/tradingsim | 4e7fe5389d9af9a0a34ca23b9e42e7e366a71966 | [
"MIT"
] | null | null | null | tests/TestGoods.py | DaveTCode/tradingsim | 4e7fe5389d9af9a0a34ca23b9e42e7e366a71966 | [
"MIT"
] | null | null | null | import random
import unittest
from tradingsim.simulation.goods import Goods
class GoodsTests(unittest.TestCase):
def test_str(self):
good = Goods("test_good", 1, 1, 1, 1)
self.assertEqual(str(good), "test_good")
def test_purchase_cost_of_one(self):
good = Goods("test_good", 1, 1, 1, 1)
self.assertAlmostEqual(good.purchase_cost_of_one(1), 1)
good = Goods("a", 10, 8, 100, 20)
self.assertAlmostEqual(good.purchase_cost_of_one(0), 10)
self.assertAlmostEqual(good.purchase_cost_of_one(15), 10) # Check that the max_cost_amount works
self.assertAlmostEqual(good.purchase_cost_of_one(20), 10) # Check that the max_cost_amount works on boundary
self.assertAlmostEqual(good.purchase_cost_of_one(100), 8)
self.assertAlmostEqual(good.purchase_cost_of_one(200), 8)
self.assertAlmostEqual(good.purchase_cost_of_one(60), 9)
self.assertAlmostEqual(good.purchase_cost_of_one(41), 9)
self.assertAlmostEqual(good.purchase_cost_of_one(39), 8)
self.assertAlmostEqual(good.purchase_cost_of_one(81), 10)
self.assertAlmostEqual(good.purchase_cost_of_one(79), 9)
def test_purchase_cost(self):
good = Goods("a", 10, 8, 100, 20)
self.assertAlmostEqual(good.purchase_cost(20, 20), 10 * 20)
self.assertAlmostEqual(good.purchase_cost(60, 1), 9)
self.assertAlmostEqual(good.purchase_cost(100, 80), 719)
self.assertAlmostEqual(good.purchase_cost(60, 2), 18)
self.assertAlmostEqual(good.purchase_cost(45, 10), 5 * 9 + 5 * 8)
def test_sale_cost_of_one(self):
good = Goods("test_good", 1, 1, 1, 1)
self.assertAlmostEqual(good.sale_cost_of_one(1), 1)
good = Goods("a", 10, 8, 100, 20)
self.assertAlmostEqual(good.sale_cost_of_one(0), 10)
self.assertAlmostEqual(good.sale_cost_of_one(100), 8)
self.assertAlmostEqual(good.sale_cost_of_one(60), 9)
self.assertAlmostEqual(good.sale_cost_of_one(41), 9)
self.assertAlmostEqual(good.sale_cost_of_one(39), 10)
self.assertAlmostEqual(good.sale_cost_of_one(81), 8)
self.assertAlmostEqual(good.sale_cost_of_one(79), 9)
def test_sale_cost(self):
good = Goods("a", 10, 8, 100, 20)
self.assertAlmostEqual(good.sale_cost(20, 20), 10 * 20)
self.assertAlmostEqual(good.sale_cost(60, 1), 9)
self.assertAlmostEqual(good.sale_cost(20, 80), 721) # Shouldn't this be 719? Rounding issues maybe.
self.assertAlmostEqual(good.sale_cost(60, 2), 18)
self.assertAlmostEqual(good.sale_cost(45, 10), 5 * 9 + 5 * 10)
def test_sale_less_than_purchase(self):
for i in range(100):
g = Goods("A", random.randint(10, 200), random.randint(0, 10), random.randint(5, 10), random.randint(1, 4))
self.assertEqual(g.sale_cost(10, 2), g.purchase_cost(8, 2)) | 47.508197 | 119 | 0.680814 | 426 | 2,898 | 4.410798 | 0.159624 | 0.324109 | 0.385844 | 0.281001 | 0.783395 | 0.777009 | 0.71421 | 0.58595 | 0.211283 | 0.211283 | 0 | 0.08394 | 0.194272 | 2,898 | 61 | 120 | 47.508197 | 0.720771 | 0.045204 | 0 | 0.14 | 0 | 0 | 0.014834 | 0 | 0 | 0 | 0 | 0 | 0.62 | 1 | 0.12 | false | 0 | 0.06 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d1baae2a3a62c041521eaf4781454470ad51b69a | 128 | py | Python | amocrm_asterisk_ng/scenario/impl/classic/functions/core/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | amocrm_asterisk_ng/scenario/impl/classic/functions/core/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | amocrm_asterisk_ng/scenario/impl/classic/functions/core/__init__.py | iqtek/amocrn_asterisk_ng | 429a8d0823b951c855a49c1d44ab0e05263c54dc | [
"MIT"
] | null | null | null | from .IGetCallDirectionFunction import IGetCallDirectionFunction
from .IsInternalNumberFunction import IsInternalNumberFunction
| 42.666667 | 64 | 0.921875 | 8 | 128 | 14.75 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 128 | 2 | 65 | 64 | 0.983333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d1e17bf09ac7cb877006348ebe29fc0618135264 | 4,170 | py | Python | Packs/TaniumThreatResponse/Integrations/TaniumThreatResponse/TaniumThreatResponse_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/TaniumThreatResponse/Integrations/TaniumThreatResponse/TaniumThreatResponse_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/TaniumThreatResponse/Integrations/TaniumThreatResponse/TaniumThreatResponse_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z | PROCESS_TREE_RAW = [
{
"id": 3,
"ptid": 3,
"pid": 1,
"name": "1: <Pruned Process>",
"parent": "4: System",
"children": [
{
"id": 44,
"ptid": 44,
"pid": 4236,
"name": "4236: mmc.exe",
"parent": "1: <Pruned Process>",
"children": []
},
{
"id": 45,
"ptid": 45,
"pid": 4840,
"name": "4840: cmd.exe",
"parent": "1: <Pruned Process>",
"children": []
}
]
}
]
PROCESS_TREE_TWO_GENERATIONS_RAW = [
{
"id": 3,
"ptid": 3,
"pid": 1,
"name": "1: <Pruned Process>",
"parent": "4: System",
"children": [
{
"id": 44,
"ptid": 44,
"pid": 4236,
"name": "4236: mmc.exe",
"parent": "1: <Pruned Process>",
"children": [
{
"id": 420,
"ptid": 44,
"pid": 4236,
"name": "4236: mmc.exe",
"parent": "1: <Pruned Process>",
"children": []
}
]
}
]
}
]
PROCESS_TREE_ITEM_RES = {
"ID": 3,
"PTID": 3,
"PID": 1,
"Name": "1: <Pruned Process>",
"Parent": "4: System",
"Children": [
{
"ID": 44,
"PTID": 44,
"PID": 4236,
"Name": "4236: mmc.exe",
"Parent": "1: <Pruned Process>",
"Children": []
},
{
"ID": 45,
"PTID": 45,
"PID": 4840,
"Name": "4840: cmd.exe",
"Parent": "1: <Pruned Process>",
"Children": []
}
]
}
PROCESS_TREE_ITEM_TWO_GENERATIONS_RES = {
"ID": 3,
"PTID": 3,
"PID": 1,
"Name": "1: <Pruned Process>",
"Parent": "4: System",
"Children": [
{
"ID": 44,
"PTID": 44,
"PID": 4236,
"Name": "4236: mmc.exe",
"Parent": "1: <Pruned Process>",
"Children": [
{
"id": 420,
"ptid": 44,
"pid": 4236,
"name": "4236: mmc.exe",
"parent": "1: <Pruned Process>",
"children": []
}
]
}
]
}
PROCESS_TREE_READABLE_RES = {
"ID": 3,
"PTID": 3,
"PID": 1,
"Name": "1: <Pruned Process>",
"Parent": "4: System",
"Children": [
{
"ID": 44,
"PTID": 44,
"PID": 4236,
"Name": "4236: mmc.exe",
"Parent": "1: <Pruned Process>",
"ChildrenCount": 0
},
{
"ID": 45,
"PTID": 45,
"PID": 4840,
"Name": "4840: cmd.exe",
"Parent": "1: <Pruned Process>",
"ChildrenCount": 0
}
]
}
PROCESS_TREE_TWO_GENERATIONS_READABLE_RES = {
"ID": 3,
"PTID": 3,
"PID": 1,
"Name": "1: <Pruned Process>",
"Parent": "4: System",
"Children": [
{
"ID": 44,
"PTID": 44,
"PID": 4236,
"Name": "4236: mmc.exe",
"Parent": "1: <Pruned Process>",
"ChildrenCount": 1
}
]
}
def test_get_process_tree_item():
from TaniumThreatResponse import get_process_tree_item
tree, readable_output = get_process_tree_item(PROCESS_TREE_RAW[0], 0)
assert tree == PROCESS_TREE_ITEM_RES
assert readable_output == PROCESS_TREE_READABLE_RES
def test_get_process_tree_item_two_generations():
from TaniumThreatResponse import get_process_tree_item
tree, readable_output = get_process_tree_item(PROCESS_TREE_TWO_GENERATIONS_RAW[0], 0)
assert tree == PROCESS_TREE_ITEM_TWO_GENERATIONS_RES
assert readable_output == PROCESS_TREE_TWO_GENERATIONS_READABLE_RES
| 24.244186 | 89 | 0.393525 | 364 | 4,170 | 4.315934 | 0.101648 | 0.126034 | 0.151496 | 0.112031 | 0.98536 | 0.958625 | 0.833864 | 0.824316 | 0.786123 | 0.781031 | 0 | 0.079755 | 0.452758 | 4,170 | 171 | 90 | 24.385965 | 0.608677 | 0 | 0 | 0.626582 | 0 | 0 | 0.238369 | 0 | 0 | 0 | 0 | 0 | 0.025316 | 1 | 0.012658 | false | 0 | 0.012658 | 0 | 0.025316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
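For orientation, here is a sketch of the transformation the fixtures above pin down: uppercase the keys of a node and of its immediate children, keep grandchildren untouched in the context tree, and give the human-readable variant a `ChildrenCount` instead of the nested list. This is reverse-engineered from the fixtures, not the actual `TaniumThreatResponse` implementation:

```python
def get_process_tree_item(process, level):
    # `level` is accepted for signature compatibility with the tests above,
    # but this one-level rendering does not need it.
    def rename(node, children):
        return {
            "ID": node["id"],
            "PTID": node["ptid"],
            "PID": node["pid"],
            "Name": node["name"],
            "Parent": node["parent"],
            "Children": children,
        }

    tree = rename(process, [])
    readable = rename(process, [])
    for child in process.get("children", []):
        # Context tree: rename the child but keep grandchildren as-is.
        tree["Children"].append(rename(child, child.get("children", [])))
        # Readable output: replace the nested list with a count.
        flat = rename(child, None)
        del flat["Children"]
        flat["ChildrenCount"] = len(child.get("children", []))
        readable["Children"].append(flat)
    return tree, readable
```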
8824052de51e1a775fda4def5bf6f44f61bac8d0 | 88 | py | Python | trax/trax/exceptions.py | christianlupus/trax | 85af6f908cbf55584f74856207ae3f6530728ccb | [
"MIT"
] | 4 | 2021-01-19T16:12:24.000Z | 2021-08-05T07:25:44.000Z | trax/trax/exceptions.py | christianlupus/trax | 85af6f908cbf55584f74856207ae3f6530728ccb | [
"MIT"
] | 1 | 2021-03-18T20:44:01.000Z | 2021-03-18T20:44:01.000Z | trax/trax/exceptions.py | christianlupus/trax | 85af6f908cbf55584f74856207ae3f6530728ccb | [
"MIT"
] | 1 | 2021-08-16T01:10:52.000Z | 2021-08-16T01:10:52.000Z | from django.forms import ValidationError
class HandleError(ValidationError):
pass
| 14.666667 | 40 | 0.806818 | 9 | 88 | 7.888889 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147727 | 88 | 5 | 41 | 17.6 | 0.946667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
884d95a4b3afb6ba8dd4836567a5ce86c0432e87 | 2,145 | py | Python | src/processor.py | RaymondLZhou/job-scraper | 9de07729051357276a5b5c9d2892bb763ff61330 | [
"MIT"
] | null | null | null | src/processor.py | RaymondLZhou/job-scraper | 9de07729051357276a5b5c9d2892bb763ff61330 | [
"MIT"
] | null | null | null | src/processor.py | RaymondLZhou/job-scraper | 9de07729051357276a5b5c9d2892bb763ff61330 | [
"MIT"
] | null | null | null | import linker
# Cleans and appends results from Monster based on page HTML layout
def processDataMonster(job_elems, source, jobList, titles, companies, locations, times, links, sources):
for job_elem in job_elems:
title_elem = job_elem.find("h2", class_="title")
company_elem = job_elem.find("div", class_="company")
location_elem = job_elem.find("div", class_="location")
time_elem = job_elem.find("div", class_="meta flex-col")
# Clean results
title, company, location, time = linker.verifyData(title_elem, company_elem, location_elem, time_elem)
# Ignore empty results
if(title == "" and company == "" and location == "" and time == ""):
continue
time = time.split('\n')[0]
link = job_elem.find("a")["href"]
# Append data to lists
linker.appendData(title, company, location, time, link, source, jobList, titles, companies, locations, times, links, sources)
# Cleans and appends results from Indeed based on page HTML layout
def processDataIndeed(job_elems, source, jobList, titles, companies, locations, times, links, sources):
for job_elem in job_elems:
title_elem = job_elem.find('h2', class_='title')
company_elem = job_elem.find('span', class_='company')
location_elem = job_elem.find('div', class_='location')
time_elem = job_elem.find("span", class_="date")
if(location_elem is None):
location_elem = job_elem.find('span', class_='location')
# Clean results
title, company, location, time = linker.verifyData(title_elem, company_elem, location_elem, time_elem)
# Ignore empty and irrelevant results
if(title == "" and company == "" and location == "" and time == ""):
continue
if(company == "The Sydney Call Centre"):
continue
link = "https://ca.indeed.com" + job_elem.find("a")["href"]
# Append data to lists
linker.appendData(title, company, location, time, link, source, jobList, titles, companies, locations, times, links, sources)
| 43.77551 | 133 | 0.640093 | 263 | 2,145 | 5.053232 | 0.26616 | 0.068473 | 0.091046 | 0.10158 | 0.856283 | 0.809631 | 0.737397 | 0.737397 | 0.737397 | 0.737397 | 0 | 0.001839 | 0.239627 | 2,145 | 49 | 134 | 43.77551 | 0.812998 | 0.119814 | 0 | 0.392857 | 0 | 0 | 0.078723 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.035714 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
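Both scrapers above delegate cleaning to `linker.verifyData`, which is not shown in this file. A minimal sketch of what such a helper plausibly does — turn possibly missing BeautifulSoup tags into whitespace-normalized strings. The real `linker` module's behavior is an assumption here:

```python
def verify_data(title_elem, company_elem, location_elem, time_elem):
    def clean(elem):
        # Missing elements become empty strings so callers can skip the row.
        if elem is None:
            return ""
        # BeautifulSoup tags expose .text; plain strings pass through as-is.
        text = getattr(elem, "text", elem)
        # Collapse runs of whitespace (newlines, nbsp leftovers) to one space.
        return " ".join(text.split())

    return tuple(clean(e) for e in (title_elem, company_elem, location_elem, time_elem))
```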
8883b6fcf99a40793f334170aaa6354e8e003fcb | 155 | py | Python | module.py | Eve-AI/package-template | c079e2da9e50f73a0e68d38dc836be4247dbfc5e | [
"MIT"
] | null | null | null | module.py | Eve-AI/package-template | c079e2da9e50f73a0e68d38dc836be4247dbfc5e | [
"MIT"
] | null | null | null | module.py | Eve-AI/package-template | c079e2da9e50f73a0e68d38dc836be4247dbfc5e | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding:utf-8 -*-
import utils
def run(string, entities):
# utils.output('end', 'intent', utils.translate('intent'))
pass | 19.375 | 59 | 0.645161 | 21 | 155 | 4.761905 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007519 | 0.141935 | 155 | 8 | 60 | 19.375 | 0.744361 | 0.632258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
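The commented-out line above shows the intended handler shape: finish by emitting a translated answer through `utils.output`. A self-contained sketch with a stand-in `utils` — the real package's API is only inferred from that comment, so the names and answer text here are assumptions:

```python
class utils:
    """Stand-in for the template's `utils` module (assumed API)."""
    answers = {"intent": "Understood."}
    emitted = []

    @staticmethod
    def translate(key):
        # Resolve a localized answer for the given key.
        return utils.answers.get(key, key)

    @staticmethod
    def output(kind, code, speech):
        # Record what would be spoken back to the user.
        utils.emitted.append((kind, code, speech))

def run(string, entities):
    # Mirrors the commented call: end the skill with a translated answer.
    utils.output('end', 'intent', utils.translate('intent'))

run("hello", [])
```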
ee3ed8072b4fa02938e3adbc5e83178be3ac350f | 12,241 | py | Python | pydlm/access/dlmAccessMod.py | onnheimm/pydlm | 4693af6e621e3b75feda7ca15327b69a4ca622a7 | [
"BSD-3-Clause"
] | 423 | 2016-09-15T06:45:26.000Z | 2022-03-29T08:41:11.000Z | pydlm/access/dlmAccessMod.py | onnheimm/pydlm | 4693af6e621e3b75feda7ca15327b69a4ca622a7 | [
"BSD-3-Clause"
] | 50 | 2016-09-14T19:45:49.000Z | 2021-07-26T17:04:10.000Z | pydlm/access/dlmAccessMod.py | onnheimm/pydlm | 4693af6e621e3b75feda7ca15327b69a4ca622a7 | [
"BSD-3-Clause"
] | 99 | 2016-09-19T08:08:41.000Z | 2022-03-07T13:47:36.000Z | from copy import deepcopy
from pydlm.base.tools import getInterval
from pydlm.access._dlmGet import _dlmGet
class dlmAccessModule(_dlmGet):
""" A dlm module for all the access methods
"""
def getAll(self):
""" get all the _result class which contains all results
Returns:
The @result object containing all computed results.
"""
return deepcopy(self.result)
def getMean(self, filterType='forwardFilter', name='main'):
""" get mean for data or component.
If the working dates are not (0, self.n - 1),
        a warning will be printed stating the actual filtered dates.
Args:
filterType: the type of mean to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
name: the component to get mean. When name = 'main', then it
returns the filtered mean for the time series. When
name = some component's name, then it returns the filtered
mean for that component. Default to 'main'.
Returns:
A list of the time series observations based on the choice
"""
# get the working date
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1 # To get the result for the last date.
        # get the mean for the filtered data
if name == 'main':
# get out of the matrix form
if filterType == 'forwardFilter':
return self._1DmatrixToArray(
self.result.filteredObs[start:end])
elif filterType == 'backwardSmoother':
return self._1DmatrixToArray(
self.result.smoothedObs[start:end])
elif filterType == 'predict':
return self._1DmatrixToArray(
self.result.predictedObs[start:end])
else:
raise NameError('Incorrect filter type.')
# get the mean for the component
self._checkComponent(name)
return self._getComponentMean(name=name,
filterType=filterType,
start=start, end=(end - 1))
def getVar(self, filterType='forwardFilter', name='main'):
""" get the variance for data or component.
If the filtered dates are not (0, self.n - 1),
        a warning will be printed stating the actual filtered dates.
Args:
filterType: the type of variance to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
name: the component to get variance. When name = 'main', then it
returns the filtered variance for the time series. When
name = some component's name, then it returns the filtered
variance for that component. Default to 'main'.
Returns:
A list of the filtered variances based on the choice.
"""
# get the working date
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1
# get the variance for the time series data
if name == 'main':
# get out of the matrix form
if filterType == 'forwardFilter':
return self._1DmatrixToArray(
self.result.filteredObsVar[start:end])
elif filterType == 'backwardSmoother':
return self._1DmatrixToArray(
self.result.smoothedObsVar[start:end])
elif filterType == 'predict':
return self._1DmatrixToArray(
self.result.predictedObsVar[start:end])
else:
raise NameError('Incorrect filter type.')
# get the variance for the component
self._checkComponent(name)
return self._getComponentVar(name=name, filterType=filterType,
start=start, end=(end - 1))
def getResidual(self, filterType='forwardFilter'):
""" get the residuals for data after filtering or smoothing.
If the working dates are not (0, self.n - 1),
then a warning will prompt stating the actual filtered dates.
Args:
filterType: the type of residuals to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
Returns:
A list of residuals based on the choice
"""
# get the working date
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1 # To get the result for the last date.
        # get the mean for the filtered data
# get out of the matrix form
if filterType == 'forwardFilter':
return self._1DmatrixToArray(
[self.data[i] - self.result.filteredObs[i]
for i in range(start, end)])
elif filterType == 'backwardSmoother':
return self._1DmatrixToArray(
[self.data[i] - self.result.smoothedObs[i]
for i in range(start, end)])
elif filterType == 'predict':
return self._1DmatrixToArray(
[self.data[i] - self.result.predictedObs[i]
for i in range(start, end)])
else:
raise NameError('Incorrect filter type.')
def getInterval(self, p=0.95, filterType='forwardFilter', name='main'):
""" get the confidence interval for data or component.
If the filtered dates are not
(0, self.n - 1), then a warning will prompt stating the actual
filtered dates.
Args:
p: The confidence level.
filterType: the type of CI to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
name: the component to get CI. When name = 'main', then it
returns the confidence interval for the time series. When
name = some component's name, then it returns the confidence
interval for that component. Default to 'main'.
Returns:
A tuple with the first element being a list of upper bounds
and the second being a list of the lower bounds.
"""
# get the working date
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1
# get the mean and the variance for the time series data
if name == 'main':
# get out of the matrix form
if filterType == 'forwardFilter':
compMean = self._1DmatrixToArray(
self.result.filteredObs[start:end])
compVar = self._1DmatrixToArray(
self.result.filteredObsVar[start:end])
elif filterType == 'backwardSmoother':
compMean = self._1DmatrixToArray(
self.result.smoothedObs[start:end])
compVar = self._1DmatrixToArray(
self.result.smoothedObsVar[start:end])
elif filterType == 'predict':
compMean = self._1DmatrixToArray(
self.result.predictedObs[start:end])
compVar = self._1DmatrixToArray(
self.result.predictedObsVar[start:end])
else:
raise NameError('Incorrect filter type.')
# get the mean and variance for the component
else:
self._checkComponent(name)
compMean = self._getComponentMean(name=name,
filterType=filterType,
start=start, end=(end - 1))
compVar = self._getComponentVar(name=name,
filterType=filterType,
start=start, end=(end - 1))
        # get the upper and lower bound via the module-level getInterval
        # helper (imported elsewhere), not this method
        upper, lower = getInterval(compMean, compVar, p)
return (upper, lower)
def getLatentState(self, filterType='forwardFilter', name='all'):
""" get the latent states for different components and filters.
If the filtered dates are not (0, self.n - 1),
then a warning will prompt stating the actual filtered dates.
Args:
filterType: the type of latent states to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
name: the component to get latent state. When name = 'all', then it
returns the latent states for the time series. When
name = some component's name, then it returns the latent
states for that component. Default to 'all'.
Returns:
A list of lists, standing for the latent states given
the different choices.
"""
# get the working dates
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1
# to return the full latent states
if name == 'all':
if filterType == 'forwardFilter':
return list(map(lambda x: x if x is None
else self._1DmatrixToArray(x),
self.result.filteredState[start:end]))
elif filterType == 'backwardSmoother':
return list(map(lambda x: x if x is None
else self._1DmatrixToArray(x),
self.result.smoothedState[start:end]))
            elif filterType == 'predict':
                return list(map(lambda x: x if x is None
                                else self._1DmatrixToArray(x),
                                self.result.predictedState[start:end]))
else:
raise NameError('Incorrect filter type.')
# to return the latent state for a given component
self._checkComponent(name)
return list(map(lambda x: x if x is None else self._1DmatrixToArray(x),
self._getLatentState(name=name, filterType=filterType,
start=start, end=(end - 1))))
def getLatentCov(self, filterType='forwardFilter', name='all'):
""" get the error covariance for different components and
filters.
If the filtered dates are not (0, self.n - 1),
then a warning will prompt stating the actual filtered dates.
Args:
filterType: the type of latent covariance to be returned. Could be
'forwardFilter', 'backwardSmoother', and 'predict'.
Default to 'forwardFilter'.
name: the component to get latent cov. When name = 'all', then it
returns the latent covariance for the time series. When
name = some component's name, then it returns the latent
covariance for that component. Default to 'all'.
Returns:
A list of numpy matrices, standing for the filtered latent
covariance.
"""
# get the working dates
start, end = self._checkAndGetWorkingDates(filterType=filterType)
end += 1
# to return the full latent covariance
if name == 'all':
if filterType == 'forwardFilter':
return self.result.filteredCov[start:end]
elif filterType == 'backwardSmoother':
return self.result.smoothedCov[start:end]
            elif filterType == 'predict':
                return self.result.predictedCov[start:end]
else:
raise NameError('Incorrect filter type.')
# to return the latent covariance for a given component
self._checkComponent(name)
return self._getLatentCov(name=name, filterType=filterType,
start=start, end=(end - 1))
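The `getInterval(compMean, compVar, p)` call above relies on a module-level helper imported elsewhere. As a hedged sketch only (the name `gaussian_interval` and the assumption of independent Gaussian estimates are mine, not the library's), such a helper could look like:

```python
from statistics import NormalDist


def gaussian_interval(means, variances, p=0.95):
    """Return (upper, lower) bounds of the central p-confidence interval
    for per-time-point Gaussian estimates given means and variances."""
    z = NormalDist().inv_cdf(0.5 + p / 2.0)  # two-sided critical value
    upper = [m + z * v ** 0.5 for m, v in zip(means, variances)]
    lower = [m - z * v ** 0.5 for m, v in zip(means, variances)]
    return upper, lower
```

For p=0.95 this uses the familiar critical value of roughly 1.96, so `gaussian_interval([0.0], [1.0])` yields bounds near +/-1.96.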
# File: mysite/posts/admin.py (repo: 2021fallCMPUT404/group-cmput404-project, license: Apache-2.0)
from django.contrib import admin
from .models import Post, Comment, Node
admin.site.register(Post)
admin.site.register(Comment)
admin.site.register(Node)
# File: run_cifar_train.py (repo: qinwei-hfut/LDAM-DRW, license: MIT)
import os
] | null | null | null | import os
def run_exp(gpu,imb_type,imb_factor,loss_type,train_rule,exp_str,normalize_type,dataset):
# python cifar_train.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type LDAM --train_rule DRW
the_command = "python cifar_train.py " \
+ " --gpu="+str(gpu) \
+ " --imb_type="+imb_type \
+ " --imb_factor="+str(imb_factor) \
+ " --loss_type="+loss_type \
+ " --train_rule="+train_rule \
+ " --exp_str="+exp_str \
+ " --normalize_type="+normalize_type \
        + " --dataset="+dataset
print(the_command)
os.system(the_command)
dataset = "cifar100"
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_normalization",exp_str='ln_1')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_normalization",exp_str='pn_1')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_normalization",exp_str='ln_2')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_normalization",exp_str='ln_3')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_normalization",exp_str='pn_1')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_normalization",exp_str='pn_2')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_normalization",exp_str='pn_3')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_standardization",exp_str='ls_1')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_standardization",exp_str='ps_1')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_standardization",exp_str='ls_2')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="logit_standardization",exp_str='ls_3')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_standardization",exp_str='ps_2')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.02,loss_type="LDAM",train_rule="DRW",normalize_type="prob_standardization",exp_str='ps_3')
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="none",exp_str='none_4',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="prob_division",exp_str='pd_7_10k',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="none",exp_str='none_5',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="none",exp_str='none_6',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="prob_division",exp_str='pd_8_10k',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="prob_division",exp_str='pd_9_10k',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="uniform",exp_str='uniform_test',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="gaussian",exp_str='gau_2',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="gaussian",exp_str='gau_3',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="dy_gaussian",exp_str='dg_1',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="dy_gaussian",exp_str='dg_2',dataset=dataset)
# run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="dy_gaussian",exp_str='dg_3',dataset=dataset)
run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="static_gaussian",exp_str='sg_1',dataset=dataset)
run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="static_gaussian",exp_str='sg_2',dataset=dataset)
run_exp(gpu=0,imb_type='exp',imb_factor=0.005,loss_type="LDAM",train_rule="DRW",normalize_type="static_gaussian",exp_str='sg_3',dataset=dataset)
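Building the shell command by string concatenation and `os.system` is fragile (quoting and exit codes are easy to lose). A sketch of an alternative using `subprocess` with an argument list; `run_exp_subprocess` and `dry_run` are illustrative names of mine, while `cifar_train.py` and its flags mirror the script above:

```python
import subprocess


def run_exp_subprocess(gpu, imb_type, imb_factor, loss_type, train_rule,
                       exp_str, normalize_type, dataset, dry_run=True):
    # An argument list avoids shell quoting issues entirely.
    cmd = [
        "python", "cifar_train.py",
        "--gpu=%s" % gpu,
        "--imb_type=%s" % imb_type,
        "--imb_factor=%s" % imb_factor,
        "--loss_type=%s" % loss_type,
        "--train_rule=%s" % train_rule,
        "--exp_str=%s" % exp_str,
        "--normalize_type=%s" % normalize_type,
        "--dataset=%s" % dataset,
    ]
    print(" ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on non-zero exit
    return cmd
```

With `check=True` a failed training run stops the sweep instead of silently continuing as `os.system` would.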
# File: fib.py (repo: Housebear/python-learning, license: MIT)
#!/usr/bin/python3
# -*- coding:utf-8 -*-
# File Name: fib.py
# Author: Lipsum
# Mail: niuleipeng@gmail.com
# Created Time: 2016-05-11 21:58:41
def fib(max):
n, a, b = 0, 0, 1
while n < max:
yield b
a, b = b, a + b
n = n + 1
# f = fib(5)
# print(next(f))
# print(next(f))
# print(next(f))
# print(next(f))
# print(next(f))
for x in fib(10):
    print(x)
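Because `fib` is a generator, it can also be consumed lazily instead of via repeated `next` calls; a small self-contained sketch using `itertools.islice` (redefining `fib` exactly as above so the snippet runs on its own):

```python
from itertools import islice


def fib(max):
    n, a, b = 0, 0, 1
    while n < max:
        yield b
        a, b = b, a + b
        n = n + 1


# Take the first five values without exhausting the generator's full range.
first_five = list(islice(fib(100), 5))
print(first_five)  # [1, 1, 2, 3, 5]
```

`islice` stops after five items, so the bound passed to `fib` only caps how far the generator *could* run.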
# File: CGATReport/DataTypes.py (repo: IanSudbery/sphinx-report, license: MIT)
import copy
# ContainerTypes / NumberTypes are used by the __check__ methods below but
# are not defined anywhere in this file; minimal definitions (assumption):
ContainerTypes = (tuple, list)
NumberTypes = (int, float)
class DataSimple(object):
"""Base class for data types.
Derived classes enforce consistency checks on data.
"""
__slots__ = ["_instance", "_data", "_fn"]
def __init__(self, fn):
"""Store data returned by function."""
object.__setattr__(self, "_fn", fn)
object.__setattr__(self, "_data", None)
object.__setattr__(self, "_instance", None)
def __call__(self, *args, **kwargs):
"""call the function and return a clone of one-self.
"""
# Decorators will use the same object for each decoration
# and data will get overwritten in successive calls to the same function.
# Thus clone oneself before storing the data and return
# the clone.
clone = copy.copy(self)
setattr(clone, "_data", self._fn(self._instance, *args, **kwargs))
clone.__check__()
return clone
def __len__(self):
return len(self._data)
def __get__(self, instance, cls=None):
object.__setattr__(self, "_instance", instance)
return self
def __getstate__(self):
        # previously used deepcopy, but not necessary
return {"_data": self._data}
def __setstate__(self, dict):
for key, val in list(dict.items()):
object.__setattr__(self, key, val)
def __iter__(self):
return self._data.__iter__()
def __getitem__(self, *args, **kwargs):
return self._data.__getitem__(*args, **kwargs)
    def __setslice__(self, *args, **kwargs):
        return self._data.__setslice__(*args, **kwargs)
    def __contains__(self, *args, **kwargs):
        return self._data.__contains__(*args, **kwargs)
def __copy__(self):
return self.__class__(self)
# def __getattr__(self, name):
# return getattr(self._data, name)
# def __setattr__(self, name, value):
# setattr(self._data, name, value)
class Data(object):
"""Base class for data types.
Derived classes enforce consistency checks on data.
"""
__slots__ = ["_data"]
def __init__(self, data):
"""Store data returned by function."""
object.__setattr__(self, "_data", data)
if data:
self.__check__()
def __len__(self):
return len(self._data)
def __getstate__(self):
        # previously used deepcopy, but not necessary
return {"_data": self._data}
def __setstate__(self, dict):
for key, val in list(dict.items()):
object.__setattr__(self, key, val)
def __iter__(self):
return self._data.__iter__()
def __getitem__(self, *args, **kwargs):
return self._data.__getitem__(*args, **kwargs)
    def __setslice__(self, *args, **kwargs):
        return self._data.__setslice__(*args, **kwargs)
    def __contains__(self, *args, **kwargs):
        return self._data.__contains__(*args, **kwargs)
def __copy__(self):
return self.__class__(self)
# def __getattr__(self, name):
# return getattr(self._data, name)
# def __setattr__(self, name, value):
# setattr(self._data, name, value)
class SingleColumn(Data):
"""Single column.
The data can be any scalar type.
Example: (1,2,"a")
"""
def __init__(self, fn):
Data.__init__(self, fn)
def __check__(self):
assert type(self._data) in ContainerTypes, "returned type is not a collection: %s" % (
type(self._data))
for x in self._data:
assert type(x) in NumberTypes, "value %s is not a number: type=%s" % (
str(x), type(x))
class SingleColumnData(Data):
"""Single column data.
All data are numerical values.
Example: (1,2,3)
"""
def __init__(self, fn):
Data.__init__(self, fn)
def __check__(self):
assert type(self._data) in ContainerTypes, "returned type is not a collection: %s" % (
type(self._data))
for x in self._data:
assert type(x) in NumberTypes, "value %s is not a number: type=%s" % (
str(x), type(x))
class MultipleColumns(Data):
"""Multiple column data
The data can be any scalar type. All columns have the same length.
Example: (("column1", "column2"), (("val1",2,3), ("val2",2,3)))
"""
def __init__(self, fn):
Data.__init__(self, fn)
def __check__(self):
assert type(self._data) in ContainerTypes, "returned type is not a collection: %s" % (
type(self._data))
assert type(self._data[0]) in ContainerTypes, "first column is not a collection: %s" % (
type(self._data[0]))
assert type(self._data[1]) in ContainerTypes, "second column is not a collection: %s" % (
type(self._data[1]))
for c in self._data[1]:
assert type(
c) in ContainerTypes, "column is not a collection: %s" % (type(c))
try:
assert min([len(c) for c in self._data[1]]) == max([len(c) for c in self._data[1]]), \
"data columns have not the same length: %i != %i." %\
(min([len(c) for c in self._data[1]]), max([len(c)
for c in self._data[1]]))
except ValueError as msg:
# ignore errors due to empty sequences
pass
class MultipleColumnData(Data):
"""Multiple column data
All data are numerical values.
Example: (("column1", "column2"), ((1,2,3), (1,2,3)))
"""
def __init__(self, fn):
Data.__init__(self, fn)
def __check__(self):
assert type(self._data) in ContainerTypes, "returned type is not a collection: %s" % (
type(self._data))
assert type(self._data[0]) in ContainerTypes, "first field is not a collection: %s" % (
type(self._data[0]))
assert type(self._data[1]) in ContainerTypes, "second field is not a collection: %s" % (
type(self._data[1]))
for c in self._data[1]:
assert type(
c) in ContainerTypes, "column is not a collection: %s" % (type(c))
for x in c:
assert type(x) in NumberTypes, "value %s is not a number: type=%s" % (
str(x), type(x))
assert min([len(c) for c in self._data[1]]) == max(
[len(c) for c in self._data[1]]), "data columns have not the same length."
class LabeledData(Data):
"""Labeled data points.
Data can be of any type. There is only one value per label.
Example: (("column1", 1), ("column2",2))
"""
def __init__(self, fn):
Data.__init__(self, fn)
def __check__(self):
assert type(
self._data) in ContainerTypes, "returned type is not a collection: %s" % (self._data)
for x in self._data:
assert type(
x) in ContainerTypes, "row is not a collection: %s" % str(x)
assert len(
x) == 2, "data is not a column, value tuple: %s" % str(x)
def returnLabeledValue(f):
"""decorator for Trackers returning:class:`LabeledValue`."""
def wrapped_f(*args, **kwargs):
return LabeledValue(f(*args, **kwargs))
return wrapped_f
def returnSingleColumn(f):
"""decorator for Trackers returning:class:`SingleColumn`."""
def wrapped_f(*args, **kwargs):
return SingleColumn(f(*args, **kwargs))
return wrapped_f
def returnSingleColumnData(f):
"""decorator for Trackers returning:class:`SingleColumnData`."""
def wrapped_f(*args, **kwargs):
return SingleColumnData(f(*args, **kwargs))
return wrapped_f
def returnMultipleColumns(f):
"""decorator for Trackers returning:class:`MultipleColumn`."""
def wrapped_f(*args, **kwargs):
return MultipleColumns(f(*args, **kwargs))
return wrapped_f
def returnMultipleColumnData(f):
"""decorator for Trackers returning:class:`MultipleColumnData`."""
def wrapped_f(*args, **kwargs):
return MultipleColumnData(f(*args, **kwargs))
return wrapped_f
def returnLabeledData(f):
"""decorator for Trackers returning:class:`LabeledData`."""
def wrapped_f(*args, **kwargs):
return LabeledData(f(*args, **kwargs))
return wrapped_f
# def returnSingleColumn(f):
# """decorator for Trackers returning:class:`SingleColumn`."""
# return SingleColumn(f)
# def returnSingleColumnData(f):
# """decorator for Trackers returning:class:`SingleColumnData`."""
# return SingleColumnData(f)
# def returnMultipleColumns(f):
# """decorator for Trackers returning:class:`MultipleColumn`."""
# return MultipleColumns(f)
# def returnMultipleColumnData(f):
# """decorator for Trackers returning:class:`MultipleColumnData`."""
# return MultipleColumnData(f)
# def returnLabeledData(f):
# """decorator for Trackers returning:class:`LabeledData`."""
# return LabeledData(f)
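The decorators above all follow one pattern: wrap a function so its return value is validated and boxed in a container class before reaching the caller. A self-contained miniature of that pattern (the `Checked`, `return_checked`, and `tracker` names are illustrative stand-ins, not part of this module):

```python
class Checked:
    """Tiny stand-in for the Data subclasses above: validates on construction."""
    def __init__(self, data):
        for row in data:
            assert len(row) == 2, "expected (label, value) pairs"
        self._data = data

    def __iter__(self):
        return iter(self._data)


def return_checked(f):
    # Same shape as returnLabeledData and friends above.
    def wrapped_f(*args, **kwargs):
        return Checked(f(*args, **kwargs))
    return wrapped_f


@return_checked
def tracker():
    return (("column1", 1), ("column2", 2))


result = tracker()
print(list(result))  # [('column1', 1), ('column2', 2)]
```

The caller gets the validated container, and a malformed return value fails fast at the decorator boundary rather than deep inside rendering code.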
# File: llama/__init__.py (repo: HEPonHPC/llama, license: MIT)
from .llama import *
# File: tests/pytests/unit/states/postgresql/test_extension.py (repo: babs/salt, license: Apache-2.0)
import pytest
import salt.modules.postgres as postgresmod
import salt.states.postgres_extension as postgres_extension
from tests.support.mock import Mock, patch
@pytest.fixture
def configure_loader_modules():
return {
postgres_extension: {
"__grains__": {"os_family": "linux"},
"__salt__": {
"config.option": Mock(),
"cmd.run_all": Mock(),
"file.chown": Mock(),
"file.remove": Mock(),
},
"__opts__": {"test": False},
},
}
def test_present_failed():
"""
scenario of creating upgrading extensions with possible schema and
version specifications
"""
with patch.dict(
postgres_extension.__salt__,
{
"postgres.create_metadata": Mock(
side_effect=[
[postgresmod._EXTENSION_NOT_INSTALLED],
[postgresmod._EXTENSION_TO_MOVE, postgresmod._EXTENSION_INSTALLED],
]
),
"postgres.create_extension": Mock(side_effect=[False, False]),
},
):
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Failed to install extension foo",
"changes": {},
"name": "foo",
"result": False,
}
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Failed to upgrade extension foo",
"changes": {},
"name": "foo",
"result": False,
}
def test_present():
"""
scenario of creating upgrading extensions with possible schema and
version specifications
"""
with patch.dict(
postgres_extension.__salt__,
{
"postgres.create_metadata": Mock(
side_effect=[
[postgresmod._EXTENSION_NOT_INSTALLED],
[postgresmod._EXTENSION_INSTALLED],
[postgresmod._EXTENSION_TO_MOVE, postgresmod._EXTENSION_INSTALLED],
]
),
"postgres.create_extension": Mock(side_effect=[True, True, True]),
},
):
ret = postgres_extension.present("foo")
assert ret == {
"comment": "The extension foo has been installed",
"changes": {"foo": "Installed"},
"name": "foo",
"result": True,
}
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Extension foo is already present",
"changes": {},
"name": "foo",
"result": True,
}
ret = postgres_extension.present("foo")
assert ret == {
"comment": "The extension foo has been upgraded",
"changes": {"foo": "Upgraded"},
"name": "foo",
"result": True,
}
def test_presenttest():
"""
scenario of creating upgrading extensions with possible schema and
version specifications
"""
with patch.dict(
postgres_extension.__salt__,
{
"postgres.create_metadata": Mock(
side_effect=[
[postgresmod._EXTENSION_NOT_INSTALLED],
[postgresmod._EXTENSION_INSTALLED],
[postgresmod._EXTENSION_TO_MOVE, postgresmod._EXTENSION_INSTALLED],
]
),
"postgres.create_extension": Mock(side_effect=[True, True, True]),
},
):
with patch.dict(postgres_extension.__opts__, {"test": True}):
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Extension foo is set to be installed",
"changes": {},
"name": "foo",
"result": None,
}
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Extension foo is already present",
"changes": {},
"name": "foo",
"result": True,
}
ret = postgres_extension.present("foo")
assert ret == {
"comment": "Extension foo is set to be upgraded",
"changes": {},
"name": "foo",
"result": None,
}
def test_absent():
"""
scenario of creating upgrading extensions with possible schema and
version specifications
"""
with patch.dict(
postgres_extension.__salt__,
{
"postgres.is_installed_extension": Mock(side_effect=[True, False]),
"postgres.drop_extension": Mock(side_effect=[True, True]),
},
):
ret = postgres_extension.absent("foo")
assert ret == {
"comment": "Extension foo has been removed",
"changes": {"foo": "Absent"},
"name": "foo",
"result": True,
}
ret = postgres_extension.absent("foo")
assert ret == {
"comment": ("Extension foo is not present, so it cannot be removed"),
"changes": {},
"name": "foo",
"result": True,
}
def test_absent_failed():
"""
scenario of creating upgrading extensions with possible schema and
version specifications
"""
with patch.dict(postgres_extension.__opts__, {"test": False}):
with patch.dict(
postgres_extension.__salt__,
{
"postgres.is_installed_extension": Mock(side_effect=[True, True]),
"postgres.drop_extension": Mock(side_effect=[False, False]),
},
):
ret = postgres_extension.absent("foo")
assert ret == {
"comment": "Extension foo failed to be removed",
"changes": {},
"name": "foo",
"result": False,
}
def test_absent_failedtest():
with patch.dict(
postgres_extension.__salt__,
{
"postgres.is_installed_extension": Mock(side_effect=[True, True]),
"postgres.drop_extension": Mock(side_effect=[False, False]),
},
):
with patch.dict(postgres_extension.__opts__, {"test": True}):
ret = postgres_extension.absent("foo")
assert ret == {
"comment": "Extension foo is set to be removed",
"changes": {},
"name": "foo",
"result": None,
}
| 31.64878 | 87 | 0.515721 | 559 | 6,488 | 5.735242 | 0.148479 | 0.127261 | 0.052402 | 0.071117 | 0.843107 | 0.826887 | 0.7932 | 0.74111 | 0.74111 | 0.739551 | 0 | 0 | 0.364365 | 6,488 | 204 | 88 | 31.803922 | 0.777401 | 0.069205 | 0 | 0.6 | 0 | 0 | 0.205452 | 0.051994 | 0 | 0 | 0 | 0 | 0.070588 | 1 | 0.041176 | false | 0 | 0.023529 | 0.005882 | 0.070588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: pyqt_foldable_window/__init__.py (repo: yjg30737/pyqt-foldable-window, license: MIT)
from .foldableWindow import *
# File: thethings/endpoints/__init__.py (repo: VekotinVerstas/DjangoHttpBroker-TheThing, license: MIT)
from broker.providers.endpoint import import_endpoints
import_endpoints(__file__, __name__)
# File: double3/double3sdk/imu/imu.py (repo: CLOMING/winter2021_double, license: Apache-2.0)
from double3sdk.double_api import _DoubleAPI
class _Imu:
pass
| 11.333333 | 44 | 0.779412 | 9 | 68 | 5.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.191176 | 68 | 5 | 45 | 13.6 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
11f6ada767d0d96f897868f64e56840998521ca8 | 7,347 | py | Python | tests/test_core.py | jakirkham/mplview | 0847e4ccf3c4247cb72f35600b7f5f553b429c2d | [
"BSD-3-Clause"
] | 2 | 2018-05-30T18:53:19.000Z | 2018-06-11T17:32:54.000Z | tests/test_core.py | jakirkham/mplview | 0847e4ccf3c4247cb72f35600b7f5f553b429c2d | [
"BSD-3-Clause"
] | 15 | 2016-11-01T12:54:03.000Z | 2019-02-28T18:16:48.000Z | tests/test_core.py | jakirkham/mplview | 0847e4ccf3c4247cb72f35600b7f5f553b429c2d | [
"BSD-3-Clause"
] | null | null | null | __author__ = "John Kirkham <kirkhamj@janelia.hhmi.org>"
__date__ = "$Nov 01, 2016 9:19$"
import unittest
import numpy
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot
import mplview
import mplview.core
class TestMatplotlibViewer(unittest.TestCase):
def setUp(self):
self.mplv = matplotlib.pyplot.figure(
FigureClass=mplview.core.MatplotlibViewer
)
def test_state(self):
self.assertIsInstance(self.mplv, matplotlib.figure.Figure)
self.assertIsNotNone(getattr(self.mplv, "viewer", None))
def test_init_image(self):
img = numpy.arange(12.0).reshape(3,4)
self.mplv.set_images(img)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img, cur_img))
def test_init_image_matshow(self):
img = numpy.arange(12.0).reshape(3,4)
self.mplv.set_images(img, use_matshow=True)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img, cur_img))
def test_init_image_stack(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
cur_img = self.mplv.get_image(1)
self.assertTrue(numpy.array_equal(img[1], cur_img))
cur_img = self.mplv.get_image(-1)
self.assertTrue(numpy.array_equal(img[-1], cur_img))
def test_image_stack_retry(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
cur_img = self.mplv.get_image(2)
self.assertTrue(numpy.array_equal(img[2], cur_img))
cur_img = self.mplv.get_image(2)
self.assertTrue(numpy.array_equal(img[2], cur_img))
cur_img = self.mplv.get_image(-1)
self.assertTrue(numpy.array_equal(img[-1], cur_img))
cur_img = self.mplv.get_image(-1)
self.assertTrue(numpy.array_equal(img[-1], cur_img))
def test_init_too_big(self):
img = numpy.arange(60.0).reshape(1,5,3,4)
with self.assertRaises(ValueError) as e:
self.mplv.set_images(img)
def test_format_coord(self):
img = numpy.arange(12.0).reshape(3,4)
self.mplv.set_images(img)
exp_str = 'x=0.0000, y=0.0000, z=0.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.0, 0.0))
exp_str = 'x=0.2000, y=0.0000, z=0.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.2, 0.0))
exp_str = 'x=0.0000, y=0.2000, z=0.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.0, 0.2))
exp_str = 'x=0.2000, y=0.2000, z=0.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.2, 0.2))
exp_str = 'x=0.8000, y=0.2000, z=1.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.8, 0.2))
exp_str = 'x=0.2000, y=0.8000, z=4.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.2, 0.8))
exp_str = 'x=0.8000, y=0.8000, z=5.0000'
self.assertEqual(exp_str, self.mplv.format_coord(0.8, 0.8))
exp_str = 'x=4.0000, y=5.0000'
self.assertEqual(exp_str, self.mplv.format_coord(4.0, 5.0))
def test_image_color_range(self):
img = numpy.linspace(0, 1, 12).reshape(3,4)
self.mplv.set_images(img, vmin=0.0, vmax=1.0)
self.assertEqual(self.mplv.vmin, 0.0)
self.assertEqual(self.mplv.vmax, 1.0)
self.assertEqual(self.mplv.svmin, 0.0)
self.assertEqual(self.mplv.svmax, 1.0)
self.mplv.color_range_update(0.0, 1.0)
self.assertEqual(self.mplv.svmin, 0.0)
self.assertEqual(self.mplv.svmax, 1.0)
self.mplv.color_range_update(0.1, 0.9)
self.assertEqual(self.mplv.svmin, 0.1)
self.assertEqual(self.mplv.svmax, 0.9)
self.mplv.color_range_update(0.25, 0.75)
self.assertAlmostEqual(self.mplv.svmin, 0.3)
self.assertAlmostEqual(self.mplv.svmax, 0.7)
self.mplv.color_range_update(0.5, 0.5)
self.assertEqual(self.mplv.svmin, 0.0)
self.assertEqual(self.mplv.svmax, 1.0)
def test_navigator_callback(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
v = [0]
def callback(v=v):
v[0] += 1
self.assertEqual(v[0], 0)
cid = self.mplv.time_nav.on_time_update(callback)
self.assertEqual(v[0], 0)
self.mplv.time_nav.time_update(2)
self.assertEqual(v[0], 1)
self.mplv.time_nav.time_update(2)
self.assertEqual(v[0], 1)
self.mplv.time_nav.time_update(4)
self.assertEqual(v[0], 2)
self.mplv.time_nav.disconnect(cid)
self.mplv.time_nav.time_update(1)
self.assertEqual(v[0], 2)
def test_image_stack_nav_pos(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
self.mplv.time_nav.time_update(2)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[2], cur_img))
self.mplv.time_nav.time_update(-1)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
self.mplv.time_nav.time_update(10)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-1], cur_img))
def test_image_stack_nav_ends(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
self.mplv.time_nav.begin_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
self.mplv.time_nav.begin_time(None)
self.mplv.time_nav.prev_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
self.mplv.time_nav.end_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-1], cur_img))
self.mplv.time_nav.end_time(None)
self.mplv.time_nav.next_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-1], cur_img))
def test_image_stack_nav_step(self):
img = numpy.arange(60.0).reshape(5,3,4)
self.mplv.set_images(img)
self.mplv.time_nav.begin_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
self.mplv.time_nav.next_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[1], cur_img))
self.mplv.time_nav.prev_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[0], cur_img))
self.mplv.time_nav.end_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-1], cur_img))
self.mplv.time_nav.prev_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-2], cur_img))
self.mplv.time_nav.next_time(None)
cur_img = self.mplv.get_image()
self.assertTrue(numpy.array_equal(img[-1], cur_img))
def tearDown(self):
del self.mplv
| 32.799107 | 67 | 0.638084 | 1,155 | 7,347 | 3.864935 | 0.09697 | 0.150538 | 0.091174 | 0.106631 | 0.821685 | 0.782482 | 0.758513 | 0.721998 | 0.708109 | 0.699373 | 0 | 0.055323 | 0.22009 | 7,347 | 223 | 68 | 32.946188 | 0.723735 | 0 | 0 | 0.558282 | 0 | 0 | 0.038383 | 0.003675 | 0 | 0 | 0 | 0 | 0.325153 | 1 | 0.092025 | false | 0 | 0.03681 | 0 | 0.134969 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee9b8834ac3bd09bb93d15b005b944d34f1c87c5 | 210 | py | Python | sqlalchemy_model_builder/__init__.py | aminalaee/fastapi-admin | 15206af6b223f778cbe64d1e72d4200289e72eba | [
"MIT"
] | 2 | 2021-06-04T17:33:49.000Z | 2022-03-23T19:22:35.000Z | sqlalchemy_model_builder/__init__.py | aminalaee/sqlalchemy-model-builder | 15206af6b223f778cbe64d1e72d4200289e72eba | [
"MIT"
] | 10 | 2021-06-09T06:02:05.000Z | 2021-08-08T16:22:34.000Z | sqlalchemy_model_builder/__init__.py | aminalaee/sqlalchemy-model-builder | 15206af6b223f778cbe64d1e72d4200289e72eba | [
"MIT"
] | null | null | null | __version__ = "0.0.6"
from sqlalchemy_model_builder.exceptions import ModelBuilderException
from sqlalchemy_model_builder.model_builder import ModelBuilder
__all__ = ["ModelBuilder", "ModelBuilderException"]
| 30 | 69 | 0.847619 | 22 | 210 | 7.5 | 0.545455 | 0.218182 | 0.230303 | 0.315152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015544 | 0.080952 | 210 | 6 | 70 | 35 | 0.839378 | 0 | 0 | 0 | 0 | 0 | 0.180952 | 0.1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
eec4266eac29b3e5a641c293e18489aa06c36c27 | 12,612 | py | Python | 3.7.0/lldb-3.7.0.src/test/tools/lldb-mi/variable/TestMiGdbSetShowPrint.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | 3 | 2016-02-10T14:18:40.000Z | 2018-02-05T03:15:56.000Z | 3.7.0/lldb-3.7.0.src/test/tools/lldb-mi/variable/TestMiGdbSetShowPrint.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | 1 | 2016-02-10T15:40:03.000Z | 2016-02-10T15:40:03.000Z | 3.7.0/lldb-3.7.0.src/test/tools/lldb-mi/variable/TestMiGdbSetShowPrint.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | null | null | null | """
Test lldb-mi -gdb-set and -gdb-show commands for 'print option-name'.
"""
import lldbmi_testcase
from lldbtest import *
import unittest2
class MiGdbSetShowTestCase(lldbmi_testcase.MiTestCaseBase):
mydir = TestBase.compute_mydir(__file__)
@lldbmi_test
@expectedFailureWindows("llvm.org/pr22274: need a pexpect replacement for windows")
@skipIfFreeBSD # llvm.org/pr22411: Failure presumably due to known thread races
@skipIfLinux # llvm.org/pr22841: lldb-mi tests fail on all Linux buildbots
def test_lldbmi_gdb_set_show_print_char_array_as_string(self):
"""Test that 'lldb-mi --interpreter' can print array of chars as string."""
self.spawnLldbMi(args = None)
# Load executable
self.runCmd("-file-exec-and-symbols %s" % self.myexe)
self.expect("\^done")
# Run to BP_gdb_set_show_print_char_array_as_string_test
line = line_number('main.cpp', '// BP_gdb_set_show_print_char_array_as_string_test')
self.runCmd("-break-insert main.cpp:%d" % line)
self.expect("\^done,bkpt={number=\"1\"")
self.runCmd("-exec-run")
self.expect("\^running")
self.expect("\*stopped,reason=\"breakpoint-hit\"")
# Test that default print char-array-as-string value is "off"
self.runCmd("-gdb-show print char-array-as-string")
self.expect("\^done,value=\"off\"")
# Test that an char* is expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * cp")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ \\\\\\\"hello\\\\\\\"\",type=\"const char \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char[] isn't expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * ca")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"\[6\]\",type=\"const char \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that an char16_t* is expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * u16p")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ u\\\\\\\"hello\\\\\\\"\",type=\"const char16_t \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char16_t[] isn't expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * u16a")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"\[6\]\",type=\"const char16_t \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that an char32_t* is expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * u32p")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ U\\\\\\\"hello\\\\\\\"\",type=\"const char32_t \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char32_t[] isn't expanded to string when print char-array-as-string is "off"
self.runCmd("-var-create - * u32a")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"\[6\]\",type=\"const char32_t \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set can set print char-array-as-string flag
self.runCmd("-gdb-set print char-array-as-string on")
self.expect("\^done")
self.runCmd("-gdb-set print char-array-as-string 1")
self.expect("\^done")
self.runCmd("-gdb-show print char-array-as-string")
self.expect("\^done,value=\"on\"")
# Test that an char* is expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * cp")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ \\\\\\\"hello\\\\\\\"\",type=\"const char \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char[] isn't expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * ca")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"\\\\\\\"hello\\\\\\\"\",type=\"const char \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that an char16_t* is expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * u16p")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ u\\\\\\\"hello\\\\\\\"\",type=\"const char16_t \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char16_t[] isn't expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * u16a")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"u\\\\\\\"hello\\\\\\\"\",type=\"const char16_t \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that an char32_t* is expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * u32p")
self.expect("\^done,name=\"var\d+\",numchild=\"1\",value=\"0x[0-9a-f]+ U\\\\\\\"hello\\\\\\\"\",type=\"const char32_t \*\",thread-id=\"1\",has_more=\"0\"")
# Test that an char32_t[] isn't expanded to string when print char-array-as-string is "on"
self.runCmd("-var-create - * u32a")
self.expect("\^done,name=\"var\d+\",numchild=\"6\",value=\"U\\\\\\\"hello\\\\\\\"\",type=\"const char32_t \[6\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set print char-array-as-string fails if "on"/"off" isn't specified
self.runCmd("-gdb-set print char-array-as-string")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
# Test that -gdb-set print char-array-as-string fails when option is unknown
self.runCmd("-gdb-set print char-array-as-string unknown")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
@lldbmi_test
@expectedFailureWindows("llvm.org/pr22274: need a pexpect replacement for windows")
@expectedFailureGcc("https://llvm.org/bugs/show_bug.cgi?id=23357")
@skipIfFreeBSD # llvm.org/pr22411: Failure presumably due to known thread races
def test_lldbmi_gdb_set_show_print_expand_aggregates(self):
"""Test that 'lldb-mi --interpreter' can expand aggregates everywhere."""
self.spawnLldbMi(args = None)
# Load executable
self.runCmd("-file-exec-and-symbols %s" % self.myexe)
self.expect("\^done")
# Run to BP_gdb_set_show_print_expand_aggregates
line = line_number('main.cpp', '// BP_gdb_set_show_print_expand_aggregates')
self.runCmd("-break-insert main.cpp:%d" % line)
self.expect("\^done,bkpt={number=\"1\"")
self.runCmd("-exec-run")
self.expect("\^running")
self.expect("\*stopped,reason=\"breakpoint-hit\"")
# Test that default print expand-aggregates value is "off"
self.runCmd("-gdb-show print expand-aggregates")
self.expect("\^done,value=\"off\"")
# Test that composite type isn't expanded when print expand-aggregates is "off"
self.runCmd("-var-create var1 * complx")
self.expect("\^done,name=\"var1\",numchild=\"3\",value=\"{\.\.\.}\",type=\"complex_type\",thread-id=\"1\",has_more=\"0\"")
# Test that composite type[] isn't expanded when print expand-aggregates is "off"
self.runCmd("-var-create var2 * complx_array")
self.expect("\^done,name=\"var2\",numchild=\"2\",value=\"\[2\]\",type=\"complex_type \[2\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set can set print expand-aggregates flag
self.runCmd("-gdb-set print expand-aggregates on")
self.expect("\^done")
self.runCmd("-gdb-set print expand-aggregates 1")
self.expect("\^done")
self.runCmd("-gdb-show print expand-aggregates")
self.expect("\^done,value=\"on\"")
# Test that composite type is expanded when print expand-aggregates is "on"
self.runCmd("-var-create var3 * complx")
self.expect("\^done,name=\"var3\",numchild=\"3\",value=\"{i = 3, inner = {l = 3}, complex_ptr = 0x[0-9a-f]+}\",type=\"complex_type\",thread-id=\"1\",has_more=\"0\"")
# Test that composite type[] is expanded when print expand-aggregates is "on"
self.runCmd("-var-create var4 * complx_array")
self.expect("\^done,name=\"var4\",numchild=\"2\",value=\"{\[0\] = {i = 4, inner = {l = 4}, complex_ptr = 0x[0-9a-f]+}, \[1\] = {i = 5, inner = {l = 5}, complex_ptr = 0x[0-9a-f]+}}\",type=\"complex_type \[2\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set print expand-aggregates fails if "on"/"off" isn't specified
self.runCmd("-gdb-set print expand-aggregates")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
# Test that -gdb-set print expand-aggregates fails when option is unknown
self.runCmd("-gdb-set print expand-aggregates unknown")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
@lldbmi_test
@expectedFailureWindows("llvm.org/pr22274: need a pexpect replacement for windows")
@expectedFailureGcc("https://llvm.org/bugs/show_bug.cgi?id=23357")
@skipIfFreeBSD # llvm.org/pr22411: Failure presumably due to known thread races
def test_lldbmi_gdb_set_show_print_aggregate_field_names(self):
"""Test that 'lldb-mi --interpreter' can expand aggregates everywhere."""
self.spawnLldbMi(args = None)
# Load executable
self.runCmd("-file-exec-and-symbols %s" % self.myexe)
self.expect("\^done")
# Run to BP_gdb_set_show_print_aggregate_field_names
line = line_number('main.cpp', '// BP_gdb_set_show_print_aggregate_field_names')
self.runCmd("-break-insert main.cpp:%d" % line)
self.expect("\^done,bkpt={number=\"1\"")
self.runCmd("-exec-run")
self.expect("\^running")
self.expect("\*stopped,reason=\"breakpoint-hit\"")
# Test that default print aggregatep-field-names value is "on"
self.runCmd("-gdb-show print aggregate-field-names")
self.expect("\^done,value=\"on\"")
# Set print expand-aggregates flag to "on"
self.runCmd("-gdb-set print expand-aggregates on")
self.expect("\^done")
# Test that composite type is expanded with field name when print aggregate-field-names is "on"
self.runCmd("-var-create var1 * complx")
self.expect("\^done,name=\"var1\",numchild=\"3\",value=\"{i = 3, inner = {l = 3}, complex_ptr = 0x[0-9a-f]+}\",type=\"complex_type\",thread-id=\"1\",has_more=\"0\"")
# Test that composite type[] is expanded with field name when print aggregate-field-names is "on"
self.runCmd("-var-create var2 * complx_array")
self.expect("\^done,name=\"var2\",numchild=\"2\",value=\"{\[0\] = {i = 4, inner = {l = 4}, complex_ptr = 0x[0-9a-f]+}, \[1\] = {i = 5, inner = {l = 5}, complex_ptr = 0x[0-9a-f]+}}\",type=\"complex_type \[2\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set can set print aggregate-field-names flag
self.runCmd("-gdb-set print aggregate-field-names off")
self.expect("\^done")
self.runCmd("-gdb-set print aggregate-field-names 0")
self.expect("\^done")
self.runCmd("-gdb-show print aggregate-field-names")
self.expect("\^done,value=\"off\"")
# Test that composite type is expanded without field name when print aggregate-field-names is "off"
self.runCmd("-var-create var3 * complx")
self.expect("\^done,name=\"var3\",numchild=\"3\",value=\"{3,\{3\},0x[0-9a-f]+}\",type=\"complex_type\",thread-id=\"1\",has_more=\"0\"")
# Test that composite type[] is expanded without field name when print aggregate-field-names is "off"
self.runCmd("-var-create var4 * complx_array")
self.expect("\^done,name=\"var4\",numchild=\"2\",value=\"{{4,\{4\},0x[0-9a-f]+},{5,\{5\},0x[0-9a-f]+}}\",type=\"complex_type \[2\]\",thread-id=\"1\",has_more=\"0\"")
# Test that -gdb-set print aggregate-field-names fails if "on"/"off" isn't specified
self.runCmd("-gdb-set print aggregate-field-names")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
# Test that -gdb-set print aggregate-field-names fails when option is unknown
self.runCmd("-gdb-set print aggregate-field-names unknown")
self.expect("\^error,msg=\"The request ''print' expects option-name and \"on\" or \"off\"' failed.\"")
if __name__ == '__main__':
unittest2.main()
| 56.810811 | 250 | 0.61608 | 1,795 | 12,612 | 4.247911 | 0.091365 | 0.066885 | 0.071607 | 0.052459 | 0.952131 | 0.941377 | 0.937574 | 0.918164 | 0.880525 | 0.851541 | 0 | 0.024903 | 0.17856 | 12,612 | 221 | 251 | 57.067873 | 0.7111 | 0.263003 | 0 | 0.692913 | 0 | 0 | 0.40325 | 0.041603 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023622 | false | 0 | 0.023622 | 0 | 0.062992 | 0.244094 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
011d0931e1200ee171a4f47a8c9c0803c928ffd0 | 174 | py | Python | data/__init__.py | isamu-isozaki/hidden-networks | 7dcb96a7de43b65ffde176d771f88b5ecedb84ab | [
"Apache-2.0"
] | 132 | 2019-12-03T19:02:36.000Z | 2022-03-27T15:56:43.000Z | data/__init__.py | isamu-isozaki/hidden-networks | 7dcb96a7de43b65ffde176d771f88b5ecedb84ab | [
"Apache-2.0"
] | 9 | 2019-12-05T16:28:33.000Z | 2022-02-21T21:49:13.000Z | data/__init__.py | isamu-isozaki/hidden-networks | 7dcb96a7de43b65ffde176d771f88b5ecedb84ab | [
"Apache-2.0"
] | 45 | 2019-12-04T00:11:53.000Z | 2022-03-30T21:07:37.000Z | from data.cifar import CIFAR10
from data.imagenet import ImageNet
from data.tinyimagenet import TinyImageNet
from data.mnist import MNIST
from data.bigcifar import BigCIFAR10 | 34.8 | 42 | 0.862069 | 25 | 174 | 6 | 0.4 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025806 | 0.109195 | 174 | 5 | 43 | 34.8 | 0.941935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
011d79b29a8da8afdb960917fc2da1c69648d032 | 11,106 | py | Python | loss.py | memmelma/Mode_Collapse | c06a4e769933ebece9f0bdeb150f9d8b61077f85 | [
"MIT"
] | 14 | 2020-06-22T12:56:10.000Z | 2022-03-31T10:23:00.000Z | loss.py | memmelma/Mode_Collapse | c06a4e769933ebece9f0bdeb150f9d8b61077f85 | [
"MIT"
] | null | null | null | loss.py | memmelma/Mode_Collapse | c06a4e769933ebece9f0bdeb150f9d8b61077f85 | [
"MIT"
] | 2 | 2022-01-21T01:22:23.000Z | 2022-02-13T18:08:08.000Z | from typing import Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
class GANLossGenerator(nn.Module):
"""
This class implements the standard generator GAN loss proposed in:
https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(GANLossGenerator, self).__init__()
def forward(self, discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Standard generator GAN loss
"""
# Loss can be computed by utilizing the softplus function since softplus combines both sigmoid and log
return - F.softplus(discriminator_prediction_fake).mean()
class GANLossDiscriminator(nn.Module):
"""
This class implements the standard discriminator GAN loss proposed in:
https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(GANLossDiscriminator, self).__init__()
def forward(self, discriminator_prediction_real: torch.Tensor,
discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_real: (torch.Tensor) Raw discriminator prediction for real samples
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Standard discriminator GAN loss
"""
# Loss can be computed by utilizing the softplus function since softplus combines both sigmoid and log
return F.softplus(- discriminator_prediction_real).mean() \
+ F.softplus(discriminator_prediction_fake).mean()
class NSGANLossGenerator(nn.Module):
"""
This class implements the non-saturating generator GAN loss proposed in:
https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(NSGANLossGenerator, self).__init__()
def forward(self, discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Non-saturating generator GAN loss
"""
# Loss can be computed by utilizing the softplus function since softplus combines both sigmoid and log
return F.softplus(- discriminator_prediction_fake).mean()
class NSGANLossDiscriminator(GANLossDiscriminator):
"""
This class implements the non-saturating discriminator GAN loss proposed in:
https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(NSGANLossDiscriminator, self).__init__()
class WassersteinGANLossGenerator(nn.Module):
"""
This class implements the Wasserstein generator GAN loss proposed in:
http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a.pdf
"""
    def __init__(self) -> None:
        """
        Constructor method.
        """
        # Call super constructor
        super(WassersteinGANLossGenerator, self).__init__()
def forward(self, discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Wasserstein Generator GAN loss with gradient
"""
return - discriminator_prediction_fake.mean()
class WassersteinGANLossDiscriminator(nn.Module):
"""
This class implements the Wasserstein generator GAN loss proposed in:
http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(WassersteinGANLossDiscriminator, self).__init__()
def forward(self, discriminator_prediction_real: torch.Tensor,
discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_real: (torch.Tensor) Raw discriminator prediction for real samples
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Wasserstein generator GAN loss with gradient penalty
"""
return - discriminator_prediction_real.mean() \
+ discriminator_prediction_fake.mean()
class WassersteinGANLossGPGenerator(WassersteinGANLossGenerator):
"""
This class implements the Wasserstein generator GAN loss proposed in:
https://proceedings.neurips.cc/paper/2017/file/892c3b1c6dccd52936e27cbd0ff683d6-Paper.pdf
"""
    def __init__(self) -> None:
        """
        Constructor method.
        """
        # Call super constructor
        super(WassersteinGANLossGPGenerator, self).__init__()
class WassersteinGANLossGPDiscriminator(nn.Module):
"""
This class implements the Wasserstein generator GAN loss proposed in:
https://proceedings.neurips.cc/paper/2017/file/892c3b1c6dccd52936e27cbd0ff683d6-Paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(WassersteinGANLossGPDiscriminator, self).__init__()
def forward(self, discriminator_prediction_real: torch.Tensor,
discriminator_prediction_fake: torch.Tensor,
discriminator: nn.Module,
real_samples: torch.Tensor,
fake_samples: torch.Tensor,
lambda_gradient_penalty: Optional[float] = 2., **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_real: (torch.Tensor) Raw discriminator prediction for real samples
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Wasserstein discriminator GAN loss with gradient penalty
"""
# Generate random alpha for interpolation
alpha = torch.rand((real_samples.shape[0], 1), device=real_samples.device)
# Make interpolated samples
samples_interpolated = (alpha * real_samples + (1. - alpha) * fake_samples)
samples_interpolated.requires_grad = True
# Make discriminator prediction
discriminator_prediction_interpolated = discriminator(samples_interpolated)
# Calc gradients
gradients = torch.autograd.grad(outputs=discriminator_prediction_interpolated.sum(),
inputs=samples_interpolated,
create_graph=True,
retain_graph=True)[0]
# Calc gradient penalty
gradient_penalty = (gradients.view(gradients.shape[0], -1).norm(dim=1) - 1.).pow(2).mean()
return - discriminator_prediction_real.mean() \
+ discriminator_prediction_fake.mean() \
+ lambda_gradient_penalty * gradient_penalty
class LSGANLossGenerator(nn.Module):
"""
This class implements the least squares generator GAN loss proposed in:
https://openaccess.thecvf.com/content_ICCV_2017/papers/Mao_Least_Squares_Generative_ICCV_2017_paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(LSGANLossGenerator, self).__init__()
def forward(self, discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Generator LSGAN loss
"""
        # LSGAN generator minimizes 0.5 * E[(D(G(z)) - 1)^2]; no leading minus,
        # consistent with the other generator losses in this file being minimized
        return 0.5 * (discriminator_prediction_fake - 1.).pow(2).mean()
class LSGANLossDiscriminator(nn.Module):
"""
This class implements the least squares discriminator GAN loss proposed in:
https://openaccess.thecvf.com/content_ICCV_2017/papers/Mao_Least_Squares_Generative_ICCV_2017_paper.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(LSGANLossDiscriminator, self).__init__()
def forward(self, discriminator_prediction_real: torch.Tensor,
discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_real: (torch.Tensor) Raw discriminator prediction for real samples
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Discriminator LSGAN loss
"""
        # LSGAN discriminator loss: 0.5 * E[(D(x) - 1)^2] + 0.5 * E[D(G(z))^2]
        return 0.5 * ((discriminator_prediction_real - 1.).pow(2).mean()
                      + discriminator_prediction_fake.pow(2).mean())
class HingeGANLossGenerator(WassersteinGANLossGenerator):
"""
This class implements the Hinge generator GAN loss proposed in:
https://arxiv.org/pdf/1705.02894.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(HingeGANLossGenerator, self).__init__()
class HingeGANLossDiscriminator(nn.Module):
"""
This class implements the Hinge discriminator GAN loss proposed in:
https://arxiv.org/pdf/1705.02894.pdf
"""
def __init__(self) -> None:
"""
Constructor method.
"""
# Call super constructor
super(HingeGANLossDiscriminator, self).__init__()
def forward(self, discriminator_prediction_real: torch.Tensor,
discriminator_prediction_fake: torch.Tensor, **kwargs) -> torch.Tensor:
"""
Forward pass.
:param discriminator_prediction_real: (torch.Tensor) Raw discriminator prediction for real samples
:param discriminator_prediction_fake: (torch.Tensor) Raw discriminator predictions for fake samples
:return: (torch.Tensor) Hinge discriminator GAN loss
"""
        # max(0, 1 - D(real)) + max(0, 1 + D(fake)), averaged over the batch
        return torch.relu(1. - discriminator_prediction_real).mean() \
               + torch.relu(1. + discriminator_prediction_fake).mean()
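As a quick sanity check of the least-squares and hinge formulations above, the same per-sample math can be written with plain numbers (`lsgan_d` and `hinge_d` are illustrative helpers, not part of the module):

```python
# Plain-number restatement of the loss formulas above, for scalar predictions.
def lsgan_d(real_pred: float, fake_pred: float) -> float:
    # LSGAN discriminator loss: 0.5 * ((D(real) - 1)^2 + D(fake)^2)
    return 0.5 * ((real_pred - 1.0) ** 2 + fake_pred ** 2)

def hinge_d(real_pred: float, fake_pred: float) -> float:
    # Hinge discriminator loss: max(0, 1 - D(real)) + max(0, 1 + D(fake))
    return max(0.0, 1.0 - real_pred) + max(0.0, 1.0 + fake_pred)

# A discriminator scoring real samples at 1 (or >= 1 for hinge) and fake
# samples at 0 (or <= -1 for hinge) incurs zero loss under both objectives.
print(lsgan_d(1.0, 0.0))   # 0.0
print(hinge_d(2.0, -3.0))  # 0.0
```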
# paradigm/QuestionGeneration/Crossword.py (Paradigm-shift-AI/paradigm-brain, MIT)
class Crossword:
def __init__(self, processed_transcript: dict):
self.processed_transcript = processed_transcript
self.question = []
    def __generate_question(self):
        # Placeholder: question generation is not implemented yet.
        return 1
def questions(self):
self.__generate_question()
return self.question
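For illustration, here is how the stub above would be exercised; the class is restated so the snippet is self-contained, and the transcript dict is a made-up example:

```python
# Restatement of the Crossword stub above, plus example usage.
class Crossword:
    def __init__(self, processed_transcript: dict):
        self.processed_transcript = processed_transcript
        self.question = []

    def __generate_question(self):
        # Placeholder: no questions are produced yet.
        return 1

    def questions(self):
        self.__generate_question()
        return self.question

cw = Crossword({"text": "a made-up transcript"})
print(cw.questions())  # -> [] until __generate_question populates self.question
```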
# Python/Hello_Git_Hacktoberfest.py (PRONAY24/Hello-world, MIT)
print("Hello GitHub Hacktoberfest.!!")
# app/pitch/__init__.py (jonodrew/graduate-rotator, MIT)
from flask import Blueprint
pitch_bp = Blueprint("pitch", __name__)
from app.pitch import routes # noqa: E402,F401
# nlp/lstm_gru.py (mikuh/models-tf2, MIT)
from tensorflow import keras
input_length = 100
def lstm_model():
model = keras.Sequential([
keras.layers.Embedding(input_dim=30000, output_dim=100, input_length=input_length),
keras.layers.LSTM(32, return_sequences=True),
keras.layers.LSTM(1, activation='sigmoid', return_sequences=False)
])
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.BinaryCrossentropy(), metrics=['accuracy'])
return model
def gru_model():
model = keras.Sequential([
keras.layers.Embedding(input_dim=30000, output_dim=32, input_length=input_length),
        keras.layers.GRU(32, return_sequences=True),
        keras.layers.GRU(1, activation='sigmoid', return_sequences=False)
])
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.BinaryCrossentropy(), metrics=['accuracy'])
return model
if __name__ == '__main__':
lstm = lstm_model()
lstm.summary()
gru = gru_model()
    gru.summary()

# tests/test_basic.py (normcyr/comptage_velo_mtl, MIT)
from .context import comptage_velo_mtl
# twitterwall/__init__.py (johnTheSloth/TwitterWall, CC0-1.0)
from .twitterwallCore import *
# test.py (SalehAghajani/Casimir_programming, MIT)
print('Hello World')
import numpy as np
def circumference_circle(radius):
    return 2*np.pi*radius
def area_circle(radius):
return np.pi*(radius**2)
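A quick numeric check of the two circle formulas above, evaluated for r = 3 against the closed-form values (stand-alone, so it does not depend on the function names in the file):

```python
import math
import numpy as np

# Circumference 2*pi*r and area pi*r^2 for a circle of radius 3.
radius = 3.0
circumference = 2 * np.pi * radius
area = np.pi * radius ** 2
assert math.isclose(circumference, 6 * math.pi)  # 18.8495...
assert math.isclose(area, 9 * math.pi)           # 28.2743...
print(circumference, area)
```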
# quarkchain/cluster/tests/test_shard_state.py (chenhuan14/pyquarkchain, MIT)
import random
import unittest
from quarkchain.cluster.shard_state import ShardState
from quarkchain.cluster.tests.test_utils import (
get_test_env,
create_transfer_transaction,
)
from quarkchain.core import CrossShardTransactionDeposit, CrossShardTransactionList
from quarkchain.core import Identity, Address
from quarkchain.diff import EthDifficultyCalculator
from quarkchain.evm import opcodes
from quarkchain.genesis import GenesisManager
def create_default_shard_state(env, shard_id=0, diff_calc=None):
genesis_manager = GenesisManager(env.quark_chain_config)
shard_state = ShardState(env=env, shard_id=shard_id, diff_calc=diff_calc)
shard_state.init_genesis_state(genesis_manager.create_root_block())
return shard_state
class TestShardState(unittest.TestCase):
def test_shard_state_simple(self):
env = get_test_env()
state = create_default_shard_state(env)
self.assertEqual(state.root_tip.height, 0)
self.assertEqual(state.header_tip.height, 0)
def test_gas_price(self):
id_list = [Identity.create_random_identity() for _ in range(5)]
acc_list = [Address.create_from_identity(i, full_shard_id=0) for i in id_list]
env = get_test_env(genesis_account=acc_list[0], genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
# 5 tx per block, make 3 blocks
for _ in range(3):
for j in range(5):
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id_list[j].get_key(),
from_address=acc_list[j],
to_address=random.choice(acc_list),
value=0,
gas_price=42 if j == 0 else 0,
)
)
b = state.create_block_to_mine(address=acc_list[1])
state.finalize_and_add_block(b)
# for testing purposes, update percentile to take max gas price
state.gas_price_suggestion_oracle.percentile = 100
gas_price = state.gas_price()
self.assertEqual(gas_price, 42)
# results should be cached (same header). updating oracle shouldn't take effect
state.gas_price_suggestion_oracle.percentile = 50
gas_price = state.gas_price()
self.assertEqual(gas_price, 42)
def test_estimate_gas(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
tx_gen = lambda data: create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
data=data,
)
tx = tx_gen(b"")
estimate = state.estimate_gas(tx, acc1)
self.assertEqual(estimate, 21000)
tx = tx_gen(b"12123478123412348125936583475758")
estimate = state.estimate_gas(tx, acc1)
self.assertEqual(estimate, 23176)
def test_execute_tx(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
)
# adding this line to make sure `execute_tx` would reset `gas_used`
state.evm_state.gas_used = state.evm_state.gas_limit
res = state.execute_tx(tx, acc1)
self.assertEqual(res, b"")
def test_add_tx_incorrect_from_shard_id(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=1)
acc2 = Address.create_random_account(full_shard_id=1)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
# state is shard 0 but tx from shard 1
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
)
self.assertFalse(state.add_tx(tx))
self.assertIsNone(state.execute_tx(tx, acc1))
def test_one_tx(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
gas=50000,
)
state.evm_state.gas_used = state.evm_state.gas_limit
self.assertTrue(state.add_tx(tx))
block, i = state.get_transaction_by_hash(tx.get_hash())
self.assertEqual(block.tx_list[0], tx)
self.assertEqual(block.header.create_time, 0)
self.assertEqual(i, 0)
# tx claims to use more gas than the limit and thus not included
b1 = state.create_block_to_mine(address=acc3, gas_limit=49999)
self.assertEqual(len(b1.tx_list), 0)
b1 = state.create_block_to_mine(address=acc3, gas_limit=50000)
self.assertEqual(len(b1.tx_list), 1)
# Should succeed
state.finalize_and_add_block(b1)
self.assertEqual(state.header_tip, b1.header)
self.assertEqual(
state.get_balance(id1.recipient), 10000000 - opcodes.GTXCOST - 12345
)
self.assertEqual(state.get_balance(acc2.recipient), 12345)
self.assertEqual(state.get_balance(acc3.recipient), opcodes.GTXCOST // 2)
# Check receipts
self.assertEqual(len(state.evm_state.receipts), 1)
self.assertEqual(state.evm_state.receipts[0].state_root, b"\x01")
self.assertEqual(state.evm_state.receipts[0].gas_used, 21000)
block, i = state.get_transaction_by_hash(tx.get_hash())
self.assertEqual(block, b1)
self.assertEqual(i, 0)
# Check receipts in storage
resp = state.get_transaction_receipt(tx.get_hash())
self.assertIsNotNone(resp)
block, i, r = resp
self.assertEqual(block, b1)
self.assertEqual(i, 0)
self.assertEqual(r.success, b"\x01")
self.assertEqual(r.gas_used, 21000)
# Check Account has full_shard_id
self.assertEqual(
state.evm_state.get_full_shard_id(acc2.recipient), acc2.full_shard_id
)
tx_list, _ = state.db.get_transactions_by_address(acc1)
self.assertEqual(tx_list[0].value, 12345)
tx_list, _ = state.db.get_transactions_by_address(acc2)
self.assertEqual(tx_list[0].value, 12345)
def test_duplicated_tx(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
)
self.assertTrue(state.add_tx(tx))
self.assertFalse(state.add_tx(tx)) # already in tx_queue
self.assertEqual(len(state.tx_queue), 1)
self.assertEqual(len(state.tx_dict), 1)
block, i = state.get_transaction_by_hash(tx.get_hash())
self.assertEqual(len(block.tx_list), 1)
self.assertEqual(block.tx_list[0], tx)
self.assertEqual(block.header.create_time, 0)
self.assertEqual(i, 0)
b1 = state.create_block_to_mine(address=acc3)
self.assertEqual(len(b1.tx_list), 1)
# Should succeed
state.finalize_and_add_block(b1)
self.assertEqual(state.header_tip, b1.header)
self.assertEqual(
state.get_balance(id1.recipient), 10000000 - opcodes.GTXCOST - 12345
)
self.assertEqual(state.get_balance(acc2.recipient), 12345)
self.assertEqual(state.get_balance(acc3.recipient), opcodes.GTXCOST // 2)
# Check receipts
self.assertEqual(len(state.evm_state.receipts), 1)
self.assertEqual(state.evm_state.receipts[0].state_root, b"\x01")
self.assertEqual(state.evm_state.receipts[0].gas_used, 21000)
block, i = state.get_transaction_by_hash(tx.get_hash())
self.assertEqual(block, b1)
self.assertEqual(i, 0)
# tx already confirmed
self.assertTrue(state.db.contain_transaction_hash(tx.get_hash()))
self.assertFalse(state.add_tx(tx))
def test_add_invalid_tx_fail(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=999999999999999999999, # insane
)
self.assertFalse(state.add_tx(tx))
self.assertEqual(len(state.tx_queue), 0)
def test_add_non_neighbor_tx_fail(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=3) # not acc1's neighbor
acc3 = Address.create_random_account(full_shard_id=8) # acc1's neighbor
env = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=10000000, shard_size=64
)
state = create_default_shard_state(env=env)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=0,
gas=1000000,
)
self.assertFalse(state.add_tx(tx))
self.assertEqual(len(state.tx_queue), 0)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc3,
value=0,
gas=1000000,
)
self.assertTrue(state.add_tx(tx))
self.assertEqual(len(state.tx_queue), 1)
def test_exceeding_xshard_limit(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_random_account(full_shard_id=1)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
# a huge number to make xshard tx limit become 0 so that no xshard tx can be
# included in the block
env.quark_chain_config.MAX_NEIGHBORS = 10 ** 18
state = create_default_shard_state(env=env)
# xshard tx
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=12345,
gas=50000,
)
self.assertTrue(state.add_tx(tx))
b1 = state.create_block_to_mine(address=acc3)
self.assertEqual(len(b1.tx_list), 0)
# inshard tx
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc3,
value=12345,
gas=50000,
)
self.assertTrue(state.add_tx(tx))
b1 = state.create_block_to_mine(address=acc3)
self.assertEqual(len(b1.tx_list), 1)
def test_two_tx_in_one_block(self):
id1 = Identity.create_random_identity()
id2 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id2, full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=2000000 + opcodes.GTXCOST
)
state = create_default_shard_state(env=env)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=1000000,
)
)
b0 = state.create_block_to_mine(address=acc3)
state.finalize_and_add_block(b0)
self.assertEqual(state.get_balance(id1.recipient), 1000000)
self.assertEqual(state.get_balance(acc2.recipient), 1000000)
self.assertEqual(state.get_balance(acc3.recipient), opcodes.GTXCOST // 2)
# Check Account has full_shard_id
self.assertEqual(
state.evm_state.get_full_shard_id(acc2.recipient), acc2.full_shard_id
)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=Address(
acc2.recipient, acc2.full_shard_id + 2
), # set a different full shard id
value=12345,
gas=50000,
)
)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id2.get_key(),
from_address=acc2,
to_address=acc1,
value=54321,
gas=40000,
)
)
b1 = state.create_block_to_mine(address=acc3, gas_limit=40000)
self.assertEqual(len(b1.tx_list), 1)
b1 = state.create_block_to_mine(address=acc3, gas_limit=90000)
self.assertEqual(len(b1.tx_list), 2)
# Should succeed
state.finalize_and_add_block(b1)
self.assertEqual(state.header_tip, b1.header)
self.assertEqual(
state.get_balance(id1.recipient), 1000000 - opcodes.GTXCOST - 12345 + 54321
)
self.assertEqual(
state.get_balance(acc2.recipient), 1000000 - opcodes.GTXCOST + 12345 - 54321
)
self.assertEqual(state.get_balance(acc3.recipient), opcodes.GTXCOST * 1.5)
# Check receipts
self.assertEqual(len(state.evm_state.receipts), 2)
self.assertEqual(state.evm_state.receipts[0].state_root, b"\x01")
self.assertEqual(state.evm_state.receipts[0].gas_used, 21000)
self.assertEqual(state.evm_state.receipts[1].state_root, b"\x01")
self.assertEqual(state.evm_state.receipts[1].gas_used, 42000)
block, i = state.get_transaction_by_hash(b1.tx_list[0].get_hash())
self.assertEqual(block, b1)
self.assertEqual(i, 0)
block, i = state.get_transaction_by_hash(b1.tx_list[1].get_hash())
self.assertEqual(block, b1)
self.assertEqual(i, 1)
# Check acc2 full_shard_id doesn't change
self.assertEqual(
state.evm_state.get_full_shard_id(acc2.recipient), acc2.full_shard_id
)
def test_fork_does_not_confirm_tx(self):
"""Tx should only be confirmed and removed from tx queue by the best chain"""
id1 = Identity.create_random_identity()
id2 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id2, full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=2000000 + opcodes.GTXCOST
)
state = create_default_shard_state(env=env)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=1000000,
)
)
b0 = state.create_block_to_mine(address=acc3)
b1 = state.create_block_to_mine(address=acc3)
b0.tx_list = [] # make b0 empty
state.finalize_and_add_block(b0)
self.assertEqual(len(state.tx_queue), 1)
self.assertEqual(len(b1.tx_list), 1)
state.finalize_and_add_block(b1)
# b1 is a fork and does not remove the tx from queue
self.assertEqual(len(state.tx_queue), 1)
b2 = state.create_block_to_mine(address=acc3)
state.finalize_and_add_block(b2)
self.assertEqual(len(state.tx_queue), 0)
def test_revert_fork_put_tx_back_to_queue(self):
"""Tx in the reverted chain should be put back to the queue"""
id1 = Identity.create_random_identity()
id2 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id2, full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=2000000 + opcodes.GTXCOST
)
state = create_default_shard_state(env=env)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=1000000,
)
)
b0 = state.create_block_to_mine(address=acc3)
b1 = state.create_block_to_mine(address=acc3)
state.finalize_and_add_block(b0)
self.assertEqual(len(state.tx_queue), 0)
b1.tx_list = [] # make b1 empty
state.finalize_and_add_block(b1)
self.assertEqual(len(state.tx_queue), 0)
b2 = b1.create_block_to_append()
state.finalize_and_add_block(b2)
# now b1-b2 becomes the best chain and we expect b0 to be reverted and put the tx back to queue
self.assertEqual(len(state.tx_queue), 1)
b3 = b0.create_block_to_append()
state.finalize_and_add_block(b3)
self.assertEqual(len(state.tx_queue), 1)
b4 = b3.create_block_to_append()
state.finalize_and_add_block(b4)
# b0-b3-b4 becomes the best chain
self.assertEqual(len(state.tx_queue), 0)
def test_stale_block_count(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env)
b1 = state.create_block_to_mine(address=acc3)
b2 = state.create_block_to_mine(address=acc3)
b2.header.create_time += 1
state.finalize_and_add_block(b1)
self.assertEqual(state.db.get_block_count_by_height(1), 1)
state.finalize_and_add_block(b2)
self.assertEqual(state.db.get_block_count_by_height(1), 2)
def test_xshard_tx_sent(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id1, full_shard_id=1)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env, shard_id=0)
env1 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state1 = create_default_shard_state(env=env1, shard_id=1)
# Add a root block to update block gas limit so that xshard tx can be included
root_block = (
state.root_tip.create_block_to_append()
.add_minor_block_header(state.header_tip)
.add_minor_block_header(state1.header_tip)
.finalize()
)
state.add_root_block(root_block)
tx = create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=888888,
gas=opcodes.GTXXSHARDCOST + opcodes.GTXCOST,
)
state.add_tx(tx)
b1 = state.create_block_to_mine(address=acc3)
self.assertEqual(len(b1.tx_list), 1)
self.assertEqual(state.evm_state.gas_used, 0)
# Should succeed
state.finalize_and_add_block(b1)
self.assertEqual(len(state.evm_state.xshard_list), 1)
self.assertEqual(
state.evm_state.xshard_list[0],
CrossShardTransactionDeposit(
tx_hash=tx.get_hash(),
from_address=acc1,
to_address=acc2,
value=888888,
gas_price=1,
),
)
self.assertEqual(
state.get_balance(id1.recipient),
10000000 - 888888 - opcodes.GTXCOST - opcodes.GTXXSHARDCOST,
)
# Make sure the xshard gas is not used by local block
self.assertEqual(
state.evm_state.gas_used, opcodes.GTXCOST + opcodes.GTXXSHARDCOST
)
# GTXXSHARDCOST is consumed by remote shard
self.assertEqual(state.get_balance(acc3.recipient), opcodes.GTXCOST // 2)
def test_xshard_tx_insufficient_gas(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id1, full_shard_id=1)
acc3 = Address.create_random_account(full_shard_id=0)
env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
state = create_default_shard_state(env=env, shard_id=0)
state.add_tx(
create_transfer_transaction(
shard_state=state,
key=id1.get_key(),
from_address=acc1,
to_address=acc2,
value=888888,
gas=opcodes.GTXCOST,
)
)
b1 = state.create_block_to_mine(address=acc3)
self.assertEqual(len(b1.tx_list), 0)
self.assertEqual(len(state.tx_queue), 0)
def test_xshard_tx_received(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id1, full_shard_id=16)
acc3 = Address.create_random_account(full_shard_id=0)
env0 = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=10000000, shard_size=64
)
env1 = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=10000000, shard_size=64
)
state0 = create_default_shard_state(env=env0, shard_id=0)
state1 = create_default_shard_state(env=env1, shard_id=16)
# Add a root block to allow later minor blocks referencing this root block to
# be broadcasted
root_block = (
state0.root_tip.create_block_to_append()
.add_minor_block_header(state0.header_tip)
.add_minor_block_header(state1.header_tip)
.finalize()
)
state0.add_root_block(root_block)
state1.add_root_block(root_block)
# Add one block in shard 0
b0 = state0.create_block_to_mine()
state0.finalize_and_add_block(b0)
b1 = state1.get_tip().create_block_to_append()
b1.header.hash_prev_root_block = root_block.header.get_hash()
tx = create_transfer_transaction(
shard_state=state1,
key=id1.get_key(),
from_address=acc2,
to_address=acc1,
value=888888,
gas=opcodes.GTXXSHARDCOST + opcodes.GTXCOST,
gas_price=2,
)
b1.add_tx(tx)
# Add a x-shard tx from remote peer
state0.add_cross_shard_tx_list_by_minor_block_hash(
h=b1.header.get_hash(),
tx_list=CrossShardTransactionList(
tx_list=[
CrossShardTransactionDeposit(
tx_hash=tx.get_hash(),
from_address=acc2,
to_address=acc1,
value=888888,
gas_price=2,
)
]
),
)
# Create a root block containing the block with the x-shard tx
root_block = (
state0.root_tip.create_block_to_append()
.add_minor_block_header(b0.header)
.add_minor_block_header(b1.header)
.finalize()
)
state0.add_root_block(root_block)
# Add b0 and make sure all x-shard tx's are added
b2 = state0.create_block_to_mine(address=acc3)
state0.finalize_and_add_block(b2)
self.assertEqual(state0.get_balance(acc1.recipient), 10000000 + 888888)
# Half collected by root
self.assertEqual(
state0.get_balance(acc3.recipient), opcodes.GTXXSHARDCOST * 2 // 2
)
# X-shard gas used
evmState0 = state0.evm_state
self.assertEqual(evmState0.xshard_receive_gas_used, opcodes.GTXXSHARDCOST)
def test_xshard_tx_received_exclude_non_neighbor(self):
id1 = Identity.create_random_identity()
acc1 = Address.create_from_identity(id1, full_shard_id=0)
acc2 = Address.create_from_identity(id1, full_shard_id=3)
acc3 = Address.create_random_account(full_shard_id=0)
env0 = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=10000000, shard_size=64
)
env1 = get_test_env(
genesis_account=acc1, genesis_minor_quarkash=10000000, shard_size=64
)
state0 = create_default_shard_state(env=env0, shard_id=0)
state1 = create_default_shard_state(env=env1, shard_id=3)
# Add one block in shard 0
b0 = state0.create_block_to_mine()
state0.finalize_and_add_block(b0)
b1 = state1.get_tip().create_block_to_append()
tx = create_transfer_transaction(
shard_state=state1,
key=id1.get_key(),
from_address=acc2,
to_address=acc1,
value=888888,
gas=opcodes.GTXXSHARDCOST + opcodes.GTXCOST,
gas_price=2,
)
b1.add_tx(tx)
# Add a x-shard tx from remote peer
state0.add_cross_shard_tx_list_by_minor_block_hash(
h=b1.header.get_hash(),
tx_list=CrossShardTransactionList(
tx_list=[
CrossShardTransactionDeposit(
tx_hash=tx.get_hash(),
from_address=acc2,
to_address=acc1,
value=888888,
gas_price=2,
)
]
),
)
# Create a root block containing the block with the x-shard tx
root_block = (
state0.root_tip.create_block_to_append()
.add_minor_block_header(b0.header)
.add_minor_block_header(b1.header)
.finalize()
)
state0.add_root_block(root_block)
# Add b0 and make sure all x-shard tx's are added
b2 = state0.create_block_to_mine(address=acc3)
state0.finalize_and_add_block(b2)
self.assertEqual(state0.get_balance(acc1.recipient), 10000000)
# Half collected by root
self.assertEqual(state0.get_balance(acc3.recipient), 0)
# X-shard gas used
evmState0 = state0.evm_state
self.assertEqual(evmState0.xshard_receive_gas_used, 0)
    def test_xshard_for_two_root_blocks(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)
        acc2 = Address.create_from_identity(id1, full_shard_id=1)
        acc3 = Address.create_random_account(full_shard_id=0)

        env0 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        env1 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state0 = create_default_shard_state(env=env0, shard_id=0)
        state1 = create_default_shard_state(env=env1, shard_id=1)

        # Add a root block to allow later minor blocks referencing this root block to
        # be broadcasted
        root_block = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(state0.header_tip)
            .add_minor_block_header(state1.header_tip)
            .finalize()
        )
        state0.add_root_block(root_block)
        state1.add_root_block(root_block)

        # Add one block in shard 0
        b0 = state0.create_block_to_mine()
        state0.finalize_and_add_block(b0)

        b1 = state1.get_tip().create_block_to_append()
        b1.header.hash_prev_root_block = root_block.header.get_hash()
        tx = create_transfer_transaction(
            shard_state=state1,
            key=id1.get_key(),
            from_address=acc2,
            to_address=acc1,
            value=888888,
            gas=opcodes.GTXXSHARDCOST + opcodes.GTXCOST,
        )
        b1.add_tx(tx)

        # Add a x-shard tx from state1
        state0.add_cross_shard_tx_list_by_minor_block_hash(
            h=b1.header.get_hash(),
            tx_list=CrossShardTransactionList(
                tx_list=[
                    CrossShardTransactionDeposit(
                        tx_hash=tx.get_hash(),
                        from_address=acc2,
                        to_address=acc1,
                        value=888888,
                        gas_price=2,
                    )
                ]
            ),
        )

        # Create a root block containing the block with the x-shard tx
        root_block0 = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(b0.header)
            .add_minor_block_header(b1.header)
            .finalize()
        )
        state0.add_root_block(root_block0)

        b2 = state0.get_tip().create_block_to_append()
        state0.finalize_and_add_block(b2)

        b3 = b1.create_block_to_append()
        b3.header.hash_prev_root_block = root_block.header.get_hash()

        # Add a x-shard tx from state1
        state0.add_cross_shard_tx_list_by_minor_block_hash(
            h=b3.header.get_hash(),
            tx_list=CrossShardTransactionList(
                tx_list=[
                    CrossShardTransactionDeposit(
                        tx_hash=bytes(32),
                        from_address=acc2,
                        to_address=acc1,
                        value=385723,
                        gas_price=3,
                    )
                ]
            ),
        )

        root_block1 = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(b2.header)
            .add_minor_block_header(b3.header)
            .finalize()
        )
        state0.add_root_block(root_block1)

        # Test x-shard gas limit when create_block_to_mine
        b5 = state0.create_block_to_mine(address=acc3, gas_limit=0)
        # Current algorithm allows at least one root block to be included
        self.assertEqual(b5.header.hash_prev_root_block, root_block0.header.get_hash())
        b6 = state0.create_block_to_mine(address=acc3, gas_limit=opcodes.GTXXSHARDCOST)
        self.assertEqual(b6.header.hash_prev_root_block, root_block0.header.get_hash())
        # There are two x-shard txs: one is the root block coinbase with zero gas,
        # and another is from shard 1
        b7 = state0.create_block_to_mine(
            address=acc3, gas_limit=2 * opcodes.GTXXSHARDCOST
        )
        self.assertEqual(b7.header.hash_prev_root_block, root_block1.header.get_hash())
        b8 = state0.create_block_to_mine(
            address=acc3, gas_limit=3 * opcodes.GTXXSHARDCOST
        )
        self.assertEqual(b8.header.hash_prev_root_block, root_block1.header.get_hash())

        # Add b0 and make sure all x-shard tx's are added
        b4 = state0.create_block_to_mine(address=acc3)
        self.assertEqual(b4.header.hash_prev_root_block, root_block1.header.get_hash())
        state0.finalize_and_add_block(b4)

        self.assertEqual(state0.get_balance(acc1.recipient), 10000000 + 888888 + 385723)
        # Half collected by root
        self.assertEqual(
            state0.get_balance(acc3.recipient), opcodes.GTXXSHARDCOST * (2 + 3) // 2
        )
        # Check gas used for receiving x-shard tx
        self.assertEqual(state0.evm_state.gas_used, 18000)
        self.assertEqual(state0.evm_state.xshard_receive_gas_used, 18000)
    def test_fork_resolve(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env, shard_id=0)

        b0 = state.get_tip().create_block_to_append()
        b1 = state.get_tip().create_block_to_append()

        state.finalize_and_add_block(b0)
        self.assertEqual(state.header_tip, b0.header)

        # Fork happens, first come first serve
        state.finalize_and_add_block(b1)
        self.assertEqual(state.header_tip, b0.header)

        # Longer fork happens, override existing one
        b2 = b1.create_block_to_append()
        state.finalize_and_add_block(b2)
        self.assertEqual(state.header_tip, b2.header)
    def test_root_chain_first_consensus(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env0 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        env1 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state0 = create_default_shard_state(env=env0, shard_id=0)
        state1 = create_default_shard_state(env=env1, shard_id=1)

        # Add one block and prepare a fork
        b0 = state0.get_tip().create_block_to_append(address=acc1)
        b2 = state0.get_tip().create_block_to_append(
            address=Address.create_empty_account()
        )

        state0.finalize_and_add_block(b0)
        state0.finalize_and_add_block(b2)

        b1 = state1.get_tip().create_block_to_append()
        b1.finalize(evm_state=state1.run_block(b1))

        # Create a root block containing the block with the x-shard tx
        state0.add_cross_shard_tx_list_by_minor_block_hash(
            h=b1.header.get_hash(), tx_list=CrossShardTransactionList(tx_list=[])
        )
        root_block = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(b0.header)
            .add_minor_block_header(b1.header)
            .finalize()
        )
        state0.add_root_block(root_block)

        b00 = b0.create_block_to_append()
        state0.finalize_and_add_block(b00)
        self.assertEqual(state0.header_tip, b00.header)

        # Create another fork that is much longer (however not confirmed by root_block)
        b3 = b2.create_block_to_append()
        state0.finalize_and_add_block(b3)
        b4 = b3.create_block_to_append()
        state0.finalize_and_add_block(b4)
        self.assertGreater(b4.header.height, b00.header.height)
        self.assertEqual(state0.header_tip, b00.header)
    def test_shard_state_add_root_block(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env0 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        env1 = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state0 = create_default_shard_state(env=env0, shard_id=0)
        state1 = create_default_shard_state(env=env1, shard_id=1)

        # Add one block and prepare a fork
        b0 = state0.get_tip().create_block_to_append(address=acc1)
        b2 = state0.get_tip().create_block_to_append(
            address=Address.create_empty_account()
        )

        state0.finalize_and_add_block(b0)
        state0.finalize_and_add_block(b2)

        b1 = state1.get_tip().create_block_to_append()
        b1.finalize(evm_state=state1.run_block(b1))

        # Create a root block containing the block with the x-shard tx
        state0.add_cross_shard_tx_list_by_minor_block_hash(
            h=b1.header.get_hash(), tx_list=CrossShardTransactionList(tx_list=[])
        )
        root_block = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(b0.header)
            .add_minor_block_header(b1.header)
            .finalize()
        )
        root_block1 = (
            state0.root_tip.create_block_to_append()
            .add_minor_block_header(b2.header)
            .add_minor_block_header(b1.header)
            .finalize()
        )

        state0.add_root_block(root_block)

        b00 = b0.create_block_to_append()
        state0.finalize_and_add_block(b00)
        self.assertEqual(state0.header_tip, b00.header)

        # Create another fork that is much longer (however not confirmed by root_block)
        b3 = b2.create_block_to_append()
        state0.finalize_and_add_block(b3)
        b4 = b3.create_block_to_append()
        state0.finalize_and_add_block(b4)
        self.assertEqual(state0.header_tip, b00.header)
        self.assertEqual(state0.db.get_minor_block_by_height(2), b00)
        self.assertIsNone(state0.db.get_minor_block_by_height(3))

        b5 = b1.create_block_to_append()
        state0.add_cross_shard_tx_list_by_minor_block_hash(
            h=b5.header.get_hash(), tx_list=CrossShardTransactionList(tx_list=[])
        )
        root_block2 = (
            root_block1.create_block_to_append()
            .add_minor_block_header(b3.header)
            .add_minor_block_header(b4.header)
            .add_minor_block_header(b5.header)
            .finalize()
        )

        self.assertFalse(state0.add_root_block(root_block1))
        self.assertTrue(state0.add_root_block(root_block2))
        self.assertEqual(state0.header_tip, b4.header)
        self.assertEqual(state0.meta_tip, b4.meta)
        self.assertEqual(state0.root_tip, root_block2.header)

        self.assertEqual(state0.db.get_minor_block_by_height(2), b3)
        self.assertEqual(state0.db.get_minor_block_by_height(3), b4)
    def test_shard_state_fork_resolve_with_higher_root_chain(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env, shard_id=0)

        b0 = state.get_tip().create_block_to_append()
        state.finalize_and_add_block(b0)
        root_block = (
            state.root_tip.create_block_to_append()
            .add_minor_block_header(b0.header)
            .finalize()
        )

        self.assertEqual(state.header_tip, b0.header)
        self.assertTrue(state.add_root_block(root_block))

        b1 = state.get_tip().create_block_to_append()
        b2 = state.get_tip().create_block_to_append(nonce=1)
        b2.header.hash_prev_root_block = root_block.header.get_hash()
        b3 = state.get_tip().create_block_to_append(nonce=2)
        b3.header.hash_prev_root_block = root_block.header.get_hash()

        state.finalize_and_add_block(b1)
        self.assertEqual(state.header_tip, b1.header)

        # Fork happens, although they have the same height, b2 survives since it confirms root block
        state.finalize_and_add_block(b2)
        self.assertEqual(state.header_tip, b2.header)

        # b3 confirms the same root block as b2, so it will not override b2
        state.finalize_and_add_block(b3)
        self.assertEqual(state.header_tip, b2.header)
    def test_shard_state_difficulty(self):
        env = get_test_env()
        for shard in env.quark_chain_config.SHARD_LIST:
            shard.GENESIS.DIFFICULTY = 10000
        env.quark_chain_config.SKIP_MINOR_DIFFICULTY_CHECK = False
        diff_calc = EthDifficultyCalculator(cutoff=9, diff_factor=2048, minimum_diff=1)
        # other network ids will skip difficulty check
        env.quark_chain_config.NETWORK_ID = 1

        state = create_default_shard_state(env=env, shard_id=0, diff_calc=diff_calc)

        # Check new difficulty
        b0 = state.create_block_to_mine(state.header_tip.create_time + 8)
        self.assertEqual(
            b0.header.difficulty,
            state.header_tip.difficulty // 2048 + state.header_tip.difficulty,
        )
        b0 = state.create_block_to_mine(state.header_tip.create_time + 9)
        self.assertEqual(b0.header.difficulty, state.header_tip.difficulty)
        b0 = state.create_block_to_mine(state.header_tip.create_time + 17)
        self.assertEqual(b0.header.difficulty, state.header_tip.difficulty)
        b0 = state.create_block_to_mine(state.header_tip.create_time + 24)
        self.assertEqual(
            b0.header.difficulty,
            state.header_tip.difficulty - state.header_tip.difficulty // 2048,
        )
        b0 = state.create_block_to_mine(state.header_tip.create_time + 35)
        self.assertEqual(
            b0.header.difficulty,
            state.header_tip.difficulty - state.header_tip.difficulty // 2048 * 2,
        )
    def test_shard_state_recovery_from_root_block(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env, shard_id=0)

        blockHeaders = []
        blockMetas = []
        for i in range(12):
            b = state.get_tip().create_block_to_append(address=acc1)
            state.finalize_and_add_block(b)
            blockHeaders.append(b.header)
            blockMetas.append(b.meta)

        # add a fork
        b1 = state.db.get_minor_block_by_height(3)
        b1.header.create_time += 1
        state.finalize_and_add_block(b1)
        self.assertEqual(state.db.get_minor_block_by_hash(b1.header.get_hash()), b1)

        root_block = state.root_tip.create_block_to_append()
        root_block.minor_block_header_list = blockHeaders[:5]
        root_block.finalize()
        state.add_root_block(root_block)

        recoveredState = ShardState(env=env, shard_id=0)
        recoveredState.init_from_root_block(root_block)

        # forks are pruned
        self.assertIsNone(
            recoveredState.db.get_minor_block_by_hash(b1.header.get_hash())
        )
        self.assertEqual(
            recoveredState.db.get_minor_block_by_hash(
                b1.header.get_hash(), consistency_check=False
            ),
            b1,
        )

        self.assertEqual(recoveredState.root_tip, root_block.header)
        self.assertEqual(recoveredState.header_tip, blockHeaders[4])
        self.assertEqual(recoveredState.confirmed_header_tip, blockHeaders[4])
        self.assertEqual(recoveredState.meta_tip, blockMetas[4])
        self.assertEqual(
            recoveredState.evm_state.trie.root_hash, blockMetas[4].hash_evm_state_root
        )
    def test_add_block_receipt_root_not_match(self):
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1)
        acc3 = Address.create_random_account(full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env)

        b1 = state.create_block_to_mine(address=acc3)
        # Should succeed
        state.finalize_and_add_block(b1)

        b1.finalize(evm_state=state.run_block(b1))
        b1.meta.hash_evm_receipt_root = b"00" * 32
        # Pass the callable and its argument separately so that assertRaises
        # invokes add_block itself and can catch the ValueError
        self.assertRaises(ValueError, state.add_block, b1)
    def test_not_update_tip_on_root_fork(self):
        """ block's hash_prev_root_block must be on the same chain with root_tip to update tip.

                 +--+
              a. |r1|
                /+--+
               /   |
        +--+  /  +--+    +--+
        |r0|<----|m1|<---|m2| c.
        +--+  \  +--+    +--+
               \   |      |
                \+--+     |
              b. |r2|<----+
                 +--+

        Initial state: r0 <- m1
        Then adding r1, r2, m2 should not make m2 the tip because r1 is the root tip and r2 and r1
        are not on the same root chain.
        """
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env, shard_id=0)

        m1 = state.get_tip().create_block_to_append(address=acc1)
        state.finalize_and_add_block(m1)

        r1 = state.root_tip.create_block_to_append()
        r2 = state.root_tip.create_block_to_append()
        r1.minor_block_header_list.append(m1.header)
        r1.finalize()
        state.add_root_block(r1)
        r2.minor_block_header_list.append(m1.header)
        r2.header.create_time = r1.header.create_time + 1  # make r2, r1 different
        r2.finalize()
        self.assertNotEqual(r1.header.get_hash(), r2.header.get_hash())
        state.add_root_block(r2)
        self.assertEqual(state.root_tip, r1.header)

        m2 = m1.create_block_to_append(address=acc1)
        m2.header.hash_prev_root_block = r2.header.get_hash()
        state.finalize_and_add_block(m2)
        # m2 is added
        self.assertEqual(state.db.get_minor_block_by_hash(m2.header.get_hash()), m2)
        # but m1 should still be the tip
        self.assertEqual(state.header_tip, m1.header)
    def test_add_root_block_revert_header_tip(self):
        """ block's hash_prev_root_block must be on the same chain with root_tip to update tip.

                 +--+
                 |r1|<-------------+
                /+--+              |
               /   |               |
        +--+  /  +--+    +--+    +--+
        |r0|<----|m1|<---|m2|<---|m3|
        +--+  \  +--+    +--+    +--+
               \   |       \
                \+--+      +--+
                 |r2|<-----|r3| (r3 includes m2)
                 +--+      +--+

        Initial state: r0 <- m1 <- m2
        Adding r1, r2, m3 makes r1 the root_tip, m3 the header_tip
        Adding r3 should change the root_tip to r3, header_tip to m2
        """
        id1 = Identity.create_random_identity()
        acc1 = Address.create_from_identity(id1, full_shard_id=0)

        env = get_test_env(genesis_account=acc1, genesis_minor_quarkash=10000000)
        state = create_default_shard_state(env=env, shard_id=0)

        m1 = state.get_tip().create_block_to_append(address=acc1)
        state.finalize_and_add_block(m1)
        m2 = state.get_tip().create_block_to_append(address=acc1)
        state.finalize_and_add_block(m2)

        r1 = state.root_tip.create_block_to_append()
        r2 = state.root_tip.create_block_to_append()
        r1.minor_block_header_list.append(m1.header)
        r1.finalize()
        state.add_root_block(r1)
        r2.minor_block_header_list.append(m1.header)
        r2.header.create_time = r1.header.create_time + 1  # make r2, r1 different
        r2.finalize()
        self.assertNotEqual(r1.header.get_hash(), r2.header.get_hash())
        state.add_root_block(r2)
        self.assertEqual(state.root_tip, r1.header)

        m3 = state.create_block_to_mine(address=acc1)
        self.assertEqual(m3.header.hash_prev_root_block, r1.header.get_hash())
        state.finalize_and_add_block(m3)

        r3 = r2.create_block_to_append(address=acc1)
        r3.add_minor_block_header(m2.header)
        r3.finalize()
        state.add_root_block(r3)
        self.assertEqual(state.root_tip, r3.header)
        self.assertEqual(state.header_tip, m2.header)
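The difficulty assertions in `test_shard_state_difficulty` above encode an Ethereum-style adjustment rule (cutoff 9 seconds, factor 2048, minimum 1). A minimal standalone sketch of that rule; `next_difficulty` is a hypothetical helper written for illustration, not the project's actual `EthDifficultyCalculator`:

```python
def next_difficulty(parent_diff, dt, cutoff=9, diff_factor=2048, minimum_diff=1):
    # sign is +1 when the child arrives faster than the cutoff, 0 within the
    # next cutoff window, then increasingly negative (floored at -99)
    sign = max(1 - dt // cutoff, -99)
    return max(parent_diff + (parent_diff // diff_factor) * sign, minimum_diff)

assert next_difficulty(10000, 8) == 10000 + 10000 // 2048   # faster than cutoff
assert next_difficulty(10000, 9) == 10000                   # inside the flat window
```

Under these assumptions the same arithmetic reproduces the +8/+9/+17/+24/+35 expectations the test checks against the tip difficulty.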
# file: test/test_telluric.py (repo: cylammarco/ASPIRED, license: BSD-3-Clause)
import copy
import numpy as np
from aspired import spectral_reduction
from aspired.flux_calibration import FluxCalibration
'''
onedspec = spectral_reduction.OneDSpec(log_file_name=None)

onedspec.science_spectrum_list[0].add_wavelength(wave)
onedspec.science_spectrum_list[0].add_flux(flux_sci, None, None)
onedspec.science_spectrum_list[0].add_flux_continuum(flux_sci_continuum)

fluxcal = FluxCalibration(log_file_name=None)
telluric_func = fluxcal.get_telluric_profile(wave,
                                             flux_std,
                                             flux_std_continuum,
                                             mask_range=[[495, 551],
                                                         [700, 753],
                                                         [848, 960]],
                                             return_function=True)
fluxcal.inspect_telluric_profile()

onedspec.add_telluric_function(telluric_func)
onedspec.get_telluric_profile()
onedspec.inspect_telluric_profile(display=True)
onedspec.apply_telluric_correction()

std_wave = np.load('test/test_data/std_wave.npy')
std_flux = np.load('test/test_data/std_flux.npy')
std_flux_continuum = np.load('test/test_data/std_flux_continuum.npy')

sci_wave = np.load('test/test_data/sci_wave.npy')
sci_flux = np.load('test/test_data/sci_flux.npy')
sci_flux_continuum = np.load('test/test_data/sci_flux_continuum.npy')

onedspec = spectral_reduction.OneDSpec(log_file_name=None)

onedspec.science_spectrum_list[0].add_wavelength(sci_wave)
onedspec.science_spectrum_list[0].add_flux(sci_flux, None, None)
onedspec.science_spectrum_list[0].add_flux_continuum(sci_flux_continuum)

fluxcal = FluxCalibration(log_file_name=None)
telluric_func = fluxcal.get_telluric_profile(std_wave,
                                             std_flux,
                                             std_flux_continuum,
                                             return_function=True)

onedspec.add_telluric_function(telluric_func)
onedspec.get_telluric_profile(auto_apply=False)
onedspec.inspect_telluric_profile(display=True)
'''
def test_telluric_square_wave():

    wave = np.arange(1000.)
    flux_sci = np.ones(1000) * 5.
    flux_std = np.ones(1000) * 100.

    flux_sci_continuum = copy.deepcopy(flux_sci)
    flux_std_continuum = copy.deepcopy(flux_std)

    flux_sci[500:550] *= 0.01
    flux_sci[700:750] *= 0.001
    flux_sci[850:950] *= 0.1

    flux_std[500:550] *= 0.01
    flux_std[700:750] *= 0.001
    flux_std[850:950] *= 0.1

    # Get the telluric profile
    fluxcal = FluxCalibration(log_file_name=None)
    telluric_func = fluxcal.get_telluric_profile(wave,
                                                 flux_std,
                                                 flux_std_continuum,
                                                 mask_range=[[495, 551],
                                                             [700, 753],
                                                             [848, 960]],
                                                 return_function=True)

    onedspec = spectral_reduction.OneDSpec(log_file_name=None)

    onedspec.science_spectrum_list[0].add_wavelength(wave)
    onedspec.science_spectrum_list[0].add_flux(flux_sci, None, None)
    onedspec.science_spectrum_list[0].add_flux_continuum(flux_sci_continuum)

    onedspec.add_telluric_function(telluric_func)
    onedspec.get_telluric_profile()
    onedspec.apply_telluric_correction()

    assert np.isclose(np.nansum(onedspec.science_spectrum_list[0].flux),
                      np.nansum(flux_sci_continuum),
                      rtol=1e-2)

    onedspec.inspect_telluric_profile(
        display=False,
        return_jsonstring=True,
        save_fig=True,
        fig_type='iframe+jpg+png+svg+pdf',
        filename='test/test_output/test_telluric')
def test_telluric_real_data():

    std_wave = np.load('test/test_data/std_wave.npy')
    std_flux = np.load('test/test_data/std_flux.npy')
    std_flux_continuum = np.load('test/test_data/std_flux_continuum.npy')

    sci_wave = np.load('test/test_data/sci_wave.npy')
    sci_flux = np.load('test/test_data/sci_flux.npy')
    sci_flux_continuum = np.load('test/test_data/sci_flux_continuum.npy')

    # Get the telluric profile
    fluxcal = FluxCalibration(log_file_name=None)
    telluric_func = fluxcal.get_telluric_profile(std_wave,
                                                 std_flux,
                                                 std_flux_continuum,
                                                 return_function=True)

    onedspec = spectral_reduction.OneDSpec(log_file_name=None)

    onedspec.science_spectrum_list[0].add_wavelength(sci_wave)
    onedspec.science_spectrum_list[0].add_flux(sci_flux, None, None)
    onedspec.science_spectrum_list[0].add_flux_continuum(sci_flux_continuum)

    onedspec.add_telluric_function(telluric_func)
    onedspec.get_telluric_profile()
    onedspec.apply_telluric_correction()

    assert np.isclose(np.nansum(onedspec.science_spectrum_list[0].flux),
                      np.nansum(sci_flux_continuum),
                      rtol=1e-2)

    onedspec.inspect_telluric_profile(
        display=False,
        return_jsonstring=True,
        save_fig=True,
        fig_type='iframe+jpg+png+svg+pdf',
        filename='test/test_output/test_telluric')
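Both tests above assert that, after telluric correction, the science flux sums back to its continuum. The underlying idea (divide the observed flux by the absorption profile derived from a standard star) can be sketched standalone with plain numpy; this is illustrative only, not the aspired implementation:

```python
import numpy as np

flux = np.ones(1000) * 5.0        # flat science spectrum
continuum = flux.copy()
profile = np.ones(1000)
profile[500:550] = 0.01           # square absorption bands, mirroring
profile[700:750] = 0.001          # the shape used in test_telluric_square_wave
profile[850:950] = 0.1
observed = flux * profile         # what the detector records

corrected = observed / profile    # undo the absorption
assert np.allclose(corrected, continuum)
```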
# file: music/analysis_merge.py (repo: rkneusel9/SwarmOptimization, license: MIT)
#
# file: analysis_merge.py
#
# Plots for melody merge results
#
# RTK, 27-Oct-2020
# Last update: 27-Oct-2020
#
################################################################
import numpy as np
import matplotlib.pylab as plt
import pickle
def Plot(giter, gbest, sym, lbl):
    d = np.zeros(10000)
    for i in range(len(giter)):
        d[giter[i]:] = gbest[i]
    plt.plot(range(10000)[::800], d[::800], marker=sym, linestyle='none', color='k', label=lbl)
    plt.plot(range(10000), d, linewidth=1, color='k')
# alpha = 0.5
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_DE/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "o", "DE")
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_RO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "s", "RO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_GA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "X", "GA")
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_PSO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "<", "PSO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_GWO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], ">", "GWO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.5_JAYA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "*", "Jaya")
plt.legend(loc="upper right")
plt.tight_layout(pad=0, w_pad=0, h_pad=0)
plt.savefig("merge_0.5_plot.png", dpi=300)
plt.show()
plt.close()
# alpha = 0.1
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_DE/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "o", "DE")
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_RO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "s", "RO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_GA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "X", "GA")
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_PSO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "<", "PSO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_GWO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], ">", "GWO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.1_JAYA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "*", "Jaya")
plt.legend(loc="upper right")
plt.tight_layout(pad=0, w_pad=0, h_pad=0)
plt.savefig("merge_0.1_plot.png", dpi=300)
plt.show()
plt.close()
# alpha = 0.9
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_DE/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "o", "DE")
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_RO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "s", "RO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_GA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "X", "GA")
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_PSO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "<", "PSO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_GWO/results.pkl","rb"))
Plot(p["giter"],p["gbest"], ">", "GWO")
p = pickle.load(open("results/merge/mary_ode_alpha_0.9_JAYA/results.pkl","rb"))
Plot(p["giter"],p["gbest"], "*", "Jaya")
plt.legend(loc="upper right")
plt.tight_layout(pad=0, w_pad=0, h_pad=0)
plt.savefig("merge_0.9_plot.png", dpi=300)
plt.show()
plt.close()
| 38.02439 | 95 | 0.6517 | 557 | 3,118 | 3.490126 | 0.141831 | 0.064815 | 0.101852 | 0.138889 | 0.844136 | 0.844136 | 0.844136 | 0.844136 | 0.829733 | 0.829733 | 0 | 0.034807 | 0.078576 | 3,118 | 81 | 96 | 38.493827 | 0.641838 | 0.044901 | 0 | 0.5 | 0 | 0 | 0.425172 | 0.295862 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0 | 0.05 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# file: FlowNet2_src/flownet2/__init__.py (repo: TopGun666/FlowVO, license: MIT)
from .flownet2 import FlowNet2
# file: back/REST/flask/rating.py (repo: lvvanegas10/Historio, license: MIT)
from flask import Blueprint
from aux import *
rating = Blueprint("rating", __name__)
@rating.route('/', methods=["POST"])
@authorExists
def createRating():
    incoming = request.json
    story = incoming["story"]
    author = incoming["author"]["username"]
    rating = incoming["rating"]
    resp = query(
        "match (a:Author) match (s:Story) where a.username = $uname and id(s) = $sid merge (s)<-[r:Rating ]-(a) on create set r.rating = $rating, r.comment = $comment, r.date = $date return r",
        {"uname": author, "sid": story, "rating": rating["rating"], "comment": rating["comment"],
         "date": rating["date"]})
    return Response(json.dumps(resp), content_type='application/json')


@rating.route('/', methods=["PUT"])
def updateRating():
    incoming = request.json
    story = incoming["story"]
    author = incoming["author"]["username"]
    rating = incoming["rating"]
    resp = query(
        "match (s:Story)<-[r:Rating ]-(a:Author) where a.username = $uname and id(s) = $sid set r.rating = $rating, r.comment = $comment, r.date = $date return r",
        {"uname": author, "sid": story, "rating": rating["rating"], "comment": rating["comment"],
         "date": rating["date"]})
    return Response(json.dumps(resp), content_type='application/json')


@rating.route('/', methods=["DELETE"])
def deleteRating():
    incoming = request.json
    story = incoming["story"]
    author = incoming["author"]["username"]
    resp = query(
        "match (s:Story)<-[r:Rating ]-(a:Author) where a.username = $uname and id(s) = $sid delete r",
        {"uname": author, "sid": story})
    return Response(json.dumps(resp), content_type='application/json')
11094b8bd229f65dea928b6ec8f190db607367ba | 111 | py | Python | src/ceploy/__init__.py | qadahtm/cloud-deployer | b4ba3b594bccaa7e4c563b3adca6fc43673f1a6c | [
"Apache-2.0"
] | null | null | null | src/ceploy/__init__.py | qadahtm/cloud-deployer | b4ba3b594bccaa7e4c563b3adca6fc43673f1a6c | [
"Apache-2.0"
] | null | null | null | src/ceploy/__init__.py | qadahtm/cloud-deployer | b4ba3b594bccaa7e4c563b3adca6fc43673f1a6c | [
"Apache-2.0"
] | null | null | null | from .constants import Provider
from .cloud import Cloud
from .cloud import VmInstance
from .utils import Utils
1114a1441ff03a2c7e9113d1a53e9fac39bc1493 | 3,609 | py | Python | tfrecords/write_concentrations.py | Aremaki/MscProjectNMR | 5bb8fb129d5fe326aa73b56cb7c5b01a17aebb0d | [
"MIT"
] | null | null | null | tfrecords/write_concentrations.py | Aremaki/MscProjectNMR | 5bb8fb129d5fe326aa73b56cb7c5b01a17aebb0d | [
"MIT"
] | null | null | null | tfrecords/write_concentrations.py | Aremaki/MscProjectNMR | 5bb8fb129d5fe326aa73b56cb7c5b01a17aebb0d | [
"MIT"
] | 1 | 2021-07-28T11:18:00.000Z | 2021-07-28T11:18:00.000Z | import tensorflow as tf
from tfrecords.write import _bytes_feature, serialize_array
def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))


def serialize_example_concentrations(X, Y):
    """
    Creates a tf.train.Example message ready to be written to a file.
    """
    # Create a dictionary mapping the feature name to the tf.train.Example-compatible
    # data type.
    feature = {
        'X': _bytes_feature(serialize_array(X)),
        'Y': _bytes_feature(serialize_array(Y)),
    }
    # Create a Features message using tf.train.Example.
    example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
    return example_proto.SerializeToString()
def serialize_example_concentrations_single(X, y):
    """
    Creates a tf.train.Example message ready to be written to a file.
    """
    # Create a dictionary mapping the feature name to the tf.train.Example-compatible
    # data type.
    feature = {
        'X': _bytes_feature(serialize_array(X)),
        'y': _float_feature(y),
    }
    # Create a Features message using tf.train.Example.
    example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
    return example_proto.SerializeToString()


def tf_serialize_example_concentrations(X, Y):
    tf_string = tf.py_function(serialize_example_concentrations, (X, Y), tf.string)
    return tf.reshape(tf_string, ())  # The result is a scalar


def tf_serialize_example_concentrations_single(X, y):
    tf_string = tf.py_function(serialize_example_concentrations_single, (X, y), tf.string)
    return tf.reshape(tf_string, ())  # The result is a scalar
def write_tfrecords_concentrations(path, dataset=None, size=1000, number=None):
"""
:param path: Directory the tfrecord files are written into
:param dataset: tf.data object containing the examples (X: array(tf.float32), Y: array(tf.float32))
:param size: Number of examples in each tfrecord file
:param number: Number of files
:return: None; writes the tfrecord files to disk
"""
serialized_dataset = dataset.map(tf_serialize_example_concentrations)
if number:
#The number has priority over the size
size = len(serialized_dataset) // number
else:
number = len(serialized_dataset) // size
if len(serialized_dataset) > number * size:  # leftover examples need one extra file
number += 1
for i in range(number):
filename = path + '/data_{}.tfrecord'.format(i)
data_to_write = serialized_dataset.take(size)
serialized_dataset = serialized_dataset.skip(size)
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(data_to_write)
def write_tfrecords_concentrations_single(path, dataset=None, size=1000, number=None):
"""
:param path: Directory the tfrecord files are written into
:param dataset: tf.data object containing the examples (X: array(tf.float32), y: tf.float32)
:param size: Number of examples in each tfrecord file
:param number: Number of files
:return: None; writes the tfrecord files to disk
"""
serialized_dataset = dataset.map(tf_serialize_example_concentrations_single)
if number:
#The number has priority over the size
size = len(serialized_dataset) // number
else:
number = len(serialized_dataset) // size
if len(serialized_dataset) > number * size:  # leftover examples need one extra file
number += 1
for i in range(number):
filename = path + '/data_{}.tfrecord'.format(i)
data_to_write = serialized_dataset.take(size)
serialized_dataset = serialized_dataset.skip(size)
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(data_to_write) | 36.09 | 103 | 0.700748 | 470 | 3,609 | 5.208511 | 0.185106 | 0.097222 | 0.098039 | 0.042484 | 0.883987 | 0.867647 | 0.852941 | 0.840686 | 0.823529 | 0.823529 | 0 | 0.006925 | 0.199778 | 3,609 | 100 | 104 | 36.09 | 0.84072 | 0.281518 | 0 | 0.653846 | 0 | 0 | 0.01523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134615 | false | 0 | 0.038462 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
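The shard-count arithmetic in `write_tfrecords_concentrations` above (derive `size` from `number`, or `number` from `size`, then round `number` up so leftover examples still get a final, smaller file) can be sketched without TensorFlow. `shard_counts` is a hypothetical helper written here for illustration; it is not part of the original module.

```python
def shard_counts(total, size=1000, number=None):
    """Return (size, number) of tfrecord shards for `total` examples.

    `number`, when given, takes priority and `size` is derived from it;
    otherwise `number` is derived from `size`. Either way, `number` is
    rounded up so leftover examples land in a final, smaller shard.
    """
    if number:
        size = total // number
    else:
        number = total // size
    if total > number * size:  # leftover examples need one extra file
        number += 1
    return size, number


print(shard_counts(2500, size=1000))  # -> (1000, 3): two full shards plus one of 500
print(shard_counts(2500, number=5))   # -> (500, 5)
```

With the rounded-up `number`, the `take(size)` / `skip(size)` loop above consumes the whole dataset: the last file simply holds whatever remains.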
115792ab6602036bb26ac8846eb27f23ccfb1ffe | 100 | py | Python | iniscrapec/modules/__init__.py | riccardopaltrinieri/inipec-scraper | 70ca30cf543553a6296d97046b37990283ef3f05 | [
"MIT"
] | null | null | null | iniscrapec/modules/__init__.py | riccardopaltrinieri/inipec-scraper | 70ca30cf543553a6296d97046b37990283ef3f05 | [
"MIT"
] | null | null | null | iniscrapec/modules/__init__.py | riccardopaltrinieri/inipec-scraper | 70ca30cf543553a6296d97046b37990283ef3f05 | [
"MIT"
] | null | null | null | from modules.dao import Dao
from modules.gui import RootWindow
from modules.scraper import find_pec
| 25 | 36 | 0.85 | 16 | 100 | 5.25 | 0.5625 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 100 | 3 | 37 | 33.333333 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
11590668af011ae6da6438ac162f9072248fb965 | 128 | py | Python | articles/admin.py | Uncensored-Developer/django-elastic-drf-example | bddd5bc2c869425eef4c940228e7a8a122aa5500 | [
"MIT"
] | 2 | 2020-02-23T11:17:39.000Z | 2021-01-11T13:20:47.000Z | articles/admin.py | Uncensored-Developer/django-elastic-drf-example | bddd5bc2c869425eef4c940228e7a8a122aa5500 | [
"MIT"
] | 2 | 2019-12-05T14:03:53.000Z | 2019-12-05T14:03:53.000Z | articles/admin.py | Uncensored-Developer/django-elastic-drf-example | bddd5bc2c869425eef4c940228e7a8a122aa5500 | [
"MIT"
] | null | null | null | from django.contrib import admin
from articles import models as articles_models
admin.site.register(articles_models.Article)
| 18.285714 | 46 | 0.84375 | 18 | 128 | 5.888889 | 0.611111 | 0.264151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 128 | 6 | 47 | 21.333333 | 0.929825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feca2ef199ff9760ea084933d9527e793a07a090 | 187 | py | Python | sleekxmpp/plugins/xep_0065/__init__.py | E-Tahta/sleekxmpp | ed067c9412835c5fe44bf203936262bcec09ced4 | [
"BSD-3-Clause"
] | 499 | 2015-01-04T21:45:16.000Z | 2022-02-14T13:04:08.000Z | sleekxmpp/plugins/xep_0065/__init__.py | numanturle/SleekXMPP | 1aeefd88accf45947c6376e9fac3abae9cbba8aa | [
"BSD-3-Clause"
] | 159 | 2015-01-02T19:09:47.000Z | 2020-02-12T08:29:54.000Z | sleekxmpp/plugins/xep_0065/__init__.py | numanturle/SleekXMPP | 1aeefd88accf45947c6376e9fac3abae9cbba8aa | [
"BSD-3-Clause"
] | 209 | 2015-01-07T16:23:16.000Z | 2022-01-26T13:02:20.000Z | from sleekxmpp.plugins.base import register_plugin
from sleekxmpp.plugins.xep_0065.stanza import Socks5
from sleekxmpp.plugins.xep_0065.proxy import XEP_0065
register_plugin(XEP_0065)
| 23.375 | 53 | 0.860963 | 28 | 187 | 5.535714 | 0.428571 | 0.180645 | 0.387097 | 0.296774 | 0.348387 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099415 | 0.085562 | 187 | 7 | 54 | 26.714286 | 0.807018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3a50f373708e2afd1a214a3c3f9a885b9c4b301e | 18 | py | Python | model_zoo/east/__init__.py | saicoco/gluon-east | 9597bf4fe20a971940fbd5e72c221040ecacb5b7 | [
"MIT"
] | 2 | 2019-01-05T02:40:06.000Z | 2019-03-20T18:00:05.000Z | model_zoo/east/__init__.py | saicoco/gluon-east | 9597bf4fe20a971940fbd5e72c221040ecacb5b7 | [
"MIT"
] | null | null | null | model_zoo/east/__init__.py | saicoco/gluon-east | 9597bf4fe20a971940fbd5e72c221040ecacb5b7 | [
"MIT"
] | null | null | null | from east import * | 18 | 18 | 0.777778 | 3 | 18 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28a56e0f50d127a53e0210503144a6c51bab59ae | 37 | py | Python | cffi_ext/__init__.py | gwangyi/cffi_ext | 10be829f94065a80d5db2912096d82aafb08ccd2 | [
"MIT"
] | null | null | null | cffi_ext/__init__.py | gwangyi/cffi_ext | 10be829f94065a80d5db2912096d82aafb08ccd2 | [
"MIT"
] | null | null | null | cffi_ext/__init__.py | gwangyi/cffi_ext | 10be829f94065a80d5db2912096d82aafb08ccd2 | [
"MIT"
] | null | null | null | from .extractor import cdef_extract
| 12.333333 | 35 | 0.837838 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 2 | 36 | 18.5 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28b020f4b93b4bc09c260e143089f4a3c6a84a95 | 59 | py | Python | __init__.py | vedant-jad99/Snipping-Tool | 312daf66c5e594e2cb139e1f5944e842801d42bf | [
"MIT"
] | 1 | 2021-02-11T14:54:42.000Z | 2021-02-11T14:54:42.000Z | __init__.py | vedant-jad99/Snipper | 312daf66c5e594e2cb139e1f5944e842801d42bf | [
"MIT"
] | null | null | null | __init__.py | vedant-jad99/Snipper | 312daf66c5e594e2cb139e1f5944e842801d42bf | [
"MIT"
] | null | null | null | import tools.tools
import tools.toolTip
import main_app.app | 19.666667 | 20 | 0.864407 | 10 | 59 | 5 | 0.5 | 0.44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 59 | 3 | 21 | 19.666667 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28be229fb7cb93e1ecdc1c71cf6480fe93461b80 | 42 | py | Python | ome/dumping/__init__.py | aebrahim/ome | f3ba928e2df41bffb91ac921693ca3a1d73ce956 | [
"MIT"
] | null | null | null | ome/dumping/__init__.py | aebrahim/ome | f3ba928e2df41bffb91ac921693ca3a1d73ce956 | [
"MIT"
] | null | null | null | ome/dumping/__init__.py | aebrahim/ome | f3ba928e2df41bffb91ac921693ca3a1d73ce956 | [
"MIT"
] | null | null | null | from model_dumping.dump import dump_model
| 21 | 41 | 0.880952 | 7 | 42 | 5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e90fdd364300ac40566f0a840b6babf3ada001fa | 117 | py | Python | Project/speak_auth/app/api/__init__.py | joexu01/speak_auth | 05b2a10761463d45d926ff0d8b865f9f6ab86757 | [
"MIT"
] | null | null | null | Project/speak_auth/app/api/__init__.py | joexu01/speak_auth | 05b2a10761463d45d926ff0d8b865f9f6ab86757 | [
"MIT"
] | 3 | 2020-03-24T17:15:01.000Z | 2020-03-31T04:38:24.000Z | Project/speak_auth/app/api/__init__.py | joexu01/speak_auth | 05b2a10761463d45d926ff0d8b865f9f6ab86757 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from flask import Blueprint
api = Blueprint('api', __name__)
from . import errors, person
| 14.625 | 32 | 0.675214 | 15 | 117 | 5 | 0.733333 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.179487 | 117 | 7 | 33 | 16.714286 | 0.770833 | 0.179487 | 0 | 0 | 0 | 0 | 0.031915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
3a7049c5ca7fee8be06e46b2304d641d6a046c69 | 12,428 | py | Python | test/crude_algorithm.py | Danderson123/Masters_Project | ef9e2fbadda3626a244dfdae42729bd007752d45 | [
"CC0-1.0"
] | null | null | null | test/crude_algorithm.py | Danderson123/Masters_Project | ef9e2fbadda3626a244dfdae42729bd007752d45 | [
"CC0-1.0"
] | null | null | null | test/crude_algorithm.py | Danderson123/Masters_Project | ef9e2fbadda3626a244dfdae42729bd007752d45 | [
"CC0-1.0"
] | 1 | 2020-11-18T12:14:40.000Z | 2020-11-18T12:14:40.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Initial testing script for crude_annotate.py
"""
string = "TGATGCTGAGACTGACGCACTTGTTGATGCTGAAGCTGACGCACTGGTTGATGCGGATTCAGAAGCTGATGTGCTTGCTGAGGCTGACGCACTCGTTGATGCTGAGACTGACGCACTTGTTGACGCTGAGGCTGAGGCACTTGTCGACGCTGAAGCTGACGCACTCGTTGATGCTGAAGCCGAGGCACTGGTTGACGCTGAAGCTGAAGCACTGGTTGACGCTGAGGCTGACACACTCGTTGATGCTGAGGCTGACGCACTGGTACTTGCTGAAGCCGAGGCGCTAGTGCTTGCTG"
genes = []
start_indexes = []
end_indexes = []
for x in range(len(string)):
gene = ''
index = 0
if string[x:x+3] == "ATG":
start = x #start is the index of ATG
gene = string[x:x+3] #gene is ATG
cut = string[x:] #ATG to end of string
end = 0
for y in range(len(cut)): # for character in the cut
if not cut[y:y+3] == "CTA": #if the character:character + 3 not CTA
gene += cut[y:y+3] #append the codon to the gene
end = y+2 #new y becomes y + 3
print(end)
else:
genes.append(gene + "CTA")
start_indexes.append(start+1)
end_of_cut = start + y
end_indexes.append(y + 7)
continue
genes = []
start_indexes = []
end_indexes = []
cuts = []
strands = []
for x in range(len(string)):
index = x
if string[index:index+3] == "ATG":
gene = ''
end_codon = ["TAG", "TAA", "TGA"]
gene_strand = "+"
start = index #start is the index of ATG
start_indexes.append(start + 1)
cut = string[index:] #ATG to end of string
cuts.append(cut)
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if any(cut[y:y+3] == motif for motif in end_codon):
genes.append(gene)
strands.append(gene_strand)
end_indexes.append(start + (y+3))
elif any(string[index:index+3] == rev_stop for rev_stop in ["CTA", "TTA", "TCA"]):
gene = ''
gene_strand = "-"
start = index #start is the index of ATG
start_indexes.append(start + 1)
cut = string[index:] #ATG to end of string
cuts.append(cut)
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if cut[y:y+3] == "CAT":
genes.append(gene)
strands.append(gene_strand)
end_indexes.append(start + (y+3))
for x in tqdm(range(len(split))):
locus_number = 0
title = split[x].split(" ")[0]
fasta_split = split[x].split("\n")
fasta_header = ">" + fasta_split[0]
sequence = "".join(fasta_split[1:])
sequences_all_regions.append(fasta_header)
sequences_all_regions.append(sequence)
#region_titles.append("##sequence-region " + title + " 1 " + str(len(sequence)))
annotations = set()
for regex in regexList:
if re.search(regex, sequence):
some_list= re.finditer(regex,sequence)
for result in some_list:
if (result.end() - result.start()) % 3 == 0:
#codon = regex.split("(")[0]
start_codon = regex[regex.find("=")+1:regex.find(")")]
print(start_codon)
if start_codon == "ATG":
gene_strand = "+"
elif start_codon == "CTA" or start_codon == "TTA" or start_codon == "TCA":
gene_strand = "-"
else:
gene_strand = ""
annotation = title + "\tCRUDE\tCDS\t" + str(result.start() - 2) + "\t" + str(result.end() + 3) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
annotation_all_regions += ["##sequence-region " + title + " 1 " + str(len(sequence))]
annotation_all_regions += sorted(list(annotations), key=lambda x: x.split('\t')[3])
for x in tqdm(range(len(split))):
locus_number = 0
title = split[x].split(" ")[0]
fasta_split = split[x].split("\n")
fasta_header = ">" + fasta_split[0]
sequence = "".join(fasta_split[1:])
sequences_all_regions.append(fasta_header)
sequences_all_regions.append(sequence)
#region_titles.append("##sequence-region " + title + " 1 " + str(len(sequence)))
annotations = set()
for base in range(len(sequence)):
index = base
if string[index:index+3] == "ATG":
gene = ''
end_codon = ["TAG", "TAA", "TGA"]
gene_strand = "+"
start = index #start is the index of ATG
cut = string[index:] #ATG to end of string
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if any(cut[y:y+3] == motif for motif in end_codon):
annotation = title + "\tCRUDE\tCDS\t" + str(start + 1) + "\t" + str(start + (y+3)) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
elif any(string[index:index+3] == rev_stop for rev_stop in ["CTA", "TTA", "TCA"]):
gene = ''
gene_strand = "-"
start = index #start is the index of ATG
cut = string[index:] #ATG to end of string
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if cut[y:y+3] == "CAT":
annotation = title + "\tCRUDE\tCDS\t" + str(start + 1) + "\t" + str(start + (y+3)) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
annotation_all_regions += ["##sequence-region " + title + " 1 " + str(len(sequence))]
annotation_all_regions += sorted(list(annotations), key=lambda x: x.split('\t')[3])
region_titles = []
annotation_all_regions = []
sequences_all_regions = ["##FASTA"]
import os
import re
import time
from tqdm import tqdm
regex1 = r"(?<=ATG)(.*)(?=TAG)"
regex2 = r"(?<=ATG)(.*)(?=TAA)"
regex3 = r"(?<=ATG)(.*)(?=TGA)"
regex4 = r"(?<=CTA)(.*)(?=CAT)"
regex5 = r"(?<=TTA)(.*)(?=CAT)"
regex6 = r"(?<=TCA)(.*)(?=CAT)"
stop1 = r"(?<=ATG)(.*)TAG"
stop2 = r"(?<=ATG)(.*)TAA"
stop3 = r"(?<=ATG)(.*)TGA"
stop4 = r"(?<=CTA)(.*)CAT"
stop5 = r"(?<=TTA)(.*)CAT"
stop6 = r"(?<=TCA)(.*)CAT"
regexList = [regex1, regex2, regex3, regex4, regex5, regex6]
stopList = [stop1, stop2, stop3, stop4, stop5, stop6]
for x in tqdm(range(len(split))):
locus_number = 0
title = split[x].split(" ")[0]
fasta_split = split[x].split("\n")
fasta_header = ">" + fasta_split[0]
sequence = "".join(fasta_split[1:])
sequences_all_regions.append(fasta_header)
sequences_all_regions.append(sequence)
#region_titles.append("##sequence-region " + title + " 1 " + str(len(sequence)))
annotations = set()
for regex in regexList:
if re.search(regex, sequence):
start_list = re.finditer(regex,sequence)
for stop_regex in stopList:
if re.search(stop_regex, sequence):
stop_codon_list = re.finditer(stop_regex,sequence)
for result in start_list:
for stop_result in stop_codon_list:
if (stop_result.end() - result.start()) % 3 == 0:
codon = regex.split("(")[0]
start_codon = regex[regex.find("=")+1:regex.find(")")]
print(start_codon)
if start_codon == "ATG":
gene_strand = "+"
elif start_codon == "CTA" or start_codon == "TTA" or start_codon == "TCA":
gene_strand = "-"
else:
gene_strand = ""
annotation = title + "\tCRUDE\tCDS\t" + str(result.start() - 2) + "\t" + str(stop_result.end() + 3) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
annotation_all_regions += ["##sequence-region " + title + " 1 " + str(len(sequence))]
annotation_all_regions += sorted(list(annotations), key=lambda x: x.split('\t')[3])
for x in tqdm(range(len(split))):
locus_number = 0
title = split[x].split(" ")[0]
fasta_split = split[x].split("\n")
fasta_header = ">" + fasta_split[0]
sequence = "".join(fasta_split[1:])
sequences_all_regions.append(fasta_header)
sequences_all_regions.append(sequence)
#region_titles.append("##sequence-region " + title + " 1 " + str(len(sequence)))
annotations = set()
for base in range(len(sequence)):
index = base
if sequence[index:index+3] == "ATG":
gene = ''
end_codon = ["TAG", "TAA", "TGA"]
gene_strand = "+"
start = index #start is the index of ATG
cut = sequence[index:] #ATG to end of string
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if any(cut[y:y+3] == motif for motif in end_codon):
annotation = title + "\tCRUDE\tCDS\t" + str(start + 1) + "\t" + str(start + (y+3)) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
else:
pass
elif any(sequence[index:index+3] == rev_stop for rev_stop in ["CTA", "TTA", "TCA"]):
gene = ''
gene_strand = "-"
start = index #start is the index of ATG
cut = sequence[index:] #ATG to end of string
for y in range(len(cut)): # for character in the cut
if (y + 3) % 3 ==0:
gene+=cut[y:y+3]
if cut[y:y+3] == "CAT":
annotation = title + "\tCRUDE\tCDS\t" + str(start + 1) + "\t" + str(start + (y+3)) + "\t.\t" + gene_strand + "\t0\tID=" + title.split(".")[0] + "_" + str(locus_number) + ";product=putative_protein_region"
annotations.add(annotation)
locus_number += 1
else:
pass
annotation_all_regions += ["##sequence-region " + title + " 1 " + str(len(sequence))]
annotation_all_regions += sorted(list(annotations), key=lambda x: x.split('\t')[3])
#gff_file = "\n".join(region_titles + annotation_all_regions + sequences_all_regions)
gff_file = "\n".join(list(annotation_all_regions))
fasta_out = "\n".join(list(sequences_all_regions))
total_genes += annotation_all_regions
#outfile_name = os.path.basename(header).split(".fna")[0] + ".gff"
outfile_name = os.path.basename(header).split(".fna")[0]
outfile_fasta = open("crudely_annotated/" + outfile_name + ".fna", "w")
outfile_fasta.write(fasta_out)
outfile_fasta.close()
outfile_gff = open("crudely_annotated/" + outfile_name + ".gff", "w")
outfile_gff.write(gff_file)
outfile_gff.close()
end_time = time.time()
print(str(end_time - start_time) + " seconds" )
print(len(total_genes))
| 42.707904 | 307 | 0.510058 | 1,434 | 12,428 | 4.281729 | 0.097629 | 0.008795 | 0.011401 | 0.013681 | 0.785342 | 0.753094 | 0.73355 | 0.73355 | 0.716938 | 0.703909 | 0 | 0.017791 | 0.344223 | 12,428 | 290 | 308 | 42.855172 | 0.735583 | 0.093659 | 0 | 0.721739 | 0 | 0 | 0.10255 | 0.043517 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.008696 | 0.008696 | 0 | 0.008696 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
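The nested codon-scanning loops in crude_algorithm.py above boil down to a simple open-reading-frame scan: from each ATG, read in-frame codons until a stop codon appears. `find_orfs` below is a hypothetical, forward-strand-only simplification of that idea; it does not reproduce the script's reverse-strand handling or GFF output.

```python
def find_orfs(seq, stops=("TAG", "TAA", "TGA")):
    """Return (start, end, orf) triples for forward-strand ORFs.

    Coordinates are 0-based; `end` is exclusive and includes the stop
    codon. The stop search stays in the reading frame set by the ATG.
    """
    orfs = []
    for i in range(len(seq) - 2):
        if seq[i:i + 3] != "ATG":
            continue
        for j in range(i + 3, len(seq) - 2, 3):  # step by 3: stay in frame
            if seq[j:j + 3] in stops:
                orfs.append((i, j + 3, seq[i:j + 3]))
                break
    return orfs


print(find_orfs("CCATGAAATAGGG"))  # -> [(2, 11, 'ATGAAATAG')]
```

Like the script, this reports every ATG that reaches an in-frame stop, so nested ORFs sharing a stop codon are each reported once.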
3a9cc633da6f8d12e0eb7ec8e7212a26c42f5288 | 93 | py | Python | ci/test_fs.py | lukemassa/readfs | a83a972f82333c584ac345ac6730a72e350fe653 | [
"MIT"
] | null | null | null | ci/test_fs.py | lukemassa/readfs | a83a972f82333c584ac345ac6730a72e350fe653 | [
"MIT"
] | null | null | null | ci/test_fs.py | lukemassa/readfs | a83a972f82333c584ac345ac6730a72e350fe653 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import unittest
import fs
class TestBlock(unittest.TestCase):
pass
| 10.333333 | 35 | 0.741935 | 12 | 93 | 5.75 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0.16129 | 93 | 8 | 36 | 11.625 | 0.871795 | 0.182796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3ab4d8712bf58460b2b07147c59aa7da2ab57df4 | 46 | py | Python | atomate/feff/fireworks/__init__.py | Zhuoying/atomate | 067023f0f740d3abac47b7ae7743c1c31eff8a06 | [
"BSD-3-Clause-LBNL"
] | null | null | null | atomate/feff/fireworks/__init__.py | Zhuoying/atomate | 067023f0f740d3abac47b7ae7743c1c31eff8a06 | [
"BSD-3-Clause-LBNL"
] | null | null | null | atomate/feff/fireworks/__init__.py | Zhuoying/atomate | 067023f0f740d3abac47b7ae7743c1c31eff8a06 | [
"BSD-3-Clause-LBNL"
] | null | null | null | from .core import EELSFW, XASFW, EXAFSPathsFW
| 23 | 45 | 0.804348 | 6 | 46 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 46 | 1 | 46 | 46 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ae4344ffb1b578b19a7819f261c44e8516b8edf | 417 | py | Python | trac/Scripts/check-trac-subtickets-script.py | thinkbase/PortableTrac | 9ea0210f6b88f135ef73f370b48127af0495b2d7 | [
"BSD-3-Clause"
] | 2 | 2015-08-06T04:19:21.000Z | 2020-04-29T23:52:10.000Z | trac/Scripts/check-trac-subtickets-script.py | thinkbase/PortableTrac | 9ea0210f6b88f135ef73f370b48127af0495b2d7 | [
"BSD-3-Clause"
] | null | null | null | trac/Scripts/check-trac-subtickets-script.py | thinkbase/PortableTrac | 9ea0210f6b88f135ef73f370b48127af0495b2d7 | [
"BSD-3-Clause"
] | null | null | null | #!"E:\PortableTrac\Portable Python 2.7.3.1\App\python.exe"
# EASY-INSTALL-ENTRY-SCRIPT: 'tracsubticketsplugin==0.2.0.dev-20121107','console_scripts','check-trac-subtickets'
__requires__ = 'tracsubticketsplugin==0.2.0.dev-20121107'
import sys
from pkg_resources import load_entry_point
sys.exit(
load_entry_point('tracsubticketsplugin==0.2.0.dev-20121107', 'console_scripts', 'check-trac-subtickets')()
)
| 41.7 | 114 | 0.76259 | 58 | 417 | 5.293103 | 0.551724 | 0.205212 | 0.214984 | 0.224756 | 0.547231 | 0.547231 | 0.436482 | 0.436482 | 0.436482 | 0.436482 | 0 | 0.096354 | 0.079137 | 417 | 9 | 115 | 46.333333 | 0.703125 | 0.405276 | 0 | 0 | 0 | 0 | 0.489451 | 0.42616 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c96d70ee433c05888567a1181dbbfcb9eb83e88d | 92 | py | Python | boa3_test/test_sc/native_test/stdlib/AtoiMismatchedType.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/native_test/stdlib/AtoiMismatchedType.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/native_test/stdlib/AtoiMismatchedType.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin.nativecontract.stdlib import StdLib
def main():
StdLib.atoi(10, 10)
| 15.333333 | 53 | 0.73913 | 13 | 92 | 5.230769 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064103 | 0.152174 | 92 | 5 | 54 | 18.4 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c976e3ba1fd05cc2e3f0bca0ed40f7557b4defcc | 552 | py | Python | src/api/domain/dashboard/GetDataOperationJobExecutionWidget/GetDataOperationJobExecutionWidgetQuery.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 14 | 2020-12-19T15:06:13.000Z | 2022-01-12T19:52:17.000Z | src/api/domain/dashboard/GetDataOperationJobExecutionWidget/GetDataOperationJobExecutionWidgetQuery.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 43 | 2021-01-06T22:05:22.000Z | 2022-03-10T10:30:30.000Z | src/api/domain/dashboard/GetDataOperationJobExecutionWidget/GetDataOperationJobExecutionWidgetQuery.py | PythonDataIntegrator/pythondataintegrator | 6167778c36c2295e36199ac0d4d256a4a0c28d7a | [
"MIT"
] | 4 | 2020-12-18T23:10:09.000Z | 2021-04-02T13:03:12.000Z | from dataclasses import dataclass
from infrastructure.cqrs.IQuery import IQuery
from domain.dashboard.GetDataOperationJobExecutionWidget.GetDataOperationJobExecutionWidgetRequest import GetDataOperationJobExecutionWidgetRequest
from domain.dashboard.GetDataOperationJobExecutionWidget.GetDataOperationJobExecutionWidgetResponse import GetDataOperationJobExecutionWidgetResponse
@dataclass
class GetDataOperationJobExecutionWidgetQuery(IQuery[GetDataOperationJobExecutionWidgetResponse]):
request: GetDataOperationJobExecutionWidgetRequest = None
| 55.2 | 149 | 0.918478 | 32 | 552 | 15.84375 | 0.5 | 0.039448 | 0.074951 | 0.209073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052536 | 552 | 9 | 150 | 61.333333 | 0.969407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.857143 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c97e9d379697b646ae57fa85967a85d1a7568e53 | 194 | py | Python | zalgolib/__init__.py | jivanyatra/zalgolibpy | 7f0d054d370e90c031c1e5d6637503cb1ad38e6f | [
"MIT"
] | 1 | 2022-03-06T16:43:13.000Z | 2022-03-06T16:43:13.000Z | zalgolib/__init__.py | jivanyatra/zalgolib | 7f0d054d370e90c031c1e5d6637503cb1ad38e6f | [
"MIT"
] | null | null | null | zalgolib/__init__.py | jivanyatra/zalgolib | 7f0d054d370e90c031c1e5d6637503cb1ad38e6f | [
"MIT"
] | null | null | null | from .zalgolib import enzalgofy, dezalgofy
from .diacritics import DOWN_MARKS, DOWN_LEN
from .diacritics import MID_MARKS, MID_LEN
from .diacritics import UP_MARKS, UP_LEN
__version__ = "0.2.0" | 32.333333 | 44 | 0.814433 | 30 | 194 | 4.933333 | 0.466667 | 0.283784 | 0.405405 | 0.310811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.118557 | 194 | 6 | 45 | 32.333333 | 0.847953 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9840bc19bbaeea29a9cacb76f6370359d37d1d1 | 23 | py | Python | fairways/io/__init__.py | dan-win/fairways_py | 771623c6f9ec40e8016b5cebb7951613d01e31f7 | [
"Apache-2.0"
] | 103 | 2015-02-12T20:21:53.000Z | 2022-03-29T15:30:47.000Z | cyvlfeat/generic/__init__.py | samousavizade/cyvlfeat | 03297e4d1a6924920a7cf2df9d558c93a8445b9f | [
"BSD-2-Clause"
] | 49 | 2015-05-05T03:48:37.000Z | 2022-03-09T13:54:24.000Z | cyvlfeat/generic/__init__.py | samousavizade/cyvlfeat | 03297e4d1a6924920a7cf2df9d558c93a8445b9f | [
"BSD-2-Clause"
] | 68 | 2015-02-11T10:33:11.000Z | 2022-02-08T09:26:34.000Z | from .generic import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a384e1102599eebadaa51b8e403d86000021b3dc | 15,510 | py | Python | cca_zoo/test/test_deepmodels.py | jameschapman19/cca_zoo | 45c38f0164a324e8fcc33a480814842e747d86c3 | [
"MIT"
] | 96 | 2020-11-10T11:16:55.000Z | 2022-03-31T08:34:59.000Z | cca_zoo/test/test_deepmodels.py | jameschapman19/cca_zoo | 45c38f0164a324e8fcc33a480814842e747d86c3 | [
"MIT"
] | 76 | 2020-11-25T00:47:43.000Z | 2022-03-30T13:58:45.000Z | cca_zoo/test/test_deepmodels.py | jameschapman19/cca_zoo | 45c38f0164a324e8fcc33a480814842e747d86c3 | [
"MIT"
] | 24 | 2020-10-26T06:12:37.000Z | 2022-03-03T08:00:00.000Z | import numpy as np
from sklearn.utils.validation import check_random_state
from torch import optim, manual_seed
from torch.utils.data import Subset
from cca_zoo import data
from cca_zoo.data import Noisy_MNIST_Dataset
from cca_zoo.deepmodels import DCCA, DCCAE, DVCCA, DCCA_NOI, DTCCA, SplitAE, DeepWrapper
from cca_zoo.deepmodels import objectives, architectures
from cca_zoo.models import CCA
manual_seed(0)
rng = check_random_state(0)
X = rng.rand(200, 10)
Y = rng.rand(200, 10)
Z = rng.rand(200, 10)
X_conv = rng.rand(100, 1, 16, 16)
Y_conv = rng.rand(100, 1, 16, 16)
train_dataset = data.CCA_Dataset([X, Y])
def test_input_types():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
# DCCA
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.CCA,
)
dcca_model = DeepWrapper(dcca_model, device=device)
dcca_model.fit(train_dataset, epochs=3)
dcca_model.fit(train_dataset, val_dataset=train_dataset, epochs=3)
dcca_model.fit((X, Y), val_dataset=(X, Y), epochs=3)
dcca_model.fit((X, Y), val_split=0.2, epochs=3)
def tutorial_test():
# Load MNIST Data
N = 500
latent_dims = 2
dataset = Noisy_MNIST_Dataset(mnist_type="FashionMNIST", train=True)
ids = np.arange(min(2 * N, len(dataset)))
np.random.shuffle(ids)
train_ids, val_ids = np.array_split(ids, 2)
val_dataset = Subset(dataset, val_ids)
train_dataset = Subset(dataset, train_ids)
test_dataset = Noisy_MNIST_Dataset(mnist_type="FashionMNIST", train=False)
test_ids = np.arange(min(N, len(test_dataset)))
np.random.shuffle(test_ids)
test_dataset = Subset(test_dataset, test_ids)
print("DCCA")
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=784)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=784)
dcca_model = DCCA(latent_dims=latent_dims, encoders=[encoder_1, encoder_2])
dcca_model = DeepWrapper(dcca_model)
dcca_model.fit(train_dataset, val_dataset=val_dataset, epochs=2)
dcca_results = np.stack(
(dcca_model.score(train_dataset), dcca_model.correlations(test_dataset)[0, 1])
)
def test_large_p():
large_p = 256
X = rng.rand(2000, large_p)
Y = rng.rand(2000, large_p)
latent_dims = 32
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=large_p)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=large_p)
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.MCCA,
eps=1e-3,
)
optimizer = optim.Adam(dcca_model.parameters(), lr=1e-4)
dcca_model = DeepWrapper(dcca_model, device=device, optimizer=optimizer)
dcca_model.fit((X, Y), epochs=100)
def test_DCCA_methods_cpu():
latent_dims = 4
cca_model = CCA(latent_dims=latent_dims).fit((X, Y))
device = "cpu"
epochs = 100
# DCCA
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.CCA,
)
optimizer = optim.SGD(dcca_model.parameters(), lr=1e-2)
dcca_model = DeepWrapper(dcca_model, device=device, optimizer=optimizer)
dcca_model.fit((X, Y), epochs=epochs)
assert (
np.testing.assert_array_less(
cca_model.score((X, Y)).sum(), dcca_model.score((X, Y)).sum()
)
is None
)
# DGCCA
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
dgcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.GCCA,
)
optimizer = optim.SGD(dgcca_model.parameters(), lr=1e-2)
dgcca_model = DeepWrapper(dgcca_model, device=device, optimizer=optimizer)
dgcca_model.fit((X, Y), epochs=epochs)
assert (
np.testing.assert_array_less(
cca_model.score((X, Y)).sum(), dgcca_model.score((X, Y)).sum()
)
is None
)
# DMCCA
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
dmcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.MCCA,
)
optimizer = optim.SGD(dmcca_model.parameters(), lr=1e-2)
dmcca_model = DeepWrapper(dmcca_model, device=device, optimizer=optimizer)
dmcca_model.fit((X, Y), epochs=epochs)
assert (
np.testing.assert_array_less(
cca_model.score((X, Y)).sum(), dmcca_model.score((X, Y)).sum()
)
is None
)
# DCCA_NOI
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
dcca_noi_model = DCCA_NOI(
latent_dims, X.shape[0], encoders=[encoder_1, encoder_2], rho=0
)
optimizer = optim.Adam(dcca_noi_model.parameters(), lr=1e-2)
dcca_noi_model = DeepWrapper(dcca_noi_model, device=device, optimizer=optimizer)
dcca_noi_model.fit((X, Y), epochs=epochs)
assert (
np.testing.assert_array_less(
cca_model.score((X, Y)).sum(), dcca_noi_model.score((X, Y)).sum()
)
is None
)
def test_DTCCA_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=10, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=10, feature_size=10)
encoder_3 = architectures.Encoder(latent_dims=10, feature_size=10)
# DTCCA
dtcca_model = DTCCA(latent_dims=latent_dims, encoders=[encoder_1, encoder_2])
dtcca_model = DeepWrapper(dtcca_model, device=device)
dtcca_model.fit((X, Y), epochs=20)
# DCCA
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.GCCA,
)
dcca_model = DeepWrapper(dcca_model, device=device)
dcca_model.fit((X, Y), epochs=20)
def test_scheduler():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
# DCCA
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.CCA,
)
optimizer = optim.Adam(dcca_model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 1)
dcca_model = DeepWrapper(
dcca_model, device=device, optimizer=optimizer, scheduler=scheduler
)
dcca_model.fit((X, Y), epochs=20)
def test_DGCCA_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_3 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
# DTCCA
dtcca_model = DTCCA(latent_dims=latent_dims, encoders=[encoder_1, encoder_2])
dtcca_model = DeepWrapper(dtcca_model, device=device)
dtcca_model.fit((X, Y, Z))
# DGCCA
dgcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2, encoder_3],
objective=objectives.GCCA,
)
dgcca_model = DeepWrapper(dgcca_model, device=device)
dgcca_model.fit((X, Y, Z))
# DMCCA
dmcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2, encoder_3],
objective=objectives.MCCA,
)
dmcca_model = DeepWrapper(dmcca_model, device=device)
dmcca_model.fit((X, Y, Z))
def test_DCCAE_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
decoder_1 = architectures.Decoder(latent_dims=latent_dims, feature_size=10)
decoder_2 = architectures.Decoder(latent_dims=latent_dims, feature_size=10)
# DCCAE
dccae_model = DCCAE(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
)
dccae_model = DeepWrapper(dccae_model, device=device)
dccae_model.fit((X, Y), epochs=20)
# SplitAE
splitae_model = SplitAE(
latent_dims=latent_dims, encoder=encoder_1, decoders=[decoder_1, decoder_2]
)
splitae_model = DeepWrapper(splitae_model, device=device)
splitae_model.fit((X, Y), epochs=10)
def test_DCCAEconv_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.CNNEncoder(latent_dims=latent_dims, feature_size=[16, 16])
encoder_2 = architectures.CNNEncoder(latent_dims=latent_dims, feature_size=[16, 16])
decoder_1 = architectures.CNNDecoder(latent_dims=latent_dims, feature_size=[16, 16])
decoder_2 = architectures.CNNDecoder(latent_dims=latent_dims, feature_size=[16, 16])
# DCCAE
dccae_model = DCCAE(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
)
dccae_model = DeepWrapper(dccae_model, device=device)
dccae_model.fit((X_conv, Y_conv))
def test_DVCCA_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
encoder_2 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
decoder_1 = architectures.Decoder(
latent_dims=latent_dims, feature_size=10, norm_output=True
)
decoder_2 = architectures.Decoder(
latent_dims=latent_dims, feature_size=10, norm_output=True
)
# DVCCA
dvcca_model = DVCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
)
dvcca_model = DeepWrapper(dvcca_model, device=device)
dvcca_model.fit((X, Y))
def test_DVCCA_p_methods_cpu():
latent_dims = 2
device = "cpu"
encoder_1 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
encoder_2 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
private_encoder_1 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
private_encoder_2 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
decoder_1 = architectures.Decoder(
latent_dims=2 * latent_dims, feature_size=10, norm_output=True
)
decoder_2 = architectures.Decoder(
latent_dims=2 * latent_dims, feature_size=10, norm_output=True
)
# DVCCA
dvcca_model = DVCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
private_encoders=[private_encoder_1, private_encoder_2],
)
dvcca_model = DeepWrapper(dvcca_model, device=device)
dvcca_model.fit((X, Y))
def test_DCCA_methods_gpu():
latent_dims = 2
device = "cuda"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
# DCCA
dcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.CCA,
)
dcca_model = DeepWrapper(dcca_model, device=device)
dcca_model.fit((X, Y))
# DGCCA
dgcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.GCCA,
)
dgcca_model = DeepWrapper(dgcca_model, device=device)
dgcca_model.fit((X, Y))
# DMCCA
dmcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
objective=objectives.MCCA,
)
dmcca_model = DeepWrapper(dmcca_model, device=device)
dmcca_model.fit((X, Y))
# DCCA_NOI
dcca_noi_model = DCCA_NOI(
latent_dims, X.shape[0], encoders=[encoder_1, encoder_2], rho=0
)
dcca_noi_model = DeepWrapper(dcca_noi_model, device=device)
dcca_noi_model.fit((X, Y))
def test_DGCCA_methods_gpu():
latent_dims = 2
device = "cuda"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_3 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
# DGCCA
dgcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2, encoder_3],
objective=objectives.GCCA,
)
dgcca_model = DeepWrapper(dgcca_model, device=device)
dgcca_model.fit((X, Y, Z))
# DMCCA
dmcca_model = DCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2, encoder_3],
objective=objectives.MCCA,
)
dmcca_model = DeepWrapper(dmcca_model, device=device)
dmcca_model.fit((X, Y, Z))
def test_DCCAE_methods_gpu():
latent_dims = 2
device = "cuda"
encoder_1 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
encoder_2 = architectures.Encoder(latent_dims=latent_dims, feature_size=10)
decoder_1 = architectures.Decoder(latent_dims=latent_dims, feature_size=10)
decoder_2 = architectures.Decoder(latent_dims=latent_dims, feature_size=10)
# DCCAE
dccae_model = DCCAE(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
)
dccae_model = DeepWrapper(dccae_model, device=device)
dccae_model.fit((X, Y))
def test_DVCCA_methods_gpu():
latent_dims = 2
device = "cuda"
encoder_1 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
encoder_2 = architectures.Encoder(
latent_dims=latent_dims, feature_size=10, variational=True
)
decoder_1 = architectures.Decoder(
latent_dims=latent_dims, feature_size=10, norm_output=True
)
decoder_2 = architectures.Decoder(
latent_dims=latent_dims, feature_size=10, norm_output=True
)
# DVCCA
dvcca_model = DVCCA(
latent_dims=latent_dims,
encoders=[encoder_1, encoder_2],
decoders=[decoder_1, decoder_2],
)
dvcca_model = DeepWrapper(dvcca_model, device=device)
dvcca_model.fit((X, Y))
def test_linear():
encoder_1 = architectures.LinearEncoder(latent_dims=1, feature_size=10)
encoder_2 = architectures.LinearEncoder(latent_dims=1, feature_size=10)
dcca_model = DCCA(latent_dims=1, encoders=[encoder_1, encoder_2])
dcca_model = DeepWrapper(dcca_model).fit((X, Y), epochs=35)
cca = CCA().fit((X, Y))
# check linear encoder with SGD matches vanilla linear CCA
assert (
np.testing.assert_array_almost_equal(
cca.score((X, Y)), dcca_model.score((X, Y)), decimal=2
)
is None
)
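Throughout these tests, `assert (np.testing.assert_array_less(...) is None)` relies on NumPy's testing helpers returning `None` on success and raising `AssertionError` on failure, so the outer `assert ... is None` can never fail on its own. A stdlib-only illustration of that raise-or-return-None contract (hypothetical helper, not part of cca_zoo):

```python
def assert_less(a, b):
    # Mirrors np.testing.assert_array_less: returns None on success,
    # raises AssertionError on failure.
    if not a < b:
        raise AssertionError(f"{a} is not strictly less than {b}")

# On success the call evaluates to None, so `is None` is always True.
assert assert_less(1.0, 2.0) is None
```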
| 33.938731 | 88 | 0.692456 | 2,060 | 15,510 | 4.913107 | 0.065534 | 0.170932 | 0.115404 | 0.144255 | 0.855844 | 0.830254 | 0.818002 | 0.78599 | 0.759016 | 0.720186 | 0 | 0.029647 | 0.20187 | 15,510 | 456 | 89 | 34.013158 | 0.787947 | 0.01412 | 0 | 0.56117 | 0 | 0 | 0.004848 | 0 | 0 | 0 | 0 | 0 | 0.026596 | 1 | 0.042553 | false | 0 | 0.023936 | 0 | 0.066489 | 0.00266 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6e6240d05f3b3a50eeaa37d19e7929506c84dd0e | 5,494 | py | Python | veripro/config/veripro.py | Suganyabalasubramanian/veriPRO | 289c2193ee54d3d7fed113b2bbc86300416a05c5 | [
"MIT"
] | null | null | null | veripro/config/veripro.py | Suganyabalasubramanian/veriPRO | 289c2193ee54d3d7fed113b2bbc86300416a05c5 | [
"MIT"
] | null | null | null | veripro/config/veripro.py | Suganyabalasubramanian/veriPRO | 289c2193ee54d3d7fed113b2bbc86300416a05c5 | [
"MIT"
] | null | null | null | from frappe import _
def get_data():
return [
{
"label": _("General"),
"items": [
{
"type": "doctype",
"name": "Checks",
"description": _("Checks"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Check Package",
"description": _("Check Package"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Batch",
"description": _("Batch"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Case",
"description": _("Case"),
"onboard": 1,
},
]
},
{
"label": _("Entry"),
"items": [
{
"type": "doctype",
"name": "Address Check",
"description": _("Address Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Education Check",
"description": _("Education Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Employment",
"description": _("Employment"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Identity Check",
"description": _("Identity Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Criminal Check",
"description": _("Criminal Check"),
"onboard": 1,
},
]
},
{
"label": _("Execution"),
"items": [
{
"type": "doctype",
"name": "Verify Address Check",
"description": _("Verify Address Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Verify Education Check",
"description": _("Verify Education Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Verify Employment Check",
"description": _("Verify Employment Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Verify Identity Check",
"description": _("Verify Identity Check"),
"onboard": 1,
},
{
"type": "doctype",
"name": "Verify Criminal Check",
"description": _("Verify Criminal Check"),
"onboard": 1,
},
]
}
]
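For reference, the shape of the structure `get_data` returns can be exercised without Frappe by stubbing its gettext-style translation helper `_` with an identity function; this is a minimal sketch, not part of the app:

```python
def _(text):
    # Stand-in for frappe's translation helper, which returns the
    # translated string (identity here for illustration).
    return text

sections = [
    {
        "label": _("General"),
        "items": [
            {"type": "doctype", "name": "Checks",
             "description": _("Checks"), "onboard": 1},
        ],
    },
]

assert sections[0]["label"] == "General"
assert all(item["type"] == "doctype" for item in sections[0]["items"])
```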
| 28.466321 | 64 | 0.304696 | 294 | 5,494 | 5.554422 | 0.136054 | 0.15493 | 0.211268 | 0.13962 | 0.924679 | 0.876914 | 0.769137 | 0.644213 | 0.644213 | 0.44703 | 0 | 0.005774 | 0.527121 | 5,494 | 192 | 65 | 28.614583 | 0.622787 | 0.466873 | 0 | 0.407767 | 0 | 0 | 0.286517 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009709 | true | 0 | 0.009709 | 0.009709 | 0.029126 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6e7771f57a487e2580a86213a56177435c7ae8fb | 2,161 | py | Python | brain_image_text/flags.py | cvsubmittemp/BraVL | 784e3df6a8b6cf9f19bd28d0e0a2d21eede47ebd | [
"MIT"
] | null | null | null | brain_image_text/flags.py | cvsubmittemp/BraVL | 784e3df6a8b6cf9f19bd28d0e0a2d21eede47ebd | [
"MIT"
] | null | null | null | brain_image_text/flags.py | cvsubmittemp/BraVL | 784e3df6a8b6cf9f19bd28d0e0a2d21eede47ebd | [
"MIT"
] | 1 | 2022-03-28T10:29:35.000Z | 2022-03-28T10:29:35.000Z | from utils.BaseFlags import parser as parser
# DATASET NAME
parser.add_argument('--dataset', type=str, default='Brain_Image_Text', help="name of the dataset")
# DATA DEPENDENT
# to be set by experiments themselves
parser.add_argument('--style_m1_dim', type=int, default=0, help="dimension of varying factor latent space")
parser.add_argument('--style_m2_dim', type=int, default=0, help="dimension of varying factor latent space")
parser.add_argument('--style_m3_dim', type=int, default=0, help="dimension of varying factor latent space")
parser.add_argument('--num_hidden_layers', type=int, default=2, help="number of hidden layers")
parser.add_argument('--likelihood_m1', type=str, default='laplace', help="output distribution")
parser.add_argument('--likelihood_m2', type=str, default='laplace', help="output distribution")
parser.add_argument('--likelihood_m3', type=str, default='laplace', help="output distribution")
# LOSS TERM WEIGHTS
parser.add_argument('--beta_m1_style', type=float, default=1.0, help="default weight divergence term style modality 1")
parser.add_argument('--beta_m2_style', type=float, default=1.0, help="default weight divergence term style modality 2")
parser.add_argument('--beta_m3_style', type=float, default=1.0, help="default weight divergence term style modality 3")
parser.add_argument('--beta_m1_rec', type=float, default=1.0, help="default weight reconstruction modality 1")
parser.add_argument('--beta_m2_rec', type=float, default=1.0, help="default weight reconstruction modality 2")
parser.add_argument('--beta_m3_rec', type=float, default=1.0, help="default weight reconstruction modality 3")
parser.add_argument('--div_weight_m1_content', type=float, default=0.25, help="default weight divergence term content modality 1")
parser.add_argument('--div_weight_m2_content', type=float, default=0.25, help="default weight divergence term content modality 2")
parser.add_argument('--div_weight_m3_content', type=float, default=0.25, help="default weight divergence term content modality 3")
parser.add_argument('--div_weight_uniform_content', type=float, default=0.25, help="default weight divergence term prior")
#
| 77.178571 | 130 | 0.782508 | 324 | 2,161 | 5.049383 | 0.209877 | 0.099022 | 0.187042 | 0.115526 | 0.806846 | 0.74511 | 0.74511 | 0.660147 | 0.660147 | 0.660147 | 0 | 0.026196 | 0.081444 | 2,161 | 27 | 131 | 80.037037 | 0.797985 | 0.037483 | 0 | 0 | 0 | 0 | 0.482642 | 0.04677 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6ebab96c4a6b6e0edf79a43f4c0c8544dc9db4ab | 137 | py | Python | demo-project/src/demo_project/pipelines/feature_engineering/__init__.py | admariner/kedro-viz | 371f7f943ee1c9a69545ed6aa1cd468deafed7f0 | [
"BSD-3-Clause-Clear",
"Apache-2.0"
] | 125 | 2022-01-10T14:18:32.000Z | 2022-03-31T16:08:29.000Z | demo-project/src/demo_project/pipelines/feature_engineering/__init__.py | kedro-org/kedro-viz | 627aeca40c543412afb988a8ddc64e86dcaf27ec | [
"BSD-3-Clause-Clear",
"Apache-2.0"
] | 81 | 2022-01-10T15:14:24.000Z | 2022-03-31T16:20:59.000Z | demo-project/src/demo_project/pipelines/feature_engineering/__init__.py | admariner/kedro-viz | 371f7f943ee1c9a69545ed6aa1cd468deafed7f0 | [
"BSD-3-Clause-Clear",
"Apache-2.0"
] | 11 | 2022-01-12T14:57:54.000Z | 2022-03-07T06:48:30.000Z | """
This is a boilerplate pipeline 'feature_engineering'
generated using Kedro 0.18.1
"""
from .pipeline import create_pipeline # NOQA
| 19.571429 | 52 | 0.766423 | 19 | 137 | 5.421053 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034188 | 0.145985 | 137 | 6 | 53 | 22.833333 | 0.846154 | 0.635037 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ebfdc9706090e2a6caa03ce8a224dc942d9f753 | 97 | py | Python | QuantTorch/LogLinNet.py | Enderdead/BinaryConnect_PyTorch | 990e970b1fbd299ff88200db21a9cc3fe44706d3 | [
"MIT"
] | 75 | 2019-03-19T07:36:56.000Z | 2021-12-23T02:34:59.000Z | QuantTorch/LogLinNet.py | Enderdead/BinaryConnect_PyTorch | 990e970b1fbd299ff88200db21a9cc3fe44706d3 | [
"MIT"
] | 10 | 2019-03-19T21:16:56.000Z | 2019-04-16T15:05:37.000Z | QuantTorch/LogLinNet.py | Enderdead/BinaryConnect_PyTorch | 990e970b1fbd299ff88200db21a9cc3fe44706d3 | [
"MIT"
] | 9 | 2019-08-12T10:33:55.000Z | 2021-07-23T02:10:06.000Z | from QuantTorch.functions.log_lin_connect import *
from QuantTorch.layers.log_lin_layers import * | 48.5 | 50 | 0.865979 | 14 | 97 | 5.714286 | 0.571429 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072165 | 97 | 2 | 51 | 48.5 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2829feebbad655b17c9c4d26e91272dd5e02226a | 4,368 | py | Python | cogs/profile.py | nemesio65/Discord-Bot | b576f0c220aaf5f1bc2373821efee47531b275ff | [
"MIT"
] | null | null | null | cogs/profile.py | nemesio65/Discord-Bot | b576f0c220aaf5f1bc2373821efee47531b275ff | [
"MIT"
] | null | null | null | cogs/profile.py | nemesio65/Discord-Bot | b576f0c220aaf5f1bc2373821efee47531b275ff | [
"MIT"
] | null | null | null | import random as r
import sqlite3
import discord
from discord.ext import commands
import asyncio
import datetime
import math
class ProfileCog(commands.Cog, name='Profile'):
def __init__(self, bot):
self.bot = bot
@commands.command()
async def biosetup(self, ctx, *, content:str):
db = sqlite3.connect('bot.db')
cursor = db.cursor()
cursor.execute("SELECT profiles.user_id, profiles.guild_id FROM profiles JOIN levels ON profiles.user_id = levels.user_id and profiles.guild_id = levels.guild_id WHERE levels.guild_id = ? and levels.user_id = ?", (str(ctx.message.author.guild.id), str(ctx.message.author.id)))
result = cursor.fetchone()
if result is None:
sql = ("INSERT INTO profiles(guild_id, user_id, bio, twitch) VALUES(?,?,?,?)")
val = (ctx.message.author.guild.id, ctx.message.author.id, str(content), None)
cursor.execute(sql, val)
db.commit()
await ctx.send('Your bio has been updated.')
else:
sql = ("UPDATE profiles SET bio = ? WHERE guild_id = ? and user_id = ?")
val = (str(content), str(ctx.message.guild.id), str(ctx.message.author.id))
cursor.execute(sql, val)
db.commit()
await ctx.send("Your bio has been updated.")
@commands.command()
async def twitchsetup(self, ctx, *, content:str):
db = sqlite3.connect('bot.db')
cursor = db.cursor()
cursor.execute("SELECT profiles.user_id, profiles.guild_id FROM profiles JOIN levels ON profiles.user_id = levels.user_id and profiles.guild_id = levels.guild_id WHERE levels.guild_id = ? and levels.user_id = ?", (str(ctx.message.author.guild.id), str(ctx.message.author.id)))
result = cursor.fetchone()
if result is None:
sql = ("INSERT INTO profiles(guild_id, user_id, bio, twitch) VALUES(?,?,?,?)")
val = (ctx.message.author.guild.id, ctx.message.author.id, None, str(content))
cursor.execute(sql, val)
db.commit()
await ctx.send('Your twitch has been updated.')
else:
sql = ("UPDATE profiles SET twitch = ? WHERE guild_id = ? and user_id = ?")
val = (str(content), str(ctx.message.guild.id), str(ctx.message.author.id))
cursor.execute(sql, val)
db.commit()
await ctx.send("Your twitch has been updated.")
@commands.command()
async def bio(self, ctx, user:discord.User=None):
if user is not None:
db = sqlite3.connect('bot.db')
cursor = db.cursor()
cursor.execute(
"SELECT user_id, bio, twitch FROM profiles WHERE guild_id = ? and user_id = ?",
(str(ctx.message.author.guild.id), str(user.id)))
result = cursor.fetchone()
if result is None:
await ctx.send('That user has no bio.')
else:
await ctx.send(f"{user.name} bio: '{str(result[1])}' twitch: '{str(result[2])}' .")
cursor.close()
db.close()
elif user is None:
db = sqlite3.connect('bot.db')
cursor = db.cursor()
cursor.execute(
"SELECT user_id, bio, twitch FROM profiles WHERE guild_id = ? and user_id = ?",
(str(ctx.message.guild.id), str(ctx.message.author.id)))
result = cursor.fetchone()
if result is None:
await ctx.send('That user has no bio.')
else:
await ctx.send(
f"{ctx.message.author.name} bio: '{str(result[1])}' twitch: '{str(result[2])}' .")
cursor.close()
db.close()
def setup(bot):
bot.add_cog(ProfileCog(bot))
db = sqlite3.connect('bot.db')
cursor = db.cursor()
cursor.executescript('''
CREATE TABLE IF NOT EXISTS profiles (
guild_id TEXT,
user_id TEXT,
bio TEXT,
twitch TEXT
)
''')
print('Profiles is loaded')
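The cog's core flow is select-then-insert/update on a per-guild, per-user row. That same flow, using only stdlib `sqlite3` with parameterized placeholders throughout (in-memory database, hypothetical `set_bio` helper mirroring the cog's table):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE profiles (guild_id TEXT, user_id TEXT, bio TEXT, twitch TEXT)")

def set_bio(guild_id, user_id, bio):
    # Insert a row on first write, update it afterwards; `?` placeholders
    # keep user-supplied content out of the SQL text itself.
    cur = db.execute("SELECT 1 FROM profiles WHERE guild_id = ? AND user_id = ?",
                     (str(guild_id), str(user_id)))
    if cur.fetchone() is None:
        db.execute("INSERT INTO profiles(guild_id, user_id, bio, twitch) VALUES(?,?,?,?)",
                   (str(guild_id), str(user_id), bio, None))
    else:
        db.execute("UPDATE profiles SET bio = ? WHERE guild_id = ? AND user_id = ?",
                   (bio, str(guild_id), str(user_id)))
    db.commit()

set_bio(1, 2, "hello")
set_bio(1, 2, "updated")
row = db.execute("SELECT bio FROM profiles WHERE guild_id = ? AND user_id = ?",
                 ("1", "2")).fetchone()
assert row == ("updated",)
```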
| 43.247525 | 275 | 0.591117 | 571 | 4,368 | 4.446585 | 0.14711 | 0.074439 | 0.107129 | 0.085073 | 0.820796 | 0.813706 | 0.813706 | 0.783379 | 0.783379 | 0.763293 | 0 | 0.003796 | 0.276328 | 4,368 | 100 | 276 | 43.68 | 0.799431 | 0 | 0 | 0.544444 | 0 | 0.088889 | 0.396062 | 0.075321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.077778 | 0 | 0.111111 | 0.011111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
288215dd458aab4e832606cd84d8e15289afa4c1 | 24 | py | Python | pyjs/transport/__init__.py | vulcan-coalition/pyjs | cfafea13269ac04988478e107941b8c9f3147af4 | [
"Apache-2.0"
] | null | null | null | pyjs/transport/__init__.py | vulcan-coalition/pyjs | cfafea13269ac04988478e107941b8c9f3147af4 | [
"Apache-2.0"
] | null | null | null | pyjs/transport/__init__.py | vulcan-coalition/pyjs | cfafea13269ac04988478e107941b8c9f3147af4 | [
"Apache-2.0"
] | null | null | null | from . import websocket
| 12 | 23 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
955e59c2ac9cb825b31f8910f61471cb2c725361 | 29 | py | Python | aggregate_extensions/__init__.py | mynl/aggregate_extensions | 514aff89e95ba74f34848b324f3986654db3a2cb | [
"BSD-3-Clause"
] | null | null | null | aggregate_extensions/__init__.py | mynl/aggregate_extensions | 514aff89e95ba74f34848b324f3986654db3a2cb | [
"BSD-3-Clause"
] | null | null | null | aggregate_extensions/__init__.py | mynl/aggregate_extensions | 514aff89e95ba74f34848b324f3986654db3a2cb | [
"BSD-3-Clause"
] | null | null | null |
from . allocation import *
| 7.25 | 26 | 0.689655 | 3 | 29 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 29 | 3 | 27 | 9.666667 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
95cd131ab757776a7600b159afbdd2191be3ca37 | 37 | py | Python | rpn/__init__.py | bradleysawler/python-rpn | 727dda890d106aced9c07035aa4034178394cc05 | [
"MIT"
] | 2 | 2016-11-09T14:46:29.000Z | 2019-12-24T18:13:25.000Z | rpn/__init__.py | bradleysawler/python-rpn | 727dda890d106aced9c07035aa4034178394cc05 | [
"MIT"
] | 1 | 2020-04-22T06:26:50.000Z | 2020-04-22T06:26:50.000Z | rpn/__init__.py | bradleysawler/python-rpn | 727dda890d106aced9c07035aa4034178394cc05 | [
"MIT"
] | 1 | 2020-03-07T05:34:46.000Z | 2020-03-07T05:34:46.000Z | from .rpn import solve_rpn, RPNError
| 18.5 | 36 | 0.810811 | 6 | 37 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
95d616ca5d2b521674b7eb24ba0515626041dea3 | 7,774 | py | Python | geoalchemy2/tests/test_types.py | fredj/geoalchemy2 | 9f26714e8d181440ac03d7295d34d615cac11d02 | [
"MIT"
] | null | null | null | geoalchemy2/tests/test_types.py | fredj/geoalchemy2 | 9f26714e8d181440ac03d7295d34d615cac11d02 | [
"MIT"
] | null | null | null | geoalchemy2/tests/test_types.py | fredj/geoalchemy2 | 9f26714e8d181440ac03d7295d34d615cac11d02 | [
"MIT"
] | null | null | null | import unittest
import re
from nose.tools import eq_, raises
def eq_sql(a, b, msg=None):
a = re.sub(r'[\n\t]', '', str(a))
eq_(a, b, msg)
def _create_geometry_table():
from sqlalchemy import Table, MetaData, Column
from geoalchemy2.types import Geometry
table = Table('table', MetaData(), Column('geom', Geometry))
return table
def _create_geography_table():
from sqlalchemy import Table, MetaData, Column
from geoalchemy2.types import Geography
table = Table('table', MetaData(), Column('geom', Geography))
return table
class TestGeometry(unittest.TestCase):
def test_get_col_spec(self):
from geoalchemy2 import Geometry
g = Geometry(srid=900913)
eq_(g.get_col_spec(), 'geometry(GEOMETRY,900913)')
def test_column_expression(self):
from sqlalchemy.sql import select
table = _create_geometry_table()
s = select([table.c.geom])
eq_sql(s, 'SELECT ST_AsBinary("table".geom) AS geom FROM "table"')
def test_select_bind_expression(self):
from sqlalchemy.sql import select
table = _create_geometry_table()
s = select(['foo']).where(table.c.geom == 'POINT(1 2)')
eq_sql(s, 'SELECT foo FROM "table" WHERE '
'"table".geom = ST_GeomFromText(:geom_1)')
eq_(s.compile().params, {'geom_1': 'POINT(1 2)'})
def test_insert_bind_expression(self):
from sqlalchemy.sql import insert
table = _create_geometry_table()
i = insert(table).values(geom='POINT(1 2)')
eq_sql(i, 'INSERT INTO "table" (geom) VALUES (ST_GeomFromText(:geom))')
eq_(i.compile().params, {'geom': 'POINT(1 2)'})
def test_function_call(self):
from sqlalchemy.sql import select
table = _create_geometry_table()
s = select([table.c.geom.ST_Buffer(2)])
eq_sql(s,
'SELECT ST_AsBinary(ST_Buffer("table".geom, :param_1)) '
'AS "ST_Buffer_1" FROM "table"')
@raises(AttributeError)
def test_non_ST_function_call(self):
table = _create_geometry_table()
table.c.geom.Buffer(2)
def test_subquery(self):
# test for geometry columns not delivered to the result
# http://hg.sqlalchemy.org/sqlalchemy/rev/f1efb20c6d61
from sqlalchemy.sql import select
table = _create_geometry_table()
s = select([table]).alias('name').select()
eq_sql(s,
'SELECT ST_AsBinary(name.geom) AS geom FROM '
'(SELECT "table".geom AS geom FROM "table") AS name')


class TestGeography(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2 import Geography
        g = Geography(srid=900913)
        eq_(g.get_col_spec(), 'geography(GEOMETRY,900913)')

    def test_column_expression(self):
        from sqlalchemy.sql import select
        table = _create_geography_table()
        s = select([table.c.geom])
        eq_sql(s, 'SELECT ST_AsBinary("table".geom) AS geom FROM "table"')

    def test_select_bind_expression(self):
        from sqlalchemy.sql import select
        table = _create_geography_table()
        s = select(['foo']).where(table.c.geom == 'POINT(1 2)')
        eq_sql(s, 'SELECT foo FROM "table" WHERE '
                  '"table".geom = ST_GeogFromText(:geom_1)')
        eq_(s.compile().params, {'geom_1': 'POINT(1 2)'})

    def test_insert_bind_expression(self):
        from sqlalchemy.sql import insert
        table = _create_geography_table()
        i = insert(table).values(geom='POINT(1 2)')
        eq_sql(i, 'INSERT INTO "table" (geom) VALUES (ST_GeogFromText(:geom))')
        eq_(i.compile().params, {'geom': 'POINT(1 2)'})

    def test_function_call(self):
        from sqlalchemy.sql import select
        table = _create_geography_table()
        s = select([table.c.geom.ST_Buffer(2)])
        eq_sql(s,
               'SELECT ST_AsBinary(ST_Buffer("table".geom, :param_1)) '
               'AS "ST_Buffer_1" FROM "table"')

    @raises(AttributeError)
    def test_non_ST_function_call(self):
        table = _create_geography_table()
        table.c.geom.Buffer(2)

    def test_subquery(self):
        # test for geography columns not delivered to the result
        # http://hg.sqlalchemy.org/sqlalchemy/rev/f1efb20c6d61
        from sqlalchemy.sql import select
        table = _create_geography_table()
        s = select([table]).alias('name').select()
        eq_sql(s,
               'SELECT ST_AsBinary(name.geom) AS geom FROM '
               '(SELECT "table".geom AS geom FROM "table") AS name')


class TestPoint(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='POINT', srid=900913)
        eq_(g.get_col_spec(), 'geometry(POINT,900913)')


class TestCurve(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='CURVE', srid=900913)
        eq_(g.get_col_spec(), 'geometry(CURVE,900913)')


class TestLineString(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='LINESTRING', srid=900913)
        eq_(g.get_col_spec(), 'geometry(LINESTRING,900913)')


class TestPolygon(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='POLYGON', srid=900913)
        eq_(g.get_col_spec(), 'geometry(POLYGON,900913)')


class TestMultiPoint(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='MULTIPOINT', srid=900913)
        eq_(g.get_col_spec(), 'geometry(MULTIPOINT,900913)')


class TestMultiLineString(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='MULTILINESTRING', srid=900913)
        eq_(g.get_col_spec(), 'geometry(MULTILINESTRING,900913)')


class TestMultiPolygon(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='MULTIPOLYGON', srid=900913)
        eq_(g.get_col_spec(), 'geometry(MULTIPOLYGON,900913)')


class TestGeometryCollection(unittest.TestCase):

    def test_get_col_spec(self):
        from geoalchemy2.types import Geometry
        g = Geometry(geometry_type='GEOMETRYCOLLECTION', srid=900913)
        eq_(g.get_col_spec(), 'geometry(GEOMETRYCOLLECTION,900913)')


class TestFunction(unittest.TestCase):

    def test_ST_Equal_WKTElement_WKTElement(self):
        from sqlalchemy import func
        from geoalchemy2.elements import WKTElement
        expr = func.ST_Equals(WKTElement('POINT(1 2)'),
                              WKTElement('POINT(1 2)'))
        eq_sql(expr, 'ST_Equals('
                     'ST_GeomFromText(:ST_GeomFromText_1, :ST_GeomFromText_2), '
                     'ST_GeomFromText(:ST_GeomFromText_3, :ST_GeomFromText_4))')
        eq_(expr.compile().params,
            {u'ST_GeomFromText_1': 'POINT(1 2)',
             u'ST_GeomFromText_2': -1,
             u'ST_GeomFromText_3': 'POINT(1 2)',
             u'ST_GeomFromText_4': -1})

    def test_ST_Equal_Column_WKTElement(self):
        from sqlalchemy import func
        from geoalchemy2.elements import WKTElement
        table = _create_geometry_table()
        expr = func.ST_Equals(table.c.geom, WKTElement('POINT(1 2)'))
        eq_sql(expr,
               'ST_Equals("table".geom, '
               'ST_GeomFromText(:ST_GeomFromText_1, :ST_GeomFromText_2))')
        eq_(expr.compile().params, {u'ST_GeomFromText_1': 'POINT(1 2)',
                                    u'ST_GeomFromText_2': -1})
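The assertions above lean on small helpers (`eq_`, `eq_sql`, plus the `_create_geometry_table` / `_create_geography_table` factories) defined earlier in this test module. A minimal stdlib-only sketch of the two comparison helpers, under the assumption that they simply normalize the compiled SQL before comparing (the table factories, which need SQLAlchemy and GeoAlchemy2 installed, are omitted):

```python
import re

def eq_(a, b, msg=None):
    """Bare equality assertion used throughout the tests (assumed helper)."""
    assert a == b, msg or '%r != %r' % (a, b)

def eq_sql(a, b, msg=None):
    """Compare str(a) of a SQLAlchemy construct against an expected SQL
    string, dropping the newlines/tabs the compiler inserts (assumed helper)."""
    a = re.sub(r'[\n\t]', '', str(a))
    assert a == b, msg or '%r != %r' % (a, b)
```

With this reading, `str(select(...))` producing `'SELECT foo \nFROM "table"'` compares equal to the single-line expected string `'SELECT foo FROM "table"'`.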


# kino/draw/__init__.py (BrancoLab/Kino, MIT license)
# from kino.draw.animal import DrawAnimal
# from kino.draw.kinematics import Steps


# instances/passenger_demand/pas-20210421-2109-int6000000000000001e-1/90.py
# (LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure, BSD-3-Clause)
"""
PASSENGERS
"""
numPassengers = 1304
passenger_arriving = (
(2, 2, 5, 3, 1, 0, 2, 3, 2, 1, 3, 0), # 0
(4, 6, 3, 3, 0, 0, 3, 4, 1, 5, 2, 0), # 1
(1, 1, 3, 1, 4, 0, 2, 3, 3, 3, 0, 0), # 2
(1, 6, 0, 2, 0, 0, 3, 1, 0, 1, 0, 0), # 3
(2, 5, 7, 1, 2, 0, 4, 2, 0, 0, 0, 0), # 4
(4, 0, 2, 2, 2, 0, 2, 2, 2, 1, 0, 0), # 5
(1, 6, 2, 2, 1, 0, 5, 2, 1, 3, 0, 0), # 6
(1, 5, 1, 1, 1, 0, 8, 2, 4, 1, 1, 0), # 7
(2, 1, 3, 1, 0, 0, 1, 3, 2, 4, 0, 0), # 8
(0, 6, 3, 0, 0, 0, 3, 7, 2, 1, 2, 0), # 9
(1, 2, 1, 5, 2, 0, 2, 4, 2, 1, 0, 0), # 10
(1, 4, 4, 1, 0, 0, 4, 3, 1, 1, 1, 0), # 11
(0, 3, 7, 1, 0, 0, 2, 6, 4, 2, 0, 0), # 12
(1, 2, 1, 1, 0, 0, 4, 3, 5, 3, 2, 0), # 13
(1, 3, 0, 3, 2, 0, 6, 6, 4, 1, 0, 0), # 14
(1, 3, 5, 1, 3, 0, 2, 1, 2, 4, 1, 0), # 15
(1, 3, 0, 2, 1, 0, 1, 3, 3, 5, 2, 0), # 16
(1, 3, 2, 0, 0, 0, 1, 6, 3, 1, 1, 0), # 17
(3, 2, 0, 0, 1, 0, 3, 9, 0, 2, 3, 0), # 18
(4, 4, 6, 4, 0, 0, 5, 0, 2, 4, 0, 0), # 19
(5, 2, 3, 0, 3, 0, 3, 4, 2, 3, 0, 0), # 20
(1, 1, 2, 0, 0, 0, 2, 2, 3, 2, 0, 0), # 21
(2, 3, 2, 1, 0, 0, 4, 4, 7, 3, 0, 0), # 22
(2, 3, 1, 2, 1, 0, 2, 5, 2, 0, 2, 0), # 23
(3, 6, 1, 1, 2, 0, 1, 3, 4, 2, 1, 0), # 24
(3, 2, 6, 2, 1, 0, 0, 7, 4, 3, 1, 0), # 25
(1, 4, 1, 2, 0, 0, 4, 5, 4, 2, 0, 0), # 26
(2, 3, 4, 0, 1, 0, 0, 4, 2, 2, 2, 0), # 27
(2, 1, 7, 1, 0, 0, 1, 2, 1, 3, 1, 0), # 28
(2, 4, 4, 0, 2, 0, 1, 4, 2, 2, 0, 0), # 29
(0, 3, 2, 0, 1, 0, 3, 5, 4, 3, 1, 0), # 30
(3, 6, 3, 0, 0, 0, 2, 2, 3, 0, 0, 0), # 31
(0, 9, 4, 2, 0, 0, 5, 5, 1, 1, 0, 0), # 32
(2, 6, 4, 3, 2, 0, 2, 4, 1, 3, 2, 0), # 33
(1, 4, 3, 2, 0, 0, 4, 3, 5, 1, 0, 0), # 34
(5, 3, 2, 1, 2, 0, 1, 3, 2, 1, 0, 0), # 35
(1, 6, 8, 0, 1, 0, 1, 2, 1, 3, 3, 0), # 36
(2, 1, 6, 0, 0, 0, 1, 5, 2, 2, 3, 0), # 37
(0, 2, 4, 3, 0, 0, 3, 4, 3, 1, 2, 0), # 38
(3, 5, 2, 1, 0, 0, 2, 1, 3, 4, 1, 0), # 39
(2, 4, 1, 2, 1, 0, 4, 1, 3, 1, 2, 0), # 40
(1, 7, 0, 0, 2, 0, 2, 4, 3, 1, 0, 0), # 41
(3, 4, 2, 2, 4, 0, 2, 2, 4, 1, 2, 0), # 42
(3, 4, 2, 2, 1, 0, 2, 5, 3, 4, 1, 0), # 43
(4, 6, 4, 0, 1, 0, 5, 2, 3, 0, 1, 0), # 44
(1, 3, 4, 0, 1, 0, 1, 4, 2, 2, 0, 0), # 45
(2, 3, 5, 1, 2, 0, 3, 6, 1, 4, 3, 0), # 46
(1, 3, 2, 1, 0, 0, 6, 3, 3, 2, 0, 0), # 47
(2, 2, 3, 3, 1, 0, 2, 4, 2, 2, 2, 0), # 48
(3, 0, 5, 0, 1, 0, 4, 1, 1, 1, 0, 0), # 49
(1, 2, 3, 2, 1, 0, 0, 4, 1, 0, 0, 0), # 50
(3, 7, 2, 0, 1, 0, 2, 4, 1, 1, 3, 0), # 51
(3, 1, 2, 0, 0, 0, 3, 5, 4, 3, 0, 0), # 52
(2, 1, 1, 1, 0, 0, 2, 4, 3, 3, 1, 0), # 53
(2, 3, 4, 2, 1, 0, 0, 6, 2, 5, 2, 0), # 54
(3, 2, 1, 0, 1, 0, 4, 0, 3, 4, 2, 0), # 55
(3, 3, 3, 1, 1, 0, 3, 5, 0, 2, 0, 0), # 56
(1, 2, 2, 2, 1, 0, 2, 1, 2, 2, 1, 0), # 57
(1, 4, 3, 3, 0, 0, 0, 2, 4, 0, 0, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(1.5897909350307289, 4.077876420454546, 4.7965416131105405, 3.8017663043478263, 4.285817307692308, 2.8540760869565225), # 0
(1.6047132060286802, 4.123224959227694, 4.822449322514998, 3.8229386322463776, 4.317939903846154, 2.853103283514493), # 1
(1.6194650863330406, 4.167900841750842, 4.84774207369323, 3.8436449275362325, 4.349384615384616, 2.8521007246376815), # 2
(1.6340340539947322, 4.211855859375001, 4.8724013897814915, 3.863867527173913, 4.380122596153847, 2.851068546195653), # 3
(1.6484075870646768, 4.25504180345118, 4.896408793916025, 3.883588768115943, 4.410125000000001, 2.850006884057971), # 4
(1.662573163593796, 4.297410465330389, 4.919745809233077, 3.902790987318841, 4.439362980769231, 2.848915874094203), # 5
(1.6765182616330119, 4.338913636363637, 4.942393958868896, 3.9214565217391315, 4.467807692307693, 2.8477956521739136), # 6
(1.690230359233246, 4.379503107901936, 4.964334765959726, 3.939567708333334, 4.49543028846154, 2.8466463541666673), # 7
(1.7036969344454203, 4.419130671296296, 4.985549753641818, 3.9571068840579713, 4.522201923076924, 2.845468115942029), # 8
(1.7169054653204567, 4.457748117897728, 5.006020445051415, 3.974056385869566, 4.5480937500000005, 2.8442610733695655), # 9
(1.7298434299092773, 4.49530723905724, 5.025728363324765, 3.9903985507246387, 4.573076923076924, 2.8430253623188406), # 10
(1.7424983062628039, 4.5317598261258425, 5.044655031598115, 4.0061157155797105, 4.597122596153847, 2.841761118659421), # 11
(1.7548575724319582, 4.567057670454545, 5.062781973007713, 4.021190217391305, 4.620201923076923, 2.8404684782608696), # 12
(1.7669087064676616, 4.601152563394361, 5.080090710689803, 4.035604393115943, 4.642286057692309, 2.839147576992754), # 13
(1.7786391864208373, 4.6339962962962975, 5.096562767780633, 4.049340579710145, 4.663346153846154, 2.8377985507246377), # 14
(1.7900364903424055, 4.665540660511364, 5.112179667416452, 4.062381114130435, 4.683353365384616, 2.8364215353260875), # 15
(1.8010880962832896, 4.695737447390573, 5.126922932733506, 4.074708333333334, 4.702278846153847, 2.835016666666667), # 16
(1.8117814822944105, 4.724538448284933, 5.1407740868680385, 4.0863045742753625, 4.720093750000001, 2.833584080615943), # 17
(1.8221041264266904, 4.751895454545455, 5.1537146529563, 4.097152173913044, 4.736769230769233, 2.8321239130434788), # 18
(1.8320435067310508, 4.777760257523148, 5.165726154134534, 4.107233469202899, 4.752276442307693, 2.830636299818841), # 19
(1.841587101258414, 4.802084648569023, 5.176790113538988, 4.11653079710145, 4.76658653846154, 2.8291213768115946), # 20
(1.850722388059702, 4.82482041903409, 5.186888054305914, 4.125026494565218, 4.779670673076923, 2.827579279891305), # 21
(1.8594368451858356, 4.845919360269361, 5.196001499571551, 4.1327028985507255, 4.7915, 2.8260101449275368), # 22
(1.867717950687738, 4.865333263625843, 5.204111972472152, 4.139542346014493, 4.802045673076924, 2.8244141077898557), # 23
(1.8755531826163303, 4.8830139204545455, 5.211200996143959, 4.145527173913044, 4.811278846153846, 2.8227913043478265), # 24
(1.8829300190225344, 4.898913122106482, 5.217250093723223, 4.150639719202899, 4.819170673076923, 2.8211418704710147), # 25
(1.8898359379572727, 4.91298265993266, 5.222240788346188, 4.15486231884058, 4.825692307692308, 2.819465942028986), # 26
(1.8962584174714663, 4.9251743252840905, 5.2261546031491, 4.15817730978261, 4.830814903846155, 2.817763654891305), # 27
(1.9021849356160379, 4.935439909511785, 5.22897306126821, 4.160567028985508, 4.834509615384616, 2.8160351449275365), # 28
(1.9076029704419084, 4.943731203966752, 5.23067768583976, 4.162013813405798, 4.836747596153847, 2.814280548007247), # 29
(1.9125000000000003, 4.950000000000001, 5.231250000000001, 4.1625000000000005, 4.8375, 2.8125000000000004), # 30
(1.9170822170716115, 4.955207279829545, 5.230820969202899, 4.162412193627452, 4.837226196808512, 2.8100257558720645), # 31
(1.92156550511509, 4.960345738636365, 5.229546014492754, 4.162150490196079, 4.836410638297873, 2.8062148550724646), # 32
(1.925951878196931, 4.96541473721591, 5.227443342391306, 4.161717463235295, 4.83506210106383, 2.8011046101949026), # 33
(1.9302433503836318, 4.970413636363637, 5.22453115942029, 4.161115686274511, 4.833189361702129, 2.794732333833084), # 34
(1.9344419357416882, 4.975341796875, 5.22082767210145, 4.160347732843138, 4.830801196808512, 2.78713533858071), # 35
(1.9385496483375964, 4.980198579545456, 5.216351086956522, 4.1594161764705895, 4.827906382978725, 2.7783509370314845), # 36
(1.9425685022378518, 4.9849833451704555, 5.211119610507247, 4.158323590686275, 4.824513696808511, 2.768416441779111), # 37
(1.9465005115089515, 4.989695454545455, 5.2051514492753626, 4.157072549019608, 4.820631914893617, 2.757369165417291), # 38
(1.9503476902173915, 4.994334268465909, 5.1984648097826085, 4.155665625000001, 4.816269813829788, 2.7452464205397304), # 39
(1.9541120524296678, 4.998899147727274, 5.191077898550725, 4.154105392156863, 4.811436170212766, 2.73208551974013), # 40
(1.9577956122122764, 5.003389453125, 5.18300892210145, 4.152394424019608, 4.806139760638298, 2.717923775612195), # 41
(1.9614003836317138, 5.0078045454545475, 5.174276086956523, 4.150535294117647, 4.800389361702129, 2.702798500749626), # 42
(1.9649283807544762, 5.0121437855113635, 5.164897599637682, 4.148530575980393, 4.794193750000001, 2.6867470077461273), # 43
(1.968381617647059, 5.01640653409091, 5.154891666666668, 4.146382843137255, 4.78756170212766, 2.6698066091954025), # 44
(1.971762108375959, 5.0205921519886365, 5.144276494565219, 4.1440946691176475, 4.780501994680852, 2.652014617691155), # 45
(1.9750718670076732, 5.024700000000001, 5.133070289855074, 4.141668627450981, 4.77302340425532, 2.633408345827087), # 46
(1.978312907608696, 5.028729438920456, 5.121291259057971, 4.139107291666667, 4.765134707446809, 2.6140251061969018), # 47
(1.981487244245525, 5.032679829545455, 5.108957608695652, 4.136413235294118, 4.7568446808510645, 2.5939022113943038), # 48
(1.9845968909846547, 5.0365505326704545, 5.096087545289856, 4.133589031862746, 4.748162101063831, 2.5730769740129937), # 49
(1.9876438618925836, 5.040340909090909, 5.0826992753623195, 4.130637254901962, 4.739095744680852, 2.551586706646677), # 50
(1.990630171035806, 5.044050319602273, 5.0688110054347835, 4.127560477941177, 4.729654388297873, 2.5294687218890557), # 51
(1.9935578324808187, 5.047678125000001, 5.054440942028986, 4.124361274509805, 4.719846808510639, 2.5067603323338337), # 52
(1.996428860294118, 5.051223686079546, 5.039607291666667, 4.121042218137255, 4.709681781914894, 2.483498850574713), # 53
(1.9992452685422, 5.054686363636364, 5.024328260869566, 4.117605882352942, 4.6991680851063835, 2.4597215892053974), # 54
(2.0020090712915604, 5.058065518465909, 5.00862205615942, 4.114054840686276, 4.688314494680852, 2.4354658608195905), # 55
(2.0047222826086957, 5.061360511363636, 4.992506884057971, 4.110391666666667, 4.677129787234043, 2.410768978010995), # 56
(2.007386916560103, 5.064570703125002, 4.976000951086957, 4.10661893382353, 4.6656227393617025, 2.3856682533733133), # 57
(2.0100049872122767, 5.067695454545454, 4.959122463768116, 4.102739215686276, 4.653802127659575, 2.3602009995002504), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(2, 2, 5, 3, 1, 0, 2, 3, 2, 1, 3, 0), # 0
(6, 8, 8, 6, 1, 0, 5, 7, 3, 6, 5, 0), # 1
(7, 9, 11, 7, 5, 0, 7, 10, 6, 9, 5, 0), # 2
(8, 15, 11, 9, 5, 0, 10, 11, 6, 10, 5, 0), # 3
(10, 20, 18, 10, 7, 0, 14, 13, 6, 10, 5, 0), # 4
(14, 20, 20, 12, 9, 0, 16, 15, 8, 11, 5, 0), # 5
(15, 26, 22, 14, 10, 0, 21, 17, 9, 14, 5, 0), # 6
(16, 31, 23, 15, 11, 0, 29, 19, 13, 15, 6, 0), # 7
(18, 32, 26, 16, 11, 0, 30, 22, 15, 19, 6, 0), # 8
(18, 38, 29, 16, 11, 0, 33, 29, 17, 20, 8, 0), # 9
(19, 40, 30, 21, 13, 0, 35, 33, 19, 21, 8, 0), # 10
(20, 44, 34, 22, 13, 0, 39, 36, 20, 22, 9, 0), # 11
(20, 47, 41, 23, 13, 0, 41, 42, 24, 24, 9, 0), # 12
(21, 49, 42, 24, 13, 0, 45, 45, 29, 27, 11, 0), # 13
(22, 52, 42, 27, 15, 0, 51, 51, 33, 28, 11, 0), # 14
(23, 55, 47, 28, 18, 0, 53, 52, 35, 32, 12, 0), # 15
(24, 58, 47, 30, 19, 0, 54, 55, 38, 37, 14, 0), # 16
(25, 61, 49, 30, 19, 0, 55, 61, 41, 38, 15, 0), # 17
(28, 63, 49, 30, 20, 0, 58, 70, 41, 40, 18, 0), # 18
(32, 67, 55, 34, 20, 0, 63, 70, 43, 44, 18, 0), # 19
(37, 69, 58, 34, 23, 0, 66, 74, 45, 47, 18, 0), # 20
(38, 70, 60, 34, 23, 0, 68, 76, 48, 49, 18, 0), # 21
(40, 73, 62, 35, 23, 0, 72, 80, 55, 52, 18, 0), # 22
(42, 76, 63, 37, 24, 0, 74, 85, 57, 52, 20, 0), # 23
(45, 82, 64, 38, 26, 0, 75, 88, 61, 54, 21, 0), # 24
(48, 84, 70, 40, 27, 0, 75, 95, 65, 57, 22, 0), # 25
(49, 88, 71, 42, 27, 0, 79, 100, 69, 59, 22, 0), # 26
(51, 91, 75, 42, 28, 0, 79, 104, 71, 61, 24, 0), # 27
(53, 92, 82, 43, 28, 0, 80, 106, 72, 64, 25, 0), # 28
(55, 96, 86, 43, 30, 0, 81, 110, 74, 66, 25, 0), # 29
(55, 99, 88, 43, 31, 0, 84, 115, 78, 69, 26, 0), # 30
(58, 105, 91, 43, 31, 0, 86, 117, 81, 69, 26, 0), # 31
(58, 114, 95, 45, 31, 0, 91, 122, 82, 70, 26, 0), # 32
(60, 120, 99, 48, 33, 0, 93, 126, 83, 73, 28, 0), # 33
(61, 124, 102, 50, 33, 0, 97, 129, 88, 74, 28, 0), # 34
(66, 127, 104, 51, 35, 0, 98, 132, 90, 75, 28, 0), # 35
(67, 133, 112, 51, 36, 0, 99, 134, 91, 78, 31, 0), # 36
(69, 134, 118, 51, 36, 0, 100, 139, 93, 80, 34, 0), # 37
(69, 136, 122, 54, 36, 0, 103, 143, 96, 81, 36, 0), # 38
(72, 141, 124, 55, 36, 0, 105, 144, 99, 85, 37, 0), # 39
(74, 145, 125, 57, 37, 0, 109, 145, 102, 86, 39, 0), # 40
(75, 152, 125, 57, 39, 0, 111, 149, 105, 87, 39, 0), # 41
(78, 156, 127, 59, 43, 0, 113, 151, 109, 88, 41, 0), # 42
(81, 160, 129, 61, 44, 0, 115, 156, 112, 92, 42, 0), # 43
(85, 166, 133, 61, 45, 0, 120, 158, 115, 92, 43, 0), # 44
(86, 169, 137, 61, 46, 0, 121, 162, 117, 94, 43, 0), # 45
(88, 172, 142, 62, 48, 0, 124, 168, 118, 98, 46, 0), # 46
(89, 175, 144, 63, 48, 0, 130, 171, 121, 100, 46, 0), # 47
(91, 177, 147, 66, 49, 0, 132, 175, 123, 102, 48, 0), # 48
(94, 177, 152, 66, 50, 0, 136, 176, 124, 103, 48, 0), # 49
(95, 179, 155, 68, 51, 0, 136, 180, 125, 103, 48, 0), # 50
(98, 186, 157, 68, 52, 0, 138, 184, 126, 104, 51, 0), # 51
(101, 187, 159, 68, 52, 0, 141, 189, 130, 107, 51, 0), # 52
(103, 188, 160, 69, 52, 0, 143, 193, 133, 110, 52, 0), # 53
(105, 191, 164, 71, 53, 0, 143, 199, 135, 115, 54, 0), # 54
(108, 193, 165, 71, 54, 0, 147, 199, 138, 119, 56, 0), # 55
(111, 196, 168, 72, 55, 0, 150, 204, 138, 121, 56, 0), # 56
(112, 198, 170, 74, 56, 0, 152, 205, 140, 123, 57, 0), # 57
(113, 202, 173, 77, 56, 0, 152, 207, 144, 123, 57, 0), # 58
(113, 202, 173, 77, 56, 0, 152, 207, 144, 123, 57, 0), # 59
)
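`passenger_arriving_acc` is the running total of `passenger_arriving` per station column, and its last row sums to `numPassengers = 1304`. A small sketch verifying the prefix-sum relationship on the first time steps of this instance (`running_totals` is an illustrative helper, not part of the instance file):

```python
from itertools import accumulate

def running_totals(rows):
    """Prefix-sum each station column across time steps, i.e. the
    relation between passenger_arriving and passenger_arriving_acc."""
    def add_rows(tot, row):
        return tuple(t + r for t, r in zip(tot, row))
    return list(accumulate(rows, add_rows))

# First three time steps of passenger_arriving, copied from above.
sample = [
    (2, 2, 5, 3, 1, 0, 2, 3, 2, 1, 3, 0),
    (4, 6, 3, 3, 0, 0, 3, 4, 1, 5, 2, 0),
    (1, 1, 3, 1, 4, 0, 2, 3, 3, 3, 0, 0),
]
assert running_totals(sample)[1] == (6, 8, 8, 6, 1, 0, 5, 7, 3, 6, 5, 0)
assert running_totals(sample)[2] == (7, 9, 11, 7, 5, 0, 7, 10, 6, 9, 5, 0)

# The final accumulated row accounts for every passenger in the instance.
assert sum((113, 202, 173, 77, 56, 0, 152, 207, 144, 123, 57, 0)) == 1304
```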
passenger_arriving_rate = (
(1.5897909350307289, 3.2623011363636363, 2.877924967866324, 1.5207065217391305, 0.8571634615384615, 0.0, 2.8540760869565225, 3.428653846153846, 2.2810597826086956, 1.918616645244216, 0.8155752840909091, 0.0), # 0
(1.6047132060286802, 3.298579967382155, 2.8934695935089985, 1.5291754528985508, 0.8635879807692308, 0.0, 2.853103283514493, 3.4543519230769233, 2.2937631793478266, 1.928979729005999, 0.8246449918455387, 0.0), # 1
(1.6194650863330406, 3.3343206734006734, 2.908645244215938, 1.5374579710144929, 0.8698769230769231, 0.0, 2.8521007246376815, 3.4795076923076924, 2.3061869565217394, 1.939096829477292, 0.8335801683501683, 0.0), # 2
(1.6340340539947322, 3.369484687500001, 2.9234408338688946, 1.545547010869565, 0.8760245192307694, 0.0, 2.851068546195653, 3.5040980769230776, 2.3183205163043477, 1.9489605559125964, 0.8423711718750002, 0.0), # 3
(1.6484075870646768, 3.4040334427609436, 2.9378452763496146, 1.553435507246377, 0.8820250000000001, 0.0, 2.850006884057971, 3.5281000000000002, 2.3301532608695656, 1.9585635175664096, 0.8510083606902359, 0.0), # 4
(1.662573163593796, 3.437928372264311, 2.951847485539846, 1.5611163949275362, 0.8878725961538462, 0.0, 2.848915874094203, 3.5514903846153847, 2.3416745923913043, 1.9678983236932306, 0.8594820930660777, 0.0), # 5
(1.6765182616330119, 3.4711309090909093, 2.9654363753213375, 1.5685826086956525, 0.8935615384615384, 0.0, 2.8477956521739136, 3.5742461538461536, 2.352873913043479, 1.9769575835475584, 0.8677827272727273, 0.0), # 6
(1.690230359233246, 3.5036024863215487, 2.9786008595758355, 1.5758270833333334, 0.8990860576923079, 0.0, 2.8466463541666673, 3.5963442307692315, 2.363740625, 1.9857339063838901, 0.8759006215803872, 0.0), # 7
(1.7036969344454203, 3.5353045370370366, 2.991329852185091, 1.5828427536231884, 0.9044403846153848, 0.0, 2.845468115942029, 3.617761538461539, 2.3742641304347827, 1.994219901456727, 0.8838261342592592, 0.0), # 8
(1.7169054653204567, 3.566198494318182, 3.003612267030849, 1.5896225543478264, 0.90961875, 0.0, 2.8442610733695655, 3.638475, 2.3844338315217395, 2.002408178020566, 0.8915496235795455, 0.0), # 9
(1.7298434299092773, 3.5962457912457917, 3.015437017994859, 1.5961594202898552, 0.9146153846153847, 0.0, 2.8430253623188406, 3.658461538461539, 2.394239130434783, 2.0102913453299056, 0.8990614478114479, 0.0), # 10
(1.7424983062628039, 3.625407860900674, 3.0267930189588688, 1.602446286231884, 0.9194245192307693, 0.0, 2.841761118659421, 3.677698076923077, 2.403669429347826, 2.0178620126392457, 0.9063519652251685, 0.0), # 11
(1.7548575724319582, 3.653646136363636, 3.0376691838046277, 1.6084760869565218, 0.9240403846153845, 0.0, 2.8404684782608696, 3.696161538461538, 2.4127141304347828, 2.025112789203085, 0.913411534090909, 0.0), # 12
(1.7669087064676616, 3.680922050715489, 3.048054426413882, 1.614241757246377, 0.9284572115384617, 0.0, 2.839147576992754, 3.713828846153847, 2.4213626358695657, 2.032036284275921, 0.9202305126788722, 0.0), # 13
(1.7786391864208373, 3.7071970370370377, 3.05793766066838, 1.6197362318840578, 0.9326692307692308, 0.0, 2.8377985507246377, 3.7306769230769232, 2.429604347826087, 2.038625107112253, 0.9267992592592594, 0.0), # 14
(1.7900364903424055, 3.732432528409091, 3.0673078004498713, 1.624952445652174, 0.9366706730769232, 0.0, 2.8364215353260875, 3.746682692307693, 2.437428668478261, 2.044871866966581, 0.9331081321022727, 0.0), # 15
(1.8010880962832896, 3.7565899579124578, 3.0761537596401034, 1.6298833333333334, 0.9404557692307693, 0.0, 2.835016666666667, 3.7618230769230774, 2.4448250000000002, 2.050769173093402, 0.9391474894781144, 0.0), # 16
(1.8117814822944105, 3.779630758627946, 3.084464452120823, 1.634521829710145, 0.9440187500000001, 0.0, 2.833584080615943, 3.7760750000000005, 2.4517827445652176, 2.0563096347472154, 0.9449076896569865, 0.0), # 17
(1.8221041264266904, 3.8015163636363636, 3.09222879177378, 1.6388608695652176, 0.9473538461538464, 0.0, 2.8321239130434788, 3.7894153846153857, 2.4582913043478265, 2.0614858611825198, 0.9503790909090909, 0.0), # 18
(1.8320435067310508, 3.8222082060185185, 3.09943569248072, 1.6428933876811593, 0.9504552884615385, 0.0, 2.830636299818841, 3.801821153846154, 2.464340081521739, 2.0662904616538134, 0.9555520515046296, 0.0), # 19
(1.841587101258414, 3.841667718855218, 3.106074068123393, 1.6466123188405797, 0.9533173076923078, 0.0, 2.8291213768115946, 3.8132692307692313, 2.46991847826087, 2.0707160454155953, 0.9604169297138045, 0.0), # 20
(1.850722388059702, 3.8598563352272715, 3.1121328325835482, 1.650010597826087, 0.9559341346153846, 0.0, 2.827579279891305, 3.8237365384615383, 2.475015896739131, 2.0747552217223655, 0.9649640838068179, 0.0), # 21
(1.8594368451858356, 3.8767354882154885, 3.1176008997429308, 1.65308115942029, 0.9582999999999999, 0.0, 2.8260101449275368, 3.8331999999999997, 2.4796217391304354, 2.0784005998286204, 0.9691838720538721, 0.0), # 22
(1.867717950687738, 3.892266610900674, 3.122467183483291, 1.655816938405797, 0.9604091346153847, 0.0, 2.8244141077898557, 3.8416365384615387, 2.483725407608696, 2.0816447889888607, 0.9730666527251685, 0.0), # 23
(1.8755531826163303, 3.9064111363636362, 3.1267205976863752, 1.6582108695652176, 0.9622557692307692, 0.0, 2.8227913043478265, 3.8490230769230767, 2.4873163043478264, 2.0844803984575835, 0.9766027840909091, 0.0), # 24
(1.8829300190225344, 3.919130497685185, 3.1303500562339335, 1.6602558876811595, 0.9638341346153845, 0.0, 2.8211418704710147, 3.855336538461538, 2.4903838315217395, 2.086900037489289, 0.9797826244212963, 0.0), # 25
(1.8898359379572727, 3.930386127946128, 3.1333444730077127, 1.661944927536232, 0.9651384615384615, 0.0, 2.819465942028986, 3.860553846153846, 2.492917391304348, 2.088896315338475, 0.982596531986532, 0.0), # 26
(1.8962584174714663, 3.940139460227272, 3.1356927618894597, 1.6632709239130437, 0.966162980769231, 0.0, 2.817763654891305, 3.864651923076924, 2.4949063858695655, 2.0904618412596396, 0.985034865056818, 0.0), # 27
(1.9021849356160379, 3.948351927609427, 3.1373838367609257, 1.664226811594203, 0.9669019230769231, 0.0, 2.8160351449275365, 3.8676076923076925, 2.4963402173913045, 2.091589224507284, 0.9870879819023568, 0.0), # 28
(1.9076029704419084, 3.954984963173401, 3.138406611503856, 1.664805525362319, 0.9673495192307693, 0.0, 2.814280548007247, 3.869398076923077, 2.4972082880434785, 2.092271074335904, 0.9887462407933503, 0.0), # 29
(1.9125000000000003, 3.9600000000000004, 3.1387500000000004, 1.665, 0.9675, 0.0, 2.8125000000000004, 3.87, 2.4975, 2.0925000000000002, 0.9900000000000001, 0.0), # 30
(1.9170822170716115, 3.9641658238636355, 3.138492581521739, 1.6649648774509804, 0.9674452393617023, 0.0, 2.8100257558720645, 3.8697809574468094, 2.497447316176471, 2.0923283876811594, 0.9910414559659089, 0.0), # 31
(1.92156550511509, 3.9682765909090914, 3.137727608695652, 1.6648601960784315, 0.9672821276595746, 0.0, 2.8062148550724646, 3.869128510638298, 2.497290294117647, 2.091818405797101, 0.9920691477272728, 0.0), # 32
(1.925951878196931, 3.9723317897727273, 3.1364660054347833, 1.6646869852941177, 0.9670124202127659, 0.0, 2.8011046101949026, 3.8680496808510636, 2.497030477941177, 2.090977336956522, 0.9930829474431818, 0.0), # 33
(1.9302433503836318, 3.976330909090909, 3.134718695652174, 1.664446274509804, 0.9666378723404256, 0.0, 2.794732333833084, 3.8665514893617026, 2.496669411764706, 2.0898124637681157, 0.9940827272727273, 0.0), # 34
(1.9344419357416882, 3.9802734374999997, 3.13249660326087, 1.664139093137255, 0.9661602393617023, 0.0, 2.78713533858071, 3.864640957446809, 2.4962086397058827, 2.0883310688405796, 0.9950683593749999, 0.0), # 35
(1.9385496483375964, 3.984158863636364, 3.1298106521739135, 1.6637664705882356, 0.9655812765957449, 0.0, 2.7783509370314845, 3.8623251063829795, 2.4956497058823537, 2.086540434782609, 0.996039715909091, 0.0), # 36
(1.9425685022378518, 3.987986676136364, 3.126671766304348, 1.66332943627451, 0.9649027393617021, 0.0, 2.768416441779111, 3.8596109574468085, 2.494994154411765, 2.0844478442028986, 0.996996669034091, 0.0), # 37
(1.9465005115089515, 3.9917563636363633, 3.1230908695652175, 1.662829019607843, 0.9641263829787234, 0.0, 2.757369165417291, 3.8565055319148938, 2.4942435294117646, 2.082060579710145, 0.9979390909090908, 0.0), # 38
(1.9503476902173915, 3.995467414772727, 3.119078885869565, 1.6622662500000003, 0.9632539627659574, 0.0, 2.7452464205397304, 3.8530158510638297, 2.4933993750000005, 2.079385923913043, 0.9988668536931817, 0.0), # 39
(1.9541120524296678, 3.9991193181818185, 3.114646739130435, 1.661642156862745, 0.9622872340425531, 0.0, 2.73208551974013, 3.8491489361702125, 2.492463235294118, 2.0764311594202898, 0.9997798295454546, 0.0), # 40
(1.9577956122122764, 4.0027115625, 3.10980535326087, 1.660957769607843, 0.9612279521276595, 0.0, 2.717923775612195, 3.844911808510638, 2.4914366544117645, 2.07320356884058, 1.000677890625, 0.0), # 41
(1.9614003836317138, 4.006243636363638, 3.1045656521739136, 1.6602141176470588, 0.9600778723404256, 0.0, 2.702798500749626, 3.8403114893617025, 2.490321176470588, 2.0697104347826087, 1.0015609090909094, 0.0), # 42
(1.9649283807544762, 4.00971502840909, 3.0989385597826087, 1.6594122303921572, 0.9588387500000001, 0.0, 2.6867470077461273, 3.8353550000000003, 2.4891183455882357, 2.0659590398550725, 1.0024287571022725, 0.0), # 43
(1.968381617647059, 4.013125227272727, 3.0929350000000007, 1.6585531372549018, 0.9575123404255319, 0.0, 2.6698066091954025, 3.8300493617021276, 2.487829705882353, 2.061956666666667, 1.0032813068181818, 0.0), # 44
(1.971762108375959, 4.016473721590909, 3.086565896739131, 1.657637867647059, 0.9561003989361703, 0.0, 2.652014617691155, 3.824401595744681, 2.4864568014705886, 2.0577105978260875, 1.0041184303977273, 0.0), # 45
(1.9750718670076732, 4.019760000000001, 3.0798421739130446, 1.6566674509803923, 0.954604680851064, 0.0, 2.633408345827087, 3.818418723404256, 2.4850011764705884, 2.0532281159420296, 1.0049400000000002, 0.0), # 46
(1.978312907608696, 4.022983551136364, 3.0727747554347826, 1.6556429166666666, 0.9530269414893617, 0.0, 2.6140251061969018, 3.812107765957447, 2.483464375, 2.0485165036231883, 1.005745887784091, 0.0), # 47
(1.981487244245525, 4.026143863636364, 3.0653745652173914, 1.654565294117647, 0.9513689361702128, 0.0, 2.5939022113943038, 3.805475744680851, 2.4818479411764707, 2.043583043478261, 1.006535965909091, 0.0), # 48
(1.9845968909846547, 4.029240426136363, 3.0576525271739134, 1.6534356127450982, 0.9496324202127661, 0.0, 2.5730769740129937, 3.7985296808510642, 2.4801534191176473, 2.0384350181159423, 1.0073101065340908, 0.0), # 49
(1.9876438618925836, 4.032272727272726, 3.0496195652173914, 1.6522549019607846, 0.9478191489361703, 0.0, 2.551586706646677, 3.791276595744681, 2.478382352941177, 2.0330797101449276, 1.0080681818181816, 0.0), # 50
(1.990630171035806, 4.035240255681818, 3.04128660326087, 1.6510241911764707, 0.9459308776595745, 0.0, 2.5294687218890557, 3.783723510638298, 2.476536286764706, 2.0275244021739134, 1.0088100639204545, 0.0), # 51
(1.9935578324808187, 4.0381425, 3.0326645652173916, 1.6497445098039216, 0.9439693617021278, 0.0, 2.5067603323338337, 3.775877446808511, 2.4746167647058828, 2.0217763768115944, 1.009535625, 0.0), # 52
(1.996428860294118, 4.0409789488636365, 3.0237643750000003, 1.6484168872549019, 0.9419363563829788, 0.0, 2.483498850574713, 3.767745425531915, 2.472625330882353, 2.0158429166666667, 1.0102447372159091, 0.0), # 53
(1.9992452685422, 4.043749090909091, 3.014596956521739, 1.6470423529411766, 0.9398336170212767, 0.0, 2.4597215892053974, 3.7593344680851066, 2.470563529411765, 2.009731304347826, 1.0109372727272727, 0.0), # 54
(2.0020090712915604, 4.046452414772727, 3.005173233695652, 1.6456219362745101, 0.9376628989361703, 0.0, 2.4354658608195905, 3.750651595744681, 2.4684329044117654, 2.003448822463768, 1.0116131036931817, 0.0), # 55
(2.0047222826086957, 4.049088409090909, 2.995504130434783, 1.6441566666666665, 0.9354259574468086, 0.0, 2.410768978010995, 3.7417038297872343, 2.4662349999999997, 1.9970027536231885, 1.0122721022727272, 0.0), # 56
(2.007386916560103, 4.051656562500001, 2.9856005706521738, 1.6426475735294117, 0.9331245478723404, 0.0, 2.3856682533733133, 3.732498191489362, 2.4639713602941176, 1.9904003804347825, 1.0129141406250002, 0.0), # 57
(2.0100049872122767, 4.054156363636363, 2.9754734782608696, 1.64109568627451, 0.930760425531915, 0.0, 2.3602009995002504, 3.72304170212766, 2.461643529411765, 1.9836489855072463, 1.0135390909090907, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
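Comparing rows of `passenger_arriving_rate` against `station_arriving_intensity` suggests how the 12 columns are built: the first 6 look like the station intensities scaled by the fraction of onward stations in the forward direction, and the last 6 like the same intensities reversed for the return direction. This is a regularity observed in the numbers above, not documented structure; a sketch checking that reading on time step 0:

```python
# Time step 0 of station_arriving_intensity and passenger_arriving_rate,
# copied from the data above.
intensity = (1.5897909350307289, 4.077876420454546, 4.7965416131105405,
             3.8017663043478263, 4.285817307692308, 2.8540760869565225)
rate = (1.5897909350307289, 3.2623011363636363, 2.877924967866324,
        1.5207065217391305, 0.8571634615384615, 0.0,
        2.8540760869565225, 3.428653846153846, 2.2810597826086956,
        1.918616645244216, 0.8155752840909091, 0.0)

n = len(intensity)                           # 6 stations per direction
frac = [1 - s / (n - 1) for s in range(n)]   # share of onward stations
forward = [i * f for i, f in zip(intensity, frac)]
backward = [i * f for i, f in zip(reversed(intensity), frac)]
assert all(abs(a - b) < 1e-9 for a, b in zip(forward + backward, rate))
```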
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibiliy. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
#initial entropy
entropy = 258194110137029475889902652135037600173
#index for seed sequence child
child_seed_index = (
1, # 0
89, # 1
)
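The `entropy` and `child_seed_index` values above follow NumPy's recommended parallel-RNG recipe (linked in the docstring). A minimal sketch of how such stored values are typically turned back into independent generators — the spawn count here is an assumption, chosen only to cover the largest recorded child index:

```python
import numpy as np

entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 89)

# Rebuild the parent sequence from the stored entropy; spawning is
# deterministic, so the same entropy always yields the same children.
parent = np.random.SeedSequence(entropy)
children = parent.spawn(max(child_seed_index) + 1)

# One independent generator per recorded child index.
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

Because `SeedSequence.spawn` is deterministic, re-running this code reproduces exactly the same streams.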
# pymedphys/_mudensity/delivery/__init__.py
# repo: pymedphys/pymedphys-archive-2019 (Apache-2.0)
from .core import DeliveryMuDensity
# mcore/enum.py
# repo: eruvanos/mcore (MIT)
from enum import Enum


class AutoNameEnum(Enum):
    def _generate_next_value_(self, start, count, last):
        return self

    def __repr__(self):
        return self.value
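A quick usage sketch: with this mix-in, `enum.auto()` assigns each member its own name as its value, because `_generate_next_value_` receives the member name as its first argument (the `Color` enum below is illustrative, not part of the library):

```python
from enum import Enum, auto


class AutoNameEnum(Enum):
    # Enum passes the member name as the first positional argument here
    # (bound to `self`), so returning it makes auto() use the name itself.
    def _generate_next_value_(self, start, count, last):
        return self

    def __repr__(self):
        return self.value


class Color(AutoNameEnum):
    RED = auto()
    GREEN = auto()


print(Color.RED.value)    # -> RED
print(repr(Color.GREEN))  # -> GREEN
```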
# filtering_class3/test_assignment3.py
# repo: eriksonJAguiar/imageproc_SCC5830_icmc (MIT)
import unittest
from os import listdir

import imageio
import numpy as np
from matplotlib import pyplot as plt

from assignment3 import *
def read_in_out():
    in_ = list()
    out_ = list()
    path = './CasosDeTeste/'
    for f in listdir(path):
        if f.endswith('.in'):
            i = open(path + f).read().splitlines()
            in_.append(i)
        elif f.endswith('.out'):
            o = open(path + f).read().splitlines()
            out_.append(o[0])
    return (in_, out_)
class TestAssignment(unittest.TestCase):

    def test_filter_1d(self):
        imgref = imageio.imread('arara.png', as_gray=True)
        w = np.array([-2, -1, 0, 1, 2])
        img_hat = filter_1d(imgref, w)

        plt.figure(figsize=(12, 12))
        plt.subplot(121)
        plt.imshow(imgref, cmap="gray", vmin=0, vmax=255)
        plt.title('Original')
        plt.axis('off')
        plt.colorbar()
        plt.subplot(122)
        plt.imshow(img_hat, cmap="gray", vmin=0, vmax=255)
        plt.title('Filtered')
        plt.axis('off')
        plt.colorbar()
        plt.show()

        print(root_mean_square_error(imgref, img_hat))

    def test_filter_2d(self):
        imgref = imageio.imread('image02_quant.png')
        w = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]])
        img_hat = filter_2d(imgref, w)

        plt.figure(figsize=(12, 12))
        plt.subplot(121)
        plt.imshow(imgref, cmap="gray", vmin=0, vmax=255)
        plt.title('Original')
        plt.axis('off')
        plt.colorbar()
        plt.subplot(122)
        plt.imshow(img_hat, cmap="gray", vmin=0, vmax=255)
        plt.title('Filtered')
        plt.axis('off')
        plt.colorbar()
        plt.show()

        print(root_mean_square_error(imgref, img_hat))

    def test_filter_median(self):
        imgref = imageio.imread('image02_salted.png').astype(np.uint8)
        img_hat = median_filter(imgref, 5)
        # img_hat = ndimage.median_filter(imgref, 5)

        plt.figure(figsize=(12, 12))
        plt.subplot(121)
        plt.imshow(imgref, cmap="gray", vmin=0, vmax=255)
        plt.colorbar()
        plt.title('Original')
        plt.axis('off')
        plt.subplot(122)
        plt.imshow(img_hat, cmap="gray", vmin=0, vmax=255)
        plt.title('Filtered')
        plt.colorbar()
        plt.axis('off')
        plt.show()

        print(root_mean_square_error(imgref, img_hat))
    def test_integrated_case1(self):
        in_, out_ = read_in_out()
        i, o = in_[0], out_[0]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w = np.array(i[3].split(' ')).astype(int)
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case2(self):
        in_, out_ = read_in_out()
        i, o = in_[1], out_[1]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w = int(i[2])
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case3(self):
        in_, out_ = read_in_out()
        i, o = in_[2], out_[2]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w_aux = []
        for row in range(3, len(i)):
            w_aux.append(i[row].split(' '))
        w = np.array(w_aux).astype(int)
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case4(self):
        in_, out_ = read_in_out()
        i, o = in_[3], out_[3]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w = np.array(i[3].split(' ')).astype(int)
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case5(self):
        in_, out_ = read_in_out()
        i, o = in_[4], out_[4]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w = int(i[2])
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))
    def test_integrated_case6(self):
        in_, out_ = read_in_out()
        i, o = in_[5], out_[5]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w_aux = []
        for row in range(3, len(i)):
            w_aux.append(i[row].split(' '))
        w = np.array(w_aux).astype(int)
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case7(self):
        in_, out_ = read_in_out()
        i, o = in_[6], out_[6]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w_aux = []
        for row in range(3, len(i)):
            w_aux.append(i[row].split(' '))
        w = np.array(w_aux).astype(int)
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))

    def test_integrated_case8(self):
        in_, out_ = read_in_out()
        i, o = in_[7], out_[7]
        method = int(i[1])
        imgref = imageio.imread(i[0], as_gray=True)
        w = int(i[2])
        rmse = select_method(imgref, method, w)
        print('rmse: %f; real: %f' % (rmse, float(o)))
        self.assertTrue(rmse >= (float(o) - 5) and rmse <= (float(o) + 5))


if __name__ == '__main__':
    unittest.main()
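Each test above compares a filtered image against a reference via `root_mean_square_error`, imported from `assignment3`, which is not shown in this file. A plausible reference implementation, offered only as an assumption about that helper:

```python
import numpy as np


def root_mean_square_error(ref, est):
    # Cast to float first so uint8 images do not wrap around on subtraction.
    diff = ref.astype(np.float64) - est.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))


print(root_mean_square_error(np.zeros((2, 2)), np.full((2, 2), 3.0)))  # -> 3.0
```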
# test/layer/test_conv2d.py
# repo: ControlNet/tensorneko (MIT)
import unittest

import torch
import torch.nn as nn
from fn import F
from torch import Tensor

from tensorneko.layer import Conv2d
class TestConv2d(unittest.TestCase):
    """The test class for :class:`tensorneko.layer.Conv2d`."""

    @property
    def b(self):
        return 8

    @property
    def h(self):
        return 32

    @property
    def w(self):
        return 32

    @property
    def c(self):
        return self.in_features

    @property
    def in_features(self):
        return 3

    @property
    def out_features(self):
        return 64

    @property
    def kernel_size(self):
        return 3, 3

    @property
    def stride(self):
        return 1, 1

    @property
    def padding(self):
        return 1, 1

    @property
    def activation_factory(self):
        return nn.ReLU

    @property
    def normalization_factory(self):
        return F(nn.BatchNorm2d, self.out_features)
    def test_single_layer(self):
        """A single layer without batch normalization and activation."""
        # Create a batch of size 8
        x = torch.rand(self.b, self.c, self.h, self.w)
        # Create a single layer
        neko_layer = Conv2d(self.in_features, self.out_features,
                            self.kernel_size, self.stride, self.padding)
        torch_layer = nn.Sequential(neko_layer.conv)
        # Forward prop
        neko_result: Tensor = neko_layer(x)
        pytorch_result: Tensor = torch_layer(x)
        # Check the output
        self.assertTrue((neko_result == pytorch_result).all())

    def test_single_layer_with_activation(self):
        """A single layer with activation."""
        # Create a batch of size 8
        x = torch.rand(self.b, self.c, self.h, self.w)
        # Create a single layer
        neko_layer = Conv2d(self.in_features, self.out_features,
                            self.kernel_size, self.stride, self.padding,
                            build_activation=self.activation_factory)
        torch_layer = nn.Sequential(neko_layer.conv, neko_layer.activation)
        # Forward prop
        neko_result: Tensor = neko_layer(x)
        pytorch_result: Tensor = torch_layer(x)
        # Check the output
        self.assertTrue((neko_result == pytorch_result).all())

    def test_single_layer_with_normalization(self):
        """A single layer with batch normalization."""
        # Create a batch of size 8
        x = torch.rand(self.b, self.c, self.h, self.w)
        # Create a single layer
        neko_layer = Conv2d(self.in_features, self.out_features,
                            self.kernel_size, self.stride, self.padding,
                            build_normalization=self.normalization_factory)
        torch_layer = nn.Sequential(neko_layer.conv, neko_layer.normalization)
        # Forward prop
        neko_result: Tensor = neko_layer(x)
        pytorch_result: Tensor = torch_layer(x)
        # Check the output
        self.assertTrue((neko_result == pytorch_result).all())

    def test_single_layer_with_normalization_and_activation(self):
        """A single layer with batch normalization and activation."""
        # Create a batch of size 8
        x = torch.rand(self.b, self.c, self.h, self.w)
        # Create a single layer
        neko_layer = Conv2d(self.in_features, self.out_features,
                            self.kernel_size, self.stride, self.padding,
                            build_activation=self.activation_factory,
                            build_normalization=self.normalization_factory)
        torch_layer = nn.Sequential(neko_layer.conv, neko_layer.normalization,
                                    neko_layer.activation)
        # Forward prop
        neko_result: Tensor = neko_layer(x)
        pytorch_result: Tensor = torch_layer(x)
        # Check the output
        self.assertTrue((neko_result == pytorch_result).all())
# feeds/managers/base.py
# repo: kbase/feeds (MIT)


class BaseManager(object):
    def __init__(self):
        pass

    def get_target_users(self, activity):
        """
        TODO: Abstract some basic functionality here for the generic activity type.
        """
        return []
# statsmodels/tsa/statespace/tests/test_prediction.py
# repo: CCHiggins/statsmodels (BSD-3-Clause)
"""
Tests for prediction of state space models
Author: Chad Fulton
License: Simplified-BSD
"""
import pytest
import numpy as np
import pandas as pd
from numpy.testing import assert_equal, assert_raises, assert_allclose, assert_
from statsmodels import datasets
from statsmodels.tsa.statespace import sarimax, varmax
from statsmodels.tsa.statespace.tests.test_impulse_responses import TVSS
dta = datasets.macrodata.load_pandas().data
dta.index = pd.period_range(start='1959Q1', end='2009Q3', freq='Q')
def test_predict_dates():
    index = pd.date_range(start='1950-01-01', periods=11, freq='D')
    np.random.seed(324328)
    endog = pd.Series(np.random.normal(size=10), index=index[:-1])

    # Basic test
    mod = sarimax.SARIMAX(endog, order=(1, 0, 0))
    res = mod.filter(mod.start_params)

    # In-sample prediction should have the same index
    pred = res.predict()
    assert_equal(len(pred), mod.nobs)
    assert_equal(pred.index.values, index[:-1].values)
    # Out-of-sample forecasting should extend the index appropriately
    fcast = res.forecast()
    assert_equal(fcast.index[0], index[-1])

    # Simple differencing in the SARIMAX model should eliminate dates of
    # series eliminated due to differencing
    mod = sarimax.SARIMAX(endog, order=(1, 1, 0), simple_differencing=True)
    res = mod.filter(mod.start_params)
    pred = res.predict()
    # In-sample prediction should lose the first index value
    assert_equal(mod.nobs, endog.shape[0] - 1)
    assert_equal(len(pred), mod.nobs)
    assert_equal(pred.index.values, index[1:-1].values)
    # Out-of-sample forecasting should still extend the index appropriately
    fcast = res.forecast()
    assert_equal(fcast.index[0], index[-1])

    # Simple differencing again, this time with a more complex differencing
    # structure
    mod = sarimax.SARIMAX(endog, order=(1, 2, 0), seasonal_order=(0, 1, 0, 4),
                          simple_differencing=True)
    res = mod.filter(mod.start_params)
    pred = res.predict()
    # In-sample prediction should lose the first 6 index values
    assert_equal(mod.nobs, endog.shape[0] - (4 + 2))
    assert_equal(len(pred), mod.nobs)
    assert_equal(pred.index.values, index[4 + 2:-1].values)
    # Out-of-sample forecasting should still extend the index appropriately
    fcast = res.forecast()
    assert_equal(fcast.index[0], index[-1])
def test_memory_no_predicted():
    # Tests for forecasts when memory_no_predicted is set
    endog = [0.5, 1.2, 0.4, 0.6]

    mod = sarimax.SARIMAX(endog, order=(1, 0, 0))
    res1 = mod.filter([0.5, 1.])
    mod.ssm.memory_no_predicted = True
    res2 = mod.filter([0.5, 1.])

    # Make sure we really didn't store all of the values in res2
    assert_equal(res1.predicted_state.shape, (1, 5))
    assert_(res2.predicted_state is None)
    assert_equal(res1.predicted_state_cov.shape, (1, 1, 5))
    assert_(res2.predicted_state_cov is None)

    # Check that we can't do dynamic in-sample prediction
    assert_raises(ValueError, res2.predict, dynamic=True)
    assert_raises(ValueError, res2.get_prediction, dynamic=True)

    # Make sure the point forecasts are the same
    assert_allclose(res1.forecast(10), res2.forecast(10))

    # Make sure the confidence intervals are the same
    fcast1 = res1.get_forecast(10)
    fcast2 = res2.get_forecast(10)
    assert_allclose(fcast1.summary_frame(), fcast2.summary_frame())
@pytest.mark.parametrize('use_exog', [True, False])
@pytest.mark.parametrize('trend', ['n', 'c', 't'])
def test_concatenated_predict_sarimax(use_exog, trend):
    endog = np.arange(100).reshape(100, 1) * 1.0
    exog = np.ones(100) if use_exog else None
    if use_exog:
        exog[10:30] = 2.

    trend_params = [0.1]
    ar_params = [0.5]
    exog_params = [1.2]
    var_params = [1.]

    params = []
    if trend in ['c', 't']:
        params += trend_params
    params += ar_params
    if use_exog:
        params += exog_params
    params += var_params

    y1 = endog.copy()
    y1[-50:] = np.nan
    mod1 = sarimax.SARIMAX(y1, order=(1, 1, 0), trend=trend, exog=exog)
    res1 = mod1.smooth(params)
    p1 = res1.get_prediction()
    pr1 = p1.prediction_results

    x2 = exog[:50] if use_exog else None
    mod2 = sarimax.SARIMAX(endog[:50], order=(1, 1, 0), trend=trend, exog=x2)
    res2 = mod2.smooth(params)
    x2f = exog[50:] if use_exog else None
    p2 = res2.get_prediction(start=0, end=99, exog=x2f)
    pr2 = p2.prediction_results

    attrs = (
        pr1.representation_attributes
        + pr1.filter_attributes
        + pr1.smoother_attributes)

    for key in attrs:
        assert_allclose(getattr(pr2, key), getattr(pr1, key))
@pytest.mark.parametrize('use_exog', [True, False])
@pytest.mark.parametrize('trend', ['n', 'c', 't'])
def test_concatenated_predict_varmax(use_exog, trend):
    endog = np.arange(200).reshape(100, 2) * 1.0
    exog = np.ones(100) if use_exog else None

    trend_params = [0.1, 0.2]
    var_params = [0.5, -0.1, 0.0, 0.2]
    exog_params = [1., 2.]
    cov_params = [1., 0., 1.]

    params = []
    if trend in ['c', 't']:
        params += trend_params
    params += var_params
    if use_exog:
        params += exog_params
    params += cov_params

    y1 = endog.copy()
    y1[-50:] = np.nan
    mod1 = varmax.VARMAX(y1, order=(1, 0), trend=trend, exog=exog)
    res1 = mod1.smooth(params)
    p1 = res1.get_prediction()
    pr1 = p1.prediction_results

    x2 = exog[:50] if use_exog else None
    mod2 = varmax.VARMAX(endog[:50], order=(1, 0), trend=trend, exog=x2)
    res2 = mod2.smooth(params)
    x2f = exog[50:] if use_exog else None
    p2 = res2.get_prediction(start=0, end=99, exog=x2f)
    pr2 = p2.prediction_results

    attrs = (
        pr1.representation_attributes
        + pr1.filter_attributes
        + pr1.smoother_attributes)

    for key in attrs:
        assert_allclose(getattr(pr2, key), getattr(pr1, key))
@pytest.mark.parametrize('use_exog', [True, False])
@pytest.mark.parametrize('trend', ['n', 'c', 't'])
def test_predicted_filtered_smoothed_with_nans(use_exog, trend):
    # In this test, we construct a model with only NaN values for `endog`, so
    # that predicted, filtered, and smoothed forecasts should all be the
    # same
    endog = np.zeros(200).reshape(100, 2) * np.nan
    exog = np.ones(100) if use_exog else None

    trend_params = [0.1, 0.2]
    var_params = [0.5, -0.1, 0.0, 0.2]
    exog_params = [1., 2.]
    cov_params = [1., 0., 1.]

    params = []
    if trend in ['c', 't']:
        params += trend_params
    params += var_params
    if use_exog:
        params += exog_params
    params += cov_params

    x_fit = exog[:50] if use_exog else None
    mod = varmax.VARMAX(endog[:50], order=(1, 0), trend=trend, exog=x_fit)
    res = mod.smooth(params)

    x_fcast = exog[50:61] if use_exog else None
    p_pred = res.get_prediction(
        start=0, end=60, information_set='predicted',
        exog=x_fcast)
    f_pred = res.get_prediction(
        start=0, end=60, information_set='filtered',
        exog=x_fcast)
    s_pred = res.get_prediction(
        start=0, end=60, information_set='smoothed',
        exog=x_fcast)

    # Test forecasts
    assert_allclose(s_pred.predicted_mean, p_pred.predicted_mean)
    assert_allclose(s_pred.var_pred_mean, p_pred.var_pred_mean)
    assert_allclose(f_pred.predicted_mean, p_pred.predicted_mean)
    assert_allclose(f_pred.var_pred_mean, p_pred.var_pred_mean)
    assert_allclose(p_pred.predicted_mean[:50], res.fittedvalues)
    assert_allclose(p_pred.var_pred_mean[:50].T, res.forecasts_error_cov)

    p_signal = res.get_prediction(
        start=0, end=60, information_set='predicted', signal_only=True,
        exog=x_fcast)
    f_signal = res.get_prediction(
        start=0, end=60, information_set='filtered', signal_only=True,
        exog=x_fcast)
    s_signal = res.get_prediction(
        start=0, end=60, information_set='smoothed', signal_only=True,
        exog=x_fcast)

    # Test signal predictions
    assert_allclose(s_signal.predicted_mean, p_signal.predicted_mean)
    assert_allclose(s_signal.var_pred_mean, p_signal.var_pred_mean)
    assert_allclose(f_signal.predicted_mean, p_signal.predicted_mean)
    assert_allclose(f_signal.var_pred_mean, p_signal.var_pred_mean)
    if use_exog is False and trend == 'n':
        assert_allclose(p_signal.predicted_mean[:50], res.fittedvalues)
        assert_allclose(p_signal.var_pred_mean[:50].T, res.forecasts_error_cov)
    else:
        assert_allclose(p_signal.predicted_mean[:50] + mod['obs_intercept'],
                        res.fittedvalues)
        assert_allclose((p_signal.var_pred_mean[:50] + mod['obs_cov']).T,
                        res.forecasts_error_cov)


def test_predicted_filtered_smoothed_with_nans_TVSS(reset_randomstate):
    mod = TVSS(np.zeros((50, 2)) * np.nan)
    mod.ssm.initialize_known([1.2, 0.8], np.eye(2))
    res = mod.smooth([])

    mod_oos = TVSS(np.zeros((11, 2)) * np.nan)
    kwargs = {key: mod_oos[key] for key in [
        'obs_intercept', 'design', 'obs_cov',
        'transition', 'selection', 'state_cov']}

    p_pred = res.get_prediction(
        start=0, end=60, information_set='predicted',
        **kwargs)
    f_pred = res.get_prediction(
        start=0, end=60, information_set='filtered',
        **kwargs)
    s_pred = res.get_prediction(
        start=0, end=60, information_set='smoothed',
        **kwargs)

    # Test forecasts
    assert_allclose(s_pred.predicted_mean, p_pred.predicted_mean)
    assert_allclose(s_pred.var_pred_mean, p_pred.var_pred_mean)
    assert_allclose(f_pred.predicted_mean, p_pred.predicted_mean)
    assert_allclose(f_pred.var_pred_mean, p_pred.var_pred_mean)
    assert_allclose(p_pred.predicted_mean[:50], res.fittedvalues)
    assert_allclose(p_pred.var_pred_mean[:50].T, res.forecasts_error_cov)

    p_signal = res.get_prediction(
        start=0, end=60, information_set='predicted', signal_only=True,
        **kwargs)
    f_signal = res.get_prediction(
        start=0, end=60, information_set='filtered', signal_only=True,
        **kwargs)
    s_signal = res.get_prediction(
        start=0, end=60, information_set='smoothed', signal_only=True,
        **kwargs)

    # Test signal predictions
    assert_allclose(s_signal.predicted_mean, p_signal.predicted_mean)
    assert_allclose(s_signal.var_pred_mean, p_signal.var_pred_mean)
    assert_allclose(f_signal.predicted_mean, p_signal.predicted_mean)
    assert_allclose(f_signal.var_pred_mean, p_signal.var_pred_mean)
    assert_allclose(p_signal.predicted_mean[:50] + mod['obs_intercept'].T,
                    res.fittedvalues)
    assert_allclose((p_signal.var_pred_mean[:50] + mod['obs_cov'].T).T,
                    res.forecasts_error_cov)


@pytest.mark.parametrize('use_exog', [True, False])
@pytest.mark.parametrize('trend', ['n', 'c', 't'])
def test_predicted_filtered_smoothed_varmax(use_exog, trend):
    endog = np.log(dta[['realgdp', 'cpi']])
    if trend in ['n', 'c']:
        endog = endog.diff().iloc[1:] * 100
    if trend == 'n':
        endog -= endog.mean()
    exog = np.ones(100) if use_exog else None
    if use_exog:
        exog[20:40] = 2.

    trend_params = [0.1, 0.2]
    var_params = [0.5, -0.1, 0.0, 0.2]
    exog_params = [1., 2.]
    cov_params = [1., 0., 1.]

    params = []
    if trend in ['c', 't']:
        params += trend_params
    params += var_params
    if use_exog:
        params += exog_params
    params += cov_params

    x_fit = exog[:50] if use_exog else None
    mod = varmax.VARMAX(endog[:50], order=(1, 0), trend=trend, exog=x_fit)
    # Add in an obs_intercept and obs_cov to make the test more comprehensive
    mod['obs_intercept'] = [5, -2.]
    mod['obs_cov'] = np.array([[1.2, 0.3],
                               [0.3, 3.4]])
    res = mod.smooth(params)

    x_fcast = exog[50:61] if use_exog else None
    p_pred = res.get_prediction(
        start=0, end=60, information_set='predicted',
        exog=x_fcast)
    f_pred = res.get_prediction(
        start=0, end=60, information_set='filtered',
        exog=x_fcast)
    s_pred = res.get_prediction(
        start=0, end=60, information_set='smoothed',
        exog=x_fcast)

    # Test forecasts
    fcast = res.get_forecast(11, exog=x_fcast)
    d = mod['obs_intercept'][:, None]
    Z = mod['design']
    H = mod['obs_cov'][:, :, None]

    desired_s_signal = Z @ res.smoothed_state
    desired_f_signal = Z @ res.filtered_state
    desired_p_signal = Z @ res.predicted_state[..., :-1]
    assert_allclose(s_pred.predicted_mean[:50], (d + desired_s_signal).T)
    assert_allclose(s_pred.predicted_mean[50:], fcast.predicted_mean)
    assert_allclose(f_pred.predicted_mean[:50], (d + desired_f_signal).T)
    assert_allclose(f_pred.predicted_mean[50:], fcast.predicted_mean)
    assert_allclose(p_pred.predicted_mean[:50], (d + desired_p_signal).T)
    assert_allclose(p_pred.predicted_mean[50:], fcast.predicted_mean)

    desired_s_signal_cov = (
        Z[None, :, :] @ res.smoothed_state_cov.T @ Z.T[None, :, :])
    desired_f_signal_cov = (
        Z[None, :, :] @ res.filtered_state_cov.T @ Z.T[None, :, :])
    desired_p_signal_cov = (
        Z[None, :, :] @ res.predicted_state_cov[..., :-1].T @ Z.T[None, :, :])
    assert_allclose(s_pred.var_pred_mean[:50], (desired_s_signal_cov.T + H).T)
    assert_allclose(s_pred.var_pred_mean[50:], fcast.var_pred_mean)
    assert_allclose(f_pred.var_pred_mean[:50], (desired_f_signal_cov.T + H).T)
    assert_allclose(f_pred.var_pred_mean[50:], fcast.var_pred_mean)
    assert_allclose(p_pred.var_pred_mean[:50], (desired_p_signal_cov.T + H).T)
    assert_allclose(p_pred.var_pred_mean[50:], fcast.var_pred_mean)

    p_signal = res.get_prediction(
        start=0, end=60, information_set='predicted', signal_only=True,
        exog=x_fcast)
    f_signal = res.get_prediction(
        start=0, end=60, information_set='filtered', signal_only=True,
        exog=x_fcast)
    s_signal = res.get_prediction(
        start=0, end=60, information_set='smoothed', signal_only=True,
        exog=x_fcast)

    # Test signal predictions
    fcast_signal = fcast.predicted_mean - d.T
    fcast_signal_cov = (fcast.var_pred_mean.T - H).T
    assert_allclose(s_signal.predicted_mean[:50], desired_s_signal.T)
    assert_allclose(s_signal.predicted_mean[50:], fcast_signal)
    assert_allclose(f_signal.predicted_mean[:50], desired_f_signal.T)
    assert_allclose(f_signal.predicted_mean[50:], fcast_signal)
    assert_allclose(p_signal.predicted_mean[:50], desired_p_signal.T)
    assert_allclose(p_signal.predicted_mean[50:], fcast_signal)
    assert_allclose(s_signal.var_pred_mean[:50], desired_s_signal_cov)
    assert_allclose(s_signal.var_pred_mean[50:], fcast_signal_cov)
    assert_allclose(f_signal.var_pred_mean[:50], desired_f_signal_cov)
    assert_allclose(f_signal.var_pred_mean[50:], fcast_signal_cov)
    assert_allclose(p_signal.var_pred_mean[:50], desired_p_signal_cov)
    assert_allclose(p_signal.var_pred_mean[50:], fcast_signal_cov)


def test_predicted_filtered_smoothed_TVSS(reset_randomstate):
    mod = TVSS(np.zeros((50, 2)))
    mod.ssm.initialize_known([1.2, 0.8], np.eye(2))
    res = mod.smooth([])

    mod_oos = TVSS(np.zeros((11, 2)) * np.nan)
    kwargs = {key: mod_oos[key] for key in [
        'obs_intercept', 'design', 'obs_cov',
        'transition', 'selection', 'state_cov']}

    p_pred = res.get_prediction(
        start=0, end=60, information_set='predicted',
        **kwargs)
    f_pred = res.get_prediction(
        start=0, end=60, information_set='filtered',
        **kwargs)
    s_pred = res.get_prediction(
        start=0, end=60, information_set='smoothed',
        **kwargs)

    p_signal = res.get_prediction(
        start=0, end=60, information_set='predicted', signal_only=True,
        **kwargs)
    f_signal = res.get_prediction(
        start=0, end=60, information_set='filtered', signal_only=True,
        **kwargs)
    s_signal = res.get_prediction(
        start=0, end=60, information_set='smoothed', signal_only=True,
        **kwargs)

    # Test forecasts and signals
    d = mod['obs_intercept'].transpose(1, 0)[:, :, None]
    Z = mod['design'].transpose(2, 0, 1)
    H = mod['obs_cov'].transpose(2, 0, 1)
    fcast = res.get_forecast(11, **kwargs)
    fcast_signal = fcast.predicted_mean - mod_oos['obs_intercept'].T
    fcast_signal_cov = fcast.var_pred_mean - mod_oos['obs_cov'].T

    desired_s_signal = Z @ res.smoothed_state.T[:, :, None]
    desired_f_signal = Z @ res.filtered_state.T[:, :, None]
    desired_p_signal = Z @ res.predicted_state.T[:-1, :, None]
    assert_allclose(s_pred.predicted_mean[:50], (d + desired_s_signal)[..., 0])
    assert_allclose(s_pred.predicted_mean[50:], fcast.predicted_mean)
    assert_allclose(f_pred.predicted_mean[:50], (d + desired_f_signal)[..., 0])
    assert_allclose(f_pred.predicted_mean[50:], fcast.predicted_mean)
    assert_allclose(p_pred.predicted_mean[:50], (d + desired_p_signal)[..., 0])
    assert_allclose(p_pred.predicted_mean[50:], fcast.predicted_mean)
    assert_allclose(s_signal.predicted_mean[:50], desired_s_signal[..., 0])
    assert_allclose(s_signal.predicted_mean[50:], fcast_signal)
    assert_allclose(f_signal.predicted_mean[:50], desired_f_signal[..., 0])
    assert_allclose(f_signal.predicted_mean[50:], fcast_signal)
    assert_allclose(p_signal.predicted_mean[:50], desired_p_signal[..., 0])
    assert_allclose(p_signal.predicted_mean[50:], fcast_signal)

    for t in range(mod.nobs):
        assert_allclose(s_pred.var_pred_mean[t],
                        Z[t] @ res.smoothed_state_cov[..., t] @ Z[t].T + H[t])
        assert_allclose(f_pred.var_pred_mean[t],
                        Z[t] @ res.filtered_state_cov[..., t] @ Z[t].T + H[t])
        assert_allclose(p_pred.var_pred_mean[t],
                        Z[t] @ res.predicted_state_cov[..., t] @ Z[t].T + H[t])
        assert_allclose(s_signal.var_pred_mean[t],
                        Z[t] @ res.smoothed_state_cov[..., t] @ Z[t].T)
        assert_allclose(f_signal.var_pred_mean[t],
                        Z[t] @ res.filtered_state_cov[..., t] @ Z[t].T)
        assert_allclose(p_signal.var_pred_mean[t],
                        Z[t] @ res.predicted_state_cov[..., t] @ Z[t].T)

    assert_allclose(s_pred.var_pred_mean[50:], fcast.var_pred_mean)
    assert_allclose(f_pred.var_pred_mean[50:], fcast.var_pred_mean)
    assert_allclose(p_pred.var_pred_mean[50:], fcast.var_pred_mean)
    assert_allclose(s_signal.var_pred_mean[50:], fcast_signal_cov)
    assert_allclose(f_signal.var_pred_mean[50:], fcast_signal_cov)
    assert_allclose(p_signal.var_pred_mean[50:], fcast_signal_cov)


@pytest.mark.parametrize('use_exog', [False, True])
@pytest.mark.parametrize('trend', ['n', 'c', 't'])
def test_predicted_filtered_dynamic_varmax(use_exog, trend):
    endog = np.log(dta[['realgdp', 'cpi']])
    if trend in ['n', 'c']:
        endog = endog.diff().iloc[1:] * 100
    if trend == 'n':
        endog -= endog.mean()
    exog = np.ones(100) if use_exog else None
    if use_exog:
        exog[20:40] = 2.

    trend_params = [0.1, 0.2]
    var_params = [0.5, -0.1, 0.0, 0.2]
    exog_params = [1., 2.]
    cov_params = [1., 0., 1.]

    params = []
    if trend in ['c', 't']:
        params += trend_params
    params += var_params
    if use_exog:
        params += exog_params
    params += cov_params

    # Compute basic model with 50 observations
    x_fit1 = exog[:50] if use_exog else None
    x_fcast1 = exog[50:61] if use_exog else None
    mod1 = varmax.VARMAX(endog[:50], order=(1, 0), trend=trend, exog=x_fit1)
    res1 = mod1.filter(params)

    # Compute basic model with only 20 observations
    x_fit2 = exog[:20] if use_exog else None
    x_fcast2 = exog[20:61] if use_exog else None
    mod2 = varmax.VARMAX(endog[:20], order=(1, 0), trend=trend, exog=x_fit2)
    res2 = mod2.filter(params)

    # Test predictions
    p1 = res1.get_prediction(start=0, dynamic=20, end=60, exog=x_fcast1)
    p2 = res2.get_prediction(start=0, end=60, exog=x_fcast2)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=2, dynamic=18, end=60, exog=x_fcast1)
    p2 = res2.get_prediction(start=2, end=60, exog=x_fcast2)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=20, dynamic=True, end=60, exog=x_fcast1)
    p2 = res2.get_prediction(start=20, end=60, exog=x_fcast2)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    # Test predictions, filtered
    p1 = res1.get_prediction(start=0, dynamic=20, end=60, exog=x_fcast1,
                             information_set='filtered')
    p2 = res2.get_prediction(start=0, end=60, exog=x_fcast2,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=2, dynamic=18, end=60, exog=x_fcast1,
                             information_set='filtered')
    p2 = res2.get_prediction(start=2, end=60, exog=x_fcast2,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=20, dynamic=True, end=60, exog=x_fcast1,
                             information_set='filtered')
    p2 = res2.get_prediction(start=20, end=60, exog=x_fcast2,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    # Test signals
    p1 = res1.get_prediction(start=0, dynamic=20, end=60, exog=x_fcast1,
                             signal_only=True)
    p2 = res2.get_prediction(start=0, end=60, exog=x_fcast2, signal_only=True)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=2, dynamic=18, end=60, exog=x_fcast1,
                             signal_only=True)
    p2 = res2.get_prediction(start=2, end=60, exog=x_fcast2, signal_only=True)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=20, dynamic=True, end=60, exog=x_fcast1,
                             signal_only=True)
    p2 = res2.get_prediction(start=20, end=60, exog=x_fcast2, signal_only=True)
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    # Test signal, filtered
    p1 = res1.get_prediction(start=0, dynamic=20, end=60, exog=x_fcast1,
                             signal_only=True, information_set='filtered')
    p2 = res2.get_prediction(start=0, end=60, exog=x_fcast2, signal_only=True,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=2, dynamic=18, end=60, exog=x_fcast1,
                             signal_only=True, information_set='filtered')
    p2 = res2.get_prediction(start=2, end=60, exog=x_fcast2, signal_only=True,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)

    p1 = res1.get_prediction(start=20, dynamic=True, end=60, exog=x_fcast1,
                             signal_only=True, information_set='filtered')
    p2 = res2.get_prediction(start=20, end=60, exog=x_fcast2, signal_only=True,
                             information_set='filtered')
    assert_allclose(p1.predicted_mean, p2.predicted_mean)
    assert_allclose(p1.var_pred_mean, p2.var_pred_mean)
# === image_classification_pytorch/__init__.py (repo: denred0/image-classification-pytorch, license: MIT) ===
from image_classification_pytorch.model import ICPModel
from image_classification_pytorch.datamodule import ICPDataModule
from image_classification_pytorch.dataset import ICPDataset
from image_classification_pytorch.train import ICPTrainer
from image_classification_pytorch.inference import ICPInference
from image_classification_pytorch.schedulers import *
from image_classification_pytorch.optimizers import *
from image_classification_pytorch.dict import *
# === plugins/cjk-auto-spacing/__init__.py (repo: mohnjahoney/website_source, license: MIT) ===
from .cjk_auto_spacing import *
# === pygsm/growing_string_methods/__init__.py (repo: stenczelt/pyGSM, license: MIT) ===
from .de_gsm import DE_GSM
from .gsm import GSM
from .se_cross import SE_Cross
from .se_gsm import SE_GSM
# === terrascript/nsxt/__init__.py (repo: bkez322/python-terrascript, license: BSD-2-Clause) ===
# terrascript/nsxt/__init__.py
import terrascript


class nsxt(terrascript.Provider):
    pass
# === deprecated/origin_stgcn_repo/tools/utils/__init__.py (repo: fserracant/mmskeleton, license: Apache-2.0) ===
from . import video
from . import openpose
from . import visualization

# === seq_conv/__init__.py (repo: shobrook/lstm-graph-conv, license: MIT) ===
from .seq_conv import SeqConv
# === tests/directories_helper.py (repo: captainhunt/sysrsync, license: MIT) ===
from sysrsync.helpers import directories
from nose.tools import eq_


def test_strip_trailing_slash():
    """test strip trailing slash"""
    test_dir = '/a/'
    expect = '/a'
    result = directories.strip_trailing_slash(test_dir)
    eq_(expect, result)


def test_skip_strip_trailing_slash():
    """test skip strip trailing slash when not necessary"""
    test_dir = '/a'
    result = directories.strip_trailing_slash(test_dir)
    eq_(result, test_dir)


def test_add_trailing_slash():
    """test add trailing slash"""
    test_dir = '/a'
    expect = '/a/'
    result = directories.add_trailing_slash(test_dir)
    eq_(expect, result)


def test_skip_add_trailing_slash():
    """test skip add trailing slash when not necessary"""
    test_dir = '/a/'
    result = directories.add_trailing_slash(test_dir)
    eq_(result, test_dir)


def test_sanitize_trailing_slash():
    """test sanitize trailing slash when syncing source contents"""
    source, target = '/a', '/b/'
    expect_source, expect_target = '/a/', '/b'
    result_source, result_target = directories.sanitize_trailing_slash(
        source, target)
    eq_(expect_source, result_source)
    eq_(expect_target, result_target)


def test_sanitize_trailing_slash_no_action_needed():
    """test sanitize trailing slash when syncing source contents when already sanitized"""
    source, target = '/a/', '/b'
    expect_source, expect_target = '/a/', '/b'
    result_source, result_target = directories.sanitize_trailing_slash(
        source, target)
    eq_(expect_source, result_source)
    eq_(expect_target, result_target)


def test_sanitize_trailing_slash_whole_source():
    """test sanitize trailing slash when syncing whole source"""
    source, target = '/a/', '/b/'
    expect_source, expect_target = '/a', '/b'
    result_source, result_target = directories.sanitize_trailing_slash(
        source, target, sync_sourcedir_contents=False)
    eq_(expect_source, result_source)
    eq_(expect_target, result_target)


def test_sanitize_trailing_slash_whole_source_no_action_needed():
    """test sanitize trailing slash when syncing whole source when already sanitized"""
    source, target = '/a', '/b/'
    expect_source, expect_target = '/a', '/b'
    result_source, result_target = directories.sanitize_trailing_slash(
        source, target, sync_sourcedir_contents=False)
    eq_(expect_source, result_source)
    eq_(expect_target, result_target)


def test_dir_with_ssh():
    """should compose string with ssh for rsync connection"""
    directory = '/a'
    ssh = 'host'
    expect = 'host:/a'
    result = directories.get_directory_with_ssh(directory, ssh)
    eq_(result, expect)


def test_dir_without_ssh():
    """should return directory when ssh is None"""
    directory = '/a'
    ssh = None
    result = directories.get_directory_with_ssh(directory, ssh)
    eq_(result, directory)
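The trailing-slash conventions these tests pin down follow rsync's semantics (`src/` copies the directory's contents, `src` copies the directory itself). As a rough sketch, here is a minimal reimplementation of the behavior the tests expect — my own illustration, not the actual `sysrsync.helpers.directories` code:

```python
def strip_trailing_slash(path: str) -> str:
    # '/a/' -> '/a'; paths without a trailing slash pass through unchanged
    return path[:-1] if path.endswith('/') else path


def add_trailing_slash(path: str) -> str:
    # '/a' -> '/a/'; idempotent if the slash is already there
    return path if path.endswith('/') else path + '/'


def get_directory_with_ssh(directory: str, ssh) -> str:
    # Prefix 'host:' only when a remote host is given
    return directory if ssh is None else '{}:{}'.format(ssh, directory)


assert strip_trailing_slash('/a/') == '/a'
assert strip_trailing_slash('/a') == '/a'
assert add_trailing_slash('/a') == '/a/'
assert get_directory_with_ssh('/a', 'host') == 'host:/a'
assert get_directory_with_ssh('/a', None) == '/a'
```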
# === TinyDCU_Net_16-master/speech_data.py (repo: scofir/AEC-Challenge, license: MIT) ===
import os
import sys
import logging
import traceback
import json
import librosa
import random
import time
import threading
import torch
import torch.nn as nn
import numpy as np
from utils.signalprocess import *
from utils.tools import *
from utils.istft import ISTFT
try:
    from Queue import Queue  # Python 2
except ImportError:
    from queue import Queue  # Python 3


class Producer(threading.Thread):
    def __init__(self, reader):
        threading.Thread.__init__(self)
        self.reader = reader
        self.exitcode = 0
        self.stop_flag = False

    def run(self):
        try:
            min_queue_size = self.reader.cfg.min_queue_size
            while not self.stop_flag:
                idx = self.reader.next_produce_idx
                if idx < len(self.reader.clean_wav_list):
                    if self.reader.batch_queue.qsize() < min_queue_size:
                        group_list = self.reader.load_one_batch()
                        for batch in group_list:
                            self.reader.batch_queue.put(batch)
                    else:
                        time.sleep(1)
                else:
                    time.sleep(1)
        except Exception as e:
            logging.warning("producer exception: %s" % e)
            self.exitcode = 1
            traceback.print_exc()

    def stop(self):
        self.stop_flag = True


def get_file_line(file_list, cfg=None):
    line_list = []
    with open(file_list, 'r') as f:
        for line in f.readlines():
            line = line.strip().split()[0]
            if cfg is not None:
                sig_data = audio_read(line, samp_rate=cfg.sample_rate)
                if len(sig_data) >= cfg.chunk_length:
                    line_list.append(line)
            else:
                line_list.append(line)
    return line_list


def compute_reverberation(config, clean_sig, impulse_sig):
    fftsize = 16384
    if len(impulse_sig) < fftsize:
        impulse_sig = np.concatenate((impulse_sig, np.zeros((fftsize - len(impulse_sig),))))
    # Early (first 1024 taps) and full (late) room impulse responses
    erir_sig = np.zeros((fftsize,))
    lrir_sig = np.zeros((fftsize,))
    erir_sig[:1024] = impulse_sig[:1024]
    lrir_sig[:fftsize] = impulse_sig[:fftsize]
    prhk = rfft(lrir_sig)
    erhk = rfft(erir_sig)

    frames = samples_to_stft_frames(len(clean_sig), size=config.frame_size, shift=config.frame_shift)
    rfrm = np.zeros((fftsize,))
    efrm = np.zeros((fftsize,))
    reverb_sig = np.zeros((len(clean_sig),))
    earlyr_sig = np.zeros((len(clean_sig),))
    for i in range(frames):
        xinp = np.zeros((fftsize,))
        xinp[:config.frame_shift] = clean_sig[i * config.frame_shift:(i + 1) * config.frame_shift]
        xink = rfft(xinp)
        rink = xink * prhk
        eink = xink * erhk

        time_signal = irfft(rink)
        rfrm[:(fftsize - config.frame_shift)] = time_signal[:(fftsize - config.frame_shift)] \
            + rfrm[config.frame_shift:fftsize]
        rfrm[(fftsize - config.frame_shift):fftsize] = time_signal[(fftsize - config.frame_shift):fftsize]

        time_signal = irfft(eink)
        efrm[:(fftsize - config.frame_shift)] = time_signal[:(fftsize - config.frame_shift)] \
            + efrm[config.frame_shift:fftsize]
        efrm[(fftsize - config.frame_shift):fftsize] = time_signal[(fftsize - config.frame_shift):fftsize]

        reverb_sig[i * config.frame_shift:(i + 1) * config.frame_shift] = rfrm[:config.frame_shift]
        earlyr_sig[i * config.frame_shift:(i + 1) * config.frame_shift] = efrm[:config.frame_shift]
    return earlyr_sig, reverb_sig
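# The loop above is overlap-add block convolution in the frequency domain:
# each frame_shift-sample block is zero-padded to the FFT size, multiplied by
# the RIR spectrum, and the tails are accumulated across frames. A small
# self-contained sanity check of the underlying identity (illustrative only,
# using numpy.fft directly rather than this project's rfft/irfft helpers):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])          # one block of "clean" samples
h = np.array([0.5, 0.25, 0.125, 0.0])  # toy impulse response
n = 8                                  # FFT size >= len(x) + len(h) - 1
# Circular convolution via rfft/irfft; with enough zero padding it equals
# direct linear convolution, which is what the overlap-add loop relies on.
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
y_direct = np.convolve(x, h)           # linear convolution, length 6
assert np.allclose(y_fft[:len(y_direct)], y_direct)
assert np.allclose(y_fft[len(y_direct):], 0.0)
```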


def compute_features(config, noisy_sig, clean_sig):
    frames = samples_to_stft_frames(len(clean_sig), size=config.frame_size, shift=config.frame_shift)
    sent_width = frames // 16
    sent_width = sent_width * 16
    # Integer division: `sent_height` is used as a slice bound below, so it
    # must be an int (plain `/` yields a float and raises TypeError on Py3).
    sent_height = config.frame_size // 2 + 1
    clean_stft = stft_analysis(clean_sig, size=config.frame_size, shift=config.frame_shift)
    noisy_stft = stft_analysis(noisy_sig, size=config.frame_size, shift=config.frame_shift)
    clean_stft = clean_stft[:sent_width, :sent_height]
    noisy_stft = noisy_stft[:sent_width, :sent_height]
    frames = clean_stft.shape[0]
    frebin = clean_stft.shape[1]
    clean_feat = np.vstack((np.real(clean_stft), np.imag(clean_stft)))
    noisy_feat = np.vstack((np.real(noisy_stft), np.imag(noisy_stft)))
    noisy_feat = np.reshape(noisy_feat, [1, 2, frames, frebin])
    clean_feat = np.reshape(clean_feat, [1, 2, frames, frebin])
    return noisy_feat, clean_feat


def compute_reverb_features(config, noise_sig, reverb_sig, early_sig):
    reverb_power = np.mean(np.square(reverb_sig))
    noise_power = np.mean(np.square(noise_sig))
    snr = 12 * (np.random.rand() + 0.25)
    scale = np.sqrt(reverb_power / noise_power) * 10 ** (-snr / 10)

    # get mixture sig
    if len(reverb_sig) >= len(noise_sig):
        repeat_num = np.ceil(len(reverb_sig) / len(noise_sig)).astype(np.int32)
        repeat_noise_sig = np.tile(scale * noise_sig, repeat_num)
    else:
        repeat_idx = np.random.randint(len(noise_sig) - len(reverb_sig))
        repeat_noise_sig = scale * noise_sig[repeat_idx:repeat_idx + len(reverb_sig)]
    noisy_sig = reverb_sig + repeat_noise_sig[:len(reverb_sig)]

    normAmp = np.random.rand()
    sent_height = config.frame_size // 2 + 1  # int: used as a slice bound below
    # normAmp = np.sqrt(len(clean_sig) / np.sum(clean_sig ** 2.0))
    if normAmp < 0.1:
        normAmp = 0.1
    early_sig = early_sig * normAmp
    noisy_sig = noisy_sig * normAmp

    clean_stft = stft_analysis(early_sig, size=config.frame_size, shift=config.frame_shift)
    noisy_stft = stft_analysis(noisy_sig, size=config.frame_size, shift=config.frame_shift)
    clean_stft = clean_stft[:, :sent_height]
    noisy_stft = noisy_stft[:, :sent_height]

    frames = samples_to_stft_frames(len(early_sig), size=config.frame_size, shift=config.frame_shift)
    sent_width = frames // 16
    sent_width = 16 * sent_width
    if frames > sent_width:
        start_idx = np.random.randint(frames - sent_width)
    else:
        start_idx = 0
    clean_stft = clean_stft[start_idx:start_idx + sent_width, :sent_height]
    noisy_stft = noisy_stft[start_idx:start_idx + sent_width, :sent_height]
    clean_feat = np.vstack((np.real(clean_stft), np.imag(clean_stft)))
    noisy_feat = np.vstack((np.real(noisy_stft), np.imag(noisy_stft)))
    noisy_feats = np.reshape(noisy_feat, [1, 2, sent_width, sent_height])
    clean_feats = np.reshape(clean_feat, [1, 2, sent_width, sent_height])
    return noisy_feats, clean_feats


class SpeechReader(object):
    def __init__(self, config, job_type, clean_list=None, noisy_list=None, impulse_list=None):
        self.cfg = config
        self.job_type = job_type
        if clean_list is not None and noisy_list is not None:
            self.clean_wav_list = get_file_line(clean_list, config)
            self.noisy_wav_list = get_file_line(noisy_list, config)
        else:
            if job_type is not None:
                json_path = os.path.join(config.json_dir, self.job_type, 'files.json')
                with open(json_path, 'r') as f:
                    json_list = json.load(f)
                random.shuffle(json_list)
                self.clean_wav_list = []
                self.noisy_wav_list = []
                for wav_file_name in json_list:
                    clean_wav_file_path = os.path.join(config.dataset_dir, self.job_type, 'clean', wav_file_name)
                    noisy_wav_file_path = os.path.join(config.dataset_dir, self.job_type, 'mix', wav_file_name)
                    self.clean_wav_list.append(clean_wav_file_path)
                    self.noisy_wav_list.append(noisy_wav_file_path)
        if impulse_list is not None:
            self.impulse_wav_list = get_file_line(impulse_list)
        self.next_produce_idx = 0
        self.next_consume_idx = 0
        self.running_out_flag = 0
        self.batch_count = 0
        self.narray_window = analysis_window(config.frame_size, config.frame_shift)
    def __getitem__(self, index):
        """Reads a wave file, preprocesses it, and returns the features."""
        print(index)
        clean_file = self.clean_wav_list[index]
        # clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
        # noise_file = self.noise_wav_list[np.random.randint(len(self.noise_wav_list))]
        # noise_sig = audio_read(noise_file, samp_rate=self.cfg.sample_rate)
        # noisy_feat, clean_feat, noisy_sig = compute_features(self.cfg, noise_sig, clean_sig)
        noisy_feat, clean_feat, noisy_sig, clean_sig = 0, 0, 0, 0
        return noisy_feat, clean_feat, noisy_sig, clean_sig

    def __len__(self):
        """Returns the total number of clean files."""
        return len(self.clean_wav_list)

    def start(self):
        self.next_produce_idx = 0
        self.next_consume_idx = 0
        self.running_out_flag = 0

    def reset(self):
        self.next_produce_idx = 0
        self.next_consume_idx = 0
        self.running_out_flag = 0
        self.batch_count = 0
        json_path = os.path.join(self.cfg.json_dir, self.job_type, 'files.json')
        with open(json_path, 'r') as f:
            json_list = json.load(f)
        random.shuffle(json_list)
        self.clean_wav_list = []
        self.noisy_wav_list = []
        for wav_file_name in json_list:
            clean_wav_file_path = os.path.join(self.cfg.dataset_dir, self.job_type, 'clean', wav_file_name)
            noisy_wav_file_path = os.path.join(self.cfg.dataset_dir, self.job_type, 'mix', wav_file_name)
            self.clean_wav_list.append(clean_wav_file_path)
            self.noisy_wav_list.append(noisy_wav_file_path)

    def shuffle_data_list(self):
        random.shuffle(self.noisy_wav_list)

    def load_samples(self, noisy_name):
        noisy_sig = audio_read(noisy_name, samp_rate=self.cfg.sample_rate)
        noisy_stft = stft_analysis(noisy_sig, size=self.cfg.frame_size, shift=self.cfg.frame_shift)
        frames = samples_to_stft_frames(len(noisy_sig), size=self.cfg.frame_size, shift=self.cfg.frame_shift)
        return noisy_sig, noisy_stft, frames
    def load_one_mixture(self):
        """Reads a clean and a noise wave file, mixes them at a random SNR, and returns features."""
        clean_file = self.clean_wav_list[np.random.randint(len(self.clean_wav_list))]
        noise_file = self.noisy_wav_list[np.random.randint(len(self.noisy_wav_list))]
        clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
        noise_sig = audio_read(noise_file, samp_rate=self.cfg.sample_rate)
        clean_power = np.mean(np.square(clean_sig))
        noise_power = np.mean(np.square(noise_sig))
        snr = 20 * (np.random.rand() - 0.25)  # random SNR in [-5, 15] dB
        # amplitude scale for the noise; the dB-to-amplitude conversion uses 20, not 10
        scale = np.sqrt(clean_power / noise_power) * 10 ** (-snr / 20)
        # get mixture sig
        if len(clean_sig) >= len(noise_sig):
            repeat_num = np.ceil(len(clean_sig) / len(noise_sig)).astype(np.int32)
            repeat_noise_sig = np.tile(scale * noise_sig, repeat_num)
        else:
            repeat_idx = np.random.randint(len(noise_sig) - len(clean_sig))
            repeat_noise_sig = scale * noise_sig[repeat_idx:repeat_idx + len(clean_sig)]
        noisy_sig = clean_sig + repeat_noise_sig[:len(clean_sig)]
        # dump wav file
        # audio_write('./wav/clean.wav', clean_sig, 16000)
        # audio_write('./wav/noisy.wav', noisy_sig, 16000)
        frames = samples_to_stft_frames(len(noisy_sig), size=self.cfg.frame_size, shift=self.cfg.frame_shift)
        clean_stft = stft_analysis(clean_sig, size=self.cfg.frame_size, shift=self.cfg.frame_shift)
        noisy_stft = stft_analysis(noisy_sig, size=self.cfg.frame_size, shift=self.cfg.frame_shift)
        clean_magn = np.abs(clean_stft)
        noisy_magn = np.abs(noisy_stft)
        return noisy_sig, noisy_stft, noisy_magn, clean_sig, clean_stft, clean_magn, frames
    def load_one_item(self):
        """Reads a wave file, preprocesses it, and returns the signals."""
        clean_file = self.clean_wav_list[self.next_consume_idx]
        clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
        noisy_file = self.noisy_wav_list[self.next_consume_idx]
        noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
        clean_sig = np.reshape(clean_sig, [1, len(clean_sig)])
        noisy_sig = np.reshape(noisy_sig, [1, len(noisy_sig)])
        self.next_consume_idx = min(self.next_consume_idx + 1, len(self.clean_wav_list))
        return noisy_sig, clean_sig

    def load_one_batch(self):
        """Reads a batch of wave files, preprocesses them, and returns padded signals."""
        noisy_list, clean_list, frame_list = [], [], []
        for i in range(self.cfg.batch_size):
            next_consume_idx = self.next_consume_idx % len(self.clean_wav_list)
            clean_file = self.clean_wav_list[next_consume_idx]
            clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
            noisy_file = self.noisy_wav_list[next_consume_idx]
            noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
            # clean_sig = np.reshape(clean_sig, [1, len(clean_sig)])
            # noisy_sig = np.reshape(noisy_sig, [1, len(noisy_sig)])
            clean_sig = torch.FloatTensor(clean_sig)
            noisy_sig = torch.FloatTensor(noisy_sig)
            if len(clean_sig) > self.cfg.chunk_length:
                wav_start = random.randint(0, len(clean_sig) - self.cfg.chunk_length)
                clean_sig = clean_sig[wav_start:wav_start + self.cfg.chunk_length]
                noisy_sig = noisy_sig[wav_start:wav_start + self.cfg.chunk_length]
            frame_num = len(clean_sig) // self.cfg.frame_shift + 1
            clean_list.append(clean_sig)
            noisy_list.append(noisy_sig)
            frame_list.append(frame_num)
            self.next_consume_idx = self.next_consume_idx + 1
        if self.next_consume_idx >= len(self.clean_wav_list):
            self.running_out_flag = 1
        clean_list = nn.utils.rnn.pad_sequence(clean_list, batch_first=True)
        noisy_list = nn.utils.rnn.pad_sequence(noisy_list, batch_first=True)
        # print('self.next_produce_idx: ' + str(self.next_produce_idx))
        return noisy_list, clean_list, frame_list
    def load_one_norm_norm_batch(self):
        """Reads a batch of wave files and returns uncompressed noisy/clean STFT features."""
        noisy_feat_list, clean_feat_list, frame_list = [], [], []
        for i in range(self.cfg.batch_size):
            next_consume_idx = self.next_consume_idx % len(self.clean_wav_list)
            clean_file = self.clean_wav_list[next_consume_idx]
            clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
            noisy_file = self.noisy_wav_list[next_consume_idx]
            noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
            if len(clean_sig) > self.cfg.chunk_length:
                wav_start = random.randint(0, len(clean_sig) - self.cfg.chunk_length)
                clean_sig = clean_sig[wav_start:wav_start + self.cfg.chunk_length]
                noisy_sig = noisy_sig[wav_start:wav_start + self.cfg.chunk_length]
            frame_num = len(clean_sig) // self.cfg.frame_shift + 1
            noisy_feat = librosa.stft(noisy_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            clean_feat = librosa.stft(clean_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            noisy_feat, clean_feat = noisy_feat[0:frame_num, :], clean_feat[0:frame_num, :]
            noisy_real, noisy_imag = np.real(noisy_feat), np.imag(noisy_feat)
            clean_real, clean_imag = np.real(clean_feat), np.imag(clean_feat)
            noisy_feat = torch.FloatTensor(np.concatenate(
                (noisy_real[:, :, np.newaxis].astype(np.float32), noisy_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            clean_feat = torch.FloatTensor(np.concatenate(
                (clean_real[:, :, np.newaxis].astype(np.float32), clean_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            noisy_feat_list.append(noisy_feat)
            clean_feat_list.append(clean_feat)
            frame_list.append(frame_num)
            self.next_consume_idx = self.next_consume_idx + 1
        if self.next_consume_idx >= len(self.clean_wav_list):
            self.running_out_flag = 1
        self.batch_count = self.batch_count + 1
        noisy_feat_list = nn.utils.rnn.pad_sequence(noisy_feat_list, batch_first=True)
        clean_feat_list = nn.utils.rnn.pad_sequence(clean_feat_list, batch_first=True)
        noisy_feat_list = noisy_feat_list.permute(0, 3, 1, 2).contiguous()
        clean_feat_list = clean_feat_list.permute(0, 3, 1, 2).contiguous()
        # print('self.next_produce_idx: ' + str(self.next_produce_idx))
        return noisy_feat_list, clean_feat_list, frame_list
    def load_one_comp_norm_batch(self):
        """Reads a batch of wave files; magnitude-compressed noisy features, uncompressed clean features."""
        noisy_feat_list, clean_feat_list, frame_list = [], [], []
        for i in range(self.cfg.batch_size):
            next_consume_idx = self.next_consume_idx % len(self.clean_wav_list)
            clean_file = self.clean_wav_list[next_consume_idx]
            clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
            noisy_file = self.noisy_wav_list[next_consume_idx]
            noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
            if len(clean_sig) > self.cfg.chunk_length:
                wav_start = random.randint(0, len(clean_sig) - self.cfg.chunk_length)
                clean_sig = clean_sig[wav_start:wav_start + self.cfg.chunk_length]
                noisy_sig = noisy_sig[wav_start:wav_start + self.cfg.chunk_length]
            frame_num = len(clean_sig) // self.cfg.frame_shift + 1
            noisy_feat = librosa.stft(noisy_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            clean_feat = librosa.stft(clean_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            noisy_feat, clean_feat = noisy_feat[0:frame_num, :], clean_feat[0:frame_num, :]
            noisy_mag, noisy_phase = np.abs(noisy_feat), np.angle(noisy_feat)
            # square-root magnitude compression on the noisy side only
            noisy_mag_com = np.sqrt(noisy_mag)
            noisy_real, noisy_imag = noisy_mag_com * np.cos(noisy_phase), noisy_mag_com * np.sin(noisy_phase)
            clean_real, clean_imag = np.real(clean_feat), np.imag(clean_feat)
            noisy_feat = torch.FloatTensor(np.concatenate(
                (noisy_real[:, :, np.newaxis].astype(np.float32), noisy_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            clean_feat = torch.FloatTensor(np.concatenate(
                (clean_real[:, :, np.newaxis].astype(np.float32), clean_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            noisy_feat_list.append(noisy_feat)
            clean_feat_list.append(clean_feat)
            frame_list.append(frame_num)
            self.next_consume_idx = self.next_consume_idx + 1
        if self.next_consume_idx >= len(self.clean_wav_list):
            self.running_out_flag = 1
        self.batch_count = self.batch_count + 1
        noisy_feat_list = nn.utils.rnn.pad_sequence(noisy_feat_list, batch_first=True)
        clean_feat_list = nn.utils.rnn.pad_sequence(clean_feat_list, batch_first=True)
        noisy_feat_list = noisy_feat_list.permute(0, 3, 1, 2).contiguous()
        clean_feat_list = clean_feat_list.permute(0, 3, 1, 2).contiguous()
        # print('self.next_produce_idx: ' + str(self.next_produce_idx))
        return noisy_feat_list, clean_feat_list, frame_list
    def load_one_norm_comp_batch(self):
        """Reads a batch of wave files; uncompressed noisy features, magnitude-compressed clean features."""
        noisy_feat_list, clean_feat_list, frame_list = [], [], []
        for i in range(self.cfg.batch_size):
            next_consume_idx = self.next_consume_idx % len(self.clean_wav_list)
            clean_file = self.clean_wav_list[next_consume_idx]
            clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
            noisy_file = self.noisy_wav_list[next_consume_idx]
            noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
            if len(clean_sig) > self.cfg.chunk_length:
                wav_start = random.randint(0, len(clean_sig) - self.cfg.chunk_length)
                clean_sig = clean_sig[wav_start:wav_start + self.cfg.chunk_length]
                noisy_sig = noisy_sig[wav_start:wav_start + self.cfg.chunk_length]
            frame_num = len(clean_sig) // self.cfg.frame_shift + 1
            noisy_feat = librosa.stft(noisy_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            clean_feat = librosa.stft(clean_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            noisy_feat, clean_feat = noisy_feat[0:frame_num, :], clean_feat[0:frame_num, :]
            clean_mag, clean_phase = np.abs(clean_feat), np.angle(clean_feat)
            # square-root magnitude compression on the clean side only
            clean_mag_com = np.sqrt(clean_mag)
            noisy_real, noisy_imag = np.real(noisy_feat), np.imag(noisy_feat)
            clean_real, clean_imag = clean_mag_com * np.cos(clean_phase), clean_mag_com * np.sin(clean_phase)
            noisy_feat = torch.FloatTensor(np.concatenate(
                (noisy_real[:, :, np.newaxis].astype(np.float32), noisy_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            clean_feat = torch.FloatTensor(np.concatenate(
                (clean_real[:, :, np.newaxis].astype(np.float32), clean_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            noisy_feat_list.append(noisy_feat)
            clean_feat_list.append(clean_feat)
            frame_list.append(frame_num)
            self.next_consume_idx = self.next_consume_idx + 1
        if self.next_consume_idx >= len(self.clean_wav_list):
            self.running_out_flag = 1
        self.batch_count = self.batch_count + 1
        noisy_feat_list = nn.utils.rnn.pad_sequence(noisy_feat_list, batch_first=True)
        clean_feat_list = nn.utils.rnn.pad_sequence(clean_feat_list, batch_first=True)
        noisy_feat_list = noisy_feat_list.permute(0, 3, 1, 2).contiguous()
        clean_feat_list = clean_feat_list.permute(0, 3, 1, 2).contiguous()
        # print('self.next_produce_idx: ' + str(self.next_produce_idx))
        return noisy_feat_list, clean_feat_list, frame_list
    def load_one_comp_comp_batch(self):
        """Reads a batch of wave files and returns magnitude-compressed noisy and clean features."""
        noisy_feat_list, clean_feat_list, frame_list = [], [], []
        for i in range(self.cfg.batch_size):
            next_consume_idx = self.next_consume_idx % len(self.clean_wav_list)
            clean_file = self.clean_wav_list[next_consume_idx]
            clean_sig = audio_read(clean_file, samp_rate=self.cfg.sample_rate)
            noisy_file = self.noisy_wav_list[next_consume_idx]
            noisy_sig = audio_read(noisy_file, samp_rate=self.cfg.sample_rate)
            if len(clean_sig) > self.cfg.chunk_length:
                wav_start = random.randint(0, len(clean_sig) - self.cfg.chunk_length)
                clean_sig = clean_sig[wav_start:wav_start + self.cfg.chunk_length]
                noisy_sig = noisy_sig[wav_start:wav_start + self.cfg.chunk_length]
            frame_num = len(clean_sig) // self.cfg.frame_shift + 1
            noisy_feat = librosa.stft(noisy_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            clean_feat = librosa.stft(clean_sig, n_fft=self.cfg.frame_size, hop_length=self.cfg.frame_shift,
                                      window=self.narray_window, pad_mode='constant').T
            noisy_feat, clean_feat = noisy_feat[0:frame_num, :], clean_feat[0:frame_num, :]
            noisy_mag, noisy_phase = np.abs(noisy_feat), np.angle(noisy_feat)
            clean_mag, clean_phase = np.abs(clean_feat), np.angle(clean_feat)
            # square-root magnitude compression on both sides
            noisy_mag_com, clean_mag_com = np.sqrt(noisy_mag), np.sqrt(clean_mag)
            noisy_real, noisy_imag = noisy_mag_com * np.cos(noisy_phase), noisy_mag_com * np.sin(noisy_phase)
            clean_real, clean_imag = clean_mag_com * np.cos(clean_phase), clean_mag_com * np.sin(clean_phase)
            noisy_feat = torch.FloatTensor(np.concatenate(
                (noisy_real[:, :, np.newaxis].astype(np.float32), noisy_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            clean_feat = torch.FloatTensor(np.concatenate(
                (clean_real[:, :, np.newaxis].astype(np.float32), clean_imag[:, :, np.newaxis].astype(np.float32)),
                axis=-1))
            noisy_feat_list.append(noisy_feat)
            clean_feat_list.append(clean_feat)
            frame_list.append(frame_num)
            self.next_consume_idx = self.next_consume_idx + 1
        if self.next_consume_idx >= len(self.clean_wav_list):
            self.running_out_flag = 1
        self.batch_count = self.batch_count + 1
        noisy_feat_list = nn.utils.rnn.pad_sequence(noisy_feat_list, batch_first=True)
        clean_feat_list = nn.utils.rnn.pad_sequence(clean_feat_list, batch_first=True)
        noisy_feat_list = noisy_feat_list.permute(0, 3, 1, 2).contiguous()
        clean_feat_list = clean_feat_list.permute(0, 3, 1, 2).contiguous()
        # print('self.next_produce_idx: ' + str(self.next_produce_idx))
        return noisy_feat_list, clean_feat_list, frame_list
    def is_running_out(self):
        return self.running_out_flag == 1

    def next_batch(self):
        while self.producer.exitcode == 0:
            try:
                if self.batch_queue.qsize() > 0:
                    batch_data = self.batch_queue.get(block=False)
                    self.next_consume_idx = min(self.next_consume_idx + 1, len(self.clean_wav_list))
                    print('self.next_consume_idx: ' + str(self.next_consume_idx))
                    return batch_data
                else:
                    time.sleep(0.5)
            except Exception:
                time.sleep(3)


def get_reader(config, job_type=None, clean_list=None, noisy_list=None, impulse_list=None):
    data_reader = SpeechReader(config, job_type, clean_list, noisy_list, impulse_list)
    return data_reader
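The batch loaders above share two pieces of bookkeeping: taking the same random fixed-length crop from a paired clean/noisy signal, and the frame count `len(sig) // frame_shift + 1`. A minimal NumPy sketch of both (the helper names are illustrative, not from this module):

```python
import numpy as np


def crop_pair(clean, noisy, chunk_length, rng=np.random):
    """Take the same random window from both signals, as the batch loaders do."""
    if len(clean) > chunk_length:
        start = rng.randint(0, len(clean) - chunk_length)
        clean = clean[start:start + chunk_length]
        noisy = noisy[start:start + chunk_length]
    return clean, noisy


def frame_count(num_samples, frame_shift):
    # one frame per hop, plus one for the final partial window
    return num_samples // frame_shift + 1


clean, noisy = crop_pair(np.arange(48000.0), np.arange(48000.0), 16000)
```

Cropping both signals with the same `start` keeps the clean target time-aligned with the noisy input, which the STFT features depend on.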
af5663f8e8250faf95f324d3ba927cc94fb8eaba | 238 | py | Python | monolith_filemanager/console_commands/install_tensorflow.py | MonolithAILtd/monolith-filemanager | 2369e244e4d8a48890f55d00419a83001a5c6c40 | ["Apache-2.0"] | 3
import subprocess
import sys


def install_tensorflow():
    """
    Installs the tensorflow requirement for the package.

    Returns: None
    """
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'tensorflow>=2.1.0'])
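The `sys.executable` pattern above is worth noting: it invokes pip through the interpreter currently running, so the package lands in the same environment rather than whichever `pip` happens to be on `PATH`. A small sketch of the same pattern (the helper name is hypothetical), running an inline snippet instead of an install so it is safe to execute:

```python
import subprocess
import sys


def run_python(code):
    """Run a code snippet in the same interpreter via the sys.executable pattern."""
    # check_call raises CalledProcessError on failure and returns 0 on success
    return subprocess.check_call([sys.executable, '-c', code])


ret = run_python('print("ok")')  # returns 0
```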
afa3fa16ff275b693b0d6d8a4ceb7f459ac86b99 | 31 | py | Python | face_network/__init__.py | DongChengdongHangZhou/Siamese-Network-tiff | aaf923ad59301af1b3237e605964341a90dc414b | ["MIT"] | null
from .Network import simpleCNN
afc8a068cc19889eca49229381594cb34ebcf6fe | 35 | py | Python | sysidentpy/parameter_estimation/__init__.py | armandobs14/sysidentpy | 108bd9ec6f0bc30652f915861b6eaeee08bad330 | ["BSD-3-Clause"] | 107
from .estimators import Estimators
afd2b1b81829c178873b1a47303af127587353a2 | 42 | py | Python | netbox_api/__init__.py | zinic/netbox_api | ace6cb2b60edd93f4a37f7a29e8d262a1c8e1fc4 | ["MIT"] | 2
from netbox_api.api import new_api_client
bb6e43eb0eeb61735417edaf9d2326c4673f1c95 | 47 | py | Python | models/thresholds/__init__.py | YashYash/advanced-lane-lines | 236d410a1220ea266f105794a8c3ead7da6bc0f9 | ["MIT"] | null
from models.thresholds.model import Thresholds
bbbcd0b733df8417578eb37746026f76c7997a52 | 119 | py | Python | src/nodes/corenodes/transform/__init__.py | Correct-Syntax/GimelStudio | db6e2db35730e11bcb25f5ba82823e68b86003f1 | ["Apache-2.0"] | 1
from .flip_node import FlipNode
from .rotate_node import RotateNode
from .circular_shift_node import CircularShiftNode
bbcb9e0696b7d8e6b5c67b2146329636121bc224 | 208 | py | Python | english_teaching_app/blueprint.py | SebastianDix/english_teaching_app | 10b32b3459a5936f95d382630c7e9123a920a925 | ["Apache-2.0"] | null
#!/usr/bin/env python3
from flask import Blueprint

sebastian_blueprint = Blueprint('sebastian', __name__)


@sebastian_blueprint.route('/<string:name>')
def home(name):
    return f"You shall prevail, {name}!"
a541a8ab7ff7a710e46b6fffc92952ff9a4f1d93 | 231 | py | Python | great_expectations/datasource/data_connector/sorter/__init__.py | vanderGoes/great_expectations | 9790cd992a8a4de672c640e89ddd7278a0ca0889 | ["Apache-2.0"] | 6,451
from .sorter import Sorter  # isort:skip
from .custom_list_sorter import CustomListSorter
from .date_time_sorter import DateTimeSorter
from .lexicographic_sorter import LexicographicSorter
from .numeric_sorter import NumericSorter
a562b7945e913468db367fd3adfe21a53848976e | 174 | py | Python | Cartwheel/cartwheel-3d/Python/App/__init__.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | ["MIT"] | null
from SNMApp import SNMApp
from SnapshotTree import SnapshotBranch, Snapshot
from ObservableList import ObservableList
import Scenario
import InstantChar
import KeyframeEditor
a584521c0c10dcd37673b86c6ad111c93c93a10a | 12,212 | py | Python | spytest/tests/routing/BGP/test_bgp_rr_traffic.py | mykolaf/sonic-mgmt | de77268526173c5e3a345f3f3703b56eb40c5eed | ["Apache-2.0"] | 1
import pytest
from spytest import st, tgapi
from spytest.utils import poll_wait
import apis.routing.ip as ipapi
import apis.routing.bgp as bgpapi
import BGP.bgplib as bgplib
bgp_cli_type = 'vtysh'
@pytest.fixture(scope="module", autouse=True)
def bgp_module_hooks(request):
    st.ensure_min_topology('D1D2:1', 'D1T1:1', 'D2T1:1')
    bgplib.init_resource_data(st.get_testbed_vars())
    bgp_pre_config()
    yield
    bgp_pre_config_cleanup()


# bgp module level pre config function
def bgp_pre_config():
    global topo
    st.banner("BGP MODULE CONFIG - START")
    # loopback config
    bgplib.l3tc_vrfipv4v6_address_leafspine_loopback_config_unconfig(config='yes', config_type='all')
    # TG configuration
    bgplib.l3tc_vrfipv4v6_address_leafspine_tg_config_unconfig(config='yes', config_type='all')
    st.banner("BGP MODULE CONFIG - END")


# bgp module level pre config cleanup function
def bgp_pre_config_cleanup():
    st.banner("BGP MODULE CONFIG CLEANUP - START")
    # loopback unconfig
    bgplib.l3tc_vrfipv4v6_address_leafspine_loopback_config_unconfig(config='no')
    # TG unconfiguration
    bgplib.l3tc_vrfipv4v6_address_leafspine_tg_config_unconfig(config='no')
    st.banner("BGP MODULE CONFIG CLEANUP - END")


@pytest.fixture(scope="function")
def bgp_func_hooks(request):
    yield
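The traffic check in `test_ft_bgp_rr_traffic_check` later compares TX and RX counters as a percentage deviation. That computation can be isolated and unit-tested on its own; the helper below is a sketch (the function name is hypothetical) of the `percent_rx` formula used there:

```python
def rx_deviation_percent(tx_packets, rx_packets):
    """Percent deviation of received vs transmitted packets,
    mirroring percent_rx = (rx - tx) / tx * 100 in the traffic check."""
    return float(int(rx_packets) - int(tx_packets)) / int(tx_packets) * 100
```

Note the result is signed: packet loss yields a negative value, so a one-sided threshold such as `percent_rx > 0.5` only flags over-counting, not loss.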
################################################################################
# BGP Route Reflector with traffic fixture, class and test cases - START
def bgp_rr_traffic_pre_config():
    global topo
    st.banner("BGP RR WITH TRAFFIC CLASS CONFIG - START")
    # underlay config - configure physical interfaces
    bgplib.l3tc_underlay_config_unconfig(config='yes')
    # config ip on underlay interface
    bgplib.l3tc_vrfipv4v6_address_leafspine_config_unconfig(config='yes', config_type='all')
    # ping verification
    if not bgplib.l3tc_vrfipv4v6_address_leafspine_ping_test(config_type='all', ping_count=3):
        st.error("Ping failed between Spine and Leaf")
        st.report_fail('test_case_failed')
    ibgp_as = bgplib.data['spine_as']
    bgplib.l3tc_vrfipv4v6_address_leafspine_rr_tg_bgp_config(config='yes', rr_enable='true')
    bgplib.l3tc_vrfipv4v6_address_leafspine_bgp_config(config='yes', rr_enable='true')
    # BGP neighbor verification
    if not poll_wait(bgplib.l3tc_vrfipv4v6_address_leafspine_bgp_check, 10, config_type='all'):
        st.error("Neighborship failed to establish between Spine and Leaf")
        st.report_fail('test_case_failed')
    st.log("Getting all topology info related to connectivity / TG and other parameters between duts")
    topo = bgplib.get_leaf_spine_topology_info()
    st.banner("BGP RR WITH TRAFFIC CLASS CONFIG - END")

def bgp_rr_traffic_pre_config_cleanup():
    st.banner("BGP RR WITH TRAFFIC CLASS CONFIG CLEANUP - START")
    ibgp_as = bgplib.data['spine_as']
    bgplib.l3tc_vrfipv4v6_address_leafspine_bgp_config(config='no', rr_enable='true')
    bgplib.l3tc_vrfipv4v6_address_leafspine_config_unconfig(config='no')
    bgpapi.cleanup_router_bgp(st.get_dut_names())
    ipapi.clear_ip_configuration(st.get_dut_names(), family='all', thread=True)
    bgplib.l3tc_underlay_config_unconfig(config='no')
    bgplib.l3tc_vrfipv4v6_address_leafspine_rr_tg_bgp_config(config='no', rr_enable='true')
    st.banner("BGP RR WITH TRAFFIC CLASS CONFIG CLEANUP - END")


@pytest.fixture(scope='class')
def bgp_rr_traffic_class_hook(request):
    bgp_rr_traffic_pre_config()
    yield
    bgp_rr_traffic_pre_config_cleanup()


# Route Reflector with traffic class
@pytest.mark.usefixtures('bgp_rr_traffic_class_hook')
class TestBGPRrTraffic():
    @pytest.mark.bgp_rr_traffic
    def test_ft_bgp_rr_traffic_check(self):
        TG_D1 = topo.tg_dut_list_name[0]
        TG_D2 = topo.tg_dut_list_name[1]
        tg_ob = topo['T1{}P1_tg_obj'.format(TG_D1)]
        bgp_handle = topo['T1{}P1_ipv4_tg_bh'.format(TG_D1)]
        tc_fail_flag = 0
        spine_as = int(bgplib.data['spine_as'])
        bgp_ctrl = tg_ob.tg_emulation_bgp_control(handle=bgp_handle['handle'], mode='stop')
        st.wait(10)
        st.log("Advertising Routes from one of the Leaf Router")
        bgp_route = tg_ob.tg_emulation_bgp_route_config(handle=bgp_handle['handle'], mode='add', num_routes='100',
                                                        prefix='121.1.1.0', as_path='as_seq:1')
        bgp_ctrl = tg_ob.tg_emulation_bgp_control(handle=bgp_handle['handle'], mode='start')
        # sleep for some time and then check the route count on the neighbor
        st.wait(10)
        bgp_summary = bgpapi.show_bgp_ipv4_summary(topo.dut_list[1])
        rib_entries = bgp_summary[0]['ribentries']
        st.log('RIB Entries : {}'.format(rib_entries))
        # When route-reflector is not configured at the server (spine), the
        # route-reflector-client (leaf node) should not learn anything;
        # ideally, the route count should be 0.
        if int(rib_entries) > 10:
            st.error('iBGP Routes are advertised to iBGP peer DUT, even when route-reflector-client is not configured')
            tc_fail_flag = 1
        # now configure route-reflector-client at the spine node
        result = bgpapi.create_bgp_route_reflector_client(topo.dut_list[0], spine_as, 'ipv4', 'spine_leaf', 'yes')
        if not result:
            st.log("Configuring client reflection on {} {} bgp {} Failed".format(topo.dut_list[0], 'ipv4', spine_as))
            tc_fail_flag = 1
        bgpapi.create_bgp_next_hop_self(topo.dut_list[0], spine_as, 'ipv4', 'spine_leaf', 'yes', 'yes', cli_type=bgp_cli_type)
        st.wait(15)
        bgp_summary = bgpapi.show_bgp_ipv4_summary(topo.dut_list[1])
        rib_entries = bgp_summary[0]['ribentries']
        st.log('RIB Entries : {}'.format(rib_entries))
        if int(rib_entries) < 100:
            st.error('iBGP Routes are not advertised to route-reflector-client')
            tc_fail_flag = 1
        st.log("Initiating the Ipv4 traffic for those Routes from another Leaf Router")
        src_handle = 'handle'
        dst_handle = 'handles'
        if tg_ob.tg_type == 'ixia':
            src_handle = 'ipv4_handle'
            dst_handle = 'handle'
        tr1 = tg_ob.tg_traffic_config(port_handle=topo['T1{}P1_ipv4_tg_ph'.format(TG_D2)],
                                      emulation_src_handle=topo['T1{}P1_ipv4_tg_ih'.format(TG_D2)][src_handle],
                                      emulation_dst_handle=bgp_route[dst_handle], circuit_endpoint_type='ipv4',
                                      mode='create',
                                      transmit_mode='single_burst', pkts_per_burst='2000', length_mode='fixed',
                                      rate_pps=1000)
        stream_id1 = tr1['stream_id']
        tg_ob.tg_traffic_control(action='run', handle=stream_id1)
        st.wait(20)
        tg1_stats = tgapi.get_traffic_stats(tg_ob, port_handle=topo["T1{}P1_ipv4_tg_ph".format(TG_D1)])
        tg2_stats = tgapi.get_traffic_stats(tg_ob, port_handle=topo["T1{}P1_ipv4_tg_ph".format(TG_D2)])
        if not (int(tg2_stats.tx.total_packets) and int(tg1_stats.rx.total_packets)):
            st.error('Received ZERO stats.')
            tc_fail_flag = 1
        else:
            percent_rx = float(int(tg1_stats.rx.total_packets) - int(tg2_stats.tx.total_packets)) / int(
                tg2_stats.tx.total_packets) * 100
            st.log('tg1_stats.rx.total_packets : {}'.format(tg1_stats.rx.total_packets))
            st.log('tg2_stats.tx.total_packets : {}'.format(tg2_stats.tx.total_packets))
            st.log('percent_rx : {}'.format(percent_rx))
            if percent_rx > 0.5:
tc_fail_flag = 1
tg_ob.tg_emulation_bgp_control(handle=bgp_handle['handle'], mode='stop')
if tc_fail_flag:
st.report_fail("traffic_verification_failed")
st.report_pass('test_case_passed')
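The pass/fail verdict above reduces to a burst-loss percentage with a 0.5% tolerance; a minimal standalone sketch of that arithmetic (the helper name is invented for illustration):

```python
def burst_loss_percent(tx_packets, rx_packets):
    # Percentage of the transmitted burst that never arrived at the receiver.
    return float(tx_packets - rx_packets) / tx_packets * 100

# 2000-packet burst, 1995 received: 0.25% loss, within the 0.5% tolerance.
print(burst_loss_percent(2000, 1995))  # 0.25
# 1980 received: 1.0% loss, which would set tc_fail_flag.
print(burst_loss_percent(2000, 1980))  # 1.0
```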
    @pytest.mark.bgp6_rr_traffic
    def test_ft_bgp6_rr_traffic_check(self):
        TG_D1 = topo.tg_dut_list_name[0]
        TG_D2 = topo.tg_dut_list_name[1]
        tg_ob = topo['T1{}P1_tg_obj'.format(TG_D1)]
        bgp_handle = topo['T1{}P1_ipv6_tg_bh'.format(TG_D1)]
        tc_fail_flag = 0
        spine_as = int(bgplib.data['spine_as'])

        st.log("Advertising Routes from one of the Leaf Routers")
        bgp_route = tg_ob.tg_emulation_bgp_route_config(handle=bgp_handle['handle'], mode='add', ip_version='6',
                                                        num_routes='100',
                                                        prefix='1001::1', as_path='as_seq:1')
        bgp_ctrl = tg_ob.tg_emulation_bgp_control(handle=bgp_handle['handle'], mode='start')

        # Sleep for some time and then check the route count on the neighbour
        st.wait(10)
        bgp_summary = bgpapi.show_bgp_ipv6_summary(topo.dut_list[1])
        rib_entries = bgp_summary[0]['ribentries']
        st.log('RIB Entries : {}'.format(rib_entries))
        # when route-reflector is not configured at the server (spine), we should not learn anything at the
        # route-reflector-client (leaf node); ideally, the route count should be 0.
        if int(rib_entries) > 10:
            st.error('iBGP Routes are advertised to iBGP peer DUT, even when route-reflector-client is not configured')
            tc_fail_flag = 1

        # now configure route-reflector-client at spine node
        result = bgpapi.create_bgp_route_reflector_client(topo.dut_list[0], spine_as, 'ipv6', 'spine_leaf6', 'yes')
        if not result:
            st.log("Configuring client reflection on {} {} bgp {} Failed".format(topo.dut_list[0], 'ipv6', spine_as))
            tc_fail_flag = 1
        bgpapi.create_bgp_next_hop_self(topo.dut_list[0], spine_as, 'ipv6', 'spine_leaf6', 'yes', 'yes', cli_type=bgp_cli_type)

        st.wait(15)
        bgp_summary = bgpapi.show_bgp_ipv6_summary(topo.dut_list[1])
        rib_entries = bgp_summary[0]['ribentries']
        st.log('RIB Entries : {}'.format(rib_entries))
        if int(rib_entries) < 100:
            st.error('iBGP Routes are not advertised to route-reflector-client')
            tc_fail_flag = 1

        st.log("Initiating the Ipv6 traffic for those Routes from another Leaf Router")
        src_handle = 'handle'
        dst_handle = 'handles'
        if tg_ob.tg_type == 'ixia':
            src_handle = 'ipv6_handle'
            dst_handle = 'handle'
        tr1 = tg_ob.tg_traffic_config(port_handle=topo['T1{}P1_ipv6_tg_ph'.format(TG_D2)],
                                      emulation_src_handle=topo['T1{}P1_ipv6_tg_ih'.format(TG_D2)][src_handle],
                                      emulation_dst_handle=bgp_route[dst_handle], circuit_endpoint_type='ipv6',
                                      mode='create',
                                      transmit_mode='single_burst', pkts_per_burst='2000', length_mode='fixed',
                                      rate_pps=1000)
        stream_id1 = tr1['stream_id']
        tg_ob.tg_traffic_control(action='run', handle=stream_id1)
        st.wait(20)
        tg1_stats = tgapi.get_traffic_stats(tg_ob, port_handle=topo["T1{}P1_ipv6_tg_ph".format(TG_D1)])
        tg2_stats = tgapi.get_traffic_stats(tg_ob, port_handle=topo["T1{}P1_ipv6_tg_ph".format(TG_D2)])
        if not (int(tg2_stats.tx.total_packets) and int(tg1_stats.rx.total_packets)):
            st.error('Received ZERO stats.')
            tc_fail_flag = 1
        else:
            percent_rx = float(int(tg2_stats.tx.total_packets) - int(tg1_stats.rx.total_packets)) / int(
                tg2_stats.tx.total_packets) * 100
            st.log('tg1_stats.rx.total_packets : {}'.format(tg1_stats.rx.total_packets))
            st.log('tg2_stats.tx.total_packets : {}'.format(tg2_stats.tx.total_packets))
            st.log('percent_rx : {}'.format(percent_rx))
            if percent_rx > 0.5:
                tc_fail_flag = 1
        tg_ob.tg_emulation_bgp_control(handle=bgp_handle['handle'], mode='stop')

        if tc_fail_flag:
            st.report_fail("traffic_verification_failed")
        st.report_pass('test_case_passed')
# BGP Neighbor In L3 Over LAG fixture, class and test cases - END
################################################################################
| 49.241935 | 126 | 0.653865 | 1,703 | 12,212 | 4.376982 | 0.143277 | 0.010196 | 0.018782 | 0.041857 | 0.842769 | 0.802522 | 0.759324 | 0.738798 | 0.70271 | 0.647974 | 0 | 0.026761 | 0.225843 | 12,212 | 247 | 127 | 49.441296 | 0.761688 | 0.081313 | 0 | 0.544503 | 0 | 0 | 0.20096 | 0.024554 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04712 | false | 0.010471 | 0.031414 | 0 | 0.08377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a5991b92b59df4040eae9190aa6d6672fa8cbf37 | 230 | py | Python | spotdl/types/__init__.py | phcreery/spotdl-v4 | 3bd3768de10ae80b5e1ba3bbe6b792f7fc9f8dfc | [
"MIT"
] | 10 | 2022-01-03T15:00:34.000Z | 2022-03-18T19:55:37.000Z | spotdl/types/__init__.py | phcreery/spotdl-v4 | 3bd3768de10ae80b5e1ba3bbe6b792f7fc9f8dfc | [
"MIT"
] | 9 | 2022-01-15T05:43:35.000Z | 2022-03-16T17:57:47.000Z | spotdl/types/__init__.py | phcreery/spotdl-v4 | 3bd3768de10ae80b5e1ba3bbe6b792f7fc9f8dfc | [
"MIT"
] | 11 | 2022-01-03T15:00:22.000Z | 2022-03-27T19:27:05.000Z | """
Types for the spotdl package.
"""
from spotdl.types.song import Song
from spotdl.types.playlist import Playlist
from spotdl.types.album import Album
from spotdl.types.artist import Artist
from spotdl.types.saved import Saved
| 23 | 42 | 0.804348 | 35 | 230 | 5.285714 | 0.342857 | 0.27027 | 0.405405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121739 | 230 | 9 | 43 | 25.555556 | 0.915842 | 0.126087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5b28c104354e92346ca9cbec8e7ab3941c5998c | 64 | py | Python | libs/yowsup/yowsup/yowsup/layers/interface/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 22 | 2017-07-14T20:01:17.000Z | 2022-03-08T14:22:39.000Z | libs/yowsup/yowsup/yowsup/layers/interface/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 6 | 2017-07-14T21:03:50.000Z | 2021-06-10T19:08:32.000Z | libs/yowsup/yowsup/yowsup/layers/interface/__init__.py | akshitpradhan/TomHack | 837226e7b38de1140c19bc2d478eeb9e379ed1fd | [
"MIT"
] | 13 | 2017-07-14T20:13:14.000Z | 2020-11-12T08:06:05.000Z | from .interface import YowInterfaceLayer, ProtocolEntityCallback | 64 | 64 | 0.90625 | 5 | 64 | 11.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 64 | 1 | 64 | 64 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c4c752dff78fce480b8161e422a37091e20873a | 47 | py | Python | terraformspawner/__init__.py | sodre/terraformspawner | d1369371a861680311659b1a5dcea7b6f0b136db | [
"BSD-3-Clause"
] | null | null | null | terraformspawner/__init__.py | sodre/terraformspawner | d1369371a861680311659b1a5dcea7b6f0b136db | [
"BSD-3-Clause"
] | 10 | 2019-04-10T07:16:28.000Z | 2019-04-18T06:04:19.000Z | terraformspawner/__init__.py | sodre/terraformspawner | d1369371a861680311659b1a5dcea7b6f0b136db | [
"BSD-3-Clause"
] | null | null | null | from .terraformspawner import TerraformSpawner
| 23.5 | 46 | 0.893617 | 4 | 47 | 10.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3c5428a50611949dbae3f58695591f080ff856b1 | 43 | py | Python | build/lib.linux-x86_64-2.7/bucket/__init__.py | ibtesamlatif2997/python-GCS | dd535d4ddf76da4d24b2afb9031242e8b386539e | [
"MIT"
] | null | null | null | build/lib.linux-x86_64-2.7/bucket/__init__.py | ibtesamlatif2997/python-GCS | dd535d4ddf76da4d24b2afb9031242e8b386539e | [
"MIT"
] | 5 | 2021-03-19T10:14:40.000Z | 2021-06-10T19:54:46.000Z | build/lib.linux-x86_64-2.7/bucket/__init__.py | ibtesamlatif2997/python-GCS | dd535d4ddf76da4d24b2afb9031242e8b386539e | [
"MIT"
] | null | null | null | from .gcpbucket import UploadorDownloadFile | 43 | 43 | 0.906977 | 4 | 43 | 9.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c6eb1e9d59f31c6a6564f5f12202d3cf7783fa3 | 201 | py | Python | report/__init__.py | tonygalmiche/is_gestion_lot | ed27f6874443cc29ec2e7fe0c5a187a1dd9d5037 | [
"MIT"
] | null | null | null | report/__init__.py | tonygalmiche/is_gestion_lot | ed27f6874443cc29ec2e7fe0c5a187a1dd9d5037 | [
"MIT"
] | null | null | null | report/__init__.py | tonygalmiche/is_gestion_lot | ed27f6874443cc29ec2e7fe0c5a187a1dd9d5037 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import stock_bloquer_lot
import stock_debloquer_lot
import stock_change_location_lot
import stock_rebut_lot
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| 20.1 | 65 | 0.81592 | 29 | 201 | 5.344828 | 0.62069 | 0.283871 | 0.270968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021858 | 0.089552 | 201 | 9 | 66 | 22.333333 | 0.825137 | 0.422886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b1c9b45898019787f1c4db051cedaf9bda494d02 | 125 | py | Python | thor/utils/__init__.py | B612-Asteroid-Institute/thor | d3d1dcbe86f67a62c90b4cde3fc577e414825cf2 | [
"BSD-3-Clause"
] | null | null | null | thor/utils/__init__.py | B612-Asteroid-Institute/thor | d3d1dcbe86f67a62c90b4cde3fc577e414825cf2 | [
"BSD-3-Clause"
] | null | null | null | thor/utils/__init__.py | B612-Asteroid-Institute/thor | d3d1dcbe86f67a62c90b4cde3fc577e414825cf2 | [
"BSD-3-Clause"
] | null | null | null | from .io import *
from .spice import *
from .astropy import *
from .horizons import *
from .mpc import *
from .ades import *
| 17.857143 | 23 | 0.712 | 18 | 125 | 4.944444 | 0.444444 | 0.561798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192 | 125 | 6 | 24 | 20.833333 | 0.881188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1ce928d6a1048f0523947ff545abd2509cb2d94 | 8,357 | py | Python | tf_crnn/parse_args.py | sunmengnan/city_brain | 478f0b974f4491b4201956f37b83ce6860712bc8 | [
"MIT"
] | null | null | null | tf_crnn/parse_args.py | sunmengnan/city_brain | 478f0b974f4491b4201956f37b83ce6860712bc8 | [
"MIT"
] | null | null | null | tf_crnn/parse_args.py | sunmengnan/city_brain | 478f0b974f4491b4201956f37b83ce6860712bc8 | [
"MIT"
] | null | null | null | #!/usr/env/bin python3
import argparse
import os
import datetime
from libs.utils import check_dir_exist
from libs.config import load_config
OUTPUT_DIR = os.getenv("OUTPUT_DIR")
CURRENT_DIR = os.path.abspath(os.path.dirname(__file__))
def save_flags(args, save_dir):
    """
    Save flags into file, use date as file name
    :param args: tf.app.flags
    :param save_dir: dir to save flags file
    """
    filename = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + ".txt"
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    filepath = os.path.join(save_dir, filename)
    print("Save flags to %s" % filepath)

    cfg = load_config(args.cfg_name)
    with open(filepath, mode="w", encoding="utf-8") as f:
        d = vars(args)
        for k, v in d.items():
            f.write("%s: %s\n" % (k, v))
        print("=" * 30)
        for k, v in cfg.items():
            f.write('%s: %s\n' % (k, v))
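`save_flags` names its output after the current timestamp and serializes `vars(args)` line by line; a small deterministic sketch of both pieces (the `Namespace` fields are invented for the example, and a fixed timestamp stands in for `datetime.datetime.now()`):

```python
import argparse
import datetime

# Fixed timestamp so the example is deterministic.
stamp = datetime.datetime(2019, 1, 2, 3, 4, 5)
filename = stamp.strftime("%Y_%m_%d_%H_%M_%S") + ".txt"
print(filename)  # 2019_01_02_03_04_05.txt

# vars() on a Namespace yields the flag dict that gets written out.
args = argparse.Namespace(tag='default', val_step=5000)
lines = ["%s: %s" % (k, v) for k, v in vars(args).items()]
print(lines)  # ['tag: default', 'val_step: 5000']
```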
def parse_infer_args(infer=True):
    if OUTPUT_DIR is None:
        output_dir = os.path.join(CURRENT_DIR, 'output')
    else:
        output_dir = OUTPUT_DIR

    parser = argparse.ArgumentParser()
    parser.add_argument('--gpu', action='store_true', default=False)

    # In TFrecord mode, every tf_record file under the data directory is loaded
    # In JPG mode, jpg images and labels.txt are loaded from the second-level subdirectories
    # parser.add_argument('--train_dir', required=True)
    # parser.add_argument('--train_file_format', required=True, choices=['TF', 'JPG'])
    # parser.add_argument('--val_dir', required=True)
    # parser.add_argument('--val_file_format', required=True, choices=['TF', 'JPG'])
    # parser.add_argument('--test_dir', default=None, help='test only supports small image files')
    # parser.add_argument('--restore', action='store_true', help='Whether to restore checkpoint from ckpt_dir')
    # parser.add_argument('--restore_step', action='store_true', help='if restoring the step, lr is decreased accordingly')

    parser.add_argument('--tag', default='default', help='Subdirectory to create in checkpoint_dir/log_dir/result_dir')
    parser.add_argument('--ckpt_dir', default=os.path.join(output_dir, 'checkpoint'),
                        help='Directory to save tensorflow checkpoint')
    # parser.add_argument('--log_dir', default=os.path.join(output_dir, 'output/log'),
    #                     help='Directory to save tensorboard logs')
    parser.add_argument('--result_dir', default=os.path.join(output_dir, 'output/result'),
                        help='Directory to save val/test result')
    parser.add_argument('--chars_file',
                        default=os.path.join(CURRENT_DIR, 'data/ocr_chars/chn.txt'), help='Chars file to load')

    parser.add_argument('--cfg_name', default='raw', help="raw / squeeze / dense / resnet / simple")

    parser.add_argument('--val_step', type=int, default=5000, help='Steps to do val/test and save checkpoint')
    parser.add_argument('--log_step', type=int, default=50, help='Steps to save tensorboard summary')
    parser.add_argument('--display_step', type=int, default=10, help='Steps to print loss to stdout')

    # Only for inference
    parser.add_argument('--infer_dir', default='./data/demo', help='Directory storing infer images and labels')
    parser.add_argument('--infer_data_ordered', action='store_true', help='ground truth is stored in a labels.txt file')
    parser.add_argument('--load_sub_infer_dir', action='store_true', help='run inference on the subdirectories of infer_dir')
    parser.add_argument('--infer_copy_failed', action='store_true', help='copy wrongly predicted test images to a dedicated directory')
    parser.add_argument('--infer_batch_size', type=int, default=1)

    args, _ = parser.parse_known_args()

    # if (not infer) and (not os.path.exists(args.train_dir)):
    #     parser.error('train_dir not exist')
    # if (args.val_dir is not None) and (not os.path.exists(args.val_dir)):
    #     parser.error('val_dir not exist')
    # if (args.test_dir is not None) and (not os.path.exists(args.test_dir)):
    #     parser.error('test_dir not exist')
    if infer and (not os.path.exists(args.infer_dir)):
        parser.error('infer_dir not exist')

    args.ckpt_dir = os.path.join(args.ckpt_dir, args.tag)
    args.best_test_ckpt_dir = os.path.join(args.ckpt_dir, 'best_test_ckpt')
    args.flags_dir = os.path.join(args.ckpt_dir, "flags")
    # args.log_dir = os.path.join(args.log_dir, args.tag)
    args.result_dir = os.path.join(args.result_dir, args.tag)

    check_dir_exist(args.ckpt_dir)
    check_dir_exist(args.best_test_ckpt_dir)
    check_dir_exist(args.flags_dir)
    # check_dir_exist(args.log_dir)
    check_dir_exist(args.result_dir)

    save_flags(args, args.flags_dir)

    print('Use %s as base net' % args.cfg_name)

    return args
def parse_args(infer=False):
    if OUTPUT_DIR is None:
        output_dir = os.path.join(CURRENT_DIR, 'output')
    else:
        output_dir = OUTPUT_DIR

    parser = argparse.ArgumentParser()
    parser.add_argument('--gpu', action='store_true', default=False)

    # In TFrecord mode, every tf_record file under the data directory is loaded
    # In JPG mode, jpg images and labels.txt are loaded from the second-level subdirectories
    parser.add_argument('--train_dir', required=True)
    parser.add_argument('--train_file_format', required=True, choices=['TF', 'JPG'])
    parser.add_argument('--val_dir', required=True)
    parser.add_argument('--val_file_format', required=True, choices=['TF', 'JPG'])
    parser.add_argument('--test_dir', default=None, help='test only supports small image files')
    parser.add_argument('--restore', action='store_true', help='Whether to restore checkpoint from ckpt_dir')
    parser.add_argument('--restore_step', action='store_true', help='if restoring the step, lr is decreased accordingly')

    parser.add_argument('--tag', default='default', help='Subdirectory to create in checkpoint_dir/log_dir/result_dir')
    parser.add_argument('--ckpt_dir', default=os.path.join(output_dir, 'checkpoint'),
                        help='Directory to save tensorflow checkpoint')
    parser.add_argument('--log_dir', default=os.path.join(output_dir, 'output/log'),
                        help='Directory to save tensorboard logs')
    parser.add_argument('--result_dir', default=os.path.join(output_dir, 'output/result'),
                        help='Directory to save val/test result')
    parser.add_argument('--chars_file',
                        default=os.path.join(CURRENT_DIR, 'data/ocr_chars/chn.txt'), help='Chars file to load')

    parser.add_argument('--cfg_name', default='raw', help="raw / squeeze / dense / resnet / simple")

    parser.add_argument('--val_step', type=int, default=5000, help='Steps to do val/test and save checkpoint')
    parser.add_argument('--log_step', type=int, default=50, help='Steps to save tensorboard summary')
    parser.add_argument('--display_step', type=int, default=10, help='Steps to print loss to stdout')

    # Only for inference
    parser.add_argument('--infer_dir', default='./data/demo', help='Directory storing infer images and labels')
    parser.add_argument('--infer_data_ordered', action='store_true', help='ground truth is stored in a labels.txt file')
    parser.add_argument('--load_sub_infer_dir', action='store_true', help='run inference on the subdirectories of infer_dir')
    parser.add_argument('--infer_copy_failed', action='store_true', help='copy wrongly predicted test images to a dedicated directory')
    parser.add_argument('--infer_batch_size', type=int, default=1)

    args, _ = parser.parse_known_args()

    if (not infer) and (not os.path.exists(args.train_dir)):
        parser.error('train_dir not exist')
    if (args.val_dir is not None) and (not os.path.exists(args.val_dir)):
        parser.error('val_dir not exist')
    if (args.test_dir is not None) and (not os.path.exists(args.test_dir)):
        parser.error('test_dir not exist')
    if infer and (not os.path.exists(args.infer_dir)):
        parser.error('infer_dir not exist')

    args.ckpt_dir = os.path.join(args.ckpt_dir, args.tag)
    args.best_test_ckpt_dir = os.path.join(args.ckpt_dir, 'best_test_ckpt')
    args.flags_dir = os.path.join(args.ckpt_dir, "flags")
    args.log_dir = os.path.join(args.log_dir, args.tag)
    args.result_dir = os.path.join(args.result_dir, args.tag)

    check_dir_exist(args.ckpt_dir)
    check_dir_exist(args.best_test_ckpt_dir)
    check_dir_exist(args.flags_dir)
    check_dir_exist(args.log_dir)
    check_dir_exist(args.result_dir)

    save_flags(args, args.flags_dir)

    print('Use %s as base net' % args.cfg_name)

    return args
if __name__ == '__main__':
    args = parse_args()
| 42.637755 | 119 | 0.68649 | 1,230 | 8,357 | 4.445528 | 0.137398 | 0.072421 | 0.136796 | 0.02853 | 0.891368 | 0.891368 | 0.891368 | 0.891368 | 0.885516 | 0.885516 | 0 | 0.003172 | 0.170037 | 8,357 | 195 | 120 | 42.85641 | 0.785179 | 0.168482 | 0 | 0.631579 | 0 | 0 | 0.268618 | 0.015937 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.04386 | 0 | 0.087719 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b1d08507f193d3613adccdce2fa72ba603ae18e4 | 73 | py | Python | MobuToMaya/MobuToMayaTools/__init__.py | jazzboysc/SERiggingTools | 41289589b88bc812240f6f87359456dbc1a209cd | [
"MIT"
] | 4 | 2020-06-10T07:54:47.000Z | 2021-04-22T01:57:02.000Z | MobuToMaya/MobuToMayaTools/__init__.py | jazzboysc/SERiggingTools | 41289589b88bc812240f6f87359456dbc1a209cd | [
"MIT"
] | null | null | null | MobuToMaya/MobuToMayaTools/__init__.py | jazzboysc/SERiggingTools | 41289589b88bc812240f6f87359456dbc1a209cd | [
"MIT"
] | null | null | null | import MobuServer2020
import SendAnimationToMayaTool
import SendToMayaUI
| 18.25 | 30 | 0.917808 | 6 | 73 | 11.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 0.082192 | 73 | 3 | 31 | 24.333333 | 0.940299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1d9dee0f67766888eb055365298f1b2e9982fed | 34 | py | Python | groundhog/datasets/__init__.py | wavelets/GroundHog | 7ca23c9d741d3b10912c71c9a9ac883e86e70f17 | [
"BSD-3-Clause"
] | 1 | 2015-10-06T22:03:06.000Z | 2015-10-06T22:03:06.000Z | groundhog/datasets/__init__.py | wavelets/GroundHog | 7ca23c9d741d3b10912c71c9a9ac883e86e70f17 | [
"BSD-3-Clause"
] | null | null | null | groundhog/datasets/__init__.py | wavelets/GroundHog | 7ca23c9d741d3b10912c71c9a9ac883e86e70f17 | [
"BSD-3-Clause"
] | null | null | null | from LM_dataset import LMIterator
| 17 | 33 | 0.882353 | 5 | 34 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1e449451e4fb0993af8a57d3c072e5c996dc132 | 15,944 | py | Python | causal_hdf5_runner.py | mbchang/multiagent-particle-envs | 9535594effd62c7682b7d6c41c2acbcdd9d56d64 | [
"MIT"
] | null | null | null | causal_hdf5_runner.py | mbchang/multiagent-particle-envs | 9535594effd62c7682b7d6c41c2acbcdd9d56d64 | [
"MIT"
] | null | null | null | causal_hdf5_runner.py | mbchang/multiagent-particle-envs | 9535594effd62c7682b7d6c41c2acbcdd9d56d64 | [
"MIT"
] | null | null | null | import argparse
from collections import OrderedDict
import numpy as np
import os
import itertools
import time
parser = argparse.ArgumentParser()
parser.add_argument('--for-real', action='store_true')
args = parser.parse_args()
def product_dict(**kwargs):
    keys = kwargs.keys()
    vals = kwargs.values()
    for instance in itertools.product(*vals):
        yield dict(zip(keys, instance))
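For reference, `product_dict` expands the per-flag value lists into every combination; a small self-contained check of that behaviour (the flag names are made up for illustration):

```python
import itertools

def product_dict(**kwargs):
    # Yield one dict per element of the cartesian product of the value lists.
    keys = kwargs.keys()
    vals = kwargs.values()
    for instance in itertools.product(*vals):
        yield dict(zip(keys, instance))

# Two values for 'lr' times three values for 'seed' -> 6 flag dicts.
combos = list(product_dict(lr=['0.1', '0.01'], seed=['0', '1', '2']))
print(len(combos))  # 6
print(combos[0])    # {'lr': '0.1', 'seed': '0'}
```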
class Runner():
    def __init__(self, command='python3 information_economy/scratch/vickrey.py', gpus=[]):
        self.gpus = gpus
        self.command = command
        self.flags = {}

    def add_flag(self, flag_name, flag_values=''):
        self.flags[flag_name] = flag_values

    def append_flags_to_command(self, command, flag_dict):
        for flag_name, flag_value in flag_dict.items():
            if type(flag_value) == bool:
                if flag_value == True:
                    command += ' --{}'.format(flag_name)
            else:
                command += ' --{} {}'.format(flag_name, flag_value)
        return command

    def command_prefix(self, i):
        prefix = 'CUDA_VISIBLE_DEVICES={} DISPLAY=:0 '.format(self.gpus[i]) if len(self.gpus) > 0 else 'DISPLAY=:0 '
        command = prefix + self.command
        return command

    def command_suffix(self, command):
        # if len(self.gpus) == 0:
        #     command += ' --cpu'
        # command += ' --printf'
        command += ' &'
        return command

    def generate_commands(self, execute=False):
        i = 0
        j = 0
        for flag_dict in product_dict(**self.flags):
            command = self.command_prefix(i)
            command = self.append_flags_to_command(command, flag_dict)
            command = self.command_suffix(command)
            print(command)
            if execute:
                os.system(command)
            if len(self.gpus) > 0:
                i = (i + 1) % len(self.gpus)
            j += 1
        print('Launched {} jobs'.format(j))
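As a usage sketch (the script path and flag names below are illustrative, not from this repo), `append_flags_to_command` renders booleans as bare switches when `True`, drops them when `False`, and writes everything else as `--name value`:

```python
def append_flags_to_command(command, flag_dict):
    # Standalone copy of Runner.append_flags_to_command for illustration.
    for flag_name, flag_value in flag_dict.items():
        if type(flag_value) == bool:
            if flag_value == True:
                command += ' --{}'.format(flag_name)
        else:
            command += ' --{} {}'.format(flag_name, flag_value)
    return command

cmd = append_flags_to_command('python train.py',
                              {'num_episodes': '10', 'render': True, 'cpu': False})
print(cmd)  # python train.py --num_episodes 10 --render
```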
class RunnerWithIDs(Runner):
    def __init__(self, command, gpus):
        Runner.__init__(self, command, gpus)

    def product_dict(self, **kwargs):
        ordered_kwargs_dict = OrderedDict()
        for k, v in kwargs.items():
            if not k == 'seed':
                ordered_kwargs_dict[k] = v

        keys = ordered_kwargs_dict.keys()
        vals = ordered_kwargs_dict.values()
        for instance in itertools.product(*vals):
            yield dict(zip(keys, instance))

    def generate_commands(self, execute=False):
        if 'seed' not in self.flags:
            Runner.generate_commands(self, execute)
        else:
            i = 0
            j = 0
            for flag_dict in self.product_dict(**self.flags):
                command = self.command_prefix(i)
                command = self.append_flags_to_command(command, flag_dict)

                # add exp_id: one exp_id for each flag_dict.
                exp_id = ''.join(str(s) for s in np.random.randint(10, size=7))
                command += ' --expid {}'.format(exp_id)

                # command doesn't get modified from here on
                for seed in self.flags['seed']:
                    seeded_command = command
                    seeded_command += ' --seed {}'.format(seed)
                    seeded_command = self.command_suffix(seeded_command)
                    print(seeded_command)
                    if execute:
                        os.system(seeded_command)
                    if len(self.gpus) > 0:
                        i = (i + 1) % len(self.gpus)
                    j += 1
            print('Launched {} jobs'.format(j))
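The point of `RunnerWithIDs` is that `'seed'` is held out of the cartesian product, so every seed of one configuration shares a single `--expid`. A sketch of just that holdout behaviour (flag names are illustrative):

```python
from collections import OrderedDict
import itertools

def product_dict_without_seed(**kwargs):
    # Mirrors RunnerWithIDs.product_dict: drop 'seed' before taking the product.
    ordered = OrderedDict((k, v) for k, v in kwargs.items() if k != 'seed')
    for instance in itertools.product(*ordered.values()):
        yield dict(zip(ordered.keys(), instance))

combos = list(product_dict_without_seed(lr=['0.1', '0.01'], seed=['0', '1', '2']))
# 2 configurations, not 6: seeds do not multiply the sweep.
print(len(combos))  # 2
```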
def all_counterfactuals_draft1_7_6_2021():
    """
    for laptop
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['20'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['4'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing'])
    r.generate_commands(execute=args.for_real)
def all_counterfactuals_geb_7_6_2021():
    """
    for geb
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10000'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['4'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing'])
    r.generate_commands(execute=args.for_real)
def all_counterfactuals_earlier_geb_7_7_2021():
    """
    for geb
    intervention step at 2
    T = 8
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10000'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['2'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing_s2_t8'])
    r.generate_commands(execute=args.for_real)
def all_counterfactuals_earlier_baobab_7_21_2021():
    """
    for baobab
    intervention step at 2
    T = 8
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['20'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['2'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing_s2_t8'])
    r.generate_commands(execute=args.for_real)
def testing_colors_baobab_7_22_2021():
    """
    for baobab
    intervention step at 2
    T = 8
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['20'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['2'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing_s2_t8_colors'])
    r.generate_commands(execute=args.for_real)
def colors_geb_7_22_2021():
    """
    for geb
    intervention step at 2
    T = 8
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10000'])
    r.add_flag('max_episode_length', ['8'])
    r.add_flag('t_intervene', ['2'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing_s2_t8_colors'])
    r.generate_commands(execute=args.for_real)
def horizon20_geb_8_27_2021():
    """
    T = 20
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['1000'])
    r.add_flag('max_episode_length', ['20'])
    r.add_flag('t_intervene', ['5'])
    r.add_flag('intervention_type', ['displacement', 'addition', 'removal', 'force'])
    r.add_flag('data_root', ['intervenable_bouncing_s5_t20_colors'])
    r.generate_commands(execute=args.for_real)
def horizon20_baobab_8_27_2021():
    """
    T = 20
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['20'])
    r.add_flag('t_intervene', ['5'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['intervenable_bouncing_s5_t20_colors'])
    r.generate_commands(execute=args.for_real)
def n8_s5_t20_baobab_9_5_2021():
    """
    K = 8 entities, intervention at t = 5, T = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['5'])
    r.add_flag('num_entities', ['8'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['intervenable_bouncing_k8_s5_t10'])
    r.generate_commands(execute=args.for_real)
def displacement_debug_baobab_9_16_2021():
    """
    T = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0', '5'])
    r.add_flag('num_entities', ['4', '8'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['displacement_debug'])
    r.generate_commands(execute=args.for_real)
def displacement_geb_9_16_2021():
    """
    T = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['2000'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0', '1', '2', '3', '4', '5'])
    r.add_flag('num_entities', ['4', '8'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['displacement'])
    r.generate_commands(execute=args.for_real)
def distshift_debug_baobab_9_21_2021():
    """
    T = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['distshift_debug'])
    r.generate_commands(execute=args.for_real)
def distshift_geb_9_21_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['2000'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('color_dist', [
        'uniform_k20',
        'context_swap_k4_4505_a',
        'context_swap_k4_4505_b',
        'context_swap_k4_5000_a',
        'context_swap_k4_5000_b',
        'multiplicity_k20',
        'fcontext_swap_k4_752500_a',
        'fcontext_swap_k4_752500_b',
    ])
    r.add_flag('data_root', ['distshift'])
    r.generate_commands(execute=args.for_real)
def distshift_baobab_9_21_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing.py', gpus=[])
    r.add_flag('num_episodes', ['100'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('color_dist', [
        # 'uniform_k20',
        # 'context_swap_k4_4505_a',
        # 'context_swap_k4_4505_b',
        # 'context_swap_k4_5000_a',
        # 'context_swap_k4_5000_b',
        # 'multiplicity_k20',
        'fcontext_swap_k4_752500_a',
        'fcontext_swap_k4_752500_b',
    ])
    r.add_flag('data_root', ['distshift'])
    r.generate_commands(execute=args.for_real)
def whiteball_push_baobab_9_24_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing_white_action.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('color_dist', ['uniform_k20'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['whiteballpush'])
    r.generate_commands(execute=args.for_real)
def whiteball_push_geb_9_24_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing_white_action.py', gpus=[])
    r.add_flag('num_episodes', ['2000'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('color_dist', ['uniform_k20'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['whiteballpush'])
    r.generate_commands(execute=args.for_real)
def whiteball_push_baobab_9_27_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing_white_action.py', gpus=[])
    r.add_flag('num_episodes', ['10'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['1'])
    r.add_flag('color_dist', ['uniform_k20'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['singlewhiteballpush'])
    r.generate_commands(execute=args.for_real)
def whiteball_push_geb_9_27_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing_white_action.py', gpus=[])
    r.add_flag('num_episodes', ['2000'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['1'])
    r.add_flag('color_dist', ['uniform_k20'])
    r.add_flag('intervention_type', ['displacement'])
    r.add_flag('data_root', ['singlewhiteballpush'])
    r.generate_commands(execute=args.for_real)
def whiteball_push_gauss1_10_25_2021():
    """
    t = 10
    """
    r = RunnerWithIDs(command='python bin/counterfactual_hdf5.py --scenario intervenable_bouncing_white_action.py', gpus=[])
    r.add_flag('num_episodes', ['2000'])
    r.add_flag('max_episode_length', ['10'])
    r.add_flag('t_intervene', ['0'])
    r.add_flag('num_entities', ['4'])
    r.add_flag('color_dist', ['uniform_k20'])
    r.add_flag('intervention_type', ['displacement'])
    # NOTE: data_root carried over from the single-ball runs despite num_entities=4.
    r.add_flag('data_root', ['singlewhiteballpush'])
    r.generate_commands(execute=args.for_real)
if __name__ == '__main__':
    # all_counterfactuals_draft1_7_6_2021()
    # all_counterfactuals_geb_7_6_2021()
    # all_counterfactuals_earlier_geb_7_7_2021()
    # all_counterfactuals_earlier_baobab_7_21_2021()
    # testing_colors_baobab_7_22_2021()
    # colors_geb_7_22_2021()
    # horizon20_geb_8_27_2021()
    # horizon20_baobab_8_27_2021()
    # n8_s5_t20_baobab_9_5_2021()
    # displacement_debug_baobab_9_16_2021()
    # displacement_geb_9_16_2021()
    # distshift_debug_baobab_9_21_2021()
    # distshift_geb_9_21_2021()
    # distshift_baobab_9_21_2021()
    # whiteball_push_baobab_9_24_2021()
    # whiteball_push_geb_9_24_2021()
    # whiteball_push_baobab_9_27_2021()
    # whiteball_push_geb_9_27_2021()
    whiteball_push_gauss1_10_25_2021()