max_stars_repo_path stringlengths 4 286 | max_stars_repo_name stringlengths 5 119 | max_stars_count int64 0 191k | id stringlengths 1 7 | content stringlengths 6 1.03M | content_cleaned stringlengths 6 1.03M | language stringclasses 111 values | language_score float64 0.03 1 | comments stringlengths 0 556k | edu_score float64 0.32 5.03 | edu_int_score int64 0 5 |
|---|---|---|---|---|---|---|---|---|---|---|
003.py | gconsidine/project-euler | 1 | 6615251 | <filename>003.py<gh_stars>1-10
#<NAME>
#Project Euler -- Problem 3
num = 600851475143
i = num
j = 2
largestPrimeFactor = 0
while j <= num / 2:
if num % j == 0:
i = num / j
if num % i == 0 and (i % 2 != 0 or i == 2) and i != num:
k = 2
while k <= i:
if k != i and i % k == 0:
break
else:
if k >= i / 2:
largestPrimeFactor = i
break
k += 1
if largestPrimeFactor != 0:
break
j += 1
print(largestPrimeFactor)
'''
My first attempt to solve this problem was sort of brute force, I had lists
involved and iterated through an entire large number, one smaller number at
a time searching for factors. Once I found a factor, I would search through
it one number at a time to check if it was divisible by any number other than
itself and 1 (i.e. prime).
This worked fine for smaller numbers, but was definitely not cutting it for
finding the largest prime factor of a 12 digit number.
I optimized in the following ways:
-Removed lists entirely (they served no purpose)
-Rather than incrementing through the large number, i.e. finding the smallest
factors first, I reversed the loop, decrementing and breaking as soon as the
first prime factor is found.
-Stopped plus-one incrementation and instead divided the number in question
with by an incrementing value (e.g. 600851 / 2 -> 600851 / 3 -> 600851 / 4)
after first confirming the remainder of the division operation would
equal zero. I incremented starting at 2 and ignored anything greater
than 1/2 the number in question since it would be impossible to have
a factor larger than 1/2 the number other than the number itself.
-Adjusted the inner loop so that it doesn't increment higher than 1/2 of
the potential prime factor. Anything above 1/2 of the value can be
discared for the same reasons as above.
-Ignore even factors (other than two) when found right away since there are
no even prime numbers other than 2.
Interesting to note than an 11 digit number takes hardly any time at all, but
the 12 digit number took roughly an hour. That could just be because the
largest prime factor of 600851475143 is relatively small.
'''
| <filename>003.py<gh_stars>1-10
#<NAME>
#Project Euler -- Problem 3
num = 600851475143
i = num
j = 2
largestPrimeFactor = 0
while j <= num / 2:
if num % j == 0:
i = num / j
if num % i == 0 and (i % 2 != 0 or i == 2) and i != num:
k = 2
while k <= i:
if k != i and i % k == 0:
break
else:
if k >= i / 2:
largestPrimeFactor = i
break
k += 1
if largestPrimeFactor != 0:
break
j += 1
print(largestPrimeFactor)
'''
My first attempt to solve this problem was sort of brute force, I had lists
involved and iterated through an entire large number, one smaller number at
a time searching for factors. Once I found a factor, I would search through
it one number at a time to check if it was divisible by any number other than
itself and 1 (i.e. prime).
This worked fine for smaller numbers, but was definitely not cutting it for
finding the largest prime factor of a 12 digit number.
I optimized in the following ways:
-Removed lists entirely (they served no purpose)
-Rather than incrementing through the large number, i.e. finding the smallest
factors first, I reversed the loop, decrementing and breaking as soon as the
first prime factor is found.
-Stopped plus-one incrementation and instead divided the number in question
with by an incrementing value (e.g. 600851 / 2 -> 600851 / 3 -> 600851 / 4)
after first confirming the remainder of the division operation would
equal zero. I incremented starting at 2 and ignored anything greater
than 1/2 the number in question since it would be impossible to have
a factor larger than 1/2 the number other than the number itself.
-Adjusted the inner loop so that it doesn't increment higher than 1/2 of
the potential prime factor. Anything above 1/2 of the value can be
discared for the same reasons as above.
-Ignore even factors (other than two) when found right away since there are
no even prime numbers other than 2.
Interesting to note than an 11 digit number takes hardly any time at all, but
the 12 digit number took roughly an hour. That could just be because the
largest prime factor of 600851475143 is relatively small.
'''
| en | 0.952809 | #<NAME> #Project Euler -- Problem 3 My first attempt to solve this problem was sort of brute force, I had lists involved and iterated through an entire large number, one smaller number at a time searching for factors. Once I found a factor, I would search through it one number at a time to check if it was divisible by any number other than itself and 1 (i.e. prime). This worked fine for smaller numbers, but was definitely not cutting it for finding the largest prime factor of a 12 digit number. I optimized in the following ways: -Removed lists entirely (they served no purpose) -Rather than incrementing through the large number, i.e. finding the smallest factors first, I reversed the loop, decrementing and breaking as soon as the first prime factor is found. -Stopped plus-one incrementation and instead divided the number in question with by an incrementing value (e.g. 600851 / 2 -> 600851 / 3 -> 600851 / 4) after first confirming the remainder of the division operation would equal zero. I incremented starting at 2 and ignored anything greater than 1/2 the number in question since it would be impossible to have a factor larger than 1/2 the number other than the number itself. -Adjusted the inner loop so that it doesn't increment higher than 1/2 of the potential prime factor. Anything above 1/2 of the value can be discared for the same reasons as above. -Ignore even factors (other than two) when found right away since there are no even prime numbers other than 2. Interesting to note than an 11 digit number takes hardly any time at all, but the 12 digit number took roughly an hour. That could just be because the largest prime factor of 600851475143 is relatively small. | 3.672218 | 4 |
valet/abilities/taught/joke.py | sadmicrowave/valet | 0 | 6615252 | ###########################################################################
#
## @file joke.py
#
###########################################################################
import pyjokes
###########################################################################
class Joke :
"""#######################################################################
A wrapper for Valet to call and interact with the jokes module. Returning
a random joke from the provided joke sources and input mechanisms
Params
@valet / instance of valet itself to interact with main object
@input / str > the raw input string from the user
Returns / str > new random joke string
#######################################################################
"""
def __init__ ( self, valet=None, input=None, ) :
pass
#self.valet = valet
#######################################################################
#
## Instance executor so the execution must be directly and intentionally
# called rather than calling upon instantiation
#
#######################################################################
def execute ( self, ) -> str :
return pyjokes.get_joke( ).capitalize( ) | ###########################################################################
#
## @file joke.py
#
###########################################################################
import pyjokes
###########################################################################
class Joke :
"""#######################################################################
A wrapper for Valet to call and interact with the jokes module. Returning
a random joke from the provided joke sources and input mechanisms
Params
@valet / instance of valet itself to interact with main object
@input / str > the raw input string from the user
Returns / str > new random joke string
#######################################################################
"""
def __init__ ( self, valet=None, input=None, ) :
pass
#self.valet = valet
#######################################################################
#
## Instance executor so the execution must be directly and intentionally
# called rather than calling upon instantiation
#
#######################################################################
def execute ( self, ) -> str :
return pyjokes.get_joke( ).capitalize( ) | de | 0.591331 | ########################################################################### # ## @file joke.py # ########################################################################### ########################################################################### ####################################################################### A wrapper for Valet to call and interact with the jokes module. Returning a random joke from the provided joke sources and input mechanisms Params @valet / instance of valet itself to interact with main object @input / str > the raw input string from the user Returns / str > new random joke string ####################################################################### #self.valet = valet ####################################################################### # ## Instance executor so the execution must be directly and intentionally # called rather than calling upon instantiation # ####################################################################### | 2.556671 | 3 |
ingestion/src/metadata/orm_profiler/profiles/core.py | cometta/OpenMetadata | 0 | 6615253 | # Copyright 2021 Collate
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Main Profile definition and queries to execute
"""
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional, Tuple
from sqlalchemy.orm import DeclarativeMeta, Query
from sqlalchemy.orm.session import Session
from metadata.orm_profiler.metrics.core import (
ComposedMetric,
CustomMetric,
Metric,
QueryMetric,
StaticMetric,
TimeMetric,
)
from metadata.orm_profiler.orm.registry import NOT_COMPUTE
from metadata.orm_profiler.utils import logger
logger = logger()
class Profiler(ABC):
"""
Core Profiler.
A profiler is composed of:
- A session, used to run the queries against.
- An ORM Table. One profiler attacks one table at a time.
- A tuple of metrics, from which we will construct queries.
"""
def __init__(self, *metrics: Metric, session: Session, table):
if not isinstance(table, DeclarativeMeta):
raise ValueError(f"Table {table} should be a DeclarativeMeta.")
self._session = session
self._table = table
self._metrics = metrics
self._results: Optional[Dict[str, Any]] = None
@property
def session(self) -> Session:
return self._session
@property
def table(self) -> DeclarativeMeta:
return self._table
@property
def metrics(self) -> Tuple[Metric]:
return self._metrics
@property
def results(self) -> Optional[Dict[str, Any]]:
"""
Iterate over the _metrics to pick up
all values from _results dict.
Note that if some Metric does not run against
a specific column (e.g., STDDEV only runs against
numerical columns), then the metric won't appear
in _results. However, we still want that
result to be available, even if it is `None`.
Here we prepare the logic to being able to have
the complete suite of computed metrics.
"""
if not self._results:
return None
results = {
metric.name(): self._results.get(metric.name()) for metric in self.metrics
}
return results
@results.setter
def results(self, value: Dict[str, Any]):
"""
If we have not yet computed any result, use the
incoming value, otherwise, update the dict.
"""
if not isinstance(value, dict):
raise ValueError(
f"Trying to set value {value} to profiler results, but value should be a dict."
)
if not self._results:
self._results = value
else:
self._results.update(value)
def _filter_metrics(self, _type: type): # Type of class is `type`
"""
Filter metrics by type
"""
return [metric for metric in self.metrics if isinstance(metric, _type)]
@property
def static_metrics(self):
return self._filter_metrics(StaticMetric)
@property
def time_metrics(self):
return self._filter_metrics(TimeMetric)
@property
def composed_metrics(self):
return self._filter_metrics(ComposedMetric)
@property
def custom_metrics(self):
return self._filter_metrics(CustomMetric)
@property
def query_metrics(self):
return self._filter_metrics(QueryMetric)
def build_col_query(self) -> Optional[Query]:
"""
Build the query with all the column static metrics.
Can return None if no metric has an allowed
type. In that case, we cannot build an empty
query.
"""
allowed_metrics = [
metric.fn()
for metric in self.static_metrics
if metric.col
is not None # If metric is None, it is a table metric and gets executed in `sql_table_run`
and metric.col.type.__class__ not in NOT_COMPUTE
]
if not allowed_metrics:
return
query = self.session.query(*allowed_metrics)
return query
@abstractmethod
def sql_col_run(self):
"""
Run the profiler and obtain the results,
e.g. build_query().first(), or all()
Data should be saved under self.results
"""
def sql_table_run(self):
"""
Run Table Static metrics
"""
# Table metrics do not have column informed
table_metrics = [metric for metric in self.static_metrics if metric.col is None]
for metric in table_metrics:
row = self.session.query(metric.fn()).select_from(self.table).first()
self.results = dict(row)
def sql_query_run(self):
"""
Run QueryMetrics
"""
for metric in self.query_metrics:
query_res = metric.query(session=self.session).all()
# query_res has the shape of List[Row], where each row is a dict,
# e.g., [{colA: 1, colB: 2},...]
# We are going to transform this into a Dict[List] by pivoting, so that
# data = {colA: [1,2,3], colB: [4,5,6]...}
data = {k: [dic[k] for dic in query_res] for k in dict(query_res[0])}
self.results = {metric.name(): data}
def post_run(self):
"""
Run this after the metrics have been computed
Data should be saved under self.results
"""
logger.info("Running post Profiler...")
if not self._results:
return
for metric in self.composed_metrics:
# Composed metrics require the results as an argument
self._results[metric.name()] = metric.fn(self.results)
def execute(self) -> Optional[Dict[str, Any]]:
"""
Run the whole profiling
"""
self.sql_table_run()
self.sql_col_run()
self.sql_query_run()
self.post_run()
return self.results
class SingleProfiler(Profiler):
"""
Basic Profiler.
Passing a set of metrics, it runs them all.
Returns a single ROW
"""
def sql_col_run(self) -> Dict[str, Any]:
"""
Run the profiler and store its results
This should only execute column metrics.
"""
logger.info("Running SQL Profiler...")
query = self.build_col_query()
if query:
row = query.first()
self.results = dict(row)
return self.results
| # Copyright 2021 Collate
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Main Profile definition and queries to execute
"""
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional, Tuple
from sqlalchemy.orm import DeclarativeMeta, Query
from sqlalchemy.orm.session import Session
from metadata.orm_profiler.metrics.core import (
ComposedMetric,
CustomMetric,
Metric,
QueryMetric,
StaticMetric,
TimeMetric,
)
from metadata.orm_profiler.orm.registry import NOT_COMPUTE
from metadata.orm_profiler.utils import logger
logger = logger()
class Profiler(ABC):
"""
Core Profiler.
A profiler is composed of:
- A session, used to run the queries against.
- An ORM Table. One profiler attacks one table at a time.
- A tuple of metrics, from which we will construct queries.
"""
def __init__(self, *metrics: Metric, session: Session, table):
if not isinstance(table, DeclarativeMeta):
raise ValueError(f"Table {table} should be a DeclarativeMeta.")
self._session = session
self._table = table
self._metrics = metrics
self._results: Optional[Dict[str, Any]] = None
@property
def session(self) -> Session:
return self._session
@property
def table(self) -> DeclarativeMeta:
return self._table
@property
def metrics(self) -> Tuple[Metric]:
return self._metrics
@property
def results(self) -> Optional[Dict[str, Any]]:
"""
Iterate over the _metrics to pick up
all values from _results dict.
Note that if some Metric does not run against
a specific column (e.g., STDDEV only runs against
numerical columns), then the metric won't appear
in _results. However, we still want that
result to be available, even if it is `None`.
Here we prepare the logic to being able to have
the complete suite of computed metrics.
"""
if not self._results:
return None
results = {
metric.name(): self._results.get(metric.name()) for metric in self.metrics
}
return results
@results.setter
def results(self, value: Dict[str, Any]):
"""
If we have not yet computed any result, use the
incoming value, otherwise, update the dict.
"""
if not isinstance(value, dict):
raise ValueError(
f"Trying to set value {value} to profiler results, but value should be a dict."
)
if not self._results:
self._results = value
else:
self._results.update(value)
def _filter_metrics(self, _type: type): # Type of class is `type`
"""
Filter metrics by type
"""
return [metric for metric in self.metrics if isinstance(metric, _type)]
@property
def static_metrics(self):
return self._filter_metrics(StaticMetric)
@property
def time_metrics(self):
return self._filter_metrics(TimeMetric)
@property
def composed_metrics(self):
return self._filter_metrics(ComposedMetric)
@property
def custom_metrics(self):
return self._filter_metrics(CustomMetric)
@property
def query_metrics(self):
return self._filter_metrics(QueryMetric)
def build_col_query(self) -> Optional[Query]:
"""
Build the query with all the column static metrics.
Can return None if no metric has an allowed
type. In that case, we cannot build an empty
query.
"""
allowed_metrics = [
metric.fn()
for metric in self.static_metrics
if metric.col
is not None # If metric is None, it is a table metric and gets executed in `sql_table_run`
and metric.col.type.__class__ not in NOT_COMPUTE
]
if not allowed_metrics:
return
query = self.session.query(*allowed_metrics)
return query
@abstractmethod
def sql_col_run(self):
"""
Run the profiler and obtain the results,
e.g. build_query().first(), or all()
Data should be saved under self.results
"""
def sql_table_run(self):
"""
Run Table Static metrics
"""
# Table metrics do not have column informed
table_metrics = [metric for metric in self.static_metrics if metric.col is None]
for metric in table_metrics:
row = self.session.query(metric.fn()).select_from(self.table).first()
self.results = dict(row)
def sql_query_run(self):
"""
Run QueryMetrics
"""
for metric in self.query_metrics:
query_res = metric.query(session=self.session).all()
# query_res has the shape of List[Row], where each row is a dict,
# e.g., [{colA: 1, colB: 2},...]
# We are going to transform this into a Dict[List] by pivoting, so that
# data = {colA: [1,2,3], colB: [4,5,6]...}
data = {k: [dic[k] for dic in query_res] for k in dict(query_res[0])}
self.results = {metric.name(): data}
def post_run(self):
"""
Run this after the metrics have been computed
Data should be saved under self.results
"""
logger.info("Running post Profiler...")
if not self._results:
return
for metric in self.composed_metrics:
# Composed metrics require the results as an argument
self._results[metric.name()] = metric.fn(self.results)
def execute(self) -> Optional[Dict[str, Any]]:
"""
Run the whole profiling
"""
self.sql_table_run()
self.sql_col_run()
self.sql_query_run()
self.post_run()
return self.results
class SingleProfiler(Profiler):
"""
Basic Profiler.
Passing a set of metrics, it runs them all.
Returns a single ROW
"""
def sql_col_run(self) -> Dict[str, Any]:
"""
Run the profiler and store its results
This should only execute column metrics.
"""
logger.info("Running SQL Profiler...")
query = self.build_col_query()
if query:
row = query.first()
self.results = dict(row)
return self.results
| en | 0.871204 | # Copyright 2021 Collate # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Main Profile definition and queries to execute Core Profiler. A profiler is composed of: - A session, used to run the queries against. - An ORM Table. One profiler attacks one table at a time. - A tuple of metrics, from which we will construct queries. Iterate over the _metrics to pick up all values from _results dict. Note that if some Metric does not run against a specific column (e.g., STDDEV only runs against numerical columns), then the metric won't appear in _results. However, we still want that result to be available, even if it is `None`. Here we prepare the logic to being able to have the complete suite of computed metrics. If we have not yet computed any result, use the incoming value, otherwise, update the dict. # Type of class is `type` Filter metrics by type Build the query with all the column static metrics. Can return None if no metric has an allowed type. In that case, we cannot build an empty query. # If metric is None, it is a table metric and gets executed in `sql_table_run` Run the profiler and obtain the results, e.g. build_query().first(), or all() Data should be saved under self.results Run Table Static metrics # Table metrics do not have column informed Run QueryMetrics # query_res has the shape of List[Row], where each row is a dict, # e.g., [{colA: 1, colB: 2},...] # We are going to transform this into a Dict[List] by pivoting, so that # data = {colA: [1,2,3], colB: [4,5,6]...} Run this after the metrics have been computed Data should be saved under self.results # Composed metrics require the results as an argument Run the whole profiling Basic Profiler. Passing a set of metrics, it runs them all. Returns a single ROW Run the profiler and store its results This should only execute column metrics. | 2.225523 | 2 |
toolcall/admin.py | thebjorn/toolcall | 0 | 6615254 | # -*- coding: utf-8 -*-
"""toolcall admin.
"""
from django.contrib import admin
from .models import Client, Tool, ToolCall, ToolCallLog
class ToolInlineAdmin(admin.TabularInline):
model = Tool
fields = """slug name description icon restartable restart_duration_minutes
""".split()
extra = 1
class ClientAdmin(admin.ModelAdmin):
list_display = """id name
receive_start_token_url
receive_result_token_url
""".split()
inlines = [ToolInlineAdmin]
class ToolCallLogAdmin(admin.TabularInline):
model = ToolCallLog
# fields = """timestamp status details""".split()
fields = """status details""".split()
extra = 0
class ToolCallAdmin(admin.ModelAdmin):
list_display = """tool user started ended status""".split()
inlines = [ToolCallLogAdmin]
raw_id_fields = ['user']
admin.site.register(Client, ClientAdmin)
admin.site.register(ToolCall, ToolCallAdmin)
| # -*- coding: utf-8 -*-
"""toolcall admin.
"""
from django.contrib import admin
from .models import Client, Tool, ToolCall, ToolCallLog
class ToolInlineAdmin(admin.TabularInline):
model = Tool
fields = """slug name description icon restartable restart_duration_minutes
""".split()
extra = 1
class ClientAdmin(admin.ModelAdmin):
list_display = """id name
receive_start_token_url
receive_result_token_url
""".split()
inlines = [ToolInlineAdmin]
class ToolCallLogAdmin(admin.TabularInline):
model = ToolCallLog
# fields = """timestamp status details""".split()
fields = """status details""".split()
extra = 0
class ToolCallAdmin(admin.ModelAdmin):
list_display = """tool user started ended status""".split()
inlines = [ToolCallLogAdmin]
raw_id_fields = ['user']
admin.site.register(Client, ClientAdmin)
admin.site.register(ToolCall, ToolCallAdmin)
| en | 0.638602 | # -*- coding: utf-8 -*- toolcall admin. slug name description icon restartable restart_duration_minutes id name receive_start_token_url receive_result_token_url # fields = """timestamp status details""".split() status details tool user started ended status | 2.057293 | 2 |
05_scrapy/python_04/python_04/items.py | 2018-B-GR1-Python/Velasco-Yepez-Andres-David | 0 | 6615255 | # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy.loader.processors import MapCompose
def shorten_amazon_link(link):
id_producto = link.split('/')[-1] # ultimo elemento
short_link = 'https://www.amazon.com/dp/' + id_producto
return short_link
class ProductoItem(scrapy.Item):
titulo = scrapy.Field()
precio = scrapy.Field()
link = scrapy.Field(
input_processor=MapCompose(shorten_amazon_link)
)
| # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy.loader.processors import MapCompose
def shorten_amazon_link(link):
id_producto = link.split('/')[-1] # ultimo elemento
short_link = 'https://www.amazon.com/dp/' + id_producto
return short_link
class ProductoItem(scrapy.Item):
titulo = scrapy.Field()
precio = scrapy.Field()
link = scrapy.Field(
input_processor=MapCompose(shorten_amazon_link)
)
| en | 0.601543 | # -*- coding: utf-8 -*- # Define here the models for your scraped items # # See documentation in: # https://doc.scrapy.org/en/latest/topics/items.html # ultimo elemento | 2.516106 | 3 |
associations/serializers.py | ollc-code/django-back | 0 | 6615256 | <reponame>ollc-code/django-back<filename>associations/serializers.py
from .models import Associations
from rest_framework import serializers
class AssocSerializer(serializers.ModelSerializer):
class Meta:
model = Associations
fields = '__all__'
| from .models import Associations
from rest_framework import serializers
class AssocSerializer(serializers.ModelSerializer):
class Meta:
model = Associations
fields = '__all__' | none | 1 | 1.910407 | 2 | |
language_features/collection/range/range_intro.py | PrasadHonrao/python-samples | 3 | 6615257 | r = range(5)
print(r)
for i in r:
print(i, end=' ')
print('\n')
for i in range(5, 10):
print(i, end=' ')
print('\n')
for i in range(10, 100, 10):
print(i, end=' ')
print('\n')
# prefer enumerate function to iterate over range, which returns index and value
r = range(0, 100, 10)
for p in enumerate(r):
print(p, end=' ')
print()
# conver range to list
lst = list(range(5))
print(lst)
| r = range(5)
print(r)
for i in r:
print(i, end=' ')
print('\n')
for i in range(5, 10):
print(i, end=' ')
print('\n')
for i in range(10, 100, 10):
print(i, end=' ')
print('\n')
# prefer enumerate function to iterate over range, which returns index and value
r = range(0, 100, 10)
for p in enumerate(r):
print(p, end=' ')
print()
# conver range to list
lst = list(range(5))
print(lst)
| en | 0.584007 | # prefer enumerate function to iterate over range, which returns index and value # conver range to list | 4.091413 | 4 |
src/kgextractiontoolbox/entitylinking/tagging/vocabulary.py | HermannKroll/KGExtractionToolbox | 6 | 6615258 | <gh_stars>1-10
import csv
from collections import defaultdict
from pathlib import Path
from typing import Union, List
class VocabularyEntry:
def __init__(self, entity_id, entity_type, heading, synonyms):
self.entity_id = entity_id
self.entity_type = entity_type
self.heading = heading
self.synonyms = synonyms
def to_dict(self):
return dict(id=self.entity_id, type=self.entity_type, heading=self.heading, synonyms=self.synonyms)
class Vocabulary:
def __init__(self, path: Union[str, Path]):
self.path = path
self.vocabularies = defaultdict(lambda: defaultdict(set))
self.vocabulary_entries: List[VocabularyEntry] = list()
self._entry_by_id_and_type = {}
self.size = 0
def add_vocabulary(self, vocabulary):
for entry in vocabulary.vocabulary_entries:
self.add_vocab_entry(entry.entity_id, entry.entity_type, entry.heading, entry.synonyms)
def add_vocab_entry(self, entity_id: str, entity_type: str, heading: str, synonyms: str, expand_terms=True):
self.size += 1
entry = VocabularyEntry(entity_id, entity_type, heading, synonyms)
self.vocabulary_entries.append(entry)
key = (entity_id, entity_type)
if key in self._entry_by_id_and_type:
raise ValueError(f"Found duplicated entry in vocabulary: {key}")
else:
self._entry_by_id_and_type[key] = entry
if expand_terms:
for syn in {s
for t in (synonyms.split(";") if synonyms else []) + [heading]
for s in expand_vocabulary_term(t.lower()) if t}:
self.vocabularies[entity_type][syn] |= {entity_id}
else:
for syn in {t.lower()
for t in (synonyms.split(";") if synonyms else []) + [heading]}:
self.vocabularies[entity_type][syn] |= {entity_id}
def compute_reverse_index(self):
self.vocabularies = {k: dict(v) for k, v in self.vocabularies.items()}
def load_vocab(self, expand_terms=True):
if self.vocabularies:
return
with open(self.path, "r") as f:
reader = csv.DictReader(f, delimiter="\t")
type_heading = 'type' if 'type' in reader.fieldnames else 'enttype'
for line in reader:
if not line["heading"] or not line[type_heading] or not line["id"]:
continue
self.add_vocab_entry(line["id"], line[type_heading], line["heading"], line["synonyms"],
expand_terms=expand_terms)
self.compute_reverse_index()
def export_vocabulary_as_tsv(self, output_file: str):
"""
Export the vocabulary as a TSV file
:param output_file: Path to the file
:return: None
"""
self.vocabulary_entries.sort(key=lambda x: x.entity_id)
with open(output_file, 'wt') as f:
f = csv.DictWriter(f, ["id", "type", "heading", "synonyms"], delimiter="\t")
f.writeheader()
for e in self.vocabulary_entries:
f.writerow(e.to_dict())
def get_entity_heading(self, entity_id: str, entity_type: str) -> str:
"""
Get an entity heading from the vocabulary
:param entity_id: entity id
:param entity_type: entity type
:return: heading
"""
return self._entry_by_id_and_type[(entity_id, entity_type)].heading
def get_ent_types(self):
return self.vocabularies.keys()
def expand_vocabulary_term(term: str, minimum_len_to_expand=3, depth=0) -> str:
# only consider the length the last term
if ' ' in term and len(term.split(' ')[-1]) < minimum_len_to_expand:
yield term
# test if term has the minimum len to be expanded
elif len(term) < minimum_len_to_expand:
yield term
else:
if term.endswith('y'):
yield f'{term[:-1]}ies'
if term.endswith('ies'):
yield f'{term[:-3]}y'
if term.endswith('s') or term.endswith('e'):
yield term[:-1]
if term.endswith('or') and len(term) > 2:
yield term[:-2] + "our"
if term.endswith('our') and len(term) > 3:
yield term[:-3] + "or"
if "-" in term:
yield term.replace("-", " ")
if depth == 0:
yield from expand_vocabulary_term(term.replace("-", " "), depth=1)
yield term.replace("-", "")
if depth == 0:
yield from expand_vocabulary_term(term.replace("-", ""), depth=1)
if " " in term:
yield term.replace(" ", "-")
if depth == 0:
yield from expand_vocabulary_term(term.replace(" ", "-"), depth=1)
yield term.replace(" ", "")
if depth == 0:
yield from expand_vocabulary_term(term.replace(" ", ""), depth=1)
yield from [term, f'{term}e', f'{term}s']
| import csv
from collections import defaultdict
from pathlib import Path
from typing import Union, List
class VocabularyEntry:
def __init__(self, entity_id, entity_type, heading, synonyms):
self.entity_id = entity_id
self.entity_type = entity_type
self.heading = heading
self.synonyms = synonyms
def to_dict(self):
return dict(id=self.entity_id, type=self.entity_type, heading=self.heading, synonyms=self.synonyms)
class Vocabulary:
def __init__(self, path: Union[str, Path]):
self.path = path
self.vocabularies = defaultdict(lambda: defaultdict(set))
self.vocabulary_entries: List[VocabularyEntry] = list()
self._entry_by_id_and_type = {}
self.size = 0
def add_vocabulary(self, vocabulary):
for entry in vocabulary.vocabulary_entries:
self.add_vocab_entry(entry.entity_id, entry.entity_type, entry.heading, entry.synonyms)
def add_vocab_entry(self, entity_id: str, entity_type: str, heading: str, synonyms: str, expand_terms=True):
self.size += 1
entry = VocabularyEntry(entity_id, entity_type, heading, synonyms)
self.vocabulary_entries.append(entry)
key = (entity_id, entity_type)
if key in self._entry_by_id_and_type:
raise ValueError(f"Found duplicated entry in vocabulary: {key}")
else:
self._entry_by_id_and_type[key] = entry
if expand_terms:
for syn in {s
for t in (synonyms.split(";") if synonyms else []) + [heading]
for s in expand_vocabulary_term(t.lower()) if t}:
self.vocabularies[entity_type][syn] |= {entity_id}
else:
for syn in {t.lower()
for t in (synonyms.split(";") if synonyms else []) + [heading]}:
self.vocabularies[entity_type][syn] |= {entity_id}
def compute_reverse_index(self):
self.vocabularies = {k: dict(v) for k, v in self.vocabularies.items()}
def load_vocab(self, expand_terms=True):
if self.vocabularies:
return
with open(self.path, "r") as f:
reader = csv.DictReader(f, delimiter="\t")
type_heading = 'type' if 'type' in reader.fieldnames else 'enttype'
for line in reader:
if not line["heading"] or not line[type_heading] or not line["id"]:
continue
self.add_vocab_entry(line["id"], line[type_heading], line["heading"], line["synonyms"],
expand_terms=expand_terms)
self.compute_reverse_index()
def export_vocabulary_as_tsv(self, output_file: str):
"""
Export the vocabulary as a TSV file
:param output_file: Path to the file
:return: None
"""
self.vocabulary_entries.sort(key=lambda x: x.entity_id)
with open(output_file, 'wt') as f:
f = csv.DictWriter(f, ["id", "type", "heading", "synonyms"], delimiter="\t")
f.writeheader()
for e in self.vocabulary_entries:
f.writerow(e.to_dict())
def get_entity_heading(self, entity_id: str, entity_type: str) -> str:
"""
Get an entity heading from the vocabulary
:param entity_id: entity id
:param entity_type: entity type
:return: heading
"""
return self._entry_by_id_and_type[(entity_id, entity_type)].heading
def get_ent_types(self):
return self.vocabularies.keys()
def expand_vocabulary_term(term: str, minimum_len_to_expand=3, depth=0) -> str:
# only consider the length the last term
if ' ' in term and len(term.split(' ')[-1]) < minimum_len_to_expand:
yield term
# test if term has the minimum len to be expanded
elif len(term) < minimum_len_to_expand:
yield term
else:
if term.endswith('y'):
yield f'{term[:-1]}ies'
if term.endswith('ies'):
yield f'{term[:-3]}y'
if term.endswith('s') or term.endswith('e'):
yield term[:-1]
if term.endswith('or') and len(term) > 2:
yield term[:-2] + "our"
if term.endswith('our') and len(term) > 3:
yield term[:-3] + "or"
if "-" in term:
yield term.replace("-", " ")
if depth == 0:
yield from expand_vocabulary_term(term.replace("-", " "), depth=1)
yield term.replace("-", "")
if depth == 0:
yield from expand_vocabulary_term(term.replace("-", ""), depth=1)
if " " in term:
yield term.replace(" ", "-")
if depth == 0:
yield from expand_vocabulary_term(term.replace(" ", "-"), depth=1)
yield term.replace(" ", "")
if depth == 0:
yield from expand_vocabulary_term(term.replace(" ", ""), depth=1)
yield from [term, f'{term}e', f'{term}s'] | en | 0.781274 | Export the vocabulary as a TSV file :param output_file: Path to the file :return: None Get an entity heading from the vocabulary :param entity_id: entity id :param entity_type: entity type :return: heading # only consider the length the last term # test if term has the minimum len to be expanded | 2.863436 | 3 |
text-game/example-2.py | whitmans-max/python-examples | 140 | 6615259 | rooms = {
'room1': {
'description': "Your in a room how to get out...",
},
'room2': {
'description': "This is the second room",
},
'room3': {
'description': "This is the third room",
},
}
def show_room(name):
print(rooms[name]['description'])
star
looking_forward = False
looking_backward = False
while True:
answer = input().lower()
if answer == 'exit':
print("Good Bye")
return
if answer == "look forward":
if not looking_forward:
print("You are looking forward")
looking_forward = True
looking_backward = False
else:
print("You are already looking forward, so what next")
elif answer == 'look backward':
if not looking_backward:
print("You are looking backward")
looking_forward = False
looking_backward = True
else:
print("You are already looking backward, so what next")
elif answer == 'go there':
if looking_forward:
return 'room2'
if looking_backward:
return 'room3'
else:
print('Go where ???')
# --------------------------------
name = 'room1'
while name is not None:
name = show_room(name)
| rooms = {
'room1': {
'description': "Your in a room how to get out...",
},
'room2': {
'description': "This is the second room",
},
'room3': {
'description': "This is the third room",
},
}
def show_room(name):
print(rooms[name]['description'])
star
looking_forward = False
looking_backward = False
while True:
answer = input().lower()
if answer == 'exit':
print("Good Bye")
return
if answer == "look forward":
if not looking_forward:
print("You are looking forward")
looking_forward = True
looking_backward = False
else:
print("You are already looking forward, so what next")
elif answer == 'look backward':
if not looking_backward:
print("You are looking backward")
looking_forward = False
looking_backward = True
else:
print("You are already looking backward, so what next")
elif answer == 'go there':
if looking_forward:
return 'room2'
if looking_backward:
return 'room3'
else:
print('Go where ???')
# --------------------------------
name = 'room1'
while name is not None:
name = show_room(name)
| en | 0.123644 | # -------------------------------- | 4.135191 | 4 |
app/eligibility.py | BloomTech-Labs/family-promise-service-tracker-ds-a | 4 | 6615260 | from fastapi import APIRouter, Depends
import sqlalchemy
from datetime import datetime
from app.db import get_db
from typing import Tuple
router = APIRouter()
@router.post("/eligibility/{household_id}")
async def check_eligibility(household_id: str, db=Depends(get_db)) -> dict:
"""
Checks for eligibility for services based on service provider data.
### Parameters
--------------
id
A household_id entry from the households table.
### Returns
-----------
JSON
"resident_assistance_eligibility": bool
"reduced_bus_fare_eligibility": bool
"""
household_id = f"'{household_id}'"
income = check_income(household_id, db)
stability = check_household_stability(household_id, db)
bus_fare = any(check_recipients(household_id, db))
return {
"resident_assistance_eligibility": income or stability,
"reduced_bus_fare_eligibility": bus_fare,
}
def check_household_stability(household_id: str,
db: sqlalchemy.engine.Connection) -> bool:
"""
Checks if a household has been flagged as unstable.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
bool
True if household flagged as unstable, otherwise False
"""
with db.begin():
result = db.execute(f"""
SELECT is_unstable
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return result
def get_household_size(household_id: str,
db: sqlalchemy.engine.Connection) -> int:
"""
Gets the size of a household from its id.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
int
The size of the household as the number of household members
"""
with db.begin():
size = db.execute(f"""
SELECT household_size
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return size
def check_income(household_id, db: sqlalchemy.engine.Connection):
"""
Checks if family income is below the current $61,680 threshold.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
bool
True if the household's income is at or below the threshold
"""
# This should not be hard coded
# It is currently a placeholder with the
# correct value as of 7/18/2021 for Spokane, WA.
threshold = 61680 / 12
with db.begin():
income = db.execute(f"""
SELECT household_monthly_income
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return income <= threshold
def check_recipients(household_id: str,
db: sqlalchemy.engine.Connection) -> Tuple[
bool, bool, bool, bool, bool]:
"""
Checks whether or not recipients in a
household are above the age of 65, and
if they're a veteran.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
tuple[bool]
A tuple containing the following, in order:
over_65
vet_status
has_disability
has_valid_ssi
has_valid_medicare_card
"""
with db.begin():
recipients = db.execute(f"""
SELECT recipient_date_of_birth, recipient_veteran_status, has_disability,
has_valid_ssi, has_valid_medicare_card
FROM recipients
WHERE household_id = {household_id}""").fetchall()
is_senior = lambda age: (datetime.now().date() - age).days / 365.25 >= 65
return (
any(is_senior(row[0]) for row in recipients),
any(row[1] for row in recipients),
any(row[2] for row in recipients),
any(row[3] for row in recipients),
any(row[4] for row in recipients),
)
| from fastapi import APIRouter, Depends
import sqlalchemy
from datetime import datetime
from app.db import get_db
from typing import Tuple
router = APIRouter()
@router.post("/eligibility/{household_id}")
async def check_eligibility(household_id: str, db=Depends(get_db)) -> dict:
"""
Checks for eligibility for services based on service provider data.
### Parameters
--------------
id
A household_id entry from the households table.
### Returns
-----------
JSON
"resident_assistance_eligibility": bool
"reduced_bus_fare_eligibility": bool
"""
household_id = f"'{household_id}'"
income = check_income(household_id, db)
stability = check_household_stability(household_id, db)
bus_fare = any(check_recipients(household_id, db))
return {
"resident_assistance_eligibility": income or stability,
"reduced_bus_fare_eligibility": bus_fare,
}
def check_household_stability(household_id: str,
db: sqlalchemy.engine.Connection) -> bool:
"""
Checks if a household has been flagged as unstable.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
bool
True if household flagged as unstable, otherwise False
"""
with db.begin():
result = db.execute(f"""
SELECT is_unstable
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return result
def get_household_size(household_id: str,
db: sqlalchemy.engine.Connection) -> int:
"""
Gets the size of a household from its id.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
int
The size of the household as the number of household members
"""
with db.begin():
size = db.execute(f"""
SELECT household_size
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return size
def check_income(household_id, db: sqlalchemy.engine.Connection):
"""
Checks if family income is below the current $61,680 threshold.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
bool
True if the household's income is at or below the threshold
"""
# This should not be hard coded
# It is currently a placeholder with the
# correct value as of 7/18/2021 for Spokane, WA.
threshold = 61680 / 12
with db.begin():
income = db.execute(f"""
SELECT household_monthly_income
FROM households
WHERE household_id = {household_id}""").fetchall()[0][0]
return income <= threshold
def check_recipients(household_id: str,
db: sqlalchemy.engine.Connection) -> Tuple[
bool, bool, bool, bool, bool]:
"""
Checks whether or not recipients in a
household are above the age of 65, and
if they're a veteran.
### Parameters
--------------
id
A household_id from the households table
### Returns
-----------
tuple[bool]
A tuple containing the following, in order:
over_65
vet_status
has_disability
has_valid_ssi
has_valid_medicare_card
"""
with db.begin():
recipients = db.execute(f"""
SELECT recipient_date_of_birth, recipient_veteran_status, has_disability,
has_valid_ssi, has_valid_medicare_card
FROM recipients
WHERE household_id = {household_id}""").fetchall()
is_senior = lambda age: (datetime.now().date() - age).days / 365.25 >= 65
return (
any(is_senior(row[0]) for row in recipients),
any(row[1] for row in recipients),
any(row[2] for row in recipients),
any(row[3] for row in recipients),
any(row[4] for row in recipients),
)
| en | 0.811226 | Checks for eligibility for services based on service provider data. ### Parameters -------------- id A household_id entry from the households table. ### Returns ----------- JSON "resident_assistance_eligibility": bool "reduced_bus_fare_eligibility": bool Checks if a household has been flagged as unstable. ### Parameters -------------- id A household_id from the households table ### Returns ----------- bool True if household flagged as unstable, otherwise False SELECT is_unstable FROM households WHERE household_id = {household_id} Gets the size of a household from its id. ### Parameters -------------- id A household_id from the households table ### Returns ----------- int The size of the household as the number of household members SELECT household_size FROM households WHERE household_id = {household_id} Checks if family income is below the current $61,680 threshold. ### Parameters -------------- id A household_id from the households table ### Returns ----------- bool True if the household's income is at or below the threshold # This should not be hard coded # It is currently a placeholder with the # correct value as of 7/18/2021 for Spokane, WA. SELECT household_monthly_income FROM households WHERE household_id = {household_id} Checks whether or not recipients in a household are above the age of 65, and if they're a veteran. ### Parameters -------------- id A household_id from the households table ### Returns ----------- tuple[bool] A tuple containing the following, in order: over_65 vet_status has_disability has_valid_ssi has_valid_medicare_card SELECT recipient_date_of_birth, recipient_veteran_status, has_disability, has_valid_ssi, has_valid_medicare_card FROM recipients WHERE household_id = {household_id} | 2.845076 | 3 |
public/app.py | ayushmankumar7/theOSS | 0 | 6615261 | <gh_stars>0
from flask import Flask, render_template, request
import firebase_admin
from firebase import firebase
from firebase_admin import credentials, firestore
try:
cred = credentials.Certificate("/home/arkaprabha/Desktop/theoss-a4460-firebase-adminsdk-hh09p-fd5a411e88.json")
default_app = firebase_admin.initialize_app(cred)
db = firestore.client()
except:
pass
app = Flask(__name__)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/sign")
def home():
return render_template("toss.html")
@app.route("/login")
def login():
return render_template("login.html")
@app.route("/login_val", methods =["GET","POST"])
def login_valid():
username = request.form.get('username')
password = <PASSWORD>('password')
return " Welcome username : {} ".format(username)
@app.route("/signup")
def signin():
return render_template("signup.html")
@app.route("/signup_val", methods =["GET","POST"])
def signin_valid():
username = request.form.get('name')
password = <PASSWORD>.form.get('password')
email = request.form.get('email')
s = str(username)
doc_ref = db.collection("User").document(s)
doc_ref.set({'Name':username,'Email':email,'Password':password})
return " Welcome username : {} ".format(username)
@app.route("/contact")
def contact():
return "<EMAIL>"
if __name__ == '__main__':
app.run(debug=True)
| from flask import Flask, render_template, request
import firebase_admin
from firebase import firebase
from firebase_admin import credentials, firestore
try:
cred = credentials.Certificate("/home/arkaprabha/Desktop/theoss-a4460-firebase-adminsdk-hh09p-fd5a411e88.json")
default_app = firebase_admin.initialize_app(cred)
db = firestore.client()
except:
pass
app = Flask(__name__)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/sign")
def home():
return render_template("toss.html")
@app.route("/login")
def login():
return render_template("login.html")
@app.route("/login_val", methods =["GET","POST"])
def login_valid():
username = request.form.get('username')
password = <PASSWORD>('password')
return " Welcome username : {} ".format(username)
@app.route("/signup")
def signin():
return render_template("signup.html")
@app.route("/signup_val", methods =["GET","POST"])
def signin_valid():
username = request.form.get('name')
password = <PASSWORD>.form.get('password')
email = request.form.get('email')
s = str(username)
doc_ref = db.collection("User").document(s)
doc_ref.set({'Name':username,'Email':email,'Password':password})
return " Welcome username : {} ".format(username)
@app.route("/contact")
def contact():
return "<EMAIL>"
if __name__ == '__main__':
app.run(debug=True) | none | 1 | 2.38496 | 2 | |
src/radtext/models/ner/ner_spacy.py | bionlplab/radtext | 0 | 6615262 | <reponame>bionlplab/radtext
import collections
from typing import List, Generator
from bioc import BioCAnnotation, BioCLocation, BioCPassage, BioCSentence
from spacy.matcher import PhraseMatcher
from spacy.tokens.doc import Doc
from radtext.core import BioCProcessor
from radtext.models.ner.utils import longest_matching, remove_duplicates, remove_excludes, filter_number, \
filter_stop_words, STOP_WORDS
from radtext.models.ner.utils import NERMatch
class NerSpacyPhraseMatchers:
def __init__(self):
self.include_text_matcher = None # type: PhraseMatcher | None
self.include_lemma_matcher = None # type: PhraseMatcher | None
self.exclude_text_matcher = None # type: PhraseMatcher | None
self.exclude_lemma_matcher = None # type: PhraseMatcher | None
self.id2concept = {}
def finditer_include(self, doc: Doc) -> Generator[NERMatch, None, None]:
yield from self._finditer(doc, self.include_text_matcher)
yield from self._finditer(doc, self.include_lemma_matcher)
def finditer_exclude(self, doc: Doc) -> Generator[NERMatch, None, None]:
if self.exclude_text_matcher is not None:
yield from self._finditer(doc, self.exclude_text_matcher)
if self.exclude_lemma_matcher is not None:
yield from self._finditer(doc, self.exclude_lemma_matcher)
def _finditer(self, doc: Doc, matcher: PhraseMatcher) -> Generator[NERMatch, None, None]:
for match_id, start, end in matcher(doc):
nermatch = NERMatch()
nermatch.concept_id = doc.vocab.strings[match_id]
nermatch.concept = self.id2concept[nermatch.concept_id]
nermatch.start = doc[start].idx
nermatch.end = doc[end-1].idx + len(doc[end-1])
nermatch.text = doc[start:end].text
# print(nermatch)
yield nermatch
class NerSpacyExtractor:
def __init__(self, nlp, phrase_matchers: NerSpacyPhraseMatchers, filter_number: bool=True,
filter_stop_words: bool=True):
self.nlp = nlp
self.phrase_matchers = phrase_matchers
self.filter_number = filter_number
self.filter_stop_words = filter_stop_words
def findall(self, text: str) -> List[NERMatch]:
doc = self.nlp(text)
includes_matches = [m for m in self.phrase_matchers.finditer_include(doc)]
excludes_matches = [m for m in self.phrase_matchers.finditer_exclude(doc)]
results = remove_excludes(includes_matches, excludes_matches)
results = remove_duplicates(results)
results = longest_matching(results)
if self.filter_number:
results = filter_number(results)
if self.filter_stop_words:
results = filter_stop_words(results, STOP_WORDS)
return results
class BioCNerSpacy(BioCProcessor):
def __init__(self, extractor: NerSpacyExtractor, name: str, filter_integers=True):
super(BioCNerSpacy, self).__init__('ner:spacy')
self.extractor = extractor
self.filter_integers = filter_integers
self.model = name
# def _find_other_ids(self, ann):
# this_id = ann.infons['source_concept_id']
# if ann.text == ann.infons['source_concept']:
# return
# for i, id in enumerate(self.extractor.text2ids[ann.text], 1):
# if id != this_id:
# ann.infons['note_nlp_concept_id'] += ';{}'.format(id)
# ann.infons['note_nlp_concept'] += ';{}'.format(self.extractor.id2pref[id])
# # ann.infons[f'concept_id_{i}'] = id
# # ann.infons[f'preferred_name_{i}'] = self.extractor.id2pref[id]
def ner(self, text, offset):
anns = []
for match in self.extractor.findall(text):
start = match.start
end = match.end
ann = BioCAnnotation()
ann.id = 'a{}'.format(match.start)
ann.infons['note_nlp_concept_id'] = match.concept_id
ann.infons['note_nlp_concept'] = match.concept
ann.infons['nlp_system'] = self.nlp_system
ann.infons['nlp_date_time'] = self.nlp_date_time
ann.add_location(BioCLocation(start + offset, end - start))
ann.text = match.text
anns.append(ann)
# find other ids
# self._find_other_ids(ann)
# k = ann.total_span
# if k not in ann_map:
# ann_map[k] = ann
return anns
def process_passage(self, passage: BioCPassage, docid: str = None) -> BioCPassage:
anns= self.ner(passage.text, passage.offset)
passage.annotations += anns
for sentence in passage.sentences:
self.process_sentence(sentence, docid)
return passage
def process_sentence(self, sentence: BioCSentence, docid: str = None) -> BioCSentence:
anns = self.ner(sentence.text, sentence.offset)
sentence.annotations += anns
return sentence
| import collections
from typing import List, Generator
from bioc import BioCAnnotation, BioCLocation, BioCPassage, BioCSentence
from spacy.matcher import PhraseMatcher
from spacy.tokens.doc import Doc
from radtext.core import BioCProcessor
from radtext.models.ner.utils import longest_matching, remove_duplicates, remove_excludes, filter_number, \
filter_stop_words, STOP_WORDS
from radtext.models.ner.utils import NERMatch
class NerSpacyPhraseMatchers:
def __init__(self):
self.include_text_matcher = None # type: PhraseMatcher | None
self.include_lemma_matcher = None # type: PhraseMatcher | None
self.exclude_text_matcher = None # type: PhraseMatcher | None
self.exclude_lemma_matcher = None # type: PhraseMatcher | None
self.id2concept = {}
def finditer_include(self, doc: Doc) -> Generator[NERMatch, None, None]:
yield from self._finditer(doc, self.include_text_matcher)
yield from self._finditer(doc, self.include_lemma_matcher)
def finditer_exclude(self, doc: Doc) -> Generator[NERMatch, None, None]:
if self.exclude_text_matcher is not None:
yield from self._finditer(doc, self.exclude_text_matcher)
if self.exclude_lemma_matcher is not None:
yield from self._finditer(doc, self.exclude_lemma_matcher)
def _finditer(self, doc: Doc, matcher: PhraseMatcher) -> Generator[NERMatch, None, None]:
for match_id, start, end in matcher(doc):
nermatch = NERMatch()
nermatch.concept_id = doc.vocab.strings[match_id]
nermatch.concept = self.id2concept[nermatch.concept_id]
nermatch.start = doc[start].idx
nermatch.end = doc[end-1].idx + len(doc[end-1])
nermatch.text = doc[start:end].text
# print(nermatch)
yield nermatch
class NerSpacyExtractor:
def __init__(self, nlp, phrase_matchers: NerSpacyPhraseMatchers, filter_number: bool=True,
filter_stop_words: bool=True):
self.nlp = nlp
self.phrase_matchers = phrase_matchers
self.filter_number = filter_number
self.filter_stop_words = filter_stop_words
def findall(self, text: str) -> List[NERMatch]:
doc = self.nlp(text)
includes_matches = [m for m in self.phrase_matchers.finditer_include(doc)]
excludes_matches = [m for m in self.phrase_matchers.finditer_exclude(doc)]
results = remove_excludes(includes_matches, excludes_matches)
results = remove_duplicates(results)
results = longest_matching(results)
if self.filter_number:
results = filter_number(results)
if self.filter_stop_words:
results = filter_stop_words(results, STOP_WORDS)
return results
class BioCNerSpacy(BioCProcessor):
def __init__(self, extractor: NerSpacyExtractor, name: str, filter_integers=True):
super(BioCNerSpacy, self).__init__('ner:spacy')
self.extractor = extractor
self.filter_integers = filter_integers
self.model = name
# def _find_other_ids(self, ann):
# this_id = ann.infons['source_concept_id']
# if ann.text == ann.infons['source_concept']:
# return
# for i, id in enumerate(self.extractor.text2ids[ann.text], 1):
# if id != this_id:
# ann.infons['note_nlp_concept_id'] += ';{}'.format(id)
# ann.infons['note_nlp_concept'] += ';{}'.format(self.extractor.id2pref[id])
# # ann.infons[f'concept_id_{i}'] = id
# # ann.infons[f'preferred_name_{i}'] = self.extractor.id2pref[id]
def ner(self, text, offset):
anns = []
for match in self.extractor.findall(text):
start = match.start
end = match.end
ann = BioCAnnotation()
ann.id = 'a{}'.format(match.start)
ann.infons['note_nlp_concept_id'] = match.concept_id
ann.infons['note_nlp_concept'] = match.concept
ann.infons['nlp_system'] = self.nlp_system
ann.infons['nlp_date_time'] = self.nlp_date_time
ann.add_location(BioCLocation(start + offset, end - start))
ann.text = match.text
anns.append(ann)
# find other ids
# self._find_other_ids(ann)
# k = ann.total_span
# if k not in ann_map:
# ann_map[k] = ann
return anns
def process_passage(self, passage: BioCPassage, docid: str = None) -> BioCPassage:
anns= self.ner(passage.text, passage.offset)
passage.annotations += anns
for sentence in passage.sentences:
self.process_sentence(sentence, docid)
return passage
def process_sentence(self, sentence: BioCSentence, docid: str = None) -> BioCSentence:
anns = self.ner(sentence.text, sentence.offset)
sentence.annotations += anns
return sentence | en | 0.241683 | # type: PhraseMatcher | None # type: PhraseMatcher | None # type: PhraseMatcher | None # type: PhraseMatcher | None # print(nermatch) # def _find_other_ids(self, ann): # this_id = ann.infons['source_concept_id'] # if ann.text == ann.infons['source_concept']: # return # for i, id in enumerate(self.extractor.text2ids[ann.text], 1): # if id != this_id: # ann.infons['note_nlp_concept_id'] += ';{}'.format(id) # ann.infons['note_nlp_concept'] += ';{}'.format(self.extractor.id2pref[id]) # # ann.infons[f'concept_id_{i}'] = id # # ann.infons[f'preferred_name_{i}'] = self.extractor.id2pref[id] # find other ids # self._find_other_ids(ann) # k = ann.total_span # if k not in ann_map: # ann_map[k] = ann | 2.117703 | 2 |
examples/get_credentials.py | aziz-abibulaiev/python-mitto-sdk | 0 | 6615263 | """
Getting all jobs creds in Mitto instance.
"""
import os
import sys
import uuid
from dotenv import load_dotenv
from create_credentials import main as created_credentials
from mitto_sdk import Mitto
load_dotenv()
BASE_URL = os.getenv("MITTO_BASE_URL")
API_KEY = os.getenv("MITTO_API_KEY")
UUID = str(uuid.uuid4())
NAME = f"creds_{UUID}".replace("-", "_")
TYPE = "sql"
BASE_URL = os.getenv("MITTO_BASE_URL")
API_KEY = os.getenv("MITTO_API_KEY")
NEW_CREDS = {
"name": NAME,
"type": TYPE,
"data": {}
}
def main(base_url=BASE_URL, api_key=API_KEY, new_creds=NEW_CREDS): # noqa: E501
"""getting creds"""
mitto = Mitto(
base_url=BASE_URL,
api_key=API_KEY
)
created_creds = created_credentials(new_creds=new_creds) # noqa: F841, E501# pylint: disable=W0612
creds = mitto.get_credentials()
return creds
if __name__ == "__main__":
sys.exit(main(base_url=BASE_URL, api_key=API_KEY, new_creds=NEW_CREDS))
| """
Getting all jobs creds in Mitto instance.
"""
import os
import sys
import uuid
from dotenv import load_dotenv
from create_credentials import main as created_credentials
from mitto_sdk import Mitto
load_dotenv()
BASE_URL = os.getenv("MITTO_BASE_URL")
API_KEY = os.getenv("MITTO_API_KEY")
UUID = str(uuid.uuid4())
NAME = f"creds_{UUID}".replace("-", "_")
TYPE = "sql"
BASE_URL = os.getenv("MITTO_BASE_URL")
API_KEY = os.getenv("MITTO_API_KEY")
NEW_CREDS = {
"name": NAME,
"type": TYPE,
"data": {}
}
def main(base_url=BASE_URL, api_key=API_KEY, new_creds=NEW_CREDS): # noqa: E501
"""getting creds"""
mitto = Mitto(
base_url=BASE_URL,
api_key=API_KEY
)
created_creds = created_credentials(new_creds=new_creds) # noqa: F841, E501# pylint: disable=W0612
creds = mitto.get_credentials()
return creds
if __name__ == "__main__":
sys.exit(main(base_url=BASE_URL, api_key=API_KEY, new_creds=NEW_CREDS))
| en | 0.7049 | Getting all jobs creds in Mitto instance. # noqa: E501 getting creds # noqa: F841, E501# pylint: disable=W0612 | 2.480129 | 2 |
implicit_maml/omnitest_dataset.py | leeeejunnnn/imaml_dev | 0 | 6615264 | <gh_stars>0
#%%
#%%
import os
import random
import numpy as np
import torch
import pickle
import torch.nn as nn
import matplotlib.pyplot as plt
#import implicit_maml.utils as utils
import utils as utils
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
from scipy.ndimage import rotate
DATA_DIR = '/home/sss-linux1/project/leejun/imaml_dev/data/omniglot/'
#%%
np.random.seed(123)
torch.manual_seed(123)
random.seed(123)
# There are 1623 characters (for Omniglot)
train_val_permutation = list(range(1623))
random.shuffle(train_val_permutation)
root = DATA_DIR
num_cls = 5
num_inst = 1
num_tasks = 20000
#%%
root1 = os.path.join(root, 'images_background')
root2 = os.path.join(root, 'images_evaluation')
num_cls = num_cls
num_inst = num_inst
#%%
# Sample num_cls characters and num_inst instances of each
languages1 = os.listdir(root1)
languages2 = os.listdir(root2)
languages1.sort()
languages2.sort()
#%%
train=True
chars = []
for l in languages1:
chars += [os.path.join(root1, l, x) for x in os.listdir(os.path.join(root1, l))]
for l in languages2:
chars += [os.path.join(root2, l, x) for x in os.listdir(os.path.join(root2, l))]
chars = np.array(chars)[train_val_permutation]
chars = chars[:1200] if train else chars[1200:]
random.shuffle(chars)
#%%
classes = chars[:num_cls]
labels = np.array(range(len(classes)))
labels = dict(zip(classes, labels))
instances = dict()
#%%
# Now sample from the chosen classes to create class-balanced train and val sets
train_ids = []
val_ids = []
for c in classes:
# First get all isntances of that class
temp = [os.path.join(c, x.decode('UTF-8')) for x in os.listdir(c)]
instances[c] = random.sample(temp, len(temp))
# Sample num_inst instances randomly each for train and val
train_ids += instances[c][:num_inst]
val_ids += instances[c][num_inst:num_inst * 2]
# Keep instances separated by class for class-balanced mini-batches
train_labels = [labels[get_class(x)] for x in train_ids]
val_labels = [labels[get_class(x)] for x in val_ids]
#%%
class OmniglotTask(object):
"""
Create the task definition for N-way k-shot learning with Omniglot dataset
Assumption: number of train and val instances are same (easy to lift in the future)
"""
def __init__(self, train_val_permutation, root=DATA_DIR, num_cls=5, num_inst=1, train=True):
"""
:param train_val_permutation: permutation of the 1623 characters, first 1200 are for train, rest for val
:param root: location of the dataset
:param num_cls: number of classes in task instance (N-way)
:param num_inst: number of instances per class (k-shot)
:param train: bool, True if meta-training phase and False if test/deployment phase
"""
# different sampling stratergy
# 1200 classes for meta-train phase and rest for test phase
self.root1 = os.path.join(root, 'images_background')
self.root2 = os.path.join(root, 'images_evaluation')
self.num_cls = num_cls
self.num_inst = num_inst
# Sample num_cls characters and num_inst instances of each
languages1 = os.listdir(self.root1)
languages2 = os.listdir(self.root2)
languages1.sort()
languages2.sort()
chars = []
for l in languages1:
chars += [os.path.join(self.root1, l, x) for x in os.listdir(os.path.join(self.root1, l))]
for l in languages2:
chars += [os.path.join(self.root2, l, x) for x in os.listdir(os.path.join(self.root2, l))]
chars = np.array(chars)[train_val_permutation]
chars = chars[:1200] if train else chars[1200:]
random.shuffle(chars)
classes = chars[:num_cls]
labels = np.array(range(len(classes)))
labels = dict(zip(classes, labels))
instances = dict()
# Now sample from the chosen classes to create class-balanced train and val sets
self.train_ids = []
self.val_ids = []
for c in classes:
# First get all isntances of that class
temp = [os.path.join(c, x) for x in os.listdir(c)]
instances[c] = random.sample(temp, len(temp))
# Sample num_inst instances randomly each for train and val
self.train_ids += instances[c][:num_inst]
self.val_ids += instances[c][num_inst:num_inst * 2]
# Keep instances separated by class for class-balanced mini-batches
self.train_labels = [labels[self.get_class(x)] for x in self.train_ids]
self.val_labels = [labels[self.get_class(x)] for x in self.val_ids]
def get_class(self, instance):
return '/' + os.path.join(*instance.split('/')[:-1])
| #%%
#%%
import os
import random
import numpy as np
import torch
import pickle
import torch.nn as nn
import matplotlib.pyplot as plt
#import implicit_maml.utils as utils
import utils as utils
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
from scipy.ndimage import rotate
DATA_DIR = '/home/sss-linux1/project/leejun/imaml_dev/data/omniglot/'
#%%
np.random.seed(123)
torch.manual_seed(123)
random.seed(123)
# There are 1623 characters (for Omniglot)
train_val_permutation = list(range(1623))
random.shuffle(train_val_permutation)
root = DATA_DIR
num_cls = 5
num_inst = 1
num_tasks = 20000
#%%
root1 = os.path.join(root, 'images_background')
root2 = os.path.join(root, 'images_evaluation')
num_cls = num_cls
num_inst = num_inst
#%%
# Sample num_cls characters and num_inst instances of each
languages1 = os.listdir(root1)
languages2 = os.listdir(root2)
languages1.sort()
languages2.sort()
#%%
train=True
chars = []
for l in languages1:
chars += [os.path.join(root1, l, x) for x in os.listdir(os.path.join(root1, l))]
for l in languages2:
chars += [os.path.join(root2, l, x) for x in os.listdir(os.path.join(root2, l))]
chars = np.array(chars)[train_val_permutation]
chars = chars[:1200] if train else chars[1200:]
random.shuffle(chars)
#%%
classes = chars[:num_cls]
labels = np.array(range(len(classes)))
labels = dict(zip(classes, labels))
instances = dict()
#%%
# Now sample from the chosen classes to create class-balanced train and val sets
train_ids = []
val_ids = []
for c in classes:
# First get all isntances of that class
temp = [os.path.join(c, x.decode('UTF-8')) for x in os.listdir(c)]
instances[c] = random.sample(temp, len(temp))
# Sample num_inst instances randomly each for train and val
train_ids += instances[c][:num_inst]
val_ids += instances[c][num_inst:num_inst * 2]
# Keep instances separated by class for class-balanced mini-batches
train_labels = [labels[get_class(x)] for x in train_ids]
val_labels = [labels[get_class(x)] for x in val_ids]
#%%
class OmniglotTask(object):
"""
Create the task definition for N-way k-shot learning with Omniglot dataset
Assumption: number of train and val instances are same (easy to lift in the future)
"""
def __init__(self, train_val_permutation, root=DATA_DIR, num_cls=5, num_inst=1, train=True):
"""
:param train_val_permutation: permutation of the 1623 characters, first 1200 are for train, rest for val
:param root: location of the dataset
:param num_cls: number of classes in task instance (N-way)
:param num_inst: number of instances per class (k-shot)
:param train: bool, True if meta-training phase and False if test/deployment phase
"""
# different sampling stratergy
# 1200 classes for meta-train phase and rest for test phase
self.root1 = os.path.join(root, 'images_background')
self.root2 = os.path.join(root, 'images_evaluation')
self.num_cls = num_cls
self.num_inst = num_inst
# Sample num_cls characters and num_inst instances of each
languages1 = os.listdir(self.root1)
languages2 = os.listdir(self.root2)
languages1.sort()
languages2.sort()
chars = []
for l in languages1:
chars += [os.path.join(self.root1, l, x) for x in os.listdir(os.path.join(self.root1, l))]
for l in languages2:
chars += [os.path.join(self.root2, l, x) for x in os.listdir(os.path.join(self.root2, l))]
chars = np.array(chars)[train_val_permutation]
chars = chars[:1200] if train else chars[1200:]
random.shuffle(chars)
classes = chars[:num_cls]
labels = np.array(range(len(classes)))
labels = dict(zip(classes, labels))
instances = dict()
# Now sample from the chosen classes to create class-balanced train and val sets
self.train_ids = []
self.val_ids = []
for c in classes:
# First get all isntances of that class
temp = [os.path.join(c, x) for x in os.listdir(c)]
instances[c] = random.sample(temp, len(temp))
# Sample num_inst instances randomly each for train and val
self.train_ids += instances[c][:num_inst]
self.val_ids += instances[c][num_inst:num_inst * 2]
# Keep instances separated by class for class-balanced mini-batches
self.train_labels = [labels[self.get_class(x)] for x in self.train_ids]
self.val_labels = [labels[self.get_class(x)] for x in self.val_ids]
def get_class(self, instance):
return '/' + os.path.join(*instance.split('/')[:-1]) | en | 0.792148 | #%% #%% #import implicit_maml.utils as utils #%% # There are 1623 characters (for Omniglot) #%% #%% # Sample num_cls characters and num_inst instances of each #%% #%% #%% # Now sample from the chosen classes to create class-balanced train and val sets # First get all isntances of that class # Sample num_inst instances randomly each for train and val # Keep instances separated by class for class-balanced mini-batches #%% Create the task definition for N-way k-shot learning with Omniglot dataset Assumption: number of train and val instances are same (easy to lift in the future) :param train_val_permutation: permutation of the 1623 characters, first 1200 are for train, rest for val :param root: location of the dataset :param num_cls: number of classes in task instance (N-way) :param num_inst: number of instances per class (k-shot) :param train: bool, True if meta-training phase and False if test/deployment phase # different sampling stratergy # 1200 classes for meta-train phase and rest for test phase # Sample num_cls characters and num_inst instances of each # Now sample from the chosen classes to create class-balanced train and val sets # First get all isntances of that class # Sample num_inst instances randomly each for train and val # Keep instances separated by class for class-balanced mini-batches | 2.0801 | 2 |
pgdrive/envs/marl_envs/maround_phero.py | Edwardhk/pgdrive | 0 | 6615265 | <gh_stars>0
import time
import numpy as np
from gym.spaces import Box, Dict
from pgdrive.envs.marl_envs.marl_inout_roundabout import MultiAgentRoundaboutEnv as MARound, \
LidarStateObservationMARound
from pgdrive.envs.marl_envs.pheromone_map import PheromoneMap
class PheroObs(LidarStateObservationMARound):
@property
def observation_space(self):
space = super(PheroObs, self).observation_space
assert isinstance(space, Box)
assert len(space.shape) == 1
# Add extra 9 pheromones information!
length = space.shape[0] + self.config["num_neighbours"] * self.config["num_channels"]
space = Box(
low=np.array([space.low[0]] * length),
high=np.array([space.high[0]] * length),
shape=(length, ),
dtype=space.dtype
)
return space
class MARoundPhero(MARound):
@classmethod
def default_config(cls):
config = super(MARoundPhero, cls).default_config()
config.update(
dict(
attenuation_rate=0.99,
diffusion_rate=0.95,
num_channels=1,
num_neighbours=1, # or 9.
granularity=0.5
)
)
return config
def __init__(self, config=None):
super(MARoundPhero, self).__init__(config)
assert self.config["num_neighbours"] in [1, 9]
assert 0 <= self.config["attenuation_rate"] <= 1.0
assert 0 <= self.config["diffusion_rate"] <= 1.0
self.phero_map = None
def _post_process_config(self, config):
config = super(MARoundPhero, self)._post_process_config(config)
config["vehicle_config"]["num_neighbours"] = config["num_neighbours"]
config["vehicle_config"]["num_channels"] = config["num_channels"]
config["vehicle_config"]["extra_action_dim"] = config["num_channels"]
return config
def get_single_observation(self, vehicle_config):
return PheroObs(vehicle_config)
def _after_lazy_init(self):
super(MARoundPhero, self)._after_lazy_init()
self._update_map()
min_x, max_x, min_y, max_y = float('inf'), float('-inf'), float('inf'), float('-inf')
for b in self.current_map.blocks:
min_x = min(b.bounding_box[0], min_x)
max_x = max(b.bounding_box[1], max_x)
min_y = min(b.bounding_box[2], min_y)
max_y = max(b.bounding_box[3], max_y)
self.phero_map = PheromoneMap(
min_x=min_x,
min_y=min_y,
total_width=max_x - min_x + 1,
total_length=max_y - min_y + 1,
num_channels=self.config["num_channels"],
diffusion_rate=self.config["diffusion_rate"],
attenuation_rate=self.config["attenuation_rate"],
granularity=self.config["granularity"]
)
def _get_reset_return(self):
self.phero_map.clear()
obses = super(MARoundPhero, self)._get_reset_return()
ret = {v_id: self._add_phero(v_id, obs) for v_id, obs in obses.items()}
return ret
def _step_simulator(self, actions, action_infos):
ret = super(MARoundPhero, self)._step_simulator(actions, action_infos)
for v_id, act in actions.items():
self.phero_map.add(self.vehicles[v_id].position, act[2:])
self.phero_map.step()
return ret
def step(self, actions):
o, r, d, i = super(MARoundPhero, self).step(actions)
ret = {v_id: self._add_phero(v_id, obs) for v_id, obs in o.items()}
return ret, r, d, i
def _add_phero(self, v_id, o):
if v_id not in self.vehicles:
ret = np.zeros((self.config["num_neighbours"] * self.config["num_channels"], ))
else:
ret = self.phero_map.get_nearest_pheromone(self.vehicles[v_id].position, self.config["num_neighbours"])
return np.concatenate([o, ret])
def _render_topdown(self, *args, **kwargs):
if self._top_down_renderer is None:
from pgdrive.obs.top_down_renderer import PheromoneRenderer
self._top_down_renderer = PheromoneRenderer(self.current_map, *args, **kwargs)
self._top_down_renderer.render(list(self.vehicles.values()), pheromone_map=self.phero_map, data=kwargs.get("data"))
def _profile():
env = MARoundPhero({"num_agents": 40})
obs = env.reset()
start = time.time()
for s in range(10000):
o, r, d, i = env.step(env.action_space.sample())
if all(d.values()):
env.reset()
if (s + 1) % 100 == 0:
print(
"Finish {}/10000 simulation steps. Time elapse: {:.4f}. Average FPS: {:.4f}".format(
s + 1,
time.time() - start, (s + 1) / (time.time() - start)
)
)
print(f"(PGDriveEnvV2) Total Time Elapse: {time.time() - start}")
def _test():
env = MARoundPhero({"num_channels": 3})
o = env.reset()
assert env.observation_space.contains(o)
assert all([0 <= oo[-1] <= 1.0 for oo in o.values()])
total_r = 0
ep_s = 0
for i in range(1, 100000):
o, r, d, info = env.step(env.action_space.sample())
assert env.observation_space.contains(o)
assert all([0 <= oo[-1] <= 1.0 for oo in o.values()])
for r_ in r.values():
total_r += r_
ep_s += 1
if d["__all__"]:
print(
"Finish! Current step {}. Group Reward: {}. Average reward: {}".format(
i, total_r, total_r / env.agent_manager.next_agent_count
)
)
break
if len(env.vehicles) == 0:
total_r = 0
print("Reset")
env.reset()
env.close()
def _vis():
env = MARoundPhero(
{
"num_channels": 1,
"num_agents": 40,
"diffusion_rate": 0.95,
"attenuation_rate": 0.99,
"granularity": 0.5
}
)
o = env.reset()
start = time.time()
for s in range(1, 100000):
# o, r, d, info = env.step(env.action_space.sample())
o, r, d, info = env.step(
{k: [np.random.uniform(-0.01, 0.01), 1, np.random.uniform(0, 1)]
for k in env.vehicles.keys()}
)
# o, r, d, info = env.step({k: [np.random.uniform(-0.01, 0.01), 1, np.random.uyni] for k in env.vehicles.keys()})
env.render(mode="top_down", data={
"reward": sum(r.values()),
"velocity": 0.2
})
# env.render(mode="top_down")
# env.render(mode="top_down", film_size=(1000, 1000))
if d["__all__"]:
env.reset()
if (s + 1) % 100 == 0:
print(
"Finish {}/10000 simulation steps. Time elapse: {:.4f}. Average FPS: {:.4f}".format(
s + 1,
time.time() - start, (s + 1) / (time.time() - start)
)
)
env.close()
if __name__ == '__main__':
# _test()
# _profile()
_vis()
| import time
import numpy as np
from gym.spaces import Box, Dict
from pgdrive.envs.marl_envs.marl_inout_roundabout import MultiAgentRoundaboutEnv as MARound, \
LidarStateObservationMARound
from pgdrive.envs.marl_envs.pheromone_map import PheromoneMap
class PheroObs(LidarStateObservationMARound):
@property
def observation_space(self):
space = super(PheroObs, self).observation_space
assert isinstance(space, Box)
assert len(space.shape) == 1
# Add extra 9 pheromones information!
length = space.shape[0] + self.config["num_neighbours"] * self.config["num_channels"]
space = Box(
low=np.array([space.low[0]] * length),
high=np.array([space.high[0]] * length),
shape=(length, ),
dtype=space.dtype
)
return space
class MARoundPhero(MARound):
@classmethod
def default_config(cls):
config = super(MARoundPhero, cls).default_config()
config.update(
dict(
attenuation_rate=0.99,
diffusion_rate=0.95,
num_channels=1,
num_neighbours=1, # or 9.
granularity=0.5
)
)
return config
def __init__(self, config=None):
super(MARoundPhero, self).__init__(config)
assert self.config["num_neighbours"] in [1, 9]
assert 0 <= self.config["attenuation_rate"] <= 1.0
assert 0 <= self.config["diffusion_rate"] <= 1.0
self.phero_map = None
def _post_process_config(self, config):
config = super(MARoundPhero, self)._post_process_config(config)
config["vehicle_config"]["num_neighbours"] = config["num_neighbours"]
config["vehicle_config"]["num_channels"] = config["num_channels"]
config["vehicle_config"]["extra_action_dim"] = config["num_channels"]
return config
def get_single_observation(self, vehicle_config):
return PheroObs(vehicle_config)
def _after_lazy_init(self):
super(MARoundPhero, self)._after_lazy_init()
self._update_map()
min_x, max_x, min_y, max_y = float('inf'), float('-inf'), float('inf'), float('-inf')
for b in self.current_map.blocks:
min_x = min(b.bounding_box[0], min_x)
max_x = max(b.bounding_box[1], max_x)
min_y = min(b.bounding_box[2], min_y)
max_y = max(b.bounding_box[3], max_y)
self.phero_map = PheromoneMap(
min_x=min_x,
min_y=min_y,
total_width=max_x - min_x + 1,
total_length=max_y - min_y + 1,
num_channels=self.config["num_channels"],
diffusion_rate=self.config["diffusion_rate"],
attenuation_rate=self.config["attenuation_rate"],
granularity=self.config["granularity"]
)
def _get_reset_return(self):
self.phero_map.clear()
obses = super(MARoundPhero, self)._get_reset_return()
ret = {v_id: self._add_phero(v_id, obs) for v_id, obs in obses.items()}
return ret
def _step_simulator(self, actions, action_infos):
ret = super(MARoundPhero, self)._step_simulator(actions, action_infos)
for v_id, act in actions.items():
self.phero_map.add(self.vehicles[v_id].position, act[2:])
self.phero_map.step()
return ret
def step(self, actions):
o, r, d, i = super(MARoundPhero, self).step(actions)
ret = {v_id: self._add_phero(v_id, obs) for v_id, obs in o.items()}
return ret, r, d, i
def _add_phero(self, v_id, o):
if v_id not in self.vehicles:
ret = np.zeros((self.config["num_neighbours"] * self.config["num_channels"], ))
else:
ret = self.phero_map.get_nearest_pheromone(self.vehicles[v_id].position, self.config["num_neighbours"])
return np.concatenate([o, ret])
def _render_topdown(self, *args, **kwargs):
if self._top_down_renderer is None:
from pgdrive.obs.top_down_renderer import PheromoneRenderer
self._top_down_renderer = PheromoneRenderer(self.current_map, *args, **kwargs)
self._top_down_renderer.render(list(self.vehicles.values()), pheromone_map=self.phero_map, data=kwargs.get("data"))
def _profile():
env = MARoundPhero({"num_agents": 40})
obs = env.reset()
start = time.time()
for s in range(10000):
o, r, d, i = env.step(env.action_space.sample())
if all(d.values()):
env.reset()
if (s + 1) % 100 == 0:
print(
"Finish {}/10000 simulation steps. Time elapse: {:.4f}. Average FPS: {:.4f}".format(
s + 1,
time.time() - start, (s + 1) / (time.time() - start)
)
)
print(f"(PGDriveEnvV2) Total Time Elapse: {time.time() - start}")
def _test():
env = MARoundPhero({"num_channels": 3})
o = env.reset()
assert env.observation_space.contains(o)
assert all([0 <= oo[-1] <= 1.0 for oo in o.values()])
total_r = 0
ep_s = 0
for i in range(1, 100000):
o, r, d, info = env.step(env.action_space.sample())
assert env.observation_space.contains(o)
assert all([0 <= oo[-1] <= 1.0 for oo in o.values()])
for r_ in r.values():
total_r += r_
ep_s += 1
if d["__all__"]:
print(
"Finish! Current step {}. Group Reward: {}. Average reward: {}".format(
i, total_r, total_r / env.agent_manager.next_agent_count
)
)
break
if len(env.vehicles) == 0:
total_r = 0
print("Reset")
env.reset()
env.close()
def _vis():
env = MARoundPhero(
{
"num_channels": 1,
"num_agents": 40,
"diffusion_rate": 0.95,
"attenuation_rate": 0.99,
"granularity": 0.5
}
)
o = env.reset()
start = time.time()
for s in range(1, 100000):
# o, r, d, info = env.step(env.action_space.sample())
o, r, d, info = env.step(
{k: [np.random.uniform(-0.01, 0.01), 1, np.random.uniform(0, 1)]
for k in env.vehicles.keys()}
)
# o, r, d, info = env.step({k: [np.random.uniform(-0.01, 0.01), 1, np.random.uyni] for k in env.vehicles.keys()})
env.render(mode="top_down", data={
"reward": sum(r.values()),
"velocity": 0.2
})
# env.render(mode="top_down")
# env.render(mode="top_down", film_size=(1000, 1000))
if d["__all__"]:
env.reset()
if (s + 1) % 100 == 0:
print(
"Finish {}/10000 simulation steps. Time elapse: {:.4f}. Average FPS: {:.4f}".format(
s + 1,
time.time() - start, (s + 1) / (time.time() - start)
)
)
env.close()
if __name__ == '__main__':
# _test()
# _profile()
_vis() | en | 0.201099 | # Add extra 9 pheromones information! # or 9. # o, r, d, info = env.step(env.action_space.sample()) # o, r, d, info = env.step({k: [np.random.uniform(-0.01, 0.01), 1, np.random.uyni] for k in env.vehicles.keys()}) # env.render(mode="top_down") # env.render(mode="top_down", film_size=(1000, 1000)) # _test() # _profile() | 2.207279 | 2 |
parallel_prowler.py | jonathanbglass/parallel_prowler | 3 | 6615266 | import argparse
import boto3
import csv
import json
import logging
import mmap
import numpy as np
import os
import pandas as pd
import psutil
import queue
from shlex import quote
import subprocess
import sys
import threading
import time
import uuid
def setup_args(parser):
parser.add_argument("-p", "--profile",
help="AWS Profile")
parser.add_argument("-pp", "--prowlerPath",
help="Path to Prowler Executable. "
"Defaults to ./prowler/prowler")
parser.add_argument("-pc", "--prowlerCheck",
help="Single or List of Prowler Check(s) [check11]")
parser.add_argument("-pg", "--prowlerGroup",
help="Group of Prowler Checks [cislevel2]")
parser.add_argument("-pE", "--prowlerExclude",
help="Execute all tests except a list of specified "
"checks separated by comma (i.e. check21,check31)")
parser.add_argument("-R", "--region",
help="AWS Region")
parser.add_argument("-r", "--regex",
help="REGEX Pattern to Identify AWS Profiles")
parser.add_argument("-o", "--outputDir",
help="Output Directory")
# parser.add_argument("-o", "--organization",
# help="AWS Profile for Organization Account")
parser.add_argument("-t", "--maxthreads", type=int,
help="Max threads: defaults to # of CPUs")
parser.add_argument("-F", "--resultsFile", type=str,
help="Results CSV to process to a report XLSX file")
parser.add_argument("-l", "--log", type=str,
choices=['info', 'INFO', 'debug', 'DEBUG'],
help="Set LogLevel to INFO (Default) or DEBUG")
parser.add_argument("-v", "--verbosity", type=int, choices=[0, 1],
help="increase output verbosity")
def check_args_debug(args):
# Handle logging
global outputDir
global logging
if args.log and args.log.upper() == "DEBUG":
loglevel = "DEBUG"
else:
loglevel = "INFO"
logging.basicConfig(filename=outputDir + '/' + 'assessment.log',
format='%(levelname)s:%(message)s',
level=loglevel)
def check_args_prowlerPath(args):
# Handle prowlerPath
global logging
global prowlerPath
if args.prowlerPath and os.path.exists(args.prowlerPath):
prowlerPath = args.prowlerPath
else:
if not os.path.exists("./prowler/prowler"):
print("Prowler not found. Install or clone the repository into "
"this directory or provide the path with -pp, --prowlerPath")
quit()
else:
prowlerPath = "./prowler/prowler"
def check_args_verbosity(args):
# handle verbosity
global logging
global verbose
if args.verbosity == 1:
verbose = True
logging.info("Verbose")
else:
verbose = False
logging.info("No Verbosity")
def check_args_creds(args):
# handle profiles / authentication / credentials
workingCreds = False
global logging
global verbose
global workingProfiles
workingProfiles = []
if not args.profile and not args.regex:
logging.info("Using AWS Default Profile")
if verbose:
print("Using AWS Default Profile")
print(args.profile)
if (not check_profile("default")):
logging.error("Default credentials not working.")
print("Default credentials not working.")
quit()
else:
workingProfiles.append("default")
workingCreds = True
if args.profile and args.profile is not None:
logging.info("Using " + args.profile + " Profile")
if verbose:
print("Using " + args.profile + " Profile")
if (not check_profile(args.profile)):
logging.error("Profile " + args.profile + " not working")
if verbose:
print("Profile " + args.profile + " not working")
quit()
else:
logging.info("Profile " + args.profile + " working")
if verbose:
print("Profile " + args.profile + " working")
workingProfiles.append(args.profile)
workingCreds = True
def check_args_regex(args):
global logging
global verbose
if not args.regex:
logging.info("No REGEX Pattern. Working on a single account.")
if verbose:
print("No REGEX Pattern. Working on a single account.")
else:
# To Do: turn these variable into arguments
configFile = "~/.aws/config"
credFile = "~/.aws/credentials"
profileCount = 0
if os.path.exists(os.path.expanduser(configFile)):
configFileContent = open(
os.path.expanduser(configFile), 'r').read()
else:
logging.error("AWS Config file unreadable")
print("AWS Config file unreadable")
quit()
if args.regex in configFileContent:
logging.info("REGEX found")
if verbose:
print("REGEX found")
for x in configFileContent.split("\n"):
if "[profile" in x and args.regex in x:
profileCount += 1
thisProfile = x.strip('[]').split(" ")[1]
logging.debug("Checking profile: " + thisProfile)
if verbose:
print("Checking profile: " + thisProfile)
if (check_profile(thisProfile)):
logging.debug("Profile " + thisProfile + " works.")
if verbose:
print("Profile " + thisProfile + " works.")
workingProfiles.append(thisProfile)
else:
logging.debug("Profile " + thisProfile
+ " does not work.")
if verbose:
print("Profile " + thisProfile + " does not work.")
if (profileCount > 1) or (profileCount == 0):
profresp = (str(profileCount) + " Profiles found. "
+ str(len(workingProfiles)) + " Profiles work.")
else:
profresp = str(profileCount) + " Profile found and works"
if(len(workingProfiles) == 0):
logging.error("No working profiles, REGEX: " + str(args.regex))
print("No working profiles for REGEX: " + str(args.regex))
quit()
print(profresp)
logging.info(profresp)
else:
logging.error("REGEX " + str(args.regex)
+ " not found in " + configFile)
print("REGEX " + str(args.regex) + " not found in " + configFile)
quit()
def check_args_outputDir(args):
global logging
global outputDir
outputDir = os.path.abspath(os.curdir)
if args.outputDir:
if not os.path.exists(args.outputDir):
print("Output Directory Does Not Exist: " + args.outputDir)
quit()
else:
outputDir = os.path.abspath(args.outputDir)
def process_args(args):
check_args_outputDir(args)
check_args_debug(args)
check_args_verbosity(args)
check_args_prowlerPath(args)
check_args_creds(args)
check_args_regex(args)
def check_profile(profile):
global logging
try:
if(profile == "default"):
client = boto3.session.Session()
else:
logging.info("Testing profile: " + profile)
client = boto3.session.Session(profile_name=profile)
except Exception as e:
logging.error("Error connecting: ")
logging.error(e)
return False
try:
iam = client.client('iam')
response = iam.list_users()
except Exception as e:
logging.error("Error listing users: ")
logging.error(e)
return False
if len(response['Users']) == 0:
logging.info("No users")
if len(response) > 0:
usercnt = len(response['Users'])
if(usercnt > 1):
userresp = " Users"
else:
userresp = " User"
logging.info(str(usercnt) + userresp)
return True
def run_prowler(x):
global args
global logging
global outputDir
global prowlerPath
global resultDict
global verbose
logging.debug("Inside run_prowler: " + x)
if verbose:
print("Inside run_prowler: " + x)
cmd = os.path.realpath(prowlerPath)
cmdopts = ' -p {}'.format(quote(x))
if args.region:
cmdopts += ' -r {}'.format(quote(args.Region))
else:
cmdopts += ' -r us-east-1'
if args.prowlerExclude:
cmdopts += ' -E {}'.format(quote(args.prowlerExclude))
cmdopts += ' -n'
cmdopts += ' -b -M csv'
if args.prowlerCheck is not None:
cmdopts += ' -c {}'.format(quote(args.prowlerCheck))
if args.prowlerGroup is not None:
cmdopts += ' -g {}'.format(quote(args.prowlerGroup))
logging.info(cmd+cmdopts)
if verbose:
print(cmd+cmdopts)
p = subprocess.run([cmd + cmdopts], shell=True, text=True, check=False,
capture_output=True)
logging.debug("Inside run_prowler - subprocess: ")
logging.info(p)
if verbose:
print("Inside run_prowler - subprocess")
print(p)
resultDict[x] = p.stdout
fname = 'prowler-' + str(int(scanTime)) + '-' + str(scanUUID)\
+ '-' + quote(x) + '.csv'
fname = outputDir + '/' + fname
f = open(fname, 'w')
f.write(p.stdout)
f.close()
def worker():
global logging
global q
global resultDict
global verbose
while True:
x = q.get()
if x is None: # EOF?
return
else:
logging.debug("Inside worker: " + x)
if verbose:
print("Inside worker: " + x)
run_prowler(x)
def check_args_organizations(args):
global logging
pass
# # Handle Organizations and use it to create list of accounts to audit
# if not args.organization:
# logging.info("No AWS Organization Account")
# if verbose:
# print("No AWS Organization Account")
# else:
# print("Not implemented yet")
def get_col_widths(dataframe, index):
# First we find the maximum length of the index column
if index:
idx_max = max([len(str(s)) for s in dataframe.index.values]
+ [len(str(dataframe.index.name))])
return [idx_max] + [max([len(str(s)) for s in dataframe[col].values]
+ [len(col)]) for col in dataframe.columns]
else:
# Then, we concatenate this to the max of the lengths of column name
# and its values for each column, left to right
return [max([len(str(s)) for s in dataframe[col].values]
+ [len(col)]) for col in dataframe.columns]
def process_results(resultFileName):
global args
global logging
if 'verbose' in globals():
verbose = True
else:
verbose = False
if args.resultsFile:
excelName = args.resultsFile.split('.')[0] + '.xlsx'
else:
excelName = 'results-'+str(int(scanTime))+'-'+str(scanUUID)+'.xlsx'
if 'outputDir' in globals():
excelName = outputDir + '/' + excelName
p_df = pd.read_csv(resultFileName,
dtype={'ACCOUNT_NUM': str, 'TITLE_ID': str})
if verbose:
print(p_df.shape)
print(p_df)
writer = pd.ExcelWriter(excelName, engine='xlsxwriter')
workbook = writer.book
# Write Summary first
q3 = ('(LEVEL == "Level 1" or LEVEL == "Level 2") and '
'(RESULT == "PASS" or RESULT == "FAIL")')
p_df_pass = p_df.query(q3)
if verbose:
print(p_df_pass)
p_df_pass.groupby(['PROFILE', 'ACCOUNT_NUM', 'RESULT'])['RESULT'].count().to_excel(writer, sheet_name="Summary")
worksheet = writer.sheets['Summary']
for i, width in enumerate(get_col_widths(p_df, False)):
worksheet.set_column(i, i, width)
# Write raw results to Excel
p_df.to_excel(writer, sheet_name='RawResults', index=False)
worksheet = writer.sheets['RawResults']
for i, width in enumerate(get_col_widths(p_df, False)):
worksheet.set_column(i, i, width)
# Write Passing results to Excel
q1 = 'RESULT == "PASS"'
p_df_pass = pd.pivot_table(
p_df.query(q1),
index=['TITLE_ID', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_pass.to_excel(writer, sheet_name="All Passing")
# Write Failing results to Excel
q2 = 'RESULT == "FAIL"'
p_df_fail = pd.pivot_table(
p_df.query(q2),
index=['TITLE_ID', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_fail.to_excel(writer, sheet_name="All Failing")
# Write CIS Benchmarks Passing results to Excel
q3 = 'RESULT == "PASS" and (LEVEL == "Level 1" or LEVEL == "Level 2")'
p_df_cis_pass = pd.pivot_table(
p_df.query(q1),
index=['TITLE_ID', 'LEVEL', 'SCORED', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_cis_pass.to_excel(writer, sheet_name="CIS Benchmarks Passing")
# Write CIS Benchmarks failing results to Excel
q4 = 'RESULT == "FAIL" and (LEVEL == "Level 1" or LEVEL == "Level 2")'
p_df_cis_fail = pd.pivot_table(
p_df.query(q2),
index=['TITLE_ID', 'LEVEL', 'SCORED', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_cis_fail.to_excel(writer, sheet_name="CIS Benchmarks Failing")
print("Report Excel File: " + excelName)
writer.save()
def main():
global logging
parser = argparse.ArgumentParser()
setup_args(parser)
global args
args = parser.parse_args()
if not args.resultsFile:
process_args(args)
global resultDict
resultDict = {}
global scanUUID
global scanTime
# Generate a Testing UUID and TimeStamp to add to logs / results
scanUUID = uuid.uuid4()
scanTime = time.time()
logging.info(scanUUID)
logging.info(int(scanTime))
if verbose:
print(scanUUID)
print(int(scanTime))
# setting up queues
global q
q = queue.Queue()
# process workingProfiles, run assessment tool(s) against each Profile
for x in workingProfiles:
q.put(x)
if args.maxthreads and args.maxthreads > 0:
maxthreads = int(args.maxthreads)
else:
maxthreads = psutil.cpu_count(logical=False)
threads = [threading.Thread(target=worker) for _i in range(maxthreads)]
for thread in threads:
thread.start()
q.put(None) # one EOF marker for each thread
for thread in threads:
thread.join()
header = False
resultFileName = 'results-'+str(int(scanTime))+'-'+str(scanUUID)+'.csv'
resultFileName = outputDir + '/' + resultFileName
print("Opening CSV")
resultFile = open(resultFileName, 'w+')
for key in resultDict:
print("resultDict Key: " + key)
print("Value:")
print(resultDict[key])
for i in range(len(resultDict[key].split('\n'))):
if header:
if 'ACCOUNT_NUM' not in resultDict[key].split('\n')[i]:
resultFile.write(resultDict[key].split('\n')[i] + "\n")
else:
print("Writing Headers")
resultFile.write(resultDict[key].split('\n')[0] + "\n")
header = True
resultFile.close()
print("Result File: " + resultFileName)
process_results(resultFileName)
else:
if os.path.exists(args.resultsFile):
process_results(args.resultsFile)
else:
print('File unreadable: ' + str(args.resultsFile))
log.error('File unreadable: ' + str(args.resultsFile))
if __name__ == "__main__":
# execute only if run as a script
main()
| import argparse
import boto3
import csv
import json
import logging
import mmap
import numpy as np
import os
import pandas as pd
import psutil
import queue
from shlex import quote
import subprocess
import sys
import threading
import time
import uuid
def setup_args(parser):
parser.add_argument("-p", "--profile",
help="AWS Profile")
parser.add_argument("-pp", "--prowlerPath",
help="Path to Prowler Executable. "
"Defaults to ./prowler/prowler")
parser.add_argument("-pc", "--prowlerCheck",
help="Single or List of Prowler Check(s) [check11]")
parser.add_argument("-pg", "--prowlerGroup",
help="Group of Prowler Checks [cislevel2]")
parser.add_argument("-pE", "--prowlerExclude",
help="Execute all tests except a list of specified "
"checks separated by comma (i.e. check21,check31)")
parser.add_argument("-R", "--region",
help="AWS Region")
parser.add_argument("-r", "--regex",
help="REGEX Pattern to Identify AWS Profiles")
parser.add_argument("-o", "--outputDir",
help="Output Directory")
# parser.add_argument("-o", "--organization",
# help="AWS Profile for Organization Account")
parser.add_argument("-t", "--maxthreads", type=int,
help="Max threads: defaults to # of CPUs")
parser.add_argument("-F", "--resultsFile", type=str,
help="Results CSV to process to a report XLSX file")
parser.add_argument("-l", "--log", type=str,
choices=['info', 'INFO', 'debug', 'DEBUG'],
help="Set LogLevel to INFO (Default) or DEBUG")
parser.add_argument("-v", "--verbosity", type=int, choices=[0, 1],
help="increase output verbosity")
def check_args_debug(args):
# Handle logging
global outputDir
global logging
if args.log and args.log.upper() == "DEBUG":
loglevel = "DEBUG"
else:
loglevel = "INFO"
logging.basicConfig(filename=outputDir + '/' + 'assessment.log',
format='%(levelname)s:%(message)s',
level=loglevel)
def check_args_prowlerPath(args):
# Handle prowlerPath
global logging
global prowlerPath
if args.prowlerPath and os.path.exists(args.prowlerPath):
prowlerPath = args.prowlerPath
else:
if not os.path.exists("./prowler/prowler"):
print("Prowler not found. Install or clone the repository into "
"this directory or provide the path with -pp, --prowlerPath")
quit()
else:
prowlerPath = "./prowler/prowler"
def check_args_verbosity(args):
# handle verbosity
global logging
global verbose
if args.verbosity == 1:
verbose = True
logging.info("Verbose")
else:
verbose = False
logging.info("No Verbosity")
def check_args_creds(args):
# handle profiles / authentication / credentials
workingCreds = False
global logging
global verbose
global workingProfiles
workingProfiles = []
if not args.profile and not args.regex:
logging.info("Using AWS Default Profile")
if verbose:
print("Using AWS Default Profile")
print(args.profile)
if (not check_profile("default")):
logging.error("Default credentials not working.")
print("Default credentials not working.")
quit()
else:
workingProfiles.append("default")
workingCreds = True
if args.profile and args.profile is not None:
logging.info("Using " + args.profile + " Profile")
if verbose:
print("Using " + args.profile + " Profile")
if (not check_profile(args.profile)):
logging.error("Profile " + args.profile + " not working")
if verbose:
print("Profile " + args.profile + " not working")
quit()
else:
logging.info("Profile " + args.profile + " working")
if verbose:
print("Profile " + args.profile + " working")
workingProfiles.append(args.profile)
workingCreds = True
def check_args_regex(args):
global logging
global verbose
if not args.regex:
logging.info("No REGEX Pattern. Working on a single account.")
if verbose:
print("No REGEX Pattern. Working on a single account.")
else:
# To Do: turn these variable into arguments
configFile = "~/.aws/config"
credFile = "~/.aws/credentials"
profileCount = 0
if os.path.exists(os.path.expanduser(configFile)):
configFileContent = open(
os.path.expanduser(configFile), 'r').read()
else:
logging.error("AWS Config file unreadable")
print("AWS Config file unreadable")
quit()
if args.regex in configFileContent:
logging.info("REGEX found")
if verbose:
print("REGEX found")
for x in configFileContent.split("\n"):
if "[profile" in x and args.regex in x:
profileCount += 1
thisProfile = x.strip('[]').split(" ")[1]
logging.debug("Checking profile: " + thisProfile)
if verbose:
print("Checking profile: " + thisProfile)
if (check_profile(thisProfile)):
logging.debug("Profile " + thisProfile + " works.")
if verbose:
print("Profile " + thisProfile + " works.")
workingProfiles.append(thisProfile)
else:
logging.debug("Profile " + thisProfile
+ " does not work.")
if verbose:
print("Profile " + thisProfile + " does not work.")
if (profileCount > 1) or (profileCount == 0):
profresp = (str(profileCount) + " Profiles found. "
+ str(len(workingProfiles)) + " Profiles work.")
else:
profresp = str(profileCount) + " Profile found and works"
if(len(workingProfiles) == 0):
logging.error("No working profiles, REGEX: " + str(args.regex))
print("No working profiles for REGEX: " + str(args.regex))
quit()
print(profresp)
logging.info(profresp)
else:
logging.error("REGEX " + str(args.regex)
+ " not found in " + configFile)
print("REGEX " + str(args.regex) + " not found in " + configFile)
quit()
def check_args_outputDir(args):
global logging
global outputDir
outputDir = os.path.abspath(os.curdir)
if args.outputDir:
if not os.path.exists(args.outputDir):
print("Output Directory Does Not Exist: " + args.outputDir)
quit()
else:
outputDir = os.path.abspath(args.outputDir)
def process_args(args):
check_args_outputDir(args)
check_args_debug(args)
check_args_verbosity(args)
check_args_prowlerPath(args)
check_args_creds(args)
check_args_regex(args)
def check_profile(profile):
global logging
try:
if(profile == "default"):
client = boto3.session.Session()
else:
logging.info("Testing profile: " + profile)
client = boto3.session.Session(profile_name=profile)
except Exception as e:
logging.error("Error connecting: ")
logging.error(e)
return False
try:
iam = client.client('iam')
response = iam.list_users()
except Exception as e:
logging.error("Error listing users: ")
logging.error(e)
return False
if len(response['Users']) == 0:
logging.info("No users")
if len(response) > 0:
usercnt = len(response['Users'])
if(usercnt > 1):
userresp = " Users"
else:
userresp = " User"
logging.info(str(usercnt) + userresp)
return True
def run_prowler(x):
global args
global logging
global outputDir
global prowlerPath
global resultDict
global verbose
logging.debug("Inside run_prowler: " + x)
if verbose:
print("Inside run_prowler: " + x)
cmd = os.path.realpath(prowlerPath)
cmdopts = ' -p {}'.format(quote(x))
if args.region:
cmdopts += ' -r {}'.format(quote(args.Region))
else:
cmdopts += ' -r us-east-1'
if args.prowlerExclude:
cmdopts += ' -E {}'.format(quote(args.prowlerExclude))
cmdopts += ' -n'
cmdopts += ' -b -M csv'
if args.prowlerCheck is not None:
cmdopts += ' -c {}'.format(quote(args.prowlerCheck))
if args.prowlerGroup is not None:
cmdopts += ' -g {}'.format(quote(args.prowlerGroup))
logging.info(cmd+cmdopts)
if verbose:
print(cmd+cmdopts)
p = subprocess.run([cmd + cmdopts], shell=True, text=True, check=False,
capture_output=True)
logging.debug("Inside run_prowler - subprocess: ")
logging.info(p)
if verbose:
print("Inside run_prowler - subprocess")
print(p)
resultDict[x] = p.stdout
fname = 'prowler-' + str(int(scanTime)) + '-' + str(scanUUID)\
+ '-' + quote(x) + '.csv'
fname = outputDir + '/' + fname
f = open(fname, 'w')
f.write(p.stdout)
f.close()
def worker():
global logging
global q
global resultDict
global verbose
while True:
x = q.get()
if x is None: # EOF?
return
else:
logging.debug("Inside worker: " + x)
if verbose:
print("Inside worker: " + x)
run_prowler(x)
def check_args_organizations(args):
global logging
pass
# # Handle Organizations and use it to create list of accounts to audit
# if not args.organization:
# logging.info("No AWS Organization Account")
# if verbose:
# print("No AWS Organization Account")
# else:
# print("Not implemented yet")
def get_col_widths(dataframe, index):
# First we find the maximum length of the index column
if index:
idx_max = max([len(str(s)) for s in dataframe.index.values]
+ [len(str(dataframe.index.name))])
return [idx_max] + [max([len(str(s)) for s in dataframe[col].values]
+ [len(col)]) for col in dataframe.columns]
else:
# Then, we concatenate this to the max of the lengths of column name
# and its values for each column, left to right
return [max([len(str(s)) for s in dataframe[col].values]
+ [len(col)]) for col in dataframe.columns]
def process_results(resultFileName):
global args
global logging
if 'verbose' in globals():
verbose = True
else:
verbose = False
if args.resultsFile:
excelName = args.resultsFile.split('.')[0] + '.xlsx'
else:
excelName = 'results-'+str(int(scanTime))+'-'+str(scanUUID)+'.xlsx'
if 'outputDir' in globals():
excelName = outputDir + '/' + excelName
p_df = pd.read_csv(resultFileName,
dtype={'ACCOUNT_NUM': str, 'TITLE_ID': str})
if verbose:
print(p_df.shape)
print(p_df)
writer = pd.ExcelWriter(excelName, engine='xlsxwriter')
workbook = writer.book
# Write Summary first
q3 = ('(LEVEL == "Level 1" or LEVEL == "Level 2") and '
'(RESULT == "PASS" or RESULT == "FAIL")')
p_df_pass = p_df.query(q3)
if verbose:
print(p_df_pass)
p_df_pass.groupby(['PROFILE', 'ACCOUNT_NUM', 'RESULT'])['RESULT'].count().to_excel(writer, sheet_name="Summary")
worksheet = writer.sheets['Summary']
for i, width in enumerate(get_col_widths(p_df, False)):
worksheet.set_column(i, i, width)
# Write raw results to Excel
p_df.to_excel(writer, sheet_name='RawResults', index=False)
worksheet = writer.sheets['RawResults']
for i, width in enumerate(get_col_widths(p_df, False)):
worksheet.set_column(i, i, width)
# Write Passing results to Excel
q1 = 'RESULT == "PASS"'
p_df_pass = pd.pivot_table(
p_df.query(q1),
index=['TITLE_ID', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_pass.to_excel(writer, sheet_name="All Passing")
# Write Failing results to Excel
q2 = 'RESULT == "FAIL"'
p_df_fail = pd.pivot_table(
p_df.query(q2),
index=['TITLE_ID', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_fail.to_excel(writer, sheet_name="All Failing")
# Write CIS Benchmarks Passing results to Excel
q3 = 'RESULT == "PASS" and (LEVEL == "Level 1" or LEVEL == "Level 2")'
p_df_cis_pass = pd.pivot_table(
p_df.query(q1),
index=['TITLE_ID', 'LEVEL', 'SCORED', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_cis_pass.to_excel(writer, sheet_name="CIS Benchmarks Passing")
# Write CIS Benchmarks failing results to Excel
q4 = 'RESULT == "FAIL" and (LEVEL == "Level 1" or LEVEL == "Level 2")'
p_df_cis_fail = pd.pivot_table(
p_df.query(q2),
index=['TITLE_ID', 'LEVEL', 'SCORED', 'TITLE_TEXT'],
columns=['PROFILE', 'ACCOUNT_NUM'], values='RESULT',
aggfunc=np.count_nonzero, fill_value=0)
p_df_cis_fail.to_excel(writer, sheet_name="CIS Benchmarks Failing")
print("Report Excel File: " + excelName)
writer.save()
def main():
global logging
parser = argparse.ArgumentParser()
setup_args(parser)
global args
args = parser.parse_args()
if not args.resultsFile:
process_args(args)
global resultDict
resultDict = {}
global scanUUID
global scanTime
# Generate a Testing UUID and TimeStamp to add to logs / results
scanUUID = uuid.uuid4()
scanTime = time.time()
logging.info(scanUUID)
logging.info(int(scanTime))
if verbose:
print(scanUUID)
print(int(scanTime))
# setting up queues
global q
q = queue.Queue()
# process workingProfiles, run assessment tool(s) against each Profile
for x in workingProfiles:
q.put(x)
if args.maxthreads and args.maxthreads > 0:
maxthreads = int(args.maxthreads)
else:
maxthreads = psutil.cpu_count(logical=False)
threads = [threading.Thread(target=worker) for _i in range(maxthreads)]
for thread in threads:
thread.start()
q.put(None) # one EOF marker for each thread
for thread in threads:
thread.join()
header = False
resultFileName = 'results-'+str(int(scanTime))+'-'+str(scanUUID)+'.csv'
resultFileName = outputDir + '/' + resultFileName
print("Opening CSV")
resultFile = open(resultFileName, 'w+')
for key in resultDict:
print("resultDict Key: " + key)
print("Value:")
print(resultDict[key])
for i in range(len(resultDict[key].split('\n'))):
if header:
if 'ACCOUNT_NUM' not in resultDict[key].split('\n')[i]:
resultFile.write(resultDict[key].split('\n')[i] + "\n")
else:
print("Writing Headers")
resultFile.write(resultDict[key].split('\n')[0] + "\n")
header = True
resultFile.close()
print("Result File: " + resultFileName)
process_results(resultFileName)
else:
if os.path.exists(args.resultsFile):
process_results(args.resultsFile)
else:
print('File unreadable: ' + str(args.resultsFile))
log.error('File unreadable: ' + str(args.resultsFile))
if __name__ == "__main__":
# execute only if run as a script
main()
| en | 0.725512 | # parser.add_argument("-o", "--organization", # help="AWS Profile for Organization Account") # of CPUs") # Handle logging # Handle prowlerPath # handle verbosity # handle profiles / authentication / credentials # To Do: turn these variable into arguments # EOF? # # Handle Organizations and use it to create list of accounts to audit # if not args.organization: # logging.info("No AWS Organization Account") # if verbose: # print("No AWS Organization Account") # else: # print("Not implemented yet") # First we find the maximum length of the index column # Then, we concatenate this to the max of the lengths of column name # and its values for each column, left to right # Write Summary first # Write raw results to Excel # Write Passing results to Excel # Write Failing results to Excel # Write CIS Benchmarks Passing results to Excel # Write CIS Benchmarks failing results to Excel # Generate a Testing UUID and TimeStamp to add to logs / results # setting up queues # process workingProfiles, run assessment tool(s) against each Profile # one EOF marker for each thread # execute only if run as a script | 2.258748 | 2 |
xiaomi_thermo_unified/sensors/device_info.py | h4/xiaomi_thermo_unified | 1 | 6615267 | from dataclasses import dataclass
@dataclass
class DeviceInfo:
#device_name: str
serial_number: str
manufacturer: str
model_number: str
hardware_revision: str
firmware_revision: str
| from dataclasses import dataclass
@dataclass
class DeviceInfo:
#device_name: str
serial_number: str
manufacturer: str
model_number: str
hardware_revision: str
firmware_revision: str
| en | 0.738802 | #device_name: str | 2.63886 | 3 |
antx/ann_patterns.py | Esukhia/annotation_transfer | 0 | 6615268 | hl = chr(200000) # hfml local_id lower
hu = chr(1000049) # hfml local_id upper
HFML_ANN_PATTERN = [
("author", fr"(\<[{hl}-{hu}]?au)"),
("book-title", fr"(\<[{hl}-{hu}]?k1)"),
("poti_title", fr"(\<[{hl}-{hu}]?k2)"),
("chapter_title", fr"(\<[{hl}-{hu}]?k3)"),
("cittation_start", fr"(\<[{hl}-{hu}]?g)"),
("citation_end", r"(g\>)"),
("sabche_start", fr"(\<[{hl}-{hu}]?q)"),
("sabche_end", r"(q\>)"),
("tsawa_start", fr"(\<[{hl}-{hu}]?m)"),
("tsawa_end", r"(m\>)"),
("yigchung_start", fr"(\<[{hl}-{hu}]?y)"),
("yigchung_end", r"(y\>)"),
("end-1", r"(\>)"),
]
| hl = chr(200000) # hfml local_id lower
hu = chr(1000049) # hfml local_id upper
HFML_ANN_PATTERN = [
("author", fr"(\<[{hl}-{hu}]?au)"),
("book-title", fr"(\<[{hl}-{hu}]?k1)"),
("poti_title", fr"(\<[{hl}-{hu}]?k2)"),
("chapter_title", fr"(\<[{hl}-{hu}]?k3)"),
("cittation_start", fr"(\<[{hl}-{hu}]?g)"),
("citation_end", r"(g\>)"),
("sabche_start", fr"(\<[{hl}-{hu}]?q)"),
("sabche_end", r"(q\>)"),
("tsawa_start", fr"(\<[{hl}-{hu}]?m)"),
("tsawa_end", r"(m\>)"),
("yigchung_start", fr"(\<[{hl}-{hu}]?y)"),
("yigchung_end", r"(y\>)"),
("end-1", r"(\>)"),
]
| en | 0.63036 | # hfml local_id lower # hfml local_id upper | 1.798009 | 2 |
src/smarttest/tests/__init__.py | kidosoft/django-smarttest | 0 | 6615269 | <filename>src/smarttest/tests/__init__.py<gh_stars>0
""" Tests. """
from .test_compat import * # noqa
from .test_decorators import * # noqa
from .test_testcases import * # noqa
| <filename>src/smarttest/tests/__init__.py<gh_stars>0
""" Tests. """
from .test_compat import * # noqa
from .test_decorators import * # noqa
from .test_testcases import * # noqa
| uz | 0.391524 | Tests. # noqa # noqa # noqa | 1.248348 | 1 |
telugu_text_gen.py | golden-panther/Telugu_text_generator | 0 | 6615270 | <gh_stars>0
from googletrans import Translator
from PIL import Image
import streamlit as st
import requests
st.set_option('deprecation.showfileUploaderEncoding', False)
img = Image.open("telugu-talli.png")
st.sidebar.image(img)
st.sidebar.subheader("దేశ భాషలందు తెలుగు లెస్స \n పంచదార కన్న \n పనస తొనల కన్న \n కమ్మని తేనె కన్న \n తెలుగు మిన్న\n")
st.sidebar.write("You can find some telugu\ntext here (for custom input):\n https://te.wikipedia.org/wiki/")
st.title("Telugu Language Text Generator")
st.subheader("This is a simple AI text generator. It takes a seed text as input and generates telugu text based on seed/prompt as output.")
option = st.selectbox('Input custom prompt or use one of our examples:',('Custom Prompt', 'ప్రపంచంలో ఒకప్పుడు', 'ఆరోగ్యంగా ఉండటానికి', 'ప్రపంచంలోని ఉత్తమ సంగీత స్వరకర్త', 'పర్యావరణ కాలుష్యం', 'నా పేరు'))
if option=="Custom Prompt":
telugu_query = st.text_input(label="Enter seed(prompt) text in telugu")
else:
telugu_query = option
with st.form(key='my_form'):
amount_of_text = st.slider(label="Selcect the amount of text to generate", min_value = 1, max_value=10, step = 1)
submit_button = st.form_submit_button(label='Submit')
if submit_button:
if not telugu_query:
st.info("You haven't given any seed text in telugu (తెలుగు) language")
else:
translator = Translator()
try:
temp1 = translator.translate(telugu_query, src='te', dest='en')
english_query = temp1.text
english_text = ''
for i in range(amount_of_text):
r = requests.post('https://api.deepai.org/api/text-generator',
data={'text': english_query},
headers={'api-key': '<KEY>'
})
english_text += r.json()['output']
temp2 = translator.translate(english_text, src='en', dest='te')
telugu_text = temp2.text
st.write(telugu_text)
st.info("Successfully generated telugu text")
# st.balloons()
except:
pass
| from googletrans import Translator
from PIL import Image
import streamlit as st
import requests
st.set_option('deprecation.showfileUploaderEncoding', False)
img = Image.open("telugu-talli.png")
st.sidebar.image(img)
st.sidebar.subheader("దేశ భాషలందు తెలుగు లెస్స \n పంచదార కన్న \n పనస తొనల కన్న \n కమ్మని తేనె కన్న \n తెలుగు మిన్న\n")
st.sidebar.write("You can find some telugu\ntext here (for custom input):\n https://te.wikipedia.org/wiki/")
st.title("Telugu Language Text Generator")
st.subheader("This is a simple AI text generator. It takes a seed text as input and generates telugu text based on seed/prompt as output.")
option = st.selectbox('Input custom prompt or use one of our examples:',('Custom Prompt', 'ప్రపంచంలో ఒకప్పుడు', 'ఆరోగ్యంగా ఉండటానికి', 'ప్రపంచంలోని ఉత్తమ సంగీత స్వరకర్త', 'పర్యావరణ కాలుష్యం', 'నా పేరు'))
if option=="Custom Prompt":
telugu_query = st.text_input(label="Enter seed(prompt) text in telugu")
else:
telugu_query = option
with st.form(key='my_form'):
amount_of_text = st.slider(label="Selcect the amount of text to generate", min_value = 1, max_value=10, step = 1)
submit_button = st.form_submit_button(label='Submit')
if submit_button:
if not telugu_query:
st.info("You haven't given any seed text in telugu (తెలుగు) language")
else:
translator = Translator()
try:
temp1 = translator.translate(telugu_query, src='te', dest='en')
english_query = temp1.text
english_text = ''
for i in range(amount_of_text):
r = requests.post('https://api.deepai.org/api/text-generator',
data={'text': english_query},
headers={'api-key': '<KEY>'
})
english_text += r.json()['output']
temp2 = translator.translate(english_text, src='en', dest='te')
telugu_text = temp2.text
st.write(telugu_text)
st.info("Successfully generated telugu text")
# st.balloons()
except:
pass | ru | 0.22502 | # st.balloons() | 3.575729 | 4 |
main.py | AbChatt/SigmaCapital | 0 | 6615271 | <gh_stars>0
from model import *
import asyncio
from ISStreamer.Streamer import Streamer
# print(modelSelection('AAPL_Jun_2019_2020.csv', "AAPL"))
# print(modelSelection('AAPL-2.csv', "AAPL"))
# list of companies
# tickers = ['AAPL', 'DIS', 'XOM']
# tickers = ['AAPL', 'DIS', 'XOM']
tickers = input("Please enter the tickers of the stocks you want, seperated by a space: ").split()
print(tickers)
def highlights(ratings, rating):
buy_and_sell = sorted(ratings, key=lambda x: x[0])
buy = sorted(list(filter(lambda x: x[0] == "BUY", buy_and_sell)), key=lambda x: x[1])
sell = sorted(list(filter(lambda x: x[0] == "SELL", buy_and_sell)), key=lambda x: x[1])
our_highlights = (list(reversed(buy[-5:])), list(sell[0:5]))
if rating == "BUY":
return [x[3] for x in our_highlights[0]]
else:
return [x[3] for x in our_highlights[1]]
# get API key
print("Please enter your API key: ")
api_key = input()
# asynchronously get input
loop = asyncio.get_event_loop()
tasks = [get_data(ticker, api_key) for ticker in tickers]
group1 = asyncio.gather(*tasks)
results = loop.run_until_complete(group1)
# Configure Streaming Bucket
ACCESS_KEY = "<KEY>"
BUCKET_KEY = "<KEY>"
BUCKET_NAME = "Hack The Northeast"
streamer = Streamer(bucket_name=BUCKET_NAME, bucket_key=BUCKET_KEY, access_key=ACCESS_KEY)
print("And the results are: ")
for i in range(len(results)):
# print(modelSelection(results[i][0], tickers[i]))
now = modelSelection(results[i], tickers[i])
streamer.log(tickers[i], now[0])
print(now)
streamer.flush()
# print(highlights([("SELL", -1, 1, "GOOGL"), ("SELL", -0.5, 1, "RDSA"), ("BUY", 100, 1, "APPL"), ("BUY", 80, 1, "BP"), ("SELL", -0.3, 1, "XXON"), ("BUY", 70, 1, "MSFT"), ("BUY", 60, 1, "BRK.A"), ("BUY", 50, 1, "JPM"), ("SELL", -0.4, 1, "CMCSA"), ("SELL", -0.2, 1, "PM")], "SELL")) | from model import *
import asyncio
from ISStreamer.Streamer import Streamer
# print(modelSelection('AAPL_Jun_2019_2020.csv', "AAPL"))
# print(modelSelection('AAPL-2.csv', "AAPL"))
# list of companies
# tickers = ['AAPL', 'DIS', 'XOM']
# tickers = ['AAPL', 'DIS', 'XOM']
tickers = input("Please enter the tickers of the stocks you want, seperated by a space: ").split()
print(tickers)
def highlights(ratings, rating):
buy_and_sell = sorted(ratings, key=lambda x: x[0])
buy = sorted(list(filter(lambda x: x[0] == "BUY", buy_and_sell)), key=lambda x: x[1])
sell = sorted(list(filter(lambda x: x[0] == "SELL", buy_and_sell)), key=lambda x: x[1])
our_highlights = (list(reversed(buy[-5:])), list(sell[0:5]))
if rating == "BUY":
return [x[3] for x in our_highlights[0]]
else:
return [x[3] for x in our_highlights[1]]
# get API key
print("Please enter your API key: ")
api_key = input()
# asynchronously get input
loop = asyncio.get_event_loop()
tasks = [get_data(ticker, api_key) for ticker in tickers]
group1 = asyncio.gather(*tasks)
results = loop.run_until_complete(group1)
# Configure Streaming Bucket
ACCESS_KEY = "<KEY>"
BUCKET_KEY = "<KEY>"
BUCKET_NAME = "Hack The Northeast"
streamer = Streamer(bucket_name=BUCKET_NAME, bucket_key=BUCKET_KEY, access_key=ACCESS_KEY)
print("And the results are: ")
for i in range(len(results)):
# print(modelSelection(results[i][0], tickers[i]))
now = modelSelection(results[i], tickers[i])
streamer.log(tickers[i], now[0])
print(now)
streamer.flush()
# print(highlights([("SELL", -1, 1, "GOOGL"), ("SELL", -0.5, 1, "RDSA"), ("BUY", 100, 1, "APPL"), ("BUY", 80, 1, "BP"), ("SELL", -0.3, 1, "XXON"), ("BUY", 70, 1, "MSFT"), ("BUY", 60, 1, "BRK.A"), ("BUY", 50, 1, "JPM"), ("SELL", -0.4, 1, "CMCSA"), ("SELL", -0.2, 1, "PM")], "SELL")) | en | 0.517502 | # print(modelSelection('AAPL_Jun_2019_2020.csv', "AAPL")) # print(modelSelection('AAPL-2.csv', "AAPL")) # list of companies # tickers = ['AAPL', 'DIS', 'XOM'] # tickers = ['AAPL', 'DIS', 'XOM'] # get API key # asynchronously get input # Configure Streaming Bucket # print(modelSelection(results[i][0], tickers[i])) # print(highlights([("SELL", -1, 1, "GOOGL"), ("SELL", -0.5, 1, "RDSA"), ("BUY", 100, 1, "APPL"), ("BUY", 80, 1, "BP"), ("SELL", -0.3, 1, "XXON"), ("BUY", 70, 1, "MSFT"), ("BUY", 60, 1, "BRK.A"), ("BUY", 50, 1, "JPM"), ("SELL", -0.4, 1, "CMCSA"), ("SELL", -0.2, 1, "PM")], "SELL")) | 2.943307 | 3 |
python/file_recompress_images.py | VelionaVollerei/PMX-VMD-Scripting-Tools | 0 | 6615272 | <reponame>VelionaVollerei/PMX-VMD-Scripting-Tools<filename>python/file_recompress_images.py<gh_stars>0
_SCRIPT_VERSION = "Script version: Nuthouse01 - 6/10/2021 - v6.00"
# This code is free to use and re-distribute, but I cannot be held responsible for damages that it may or may not cause.
#####################
# first, system imports
import os
Image = None
# NOTE: i comment this block before compiling the EXE cuz the Pillow library is gigantic & makes the exe version like 200K
try:
from PIL import Image
except ImportError:
Image = None
# second, wrap custom imports with a try-except to catch it if files are missing
try:
# these imports work if running from GUI
from . import nuthouse01_core as core
from . import nuthouse01_pmx_parser as pmxlib
from . import file_sort_textures
except ImportError as eee:
try:
# these imports work if running from double-click on THIS script
import nuthouse01_core as core
import nuthouse01_pmx_parser as pmxlib
import file_sort_textures
except ImportError as eee:
print(eee.__class__.__name__, eee)
print("ERROR: failed to import some of the necessary files, all my scripts must be together in the same folder!")
print("...press ENTER to exit...")
input()
exit()
core = pmxlib = file_sort_textures = None
# when debug=True, disable the catchall try-except block. this means the full stack trace gets printed when it crashes,
# but if launched in a new window it exits immediately so you can't read it.
DEBUG = False
# this is recommended true, for obvious reasons
MAKE_BACKUP_ZIPFILE = True
# note: zipper automatically appends .zip onto whatever output name i give it, so dont give it a .zip suffix here
BACKUP_SUFFIX = "beforePNG"
IM_FORMAT_ALWAYS_CONVERT = ("DDS", "TIFF", "TGA")
IM_FORMAT_ALWAYS_SKIP = ("JPEG", "GIF")
# these are rare BMP formats that are known to be incompatible with MocuMocuDance
KNOWN_BAD_FORMATS = ("BGR;15", "BGR;16")
# if recompression saves less than XXX KB, then don't save the result
REQUIRED_COMPRESSION_AMOUNT_KB = 100
# how PIL reads things:
# PNG, JPEG, BMP, DDS, TIFF, GIF
IMG_TYPE_TO_EXT = file_sort_textures.IMG_TYPE_TO_EXT
IMG_EXT = file_sort_textures.IMG_EXT
helptext = '''=================================================
file_recompress_images:
This tool will try to re-compress all image files in the file tree.
Generally this means converting BMP/TGA/other images to PNG format, for maximum lossless image compression.
JPEG image compression is more aggressive than PNG, so JPEG images will stay as JPEG. GIFs are weird so they are also not modified.
This requires a PMX file to use as a root so it knows where to start reading files from.
It creates a zipfile backup of the entire folder, just in case.
This script does NOT ask for permission beforehand, it just creates a backup and does its thing, then afterwards it reports what it did.
Bonus: this can process all "neighbor" pmx files in addition to the target, this highly recommended because neighbors usually reference similar sets of files.
Note: this script requires the Python library 'Pillow' to be installed.
Note: unlike my other scripts, this overwrites the original input PMX file(s) instead of creating a new file with a suffix. This is because I already create a zipfile that contains the original input PMX, so that serves as a good backup.
'''
# dds/tga/tiff will always be converted to png
# jpeg/gif will always be skipped (jpeg is already lossy & therefore compresses better than png, gif is animated & complex)
# bmp will be re-compressed to png if the original bmp is in 15-bit or 16-bit encoding (mocumocudance compatability)
# other image types are re-compressed to png if doing so saves 100kb or more
# also, all images are renamed so that the file extension matches the actual image data format
def main(moreinfo=False):
# step zero: verify that Pillow exists
if Image is None:
core.MY_PRINT_FUNC("ERROR: Python library 'Pillow' not found. This script requires this library to run!")
core.MY_PRINT_FUNC("This script cannot be ran from the EXE version, the Pillow library is too large to package into the executable.")
core.MY_PRINT_FUNC("To install Pillow, please use the command 'pip install Pillow' in the Windows command prompt and then run the Python scripts directly.")
return None
# print pillow version just cuz
core.MY_PRINT_FUNC("Using Pillow version '%s'" % Image.__version__)
core.MY_PRINT_FUNC("Please enter name of PMX model file:")
input_filename_pmx = core.MY_FILEPROMPT_FUNC(".pmx")
# absolute path to directory holding the pmx
input_filename_pmx_abs = os.path.normpath(os.path.abspath(input_filename_pmx))
startpath, input_filename_pmx_rel = os.path.split(input_filename_pmx_abs)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# first, build the list of ALL files that actually exist, then filter it down to neighbor PMXs and relevant files
relative_all_exist_files = file_sort_textures.walk_filetree_from_root(startpath)
core.MY_PRINT_FUNC("ALL EXISTING FILES:", len(relative_all_exist_files))
# now fill "neighbor_pmx" by finding files without path separator that end in PMX
# these are relative paths tho
neighbor_pmx = [f for f in relative_all_exist_files if
(f.lower().endswith(".pmx")) and
(os.path.sep not in f) and
f != input_filename_pmx_rel]
core.MY_PRINT_FUNC("NEIGHBOR PMX FILES:", len(neighbor_pmx))
# filter down to just image files
relevant_exist_files = [f for f in relative_all_exist_files if f.lower().endswith(IMG_EXT)]
core.MY_PRINT_FUNC("RELEVANT EXISTING FILES:", len(relevant_exist_files))
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# now ask if I care about the neighbors and read the PMXes into memory
pmx_filenames = [input_filename_pmx_rel]
if neighbor_pmx:
core.MY_PRINT_FUNC("")
info = [
"Detected %d top-level neighboring PMX files, these probably share the same filebase as the target." % len(neighbor_pmx),
"If files are moved/renamed but the neighbors are not processed, the neighbor texture references will probably break.",
"Do you want to process all neighbors in addition to the target? (highly recommended)",
"1 = Yes, 2 = No"]
r = core.MY_SIMPLECHOICE_FUNC((1, 2), info)
if r == 1:
core.MY_PRINT_FUNC("Processing target + all neighbor files")
# append neighbor PMX files onto the list of files to be processed
pmx_filenames += neighbor_pmx
else:
core.MY_PRINT_FUNC("WARNING: Processing only target, ignoring %d neighbor PMX files" % len(neighbor_pmx))
# now read all the PMX objects & store in dict alongside the relative name
# dictionary where keys are filename and values are resulting pmx objects
all_pmx_obj = {}
for this_pmx_name in pmx_filenames:
this_pmx_obj = pmxlib.read_pmx(os.path.join(startpath, this_pmx_name), moreinfo=moreinfo)
all_pmx_obj[this_pmx_name] = this_pmx_obj
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# for each pmx, for each file on disk, match against files used in textures (case-insensitive) and replace with canonical name-on-disk
# also fill out how much and how each file is used, and unify dupes between files, all that good stuff
filerecord_list = file_sort_textures.build_filerecord_list(all_pmx_obj, relevant_exist_files, moreinfo)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# DETERMINE NEW NAMES FOR FILES
# first, create a backup of the folder
# save the name, so that i can delete it if i didn't make any changes
zipfile_name = ""
if MAKE_BACKUP_ZIPFILE:
r = file_sort_textures.make_zipfile_backup(startpath, BACKUP_SUFFIX)
if not r:
# this happens if the backup failed somehow AND the user decided to quit
core.MY_PRINT_FUNC("Aborting: no files were changed")
return None
zipfile_name = r
# name used for temporary location
tempfilename = os.path.join(startpath,"temp_image_file_just_delete_me.png")
pil_cannot_inspect = 0
pil_cannot_inspect_list = []
pil_imgext_mismatch = 0
num_recompressed = 0
# list of memory saved by recompressing each file. same order/length as "image_filerecords"
mem_saved = []
# make image persistient, so I know it always exists and I can always call "close" before open
im = None
# only iterate over images that exist, obviously
image_filerecords = [f for f in filerecord_list if f.exists]
# iterate over the images
for i, p in enumerate(image_filerecords):
abspath = os.path.join(startpath, p.name)
orig_size = os.path.getsize(abspath)
# if not moreinfo, then each line overwrites the previous like a progress printout does
# if moreinfo, then each line is printed permanently
core.MY_PRINT_FUNC("...analyzing {:>3}/{:>3}, file='{}', size={} ".format(
i+1, len(image_filerecords), p.name, core.prettyprint_file_size(orig_size)), is_progress=(not moreinfo))
mem_saved.append(0)
# before opening, try to close it just in case
if im is not None:
im.close()
# open the image & catch all possible errors
try:
im = Image.open(abspath)
except FileNotFoundError as eeee:
core.MY_PRINT_FUNC("FILESYSTEM MALFUNCTION!!", eeee.__class__.__name__, eeee)
core.MY_PRINT_FUNC("os.walk created a list of all filenames on disk, but then this filename doesn't exist when i try to open it?")
im = None
except OSError as eeee:
# this has 2 causes, "Unsupported BMP bitfields layout" or "cannot identify image file"
if DEBUG:
print("CANNOT INSPECT!1", eeee.__class__.__name__, eeee, p.name)
im = None
except NotImplementedError as eeee:
# this is because there's some DDS format it can't make sense of
if DEBUG:
print("CANNOT INSPECT!2", eeee.__class__.__name__, eeee, p.name)
im = None
if im is None:
pil_cannot_inspect += 1
pil_cannot_inspect_list.append(p.name)
continue
if im.format not in IMG_TYPE_TO_EXT:
core.MY_PRINT_FUNC("WARNING: file '%s' has unusual image format '%s', attempting to continue" % (p.name, im.format))
# now the image is successfully opened!
newname = p.name
base, currext = os.path.splitext(newname)
# 1, depending on image format, attempt to re-save as PNG
if im.format not in IM_FORMAT_ALWAYS_SKIP:
# delete temp file if it still exists
if os.path.exists(tempfilename):
try:
os.remove(tempfilename)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR1: failed to delete temp image file '%s' during processing" % tempfilename)
break
# save to tempfilename with png format, use optimize=true
try:
im.save(tempfilename, format="PNG", optimize=True)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR2: failed to re-compress image '%s', original not modified" % p.name)
continue
# measure & compare file size
new_size = os.path.getsize(tempfilename)
diff = orig_size - new_size
# if using a 16-bit BMP format, re-save back to bmp with same name
is_bad_bmp = False
if im.format == "BMP":
try:
# this might fail, images are weird, sometimes they don't have the attributes i expect
if im.tile[0][3][0] in KNOWN_BAD_FORMATS:
is_bad_bmp = True
except Exception as e:
if DEBUG:
print(e.__class__.__name__, e, "BMP THING", p.name, im.tile)
if diff > (REQUIRED_COMPRESSION_AMOUNT_KB * 1024) \
or is_bad_bmp\
or im.format in IM_FORMAT_ALWAYS_CONVERT:
# if it frees up at least XXX kb, i will keep it!
# also keep it if it is a bmp encoded with 15-bit or 16-bit colors
# set p.newname = png, and delete original and move tempname to base.png
try:
# delete original
os.remove(os.path.join(startpath, p.name))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR3: failed to delete old image '%s' after recompressing" % p.name)
continue
newname = base + ".png"
# resolve potential collisions by adding numbers suffix to file names
# first need to make path absolute so get_unused_file_name can check the disk.
newname = os.path.join(startpath, newname)
# then check uniqueness against files on disk
newname = core.get_unused_file_name(newname)
# now dest path is guaranteed unique against other existing files
# make the path no longer absolute: undo adding "startpath" above
newname = os.path.relpath(newname, startpath)
try:
# move new into place
os.rename(tempfilename, os.path.join(startpath, newname))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR4: after deleting original '%s', failed to move recompressed version into place!" % p.name)
continue
num_recompressed += 1
p.newname = newname
mem_saved[-1] = diff
continue # if succesfully re-saved, do not do the extension-checking below
# if this is not sufficiently compressed, do not use "continue", DO hit the extension-checking below
# 2, if the file extension doesn't match with the image type, then make it match
# this only happens if the image was not re-saved above
if im.format in IMG_TYPE_TO_EXT and currext not in IMG_TYPE_TO_EXT[im.format]:
newname = base + IMG_TYPE_TO_EXT[im.format][0]
# resolve potential collisions by adding numbers suffix to file names
# first need to make path absolute so get_unused_file_name can check the disk.
newname = os.path.join(startpath, newname)
# then check uniqueness against files on disk
newname = core.get_unused_file_name(newname)
# now dest path is guaranteed unique against other existing files
# make the path no longer absolute: undo adding "startpath" above
newname = os.path.relpath(newname, startpath)
# do the actual rename here & now
try:
# os.renames creates all necessary intermediate folders needed for the destination
# it also deletes the source folders if they become empty after the rename operation
os.renames(os.path.join(startpath, p.name), os.path.join(startpath, newname))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR5: unable to rename file '%s' --> '%s', attempting to continue with other file rename operations"
% (p.name, newname))
continue
pil_imgext_mismatch += 1
p.newname = newname
continue
# these must be the same length after iterating
assert len(mem_saved) == len(image_filerecords)
# if the image is still open, close it
if im is not None:
im.close()
# delete temp file if it still exists
if os.path.exists(tempfilename):
try:
os.remove(tempfilename)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("WARNING: failed to delete temp image file '%s' after processing" % tempfilename)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# are there any with proposed renaming?
if not any(u.newname is not None for u in image_filerecords):
core.MY_PRINT_FUNC("No proposed file changes")
# if nothing was changed, delete the backup zip!
core.MY_PRINT_FUNC("Deleting backup archive")
if os.path.exists(zipfile_name):
try:
os.remove(zipfile_name)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("WARNING: failed to delete pointless zip file '%s'" % zipfile_name)
core.MY_PRINT_FUNC("Aborting: no files were changed")
return None
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# finally, do the actual renaming:
# do all renaming in PMXes
file_sort_textures.apply_file_renaming(all_pmx_obj, image_filerecords, startpath, skipdiskrename=True)
# write out
for this_pmx_name, this_pmx_obj in all_pmx_obj.items():
# NOTE: this is OVERWRITING THE PREVIOUS PMX FILE, NOT CREATING A NEW ONE
# because I make a zipfile backup I don't need to feel worried about preserving the old version
output_filename_pmx = os.path.join(startpath, this_pmx_name)
# output_filename_pmx = core.get_unused_file_name(output_filename_pmx)
pmxlib.write_pmx(output_filename_pmx, this_pmx_obj, moreinfo=moreinfo)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# NOW PRINT MY RENAMINGS and other findings
filerecord_with_savings = zip(image_filerecords, mem_saved)
changed_files = [u for u in filerecord_with_savings if u[0].newname is not None]
core.MY_PRINT_FUNC("="*60)
if pil_cannot_inspect:
core.MY_PRINT_FUNC("WARNING: failed to inspect %d image files, these must be handled manually" % pil_cannot_inspect)
core.MY_PRINT_FUNC(pil_cannot_inspect_list)
if num_recompressed:
core.MY_PRINT_FUNC("Recompressed %d images! %s of disk space has been freed" % (num_recompressed, core.prettyprint_file_size(sum(mem_saved))))
if pil_imgext_mismatch:
core.MY_PRINT_FUNC("Renamed %d images that had incorrect extensions (included below)" % pil_imgext_mismatch)
oldname_list = [p[0].name for p in changed_files]
oldname_list_j = core.MY_JUSTIFY_STRINGLIST(oldname_list)
newname_list = [p[0].newname for p in changed_files]
newname_list_j = core.MY_JUSTIFY_STRINGLIST(newname_list)
savings_list = [("" if p[1]==0 else "saved " + core.prettyprint_file_size(p[1])) for p in changed_files]
zipped = list(zip(oldname_list_j, newname_list_j, savings_list))
zipped_and_sorted = sorted(zipped, key=lambda y: file_sort_textures.sortbydirdepth(y[0]))
for o,n,s in zipped_and_sorted:
# print 'from' with the case/separator it uses in the PMX
core.MY_PRINT_FUNC(" {:s} --> {:s} | {:s}".format(o, n, s))
core.MY_PRINT_FUNC("Done!")
return None
if __name__ == '__main__':
print(_SCRIPT_VERSION)
if DEBUG:
# print info to explain the purpose of this file
core.MY_PRINT_FUNC(helptext)
core.MY_PRINT_FUNC("")
main()
core.pause_and_quit("Done with everything! Goodbye!")
else:
try:
# print info to explain the purpose of this file
core.MY_PRINT_FUNC(helptext)
core.MY_PRINT_FUNC("")
main()
core.pause_and_quit("Done with everything! Goodbye!")
except (KeyboardInterrupt, SystemExit):
# this is normal and expected, do nothing and die normally
pass
except Exception as ee:
# if an unexpected error occurs, catch it and print it and call pause_and_quit so the window stays open for a bit
core.MY_PRINT_FUNC(ee)
core.pause_and_quit("ERROR: something truly strange and unexpected has occurred, sorry, good luck figuring out what tho")
| _SCRIPT_VERSION = "Script version: Nuthouse01 - 6/10/2021 - v6.00"
# This code is free to use and re-distribute, but I cannot be held responsible for damages that it may or may not cause.
#####################
# first, system imports
import os
Image = None
# NOTE: i comment this block before compiling the EXE cuz the Pillow library is gigantic & makes the exe version like 200K
try:
from PIL import Image
except ImportError:
Image = None
# second, wrap custom imports with a try-except to catch it if files are missing
try:
# these imports work if running from GUI
from . import nuthouse01_core as core
from . import nuthouse01_pmx_parser as pmxlib
from . import file_sort_textures
except ImportError as eee:
try:
# these imports work if running from double-click on THIS script
import nuthouse01_core as core
import nuthouse01_pmx_parser as pmxlib
import file_sort_textures
except ImportError as eee:
print(eee.__class__.__name__, eee)
print("ERROR: failed to import some of the necessary files, all my scripts must be together in the same folder!")
print("...press ENTER to exit...")
input()
exit()
core = pmxlib = file_sort_textures = None
# when debug=True, disable the catchall try-except block. this means the full stack trace gets printed when it crashes,
# but if launched in a new window it exits immediately so you can't read it.
DEBUG = False
# this is recommended true, for obvious reasons
MAKE_BACKUP_ZIPFILE = True
# note: zipper automatically appends .zip onto whatever output name i give it, so dont give it a .zip suffix here
BACKUP_SUFFIX = "beforePNG"
IM_FORMAT_ALWAYS_CONVERT = ("DDS", "TIFF", "TGA")
IM_FORMAT_ALWAYS_SKIP = ("JPEG", "GIF")
# these are rare BMP formats that are known to be incompatible with MocuMocuDance
KNOWN_BAD_FORMATS = ("BGR;15", "BGR;16")
# if recompression saves less than XXX KB, then don't save the result
REQUIRED_COMPRESSION_AMOUNT_KB = 100
# how PIL reads things:
# PNG, JPEG, BMP, DDS, TIFF, GIF
IMG_TYPE_TO_EXT = file_sort_textures.IMG_TYPE_TO_EXT
IMG_EXT = file_sort_textures.IMG_EXT
helptext = '''=================================================
file_recompress_images:
This tool will try to re-compress all image files in the file tree.
Generally this means converting BMP/TGA/other images to PNG format, for maximum lossless image compression.
JPEG image compression is more aggressive than PNG, so JPEG images will stay as JPEG. GIFs are weird so they are also not modified.
This requires a PMX file to use as a root so it knows where to start reading files from.
It creates a zipfile backup of the entire folder, just in case.
This script does NOT ask for permission beforehand, it just creates a backup and does its thing, then afterwards it reports what it did.
Bonus: this can process all "neighbor" pmx files in addition to the target, this highly recommended because neighbors usually reference similar sets of files.
Note: this script requires the Python library 'Pillow' to be installed.
Note: unlike my other scripts, this overwrites the original input PMX file(s) instead of creating a new file with a suffix. This is because I already create a zipfile that contains the original input PMX, so that serves as a good backup.
'''
# dds/tga/tiff will always be converted to png
# jpeg/gif will always be skipped (jpeg is already lossy & therefore compresses better than png, gif is animated & complex)
# bmp will be re-compressed to png if the original bmp is in 15-bit or 16-bit encoding (mocumocudance compatability)
# other image types are re-compressed to png if doing so saves 100kb or more
# also, all images are renamed so that the file extension matches the actual image data format
def main(moreinfo=False):
# step zero: verify that Pillow exists
if Image is None:
core.MY_PRINT_FUNC("ERROR: Python library 'Pillow' not found. This script requires this library to run!")
core.MY_PRINT_FUNC("This script cannot be ran from the EXE version, the Pillow library is too large to package into the executable.")
core.MY_PRINT_FUNC("To install Pillow, please use the command 'pip install Pillow' in the Windows command prompt and then run the Python scripts directly.")
return None
# print pillow version just cuz
core.MY_PRINT_FUNC("Using Pillow version '%s'" % Image.__version__)
core.MY_PRINT_FUNC("Please enter name of PMX model file:")
input_filename_pmx = core.MY_FILEPROMPT_FUNC(".pmx")
# absolute path to directory holding the pmx
input_filename_pmx_abs = os.path.normpath(os.path.abspath(input_filename_pmx))
startpath, input_filename_pmx_rel = os.path.split(input_filename_pmx_abs)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# first, build the list of ALL files that actually exist, then filter it down to neighbor PMXs and relevant files
relative_all_exist_files = file_sort_textures.walk_filetree_from_root(startpath)
core.MY_PRINT_FUNC("ALL EXISTING FILES:", len(relative_all_exist_files))
# now fill "neighbor_pmx" by finding files without path separator that end in PMX
# these are relative paths tho
neighbor_pmx = [f for f in relative_all_exist_files if
(f.lower().endswith(".pmx")) and
(os.path.sep not in f) and
f != input_filename_pmx_rel]
core.MY_PRINT_FUNC("NEIGHBOR PMX FILES:", len(neighbor_pmx))
# filter down to just image files
relevant_exist_files = [f for f in relative_all_exist_files if f.lower().endswith(IMG_EXT)]
core.MY_PRINT_FUNC("RELEVANT EXISTING FILES:", len(relevant_exist_files))
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# now ask if I care about the neighbors and read the PMXes into memory
pmx_filenames = [input_filename_pmx_rel]
if neighbor_pmx:
core.MY_PRINT_FUNC("")
info = [
"Detected %d top-level neighboring PMX files, these probably share the same filebase as the target." % len(neighbor_pmx),
"If files are moved/renamed but the neighbors are not processed, the neighbor texture references will probably break.",
"Do you want to process all neighbors in addition to the target? (highly recommended)",
"1 = Yes, 2 = No"]
r = core.MY_SIMPLECHOICE_FUNC((1, 2), info)
if r == 1:
core.MY_PRINT_FUNC("Processing target + all neighbor files")
# append neighbor PMX files onto the list of files to be processed
pmx_filenames += neighbor_pmx
else:
core.MY_PRINT_FUNC("WARNING: Processing only target, ignoring %d neighbor PMX files" % len(neighbor_pmx))
# now read all the PMX objects & store in dict alongside the relative name
# dictionary where keys are filename and values are resulting pmx objects
all_pmx_obj = {}
for this_pmx_name in pmx_filenames:
this_pmx_obj = pmxlib.read_pmx(os.path.join(startpath, this_pmx_name), moreinfo=moreinfo)
all_pmx_obj[this_pmx_name] = this_pmx_obj
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# for each pmx, for each file on disk, match against files used in textures (case-insensitive) and replace with canonical name-on-disk
# also fill out how much and how each file is used, and unify dupes between files, all that good stuff
filerecord_list = file_sort_textures.build_filerecord_list(all_pmx_obj, relevant_exist_files, moreinfo)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# DETERMINE NEW NAMES FOR FILES
# first, create a backup of the folder
# save the name, so that i can delete it if i didn't make any changes
zipfile_name = ""
if MAKE_BACKUP_ZIPFILE:
r = file_sort_textures.make_zipfile_backup(startpath, BACKUP_SUFFIX)
if not r:
# this happens if the backup failed somehow AND the user decided to quit
core.MY_PRINT_FUNC("Aborting: no files were changed")
return None
zipfile_name = r
# name used for temporary location
tempfilename = os.path.join(startpath,"temp_image_file_just_delete_me.png")
pil_cannot_inspect = 0
pil_cannot_inspect_list = []
pil_imgext_mismatch = 0
num_recompressed = 0
# list of memory saved by recompressing each file. same order/length as "image_filerecords"
mem_saved = []
# make image persistient, so I know it always exists and I can always call "close" before open
im = None
# only iterate over images that exist, obviously
image_filerecords = [f for f in filerecord_list if f.exists]
# iterate over the images
for i, p in enumerate(image_filerecords):
abspath = os.path.join(startpath, p.name)
orig_size = os.path.getsize(abspath)
# if not moreinfo, then each line overwrites the previous like a progress printout does
# if moreinfo, then each line is printed permanently
core.MY_PRINT_FUNC("...analyzing {:>3}/{:>3}, file='{}', size={} ".format(
i+1, len(image_filerecords), p.name, core.prettyprint_file_size(orig_size)), is_progress=(not moreinfo))
mem_saved.append(0)
# before opening, try to close it just in case
if im is not None:
im.close()
# open the image & catch all possible errors
try:
im = Image.open(abspath)
except FileNotFoundError as eeee:
core.MY_PRINT_FUNC("FILESYSTEM MALFUNCTION!!", eeee.__class__.__name__, eeee)
core.MY_PRINT_FUNC("os.walk created a list of all filenames on disk, but then this filename doesn't exist when i try to open it?")
im = None
except OSError as eeee:
# this has 2 causes, "Unsupported BMP bitfields layout" or "cannot identify image file"
if DEBUG:
print("CANNOT INSPECT!1", eeee.__class__.__name__, eeee, p.name)
im = None
except NotImplementedError as eeee:
# this is because there's some DDS format it can't make sense of
if DEBUG:
print("CANNOT INSPECT!2", eeee.__class__.__name__, eeee, p.name)
im = None
if im is None:
pil_cannot_inspect += 1
pil_cannot_inspect_list.append(p.name)
continue
if im.format not in IMG_TYPE_TO_EXT:
core.MY_PRINT_FUNC("WARNING: file '%s' has unusual image format '%s', attempting to continue" % (p.name, im.format))
# now the image is successfully opened!
newname = p.name
base, currext = os.path.splitext(newname)
# 1, depending on image format, attempt to re-save as PNG
if im.format not in IM_FORMAT_ALWAYS_SKIP:
# delete temp file if it still exists
if os.path.exists(tempfilename):
try:
os.remove(tempfilename)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR1: failed to delete temp image file '%s' during processing" % tempfilename)
break
# save to tempfilename with png format, use optimize=true
try:
im.save(tempfilename, format="PNG", optimize=True)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR2: failed to re-compress image '%s', original not modified" % p.name)
continue
# measure & compare file size
new_size = os.path.getsize(tempfilename)
diff = orig_size - new_size
# if using a 16-bit BMP format, re-save back to bmp with same name
is_bad_bmp = False
if im.format == "BMP":
try:
# this might fail, images are weird, sometimes they don't have the attributes i expect
if im.tile[0][3][0] in KNOWN_BAD_FORMATS:
is_bad_bmp = True
except Exception as e:
if DEBUG:
print(e.__class__.__name__, e, "BMP THING", p.name, im.tile)
if diff > (REQUIRED_COMPRESSION_AMOUNT_KB * 1024) \
or is_bad_bmp\
or im.format in IM_FORMAT_ALWAYS_CONVERT:
# if it frees up at least XXX kb, i will keep it!
# also keep it if it is a bmp encoded with 15-bit or 16-bit colors
# set p.newname = png, and delete original and move tempname to base.png
try:
# delete original
os.remove(os.path.join(startpath, p.name))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR3: failed to delete old image '%s' after recompressing" % p.name)
continue
newname = base + ".png"
# resolve potential collisions by adding numbers suffix to file names
# first need to make path absolute so get_unused_file_name can check the disk.
newname = os.path.join(startpath, newname)
# then check uniqueness against files on disk
newname = core.get_unused_file_name(newname)
# now dest path is guaranteed unique against other existing files
# make the path no longer absolute: undo adding "startpath" above
newname = os.path.relpath(newname, startpath)
try:
# move new into place
os.rename(tempfilename, os.path.join(startpath, newname))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR4: after deleting original '%s', failed to move recompressed version into place!" % p.name)
continue
num_recompressed += 1
p.newname = newname
mem_saved[-1] = diff
continue # if succesfully re-saved, do not do the extension-checking below
# if this is not sufficiently compressed, do not use "continue", DO hit the extension-checking below
# 2, if the file extension doesn't match with the image type, then make it match
# this only happens if the image was not re-saved above
if im.format in IMG_TYPE_TO_EXT and currext not in IMG_TYPE_TO_EXT[im.format]:
newname = base + IMG_TYPE_TO_EXT[im.format][0]
# resolve potential collisions by adding numbers suffix to file names
# first need to make path absolute so get_unused_file_name can check the disk.
newname = os.path.join(startpath, newname)
# then check uniqueness against files on disk
newname = core.get_unused_file_name(newname)
# now dest path is guaranteed unique against other existing files
# make the path no longer absolute: undo adding "startpath" above
newname = os.path.relpath(newname, startpath)
# do the actual rename here & now
try:
# os.renames creates all necessary intermediate folders needed for the destination
# it also deletes the source folders if they become empty after the rename operation
os.renames(os.path.join(startpath, p.name), os.path.join(startpath, newname))
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("ERROR5: unable to rename file '%s' --> '%s', attempting to continue with other file rename operations"
% (p.name, newname))
continue
pil_imgext_mismatch += 1
p.newname = newname
continue
# these must be the same length after iterating
assert len(mem_saved) == len(image_filerecords)
# if the image is still open, close it
if im is not None:
im.close()
# delete temp file if it still exists
if os.path.exists(tempfilename):
try:
os.remove(tempfilename)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("WARNING: failed to delete temp image file '%s' after processing" % tempfilename)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# are there any with proposed renaming?
if not any(u.newname is not None for u in image_filerecords):
core.MY_PRINT_FUNC("No proposed file changes")
# if nothing was changed, delete the backup zip!
core.MY_PRINT_FUNC("Deleting backup archive")
if os.path.exists(zipfile_name):
try:
os.remove(zipfile_name)
except OSError as e:
core.MY_PRINT_FUNC(e.__class__.__name__, e)
core.MY_PRINT_FUNC("WARNING: failed to delete pointless zip file '%s'" % zipfile_name)
core.MY_PRINT_FUNC("Aborting: no files were changed")
return None
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# finally, do the actual renaming:
# do all renaming in PMXes
file_sort_textures.apply_file_renaming(all_pmx_obj, image_filerecords, startpath, skipdiskrename=True)
# write out
for this_pmx_name, this_pmx_obj in all_pmx_obj.items():
# NOTE: this is OVERWRITING THE PREVIOUS PMX FILE, NOT CREATING A NEW ONE
# because I make a zipfile backup I don't need to feel worried about preserving the old version
output_filename_pmx = os.path.join(startpath, this_pmx_name)
# output_filename_pmx = core.get_unused_file_name(output_filename_pmx)
pmxlib.write_pmx(output_filename_pmx, this_pmx_obj, moreinfo=moreinfo)
# =========================================================================================================
# =========================================================================================================
# =========================================================================================================
# NOW PRINT MY RENAMINGS and other findings
filerecord_with_savings = zip(image_filerecords, mem_saved)
changed_files = [u for u in filerecord_with_savings if u[0].newname is not None]
core.MY_PRINT_FUNC("="*60)
if pil_cannot_inspect:
core.MY_PRINT_FUNC("WARNING: failed to inspect %d image files, these must be handled manually" % pil_cannot_inspect)
core.MY_PRINT_FUNC(pil_cannot_inspect_list)
if num_recompressed:
core.MY_PRINT_FUNC("Recompressed %d images! %s of disk space has been freed" % (num_recompressed, core.prettyprint_file_size(sum(mem_saved))))
if pil_imgext_mismatch:
core.MY_PRINT_FUNC("Renamed %d images that had incorrect extensions (included below)" % pil_imgext_mismatch)
oldname_list = [p[0].name for p in changed_files]
oldname_list_j = core.MY_JUSTIFY_STRINGLIST(oldname_list)
newname_list = [p[0].newname for p in changed_files]
newname_list_j = core.MY_JUSTIFY_STRINGLIST(newname_list)
savings_list = [("" if p[1]==0 else "saved " + core.prettyprint_file_size(p[1])) for p in changed_files]
zipped = list(zip(oldname_list_j, newname_list_j, savings_list))
zipped_and_sorted = sorted(zipped, key=lambda y: file_sort_textures.sortbydirdepth(y[0]))
for o,n,s in zipped_and_sorted:
# print 'from' with the case/separator it uses in the PMX
core.MY_PRINT_FUNC(" {:s} --> {:s} | {:s}".format(o, n, s))
core.MY_PRINT_FUNC("Done!")
return None
if __name__ == '__main__':
print(_SCRIPT_VERSION)
if DEBUG:
# print info to explain the purpose of this file
core.MY_PRINT_FUNC(helptext)
core.MY_PRINT_FUNC("")
main()
core.pause_and_quit("Done with everything! Goodbye!")
else:
try:
# print info to explain the purpose of this file
core.MY_PRINT_FUNC(helptext)
core.MY_PRINT_FUNC("")
main()
core.pause_and_quit("Done with everything! Goodbye!")
except (KeyboardInterrupt, SystemExit):
# this is normal and expected, do nothing and die normally
pass
except Exception as ee:
# if an unexpected error occurs, catch it and print it and call pause_and_quit so the window stays open for a bit
core.MY_PRINT_FUNC(ee)
core.pause_and_quit("ERROR: something truly strange and unexpected has occurred, sorry, good luck figuring out what tho") | en | 0.803808 | # This code is free to use and re-distribute, but I cannot be held responsible for damages that it may or may not cause. ##################### # first, system imports # NOTE: i comment this block before compiling the EXE cuz the Pillow library is gigantic & makes the exe version like 200K # second, wrap custom imports with a try-except to catch it if files are missing # these imports work if running from GUI # these imports work if running from double-click on THIS script # when debug=True, disable the catchall try-except block. this means the full stack trace gets printed when it crashes, # but if launched in a new window it exits immediately so you can't read it. # this is recommended true, for obvious reasons # note: zipper automatically appends .zip onto whatever output name i give it, so dont give it a .zip suffix here # these are rare BMP formats that are known to be incompatible with MocuMocuDance # if recompression saves less than XXX KB, then don't save the result # how PIL reads things: # PNG, JPEG, BMP, DDS, TIFF, GIF ================================================= file_recompress_images: This tool will try to re-compress all image files in the file tree. Generally this means converting BMP/TGA/other images to PNG format, for maximum lossless image compression. JPEG image compression is more aggressive than PNG, so JPEG images will stay as JPEG. GIFs are weird so they are also not modified. This requires a PMX file to use as a root so it knows where to start reading files from. It creates a zipfile backup of the entire folder, just in case. This script does NOT ask for permission beforehand, it just creates a backup and does its thing, then afterwards it reports what it did. Bonus: this can process all "neighbor" pmx files in addition to the target, this highly recommended because neighbors usually reference similar sets of files. Note: this script requires the Python library 'Pillow' to be installed. Note: unlike my other scripts, this overwrites the original input PMX file(s) instead of creating a new file with a suffix. This is because I already create a zipfile that contains the original input PMX, so that serves as a good backup. # dds/tga/tiff will always be converted to png # jpeg/gif will always be skipped (jpeg is already lossy & therefore compresses better than png, gif is animated & complex) # bmp will be re-compressed to png if the original bmp is in 15-bit or 16-bit encoding (mocumocudance compatability) # other image types are re-compressed to png if doing so saves 100kb or more # also, all images are renamed so that the file extension matches the actual image data format # step zero: verify that Pillow exists # print pillow version just cuz # absolute path to directory holding the pmx # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # first, build the list of ALL files that actually exist, then filter it down to neighbor PMXs and relevant files # now fill "neighbor_pmx" by finding files without path separator that end in PMX # these are relative paths tho # filter down to just image files # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # now ask if I care about the neighbors and read the PMXes into memory # append neighbor PMX files onto the list of files to be processed # now read all the PMX objects & store in dict alongside the relative name # dictionary where keys are filename and values are resulting pmx objects # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # for each pmx, for each file on disk, match against files used in textures (case-insensitive) and replace with canonical name-on-disk # also fill out how much and how each file is used, and unify dupes between files, all that good stuff # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # DETERMINE NEW NAMES FOR FILES # first, create a backup of the folder # save the name, so that i can delete it if i didn't make any changes # this happens if the backup failed somehow AND the user decided to quit # name used for temporary location # list of memory saved by recompressing each file. same order/length as "image_filerecords" # make image persistient, so I know it always exists and I can always call "close" before open # only iterate over images that exist, obviously # iterate over the images # if not moreinfo, then each line overwrites the previous like a progress printout does # if moreinfo, then each line is printed permanently # before opening, try to close it just in case # open the image & catch all possible errors # this has 2 causes, "Unsupported BMP bitfields layout" or "cannot identify image file" # this is because there's some DDS format it can't make sense of # now the image is successfully opened! # 1, depending on image format, attempt to re-save as PNG # delete temp file if it still exists # save to tempfilename with png format, use optimize=true # measure & compare file size # if using a 16-bit BMP format, re-save back to bmp with same name # this might fail, images are weird, sometimes they don't have the attributes i expect # if it frees up at least XXX kb, i will keep it! # also keep it if it is a bmp encoded with 15-bit or 16-bit colors # set p.newname = png, and delete original and move tempname to base.png # delete original # resolve potential collisions by adding numbers suffix to file names # first need to make path absolute so get_unused_file_name can check the disk. # then check uniqueness against files on disk # now dest path is guaranteed unique against other existing files # make the path no longer absolute: undo adding "startpath" above # move new into place # if succesfully re-saved, do not do the extension-checking below # if this is not sufficiently compressed, do not use "continue", DO hit the extension-checking below # 2, if the file extension doesn't match with the image type, then make it match # this only happens if the image was not re-saved above # resolve potential collisions by adding numbers suffix to file names # first need to make path absolute so get_unused_file_name can check the disk. # then check uniqueness against files on disk # now dest path is guaranteed unique against other existing files # make the path no longer absolute: undo adding "startpath" above # do the actual rename here & now # os.renames creates all necessary intermediate folders needed for the destination # it also deletes the source folders if they become empty after the rename operation # these must be the same length after iterating # if the image is still open, close it # delete temp file if it still exists # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # are there any with proposed renaming? # if nothing was changed, delete the backup zip! # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # finally, do the actual renaming: # do all renaming in PMXes # write out # NOTE: this is OVERWRITING THE PREVIOUS PMX FILE, NOT CREATING A NEW ONE # because I make a zipfile backup I don't need to feel worried about preserving the old version # output_filename_pmx = core.get_unused_file_name(output_filename_pmx) # ========================================================================================================= # ========================================================================================================= # ========================================================================================================= # NOW PRINT MY RENAMINGS and other findings # print 'from' with the case/separator it uses in the PMX # print info to explain the purpose of this file # print info to explain the purpose of this file # this is normal and expected, do nothing and die normally # if an unexpected error occurs, catch it and print it and call pause_and_quit so the window stays open for a bit | 2.180727 | 2 |
test.py | mirestrepo/voxels-at-lems | 2 | 6615273 | #import Table
#fout = open('mytable.tex','w')
#t = Table.Table(3, justs='lrc', caption='Awesome results', label="tab:label")
#t.add_header_row(['obj', 'X', '$\\beta$'])
#col1 = [0.001,0.556,10.56] # just numbers
#col2 = [0.001,0.556,10.56] # just numbers
#col3 = [0.001,0.556,10.56] # just numbers
#t.add_data([col1,col2,col3], 2)
#t.print_table(fout)
#fout.close()
#
#import numpy
#a=numpy.zeros((2,2))
#print " \\\\\n".join([" & ".join(map(str,line)) for line in a])
#
#print "DOne",
#"test"
import numpy
from mayavi.mlab import *
""" Demo the bar chart plot with a 2D array.
"""
s = numpy.abs(numpy.random.random((3, 3)))
barchart(s, mode='cube')
show()
print s
| #import Table
#fout = open('mytable.tex','w')
#t = Table.Table(3, justs='lrc', caption='Awesome results', label="tab:label")
#t.add_header_row(['obj', 'X', '$\\beta$'])
#col1 = [0.001,0.556,10.56] # just numbers
#col2 = [0.001,0.556,10.56] # just numbers
#col3 = [0.001,0.556,10.56] # just numbers
#t.add_data([col1,col2,col3], 2)
#t.print_table(fout)
#fout.close()
#
#import numpy
#a=numpy.zeros((2,2))
#print " \\\\\n".join([" & ".join(map(str,line)) for line in a])
#
#print "DOne",
#"test"
import numpy
from mayavi.mlab import *
""" Demo the bar chart plot with a 2D array.
"""
s = numpy.abs(numpy.random.random((3, 3)))
barchart(s, mode='cube')
show()
print s
| en | 0.341122 | #import Table #fout = open('mytable.tex','w') #t = Table.Table(3, justs='lrc', caption='Awesome results', label="tab:label") #t.add_header_row(['obj', 'X', '$\\beta$']) #col1 = [0.001,0.556,10.56] # just numbers #col2 = [0.001,0.556,10.56] # just numbers #col3 = [0.001,0.556,10.56] # just numbers #t.add_data([col1,col2,col3], 2) #t.print_table(fout) #fout.close() # #import numpy #a=numpy.zeros((2,2)) #print " \\\\\n".join([" & ".join(map(str,line)) for line in a]) # #print "DOne", #"test" Demo the bar chart plot with a 2D array. | 2.828805 | 3 |
PyNote.py | roshanpoduval1998/PyNote | 0 | 6615274 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import tkinter as tk
from tkinter import ttk
from tkinter import*
from tkinter import filedialog
from tkinter import messagebox
import os
from pynput.mouse import Controller
import keyboard
import tempfile
import win32api
import win32print
from Font import Font_size
location = os.path.dirname(os.path.realpath(__file__))
icon_location = location+"/icon/"
def disable_event():
pass
class Createsearch_online:
def __init__(self,app_name,note):
self.app_name = app_name
app_name.bind("<Shift-Alt-Y>",self.new_window)
self.note = note
def new_window(self,Event):
self.Entry = tk.Entry(self.app_name,width=800,bg="gray10",fg="white",font=("Segoe UI",20),insertbackground="white")
self.Entry.place(x=0,y=0)
self.Entry.focus()
self.Entry.bind("<Return>",self.double_click)
self.Entry.bind("<Leave>",self.double_click)
def double_click(self,Event):
get_text_val = self.Entry.get()
if get_text_val == "":
print("")
else:
os.startfile("https://www.google.com/search?client=firefox-b-d&q={}".format(get_text_val))
self.Entry.destroy()
class Scrollbar:
#use Scrollbar(Textboxname)
def __init__(self,text):
self.frame = text.master
self.text = text
self.text.configure(wrap='none')
self.for_x_view()
self.for_y_view()
def for_x_view(self):
self.scroll_x = ttk.Scrollbar(self.frame, orient='horizontal',command=self.text.xview)
self.scroll_x.config(command=self.text.xview)
self.text.configure(xscrollcommand=self.scroll_x.set)
self.scroll_x.pack(side='bottom', fill='x', anchor='w')
return
def for_y_view(self):
self.scroll_y = tk.Scrollbar(self.frame)
self.scroll_y.config(command=self.text.yview)
self.text.configure(yscrollcommand=self.scroll_y.set)
self.scroll_y.pack(side='right', fill='y')
return
class main_frame:
def __init__(self):
User = os.getenv("username")
self.ops_path = 'C:/Users/{}/'.format(User)
batch_ = open(self.ops_path + 'operation.bat','w')
batch_.write("")
batch_.close()
lap = tk.Tk()
lap.title('untitled')
self.lap = lap
self.menubar = Menu(self.lap)
self.filemenu = Menu(self.menubar, tearoff=0)
self.filemenu.add_command(label="New Ctrl+N", command=self.new)
self.filemenu.add_command(label="Open Ctrl+O", command=self.opentxt)
self.filemenu.add_command(label="Save Ctrl+S" , command=self.save)
self.filemenu.add_command(label="Save as" , command=self.saveas)
self.filemenu.add_command(label="Print Ctrl+P" , command=self.print_)
self.filemenu.add_separator()
self.filemenu.add_command(label="Exit Ctrl+Q", command=self.end__)
self.menubar.add_cascade(label="File", menu=self.filemenu)
self.editmenu = Menu(self.menubar, tearoff=0)
self.editmenu.add_command(label="Cut Ctrl+X", command=self.cut_content)
self.editmenu.add_command(label="Copy Ctrl+C", command=self.copy_content)
self.editmenu.add_command(label="Paste Ctrl+V", command=self.paste_content)
self.editmenu.add_command(label="Delete", command=self.delete_content)
self.editmenu.add_separator()
self.editmenu.add_command(label="Select All CTRL+A", command=self.select_all_content)
self.menubar.add_cascade(label="Edit", menu=self.editmenu)
self.theme = Menu(self.menubar, tearoff=0)
self.theme.add_command(label="Default", command=self.default_theme)
self.theme.add_separator()
self.theme.add_command(label="DeepBlue", command=self.DeepBlue)
self.theme.add_command(label="PowerMad", command=self.PowerMad)
self.theme.add_command(label="NightPole", command=self.nightcrawl)
self.theme.add_command(label="RockMan", command=self.rockman)
self.menubar.add_cascade(label="Theme", menu=self.theme)
note_entry = tk.Text(lap,font=("segoe ui",20),selectbackground="lightblue1")
self.note_entry = note_entry
Scrollbar(self.note_entry)
note_entry.pack(expand=True,fill=BOTH)
self.rightclick_event()
Createsearch_online(self.lap,self.note_entry)
self.note_entry.focus()
self.Format = Menu(self.menubar, tearoff=0)
self.Font_size_val = Font_size(self.menubar,self.note_entry)
self.Format.add_cascade(label="Font size", menu=self.Font_size_val)
self.Format.add_command(label="Search", command=self.search)
self.menubar.add_cascade(label="Format", menu=self.Format)
self.helpmenu = Menu(self.menubar, tearoff=0)
self.helpmenu.add_command(label="Help Ctrl+H", command=self.help_)
self.helpmenu.add_command(label="About Ctrl+I", command=self.page)
self.menubar.add_cascade(label="Help", menu=self.helpmenu)
self.default_theme()
lap.config(menu=self.menubar)
lap.geometry('800x800+250+50')
lap.bind_all('<Key>', self.keypress)
lap.wm_attributes('-alpha',0.9)
lap.iconbitmap(icon_location+"icon.ico")
lap.protocol('WM_DELETE_WINDOW',self.quit_func)
lap.mainloop()
def rightclick_event(self):
self.menu__ = Menu(self.lap, tearoff=0)
self.Font_size_val = Font_size(self.menu__,self.note_entry)
self.menu__.add_cascade(label="Font Size",menu=self.Font_size_val, underline=0)
self.menu__.add_separator()
self.menu__.add_command(label="Cut",command=self.cut_content)
self.menu__.add_command(label="Copy",command=self.copy_content)
self.menu__.add_command(label="Paste",command=self.paste_content)
self.menu__.add_command(label="Select All",command=self.select_all_content)
self.submenu = Menu(self.menu__, tearoff=0)
self.submenu.add_command(label='DeepBlue', command=self.DeepBlue, underline=0)
self.submenu.add_command(label='PowerMad', command=self.PowerMad, underline=0)
self.submenu.add_command(label='Nightcrawl', command=self.nightcrawl, underline=0)
self.submenu.add_command(label='RockMan', command=self.rockman, underline=0)
self.menu__.add_separator()
self.menu__.add_cascade(label="Theme", menu=self.submenu, underline=0)
self.menu__.add_command(label="Search", command=self.search)
self.lap.bind("<Button-3>", self.showMenu)
def showMenu(self, e):
self.menu__.post(e.x_root, e.y_root)
def search(self):
keyboard.press_and_release('shift+alt+y')
def keypress(self,event):
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("S"):
self.save()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("N"):
self.new()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("O"):
self.opentxt()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("Q"):
self.quit_func()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("X"):
self.cut_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("C"):
self.copy_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("V"):
self.paste_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("A"):
self.select_all_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("H"):
self.help_()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("I"):
self.page()
def end__(self):
self.lap.quit()
def quit_func(self):
try:
text_val = self.note_entry.get('1.0',END)
if text_val != "":
message1 = "Save-PyNote"
message2 = "Do you want to save changes"
quit__ = tk.Tk()
quit__.iconbitmap(icon_location+"icon.ico")
quit__.configure(bg="gray20")
quit__.title(message1)
quit__.geometry("+500+450")
label = tk.Label(quit__, text=message2,bg="gray20",fg='white'
,font=("Banschrift",15))
label.pack()
def destroy_button():
self.note_entry.config(state=NORMAL)
quit__.destroy()
try:
self.lap.wm_attributes('-alpha',0.92)
self.lap.protocol("WM_DELETE_WINDOW", self.quit_func)
except:
pass
def destroy_PyNote():
quit__.destroy()
try:
self.lap.destroy()
except:
pass
def save_button():
quit__.destroy()
try:
self.saveas()
self.lap.destroy()
except:
pass
cancel = tk.Button(quit__, text="Cancel",width="15", height="2", bg="gray60",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_button)
cancel.pack(side=RIGHT)
Dont_save = tk.Button(quit__, text="Don't_save",width="15", height="2", bg="gray70",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_PyNote)
Dont_save.pack(side=RIGHT)
save = tk.Button(quit__, text="Save",width="15", height="2", bg="gray80",fg='gray1'
,borderwidth=0,highlightthickness=0,command=save_button)
save.pack(side=RIGHT)
self.lap.wm_attributes('-alpha',0.7)
quit__.wm_attributes('-alpha',0.95)
self.note_entry.config(state=DISABLED)
self.lap.protocol("WM_DELETE_WINDOW", disable_event)
quit__.protocol('WM_DELETE_WINDOW',destroy_button)
quit__.resizable(width=0,height=0)
quit__.mainloop()
except:
text_val = self.note_entry.get('1.0',END)
if text_val != "":
x = tk.messagebox.askyesnocancel("Save", "Do you want to save changes")
if x == NO:
self.lap.destroy()
if x == YES:
self.lap.saveas()
self.lap.destroy()
def opentxt(self):
try:
fileloc_ = filedialog.askopenfilename(initialdir = "/",title = "Select file"
,filetypes = (("Text files","*.txt"),("all files","*.*")))
with open(fileloc_, 'r') as file:
data = file.read()
self.note_entry.delete('1.0',END)
self.note_entry.insert(END,data)
self.lap.title(fileloc_+"/-PyNote")
except:
pass
def new(self):
try:
self.note_entry.delete('1.0',END)
self.lap.title("untitled")
batch_ = open(self.ops_path+'operation.bat','w')
batch_.write("")
batch_.close()
except:
pass
def save(self):
try:
try:
batch_ = open(self.ops_path+'operation.bat','r')
except:
raise KeyError
try:
text = batch_.read()
if text=="":
raise KeyError
text_val = self.note_entry.get('1.0',END)
save = open(text,'w')
save.write('{}'.format(text_val))
except:
raise KeyError
except KeyError:
save = filedialog.asksaveasfile(mode='w',defaultextension = '.txt')
text_val = self.note_entry.get('1.0',END)
word = open('{}'.format(save.name),'w')
word.write("{}".format(text_val))
word.close()
self.lap.title(save.name)
batch = open(self.ops_path+'operation.bat','w')
batch.write(save.name)
batch.close()
def saveas(self):
try:
save = filedialog.asksaveasfile(mode='w'
,defaultextension = '.txt')
text_val = self.note_entry.get('1.0',END)
word = open('{}'.format(save.name),'w')
word.write("{}".format(text_val))
word.close()
self.lap.title(save.name)
except:
pass
def print_(self):
text_val = self.note_entry.get('1.0',END)
filename = tempfile.mktemp(".txt")
open (filename, "w").write(text_val)
win32api.ShellExecute(
0,
"print",
filename,
'/d:"%s"' % win32print.GetDefaultPrinter(),
".",
0
)
def cut_content(self):
keyboard.press_and_release('ctrl+x')
def copy_content(self):
keyboard.press_and_release('ctrl+c')
def paste_content(self):
keyboard.press_and_release('ctrl+v')
def delete_content(self):
keyboard.press_and_release('del')
def select_all_content(self):
self.note_entry.tag_add(SEL,'1.0',END)
def DeepBlue(self):
self.note_entry.config(bg='lightblue2', fg="gray10"
, font=("segoe ui",20),selectbackground="lightblue4",insertbackground="black")
self.filemenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.editmenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.theme.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.helpmenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.menu__.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.submenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.Format.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
def PowerMad(self):
self.note_entry.config(bg='red', fg="white"
, font=("segoe ui",20),selectbackground="coral3",insertbackground="white")
self.filemenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.editmenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.theme.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.helpmenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.menu__.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.submenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.Format.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
def nightcrawl(self):
self.note_entry.config(bg='black', fg="white"
, font=("segoe ui",20),selectbackground="white",insertbackground="white")
self.filemenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.editmenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.theme.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.helpmenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.menu__.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.Font_size_val.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.submenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.Format.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
def rockman(self):
self.note_entry.config(bg='azure', fg="red"
, font=("segoe ui",20),selectbackground="coral",insertbackground="red")
self.filemenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.editmenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.theme.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.helpmenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.menu__.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.submenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.Format.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
def default_theme(self):
self.note_entry.config(bg='white', fg="black"
, font=("segoe ui",20),selectbackground="lightblue1",insertbackground="black")
self.filemenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.editmenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.theme.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.helpmenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.menu__.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.Font_size_val.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.submenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.Format.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
def help_(self):
os.startfile("https://www.github.com/roshanpoduval1998/PyNote")
def page(self):
try:
message1 = "About"
message2 = "PyNote is written in python language\nDeveloped by <NAME>\nversion: 1.0.1"
about_ = tk.Tk()
about_.iconbitmap(icon_location+"icon.ico")
about_.configure(bg="gray20")
about_.title(message1)
about_.geometry("+500+450")
label = tk.Label(about_, text=message2,bg="gray20",fg='white'
,font=("Banschrift",13))
label.pack()
def destroy_button():
self.note_entry.config(state=NORMAL)
about_.destroy()
try:
self.lap.wm_attributes('-alpha',0.92)
self.lap.protocol("WM_DELETE_WINDOW", self.quit_func)
except:
pass
button = tk.Button(about_, text="Close",width="15", bg="gray80",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_button)
button.pack()
self.lap.wm_attributes('-alpha',0.7)
about_.wm_attributes('-alpha',0.95)
self.note_entry.config(state=DISABLED)
self.lap.protocol("WM_DELETE_WINDOW", disable_event)
about_.protocol('WM_DELETE_WINDOW',destroy_button)
about_.resizable(width=0,height=0)
about_.mainloop()
except:
messagebox.showinfo("PyNote--","PyNote is written in python language\nDeveloped by <NAME>\nversion: 1.0.1")
if __name__ == "__main__":
main_frame()
| # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import tkinter as tk
from tkinter import ttk
from tkinter import*
from tkinter import filedialog
from tkinter import messagebox
import os
from pynput.mouse import Controller
import keyboard
import tempfile
import win32api
import win32print
from Font import Font_size
location = os.path.dirname(os.path.realpath(__file__))
icon_location = location+"/icon/"
def disable_event():
pass
class Createsearch_online:
def __init__(self,app_name,note):
self.app_name = app_name
app_name.bind("<Shift-Alt-Y>",self.new_window)
self.note = note
def new_window(self,Event):
self.Entry = tk.Entry(self.app_name,width=800,bg="gray10",fg="white",font=("Segoe UI",20),insertbackground="white")
self.Entry.place(x=0,y=0)
self.Entry.focus()
self.Entry.bind("<Return>",self.double_click)
self.Entry.bind("<Leave>",self.double_click)
def double_click(self,Event):
get_text_val = self.Entry.get()
if get_text_val == "":
print("")
else:
os.startfile("https://www.google.com/search?client=firefox-b-d&q={}".format(get_text_val))
self.Entry.destroy()
class Scrollbar:
#use Scrollbar(Textboxname)
def __init__(self,text):
self.frame = text.master
self.text = text
self.text.configure(wrap='none')
self.for_x_view()
self.for_y_view()
def for_x_view(self):
self.scroll_x = ttk.Scrollbar(self.frame, orient='horizontal',command=self.text.xview)
self.scroll_x.config(command=self.text.xview)
self.text.configure(xscrollcommand=self.scroll_x.set)
self.scroll_x.pack(side='bottom', fill='x', anchor='w')
return
def for_y_view(self):
self.scroll_y = tk.Scrollbar(self.frame)
self.scroll_y.config(command=self.text.yview)
self.text.configure(yscrollcommand=self.scroll_y.set)
self.scroll_y.pack(side='right', fill='y')
return
class main_frame:
def __init__(self):
User = os.getenv("username")
self.ops_path = 'C:/Users/{}/'.format(User)
batch_ = open(self.ops_path + 'operation.bat','w')
batch_.write("")
batch_.close()
lap = tk.Tk()
lap.title('untitled')
self.lap = lap
self.menubar = Menu(self.lap)
self.filemenu = Menu(self.menubar, tearoff=0)
self.filemenu.add_command(label="New Ctrl+N", command=self.new)
self.filemenu.add_command(label="Open Ctrl+O", command=self.opentxt)
self.filemenu.add_command(label="Save Ctrl+S" , command=self.save)
self.filemenu.add_command(label="Save as" , command=self.saveas)
self.filemenu.add_command(label="Print Ctrl+P" , command=self.print_)
self.filemenu.add_separator()
self.filemenu.add_command(label="Exit Ctrl+Q", command=self.end__)
self.menubar.add_cascade(label="File", menu=self.filemenu)
self.editmenu = Menu(self.menubar, tearoff=0)
self.editmenu.add_command(label="Cut Ctrl+X", command=self.cut_content)
self.editmenu.add_command(label="Copy Ctrl+C", command=self.copy_content)
self.editmenu.add_command(label="Paste Ctrl+V", command=self.paste_content)
self.editmenu.add_command(label="Delete", command=self.delete_content)
self.editmenu.add_separator()
self.editmenu.add_command(label="Select All CTRL+A", command=self.select_all_content)
self.menubar.add_cascade(label="Edit", menu=self.editmenu)
self.theme = Menu(self.menubar, tearoff=0)
self.theme.add_command(label="Default", command=self.default_theme)
self.theme.add_separator()
self.theme.add_command(label="DeepBlue", command=self.DeepBlue)
self.theme.add_command(label="PowerMad", command=self.PowerMad)
self.theme.add_command(label="NightPole", command=self.nightcrawl)
self.theme.add_command(label="RockMan", command=self.rockman)
self.menubar.add_cascade(label="Theme", menu=self.theme)
note_entry = tk.Text(lap,font=("segoe ui",20),selectbackground="lightblue1")
self.note_entry = note_entry
Scrollbar(self.note_entry)
note_entry.pack(expand=True,fill=BOTH)
self.rightclick_event()
Createsearch_online(self.lap,self.note_entry)
self.note_entry.focus()
self.Format = Menu(self.menubar, tearoff=0)
self.Font_size_val = Font_size(self.menubar,self.note_entry)
self.Format.add_cascade(label="Font size", menu=self.Font_size_val)
self.Format.add_command(label="Search", command=self.search)
self.menubar.add_cascade(label="Format", menu=self.Format)
self.helpmenu = Menu(self.menubar, tearoff=0)
self.helpmenu.add_command(label="Help Ctrl+H", command=self.help_)
self.helpmenu.add_command(label="About Ctrl+I", command=self.page)
self.menubar.add_cascade(label="Help", menu=self.helpmenu)
self.default_theme()
lap.config(menu=self.menubar)
lap.geometry('800x800+250+50')
lap.bind_all('<Key>', self.keypress)
lap.wm_attributes('-alpha',0.9)
lap.iconbitmap(icon_location+"icon.ico")
lap.protocol('WM_DELETE_WINDOW',self.quit_func)
lap.mainloop()
def rightclick_event(self):
self.menu__ = Menu(self.lap, tearoff=0)
self.Font_size_val = Font_size(self.menu__,self.note_entry)
self.menu__.add_cascade(label="Font Size",menu=self.Font_size_val, underline=0)
self.menu__.add_separator()
self.menu__.add_command(label="Cut",command=self.cut_content)
self.menu__.add_command(label="Copy",command=self.copy_content)
self.menu__.add_command(label="Paste",command=self.paste_content)
self.menu__.add_command(label="Select All",command=self.select_all_content)
self.submenu = Menu(self.menu__, tearoff=0)
self.submenu.add_command(label='DeepBlue', command=self.DeepBlue, underline=0)
self.submenu.add_command(label='PowerMad', command=self.PowerMad, underline=0)
self.submenu.add_command(label='Nightcrawl', command=self.nightcrawl, underline=0)
self.submenu.add_command(label='RockMan', command=self.rockman, underline=0)
self.menu__.add_separator()
self.menu__.add_cascade(label="Theme", menu=self.submenu, underline=0)
self.menu__.add_command(label="Search", command=self.search)
self.lap.bind("<Button-3>", self.showMenu)
def showMenu(self, e):
self.menu__.post(e.x_root, e.y_root)
def search(self):
keyboard.press_and_release('shift+alt+y')
def keypress(self,event):
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("S"):
self.save()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("N"):
self.new()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("O"):
self.opentxt()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("Q"):
self.quit_func()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("X"):
self.cut_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("C"):
self.copy_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("V"):
self.paste_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("A"):
self.select_all_content()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("H"):
self.help_()
if keyboard.is_pressed("CTRL") and keyboard.is_pressed("I"):
self.page()
def end__(self):
self.lap.quit()
def quit_func(self):
try:
text_val = self.note_entry.get('1.0',END)
if text_val != "":
message1 = "Save-PyNote"
message2 = "Do you want to save changes"
quit__ = tk.Tk()
quit__.iconbitmap(icon_location+"icon.ico")
quit__.configure(bg="gray20")
quit__.title(message1)
quit__.geometry("+500+450")
label = tk.Label(quit__, text=message2,bg="gray20",fg='white'
,font=("Banschrift",15))
label.pack()
def destroy_button():
self.note_entry.config(state=NORMAL)
quit__.destroy()
try:
self.lap.wm_attributes('-alpha',0.92)
self.lap.protocol("WM_DELETE_WINDOW", self.quit_func)
except:
pass
def destroy_PyNote():
quit__.destroy()
try:
self.lap.destroy()
except:
pass
def save_button():
quit__.destroy()
try:
self.saveas()
self.lap.destroy()
except:
pass
cancel = tk.Button(quit__, text="Cancel",width="15", height="2", bg="gray60",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_button)
cancel.pack(side=RIGHT)
Dont_save = tk.Button(quit__, text="Don't_save",width="15", height="2", bg="gray70",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_PyNote)
Dont_save.pack(side=RIGHT)
save = tk.Button(quit__, text="Save",width="15", height="2", bg="gray80",fg='gray1'
,borderwidth=0,highlightthickness=0,command=save_button)
save.pack(side=RIGHT)
self.lap.wm_attributes('-alpha',0.7)
quit__.wm_attributes('-alpha',0.95)
self.note_entry.config(state=DISABLED)
self.lap.protocol("WM_DELETE_WINDOW", disable_event)
quit__.protocol('WM_DELETE_WINDOW',destroy_button)
quit__.resizable(width=0,height=0)
quit__.mainloop()
except:
text_val = self.note_entry.get('1.0',END)
if text_val != "":
x = tk.messagebox.askyesnocancel("Save", "Do you want to save changes")
if x == NO:
self.lap.destroy()
if x == YES:
self.lap.saveas()
self.lap.destroy()
def opentxt(self):
try:
fileloc_ = filedialog.askopenfilename(initialdir = "/",title = "Select file"
,filetypes = (("Text files","*.txt"),("all files","*.*")))
with open(fileloc_, 'r') as file:
data = file.read()
self.note_entry.delete('1.0',END)
self.note_entry.insert(END,data)
self.lap.title(fileloc_+"/-PyNote")
except:
pass
def new(self):
try:
self.note_entry.delete('1.0',END)
self.lap.title("untitled")
batch_ = open(self.ops_path+'operation.bat','w')
batch_.write("")
batch_.close()
except:
pass
def save(self):
try:
try:
batch_ = open(self.ops_path+'operation.bat','r')
except:
raise KeyError
try:
text = batch_.read()
if text=="":
raise KeyError
text_val = self.note_entry.get('1.0',END)
save = open(text,'w')
save.write('{}'.format(text_val))
except:
raise KeyError
except KeyError:
save = filedialog.asksaveasfile(mode='w',defaultextension = '.txt')
text_val = self.note_entry.get('1.0',END)
word = open('{}'.format(save.name),'w')
word.write("{}".format(text_val))
word.close()
self.lap.title(save.name)
batch = open(self.ops_path+'operation.bat','w')
batch.write(save.name)
batch.close()
def saveas(self):
try:
save = filedialog.asksaveasfile(mode='w'
,defaultextension = '.txt')
text_val = self.note_entry.get('1.0',END)
word = open('{}'.format(save.name),'w')
word.write("{}".format(text_val))
word.close()
self.lap.title(save.name)
except:
pass
def print_(self):
text_val = self.note_entry.get('1.0',END)
filename = tempfile.mktemp(".txt")
open (filename, "w").write(text_val)
win32api.ShellExecute(
0,
"print",
filename,
'/d:"%s"' % win32print.GetDefaultPrinter(),
".",
0
)
def cut_content(self):
keyboard.press_and_release('ctrl+x')
def copy_content(self):
keyboard.press_and_release('ctrl+c')
def paste_content(self):
keyboard.press_and_release('ctrl+v')
def delete_content(self):
keyboard.press_and_release('del')
def select_all_content(self):
self.note_entry.tag_add(SEL,'1.0',END)
def DeepBlue(self):
self.note_entry.config(bg='lightblue2', fg="gray10"
, font=("segoe ui",20),selectbackground="lightblue4",insertbackground="black")
self.filemenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.editmenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.theme.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.helpmenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.menu__.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.submenu.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
self.Format.config(tearoff=0,background='powder blue',foreground='black',activebackground='lightSteelBlue', activeforeground='white')
def PowerMad(self):
self.note_entry.config(bg='red', fg="white"
, font=("segoe ui",20),selectbackground="coral3",insertbackground="white")
self.filemenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.editmenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.theme.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.helpmenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.menu__.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.submenu.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
self.Format.config(tearoff=0,background='coral1',foreground='black',activebackground='gray80', activeforeground='white')
def nightcrawl(self):
self.note_entry.config(bg='black', fg="white"
, font=("segoe ui",20),selectbackground="white",insertbackground="white")
self.filemenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.editmenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.theme.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.helpmenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.menu__.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.Font_size_val.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.submenu.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
self.Format.config(tearoff=0,background='gray',foreground='white',activebackground='gray30', activeforeground="light cyan")
def rockman(self):
self.note_entry.config(bg='azure', fg="red"
, font=("segoe ui",20),selectbackground="coral",insertbackground="red")
self.filemenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.editmenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.theme.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.helpmenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.menu__.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.Font_size_val.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.submenu.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
self.Format.config(tearoff=0,background='white',foreground='red',activebackground='gray80', activeforeground='white')
def default_theme(self):
self.note_entry.config(bg='white', fg="black"
, font=("segoe ui",20),selectbackground="lightblue1",insertbackground="black")
self.filemenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.editmenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.theme.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.helpmenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.menu__.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.Font_size_val.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.submenu.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
self.Format.config(tearoff=0,background='white',foreground='black',activebackground='lightblue1', activeforeground='black')
def help_(self):
os.startfile("https://www.github.com/roshanpoduval1998/PyNote")
def page(self):
try:
message1 = "About"
message2 = "PyNote is written in python language\nDeveloped by <NAME>\nversion: 1.0.1"
about_ = tk.Tk()
about_.iconbitmap(icon_location+"icon.ico")
about_.configure(bg="gray20")
about_.title(message1)
about_.geometry("+500+450")
label = tk.Label(about_, text=message2,bg="gray20",fg='white'
,font=("Banschrift",13))
label.pack()
def destroy_button():
self.note_entry.config(state=NORMAL)
about_.destroy()
try:
self.lap.wm_attributes('-alpha',0.92)
self.lap.protocol("WM_DELETE_WINDOW", self.quit_func)
except:
pass
button = tk.Button(about_, text="Close",width="15", bg="gray80",fg='gray1'
,borderwidth=0,highlightthickness=0,command=destroy_button)
button.pack()
self.lap.wm_attributes('-alpha',0.7)
about_.wm_attributes('-alpha',0.95)
self.note_entry.config(state=DISABLED)
self.lap.protocol("WM_DELETE_WINDOW", disable_event)
about_.protocol('WM_DELETE_WINDOW',destroy_button)
about_.resizable(width=0,height=0)
about_.mainloop()
except:
messagebox.showinfo("PyNote--","PyNote is written in python language\nDeveloped by <NAME>\nversion: 1.0.1")
if __name__ == "__main__":
main_frame()
| en | 0.438278 | # -*- coding: utf-8 -*- #use Scrollbar(Textboxname) | 3.079755 | 3 |
startup/users/30-user-Dinca2021C2B.py | NSLS-II-SMI/profile_collection | 0 | 6615275 | ##Collect data:
'''
#SMI: 2021/2/9
SAF: 307138 Standard Beamline 12-ID proposal: 308023 (CFN, 307961)
create proposal: proposal_id( '2021_1', '307138_Dinca' ) #create the proposal id and folder
%run -i /home/xf12id/.ipython/profile_collection/startup/users/30-user-Dinca2021C2B.py
RE( shopen() ) # to open the beam and feedback
RE( shclose())
# Energy: 16.1 keV, 0.77009 A
# SAXS distance 8300
# The beam center on SAXS: [ 490, 565 ] Or [ 491, 565 ] #1M X: 1.2, Y -60, Z: 8300, BS: x: 1.5 ,
#March 9 night 11:00pm change detector distance to 5000 mm,
# BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ]
#March 9 night 11:22 pm change detector distance to 1600 mm,
# BS change to 2.3 mm , 1M X change to 1.6, Y change t0 -60.8, to make the beam center same as [ 490, 565 ]
#March 10, 10AM, change back to 5 meter for MIT
detector distance to 5000 mm,
# BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ]
FOR MIT CELL,
Cables:
Cell A:
#3, red to red
#2 orange to blue
Cell B:
#9 black to black
#4 brown to brown
WAXS beam center: [ 87, 96 ], there is bad pixel here could check later, (BS: X: -20.92 )
Put Att and move bs to check the BC_WAXS, --> [ 87, 96 ]
'''
'''
Run1 , using the 3-D printed holder wiithout hole (the optical image quality is not good)
sample_dict = { 1: 'FT_01', 2: 'FT_02', 3:'',
4: 'FT_04', 5: 'FT_05', 6: 'FT_06', 7:'',
8: 'FT_07', 9: 'FT_08', 10: 'FT_09', 11:'',
12: 'GZ_01', 13: '', 14: 'GZ_02', 15: '',16: 'GZ_03',
17:'', 18: 'GZ_04', 19: '', 20: 'GB_05', 21:'', 22:'GB_06', 23: '', 24: 'GB07', 25: '', 26: 'GZ_08',
} #All Z from 12-26 is 4800 (ALL GB),
pxy_dict = {
1: ( -44300, -3100),
2: ( -42300, -2700 ), 4: ( -34300, -2000 ), 5: ( -30800, -2000 ),
6: ( -27700, -3000 ), 8: ( -19700, 1100 ), 9: ( -17400, 900 ),
10: ( -14500, -1100 ), 12: ( -5200, 2500 ), 14: ( 1900, 2200 ),
16: ( 8500, 1400 ), 18: ( 16100 , 3800 ), 20: ( 23000, 3000 ),
22: ( 30200 , 1700 ), 24: ( 38000 , 1500), 26: ( 44800, 1200 ),
}
# Run 2, using the 3-D printed holder with rectangle holes (the optical image quality is not good)
sample_dict = { 1: 'FT_03', 2: 'GB_09', 3:'',
4: 'GB_10', 6: 'GB_11', 8: 'GB_12', 10: 'GB_13', 12: 'GB_14',
14: 'GB_15', 16: 'GB_16', 18: 'GB_17', 20: 'GB_18', 22: 'GB_19', 24: 'GB_20', 26: 'GB_21',
} #All Z 3800
pxy_dict = {
1: ( -44860, 1600 ),
2: ( -41260, 2700 ), 4: ( -34100, 200 ),
6: ( -26500, 1100 ), 8: ( -19700, 2600 ),
10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ),
16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ),
22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ),
}
# RUn 3, using the CMS (traditional holder), opticla a little better
sample_dict = { 1: 'GB_22', 2: 'GB_23', 3:'GB_24',
4: 'GB_25', 5: 'GB_26' , 6: 'GB_27', 7: 'GB_28', 8: 'GB_29', 9: 'GB_30', 10: 'GB_31', 11:'GB_32', 12: 'GB_33',
13: 'GB_34', 14: 'GB_35', 15: 'Dummy_Steel'
} #All Z 1400 , the direction is oposite to the previous one
pxy_dict = { 15: (-43499.91, -5019.78) , 14: (-37199.98, -5019.78), 13:(-31099.93, -5019.78), 12: (-24599.85, -5019.78),
11: (-17999.94, -5019.78), 10: (-11600.02, -5019.9), 9: (-5400, -5019.9), 8: ( 1100 , -5019.9), 7: ( 7600 , -5019.9), 6: ( 13900 , -5019.9),
5: ( 20200 , -5019.9), 4: ( 26500 , -5019.9), 3: ( 32800 , -5019.9), 2: ( 39200 , -5019.9), 1: ( 45400 , -5019.9),
}
# Run 4, using the 3-D printed holder with rectangle holes (the optical image quality is not good)
sample_dict = { 2: 'GB_36', 3:'',
4: 'GB_37', 6: 'GB_38', 8: 'GB_40', 10: 'GB_41', 12: 'GB_44',
14: 'GB_42', 16: 'GB_43', 18: 'GB_45', 20: 'FL_UIO66',
} #All Z 3800
pxy_dict = {
1: ( -44860, 1600 ),
2: ( -41260, 2700 ), 4: ( -34100, 200 ),
6: ( -26500, 1100 ), 8: ( -19700, 2600 ),
10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ),
16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ),
22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ),
}
# Run 5, using the 3-D printed holder with 15 holes for 2mm capillary
sample_dict = { 3: 'WX_01_PureS', 4:'WX_02_PureD',
5: 'WX_03_Mix_SD', 6: 'WX_04_Mix_SDA', 7: 'WX_05_LHCE_1_Mix_LiFSISD', 8: 'WX_06_LHCE_2_Mix_LiFSISDA', 9: 'WX_07_LiFS_S', 10: 'WX_08_LiFS_SA',
12: 'FL_UIo_66'
} #All Z 3800
#measure 8.3 meter first, measure 1 sec, 10 sec #piezo_z = 3800
#Then move detecter to 5 meter #piezo_z = 3800
# Then change det-dis to 1.6 meter #piezo_z = 3800
# Then measure WAXS, using #piezo_z = 3800 (should use the 1400 , need to re-calibrate )
#
pxy_dict = {
3: (-30040, -1800),
4: ( -22840, -1800),
5: ( -17240, -1800),
6: ( -11540, -1800),
7: ( -4140, -4600 ),
8: ( 2060, -1800 ),
9: ( 7460, -1800),
10: ( 13460, 1200 ),
12: ( 25200, 0 ),
}
# Run 6, using the 3-D printed holder for Cell measurements, Cell 2 (bad cell), Cell 3 (good cell )
# change the post, make the hexapod Y = 6.0 ( only change the hexpod height in +/-6 )
# Cell 3 using the Red (rough, Working) and orange (bue, smooth, Counting), the cell is good
# Cell 2 using the black (rough, Working) and brown (bue, smooth, Counting), the cell is not good
sample_dict = { #1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' , 2: 'Dinca_Cell_3_NiBHT_NaNO3'
#1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' ,
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p4'
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p4',
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p5',
2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p5',
} #All Z 4700
# measure SAXS at 5 meters using 0.2 sec, 1 sec
# also measrue WAXS using 11 angles first, maybe it only be worthy to measure the zero angle
#
pxy_dict = {
#1: (-29400, 5400),
2: (30200, 7200), #(29800, 7400), # (29800, 7300), #(29600, 7300), #( 31000, 6700 ),
}
# Run 7, using the 3-D printed holder for Cell measurements, Cell 5 ( good cell) Cell 6 (bad cell )
sample_dict = {
#1: 'Dinca_Cell_5_CuHHTT_K2SO4' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_N0p5' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5_2' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_Np5_2' ,
#2: 'Dinca_Cell_6_CuHHTT_TACl'
}
#measure no V first, saxs and waxs, three angles, Z=4700
# Only for cell 5, apply -0.5, saxs/waxs
#then apply 0.5 V, saxs, waxs
pxy_dict = {
1: (-28700, 5100), # (-28900, 4800), # (-28700, 5100), # (-28600, 5100),
#2: ( 29700, 6000 ),
}
# Run 8, Cell 7 and cell 9
#measure no V first, saxs and waxs, three angles, Z=4700, there are cables block beam, open chamber and re-do it
sample_dict = {
1: 'Dinca_Cell_9_TiBQ_NaClO4' ,
2: 'Dinca_Cell_7_CuHHTT_CsBr' ,
}
pxy_dict = {
1: (-30100, 5100),
2: ( 30800, 7100 ),
}
# Run 9, Cell 7 and cell 9
#measure no V first, saxs and waxs, three angles, Z=4700,
#
sample_dict = {
# 1: 'Dinca_Cell_9_TiBQ_NaClO4' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_2' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_2' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_3' , #measue waxs first then saxs
1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_3' , #measue waxs first then saxs
#2: 'Dinca_Cell_7_CuHHTT_CsBr' ,
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5' #measure twice, between this sperate about 5 mim
#2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5'
#2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2'
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2'
## Repeat again using a different location
#2: 'Dinca_Cell_7_CuHHTT_CsBr_Repeat' ,
#2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_Repeat' #measue waxs first then saxs
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_Repeat' #measue waxs first then saxs
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2_Repeat' #measue waxs first then saxs
#2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2_Repeat' #measue waxs first then saxs
}
pxy_dict = {
1: ( -29700, 5300),
#2: (30600, 5900), # ( 30000, 5400 ),
}
'''
# Run 9, Cell 4 and cell 8
#measure no V first, saxs and waxs, three angles, Z=4700,
#
sample_dict = {
#1: 'Dinca_Cell_4_CuHHTT_K2SO4' ,
2: 'Dinca_Cell_1_NiBH_K2S4' ,
}
pxy_dict = {
# 1: ( -29700, 5000),
2: (30500, 5200),
}
def _measure_one_potential( V ='' ):
mov_sam ( 2 )
RE.md['sample'] += V
print( RE.md['sample'])
RE( measure_waxs() )
time.sleep(3)
RE(measure_saxs(1, move_y= False ) )
##################################################
############ Some convinent functions#################
#########################################################
def movx( dx ):
RE( bps.mvr(piezo.x, dx) )
def movy( dy ):
RE( bps.mvr(piezo.y, dy) )
def get_posxy( ):
return round( piezo.x.user_readback.value, 2 ),round( piezo.y.user_readback.value , 2 )
def move_waxs( waxs_angle=8.0):
RE( bps.mv(waxs, waxs_angle) )
def move_waxs_off( waxs_angle=8.0 ):
RE( bps.mv(waxs, waxs_angle) )
def move_waxs_on( waxs_angle=0.0 ):
RE( bps.mv(waxs, waxs_angle) )
def mov_sam( pos ):
px,py = pxy_dict[ pos ]
RE( bps.mv(piezo.x, px) )
RE( bps.mv(piezo.y, py) )
sample = sample_dict[pos]
print('Move to pos=%s for sample:%s...'%(pos, sample ))
RE.md['sample'] = sample
def check_saxs_sample_loc( sleep = 5 ):
ks = list( sample_dict.keys() )
for k in ks:
mov_sam( k )
time.sleep( sleep )
def measure_saxs( t = .1, att='None', move_y=False, user_name='', sample= None ):
if sample is None:
sample = RE.md['sample']
dets = [ pil1M ]
#att_in( att )
name_fmt = '{sample}_x{x_pos}_y{y_pos}_det{saxs_z}m_expt{expt}s_att{att}_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=np.round(piezo.x.position,2), y_pos=np.round(piezo.y.position,2),
saxs_z=np.round(pil1m_pos.z.position,2), expt=t, att=att, scan_id=RE.md['scan_id'])
if move_y:
yield from bps.mvr(piezo.y, 30 )
det_exposure_time( t, t)
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
print('Collect data here....')
yield from bp.count(dets, num=1)
#att_out( att )
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
def measure_waxs( t = 1.0, att='None', move_y=False, user_name='',
saxs_on = False, waxs_angles = [ 0. , 6.5, 13. ] , inverse_angle = False ):
#waxs_angles = np.linspace(0, 65, 11) #the max range
#waxs_angles = np.linspace(0, 65, 11),
#[ 0. , 6.5, 13. , 19.5]
waxs_angle_array = np.array( waxs_angles )
dets = [ pil300KW ]
max_waxs_angle = np.max( waxs_angle_array )
for waxs_angle in waxs_angle_array:
yield from bps.mv(waxs, waxs_angle)
sample = RE.md['sample']
name_fmt = '{sample}_x{x_pos:05.2f}_y{y_pos:05.2f}_z{z_pos:05.2f}_waxs{waxs_angle:05.2f}_expt{expt}s_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=piezo.x.position, y_pos=piezo.y.position, z_pos=piezo.z.position,
waxs_angle=waxs_angle, expt= t, scan_id=RE.md['scan_id'])
print( sample_name )
if saxs_on:
if waxs_angle == max_waxs_angle:
dets = [ pil1M, pil300KW ] # waxs, maxs, saxs = [pil300KW, rayonix, pil1M]
else:
dets= [ pil300KW ]
if move_y:
yield from bps.mvr(piezo.y, 100 )
det_exposure_time( t, t )
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
#yield from bp.scan(dets, waxs, *waxs_arc)
yield from bp.count(dets, num=1)
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
def measure_series_saxs( ):
ks = list( sample_dict.keys() ) #[4:]
for k in ks:
mov_sam( k )
#movy( 100 )
RE( measure_saxs( t = 1, att= 'None', move_y= False, user_name='' ) )
#movy( 100 )
#RE( measure_saxs( t = 10, att= 'None', move_y= False, user_name='' ) )
def measure_waxs_loop_sample( t = 1.0, att='None', move_y=False, user_name='',
saxs_on = False, waxs_angles = [ 0. , 6.5, 13. ] , inverse_angle = False ):
#waxs_angles = np.linspace(0, 65, 11) #the max range
#waxs_angles = np.linspace(0, 65, 11),
#[ 0. , 6.5, 13. , 19.5]
ks = list( sample_dict.keys() ) #[4:]
waxs_angle_array = np.array( waxs_angles )
dets = [ pil300KW ]
max_waxs_angle = np.max( waxs_angle_array )
for waxs_angle in waxs_angle_array:
yield from bps.mv(waxs, waxs_angle)
for pos in ks:
#mov_sam( k )
px,py = pxy_dict[ pos ]
#py += 300
print( px, py )
yield from bps.mv(piezo.x, px)
yield from bps.mv(piezo.y, py)
sample = sample_dict[pos]
print('Move to pos=%s for sample:%s...'%(pos, sample ))
RE.md['sample'] = sample
sample = RE.md['sample']
name_fmt = '{sample}_x{x_pos:05.2f}_y{y_pos:05.2f}_z{z_pos:05.2f}_waxs{waxs_angle:05.2f}_expt{expt}s_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=piezo.x.position, y_pos=piezo.y.position, z_pos=piezo.z.position,
waxs_angle=waxs_angle, expt= t, scan_id=RE.md['scan_id'])
print( sample_name )
if saxs_on:
if waxs_angle == max_waxs_angle:
dets = [ pil1M, pil300KW ] # waxs, maxs, saxs = [pil300KW, rayonix, pil1M]
else:
dets= [ pil300KW ]
if move_y:
yield from bps.mvr(piezo.y, 100 )
det_exposure_time( t, t )
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
#yield from bp.scan(dets, waxs, *waxs_arc)
yield from bp.count(dets, num=1)
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
| ##Collect data:
'''
#SMI: 2021/2/9
SAF: 307138 Standard Beamline 12-ID proposal: 308023 (CFN, 307961)
create proposal: proposal_id( '2021_1', '307138_Dinca' ) #create the proposal id and folder
%run -i /home/xf12id/.ipython/profile_collection/startup/users/30-user-Dinca2021C2B.py
RE( shopen() ) # to open the beam and feedback
RE( shclose())
# Energy: 16.1 keV, 0.77009 A
# SAXS distance 8300
# The beam center on SAXS: [ 490, 565 ] Or [ 491, 565 ] #1M X: 1.2, Y -60, Z: 8300, BS: x: 1.5 ,
#March 9 night 11:00pm change detector distance to 5000 mm,
# BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ]
#March 9 night 11:22 pm change detector distance to 1600 mm,
# BS change to 2.3 mm , 1M X change to 1.6, Y change t0 -60.8, to make the beam center same as [ 490, 565 ]
#March 10, 10AM, change back to 5 meter for MIT
detector distance to 5000 mm,
# BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ]
FOR MIT CELL,
Cables:
Cell A:
#3, red to red
#2 orange to blue
Cell B:
#9 black to black
#4 brown to brown
WAXS beam center: [ 87, 96 ], there is bad pixel here could check later, (BS: X: -20.92 )
Put Att and move bs to check the BC_WAXS, --> [ 87, 96 ]
'''
'''
Run1 , using the 3-D printed holder wiithout hole (the optical image quality is not good)
sample_dict = { 1: 'FT_01', 2: 'FT_02', 3:'',
4: 'FT_04', 5: 'FT_05', 6: 'FT_06', 7:'',
8: 'FT_07', 9: 'FT_08', 10: 'FT_09', 11:'',
12: 'GZ_01', 13: '', 14: 'GZ_02', 15: '',16: 'GZ_03',
17:'', 18: 'GZ_04', 19: '', 20: 'GB_05', 21:'', 22:'GB_06', 23: '', 24: 'GB07', 25: '', 26: 'GZ_08',
} #All Z from 12-26 is 4800 (ALL GB),
pxy_dict = {
1: ( -44300, -3100),
2: ( -42300, -2700 ), 4: ( -34300, -2000 ), 5: ( -30800, -2000 ),
6: ( -27700, -3000 ), 8: ( -19700, 1100 ), 9: ( -17400, 900 ),
10: ( -14500, -1100 ), 12: ( -5200, 2500 ), 14: ( 1900, 2200 ),
16: ( 8500, 1400 ), 18: ( 16100 , 3800 ), 20: ( 23000, 3000 ),
22: ( 30200 , 1700 ), 24: ( 38000 , 1500), 26: ( 44800, 1200 ),
}
# Run 2, using the 3-D printed holder with rectangle holes (the optical image quality is not good)
sample_dict = { 1: 'FT_03', 2: 'GB_09', 3:'',
4: 'GB_10', 6: 'GB_11', 8: 'GB_12', 10: 'GB_13', 12: 'GB_14',
14: 'GB_15', 16: 'GB_16', 18: 'GB_17', 20: 'GB_18', 22: 'GB_19', 24: 'GB_20', 26: 'GB_21',
} #All Z 3800
pxy_dict = {
1: ( -44860, 1600 ),
2: ( -41260, 2700 ), 4: ( -34100, 200 ),
6: ( -26500, 1100 ), 8: ( -19700, 2600 ),
10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ),
16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ),
22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ),
}
# RUn 3, using the CMS (traditional holder), opticla a little better
sample_dict = { 1: 'GB_22', 2: 'GB_23', 3:'GB_24',
4: 'GB_25', 5: 'GB_26' , 6: 'GB_27', 7: 'GB_28', 8: 'GB_29', 9: 'GB_30', 10: 'GB_31', 11:'GB_32', 12: 'GB_33',
13: 'GB_34', 14: 'GB_35', 15: 'Dummy_Steel'
} #All Z 1400 , the direction is oposite to the previous one
pxy_dict = { 15: (-43499.91, -5019.78) , 14: (-37199.98, -5019.78), 13:(-31099.93, -5019.78), 12: (-24599.85, -5019.78),
11: (-17999.94, -5019.78), 10: (-11600.02, -5019.9), 9: (-5400, -5019.9), 8: ( 1100 , -5019.9), 7: ( 7600 , -5019.9), 6: ( 13900 , -5019.9),
5: ( 20200 , -5019.9), 4: ( 26500 , -5019.9), 3: ( 32800 , -5019.9), 2: ( 39200 , -5019.9), 1: ( 45400 , -5019.9),
}
# Run 4, using the 3-D printed holder with rectangle holes (the optical image quality is not good)
sample_dict = { 2: 'GB_36', 3:'',
4: 'GB_37', 6: 'GB_38', 8: 'GB_40', 10: 'GB_41', 12: 'GB_44',
14: 'GB_42', 16: 'GB_43', 18: 'GB_45', 20: 'FL_UIO66',
} #All Z 3800
pxy_dict = {
1: ( -44860, 1600 ),
2: ( -41260, 2700 ), 4: ( -34100, 200 ),
6: ( -26500, 1100 ), 8: ( -19700, 2600 ),
10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ),
16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ),
22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ),
}
# Run 5, using the 3-D printed holder with 15 holes for 2mm capillary
sample_dict = { 3: 'WX_01_PureS', 4:'WX_02_PureD',
5: 'WX_03_Mix_SD', 6: 'WX_04_Mix_SDA', 7: 'WX_05_LHCE_1_Mix_LiFSISD', 8: 'WX_06_LHCE_2_Mix_LiFSISDA', 9: 'WX_07_LiFS_S', 10: 'WX_08_LiFS_SA',
12: 'FL_UIo_66'
} #All Z 3800
#measure 8.3 meter first, measure 1 sec, 10 sec #piezo_z = 3800
#Then move detecter to 5 meter #piezo_z = 3800
# Then change det-dis to 1.6 meter #piezo_z = 3800
# Then measure WAXS, using #piezo_z = 3800 (should use the 1400 , need to re-calibrate )
#
pxy_dict = {
3: (-30040, -1800),
4: ( -22840, -1800),
5: ( -17240, -1800),
6: ( -11540, -1800),
7: ( -4140, -4600 ),
8: ( 2060, -1800 ),
9: ( 7460, -1800),
10: ( 13460, 1200 ),
12: ( 25200, 0 ),
}
# Run 6, using the 3-D printed holder for Cell measurements, Cell 2 (bad cell), Cell 3 (good cell )
# change the post, make the hexapod Y = 6.0 ( only change the hexpod height in +/-6 )
# Cell 3 using the Red (rough, Working) and orange (bue, smooth, Counting), the cell is good
# Cell 2 using the black (rough, Working) and brown (bue, smooth, Counting), the cell is not good
sample_dict = { #1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' , 2: 'Dinca_Cell_3_NiBHT_NaNO3'
#1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' ,
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p4'
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p4',
#2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p5',
2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p5',
} #All Z 4700
# measure SAXS at 5 meters using 0.2 sec, 1 sec
# also measrue WAXS using 11 angles first, maybe it only be worthy to measure the zero angle
#
pxy_dict = {
#1: (-29400, 5400),
2: (30200, 7200), #(29800, 7400), # (29800, 7300), #(29600, 7300), #( 31000, 6700 ),
}
# Run 7, using the 3-D printed holder for Cell measurements, Cell 5 ( good cell) Cell 6 (bad cell )
sample_dict = {
#1: 'Dinca_Cell_5_CuHHTT_K2SO4' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_N0p5' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5_2' ,
#1: 'Dinca_Cell_5_CuHHTT_K2SO4_Np5_2' ,
#2: 'Dinca_Cell_6_CuHHTT_TACl'
}
#measure no V first, saxs and waxs, three angles, Z=4700
# Only for cell 5, apply -0.5, saxs/waxs
#then apply 0.5 V, saxs, waxs
pxy_dict = {
1: (-28700, 5100), # (-28900, 4800), # (-28700, 5100), # (-28600, 5100),
#2: ( 29700, 6000 ),
}
# Run 8, Cell 7 and cell 9
#measure no V first, saxs and waxs, three angles, Z=4700, there are cables block beam, open chamber and re-do it
sample_dict = {
1: 'Dinca_Cell_9_TiBQ_NaClO4' ,
2: 'Dinca_Cell_7_CuHHTT_CsBr' ,
}
pxy_dict = {
1: (-30100, 5100),
2: ( 30800, 7100 ),
}
# Run 9, Cell 7 and cell 9
#measure no V first, saxs and waxs, three angles, Z=4700,
#
sample_dict = {
# 1: 'Dinca_Cell_9_TiBQ_NaClO4' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_2' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_2' , #measue waxs first then saxs
#1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_3' , #measue waxs first then saxs
1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_3' , #measue waxs first then saxs
#2: 'Dinca_Cell_7_CuHHTT_CsBr' ,
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5' #measure twice, between this sperate about 5 mim
#2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5'
#2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2'
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2'
## Repeat again using a different location
#2: 'Dinca_Cell_7_CuHHTT_CsBr_Repeat' ,
#2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_Repeat' #measue waxs first then saxs
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_Repeat' #measue waxs first then saxs
# 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2_Repeat' #measue waxs first then saxs
#2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2_Repeat' #measue waxs first then saxs
}
pxy_dict = {
1: ( -29700, 5300),
#2: (30600, 5900), # ( 30000, 5400 ),
}
'''
# Run 9, Cell 4 and cell 8
#measure no V first, saxs and waxs, three angles, Z=4700,
#
sample_dict = {
#1: 'Dinca_Cell_4_CuHHTT_K2SO4' ,
2: 'Dinca_Cell_1_NiBH_K2S4' ,
}
pxy_dict = {
# 1: ( -29700, 5000),
2: (30500, 5200),
}
def _measure_one_potential( V ='' ):
mov_sam ( 2 )
RE.md['sample'] += V
print( RE.md['sample'])
RE( measure_waxs() )
time.sleep(3)
RE(measure_saxs(1, move_y= False ) )
##################################################
############ Some convinent functions#################
#########################################################
def movx( dx ):
RE( bps.mvr(piezo.x, dx) )
def movy( dy ):
RE( bps.mvr(piezo.y, dy) )
def get_posxy( ):
return round( piezo.x.user_readback.value, 2 ),round( piezo.y.user_readback.value , 2 )
def move_waxs( waxs_angle=8.0):
RE( bps.mv(waxs, waxs_angle) )
def move_waxs_off( waxs_angle=8.0 ):
RE( bps.mv(waxs, waxs_angle) )
def move_waxs_on( waxs_angle=0.0 ):
RE( bps.mv(waxs, waxs_angle) )
def mov_sam( pos ):
px,py = pxy_dict[ pos ]
RE( bps.mv(piezo.x, px) )
RE( bps.mv(piezo.y, py) )
sample = sample_dict[pos]
print('Move to pos=%s for sample:%s...'%(pos, sample ))
RE.md['sample'] = sample
def check_saxs_sample_loc( sleep = 5 ):
ks = list( sample_dict.keys() )
for k in ks:
mov_sam( k )
time.sleep( sleep )
def measure_saxs( t = .1, att='None', move_y=False, user_name='', sample= None ):
if sample is None:
sample = RE.md['sample']
dets = [ pil1M ]
#att_in( att )
name_fmt = '{sample}_x{x_pos}_y{y_pos}_det{saxs_z}m_expt{expt}s_att{att}_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=np.round(piezo.x.position,2), y_pos=np.round(piezo.y.position,2),
saxs_z=np.round(pil1m_pos.z.position,2), expt=t, att=att, scan_id=RE.md['scan_id'])
if move_y:
yield from bps.mvr(piezo.y, 30 )
det_exposure_time( t, t)
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
print('Collect data here....')
yield from bp.count(dets, num=1)
#att_out( att )
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
def measure_waxs( t = 1.0, att='None', move_y=False, user_name='',
saxs_on = False, waxs_angles = [ 0. , 6.5, 13. ] , inverse_angle = False ):
#waxs_angles = np.linspace(0, 65, 11) #the max range
#waxs_angles = np.linspace(0, 65, 11),
#[ 0. , 6.5, 13. , 19.5]
waxs_angle_array = np.array( waxs_angles )
dets = [ pil300KW ]
max_waxs_angle = np.max( waxs_angle_array )
for waxs_angle in waxs_angle_array:
yield from bps.mv(waxs, waxs_angle)
sample = RE.md['sample']
name_fmt = '{sample}_x{x_pos:05.2f}_y{y_pos:05.2f}_z{z_pos:05.2f}_waxs{waxs_angle:05.2f}_expt{expt}s_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=piezo.x.position, y_pos=piezo.y.position, z_pos=piezo.z.position,
waxs_angle=waxs_angle, expt= t, scan_id=RE.md['scan_id'])
print( sample_name )
if saxs_on:
if waxs_angle == max_waxs_angle:
dets = [ pil1M, pil300KW ] # waxs, maxs, saxs = [pil300KW, rayonix, pil1M]
else:
dets= [ pil300KW ]
if move_y:
yield from bps.mvr(piezo.y, 100 )
det_exposure_time( t, t )
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
#yield from bp.scan(dets, waxs, *waxs_arc)
yield from bp.count(dets, num=1)
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
def measure_series_saxs( ):
ks = list( sample_dict.keys() ) #[4:]
for k in ks:
mov_sam( k )
#movy( 100 )
RE( measure_saxs( t = 1, att= 'None', move_y= False, user_name='' ) )
#movy( 100 )
#RE( measure_saxs( t = 10, att= 'None', move_y= False, user_name='' ) )
def measure_waxs_loop_sample( t = 1.0, att='None', move_y=False, user_name='',
saxs_on = False, waxs_angles = [ 0. , 6.5, 13. ] , inverse_angle = False ):
#waxs_angles = np.linspace(0, 65, 11) #the max range
#waxs_angles = np.linspace(0, 65, 11),
#[ 0. , 6.5, 13. , 19.5]
ks = list( sample_dict.keys() ) #[4:]
waxs_angle_array = np.array( waxs_angles )
dets = [ pil300KW ]
max_waxs_angle = np.max( waxs_angle_array )
for waxs_angle in waxs_angle_array:
yield from bps.mv(waxs, waxs_angle)
for pos in ks:
#mov_sam( k )
px,py = pxy_dict[ pos ]
#py += 300
print( px, py )
yield from bps.mv(piezo.x, px)
yield from bps.mv(piezo.y, py)
sample = sample_dict[pos]
print('Move to pos=%s for sample:%s...'%(pos, sample ))
RE.md['sample'] = sample
sample = RE.md['sample']
name_fmt = '{sample}_x{x_pos:05.2f}_y{y_pos:05.2f}_z{z_pos:05.2f}_waxs{waxs_angle:05.2f}_expt{expt}s_sid{scan_id:08d}'
sample_name = name_fmt.format(sample=sample, x_pos=piezo.x.position, y_pos=piezo.y.position, z_pos=piezo.z.position,
waxs_angle=waxs_angle, expt= t, scan_id=RE.md['scan_id'])
print( sample_name )
if saxs_on:
if waxs_angle == max_waxs_angle:
dets = [ pil1M, pil300KW ] # waxs, maxs, saxs = [pil300KW, rayonix, pil1M]
else:
dets= [ pil300KW ]
if move_y:
yield from bps.mvr(piezo.y, 100 )
det_exposure_time( t, t )
sample_id(user_name=user_name, sample_name=sample_name )
print(f'\n\t=== Sample: {sample_name} ===\n')
#yield from bp.scan(dets, waxs, *waxs_arc)
yield from bp.count(dets, num=1)
sample_id(user_name='test', sample_name='test')
det_exposure_time(0.5)
| en | 0.622464 | ##Collect data: #SMI: 2021/2/9 SAF: 307138 Standard Beamline 12-ID proposal: 308023 (CFN, 307961) create proposal: proposal_id( '2021_1', '307138_Dinca' ) #create the proposal id and folder %run -i /home/xf12id/.ipython/profile_collection/startup/users/30-user-Dinca2021C2B.py RE( shopen() ) # to open the beam and feedback RE( shclose()) # Energy: 16.1 keV, 0.77009 A # SAXS distance 8300 # The beam center on SAXS: [ 490, 565 ] Or [ 491, 565 ] #1M X: 1.2, Y -60, Z: 8300, BS: x: 1.5 , #March 9 night 11:00pm change detector distance to 5000 mm, # BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ] #March 9 night 11:22 pm change detector distance to 1600 mm, # BS change to 2.3 mm , 1M X change to 1.6, Y change t0 -60.8, to make the beam center same as [ 490, 565 ] #March 10, 10AM, change back to 5 meter for MIT detector distance to 5000 mm, # BS change to 2.0 mm , 1M X change to 1.3, Y change t0 -60.3, to make the beam center same as [ 490, 565 ] FOR MIT CELL, Cables: Cell A: #3, red to red #2 orange to blue Cell B: #9 black to black #4 brown to brown WAXS beam center: [ 87, 96 ], there is bad pixel here could check later, (BS: X: -20.92 ) Put Att and move bs to check the BC_WAXS, --> [ 87, 96 ] Run1 , using the 3-D printed holder wiithout hole (the optical image quality is not good) sample_dict = { 1: 'FT_01', 2: 'FT_02', 3:'', 4: 'FT_04', 5: 'FT_05', 6: 'FT_06', 7:'', 8: 'FT_07', 9: 'FT_08', 10: 'FT_09', 11:'', 12: 'GZ_01', 13: '', 14: 'GZ_02', 15: '',16: 'GZ_03', 17:'', 18: 'GZ_04', 19: '', 20: 'GB_05', 21:'', 22:'GB_06', 23: '', 24: 'GB07', 25: '', 26: 'GZ_08', } #All Z from 12-26 is 4800 (ALL GB), pxy_dict = { 1: ( -44300, -3100), 2: ( -42300, -2700 ), 4: ( -34300, -2000 ), 5: ( -30800, -2000 ), 6: ( -27700, -3000 ), 8: ( -19700, 1100 ), 9: ( -17400, 900 ), 10: ( -14500, -1100 ), 12: ( -5200, 2500 ), 14: ( 1900, 2200 ), 16: ( 8500, 1400 ), 18: ( 16100 , 3800 ), 20: ( 23000, 3000 ), 22: ( 30200 , 1700 ), 24: ( 38000 , 1500), 26: ( 44800, 1200 ), } # Run 2, using the 3-D printed holder with rectangle holes (the optical image quality is not good) sample_dict = { 1: 'FT_03', 2: 'GB_09', 3:'', 4: 'GB_10', 6: 'GB_11', 8: 'GB_12', 10: 'GB_13', 12: 'GB_14', 14: 'GB_15', 16: 'GB_16', 18: 'GB_17', 20: 'GB_18', 22: 'GB_19', 24: 'GB_20', 26: 'GB_21', } #All Z 3800 pxy_dict = { 1: ( -44860, 1600 ), 2: ( -41260, 2700 ), 4: ( -34100, 200 ), 6: ( -26500, 1100 ), 8: ( -19700, 2600 ), 10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ), 16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ), 22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ), } # RUn 3, using the CMS (traditional holder), opticla a little better sample_dict = { 1: 'GB_22', 2: 'GB_23', 3:'GB_24', 4: 'GB_25', 5: 'GB_26' , 6: 'GB_27', 7: 'GB_28', 8: 'GB_29', 9: 'GB_30', 10: 'GB_31', 11:'GB_32', 12: 'GB_33', 13: 'GB_34', 14: 'GB_35', 15: 'Dummy_Steel' } #All Z 1400 , the direction is oposite to the previous one pxy_dict = { 15: (-43499.91, -5019.78) , 14: (-37199.98, -5019.78), 13:(-31099.93, -5019.78), 12: (-24599.85, -5019.78), 11: (-17999.94, -5019.78), 10: (-11600.02, -5019.9), 9: (-5400, -5019.9), 8: ( 1100 , -5019.9), 7: ( 7600 , -5019.9), 6: ( 13900 , -5019.9), 5: ( 20200 , -5019.9), 4: ( 26500 , -5019.9), 3: ( 32800 , -5019.9), 2: ( 39200 , -5019.9), 1: ( 45400 , -5019.9), } # Run 4, using the 3-D printed holder with rectangle holes (the optical image quality is not good) sample_dict = { 2: 'GB_36', 3:'', 4: 'GB_37', 6: 'GB_38', 8: 'GB_40', 10: 'GB_41', 12: 'GB_44', 14: 'GB_42', 16: 'GB_43', 18: 'GB_45', 20: 'FL_UIO66', } #All Z 3800 pxy_dict = { 1: ( -44860, 1600 ), 2: ( -41260, 2700 ), 4: ( -34100, 200 ), 6: ( -26500, 1100 ), 8: ( -19700, 2600 ), 10: ( -12800, 100 ), 12: ( -5200, -900 ), 14: ( 2100, 3200 ), 16: ( 8800, 1900 ), 18: ( 16100 , 4200 ), 20: ( 23000, 4700 ), 22: ( 30200 , 4700 ), 24: ( 38000 , -300 ), 26: ( 44800, 4800 ), } # Run 5, using the 3-D printed holder with 15 holes for 2mm capillary sample_dict = { 3: 'WX_01_PureS', 4:'WX_02_PureD', 5: 'WX_03_Mix_SD', 6: 'WX_04_Mix_SDA', 7: 'WX_05_LHCE_1_Mix_LiFSISD', 8: 'WX_06_LHCE_2_Mix_LiFSISDA', 9: 'WX_07_LiFS_S', 10: 'WX_08_LiFS_SA', 12: 'FL_UIo_66' } #All Z 3800 #measure 8.3 meter first, measure 1 sec, 10 sec #piezo_z = 3800 #Then move detecter to 5 meter #piezo_z = 3800 # Then change det-dis to 1.6 meter #piezo_z = 3800 # Then measure WAXS, using #piezo_z = 3800 (should use the 1400 , need to re-calibrate ) # pxy_dict = { 3: (-30040, -1800), 4: ( -22840, -1800), 5: ( -17240, -1800), 6: ( -11540, -1800), 7: ( -4140, -4600 ), 8: ( 2060, -1800 ), 9: ( 7460, -1800), 10: ( 13460, 1200 ), 12: ( 25200, 0 ), } # Run 6, using the 3-D printed holder for Cell measurements, Cell 2 (bad cell), Cell 3 (good cell ) # change the post, make the hexapod Y = 6.0 ( only change the hexpod height in +/-6 ) # Cell 3 using the Red (rough, Working) and orange (bue, smooth, Counting), the cell is good # Cell 2 using the black (rough, Working) and brown (bue, smooth, Counting), the cell is not good sample_dict = { #1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' , 2: 'Dinca_Cell_3_NiBHT_NaNO3' #1: 'Dinca_Cell_2_NiBHT_K2SO4_Repeat' , #2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p4' #2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p4', #2: 'Dinca_Cell_3_NiBHT_NaNO3_V_N0p5', 2: 'Dinca_Cell_3_NiBHT_NaNO3_V_0p5', } #All Z 4700 # measure SAXS at 5 meters using 0.2 sec, 1 sec # also measrue WAXS using 11 angles first, maybe it only be worthy to measure the zero angle # pxy_dict = { #1: (-29400, 5400), 2: (30200, 7200), #(29800, 7400), # (29800, 7300), #(29600, 7300), #( 31000, 6700 ), } # Run 7, using the 3-D printed holder for Cell measurements, Cell 5 ( good cell) Cell 6 (bad cell ) sample_dict = { #1: 'Dinca_Cell_5_CuHHTT_K2SO4' , #1: 'Dinca_Cell_5_CuHHTT_K2SO4_N0p5' , #1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5' , #1: 'Dinca_Cell_5_CuHHTT_K2SO4_0p5_2' , #1: 'Dinca_Cell_5_CuHHTT_K2SO4_Np5_2' , #2: 'Dinca_Cell_6_CuHHTT_TACl' } #measure no V first, saxs and waxs, three angles, Z=4700 # Only for cell 5, apply -0.5, saxs/waxs #then apply 0.5 V, saxs, waxs pxy_dict = { 1: (-28700, 5100), # (-28900, 4800), # (-28700, 5100), # (-28600, 5100), #2: ( 29700, 6000 ), } # Run 8, Cell 7 and cell 9 #measure no V first, saxs and waxs, three angles, Z=4700, there are cables block beam, open chamber and re-do it sample_dict = { 1: 'Dinca_Cell_9_TiBQ_NaClO4' , 2: 'Dinca_Cell_7_CuHHTT_CsBr' , } pxy_dict = { 1: (-30100, 5100), 2: ( 30800, 7100 ), } # Run 9, Cell 7 and cell 9 #measure no V first, saxs and waxs, three angles, Z=4700, # sample_dict = { # 1: 'Dinca_Cell_9_TiBQ_NaClO4' , #measue waxs first then saxs #1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5' , #measue waxs first then saxs #1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5' , #measue waxs first then saxs #1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_2' , #measue waxs first then saxs #1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_2' , #measue waxs first then saxs #1: 'Dinca_Cell_9_TiBQ_NaClO4_0p5_3' , #measue waxs first then saxs 1: 'Dinca_Cell_9_TiBQ_NaClO4_N0p5_3' , #measue waxs first then saxs #2: 'Dinca_Cell_7_CuHHTT_CsBr' , # 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5' #measure twice, between this sperate about 5 mim #2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5' #2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2' # 2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2' ## Repeat again using a different location #2: 'Dinca_Cell_7_CuHHTT_CsBr_Repeat' , #2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_Repeat' #measue waxs first then saxs # 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_Repeat' #measue waxs first then saxs # 2: 'Dinca_Cell_7_CuHHTT_CsBr_0p5_2_Repeat' #measue waxs first then saxs #2: 'Dinca_Cell_7_CuHHTT_CsBr_N0p5_2_Repeat' #measue waxs first then saxs } pxy_dict = { 1: ( -29700, 5300), #2: (30600, 5900), # ( 30000, 5400 ), } # Run 9, Cell 4 and cell 8 #measure no V first, saxs and waxs, three angles, Z=4700, # #1: 'Dinca_Cell_4_CuHHTT_K2SO4' , # 1: ( -29700, 5000), ################################################## ############ Some convinent functions################# ######################################################### #att_in( att ) #att_out( att ) #waxs_angles = np.linspace(0, 65, 11) #the max range #waxs_angles = np.linspace(0, 65, 11), #[ 0. , 6.5, 13. , 19.5] # waxs, maxs, saxs = [pil300KW, rayonix, pil1M] #yield from bp.scan(dets, waxs, *waxs_arc) #[4:] #movy( 100 ) #movy( 100 ) #RE( measure_saxs( t = 10, att= 'None', move_y= False, user_name='' ) ) #waxs_angles = np.linspace(0, 65, 11) #the max range #waxs_angles = np.linspace(0, 65, 11), #[ 0. , 6.5, 13. , 19.5] #[4:] #mov_sam( k ) #py += 300 # waxs, maxs, saxs = [pil300KW, rayonix, pil1M] #yield from bp.scan(dets, waxs, *waxs_arc) | 1.942398 | 2 |
exercicios/PythonExercicios/ex044.py | Roberto-Sartore/Python | 0 | 6615276 | <gh_stars>0
print('{:=^40}'.format(' LOJAS TEM DE TUDO '))
valor = float(input('Valor das compras: R$ '))
print('''FORMAS DE PAGAMENTO
[ 1 ] à vista dinheiro/cheque
[ 2 ] à vista cartão
[ 3 ] até 2x no cartão
[ 4 ] 3xou mais no cartão''')
op = int(input('Qual opção de pagamento? '))
if op == 1:
total = valor - (valor * 10 / 100)
elif op == 2:
total = valor - (valor * 5 / 100)
elif op == 3:
total = valor
parcela = total / 2
print('Sua compra será parcelada em 2x de R$ {:.2f} sem juros'.format(parcela))
elif op == 4:
total = valor + (valor * 20 / 100)
totparc = int(input('Quantas parcelas? '))
parcela = total / totparc
print('Sua compra será parcelada em {}x de R$ {:.2f} com juros'.format(totparc, parcela))
else:
total = valor
print('\033[31m Opção de pagamento invalida! tente novamente \033[m')
print('Sua compra de R$ {:.2f} vai custar R$ {:.2f} no final'.format(valor, total))
| print('{:=^40}'.format(' LOJAS TEM DE TUDO '))
valor = float(input('Valor das compras: R$ '))
print('''FORMAS DE PAGAMENTO
[ 1 ] à vista dinheiro/cheque
[ 2 ] à vista cartão
[ 3 ] até 2x no cartão
[ 4 ] 3xou mais no cartão''')
op = int(input('Qual opção de pagamento? '))
if op == 1:
total = valor - (valor * 10 / 100)
elif op == 2:
total = valor - (valor * 5 / 100)
elif op == 3:
total = valor
parcela = total / 2
print('Sua compra será parcelada em 2x de R$ {:.2f} sem juros'.format(parcela))
elif op == 4:
total = valor + (valor * 20 / 100)
totparc = int(input('Quantas parcelas? '))
parcela = total / totparc
print('Sua compra será parcelada em {}x de R$ {:.2f} com juros'.format(totparc, parcela))
else:
total = valor
print('\033[31m Opção de pagamento invalida! tente novamente \033[m')
print('Sua compra de R$ {:.2f} vai custar R$ {:.2f} no final'.format(valor, total)) | pt | 0.976398 | FORMAS DE PAGAMENTO [ 1 ] à vista dinheiro/cheque [ 2 ] à vista cartão [ 3 ] até 2x no cartão [ 4 ] 3xou mais no cartão | 3.795812 | 4 |
xyz/icexmoon/java_notes/ch14/mixin/main.py | icexmoon/java-notebook | 0 | 6615277 | <reponame>icexmoon/java-notebook
class Base:
def __init__(self, obj: object = None) -> None:
super().__init__()
self.obj = obj
def getObj(self) -> object:
return self.obj
def setObj(self, obj: object) -> None:
self.obj = obj
class Counter:
def __init__(self) -> None:
super().__init__()
num: int = getattr(self.__class__, "num", 0)
num += 1
self.id: int = num
setattr(self.__class__, "num", num)
def getId(self) -> int:
return self.id
class Mixin(Base, Counter):
def __init__(self, obj: object = None) -> None:
super().__init__(obj)
print(Mixin.__mro__)
mix: Mixin = Mixin()
print(mix.getId())
mix.setObj("hello")
print(mix.getObj())
# (<class '__main__.Mixin'>, <class '__main__.Base'>, <class '__main__.Counter'>, <class 'object'>)
# 1
# hello | class Base:
def __init__(self, obj: object = None) -> None:
super().__init__()
self.obj = obj
def getObj(self) -> object:
return self.obj
def setObj(self, obj: object) -> None:
self.obj = obj
class Counter:
def __init__(self) -> None:
super().__init__()
num: int = getattr(self.__class__, "num", 0)
num += 1
self.id: int = num
setattr(self.__class__, "num", num)
def getId(self) -> int:
return self.id
class Mixin(Base, Counter):
def __init__(self, obj: object = None) -> None:
super().__init__(obj)
print(Mixin.__mro__)
mix: Mixin = Mixin()
print(mix.getId())
mix.setObj("hello")
print(mix.getObj())
# (<class '__main__.Mixin'>, <class '__main__.Base'>, <class '__main__.Counter'>, <class 'object'>)
# 1
# hello | en | 0.310807 | # (<class '__main__.Mixin'>, <class '__main__.Base'>, <class '__main__.Counter'>, <class 'object'>) # 1 # hello | 3.545349 | 4 |
codegen/gen_plumbing.py | onlyrico/functorch | 279 | 6615278 | <filename>codegen/gen_plumbing.py
import argparse
from codegen_outofplacebatching import deindent, get_signatures, gen_unwraps
def get_signature(op, path):
signatures = get_signatures(path, include_op=True)
result = [sig for sig in signatures if sig[0] == op]
if len(result) != 1:
raise ValueError("")
return result[0]
def gen_return_sig(return_t):
if len(return_t) == 1:
return return_t[0]
return f'std::tuple<{".".join(return_t)}>'
def gen_args_sig(args_t):
args = [f'{typ} {argname}' for typ, argname in args_t]
return ', '.join(args)
def gen_args_list(args_t):
args = [f'{argname}' for _, argname in args_t]
return ', '.join(args)
def gen_plumbing(signature):
# "add.Tensor"
op, return_t, args_t = signature
maybe_op_and_variant = op.split('.')
if len(maybe_op_and_variant) == 1:
op = maybe_op_and_variant[0]
variant = ''
opname = op
else:
op, variant = maybe_op_and_variant
opname = f'{op}_{variant}'
if op.endswith('_'):
raise ValueError('Codegen doesn\'t handle in-place ops')
arg_types, arg_names = zip(*args_t)
unwraps, _ = gen_unwraps(arg_types, arg_names)
result = deindent(f"""\
{gen_return_sig(return_t)} {opname}_plumbing({gen_args_sig(args_t)}) {{
auto maybe_layer = maybeCurrentDynamicLayer();
TORCH_INTERNAL_ASSERT(maybe_layer.has_value());
int64_t cur_level = maybe_layer->layerId();
{unwraps}
// Your logic here
static auto op = c10::Dispatcher::singleton()
.findSchemaOrThrow("aten::{op}", "{variant}");
return slow_fallback<{','.join(return_t)}>(op, {{ {gen_args_list(args_t)} }});
}}
""")
return result
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Generate the batch rule plumbing for an op')
parser.add_argument('op',
help='the operator name (with overload name)')
parser.add_argument('path',
help='link to RegistrationDeclarations.h')
# Sample usage:
# gen_plumbing.py add.Tensor ~/pytorch/build/aten/src/ATen/RegistrationDeclarations.h
args = parser.parse_args()
signature = get_signature(args.op, args.path)
result = gen_plumbing(signature)
print(result)
| <filename>codegen/gen_plumbing.py
import argparse
from codegen_outofplacebatching import deindent, get_signatures, gen_unwraps
def get_signature(op, path):
signatures = get_signatures(path, include_op=True)
result = [sig for sig in signatures if sig[0] == op]
if len(result) != 1:
raise ValueError("")
return result[0]
def gen_return_sig(return_t):
if len(return_t) == 1:
return return_t[0]
return f'std::tuple<{".".join(return_t)}>'
def gen_args_sig(args_t):
args = [f'{typ} {argname}' for typ, argname in args_t]
return ', '.join(args)
def gen_args_list(args_t):
args = [f'{argname}' for _, argname in args_t]
return ', '.join(args)
def gen_plumbing(signature):
# "add.Tensor"
op, return_t, args_t = signature
maybe_op_and_variant = op.split('.')
if len(maybe_op_and_variant) == 1:
op = maybe_op_and_variant[0]
variant = ''
opname = op
else:
op, variant = maybe_op_and_variant
opname = f'{op}_{variant}'
if op.endswith('_'):
raise ValueError('Codegen doesn\'t handle in-place ops')
arg_types, arg_names = zip(*args_t)
unwraps, _ = gen_unwraps(arg_types, arg_names)
result = deindent(f"""\
{gen_return_sig(return_t)} {opname}_plumbing({gen_args_sig(args_t)}) {{
auto maybe_layer = maybeCurrentDynamicLayer();
TORCH_INTERNAL_ASSERT(maybe_layer.has_value());
int64_t cur_level = maybe_layer->layerId();
{unwraps}
// Your logic here
static auto op = c10::Dispatcher::singleton()
.findSchemaOrThrow("aten::{op}", "{variant}");
return slow_fallback<{','.join(return_t)}>(op, {{ {gen_args_list(args_t)} }});
}}
""")
return result
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Generate the batch rule plumbing for an op')
parser.add_argument('op',
help='the operator name (with overload name)')
parser.add_argument('path',
help='link to RegistrationDeclarations.h')
# Sample usage:
# gen_plumbing.py add.Tensor ~/pytorch/build/aten/src/ATen/RegistrationDeclarations.h
args = parser.parse_args()
signature = get_signature(args.op, args.path)
result = gen_plumbing(signature)
print(result)
| en | 0.21994 | # "add.Tensor" \ {gen_return_sig(return_t)} {opname}_plumbing({gen_args_sig(args_t)}) {{ auto maybe_layer = maybeCurrentDynamicLayer(); TORCH_INTERNAL_ASSERT(maybe_layer.has_value()); int64_t cur_level = maybe_layer->layerId(); {unwraps} // Your logic here static auto op = c10::Dispatcher::singleton() .findSchemaOrThrow("aten::{op}", "{variant}"); return slow_fallback<{','.join(return_t)}>(op, {{ {gen_args_list(args_t)} }}); }} # Sample usage: # gen_plumbing.py add.Tensor ~/pytorch/build/aten/src/ATen/RegistrationDeclarations.h | 2.275554 | 2 |
samples/fashion_FGC/fashion.py | IITGuwahati-AI/Mask_RCNN | 0 | 6615279 | <reponame>IITGuwahati-AI/Mask_RCNN<gh_stars>0
#!/usr/bin/env python
# coding: utf-8
# In[70]:
import os
import gc
import sys
import json
import glob
import random
from pathlib import Path
import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
from tqdm import tqdm
from imgaug import augmenters as iaa
from sklearn.model_selection import StratifiedKFold, KFold
import tensorflow as tf
# In[71]:
DATA_DIR = Path(r'/home/aditya/Downloads/Mask_RCNN/fashion_dataset')
ROOT_DIR = Path(r'/home/aditya/Downloads/Mask_RCNN')
# In[72]:
sys.path.append(r'/home/aditya/Downloads/Mask_RCNN')
from mrcnn.config import Config
from mrcnn import utils_for_FGC
import mrcnn.model_for_FGC as modellib
from mrcnn import visualize
from mrcnn.model_for_FGC import log
# In[ ]:
# In[73]:
COCO_WEIGHTS_PATH = r'/home/aditya/Downloads/Mask_RCNN/mask_rcnn_coco.h5'
NUM_CATS = 46
IMAGE_SIZE = 1024
# In[74]:
class FashionConfig(Config):
NAME = "fashion"
NUM_CLASSES = NUM_CATS + 1 # +1 for the background class
GPU_COUNT = 1
IMAGES_PER_GPU = 2 # a memory error occurs when IMAGES_PER_GPU is too high
RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256)
#DETECTION_NMS_THRESHOLD = 0.0
# STEPS_PER_EPOCH should be the number of instances
# divided by (GPU_COUNT*IMAGES_PER_GPU), and so should VALIDATION_STEPS;
# however, due to the time limit, I set them so that this kernel can be run in 9 hours
STEPS_PER_EPOCH = 1000
VALIDATION_STEPS = 200
## My changes CA
BACKBONE = 'resnet101'
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
IMAGE_RESIZE_MODE = 'square'
MINI_MASK_SHAPE = (112, 112) # (height, width) of the mini-mask
NUM_ATTR = 294
LOSS_WEIGHTS = {
"rpn_class_loss": 1.,
"rpn_bbox_loss": 1.,
"mrcnn_class_loss": 1.,
"mrcnn_bbox_loss": 1.,
"mrcnn_mask_loss": 1.,
"mrcnn_attr_loss":1.
}
config = FashionConfig()
config.display()
# In[75]:
with open(DATA_DIR/"label_descriptions.json") as f:
label_descriptions = json.load(f)
class_names = [x['name'] for x in label_descriptions['categories']]
attr_names = [x['name'] for x in label_descriptions['attributes']]
# In[76]:
print(len(class_names),len(attr_names))
# In[77]:
segment_df = pd.read_csv(DATA_DIR/"train_small.csv")
segment_df['AttributesIds'] = segment_df['AttributesIds'].apply(lambda x:tuple([int(i) for i in x.split(',')]))
# In[78]:
def pad_tuple_attrs(x):
if x!=x:
x = []
else:
x = list(x)
for i in range(10):
if(i>=len(x)):
x.append(-1)
if x[i]>=281 and x[i]<284:
x[i] = x[i]-46
elif x[i]>284:
x[i] = x[i]-47
x = tuple(x)
return x
# In[79]:
segment_df['AttributesIds'] = segment_df['AttributesIds'].apply(pad_tuple_attrs)
# In[80]:
segment_df['AttributesIds'].head()
# In[81]:
image_df = segment_df.groupby('ImageId')['EncodedPixels', 'ClassId', 'AttributesIds'].agg(lambda x: list(x))
size_df = segment_df.groupby('ImageId')['Height', 'Width'].mean()
image_df = image_df.join(size_df, on='ImageId')
print("Total images: ", len(image_df))
image_df.head()
# In[82]:
c = segment_df.iloc[0]['EncodedPixels']
# c= c.split(',')
print(type(c))
# In[83]:
def resize_image(image_path):
image_path = image_path + ".jpg"
# print("image_path", image_path)
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_AREA)
return img
# In[84]:
class FashionDataset(utils_for_FGC.Dataset):
def __init__(self, df):
super().__init__(self)
# Add classes
for i, name in enumerate(class_names):
self.add_class("fashion", i+1, name)
for i, name in enumerate(attr_names):
self.add_attribute("fashion", i, name)
# Add images
for i, row in df.iterrows():
self.add_image("fashion",
image_id=row.name,
path=str(DATA_DIR/'train'/row.name),
labels=row['ClassId'],
attributes=row['AttributesIds'],
annotations=row['EncodedPixels'],
height=row['Height'], width=row['Width'])
def image_reference(self, image_id):
attr_sublist=[]
attr_list=[]
info = self.image_info[image_id]
for x in info['attributes']:
for j in x:
if(j>234):
j=j-46
attr_sublist.append(attr_names[int(j)])
attr_list.append(attr_sublist)
return info['path'], [class_names[int(x)] for x in info['labels']],attr_list
def load_image(self, image_id):
return resize_image(self.image_info[image_id]['path'])
def load_mask(self, image_id):
info = self.image_info[image_id]
mask = np.zeros((IMAGE_SIZE, IMAGE_SIZE, len(info['annotations'])), dtype=np.uint8)
labels = []
attributes = []
for m, (annotation, label) in enumerate(zip(info['annotations'], info['labels'])):
sub_mask = np.full(info['height']*info['width'], 0, dtype=np.uint8)
annotation = [int(x) for x in annotation.split(' ')]
for i, start_pixel in enumerate(annotation[::2]):
sub_mask[start_pixel: start_pixel+annotation[2*i+1]] = 1
sub_mask = sub_mask.reshape((info['height'], info['width']), order='F')
sub_mask = cv2.resize(sub_mask, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_NEAREST)
mask[:, :, m] = sub_mask
labels.append(int(label)+1)
attributes.append(list(info['attributes'][m]))
return mask, np.array(labels), np.array([np.array(attr) for attr in attributes])
# In[85]:
dataset = FashionDataset(image_df)
dataset.prepare()
for i in range(1):
image_id = random.choice(dataset.image_ids)
print(dataset.image_reference(image_id))
image = dataset.load_image(image_id)
mask, class_ids, attr_ids = dataset.load_mask(image_id)
# print("class_ids", class_ids)
# print("attr_ids", attr_ids)
# print(type(attr_ids))
visualize.display_top_masks(image, mask, class_ids, attr_ids, dataset.class_names, dataset.attr_names, limit=4)
# In[86]:
# This code partially supports k-fold training,
# you can specify the fold to train and the total number of folds here
FOLD = 0
N_FOLDS = 3
kf = KFold(n_splits=N_FOLDS, random_state=42, shuffle=True)
splits = kf.split(image_df) # ideally, this should be multilabel stratification
def get_fold():
for i, (train_index, valid_index) in enumerate(splits):
if i == FOLD:
return image_df.iloc[train_index], image_df.iloc[valid_index]
train_df, valid_df = get_fold()
train_dataset = FashionDataset(train_df)
train_dataset.prepare()
valid_dataset = FashionDataset(valid_df)
valid_dataset.prepare()
# In[87]:
train_segments = np.concatenate(train_df['ClassId'].values).astype(int)
print("Total train images: ", len(train_df))
print("Total train segments: ", len(train_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, class_names, rotation='vertical')
plt.show()
valid_segments = np.concatenate(valid_df['ClassId'].values).astype(int)
print("Total train images: ", len(valid_df))
print("Total validation segments: ", len(valid_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(valid_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, class_names, rotation='vertical')
plt.show()
train2_segments = np.concatenate(train_df['AttributesIds'].values).astype(int).reshape((-1,))
train2_segments = train2_segments[train2_segments!= -1]
# print("Total train images: ", len(valid_df))
print("Total train segments: ", len(train2_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train2_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, attr_names, rotation='vertical')
plt.show()
train2_segments = np.concatenate(valid_df['AttributesIds'].values).astype(int).reshape((-1,))
train2_segments = train2_segments[train2_segments!= -1]
# print("Total train images: ", len(valid_df))
print("Total train segments: ", len(train2_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train2_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, attr_names, rotation='vertical')
plt.show()
# In[88]:
import warnings
warnings.filterwarnings("ignore")
# In[89]:
mask
# In[90]:
model = modellib.MaskRCNN(mode='training', config=config, model_dir=ROOT_DIR)
model.load_weights(COCO_WEIGHTS_PATH, by_name=True, exclude=['mrcnn_class_logits', 'mrcnn_bbox_fc', 'mrcnn_bbox', 'mrcnn_mask'])
# In[68]:
augmentation = iaa.Sequential([
iaa.Fliplr(0.5) # only horizontal flip here
])
# In[44]:
# get_ipython().run_cell_magic('time', '', "model.train(train_dataset, valid_dataset,\n learning_rate=config.LEARNING_RATE*2, # train heads with higher lr to speedup learning\n epochs=2,\n layers='heads',\n augmentation=None)\n\nhistory = model.keras_model.history.history")
# In[29]:
import tensorflow as tf
print(tf.__version__)
# In[ ]:
# In[ ]:
# In[ ]:
| #!/usr/bin/env python
# coding: utf-8
# In[70]:
import os
import gc
import sys
import json
import glob
import random
from pathlib import Path
import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import itertools
from tqdm import tqdm
from imgaug import augmenters as iaa
from sklearn.model_selection import StratifiedKFold, KFold
import tensorflow as tf
# In[71]:
DATA_DIR = Path(r'/home/aditya/Downloads/Mask_RCNN/fashion_dataset')
ROOT_DIR = Path(r'/home/aditya/Downloads/Mask_RCNN')
# In[72]:
sys.path.append(r'/home/aditya/Downloads/Mask_RCNN')
from mrcnn.config import Config
from mrcnn import utils_for_FGC
import mrcnn.model_for_FGC as modellib
from mrcnn import visualize
from mrcnn.model_for_FGC import log
# In[ ]:
# In[73]:
COCO_WEIGHTS_PATH = r'/home/aditya/Downloads/Mask_RCNN/mask_rcnn_coco.h5'
NUM_CATS = 46
IMAGE_SIZE = 1024
# In[74]:
class FashionConfig(Config):
NAME = "fashion"
NUM_CLASSES = NUM_CATS + 1 # +1 for the background class
GPU_COUNT = 1
IMAGES_PER_GPU = 2 # a memory error occurs when IMAGES_PER_GPU is too high
RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256)
#DETECTION_NMS_THRESHOLD = 0.0
# STEPS_PER_EPOCH should be the number of instances
# divided by (GPU_COUNT*IMAGES_PER_GPU), and so should VALIDATION_STEPS;
# however, due to the time limit, I set them so that this kernel can be run in 9 hours
STEPS_PER_EPOCH = 1000
VALIDATION_STEPS = 200
## My changes CA
BACKBONE = 'resnet101'
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
IMAGE_RESIZE_MODE = 'square'
MINI_MASK_SHAPE = (112, 112) # (height, width) of the mini-mask
NUM_ATTR = 294
LOSS_WEIGHTS = {
"rpn_class_loss": 1.,
"rpn_bbox_loss": 1.,
"mrcnn_class_loss": 1.,
"mrcnn_bbox_loss": 1.,
"mrcnn_mask_loss": 1.,
"mrcnn_attr_loss":1.
}
config = FashionConfig()
config.display()
# In[75]:
with open(DATA_DIR/"label_descriptions.json") as f:
label_descriptions = json.load(f)
class_names = [x['name'] for x in label_descriptions['categories']]
attr_names = [x['name'] for x in label_descriptions['attributes']]
# In[76]:
print(len(class_names),len(attr_names))
# In[77]:
segment_df = pd.read_csv(DATA_DIR/"train_small.csv")
segment_df['AttributesIds'] = segment_df['AttributesIds'].apply(lambda x:tuple([int(i) for i in x.split(',')]))
# In[78]:
def pad_tuple_attrs(x):
if x!=x:
x = []
else:
x = list(x)
for i in range(10):
if(i>=len(x)):
x.append(-1)
if x[i]>=281 and x[i]<284:
x[i] = x[i]-46
elif x[i]>284:
x[i] = x[i]-47
x = tuple(x)
return x
# In[79]:
segment_df['AttributesIds'] = segment_df['AttributesIds'].apply(pad_tuple_attrs)
# In[80]:
segment_df['AttributesIds'].head()
# In[81]:
image_df = segment_df.groupby('ImageId')['EncodedPixels', 'ClassId', 'AttributesIds'].agg(lambda x: list(x))
size_df = segment_df.groupby('ImageId')['Height', 'Width'].mean()
image_df = image_df.join(size_df, on='ImageId')
print("Total images: ", len(image_df))
image_df.head()
# In[82]:
c = segment_df.iloc[0]['EncodedPixels']
# c= c.split(',')
print(type(c))
# In[83]:
def resize_image(image_path):
image_path = image_path + ".jpg"
# print("image_path", image_path)
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_AREA)
return img
# In[84]:
class FashionDataset(utils_for_FGC.Dataset):
def __init__(self, df):
super().__init__(self)
# Add classes
for i, name in enumerate(class_names):
self.add_class("fashion", i+1, name)
for i, name in enumerate(attr_names):
self.add_attribute("fashion", i, name)
# Add images
for i, row in df.iterrows():
self.add_image("fashion",
image_id=row.name,
path=str(DATA_DIR/'train'/row.name),
labels=row['ClassId'],
attributes=row['AttributesIds'],
annotations=row['EncodedPixels'],
height=row['Height'], width=row['Width'])
def image_reference(self, image_id):
attr_sublist=[]
attr_list=[]
info = self.image_info[image_id]
for x in info['attributes']:
for j in x:
if(j>234):
j=j-46
attr_sublist.append(attr_names[int(j)])
attr_list.append(attr_sublist)
return info['path'], [class_names[int(x)] for x in info['labels']],attr_list
def load_image(self, image_id):
return resize_image(self.image_info[image_id]['path'])
def load_mask(self, image_id):
info = self.image_info[image_id]
mask = np.zeros((IMAGE_SIZE, IMAGE_SIZE, len(info['annotations'])), dtype=np.uint8)
labels = []
attributes = []
for m, (annotation, label) in enumerate(zip(info['annotations'], info['labels'])):
sub_mask = np.full(info['height']*info['width'], 0, dtype=np.uint8)
annotation = [int(x) for x in annotation.split(' ')]
for i, start_pixel in enumerate(annotation[::2]):
sub_mask[start_pixel: start_pixel+annotation[2*i+1]] = 1
sub_mask = sub_mask.reshape((info['height'], info['width']), order='F')
sub_mask = cv2.resize(sub_mask, (IMAGE_SIZE, IMAGE_SIZE), interpolation=cv2.INTER_NEAREST)
mask[:, :, m] = sub_mask
labels.append(int(label)+1)
attributes.append(list(info['attributes'][m]))
return mask, np.array(labels), np.array([np.array(attr) for attr in attributes])
# In[85]:
dataset = FashionDataset(image_df)
dataset.prepare()
for i in range(1):
image_id = random.choice(dataset.image_ids)
print(dataset.image_reference(image_id))
image = dataset.load_image(image_id)
mask, class_ids, attr_ids = dataset.load_mask(image_id)
# print("class_ids", class_ids)
# print("attr_ids", attr_ids)
# print(type(attr_ids))
visualize.display_top_masks(image, mask, class_ids, attr_ids, dataset.class_names, dataset.attr_names, limit=4)
# In[86]:
# This code partially supports k-fold training,
# you can specify the fold to train and the total number of folds here
FOLD = 0
N_FOLDS = 3
kf = KFold(n_splits=N_FOLDS, random_state=42, shuffle=True)
splits = kf.split(image_df) # ideally, this should be multilabel stratification
def get_fold():
for i, (train_index, valid_index) in enumerate(splits):
if i == FOLD:
return image_df.iloc[train_index], image_df.iloc[valid_index]
train_df, valid_df = get_fold()
train_dataset = FashionDataset(train_df)
train_dataset.prepare()
valid_dataset = FashionDataset(valid_df)
valid_dataset.prepare()
# In[87]:
train_segments = np.concatenate(train_df['ClassId'].values).astype(int)
print("Total train images: ", len(train_df))
print("Total train segments: ", len(train_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, class_names, rotation='vertical')
plt.show()
valid_segments = np.concatenate(valid_df['ClassId'].values).astype(int)
print("Total train images: ", len(valid_df))
print("Total validation segments: ", len(valid_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(valid_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, class_names, rotation='vertical')
plt.show()
train2_segments = np.concatenate(train_df['AttributesIds'].values).astype(int).reshape((-1,))
train2_segments = train2_segments[train2_segments!= -1]
# print("Total train images: ", len(valid_df))
print("Total train segments: ", len(train2_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train2_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, attr_names, rotation='vertical')
plt.show()
train2_segments = np.concatenate(valid_df['AttributesIds'].values).astype(int).reshape((-1,))
train2_segments = train2_segments[train2_segments!= -1]
# print("Total train images: ", len(valid_df))
print("Total train segments: ", len(train2_segments))
plt.figure(figsize=(12, 3))
values, counts = np.unique(train2_segments, return_counts=True)
plt.bar(values, counts)
plt.xticks(values, attr_names, rotation='vertical')
plt.show()
# In[88]:
import warnings
warnings.filterwarnings("ignore")
# In[89]:
mask
# In[90]:
model = modellib.MaskRCNN(mode='training', config=config, model_dir=ROOT_DIR)
model.load_weights(COCO_WEIGHTS_PATH, by_name=True, exclude=['mrcnn_class_logits', 'mrcnn_bbox_fc', 'mrcnn_bbox', 'mrcnn_mask'])
# In[68]:
augmentation = iaa.Sequential([
iaa.Fliplr(0.5) # only horizontal flip here
])
# In[44]:
# get_ipython().run_cell_magic('time', '', "model.train(train_dataset, valid_dataset,\n learning_rate=config.LEARNING_RATE*2, # train heads with higher lr to speedup learning\n epochs=2,\n layers='heads',\n augmentation=None)\n\nhistory = model.keras_model.history.history")
# In[29]:
import tensorflow as tf
print(tf.__version__)
# In[ ]:
# In[ ]:
# In[ ]: | en | 0.618986 | #!/usr/bin/env python # coding: utf-8 # In[70]: # In[71]: # In[72]: # In[ ]: # In[73]: # In[74]: # +1 for the background class # a memory error occurs when IMAGES_PER_GPU is too high #DETECTION_NMS_THRESHOLD = 0.0 # STEPS_PER_EPOCH should be the number of instances # divided by (GPU_COUNT*IMAGES_PER_GPU), and so should VALIDATION_STEPS; # however, due to the time limit, I set them so that this kernel can be run in 9 hours ## My changes CA # (height, width) of the mini-mask # In[75]: # In[76]: # In[77]: # In[78]: # In[79]: # In[80]: # In[81]: # In[82]: # c= c.split(',') # In[83]: # print("image_path", image_path) # In[84]: # Add classes # Add images # In[85]: # print("class_ids", class_ids) # print("attr_ids", attr_ids) # print(type(attr_ids)) # In[86]: # This code partially supports k-fold training, # you can specify the fold to train and the total number of folds here # ideally, this should be multilabel stratification # In[87]: # print("Total train images: ", len(valid_df)) # print("Total train images: ", len(valid_df)) # In[88]: # In[89]: # In[90]: # In[68]: # only horizontal flip here # In[44]: # get_ipython().run_cell_magic('time', '', "model.train(train_dataset, valid_dataset,\n learning_rate=config.LEARNING_RATE*2, # train heads with higher lr to speedup learning\n epochs=2,\n layers='heads',\n augmentation=None)\n\nhistory = model.keras_model.history.history") # In[29]: # In[ ]: # In[ ]: # In[ ]: | 1.908731 | 2 |
ModpackDownloader.py | MrKelpy/Modpack-Installer | 0 | 6615280 | <reponame>MrKelpy/Modpack-Installer
"""
This file is distributed as part of the Modpack Installer Project.
The source code may be available at
https://github.com/MrKelpy/Modpack-Installer
If a license applies for this project, the former can be found
in every distribution, as a "LICENSE" file at top level.
"""
# Built-in Imports
import os
import random
import shutil
import string
import zipfile
from datetime import datetime
# Third Party Imports
import requests
from alive_progress import alive_bar
from loguru import logger
# Local Application Imports
from GeneralUtils import GeneralUtils
class ModpackDownloader:
"""
This class implements the main functions of the program, downloading and automatic
management of files in the .minecraft/mods folder.
"""
def __init__(self):
self.__minecraft_folder = os.path.join(os.environ["APPDATA"], ".minecraft")
self.__mods_folder_path = os.path.join(self.__minecraft_folder, "mods")
self.__old_folder_path = os.path.join(self.__mods_folder_path, ".OLD_MODS")
self.__removed_files = list() # List of removed files during the program execution
# There's no need for a list of added files because we can simply check for what files are in the "mods" folder.
def start(self) -> None:
"""
Acts as the main executor of all the main program functions.
:return:
"""
os.makedirs(self.__mods_folder_path, exist_ok=True)
self._secure_old_files(self.__mods_folder_path)
self._download_modpack()
self._setup_game_directories()
self._show_files()
def _download_modpack(self) -> None:
"""
Downloads the modpack and shows a progress bar for the progress.
:return:
"""
download_zip: str = os.path.join(self.__mods_folder_path, "modpack.zip")
chosen_bar: str = random.choice(["classic", "classic2", "squares", "ruler2", "brackets", "fish"])
chunk_size: int = int(GeneralUtils().get_panel_setting("DOWNLOAD-CHUNK-SIZE"))
# Gets the redirect link and opens a request stream to download the content
# Initialises an alive bar to show completion. This bar will be random within the allowed ones.
with requests.get(GeneralUtils().get_panel_setting("REDIRECT1"), stream=True) as r, \
alive_bar(int(r.headers["content-length"])//chunk_size+1, force_tty=True, title="[INFO] Downloading mods",
monitor=False, length=50, elapsed=False, stats=False, bar=chosen_bar, spinner="classic",
spinner_length=0) as bar, \
open(download_zip, 'wb') as file:
r.raise_for_status()
# Downloads a moderately sized chunk per iteration, writing it into a zip in the "mods" folder.
for chunk in r.iter_content(chunk_size=8192):
file.write(chunk)
bar()
# Finishes the remaining part of the progress bar (Because of precision losses)
while bar.current() < (int(r.headers["content-length"])//chunk_size):
bar()
# Extracts the zip contents into the mods folder
with zipfile.ZipFile(download_zip, 'r') as zip_ref:
logger.info("Extracting modpack files")
zip_ref.extractall(self.__mods_folder_path)
# Removes the modpack zip
os.remove(download_zip)
logger.info("Removed residual files")
def _show_files(self) -> None:
"""
Displays a list of the file changes that occurred in the "mods" folder
during the program execution
:return:
"""
logger.debug("Displaying file change info")
# Displays the removed files
for file in self.__removed_files:
logger.info(f"[-] {file}")
# Displays the added files
for file in [x for x in os.listdir(self.__mods_folder_path) if x != ".OLD_FILES"]:
logger.info(f"[+] {file}")
def _secure_old_files(self, path: str, count_removed: bool = True) -> None:
"""
Secures all the "old" files inside the specified folder by moving them
into an .OLD_FILES/<timestamp> folder, so they don't interfere with the installation of the new modpack.
:param str path: The path to the directory to secure the files
:param bool count_removed: Whether or not to count the files towards the removed files.
:return:
"""
if len(os.listdir(path)) <= 0:
logger.info(f"No files found in {path}, nothing to secure.")
return # Not sure how a length < 0 could happen, but I wouldn't be surprised if it did.
# Old files folder to be used. ".OLD_FILES/TIMESTAMP"
old_files_folder: str = os.path.join(path, ".OLD_FILES", str(int(datetime.now().timestamp())))
os.makedirs(old_files_folder, exist_ok=True)
for file in [x for x in os.listdir(path) if x != ".OLD_FILES"]:
filepath: str = os.path.join(path, file)
shutil.move(filepath, old_files_folder)
if count_removed: self.__removed_files.append(file)
logger.debug(f"[$] Secured {filepath} in {old_files_folder}")
def _setup_game_directories(self) -> None:
"""
Detects any directories that have been extracted from the modpack.zip file
and sets them up in the .minecraft folder in a similar way as the "mods" folder.
:return:
Any directories without characters in their name are probably coremod forge version
folders, and the old files directory.
"""
for directory in [x for x in os.listdir(self.__mods_folder_path)
if os.path.isdir(os.path.join(self.__mods_folder_path, x)) and x != ".OLD_FILES"]:
# Ignore directories with no characters in their name
if not any(char in directory for char in string.ascii_letters): continue
logger.info(f"Setting up the {directory} folder")
dst_dirpath: str = os.path.join(self.__minecraft_folder, directory)
src_dirpath: str = os.path.join(self.__mods_folder_path, directory)
os.makedirs(dst_dirpath, exist_ok=True)
logger.debug(f"Ensured the existance of a {dst_dirpath} directory")
# The "mods" folder is an exception since it gets secured when the program boots up.
if directory != "mods": self._secure_old_files(dst_dirpath, count_removed=False)
for file in os.listdir(os.path.join(self.__mods_folder_path, directory)):
filepath: str = os.path.join(src_dirpath, file)
shutil.move(filepath, dst_dirpath)
logger.debug(f"[>] Sent {file} to the {directory} folder")
os.rmdir(os.path.join(self.__mods_folder_path, directory))
| """
This file is distributed as part of the Modpack Installer Project.
The source code may be available at
https://github.com/MrKelpy/Modpack-Installer
If a license applies for this project, the former can be found
in every distribution, as a "LICENSE" file at top level.
"""
# Built-in Imports
import os
import random
import shutil
import string
import zipfile
from datetime import datetime
# Third Party Imports
import requests
from alive_progress import alive_bar
from loguru import logger
# Local Application Imports
from GeneralUtils import GeneralUtils
class ModpackDownloader:
"""
This class implements the main functions of the program, downloading and automatic
management of files in the .minecraft/mods folder.
"""
def __init__(self):
self.__minecraft_folder = os.path.join(os.environ["APPDATA"], ".minecraft")
self.__mods_folder_path = os.path.join(self.__minecraft_folder, "mods")
self.__old_folder_path = os.path.join(self.__mods_folder_path, ".OLD_MODS")
self.__removed_files = list() # List of removed files during the program execution
# There's no need for a list of added files because we can simply check for what files are in the "mods" folder.
def start(self) -> None:
"""
Acts as the main executor of all the main program functions.
:return:
"""
os.makedirs(self.__mods_folder_path, exist_ok=True)
self._secure_old_files(self.__mods_folder_path)
self._download_modpack()
self._setup_game_directories()
self._show_files()
def _download_modpack(self) -> None:
"""
Downloads the modpack and shows a progress bar for the progress.
:return:
"""
download_zip: str = os.path.join(self.__mods_folder_path, "modpack.zip")
chosen_bar: str = random.choice(["classic", "classic2", "squares", "ruler2", "brackets", "fish"])
chunk_size: int = int(GeneralUtils().get_panel_setting("DOWNLOAD-CHUNK-SIZE"))
# Gets the redirect link and opens a request stream to download the content
# Initialises an alive bar to show completion. This bar will be random within the allowed ones.
with requests.get(GeneralUtils().get_panel_setting("REDIRECT1"), stream=True) as r, \
alive_bar(int(r.headers["content-length"])//chunk_size+1, force_tty=True, title="[INFO] Downloading mods",
monitor=False, length=50, elapsed=False, stats=False, bar=chosen_bar, spinner="classic",
spinner_length=0) as bar, \
open(download_zip, 'wb') as file:
r.raise_for_status()
# Downloads a moderately sized chunk per iteration, writing it into a zip in the "mods" folder.
for chunk in r.iter_content(chunk_size=8192):
file.write(chunk)
bar()
# Finishes the remaining part of the progress bar (Because of precision losses)
while bar.current() < (int(r.headers["content-length"])//chunk_size):
bar()
# Extracts the zip contents into the mods folder
with zipfile.ZipFile(download_zip, 'r') as zip_ref:
logger.info("Extracting modpack files")
zip_ref.extractall(self.__mods_folder_path)
# Removes the modpack zip
os.remove(download_zip)
logger.info("Removed residual files")
def _show_files(self) -> None:
"""
Displays a list of the file changes that occurred in the "mods" folder
during the program execution
:return:
"""
logger.debug("Displaying file change info")
# Displays the removed files
for file in self.__removed_files:
logger.info(f"[-] {file}")
# Displays the added files
for file in [x for x in os.listdir(self.__mods_folder_path) if x != ".OLD_FILES"]:
logger.info(f"[+] {file}")
def _secure_old_files(self, path: str, count_removed: bool = True) -> None:
"""
Secures all the "old" files inside the specified folder by moving them
into an .OLD_FILES/<timestamp> folder, so they don't interfere with the installation of the new modpack.
:param str path: The path to the directory to secure the files
:param bool count_removed: Whether or not to count the files towards the removed files.
:return:
"""
if len(os.listdir(path)) <= 0:
logger.info(f"No files found in {path}, nothing to secure.")
return # Not sure how a length < 0 could happen, but I wouldn't be surprised if it did.
# Old files folder to be used. ".OLD_FILES/TIMESTAMP"
old_files_folder: str = os.path.join(path, ".OLD_FILES", str(int(datetime.now().timestamp())))
os.makedirs(old_files_folder, exist_ok=True)
for file in [x for x in os.listdir(path) if x != ".OLD_FILES"]:
filepath: str = os.path.join(path, file)
shutil.move(filepath, old_files_folder)
if count_removed: self.__removed_files.append(file)
logger.debug(f"[$] Secured {filepath} in {old_files_folder}")
def _setup_game_directories(self) -> None:
"""
Detects any directories that have been extracted from the modpack.zip file
and sets them up in the .minecraft folder in a similar way as the "mods" folder.
:return:
Any directories without characters in their name are probably coremod forge version
folders, and the old files directory.
"""
for directory in [x for x in os.listdir(self.__mods_folder_path)
if os.path.isdir(os.path.join(self.__mods_folder_path, x)) and x != ".OLD_FILES"]:
# Ignore directories with no characters in their name
if not any(char in directory for char in string.ascii_letters): continue
logger.info(f"Setting up the {directory} folder")
dst_dirpath: str = os.path.join(self.__minecraft_folder, directory)
src_dirpath: str = os.path.join(self.__mods_folder_path, directory)
os.makedirs(dst_dirpath, exist_ok=True)
logger.debug(f"Ensured the existance of a {dst_dirpath} directory")
# The "mods" folder is an exception since it gets secured when the program boots up.
if directory != "mods": self._secure_old_files(dst_dirpath, count_removed=False)
for file in os.listdir(os.path.join(self.__mods_folder_path, directory)):
filepath: str = os.path.join(src_dirpath, file)
shutil.move(filepath, dst_dirpath)
logger.debug(f"[>] Sent {file} to the {directory} folder")
os.rmdir(os.path.join(self.__mods_folder_path, directory)) | en | 0.909421 | This file is distributed as part of the Modpack Installer Project. The source code may be available at https://github.com/MrKelpy/Modpack-Installer If a license applies for this project, the former can be found in every distribution, as a "LICENSE" file at top level. # Built-in Imports # Third Party Imports # Local Application Imports This class implements the main functions of the program, downloading and automatic management of files in the .minecraft/mods folder. # List of removed files during the program execution # There's no need for a list of added files because we can simply check for what files are in the "mods" folder. Acts as the main executor of all the main program functions. :return: Downloads the modpack and shows a progress bar for the progress. :return: # Gets the redirect link and opens a request stream to download the content # Initialises an alive bar to show completion. This bar will be random within the allowed ones. # Downloads a moderately sized chunk per iteration, writing it into a zip in the "mods" folder. # Finishes the remaining part of the progress bar (Because of precision losses) # Extracts the zip contents into the mods folder # Removes the modpack zip Displays a list of the file changes that occurred in the "mods" folder during the program execution :return: # Displays the removed files # Displays the added files Secures all the "old" files inside the specified folder by moving them into an .OLD_FILES/<timestamp> folder, so they don't interfere with the installation of the new modpack. :param str path: The path to the directory to secure the files :param bool count_removed: Whether or not to count the files towards the removed files. :return: # Not sure how a length < 0 could happen, but I wouldn't be surprised if it did. # Old files folder to be used. ".OLD_FILES/TIMESTAMP" Detects any directories that have been extracted from the modpack.zip file and sets them up in the .minecraft folder in a similar way as the "mods" folder. :return: Any directories without characters in their name are probably coremod forge version folders, and the old files directory. # Ignore directories with no characters in their name # The "mods" folder is an exception since it gets secured when the program boots up. | 2.816547 | 3 |
rabin/crypto_configuration.py | LukasForst/rabin-crypto | 0 | 6615281 | # given by the assignment
# https://blackboard.au.dk/webapps/assignment/uploadAssignment?content_id=_2832837_1&course_id=_136793_1&group_id=&mode=view
TARGET_SECURITY_LEVEL_BITS = 128
PRIME_LENGTH_BITS = 1536
# m < n and as |n| = |p| + |q|
MAX_ENCRYPTED_BITS = (PRIME_LENGTH_BITS * 2) - 1
# must be smaller then MAX_ENCRYPTED_BITS // 8
# arbitrary selected as 256 bytes
PLAINTEXT_BLOCK_SIZE_BYTES = 256
# encrypting 256 bytes result in 384 cipher text bytes
CIPHERTEXT_BLOCK_SIZE_BYTES = PLAINTEXT_BLOCK_SIZE_BYTES + 128
| # given by the assignment
# https://blackboard.au.dk/webapps/assignment/uploadAssignment?content_id=_2832837_1&course_id=_136793_1&group_id=&mode=view
TARGET_SECURITY_LEVEL_BITS = 128
PRIME_LENGTH_BITS = 1536
# m < n and as |n| = |p| + |q|
MAX_ENCRYPTED_BITS = (PRIME_LENGTH_BITS * 2) - 1
# must be smaller then MAX_ENCRYPTED_BITS // 8
# arbitrary selected as 256 bytes
PLAINTEXT_BLOCK_SIZE_BYTES = 256
# encrypting 256 bytes result in 384 cipher text bytes
CIPHERTEXT_BLOCK_SIZE_BYTES = PLAINTEXT_BLOCK_SIZE_BYTES + 128
| en | 0.756123 | # given by the assignment # https://blackboard.au.dk/webapps/assignment/uploadAssignment?content_id=_2832837_1&course_id=_136793_1&group_id=&mode=view # m < n and as |n| = |p| + |q| # must be smaller then MAX_ENCRYPTED_BITS // 8 # arbitrary selected as 256 bytes # encrypting 256 bytes result in 384 cipher text bytes | 2.40016 | 2 |
fclpy/lispenv.py | fclpy/fclpy | 1 | 6615282 | <gh_stars>1-10
import fclpy.lisptype
current_environment = fclpy.lisptype.Environment() | import fclpy.lisptype
current_environment = fclpy.lisptype.Environment() | none | 1 | 1.154574 | 1 | |
pip_api/_pep650.py | sugatoray/pip-api | 81 | 6615283 | <filename>pip_api/_pep650.py
import subprocess
from pip_api._call import call
def invoke_install(path, *, dependency_group=None, **kwargs):
try:
call(
"install", "--requirement", dependency_group or "requirements.txt", cwd=path
)
except subprocess.CalledProcessError as e:
return e.returncode
return 0
def invoke_uninstall(path, *, dependency_group=None, **kwargs):
try:
call(
"uninstall",
"--requirement",
dependency_group or "requirements.txt",
cwd=path,
)
except subprocess.CalledProcessError as e:
return e.returncode
return 0
def get_dependencies_to_install(path, *, dependency_group=None, **kwargs):
# See https://github.com/pypa/pip/issues/53
raise Exception("pip is unable to do a dry run")
def get_dependency_groups(path, **kwargs):
raise Exception("pip is unable to discover dependency groups")
def update_dependencies(
path, dependency_specifiers, *, dependency_group=None, **kwargs
):
# See https://github.com/pypa/pip/issues/1479
raise Exception("pip is unable to update dependency files")
| <filename>pip_api/_pep650.py
import subprocess
from pip_api._call import call
def invoke_install(path, *, dependency_group=None, **kwargs):
try:
call(
"install", "--requirement", dependency_group or "requirements.txt", cwd=path
)
except subprocess.CalledProcessError as e:
return e.returncode
return 0
def invoke_uninstall(path, *, dependency_group=None, **kwargs):
try:
call(
"uninstall",
"--requirement",
dependency_group or "requirements.txt",
cwd=path,
)
except subprocess.CalledProcessError as e:
return e.returncode
return 0
def get_dependencies_to_install(path, *, dependency_group=None, **kwargs):
# See https://github.com/pypa/pip/issues/53
raise Exception("pip is unable to do a dry run")
def get_dependency_groups(path, **kwargs):
raise Exception("pip is unable to discover dependency groups")
def update_dependencies(
path, dependency_specifiers, *, dependency_group=None, **kwargs
):
# See https://github.com/pypa/pip/issues/1479
raise Exception("pip is unable to update dependency files")
| en | 0.554669 | # See https://github.com/pypa/pip/issues/53 # See https://github.com/pypa/pip/issues/1479 | 2.510659 | 3 |
Respostas/ex-3.py | danielsaad/oficina-matplotlib | 5 | 6615284 | <reponame>danielsaad/oficina-matplotlib
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
labels = ['Comédia','Ação','Romance','Drama','Ficção Científica','Terror']
colors = ['yellow','red','blue','green','purple','gray']
valores = [4,5,6,1,4,2]
width=.5
gap = 1;
locs = [np.arange(0,len(valores)*width,width)]
for i,valor in enumerate(valores):
loc = i*width
ax1.bar(loc,valor,width=width,color=colors[i],label=labels[i])
ax2.pie(valores,labels=labels,shadow=True,autopct='%1.1f%%',colors=colors)
ax1.set_title("Quantidade Absoluta de Respostas")
ax1.legend(loc='best')
ax2.set_title("Quantidade Relativa de Respostas")
plt.show()
| import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
labels = ['Comédia','Ação','Romance','Drama','Ficção Científica','Terror']
colors = ['yellow','red','blue','green','purple','gray']
valores = [4,5,6,1,4,2]
width=.5
gap = 1;
locs = [np.arange(0,len(valores)*width,width)]
for i,valor in enumerate(valores):
loc = i*width
ax1.bar(loc,valor,width=width,color=colors[i],label=labels[i])
ax2.pie(valores,labels=labels,shadow=True,autopct='%1.1f%%',colors=colors)
ax1.set_title("Quantidade Absoluta de Respostas")
ax1.legend(loc='best')
ax2.set_title("Quantidade Relativa de Respostas")
plt.show() | none | 1 | 3.136928 | 3 | |
tests/python/test_taskmage2/test_utils/test_ctags.py | willjp/vim-taskmage | 1 | 6615285 | import os
import re
import taskmage2
from taskmage2.utils import ctags
_taskmagedir = os.path.dirname(os.path.abspath(taskmage2.__file__))
_test_resources = os.path.abspath('{}/../../tests/resources'.format(_taskmagedir))
class Test_CtagsFile:
def test_read_from_file(self):
filepath = '{}/mixed_headers.tasklist'.format(_test_resources)
ctagsfile = ctags.CtagsFile()
ctagsfile.load_file(filepath)
render = ctagsfile.render()
expects = (
'!_TAG_FILE_ENCODING utf-8\n'
+ '!_TAG_FILE_FORMAT 2\n'
+ '!_TAG_FILE_SORTED 1\n'
+ 'section 1\t{}\t/^section 1$/;"\ts\tline:5\n'.format(filepath)
+ 'section 1.a\t{}\t/^section 1.a$/;"\ts\tline:10\tsection:section 1\n'.format(filepath)
+ 'section 1.b\t{}\t/^section 1.b$/;"\ts\tline:15\tsection:section 1\n'.format(filepath)
+ 'section 2\t{}\t/^section 2$/;"\ts\tline:20\n'.format(filepath)
+ 'section 2.a\t{}\t/^section 2.a$/;"\ts\tline:23\tsection:section 2'.format(filepath)
)
assert render == expects
class Test_CtagsHeaderEntry:
class Test_match_regex:
@classmethod
def setup_class(self):
self.regex = ctags.CtagsHeaderEntry.match_regex()
def test_no_headers_no_match(self):
text = (
'* task\n'
'* {*5820C61E8A1B4293B0FBBEFC176917CF*}another task\n'
)
match = re.search(self.regex, text, re.MULTILINE)
assert match is None
def test_matches_header_with_id(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_without_id(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_with_type_and_id(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_with_type_without_id(self):
text = (
'file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_multiple_headers(self):
text = (
'My Header\n'
'========='
'\n'
'My Other Header\n'
'---------------'
)
assert len(list(re.finditer(self.regex, text, re.MULTILINE))) == 2
def test_extracts_name(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('name') == 'My Header'
def test_extracts_underline(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('underline') == '========='
def test_extracts_id_when_present(self):
text = (
'{*F89595A0D463456D9489A19736C5ABC0*}My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('uuid') == 'F89595A0D463456D9489A19736C5ABC0'
def test_extracts_id_when_not_present(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('uuid') is None
def test_extracts_type_when_present(self):
text = (
'file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('type') == 'file'
def test_extracts_type_when_not_present(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('type') == ''
class Test__find_entries:
def test_finds_match(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert len(matches) == 1
def test_no_matches_returns_empty_list(self):
text = ''
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert matches == []
def test_rejects_headers_without_underline_matching_title_length(self):
text = (
'My Header\n'
'====='
)
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert matches == []
class Test__set_entries_lineno:
def test_obtains_first_match_lineno(self):
text = (
'* task A\n'
'* task B\n'
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
entries = [
ctags.CtagsHeaderEntry(
uuid_='8ED87AC2D52F4734BAFCB7BDAA923DA4',
name='My Header',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=18,
uline_char='=',
)
]
ctags.CtagsHeaderEntry._set_entries_lineno(text, entries)
assert len(entries) == 1
assert entries[0].lineno == 3
def test_obtains_second_match_lineno(self):
text = (
'* task A\n'
'* task B\n'
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'=========\n'
'\n'
'* subtask A\n' # 86
'* subtask B\n' # 98
'\n'
'Header 2\n'
'--------\n'
)
entries = [
ctags.CtagsHeaderEntry(
uuid_='8ED87AC2D52F4734BAFCB7BDAA923DA4',
name='My Header',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=18,
uline_char='=',
),
ctags.CtagsHeaderEntry(
name='Header 2',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=100,
uline_char='-',
)
]
ctags.CtagsHeaderEntry._set_entries_lineno(text, entries)
assert len(entries) == 2
assert entries[1].lineno == 9
class Test_render:
def test_section(self):
entry = ctags.CtagsHeaderEntry(
name='My Header',
filepath='/path/to/todo.mtask',
ntype='', # sections do not have a prefix
lineno=5,
)
render = entry.render()
expected = 'My Header\t/path/to/todo.mtask\t/^My Header$/;"\ts\tline:5'
assert render == expected
| import os
import re
import taskmage2
from taskmage2.utils import ctags
_taskmagedir = os.path.dirname(os.path.abspath(taskmage2.__file__))
_test_resources = os.path.abspath('{}/../../tests/resources'.format(_taskmagedir))
class Test_CtagsFile:
def test_read_from_file(self):
filepath = '{}/mixed_headers.tasklist'.format(_test_resources)
ctagsfile = ctags.CtagsFile()
ctagsfile.load_file(filepath)
render = ctagsfile.render()
expects = (
'!_TAG_FILE_ENCODING utf-8\n'
+ '!_TAG_FILE_FORMAT 2\n'
+ '!_TAG_FILE_SORTED 1\n'
+ 'section 1\t{}\t/^section 1$/;"\ts\tline:5\n'.format(filepath)
+ 'section 1.a\t{}\t/^section 1.a$/;"\ts\tline:10\tsection:section 1\n'.format(filepath)
+ 'section 1.b\t{}\t/^section 1.b$/;"\ts\tline:15\tsection:section 1\n'.format(filepath)
+ 'section 2\t{}\t/^section 2$/;"\ts\tline:20\n'.format(filepath)
+ 'section 2.a\t{}\t/^section 2.a$/;"\ts\tline:23\tsection:section 2'.format(filepath)
)
assert render == expects
class Test_CtagsHeaderEntry:
class Test_match_regex:
@classmethod
def setup_class(self):
self.regex = ctags.CtagsHeaderEntry.match_regex()
def test_no_headers_no_match(self):
text = (
'* task\n'
'* {*5820C61E8A1B4293B0FBBEFC176917CF*}another task\n'
)
match = re.search(self.regex, text, re.MULTILINE)
assert match is None
def test_matches_header_with_id(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_without_id(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_with_type_and_id(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_header_with_type_without_id(self):
text = (
'file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group() == text
def test_matches_multiple_headers(self):
text = (
'My Header\n'
'========='
'\n'
'My Other Header\n'
'---------------'
)
assert len(list(re.finditer(self.regex, text, re.MULTILINE))) == 2
def test_extracts_name(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('name') == 'My Header'
def test_extracts_underline(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('underline') == '========='
def test_extracts_id_when_present(self):
text = (
'{*F89595A0D463456D9489A19736C5ABC0*}My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('uuid') == 'F89595A0D463456D9489A19736C5ABC0'
def test_extracts_id_when_not_present(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('uuid') is None
def test_extracts_type_when_present(self):
text = (
'file::My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('type') == 'file'
def test_extracts_type_when_not_present(self):
text = (
'My Header\n'
'========='
)
match = re.search(self.regex, text, re.MULTILINE)
assert match.group('type') == ''
class Test__find_entries:
def test_finds_match(self):
text = (
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert len(matches) == 1
def test_no_matches_returns_empty_list(self):
text = ''
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert matches == []
def test_rejects_headers_without_underline_matching_title_length(self):
text = (
'My Header\n'
'====='
)
matches = ctags.CtagsHeaderEntry._find_entries(text)
assert matches == []
class Test__set_entries_lineno:
def test_obtains_first_match_lineno(self):
text = (
'* task A\n'
'* task B\n'
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'========='
)
entries = [
ctags.CtagsHeaderEntry(
uuid_='8ED87AC2D52F4734BAFCB7BDAA923DA4',
name='My Header',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=18,
uline_char='=',
)
]
ctags.CtagsHeaderEntry._set_entries_lineno(text, entries)
assert len(entries) == 1
assert entries[0].lineno == 3
def test_obtains_second_match_lineno(self):
text = (
'* task A\n'
'* task B\n'
'{*8ED87AC2D52F4734BAFCB7BDAA923DA4*}My Header\n'
'=========\n'
'\n'
'* subtask A\n' # 86
'* subtask B\n' # 98
'\n'
'Header 2\n'
'--------\n'
)
entries = [
ctags.CtagsHeaderEntry(
uuid_='8ED87AC2D52F4734BAFCB7BDAA923DA4',
name='My Header',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=18,
uline_char='=',
),
ctags.CtagsHeaderEntry(
name='Header 2',
ntype='', # section has no ntype
filepath='/path/file.tasklist',
start_pos=100,
uline_char='-',
)
]
ctags.CtagsHeaderEntry._set_entries_lineno(text, entries)
assert len(entries) == 2
assert entries[1].lineno == 9
class Test_render:
def test_section(self):
entry = ctags.CtagsHeaderEntry(
name='My Header',
filepath='/path/to/todo.mtask',
ntype='', # sections do not have a prefix
lineno=5,
)
render = entry.render()
expected = 'My Header\t/path/to/todo.mtask\t/^My Header$/;"\ts\tline:5'
assert render == expected
| en | 0.921904 | # section has no ntype # 86 # 98 # section has no ntype # section has no ntype # sections do not have a prefix | 2.573937 | 3 |
Python_ABC/2-5list/1listIndexSlice.py | Chandler-Song/Python_Awesome | 0 | 6615286 | <reponame>Chandler-Song/Python_Awesome
my_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
# -10,-9,-8,-7,-6,-5,-4,-3,-2,-1
# list[start:end:step]
print(my_list)
# print(my_list[0:4])
# print(my_list[3:8])
# print(my_list[-7:8])
# print(my_list[-7:-2])
# print(my_list[:-2])
# print(my_list[:])
# print(my_list[::2])
# print(my_list[-1:2:-1])
# print(my_list[2:-1:-1])
print(my_list[-1:2:-2])
| my_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
# -10,-9,-8,-7,-6,-5,-4,-3,-2,-1
# list[start:end:step]
print(my_list)
# print(my_list[0:4])
# print(my_list[3:8])
# print(my_list[-7:8])
# print(my_list[-7:-2])
# print(my_list[:-2])
# print(my_list[:])
# print(my_list[::2])
# print(my_list[-1:2:-1])
# print(my_list[2:-1:-1])
print(my_list[-1:2:-2]) | en | 0.09563 | # 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 # -10,-9,-8,-7,-6,-5,-4,-3,-2,-1 # list[start:end:step] # print(my_list[0:4]) # print(my_list[3:8]) # print(my_list[-7:8]) # print(my_list[-7:-2]) # print(my_list[:-2]) # print(my_list[:]) # print(my_list[::2]) # print(my_list[-1:2:-1]) # print(my_list[2:-1:-1]) | 3.455604 | 3 |
dp/pager.py | dico-api/dico-dp | 6 | 6615287 | import typing
import asyncio
import dico
import dico_command
class Pager:
next_emoji = "➡"
prev_emoji = "⬅"
def __init__(self,
bot: dico_command.Bot,
channel: dico.Channel,
author: dico.User,
pages: typing.List[typing.Union[str, dico.Embed]],
extra_button: dico.Button = None,
*, is_embed: bool = False,
reply: dico.Message = None,
timeout: int = 30):
self.bot = bot
self.channel = channel
self.author = author
self.pages = pages
self.extra_button = extra_button
self.is_embed = is_embed
self.reply = reply
self.timeout = timeout
self.current = 0
self.message = None # Should be set later.
@property
def __max_page(self):
return len(self.pages) - 1
@property
def current_page(self):
return self.current + 1
def next(self):
self.current = self.current + 1 if self.current + 1 <= self.__max_page else 0
return self.pages[self.current]
def prev(self):
self.current = self.current - 1 if self.current - 1 >= 0 else self.__max_page
return self.pages[self.current]
@property
def page_button(self):
return dico.Button(style=2, label=f"Page {self.current_page}/{len(self.pages)}", custom_id=f"pages{self.message.id}", disabled=True)
def current_action_row(self, disabled: bool = False):
next_button = dico.Button(style=1, label="Next", custom_id=f"next{self.message.id}", emoji=self.next_emoji, disabled=disabled)
prev_button = dico.Button(style=1, label="Prev", custom_id=f"prev{self.message.id}", emoji=self.prev_emoji, disabled=disabled)
buttons = [prev_button, self.page_button, next_button]
if self.extra_button:
if not self.extra_button.custom_id.endswith(str(self.message.id)):
self.extra_button.custom_id += str(self.message.id)
if disabled:
self.extra_button.disabled = True
else:
self.extra_button.disabled = False
buttons.append(self.extra_button)
return dico.ActionRow(*buttons)
async def start_flatten(self):
return_list = []
async for x in self.start():
return_list.append(x)
return return_list
async def start(self):
func = self.channel.send if not self.reply else self.reply.reply
self.message = await func(content=self.pages[0] if not self.is_embed else None, embed=self.pages[0] if self.is_embed else None)
await self.message.edit(component=self.current_action_row())
while not self.bot.websocket_closed:
try:
interaction: dico.Interaction = await self.bot.wait(
"interaction_create",
check=lambda inter: int(inter.author) == int(self.author.id),
timeout=self.timeout
)
await interaction.create_response(dico.InteractionResponse(callback_type=dico.InteractionCallbackType.DEFERRED_UPDATE_MESSAGE, data={}))
if interaction.data.custom_id.startswith("next"):
page = self.next()
await self.message.edit(content=page if not self.is_embed else None,
embed=page if self.is_embed else None,
component=self.current_action_row())
elif interaction.data.custom_id.startswith("prev"):
page = self.prev()
await self.message.edit(content=page if not self.is_embed else None,
embed=page if self.is_embed else None,
component=self.current_action_row())
else:
yield self.current_page
except asyncio.TimeoutError:
break
await self.message.edit(component=self.current_action_row(disabled=True))
| import typing
import asyncio
import dico
import dico_command
class Pager:
next_emoji = "➡"
prev_emoji = "⬅"
def __init__(self,
bot: dico_command.Bot,
channel: dico.Channel,
author: dico.User,
pages: typing.List[typing.Union[str, dico.Embed]],
extra_button: dico.Button = None,
*, is_embed: bool = False,
reply: dico.Message = None,
timeout: int = 30):
self.bot = bot
self.channel = channel
self.author = author
self.pages = pages
self.extra_button = extra_button
self.is_embed = is_embed
self.reply = reply
self.timeout = timeout
self.current = 0
self.message = None # Should be set later.
@property
def __max_page(self):
return len(self.pages) - 1
@property
def current_page(self):
return self.current + 1
def next(self):
self.current = self.current + 1 if self.current + 1 <= self.__max_page else 0
return self.pages[self.current]
def prev(self):
self.current = self.current - 1 if self.current - 1 >= 0 else self.__max_page
return self.pages[self.current]
@property
def page_button(self):
return dico.Button(style=2, label=f"Page {self.current_page}/{len(self.pages)}", custom_id=f"pages{self.message.id}", disabled=True)
def current_action_row(self, disabled: bool = False):
next_button = dico.Button(style=1, label="Next", custom_id=f"next{self.message.id}", emoji=self.next_emoji, disabled=disabled)
prev_button = dico.Button(style=1, label="Prev", custom_id=f"prev{self.message.id}", emoji=self.prev_emoji, disabled=disabled)
buttons = [prev_button, self.page_button, next_button]
if self.extra_button:
if not self.extra_button.custom_id.endswith(str(self.message.id)):
self.extra_button.custom_id += str(self.message.id)
if disabled:
self.extra_button.disabled = True
else:
self.extra_button.disabled = False
buttons.append(self.extra_button)
return dico.ActionRow(*buttons)
async def start_flatten(self):
return_list = []
async for x in self.start():
return_list.append(x)
return return_list
async def start(self):
func = self.channel.send if not self.reply else self.reply.reply
self.message = await func(content=self.pages[0] if not self.is_embed else None, embed=self.pages[0] if self.is_embed else None)
await self.message.edit(component=self.current_action_row())
while not self.bot.websocket_closed:
try:
interaction: dico.Interaction = await self.bot.wait(
"interaction_create",
check=lambda inter: int(inter.author) == int(self.author.id),
timeout=self.timeout
)
await interaction.create_response(dico.InteractionResponse(callback_type=dico.InteractionCallbackType.DEFERRED_UPDATE_MESSAGE, data={}))
if interaction.data.custom_id.startswith("next"):
page = self.next()
await self.message.edit(content=page if not self.is_embed else None,
embed=page if self.is_embed else None,
component=self.current_action_row())
elif interaction.data.custom_id.startswith("prev"):
page = self.prev()
await self.message.edit(content=page if not self.is_embed else None,
embed=page if self.is_embed else None,
component=self.current_action_row())
else:
yield self.current_page
except asyncio.TimeoutError:
break
await self.message.edit(component=self.current_action_row(disabled=True))
| en | 0.794131 | # Should be set later. | 2.430362 | 2 |
notebooks/plotbetalog.py | aw02m/Spiking_neural_networks | 0 | 6615288 | <reponame>aw02m/Spiking_neural_networks<gh_stars>0
from cmath import nan
import numpy as np
import math
import matplotlib.pyplot as plt
bifparams = np.load('betalog.npy')[:, 1:3]
plt.plot(bifparams[:, 0], bifparams[:, 1])
plt.savefig('betalog.jpg') | from cmath import nan
import numpy as np
import math
import matplotlib.pyplot as plt
bifparams = np.load('betalog.npy')[:, 1:3]
plt.plot(bifparams[:, 0], bifparams[:, 1])
plt.savefig('betalog.jpg') | none | 1 | 2.453941 | 2 | |
Projects/cam-pir2/pir-sample.py | pkbullock/RaspberryPi | 0 | 6615289 | <reponame>pkbullock/RaspberryPi
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)
GPIO.setup(18, GPIO.OUT)
while True:
input_state = GPIO.input(17)
GPIO.output(18, False)
if input_state == True:
print('Motion Detected')
GPIO.output(18, True)
time.sleep(1) | import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)
GPIO.setup(18, GPIO.OUT)
while True:
input_state = GPIO.input(17)
GPIO.output(18, False)
if input_state == True:
print('Motion Detected')
GPIO.output(18, True)
time.sleep(1) | none | 1 | 3.240065 | 3 | |
models/base.py | gugadev/peewee-crud | 0 | 6615290 | <reponame>gugadev/peewee-crud
from peewee import *
from playhouse.sqlite_ext import SqliteExtDatabase
class BaseModel(Model):
class Meta:
database = SqliteExtDatabase("data.db")
| from peewee import *
from playhouse.sqlite_ext import SqliteExtDatabase
class BaseModel(Model):
class Meta:
database = SqliteExtDatabase("data.db") | none | 1 | 2.13475 | 2 | |
picbackend/middleware.py | bbcawodu/careadvisors-backend | 0 | 6615291 | class CsrfHeaderMiddleware:
def process_response(self, request, response):
if "CSRF_COOKIE" in request.META:
# csrfviewmiddleware sets response cookie as request.META['CSRF_COOKIE']
response["X-CSRFTOKEN"] = request.META['CSRF_COOKIE']
return response
| class CsrfHeaderMiddleware:
def process_response(self, request, response):
if "CSRF_COOKIE" in request.META:
# csrfviewmiddleware sets response cookie as request.META['CSRF_COOKIE']
response["X-CSRFTOKEN"] = request.META['CSRF_COOKIE']
return response
| en | 0.865189 | # csrfviewmiddleware sets response cookie as request.META['CSRF_COOKIE'] | 2.13975 | 2 |
quickstarts/python-detect-anomalies.py | Tapasgt/AnomalyDetector | 0 | 6615292 | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import requests
import json
# URLs for anomaly detection with the Anomaly Detector API
batch_detection_url = "/anomalydetector/v1.0/timeseries/entire/detect"
latest_point_detection_url = "/anomalydetector/v1.0/timeseries/last/detect"
###############################################
#### Update or verify the following values. ###
###############################################
# Replace the endpoint URL with the correct one for your subscription. Your endpoint can be found in the Azure portal.
# for example: https://westus2.api.cognitive.microsoft.com
endpoint = "[YOUR_ENDPOINT_URL]"
# Replace with your valid subscription key.
subscription_key = "[YOUR_SUBSCRIPTION_KEY]"
# Replace with a path to the JSON formatted time series data.
data_location = "[PATH_TO_TIME_SERIES_DATA]"
###############################################
"""
Sends an anomaly detection request to the Anomaly Detector API.
If the request is successful, the JSON response is returned.
"""
def send_request(endpoint, url, subscription_key, request_data):
headers = {'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': subscription_key}
response = requests.post(endpoint+url, data=json.dumps(request_data), headers=headers)
if response.status_code == 200:
return json.loads(response.content.decode("utf-8"))
else:
print(response.status_code)
raise Exception(response.text)
"""
Detect anomalies throughout the time series data by submitting it as a batch to the API.
"""
def detect_batch(request_data):
print("Detecting anomalies as a batch")
# Send the request, and print the JSON result
result = send_request(endpoint, batch_detection_url, subscription_key, request_data)
print(json.dumps(result, indent=4))
# Find and display the positions of anomalies in the data set
anomalies = result["isAnomaly"]
print("Anomalies detected in the following data positions:")
for x in range(len(anomalies)):
if anomalies[x] == True:
print (x)
"""
Detect if the latest data point in the time series is an anomaly.
"""
def detect_latest(request_data):
print("Determining if latest data point is an anomaly")
# send the request, and print the JSON result
result = send_request(endpoint, latest_point_detection_url, subscription_key, request_data)
print(json.dumps(result, indent=4))
# read json time series data from file
file_handler = open (data_location)
json_data = json.load(file_handler)
# send the request
detect_batch(json_data)
detect_latest(json_data)
| # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import requests
import json
# URLs for anomaly detection with the Anomaly Detector API
batch_detection_url = "/anomalydetector/v1.0/timeseries/entire/detect"
latest_point_detection_url = "/anomalydetector/v1.0/timeseries/last/detect"
###############################################
#### Update or verify the following values. ###
###############################################
# Replace the endpoint URL with the correct one for your subscription. Your endpoint can be found in the Azure portal.
# for example: https://westus2.api.cognitive.microsoft.com
endpoint = "[YOUR_ENDPOINT_URL]"
# Replace with your valid subscription key.
subscription_key = "[YOUR_SUBSCRIPTION_KEY]"
# Replace with a path to the JSON formatted time series data.
data_location = "[PATH_TO_TIME_SERIES_DATA]"
###############################################
"""
Sends an anomaly detection request to the Anomaly Detector API.
If the request is successful, the JSON response is returned.
"""
def send_request(endpoint, url, subscription_key, request_data):
headers = {'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': subscription_key}
response = requests.post(endpoint+url, data=json.dumps(request_data), headers=headers)
if response.status_code == 200:
return json.loads(response.content.decode("utf-8"))
else:
print(response.status_code)
raise Exception(response.text)
"""
Detect anomalies throughout the time series data by submitting it as a batch to the API.
"""
def detect_batch(request_data):
print("Detecting anomalies as a batch")
# Send the request, and print the JSON result
result = send_request(endpoint, batch_detection_url, subscription_key, request_data)
print(json.dumps(result, indent=4))
# Find and display the positions of anomalies in the data set
anomalies = result["isAnomaly"]
print("Anomalies detected in the following data positions:")
for x in range(len(anomalies)):
if anomalies[x] == True:
print (x)
"""
Detect if the latest data point in the time series is an anomaly.
"""
def detect_latest(request_data):
print("Determining if latest data point is an anomaly")
# send the request, and print the JSON result
result = send_request(endpoint, latest_point_detection_url, subscription_key, request_data)
print(json.dumps(result, indent=4))
# read json time series data from file
file_handler = open (data_location)
json_data = json.load(file_handler)
# send the request
detect_batch(json_data)
detect_latest(json_data)
| en | 0.675404 | # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. # URLs for anomaly detection with the Anomaly Detector API ############################################### #### Update or verify the following values. ### ############################################### # Replace the endpoint URL with the correct one for your subscription. Your endpoint can be found in the Azure portal. # for example: https://westus2.api.cognitive.microsoft.com # Replace with your valid subscription key. # Replace with a path to the JSON formatted time series data. ############################################### Sends an anomaly detection request to the Anomaly Detector API. If the request is successful, the JSON response is returned. Detect anomalies throughout the time series data by submitting it as a batch to the API. # Send the request, and print the JSON result # Find and display the positions of anomalies in the data set Detect if the latest data point in the time series is an anomaly. # send the request, and print the JSON result # read json time series data from file # send the request | 2.504403 | 3 |
slurm_job.py | jlplenio/slurm_template_for_Python | 0 | 6615293 | #!/usr/bin/env python
from multiprocessing import Pool
import csv
import timeit
import sys
def calc_pi(run_no):
from decimal import Decimal, getcontext
getcontext().prec = 100000
result = sum(1 / Decimal(16) ** k *
(Decimal(4) / (8 * k + 1) -
Decimal(2) / (8 * k + 4) -
Decimal(1) / (8 * k + 5) -
Decimal(1) / (8 * k + 6)) for k in range(100))
print("done", run_no)
return result
if __name__ == '__main__':
core_count = 1
if len(sys.argv) > 1:
core_count = int(sys.argv[1])
print(f"Starting with {core_count} counts")
start = timeit.default_timer()
with Pool(core_count) as p:
pi_list = (p.map(calc_pi, range(100)))
with open("out/pi_list.csv", 'w', newline='') as file:
wr = csv.writer(file, quoting=csv.QUOTE_ALL)
wr.writerow(pi_list)
stop = timeit.default_timer()
print('Time: ', stop - start)
| #!/usr/bin/env python
from multiprocessing import Pool
import csv
import timeit
import sys
def calc_pi(run_no):
from decimal import Decimal, getcontext
getcontext().prec = 100000
result = sum(1 / Decimal(16) ** k *
(Decimal(4) / (8 * k + 1) -
Decimal(2) / (8 * k + 4) -
Decimal(1) / (8 * k + 5) -
Decimal(1) / (8 * k + 6)) for k in range(100))
print("done", run_no)
return result
if __name__ == '__main__':
core_count = 1
if len(sys.argv) > 1:
core_count = int(sys.argv[1])
print(f"Starting with {core_count} counts")
start = timeit.default_timer()
with Pool(core_count) as p:
pi_list = (p.map(calc_pi, range(100)))
with open("out/pi_list.csv", 'w', newline='') as file:
wr = csv.writer(file, quoting=csv.QUOTE_ALL)
wr.writerow(pi_list)
stop = timeit.default_timer()
print('Time: ', stop - start)
| ru | 0.26433 | #!/usr/bin/env python | 3.074539 | 3 |
twindb_backup/ssh/exceptions.py | RyanCPeters/backup | 69 | 6615294 | <gh_stars>10-100
"""
SSH Client Exceptions.
"""
from twindb_backup.exceptions import TwinDBBackupError
class SshClientException(TwinDBBackupError):
"""Exception in SshClient"""
pass
| """
SSH Client Exceptions.
"""
from twindb_backup.exceptions import TwinDBBackupError
class SshClientException(TwinDBBackupError):
"""Exception in SshClient"""
pass | en | 0.424237 | SSH Client Exceptions. Exception in SshClient | 2.032551 | 2 |
load_visual_data.py | nullJaX/Rhombus-AI | 0 | 6615295 | <gh_stars>0
from time import sleep, time
from cv2 import imshow, waitKey, destroyAllWindows, resize
from util.grabscreen import grab_screen
from utils import extract_gameboard, reduce_gameboard, draw_search_areas, m_gameboard_4_debug
WINDOW_WIDTH = 1280
WINDOW_HEIGHT = 720
def main():
# Countdown
for i in list(range(4))[::-1]:
print(i + 1)
sleep(1)
last_time = time()
# Main loop
while True:
screen = grab_screen(region=(0, 32, WINDOW_WIDTH, WINDOW_HEIGHT + 32))
screen = extract_gameboard(screen, WINDOW_WIDTH, WINDOW_HEIGHT)
screen = reduce_gameboard(screen, WINDOW_HEIGHT)
screen = m_gameboard_4_debug(screen, WINDOW_HEIGHT)
# For alignment purposes
screen = draw_search_areas(screen)
screen = resize(screen, None, fx=0.75, fy=0.75)
print('Frame took {} seconds'.format(time() - last_time))
last_time = time()
imshow('window', screen)
if waitKey(1) & 0xFF == ord('q'):
destroyAllWindows()
break
if __name__ == '__main__':
main()
| from time import sleep, time
from cv2 import imshow, waitKey, destroyAllWindows, resize
from util.grabscreen import grab_screen
from utils import extract_gameboard, reduce_gameboard, draw_search_areas, m_gameboard_4_debug
WINDOW_WIDTH = 1280
WINDOW_HEIGHT = 720
def main():
# Countdown
for i in list(range(4))[::-1]:
print(i + 1)
sleep(1)
last_time = time()
# Main loop
while True:
screen = grab_screen(region=(0, 32, WINDOW_WIDTH, WINDOW_HEIGHT + 32))
screen = extract_gameboard(screen, WINDOW_WIDTH, WINDOW_HEIGHT)
screen = reduce_gameboard(screen, WINDOW_HEIGHT)
screen = m_gameboard_4_debug(screen, WINDOW_HEIGHT)
# For alignment purposes
screen = draw_search_areas(screen)
screen = resize(screen, None, fx=0.75, fy=0.75)
print('Frame took {} seconds'.format(time() - last_time))
last_time = time()
imshow('window', screen)
if waitKey(1) & 0xFF == ord('q'):
destroyAllWindows()
break
if __name__ == '__main__':
main() | en | 0.714978 | # Countdown # Main loop # For alignment purposes | 2.520302 | 3 |
eex/translators/amber/amber_write.py | MolSSI/EEX | 7 | 6615296 | <filename>eex/translators/amber/amber_write.py
"""
Writer for amber
"""
import time
import pandas as pd
import numpy as np
from collections import Counter
import eex
import logging
# AMBER local imports
from . import amber_metadata as amd
logger = logging.getLogger(__name__)
def _write_1d(file_handle, data, ncols, fmt):
data = data.ravel()
remainder_size = data.size % ncols
if data.size == 0:
file_handle.write("\n".encode())
elif remainder_size == 0:
np.savetxt(file_handle, data.reshape(-1, ncols), fmt=fmt, delimiter="")
else:
rem_data = data[-remainder_size:].reshape(1, -1)
data = data[:-remainder_size].reshape(-1, ncols)
np.savetxt(file_handle, data, fmt=fmt, delimiter="")
np.savetxt(file_handle, rem_data, fmt=fmt, delimiter="")
# print(data.shape, rem_data.shape)
# Write data to file
file_handle.flush()
def _write_amber_data(file_handle, data, category):
fmt_string = amd.data_labels[category][1]
fmt_data = amd.parse_format(fmt_string)
file_handle.write(("%%FLAG %s\n" % category).encode())
file_handle.write((fmt_string + "\n").encode())
ncols = fmt_data[0]
fmt = amd.build_format(fmt_data)
_write_1d(file_handle, np.array(data), ncols, fmt)
def _get_charmm_dihedral_count(dl):
# TODO this a temporary solution to get the number of dihedrals stored in the datalayer.
order = 4
ret = 0
charmm = amd.forcefield_parameters['dihedral']['form']
terms = dl.list_term_parameters(order)
for j in terms.values():
term_md = eex.metadata.get_term_metadata(order, "forms", j[0])
# Zip up the parameters
parameters = {k: v for k, v in zip(term_md["parameters"], j[1:])}
parameters = eex.form_converters.convert_form(order, parameters, j[0],
charmm)
if isinstance(parameters['K'], float):
ret += 1
elif isinstance(parameters['K'], np.ndarray):
ret += len(parameters['K'])
return ret
def _check_dl_compatibility(dl):
"""
This function examines a datalayer to determine if it is compatible with Amber.
Conversions between functional forms and pairwise interaction mixing are performed (if possible).
"""
# Loop over force field information - check functional form compatibility
for k, v in amd.forcefield_parameters.items():
if k != "nonbond":
terms = dl.list_term_parameters(v["order"])
for j in terms.values():
term_md = eex.metadata.get_term_metadata(
v["order"], "forms", j[0])
canonical_form = term_md['canonical_form']
compatible_forms = eex.metadata.get_term_metadata(
v["order"], "group")[canonical_form]
if v['form'] not in compatible_forms:
# Will need to insert check to see if these can be easily converted (ex OPLS dihedral <-> charmmfsw)
raise TypeError(
"Functional form %s stored in datalayer is not compatible with Amber.\n"
% (j[0]))
else:
# Handle nonbonds. Make sure only LJ types are stored in datalayer.
nb_forms = dl.list_stored_nb_types()
if set(nb_forms) != set(["LJ"]):
raise KeyError(
"Nonbond forms must be type LJ. Forms stored in datalayer are not compatible with Amber - %s"
% nb_forms)
# Grab all pair interactions stored in datalayer. Amber must have pair interactions.
nb = dl.list_nb_parameters(
nb_name="LJ", nb_model="AB", itype="pair")
# Get the number of atom types. The number of pair interactions should be equal to num_atom_types * (num_atom_types + 1)) / 2
num_atom_types = len(dl.get_unique_atom_types())
# This will occur if there are no pair interactions stored in the dl.
if len(nb.keys()) == 0:
# Check that the stored mixing rule is compatible with amber. Amber should be able to handle any set of parameters,
# but if we allow this, it should be something the user has to override somehow (or, they could apply the mixing
# rule before calling the amber writer.
if dl.get_mixing_rule() not in amd.mixing_rule:
raise TypeError(
"Mixing rule %s not compatible with amber (lorentz-berthelot mixing rule)"
% dl.get_mixing_rule())
# Calculate pair interactions according to amber.
dl.build_LJ_mixing_table()
# This condition will be met if some pair interactions are stored, but not the correct number.
elif len(nb.keys()) != (num_atom_types * (num_atom_types + 1)) / 2:
raise ValueError(
"Amber compatibility check : Incorrect number of pair interactions\n"
)
# Check NB scaling factors are compatible with amber
scaling_types = eex.metadata.additional_metadata.nb_scaling[
"scaling_type"]
for scale_type in scaling_types:
if scale_type not in dl.store.list_tables():
if not dl.get_nb_scaling_factors():
raise ValueError(
"No nonbond scaling (%s) information is set in datalayer"
% (scale_type))
else:
# Build atom-wise scaling list
dl.build_scaling_list()
# Check that NB scaling factors are compatible with amber (ie 1,2 and 1,3 must be 0 (excluded))
pair_scalings = dl.get_pair_scalings(
nb_labels=[scale_type], order=True)
p12 = pair_scalings[pair_scalings["order"] == 2][scale_type]
p13 = pair_scalings[pair_scalings["order"] == 3][scale_type]
if p12.nonzero()[0].any():
raise ValueError(
"Nonbond scaling (order=2, %s) is not consistent with Amber. In Amber, 1-2 nonbond "
"interactions are excluded" % (scale_type))
if p13.nonzero()[0].any():
raise ValueError(
"Nonbond scaling (order=3, %s) is not consistent with Amber. In Amber, 1-3 nonbond "
"interactions are excluded" % (scale_type))
## TODO - need check scaling 0 is set for all order 2 and order 3
stored_properties = dl.list_atom_properties()
required_properties = list(amd.atom_property_names.values())
diff = np.setdiff1d(required_properties, stored_properties)
natoms = dl.get_atom_count()
index = np.arange(1, natoms + 1)
# Build and curate the data
df = pd.DataFrame({'atom_index': index})
df.dropna(axis=0, how="any", inplace=True)
df.set_index('atom_index', inplace=True)
add_properties = []
# Fill in default or raise error
for req in diff:
if req == 'atom_name':
atom_names = ['A'] * natoms
df[req] = atom_names
add_properties.append(req)
elif req == 'atomic_number':
# Just say it's carbon...doesn't seem like this matters too much for amber
atomic_numbers = [6] * natoms
df[req] = atomic_numbers
add_properties.append(req)
elif req == "mass":
try:
dl.get_atoms(properties=["mass"])
except:
raise KeyError("No masses stored in datalayer")
else:
raise KeyError(
"Atom property %s is missing from datalayer" % (req))
# Check for residue_index
if "residue_index" not in stored_properties:
# If molecule_index is set, set residue index to this.
# Otherwise, set all to 1.0
if "molecule_index" in stored_properties:
df["residue_index"] = dl.get_atoms(properties=["molecule_index"])
add_properties.append("residue_index")
else:
df["residue_index"] = 1
add_properties.append("residue_index")
if "residue_name" not in stored_properties:
df["residue_name"] = "BLA"
elif "residue_name" not in stored_properties:
df["residue_name"] = "BLA"
add_properties.append("residue_name")
if len(add_properties) > 0:
dl.add_atoms(df, by_value=True)
def write_amber_file(dl, filename, inpcrd=None):
"""
Parameters
------------
dl : eex.DataLayer
The datalayer containing information about the system to write
filename : str
The name of the file to write
inpcrd : str, optional
If None, attempts to read the file filename.replace("prmtop", "inpcrd") otherwise passes. #
"""
# First get information into Amber pointers. All keys are initially filled with zero.
# Ones that are currently 0, but should be implemented eventually are marked with
_check_dl_compatibility(dl)
dihedral_count = _get_charmm_dihedral_count(dl)
# Figure out what is hydrogen for the header
num_H_list = []
inc_hydrogen = {}
without_hydrogen = {}
hidx = (dl.get_atoms("atomic_number") == 1)['atomic_number']
hidx = hidx[hidx].index
for term_type, term_name in zip([2, 3, 4],
["bonds", "angles", "dihedrals"]):
term = dl.get_terms(term_type)
if term.shape[0] == 0:
num_H_list.append(0)
continue
# Build up an index of what is in hydrogen or not
inc_hydrogen_mask = term["atom1"].isin(hidx)
for n in range(term_type - 1):
name = "atom" + str(n + 2)
inc_hydrogen_mask |= term[name].isin(hidx)
num_H_list.append(len(term.loc[inc_hydrogen_mask].values))
inc_hydrogen[term_name] = term.loc[inc_hydrogen_mask].values
without_hydrogen[term_name] = term.loc[~inc_hydrogen_mask].values
output_sizes = {k: 0 for k in amd.size_keys}
output_sizes['NATOM'] = dl.get_atom_count() # Number of atoms
output_sizes["NBONH"] = num_H_list[
0] # Number of bonds containing hydrogen
output_sizes["MBONA"] = dl.get_term_count(2, "total") - output_sizes[
"NBONH"] # Number of bonds not containing hydrogen
output_sizes['NBONA'] = output_sizes[
"MBONA"] # MBONA + number of constraint bonds (MBONA = NBONA always)
output_sizes["NTHETH"] = num_H_list[
1] # Number of angles containing hydrogen
output_sizes["MTHETA"] = dl.get_term_count(3, "total") - output_sizes[
"NTHETH"] # Number of angles not containing hydrogen
output_sizes['NTHETA'] = output_sizes[
"MTHETA"] # MTHETA + number of constraint angles (NTHETA = MTHETA always)
output_sizes["NPHIH"] = num_H_list[
2] # Number of torsions containing hydrogen
output_sizes["MPHIA"] = dl.get_term_count(4, "total") - output_sizes[
"NPHIH"] # Number of torsions not containing hydrogen
output_sizes["NPHIA"] = output_sizes["MPHIA"]
output_sizes["NUMBND"] = len(
dl.list_term_uids(2)) # Number of unique bond types
output_sizes["NUMANG"] = len(
dl.list_term_uids(3)) # Number of unique angle types
output_sizes["NPTRA"] = dihedral_count # Number of unique torsion types
output_sizes["NRES"] = len(
dl.list_atom_uids("residue_name")) # Number of residues (not stable)
output_sizes["NTYPES"] = len(np.unique(
dl.get_atoms("atom_type"))) # Number of distinct LJ atom types
output_sizes[
"NPARM"] = 0 # Used to determine if this is a LES-compatible prmtop (??)
output_sizes["NNB"] = dl.get_atom_count(
) # Number of excluded atoms - Set to num atoms for our test cases. Amber will not run with 0
# 0 - no box, 1 - orthorhombic box, 2 - truncated octahedron
output_sizes["NMXRS"] = 0 # Number of atoms in the largest residue
output_sizes["IFCAP"] = 0 # Set to 1 if a solvent CAP is being used
output_sizes["NUMEXTRA"] = 0 # Number of extra points in the topology file
# Needs check for orthorhomibic box (1) or truncated octahedron (2). Currently just 0 or 1
output_sizes["IFBOX"] = [
0 if dl.get_box_size() == {} else 1
][0] # Flag indicating whether a periodic box is present
written_categories = []
# Figure out size each section should be based on metadata
label_sizes = {}
for k, v in amd.data_labels.items():
if isinstance(v[0], int):
label_sizes[k] = v[0]
elif v[0] in list(output_sizes):
label_sizes[k] = output_sizes[v[0]]
else:
# print("%30s %40s %d" % (k, v[0], int(eval(v[0], sizes_dict))))
label_sizes[k] = int(eval(v[0], output_sizes))
# Write title and version information
f = open(filename, "w")
f.write('%%VERSION VERSION_STAMP = V0001.000 DATE = %s %s\n' %
(time.strftime("%x"), time.strftime("%H:%M:%S")))
f.write("%FLAG TITLE\n%FORMAT(20a4)\n")
f.write("prmtop generated by MolSSI EEX\n")
# Write pointers section
f.write("%%FLAG POINTERS\n%s\n" % (amd.data_labels["POINTERS"][1]))
ncols, dtype, width = amd.parse_format(amd.data_labels["POINTERS"][1])
format_string = "%%%sd" % width
count = 0
for k in amd.size_keys:
f.write(format_string % output_sizes[k])
count += 1
if count % ncols == 0:
f.write("\n")
f.write("\n")
f.close()
written_categories.append("POINTERS")
# Write atom properties sections
file_handle = open(filename, "ab")
for k in amd.atom_property_names:
# Get data
data = dl.get_atoms(
amd.atom_property_names[k],
by_value=True,
utype=amd.atom_data_units).values.ravel()
_write_amber_data(file_handle, data, k)
written_categories.append(k)
# Handle residues
# We assume these are sorted WRT to atom and itself at the moment... not great
res_data = dl.get_atoms(["residue_index", "residue_name"], by_value=True)
uvals, uidx, ucnts = np.unique(
res_data["residue_index"], return_index=True, return_counts=True)
labels = res_data["residue_name"].iloc[uidx].values
_write_amber_data(file_handle, labels, "RESIDUE_LABEL")
written_categories.append("RESIDUE_LABEL")
starts = np.concatenate(([1], np.cumsum(ucnts) + 1))[:-1]
_write_amber_data(file_handle, starts, "RESIDUE_POINTER")
written_categories.append("RESIDUE_POINTER")
# Write out term parameters
for term_type in ["bond", "angle", "dihedral"]:
uids = sorted(dl.list_term_uids(term_type))
if len(uids) == 0:
continue
term_md = amd.forcefield_parameters[term_type]
tmps = {k: [] for k in term_md["column_names"].keys()}
utype = term_md["units"]
order = term_md["order"]
inv_lookup = {v: k for k, v in term_md["column_names"].items()}
# Build lists of data since AMBER holds this as 1D
for uid in uids:
params = dl.get_term_parameter(
order, uid, utype=utype, ftype=term_md['form'])
for k, v in params[1].items():
tmps[inv_lookup[k]].append(v)
# Write out FLAGS
for k, v in tmps.items():
_write_amber_data(file_handle, v, k)
written_categories.append(k)
for term_type, term_name in zip([2, 3, 4],
["bonds", "angles", "dihedrals"]):
term = dl.get_terms(term_type)
if term.shape[0] == 0:
continue
# Scale by weird AMBER factors
inc_hydrogen[term_name][:, :-1] = (
inc_hydrogen[term_name][:, :-1] - 1) * 3
without_hydrogen[term_name][:, :-1] = (
without_hydrogen[term_name][:, :-1] - 1) * 3
inc_h_name = term_name.upper() + "_INC_HYDROGEN"
without_h_name = term_name.upper() + "_WITHOUT_HYDROGEN"
_write_amber_data(file_handle, inc_hydrogen[term_name], inc_h_name)
written_categories.append(inc_h_name)
_write_amber_data(file_handle, without_hydrogen[term_name],
without_h_name)
written_categories.append(without_h_name)
# Handle SOLVENT_POINTERS, ATOMS_PER_MOLECULE and BOX_DIMENSIONS. Only present if IFBOX>0.
if output_sizes["IFBOX"] > 0:
# Solvent pointers section
# There are three numbers here - IPTRES, NSPM, NSPSOL
# where
# IPTRES = final residue part of solute, NSPM = total number of molecules, NSPSOL = first solvent molecule
# Just say everything is solute for now.
iptres = dl.get_atoms(["residue_index"]).values[-1]
nspm = len(np.unique(dl.get_atoms(["molecule_index"]).values))
solvent_pointers = [iptres, nspm, nspm]
_write_amber_data(file_handle, solvent_pointers, "SOLVENT_POINTERS")
# Handle atoms per molecule
molecule_list = dl.get_atoms(["molecule_index"]).values.ravel()
count_atoms_per_molecule = Counter(molecule_list)
atoms_per_molecule = []
for x in range(1, nspm + 1):
atoms_per_molecule.append(count_atoms_per_molecule[x])
_write_amber_data(file_handle, atoms_per_molecule,
"ATOMS_PER_MOLECULE")
# Write box dimensions section
box_dimensions = dl.get_box_size(
utype={
"a": amd.box_units["length"],
"b": amd.box_units["length"],
"c": amd.box_units["length"],
"alpha": amd.box_units["angle"],
"beta": amd.box_units["angle"],
"gamma": amd.box_units["angle"]
})
write_box = [
box_dimensions["beta"], box_dimensions["a"], box_dimensions["b"],
box_dimensions["c"]
]
_write_amber_data(file_handle, write_box, "BOX_DIMENSIONS")
written_categories.append("BOX_DIMENSIONS")
written_categories.append("SOLVENT_POINTERS")
written_categories.append("ATOMS_PER_MOLECULE")
# Quick fix for radius set will be one line string description in files prepared by xleap
_write_amber_data(file_handle, ["Place holder - EEX"], "RADIUS_SET")
written_categories.append("RADIUS_SET")
# Handle NB data
# Relevant headers = NONBOND_PARM_INDEX, LENNARD_JONES_ACOEF, LENNARD_JONES_BCOEF
stored_atom_types = dl.get_unique_atom_types()
ntypes = len(stored_atom_types)
# Get parameters from datalayer using correct amber units
stored_nb_parameters = dl.list_nb_parameters(
nb_name="LJ",
nb_model="AB",
utype=amd.forcefield_parameters["nonbond"]["units"],
itype="pair")
nonbonded_parm_index = np.zeros(ntypes * ntypes)
lj_a_coeff = []
lj_b_coeff = []
# Build a_coeff, b_coeff, and nb_parm lists
for key, value in stored_nb_parameters.items():
lj_a_coeff.append(value['A'])
lj_b_coeff.append(value['B'])
index_to_nb = ntypes * (key[0] - 1) + key[1]
index_to_nb2 = ntypes * (key[1] - 1) + key[0]
nonbonded_parm_index[index_to_nb - 1] = len(lj_a_coeff)
nonbonded_parm_index[index_to_nb2 - 1] = len(lj_a_coeff)
_write_amber_data(file_handle, nonbonded_parm_index,
"NONBONDED_PARM_INDEX")
_write_amber_data(file_handle, lj_a_coeff, "LENNARD_JONES_ACOEF")
_write_amber_data(file_handle, lj_b_coeff, "LENNARD_JONES_BCOEF")
for category in amd.forcefield_parameters["nonbond"]["column_names"]:
written_categories.append(category)
# Handle exclusions and scaling
exclusion_categories = {}
for excl in amd.exclusion_sections:
exclusion_categories[excl] = []
exclusions_scaling = dl.get_pair_scalings(order=True)
order_2_3_4 = exclusions_scaling[(exclusions_scaling["order"].notnull())]
# Build NUMBER_EXCLUDED_ATOMS and EXCLUDED_ATOMS_LIST.
for ind in sorted(dl.get_atoms("atomic_number").index.values):
if ind in order_2_3_4.index:
excluded_atoms_df = order_2_3_4.loc[ind]
excluded_atoms = excluded_atoms_df.index.values
exclusion_categories["EXCLUDED_ATOMS_LIST"].extend(excluded_atoms)
exclusion_categories["NUMBER_EXCLUDED_ATOMS"].append(
len(excluded_atoms))
else:
exclusion_categories["NUMBER_EXCLUDED_ATOMS"].append(1)
exclusion_categories["EXCLUDED_ATOMS_LIST"].append(0)
for excl in amd.exclusion_sections:
_write_amber_data(file_handle, exclusion_categories[excl], excl)
written_categories.append(excl)
# Write headers for other sections (file will not work in AMBER without these)
for k in amd.data_labels:
if k not in written_categories:
if label_sizes[k] > 0:
data = np.zeros(label_sizes[k])
_write_amber_data(file_handle, data, k)
else:
file_handle.write((
"%%FLAG %s\n%s\n\n" % (k, amd.data_labels[k][1])).encode())
written_categories.append(k)
file_handle.close()
# Now we need to write out the INPCRD
if '.prmtop' in filename:
inpcrd_file = filename.replace('.prmtop', '.inpcrd')
else:
inpcrd_file = filename + '.inpcrd'
file_handle = open(inpcrd_file, "wb")
xyz = dl.get_atoms("XYZ", utype={"XYZ": "angstrom"})
file_handle.write("default_name\n".encode())
file_handle.write(("%6d\n" % xyz.shape[0]).encode())
_write_1d(file_handle, xyz.values.ravel(), 6, "%12.6f")
if output_sizes["IFBOX"] > 0:
box = pd.DataFrame(box_dimensions, index=[0])
box = box[['a', 'b', 'c', 'alpha', 'beta', 'gamma']]
_write_1d(file_handle, box.values.ravel(), 6, "%12.6f")
file_handle.close()
return 0
| <filename>eex/translators/amber/amber_write.py
"""
Writer for amber
"""
import time
import pandas as pd
import numpy as np
from collections import Counter
import eex
import logging
# AMBER local imports
from . import amber_metadata as amd
logger = logging.getLogger(__name__)
def _write_1d(file_handle, data, ncols, fmt):
data = data.ravel()
remainder_size = data.size % ncols
if data.size == 0:
file_handle.write("\n".encode())
elif remainder_size == 0:
np.savetxt(file_handle, data.reshape(-1, ncols), fmt=fmt, delimiter="")
else:
rem_data = data[-remainder_size:].reshape(1, -1)
data = data[:-remainder_size].reshape(-1, ncols)
np.savetxt(file_handle, data, fmt=fmt, delimiter="")
np.savetxt(file_handle, rem_data, fmt=fmt, delimiter="")
# print(data.shape, rem_data.shape)
# Write data to file
file_handle.flush()
def _write_amber_data(file_handle, data, category):
fmt_string = amd.data_labels[category][1]
fmt_data = amd.parse_format(fmt_string)
file_handle.write(("%%FLAG %s\n" % category).encode())
file_handle.write((fmt_string + "\n").encode())
ncols = fmt_data[0]
fmt = amd.build_format(fmt_data)
_write_1d(file_handle, np.array(data), ncols, fmt)
def _get_charmm_dihedral_count(dl):
# TODO this a temporary solution to get the number of dihedrals stored in the datalayer.
order = 4
ret = 0
charmm = amd.forcefield_parameters['dihedral']['form']
terms = dl.list_term_parameters(order)
for j in terms.values():
term_md = eex.metadata.get_term_metadata(order, "forms", j[0])
# Zip up the parameters
parameters = {k: v for k, v in zip(term_md["parameters"], j[1:])}
parameters = eex.form_converters.convert_form(order, parameters, j[0],
charmm)
if isinstance(parameters['K'], float):
ret += 1
elif isinstance(parameters['K'], np.ndarray):
ret += len(parameters['K'])
return ret
def _check_dl_compatibility(dl):
"""
This function examines a datalayer to determine if it is compatible with Amber.
Conversions between functional forms and pairwise interaction mixing are performed (if possible).
"""
# Loop over force field information - check functional form compatibility
for k, v in amd.forcefield_parameters.items():
if k != "nonbond":
terms = dl.list_term_parameters(v["order"])
for j in terms.values():
term_md = eex.metadata.get_term_metadata(
v["order"], "forms", j[0])
canonical_form = term_md['canonical_form']
compatible_forms = eex.metadata.get_term_metadata(
v["order"], "group")[canonical_form]
if v['form'] not in compatible_forms:
# Will need to insert check to see if these can be easily converted (ex OPLS dihedral <-> charmmfsw)
raise TypeError(
"Functional form %s stored in datalayer is not compatible with Amber.\n"
% (j[0]))
else:
# Handle nonbonds. Make sure only LJ types are stored in datalayer.
nb_forms = dl.list_stored_nb_types()
if set(nb_forms) != set(["LJ"]):
raise KeyError(
"Nonbond forms must be type LJ. Forms stored in datalayer are not compatible with Amber - %s"
% nb_forms)
# Grab all pair interactions stored in datalayer. Amber must have pair interactions.
nb = dl.list_nb_parameters(
nb_name="LJ", nb_model="AB", itype="pair")
# Get the number of atom types. The number of pair interactions should be equal to num_atom_types * (num_atom_types + 1)) / 2
num_atom_types = len(dl.get_unique_atom_types())
# This will occur if there are no pair interactions stored in the dl.
if len(nb.keys()) == 0:
# Check that the stored mixing rule is compatible with amber. Amber should be able to handle any set of parameters,
# but if we allow this, it should be something the user has to override somehow (or, they could apply the mixing
# rule before calling the amber writer.
if dl.get_mixing_rule() not in amd.mixing_rule:
raise TypeError(
"Mixing rule %s not compatible with amber (lorentz-berthelot mixing rule)"
% dl.get_mixing_rule())
# Calculate pair interactions according to amber.
dl.build_LJ_mixing_table()
# This condition will be met if some pair interactions are stored, but not the correct number.
elif len(nb.keys()) != (num_atom_types * (num_atom_types + 1)) / 2:
raise ValueError(
"Amber compatibility check : Incorrect number of pair interactions\n"
)
# Check NB scaling factors are compatible with amber
scaling_types = eex.metadata.additional_metadata.nb_scaling[
"scaling_type"]
for scale_type in scaling_types:
if scale_type not in dl.store.list_tables():
if not dl.get_nb_scaling_factors():
raise ValueError(
"No nonbond scaling (%s) information is set in datalayer"
% (scale_type))
else:
# Build atom-wise scaling list
dl.build_scaling_list()
# Check that NB scaling factors are compatible with amber (ie 1,2 and 1,3 must be 0 (excluded))
pair_scalings = dl.get_pair_scalings(
nb_labels=[scale_type], order=True)
p12 = pair_scalings[pair_scalings["order"] == 2][scale_type]
p13 = pair_scalings[pair_scalings["order"] == 3][scale_type]
if p12.nonzero()[0].any():
raise ValueError(
"Nonbond scaling (order=2, %s) is not consistent with Amber. In Amber, 1-2 nonbond "
"interactions are excluded" % (scale_type))
if p13.nonzero()[0].any():
raise ValueError(
"Nonbond scaling (order=3, %s) is not consistent with Amber. In Amber, 1-3 nonbond "
"interactions are excluded" % (scale_type))
## TODO - need check scaling 0 is set for all order 2 and order 3
stored_properties = dl.list_atom_properties()
required_properties = list(amd.atom_property_names.values())
diff = np.setdiff1d(required_properties, stored_properties)
natoms = dl.get_atom_count()
index = np.arange(1, natoms + 1)
# Build and curate the data
df = pd.DataFrame({'atom_index': index})
df.dropna(axis=0, how="any", inplace=True)
df.set_index('atom_index', inplace=True)
add_properties = []
# Fill in default or raise error
for req in diff:
if req == 'atom_name':
atom_names = ['A'] * natoms
df[req] = atom_names
add_properties.append(req)
elif req == 'atomic_number':
# Just say it's carbon...doesn't seem like this matters too much for amber
atomic_numbers = [6] * natoms
df[req] = atomic_numbers
add_properties.append(req)
elif req == "mass":
try:
dl.get_atoms(properties=["mass"])
except:
raise KeyError("No masses stored in datalayer")
else:
raise KeyError(
"Atom property %s is missing from datalayer" % (req))
# Check for residue_index
if "residue_index" not in stored_properties:
# If molecule_index is set, set residue index to this.
# Otherwise, set all to 1.0
if "molecule_index" in stored_properties:
df["residue_index"] = dl.get_atoms(properties=["molecule_index"])
add_properties.append("residue_index")
else:
df["residue_index"] = 1
add_properties.append("residue_index")
if "residue_name" not in stored_properties:
df["residue_name"] = "BLA"
elif "residue_name" not in stored_properties:
df["residue_name"] = "BLA"
add_properties.append("residue_name")
if len(add_properties) > 0:
dl.add_atoms(df, by_value=True)
def write_amber_file(dl, filename, inpcrd=None):
"""
Parameters
------------
dl : eex.DataLayer
The datalayer containing information about the system to write
filename : str
The name of the file to write
inpcrd : str, optional
If None, attempts to read the file filename.replace("prmtop", "inpcrd") otherwise passes. #
"""
# First get information into Amber pointers. All keys are initially filled with zero.
# Ones that are currently 0, but should be implemented eventually are marked with
_check_dl_compatibility(dl)
dihedral_count = _get_charmm_dihedral_count(dl)
# Figure out what is hydrogen for the header
num_H_list = []
inc_hydrogen = {}
without_hydrogen = {}
hidx = (dl.get_atoms("atomic_number") == 1)['atomic_number']
hidx = hidx[hidx].index
for term_type, term_name in zip([2, 3, 4],
["bonds", "angles", "dihedrals"]):
term = dl.get_terms(term_type)
if term.shape[0] == 0:
num_H_list.append(0)
continue
# Build up an index of what is in hydrogen or not
inc_hydrogen_mask = term["atom1"].isin(hidx)
for n in range(term_type - 1):
name = "atom" + str(n + 2)
inc_hydrogen_mask |= term[name].isin(hidx)
num_H_list.append(len(term.loc[inc_hydrogen_mask].values))
inc_hydrogen[term_name] = term.loc[inc_hydrogen_mask].values
without_hydrogen[term_name] = term.loc[~inc_hydrogen_mask].values
output_sizes = {k: 0 for k in amd.size_keys}
output_sizes['NATOM'] = dl.get_atom_count() # Number of atoms
output_sizes["NBONH"] = num_H_list[
0] # Number of bonds containing hydrogen
output_sizes["MBONA"] = dl.get_term_count(2, "total") - output_sizes[
"NBONH"] # Number of bonds not containing hydrogen
output_sizes['NBONA'] = output_sizes[
"MBONA"] # MBONA + number of constraint bonds (MBONA = NBONA always)
output_sizes["NTHETH"] = num_H_list[
1] # Number of angles containing hydrogen
output_sizes["MTHETA"] = dl.get_term_count(3, "total") - output_sizes[
"NTHETH"] # Number of angles not containing hydrogen
output_sizes['NTHETA'] = output_sizes[
"MTHETA"] # MTHETA + number of constraint angles (NTHETA = MTHETA always)
output_sizes["NPHIH"] = num_H_list[
2] # Number of torsions containing hydrogen
output_sizes["MPHIA"] = dl.get_term_count(4, "total") - output_sizes[
"NPHIH"] # Number of torsions not containing hydrogen
output_sizes["NPHIA"] = output_sizes["MPHIA"]
output_sizes["NUMBND"] = len(
dl.list_term_uids(2)) # Number of unique bond types
output_sizes["NUMANG"] = len(
dl.list_term_uids(3)) # Number of unique angle types
output_sizes["NPTRA"] = dihedral_count # Number of unique torsion types
output_sizes["NRES"] = len(
dl.list_atom_uids("residue_name")) # Number of residues (not stable)
output_sizes["NTYPES"] = len(np.unique(
dl.get_atoms("atom_type"))) # Number of distinct LJ atom types
output_sizes[
"NPARM"] = 0 # Used to determine if this is a LES-compatible prmtop (??)
output_sizes["NNB"] = dl.get_atom_count(
) # Number of excluded atoms - Set to num atoms for our test cases. Amber will not run with 0
# 0 - no box, 1 - orthorhombic box, 2 - truncated octahedron
output_sizes["NMXRS"] = 0 # Number of atoms in the largest residue
output_sizes["IFCAP"] = 0 # Set to 1 if a solvent CAP is being used
output_sizes["NUMEXTRA"] = 0 # Number of extra points in the topology file
# Needs check for orthorhomibic box (1) or truncated octahedron (2). Currently just 0 or 1
output_sizes["IFBOX"] = [
0 if dl.get_box_size() == {} else 1
][0] # Flag indicating whether a periodic box is present
written_categories = []
# Figure out size each section should be based on metadata
label_sizes = {}
for k, v in amd.data_labels.items():
if isinstance(v[0], int):
label_sizes[k] = v[0]
elif v[0] in list(output_sizes):
label_sizes[k] = output_sizes[v[0]]
else:
# print("%30s %40s %d" % (k, v[0], int(eval(v[0], sizes_dict))))
label_sizes[k] = int(eval(v[0], output_sizes))
# Write title and version information
f = open(filename, "w")
f.write('%%VERSION VERSION_STAMP = V0001.000 DATE = %s %s\n' %
(time.strftime("%x"), time.strftime("%H:%M:%S")))
f.write("%FLAG TITLE\n%FORMAT(20a4)\n")
f.write("prmtop generated by MolSSI EEX\n")
# Write pointers section
f.write("%%FLAG POINTERS\n%s\n" % (amd.data_labels["POINTERS"][1]))
ncols, dtype, width = amd.parse_format(amd.data_labels["POINTERS"][1])
format_string = "%%%sd" % width
count = 0
for k in amd.size_keys:
f.write(format_string % output_sizes[k])
count += 1
if count % ncols == 0:
f.write("\n")
f.write("\n")
f.close()
written_categories.append("POINTERS")
# Write atom properties sections
file_handle = open(filename, "ab")
for k in amd.atom_property_names:
# Get data
data = dl.get_atoms(
amd.atom_property_names[k],
by_value=True,
utype=amd.atom_data_units).values.ravel()
_write_amber_data(file_handle, data, k)
written_categories.append(k)
# Handle residues
# We assume these are sorted WRT to atom and itself at the moment... not great
res_data = dl.get_atoms(["residue_index", "residue_name"], by_value=True)
uvals, uidx, ucnts = np.unique(
res_data["residue_index"], return_index=True, return_counts=True)
labels = res_data["residue_name"].iloc[uidx].values
_write_amber_data(file_handle, labels, "RESIDUE_LABEL")
written_categories.append("RESIDUE_LABEL")
starts = np.concatenate(([1], np.cumsum(ucnts) + 1))[:-1]
_write_amber_data(file_handle, starts, "RESIDUE_POINTER")
written_categories.append("RESIDUE_POINTER")
# Write out term parameters
for term_type in ["bond", "angle", "dihedral"]:
uids = sorted(dl.list_term_uids(term_type))
if len(uids) == 0:
continue
term_md = amd.forcefield_parameters[term_type]
tmps = {k: [] for k in term_md["column_names"].keys()}
utype = term_md["units"]
order = term_md["order"]
inv_lookup = {v: k for k, v in term_md["column_names"].items()}
# Build lists of data since AMBER holds this as 1D
for uid in uids:
params = dl.get_term_parameter(
order, uid, utype=utype, ftype=term_md['form'])
for k, v in params[1].items():
tmps[inv_lookup[k]].append(v)
# Write out FLAGS
for k, v in tmps.items():
_write_amber_data(file_handle, v, k)
written_categories.append(k)
for term_type, term_name in zip([2, 3, 4],
["bonds", "angles", "dihedrals"]):
term = dl.get_terms(term_type)
if term.shape[0] == 0:
continue
# Scale by weird AMBER factors
inc_hydrogen[term_name][:, :-1] = (
inc_hydrogen[term_name][:, :-1] - 1) * 3
without_hydrogen[term_name][:, :-1] = (
without_hydrogen[term_name][:, :-1] - 1) * 3
inc_h_name = term_name.upper() + "_INC_HYDROGEN"
without_h_name = term_name.upper() + "_WITHOUT_HYDROGEN"
_write_amber_data(file_handle, inc_hydrogen[term_name], inc_h_name)
written_categories.append(inc_h_name)
_write_amber_data(file_handle, without_hydrogen[term_name],
without_h_name)
written_categories.append(without_h_name)
# Handle SOLVENT_POINTERS, ATOMS_PER_MOLECULE and BOX_DIMENSIONS. Only present if IFBOX>0.
if output_sizes["IFBOX"] > 0:
# Solvent pointers section
# There are three numbers here - IPTRES, NSPM, NSPSOL
# where
# IPTRES = final residue part of solute, NSPM = total number of molecules, NSPSOL = first solvent molecule
# Just say everything is solute for now.
iptres = dl.get_atoms(["residue_index"]).values[-1]
nspm = len(np.unique(dl.get_atoms(["molecule_index"]).values))
solvent_pointers = [iptres, nspm, nspm]
_write_amber_data(file_handle, solvent_pointers, "SOLVENT_POINTERS")
# Handle atoms per molecule
molecule_list = dl.get_atoms(["molecule_index"]).values.ravel()
count_atoms_per_molecule = Counter(molecule_list)
atoms_per_molecule = []
for x in range(1, nspm + 1):
atoms_per_molecule.append(count_atoms_per_molecule[x])
_write_amber_data(file_handle, atoms_per_molecule,
"ATOMS_PER_MOLECULE")
# Write box dimensions section
box_dimensions = dl.get_box_size(
utype={
"a": amd.box_units["length"],
"b": amd.box_units["length"],
"c": amd.box_units["length"],
"alpha": amd.box_units["angle"],
"beta": amd.box_units["angle"],
"gamma": amd.box_units["angle"]
})
write_box = [
box_dimensions["beta"], box_dimensions["a"], box_dimensions["b"],
box_dimensions["c"]
]
_write_amber_data(file_handle, write_box, "BOX_DIMENSIONS")
written_categories.append("BOX_DIMENSIONS")
written_categories.append("SOLVENT_POINTERS")
written_categories.append("ATOMS_PER_MOLECULE")
# Quick fix for radius set will be one line string description in files prepared by xleap
_write_amber_data(file_handle, ["Place holder - EEX"], "RADIUS_SET")
written_categories.append("RADIUS_SET")
# Handle NB data
# Relevant headers = NONBOND_PARM_INDEX, LENNARD_JONES_ACOEF, LENNARD_JONES_BCOEF
stored_atom_types = dl.get_unique_atom_types()
ntypes = len(stored_atom_types)
# Get parameters from datalayer using correct amber units
stored_nb_parameters = dl.list_nb_parameters(
nb_name="LJ",
nb_model="AB",
utype=amd.forcefield_parameters["nonbond"]["units"],
itype="pair")
nonbonded_parm_index = np.zeros(ntypes * ntypes)
lj_a_coeff = []
lj_b_coeff = []
# Build a_coeff, b_coeff, and nb_parm lists
for key, value in stored_nb_parameters.items():
lj_a_coeff.append(value['A'])
lj_b_coeff.append(value['B'])
index_to_nb = ntypes * (key[0] - 1) + key[1]
index_to_nb2 = ntypes * (key[1] - 1) + key[0]
nonbonded_parm_index[index_to_nb - 1] = len(lj_a_coeff)
nonbonded_parm_index[index_to_nb2 - 1] = len(lj_a_coeff)
_write_amber_data(file_handle, nonbonded_parm_index,
"NONBONDED_PARM_INDEX")
_write_amber_data(file_handle, lj_a_coeff, "LENNARD_JONES_ACOEF")
_write_amber_data(file_handle, lj_b_coeff, "LENNARD_JONES_BCOEF")
for category in amd.forcefield_parameters["nonbond"]["column_names"]:
written_categories.append(category)
# Handle exclusions and scaling
exclusion_categories = {}
for excl in amd.exclusion_sections:
exclusion_categories[excl] = []
exclusions_scaling = dl.get_pair_scalings(order=True)
order_2_3_4 = exclusions_scaling[(exclusions_scaling["order"].notnull())]
# Build NUMBER_EXCLUDED_ATOMS and EXCLUDED_ATOMS_LIST.
for ind in sorted(dl.get_atoms("atomic_number").index.values):
if ind in order_2_3_4.index:
excluded_atoms_df = order_2_3_4.loc[ind]
excluded_atoms = excluded_atoms_df.index.values
exclusion_categories["EXCLUDED_ATOMS_LIST"].extend(excluded_atoms)
exclusion_categories["NUMBER_EXCLUDED_ATOMS"].append(
len(excluded_atoms))
else:
exclusion_categories["NUMBER_EXCLUDED_ATOMS"].append(1)
exclusion_categories["EXCLUDED_ATOMS_LIST"].append(0)
for excl in amd.exclusion_sections:
_write_amber_data(file_handle, exclusion_categories[excl], excl)
written_categories.append(excl)
# Write headers for other sections (file will not work in AMBER without these)
for k in amd.data_labels:
if k not in written_categories:
if label_sizes[k] > 0:
data = np.zeros(label_sizes[k])
_write_amber_data(file_handle, data, k)
else:
file_handle.write((
"%%FLAG %s\n%s\n\n" % (k, amd.data_labels[k][1])).encode())
written_categories.append(k)
file_handle.close()
# Now we need to write out the INPCRD
if '.prmtop' in filename:
inpcrd_file = filename.replace('.prmtop', '.inpcrd')
else:
inpcrd_file = filename + '.inpcrd'
file_handle = open(inpcrd_file, "wb")
xyz = dl.get_atoms("XYZ", utype={"XYZ": "angstrom"})
file_handle.write("default_name\n".encode())
file_handle.write(("%6d\n" % xyz.shape[0]).encode())
_write_1d(file_handle, xyz.values.ravel(), 6, "%12.6f")
if output_sizes["IFBOX"] > 0:
box = pd.DataFrame(box_dimensions, index=[0])
box = box[['a', 'b', 'c', 'alpha', 'beta', 'gamma']]
_write_1d(file_handle, box.values.ravel(), 6, "%12.6f")
file_handle.close()
return 0
| en | 0.799806 | Writer for amber # AMBER local imports # print(data.shape, rem_data.shape) # Write data to file # TODO this a temporary solution to get the number of dihedrals stored in the datalayer. # Zip up the parameters This function examines a datalayer to determine if it is compatible with Amber. Conversions between functional forms and pairwise interaction mixing are performed (if possible). # Loop over force field information - check functional form compatibility # Will need to insert check to see if these can be easily converted (ex OPLS dihedral <-> charmmfsw) # Handle nonbonds. Make sure only LJ types are stored in datalayer. # Grab all pair interactions stored in datalayer. Amber must have pair interactions. # Get the number of atom types. The number of pair interactions should be equal to num_atom_types * (num_atom_types + 1)) / 2 # This will occur if there are no pair interactions stored in the dl. # Check that the stored mixing rule is compatible with amber. Amber should be able to handle any set of parameters, # but if we allow this, it should be something the user has to override somehow (or, they could apply the mixing # rule before calling the amber writer. # Calculate pair interactions according to amber. # This condition will be met if some pair interactions are stored, but not the correct number. # Check NB scaling factors are compatible with amber # Build atom-wise scaling list # Check that NB scaling factors are compatible with amber (ie 1,2 and 1,3 must be 0 (excluded)) ## TODO - need check scaling 0 is set for all order 2 and order 3 # Build and curate the data # Fill in default or raise error # Just say it's carbon...doesn't seem like this matters too much for amber # Check for residue_index # If molecule_index is set, set residue index to this. # Otherwise, set all to 1.0 Parameters ------------ dl : eex.DataLayer The datalayer containing information about the system to write filename : str The name of the file to write inpcrd : str, optional If None, attempts to read the file filename.replace("prmtop", "inpcrd") otherwise passes. # # First get information into Amber pointers. All keys are initially filled with zero. # Ones that are currently 0, but should be implemented eventually are marked with # Figure out what is hydrogen for the header # Build up an index of what is in hydrogen or not # Number of atoms # Number of bonds containing hydrogen # Number of bonds not containing hydrogen # MBONA + number of constraint bonds (MBONA = NBONA always) # Number of angles containing hydrogen # Number of angles not containing hydrogen # MTHETA + number of constraint angles (NTHETA = MTHETA always) # Number of torsions containing hydrogen # Number of torsions not containing hydrogen # Number of unique bond types # Number of unique angle types # Number of unique torsion types # Number of residues (not stable) # Number of distinct LJ atom types # Used to determine if this is a LES-compatible prmtop (??) # Number of excluded atoms - Set to num atoms for our test cases. Amber will not run with 0 # 0 - no box, 1 - orthorhombic box, 2 - truncated octahedron # Number of atoms in the largest residue # Set to 1 if a solvent CAP is being used # Number of extra points in the topology file # Needs check for orthorhomibic box (1) or truncated octahedron (2). Currently just 0 or 1 # Flag indicating whether a periodic box is present # Figure out size each section should be based on metadata # print("%30s %40s %d" % (k, v[0], int(eval(v[0], sizes_dict)))) # Write title and version information # Write pointers section # Write atom properties sections # Get data # Handle residues # We assume these are sorted WRT to atom and itself at the moment... not great # Write out term parameters # Build lists of data since AMBER holds this as 1D # Write out FLAGS # Scale by weird AMBER factors # Handle SOLVENT_POINTERS, ATOMS_PER_MOLECULE and BOX_DIMENSIONS. Only present if IFBOX>0. # Solvent pointers section # There are three numbers here - IPTRES, NSPM, NSPSOL # where # IPTRES = final residue part of solute, NSPM = total number of molecules, NSPSOL = first solvent molecule # Just say everything is solute for now. # Handle atoms per molecule # Write box dimensions section # Quick fix for radius set will be one line string description in files prepared by xleap # Handle NB data # Relevant headers = NONBOND_PARM_INDEX, LENNARD_JONES_ACOEF, LENNARD_JONES_BCOEF # Get parameters from datalayer using correct amber units # Build a_coeff, b_coeff, and nb_parm lists # Handle exclusions and scaling # Build NUMBER_EXCLUDED_ATOMS and EXCLUDED_ATOMS_LIST. # Write headers for other sections (file will not work in AMBER without these) # Now we need to write out the INPCRD | 2.135669 | 2 |
function/python/brightics/function/transform/sql/calcitepython/calcite2pd_utils/rexnode_util.py | parkjh80/studio | 202 | 6615297 | """
Copyright 2019 Samsung SDS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from ..functions.operators import binary_operators
from ..functions.operators import prefix_operators
class RexNodeTool(object):
def __init__(self):
self.__rextree = None
self.__fields = None
self.__tablename = None
self.__explained = []
self.__binary_ops = binary_operators
self.__prefix_ops = prefix_operators
def set_fields(self, fields):
self.__fields = fields
def set_rextree(self, root):
self.__rextree = root
def set_tablename(self, tablename):
self.__tablename = tablename
def clear(self):
self.__tablename = None
self.__rextree = None
self.__fields = None
self.__explained.clear()
def check_toSeries(self, exps, flag=True):
for exp in exps:
flag = self.__check_toSeries(exp, flag)
return flag
def __check_toSeries(self, exp, flag):
rexTypeName = exp['rexTypeName']
if rexTypeName == 'RexInputRef':
flag = False
elif rexTypeName == ['RexCall']:
operands = exp['operands']
for oprd in operands:
flag = self.__check_toSeries(oprd, flag)
return flag
def unparse(self,
rextree=None,
fields=None,
tablename=None,
tofield=False):
if rextree is not None:
self.__rextree = rextree
if fields is not None:
self.__fields = fields
self.__explain_rexnode(rextree, tablename, tofield=tofield)
return ''.join(self.__explained)
def __explain_rexinputref(self, rexnode):
index = rexnode['index']
fd = self.__fields[index]
if self.__tablename is not None:
self.__explained.append(self.__tablename + '.' + fd)
else:
self.__explained.append(fd)
def __explain_rexliteral(self, rexnode):
value = rexnode['value']
valueInstance = rexnode['valueInstance']
if valueInstance == 'NlsString':
self.__explained.append('\'')
self.__explained.append(value)
self.__explained.append('\'')
else:
self.__explained.append(str(value))
def __explain_rexcall(self, rexnode, tofield=False):
op_supertype = rexnode['opInstance']
operands = rexnode['operands']
opkind = rexnode['opKind']
if opkind == 'FLOORDIVIDE' and tofield:
opkind = 'DIVIDE'
if op_supertype == 'SqlBinaryOperator':
operator = self.__binary_ops[opkind]
self.__explain_binary_operator(operator, operands, tofield=tofield)
# we need to check the function name of identityFunc in calcite
elif op_supertype == 'IdentityFunc':
self.__explain_identity_function(operands)
elif op_supertype == 'SqlPrefixOperator':
operator = self.__prefix_ops[opkind]
self.__explain_prefix_operator(operator, operands)
elif op_supertype == 'SqlCastFunction':
if opkind == 'CAST':
oprd = operands[0]
if oprd['rexTypeName'] == 'RexInputRef':
self.__explain_rexinputref(oprd)
else:
raise ValueError
def __explain_binary_operator(self, operator, operands, tofield=False):
self.__explained.append('(')
self.__explain_rexnode(operands[0], tofield=tofield)
self.__explained.append(operator)
self.__explain_rexnode(operands[1], tofield=tofield)
self.__explained.append(')')
def __explain_prefix_operator(self, operator, operands, tofield=False):
self.__explained.append('(')
self.__explained.append(operator)
self.__explain_rexnode(operands[0], tofield=tofield)
self.__explained.append(')')
def __explain_identity_function(self, operands, tofield=False):
self.__explain_rexnode(operands[0], tofield=tofield)
def __explain_rexnode(self, rexnode, tablename=None, tofield=False):
if tablename is not None:
self.__tablename = tablename
rexnode_type = rexnode['rexTypeName']
if rexnode_type == 'RexInputRef':
self.__explain_rexinputref(rexnode)
elif rexnode_type == 'RexLiteral':
self.__explain_rexliteral(rexnode)
elif rexnode_type == 'RexCall':
self.__explain_rexcall(rexnode, tofield=tofield)
else:
raise ValueError('unknown rexnode_type')
def rexLiteral_get_value(self, exp):
valueInstance = exp['valueInstance']
value = exp['value']
if valueInstance == 'BigDecimal':
val = float(value)
if val.is_integer():
return int(val)
else:
return val
elif valueInstance == 'NlsString':
return value
elif valueInstance == 'Boolean':
return value
else:
raise ValueError('Unknown or UnImplemented rexLiteral Type')
RexNodeTool = RexNodeTool()
| """
Copyright 2019 Samsung SDS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from ..functions.operators import binary_operators
from ..functions.operators import prefix_operators
class RexNodeTool(object):
def __init__(self):
self.__rextree = None
self.__fields = None
self.__tablename = None
self.__explained = []
self.__binary_ops = binary_operators
self.__prefix_ops = prefix_operators
def set_fields(self, fields):
self.__fields = fields
def set_rextree(self, root):
self.__rextree = root
def set_tablename(self, tablename):
self.__tablename = tablename
def clear(self):
self.__tablename = None
self.__rextree = None
self.__fields = None
self.__explained.clear()
def check_toSeries(self, exps, flag=True):
for exp in exps:
flag = self.__check_toSeries(exp, flag)
return flag
def __check_toSeries(self, exp, flag):
rexTypeName = exp['rexTypeName']
if rexTypeName == 'RexInputRef':
flag = False
elif rexTypeName == ['RexCall']:
operands = exp['operands']
for oprd in operands:
flag = self.__check_toSeries(oprd, flag)
return flag
def unparse(self,
rextree=None,
fields=None,
tablename=None,
tofield=False):
if rextree is not None:
self.__rextree = rextree
if fields is not None:
self.__fields = fields
self.__explain_rexnode(rextree, tablename, tofield=tofield)
return ''.join(self.__explained)
def __explain_rexinputref(self, rexnode):
index = rexnode['index']
fd = self.__fields[index]
if self.__tablename is not None:
self.__explained.append(self.__tablename + '.' + fd)
else:
self.__explained.append(fd)
def __explain_rexliteral(self, rexnode):
value = rexnode['value']
valueInstance = rexnode['valueInstance']
if valueInstance == 'NlsString':
self.__explained.append('\'')
self.__explained.append(value)
self.__explained.append('\'')
else:
self.__explained.append(str(value))
def __explain_rexcall(self, rexnode, tofield=False):
op_supertype = rexnode['opInstance']
operands = rexnode['operands']
opkind = rexnode['opKind']
if opkind == 'FLOORDIVIDE' and tofield:
opkind = 'DIVIDE'
if op_supertype == 'SqlBinaryOperator':
operator = self.__binary_ops[opkind]
self.__explain_binary_operator(operator, operands, tofield=tofield)
# we need to check the function name of identityFunc in calcite
elif op_supertype == 'IdentityFunc':
self.__explain_identity_function(operands)
elif op_supertype == 'SqlPrefixOperator':
operator = self.__prefix_ops[opkind]
self.__explain_prefix_operator(operator, operands)
elif op_supertype == 'SqlCastFunction':
if opkind == 'CAST':
oprd = operands[0]
if oprd['rexTypeName'] == 'RexInputRef':
self.__explain_rexinputref(oprd)
else:
raise ValueError
def __explain_binary_operator(self, operator, operands, tofield=False):
self.__explained.append('(')
self.__explain_rexnode(operands[0], tofield=tofield)
self.__explained.append(operator)
self.__explain_rexnode(operands[1], tofield=tofield)
self.__explained.append(')')
def __explain_prefix_operator(self, operator, operands, tofield=False):
self.__explained.append('(')
self.__explained.append(operator)
self.__explain_rexnode(operands[0], tofield=tofield)
self.__explained.append(')')
def __explain_identity_function(self, operands, tofield=False):
self.__explain_rexnode(operands[0], tofield=tofield)
def __explain_rexnode(self, rexnode, tablename=None, tofield=False):
if tablename is not None:
self.__tablename = tablename
rexnode_type = rexnode['rexTypeName']
if rexnode_type == 'RexInputRef':
self.__explain_rexinputref(rexnode)
elif rexnode_type == 'RexLiteral':
self.__explain_rexliteral(rexnode)
elif rexnode_type == 'RexCall':
self.__explain_rexcall(rexnode, tofield=tofield)
else:
raise ValueError('unknown rexnode_type')
def rexLiteral_get_value(self, exp):
valueInstance = exp['valueInstance']
value = exp['value']
if valueInstance == 'BigDecimal':
val = float(value)
if val.is_integer():
return int(val)
else:
return val
elif valueInstance == 'NlsString':
return value
elif valueInstance == 'Boolean':
return value
else:
raise ValueError('Unknown or UnImplemented rexLiteral Type')
RexNodeTool = RexNodeTool()
| en | 0.84136 | Copyright 2019 Samsung SDS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. # we need to check the function name of identityFunc in calcite | 2.048744 | 2 |
lib/geo/srs.py | casmlab/quac | 34 | 6615298 | '''This module contains various geographic utilities related to spatial
reference systems.'''
# Copyright (c) Los Alamos National Security, LLC, and others.
import io
import json
import re
import sys
from django.contrib.gis import geos
from django.contrib.gis import gdal
import numpy as np
import pyproj
import testable
import u
### Custom spatial reference systems ###
class Magic_SRS_Dict(dict):
'''This class is a hack to make custom spatial reference systems available
to Python code. The basic idea is: we look like a dict, and lookups are
by SRID; values are gdal.SpatialReference objects. The reason we return
such objects is to avoid repeatedly parsing PROJ4 strings for our custom
SRSes. Example:
>>> srs = Magic_SRS_Dict()
>>> srs[4326].srid
4326
>>> srs[540036].srid
540036
>>> srs[540036].name
'Miller_Mm'
>>> srs[540036].proj
'+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +to_meter=1000000 +no_defs '
'''
def __init__(self):
for (srid, (name, proj4)) in CUSTOM_SRS.items():
# An explanation of the extreme fail here. SpatialReference objects
# created from proj4 text get a name of "unnamed" and an SRID of
# 4326, and there's nothing you can do about it (the relevant
# attributes are read-only). Our solution is to create an object,
# dump its WKT, munge that, and create a new object (can't switch to
# WKT because SpatiaLite doesn't grok it). Excuse me while I vomit.
wkt = gdal.SpatialReference(proj4).wkt
wkt = re.sub(r'unnamed', name, wkt)
wkt = re.sub(r'AUTHORITY\["EPSG","4326"\]',
'AUTHORITY["LOCAL","%d"]' % (srid), wkt)
self[srid] = gdal.SpatialReference(wkt)
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
self[key] = gdal.SpatialReference(key)
return self[key]
CUSTOM_SRS = {
# FIXME: If you need a km- or Mm-based version of a meter-based SRS with
# SRID=x, number the new one x*10+3 or x*10+6, respectively. There is code
# that relies on this numbering scheme in base.py.
# no proj4 on spatialreference.org for EPSG 32663; omitted
54003: ('Miller', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +units=m +no_defs'),
540033: ('Miller_Km', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +to_meter=1000 +no_defs'),
540036: ('Miller_Mm', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +to_meter=1000000 +no_defs'),
54009: ('Mollweide', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs'),
540093: ('Mollweide_Km', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +to_meter=1000 +no_defs'),
540096: ('Mollweide_Mm', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +to_meter=1000000 +no_defs'),
}
SRS = Magic_SRS_Dict()
SRID_WGS84 = u.WGS84_SRID # FIXME: eliminate redundant definition
SRS_WGS84 = SRS[SRID_WGS84]
SRID_EQAREA = 54009 # Mollweide. Units must be meters.
SRS_EQAREA = SRS[SRID_EQAREA]
### Geodesic computations ###
GEOD = pyproj.Geod(ellps='WGS84')
EARTH_RADIUS_KM = 6371.009 # http://en.wikipedia.org/wiki/Earth_radius
DEG2RAD = 0.017453293
# We have a couple of options for computing geodesic distances. We can use an
# ellipsoidal computation, which is accurate but slow (with the libraries we
# are using), or a spherical one, which is faster and can operate on vectors
# but is less accurate (up to about 0.3% error). Set the aliases below to
# choose which you prefer.
def geodesic_distance_ell(a, b):
'''Return the ellipsoidal geodesic distance in kilometers from geos.Point a
to geos.Point b, which can be in any SRS. For example, to compute
distance from BNA to LAX
(http://en.wikipedia.org/wiki/Great-circle_distance):
>>> geodesic_distance_ell(geos.Point(-86.67, 36.12, srid=4326),
... geos.Point(-118.40, 33.94, srid=4326))
2892.77...'''
return geodesic_distance_mp_ell(a, geos.MultiPoint([b], srid=b.srid))[0]
def geodesic_distance_sph(a, b):
'''Return the spherical geodesic distance in kilometers from geos.Point a
to geos.Point b. E.g. (this is in error by about 0.2%):
>>> geodesic_distance_sph(geos.Point(-86.67, 36.12, srid=4326),
... geos.Point(-118.40, 33.94, srid=4326))
2886.44...'''
return geodesic_distance_mp_sph(a, geos.MultiPoint([b], srid=b.srid))[0]
def geodesic_distance_mp_ell(a, b):
a = transform(a, SRID_WGS84)
b = transform(b, SRID_WGS84)
dist_m = np.array([GEOD.inv(a.x, a.y, bx, by)[2] for (bx, by) in b.coords])
return dist_m / 1000
def geodesic_distance_mp_sph(a, b):
# Formula from <http://williams.best.vwh.net/avform.htm#Dist>.
def c2as(seq):
xys = DEG2RAD * np.array(seq)
return (xys[:,0], xys[:,1])
assert (a.geom_type == 'Point' and b.geom_type == 'MultiPoint')
a = transform(a, SRID_WGS84)
b = transform(b, SRID_WGS84)
(alon, alat) = c2as([a.coords] * len(b))
(blon, blat) = c2as(b.coords)
return (EARTH_RADIUS_KM
* 2 * np.arcsin(np.sqrt((np.sin((alat - blat) / 2))**2
+ (np.cos(alat) * np.cos(blat)
* (np.sin((alon - blon) / 2))**2))))
geodesic_distance = geodesic_distance_sph
geodesic_distance_mp = geodesic_distance_mp_sph
def geodesic_area(p):
'''Return the geodesic area of Polygon or MultiPolygon p in km^2. This
simply projects to an equal-area projection and then computes the
geometric area; a notable alternative is the Chamberlain & Duquette
formula. For example, to compute the approximate area of Colorado:
>>> co = geos.Polygon([(-109.05, 41), (-102.05, 41), (-102.05, 37),
... (-109.05, 37), (-109.05, 41)], srid=4326)
>>> geodesic_area(co)
269492.44...
Wikipedia says the area is 269,837 km^2, so we're off by about 0.1%.'''
if (p.geom_type == 'Polygon'):
mp = geos.MultiPolygon([p])
elif (p.geom_type == 'MultiPolygon'):
mp = p
else:
raise TypeError('need Polygon or MultiPolygon, not %s' % (p.geom_type))
return (transform(p, SRID_EQAREA).area / 1e6)
### Transforming from one SRS to another ###
TRANSFORMERS = {}
def transform(geom, srid, always_copy=False):
'''Return geom transformed to SRID srid. The returned object may be geom
itself, if no transformation is needed, unless always_copy==True, in
which case a copy is always made.'''
# NOTE: This function works around a Django bug that prevents transforming
# from a custom SRID (https://code.djangoproject.com/ticket/19171). The
# workaround is to set a fake SRID on the source object, do the
# transformation, and the put the real SRID back.
if (geom.srid == srid and not always_copy):
return geom
try:
ct = TRANSFORMERS[(geom.srid, srid)]
except KeyError:
ct = gdal.CoordTransform(SRS[geom.srid], SRS[srid])
TRANSFORMERS[(geom.srid, srid)] = ct
source_srid_real = geom.srid
geom.srid = SRID_WGS84
result = geom.transform(ct, clone=True)
geom.srid = source_srid_real
return result
### Trimming geometries that slop off the globe ###
LATMAX = 89.99
LATMIN = -LATMAX
LONMAX = 180
LONMIN = -LONMAX
LON_BUFFER = 12
@u.memoize
def bounding_box_srid(srid):
'''Return a bounding box for given SRID as a polygon in that SRID. E.g.:
>>> bounding_box_srid(54003).coords
(((240181312.2..., -14671436.0...), (240181312.2..., 14671436.0...), (-240181312.2..., 14671436.03...), (-240181312.2..., -14671436.0...), (240181312.2..., -14671436.0...)),)'''
(xmin, xmax) = lon_bounds_srid(srid)
(ymin, ymax) = lat_bounds_srid(srid)
return geos.Polygon([(xmin, ymin), (xmin, ymax), (xmax, ymax),
(xmax, ymin), (xmin, ymin)], srid=srid)
def inbounds_p(geom):
'''Return True if geom is entirely in-bounds (i.e., geom == trim(geom)),
False otherwise. For example:
>>> inbounds_p(geos.Point(0, 90.1, srid=SRID_WGS84))
False'''
assert (geom.geom_type == 'Point'), 'untested for non-Points'
(s, n) = lat_bounds_srid(geom.srid)
return (geom.extent[1] > s and geom.extent[3] < n)
@u.memoize
def lat_bounds_srid(srid):
'''Return a tuple containing the Y coordinates of the (south, north) poles.
For example:
>>> lat_bounds_srid(4326)
(-89.99, 89.99)
>>> lat_bounds_srid(54003)
(-14671436.0..., 14671436.0...)'''
return (transform(geos.Point(0, LATMIN, srid=SRID_WGS84), srid).y,
transform(geos.Point(0, LATMAX, srid=SRID_WGS84), srid).y)
@u.memoize
def lon_bounds_srid(srid):
'''Return a tuple containing the X coordinates of the (west, east)
"boundaries" of the given SRID. This is a little arbitrary, because
there's no boundary (all the trigonometry works fine even if you go
around and around the world), but it's useful to be able to make a
boundary rectangle. Thus, we add a generous buffer.'''
xmin = transform(geos.Point(LONMIN, 0, srid=SRID_WGS84), srid).x
xmax = transform(geos.Point(LONMAX, 0, srid=SRID_WGS84), srid).x
return (xmin * LON_BUFFER, xmax * LON_BUFFER)
def trim(geom):
'''Given geometry geom in any SRS, remove any portions that extend off the
globe (i.e., latitude further north than the North Pole or south than
the South Pole) and return what's left. E.g.:
# 90 degrees latitude is roughly 14,675,057 meters in this SRS
>>> y = 15e6 # latitude
>>> x = 10e6 # longitude
>>> pl = geos.Polygon([(0,y), (x,0), (0,-y), (-x,0), (0,y)], srid=54003)
>>> trim(pl).coords
(((-219042.6..., 14671436.0...), (219042.6..., 14671436.0...), (10000000.0, 0.0), (219042.6..., -14671436.0...), (-219042.6..., -14671436.0...), (-10000000.0, 0.0), (-219042.6..., 14671436.0...)),)
WARNING: This function only works correctly on projections with straight
lines of latitude. E.g., azimuthal and conic projections will silently
return incorrect results. (FIXME: add an assertion for this.)
(FIXME: This doesn't test that the returned geometry makes any sense, is
not null, or is valid.)'''
# This actually requires a little finesse. If one defines a "trim polygon"
# in WGS84, it can only extend to +/- 180 degrees longitude if we are to
# transform it correctly to geom's SRS. On the other hand, geom can't be
# transformed to WGS84 because (by definition) it may contain vertices
# invalid in WGS84. So, we do more focused transformation of components.
return bounding_box_srid(geom.srid).intersection(geom)
### Input and output ###
def dump_geojson(basename, geoms):
'''Write a GeoJSON representation of geoms (transformed to WGS84, and
cannot be a GeometryCollection because mixed types in a layer are
unsupported in QGIS) to the file basename.geojson.'''
assert (geoms.geom_type != 'GeometryCollection')
d = { 'type': 'FeatureCollection',
'crs': { 'type': 'name',
'properties': { 'name': 'EPSG:%d' % (SRID_WGS84) } },
'features': [] }
geoms = transform(geoms, SRID_WGS84)
if (geoms.num_geom == 1):
geoms = geos.GeometryCollection([geoms])
for geom in geoms:
# FIXME: It is super lame that we can't get a datastructure instead of a
# JSON string from geometries.
d['features'].append({ 'type': 'Feature',
'properties': {},
'geometry': json.loads(geom.json) })
# can't dump directly to file b/c "TypeError: must be unicode, not str"
json_ = json.dumps(d, ensure_ascii=False, indent=2)
assert (isinstance(json_, str))
fp = io.open(basename + '.geojson', mode='wt', encoding='utf8')
fp.write(json_)
### Tests ###
# Test-Depends: geo
testable.register('''
# Make sure the SRIDs we're interested in are available.
>>> for srid in (4326, 54003, 540033, 540036, 54009, 540093, 540096):
... if not isinstance(SRS[srid], gdal.SpatialReference): srid
# Test that we can transform to and from the custom SRSes.
>>> a = geos.Point(1, 2, srid=SRID_WGS84)
>>> b = transform(a, 540036)
>>> a.srid
4326
>>> b.coords
(0.111..., 0.220...)
>>> b.srid
540036
>>> c = transform(b, 4326)
>>> c.srid
4326
>>> [round(x, 4) for x in c.coords]
[1.0, 2.0]
# geodesic_area() should except if we give it a bogus geometry type.
>>> geodesic_area(geos.Point(0,0))
Traceback (most recent call last):
...
TypeError: need Polygon or MultiPolygon, not Point
# inbounds_p() should work north/sound and on SRS that requires transform
>>> inbounds_p(geos.Point(0, 89.98, srid=SRID_WGS84))
True
>>> inbounds_p(geos.Point(0, 90.01, srid=SRID_WGS84))
False
>>> inbounds_p(geos.Point(0, -89.98, srid=SRID_WGS84))
True
>>> inbounds_p(geos.Point(0, -90.01, srid=SRID_WGS84))
False
>>> inbounds_p(geos.Point(0, 14671436.0, srid=54003))
True
>>> inbounds_p(geos.Point(0, 14671436.1, srid=54003))
False
>>> inbounds_p(geos.Point(0, -14671436.0, srid=54003))
True
>>> inbounds_p(geos.Point(0, -14671436.1, srid=54003))
False
# Ensure that trim() works on multipolygons.
>>> yo = 15e6
>>> yi = 14e6
>>> mp = geos.MultiPoint([geos.Point(0, yi), geos.Point(0, yo)], srid=54003)
>>> trim(mp).coords
(0.0, 14000000.0)
''')
| '''This module contains various geographic utilities related to spatial
reference systems.'''
# Copyright (c) Los Alamos National Security, LLC, and others.
import io
import json
import re
import sys
from django.contrib.gis import geos
from django.contrib.gis import gdal
import numpy as np
import pyproj
import testable
import u
### Custom spatial reference systems ###
class Magic_SRS_Dict(dict):
'''This class is a hack to make custom spatial reference systems available
to Python code. The basic idea is: we look like a dict, and lookups are
by SRID; values are gdal.SpatialReference objects. The reason we return
such objects is to avoid repeatedly parsing PROJ4 strings for our custom
SRSes. Example:
>>> srs = Magic_SRS_Dict()
>>> srs[4326].srid
4326
>>> srs[540036].srid
540036
>>> srs[540036].name
'Miller_Mm'
>>> srs[540036].proj
'+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +to_meter=1000000 +no_defs '
'''
def __init__(self):
for (srid, (name, proj4)) in CUSTOM_SRS.items():
# An explanation of the extreme fail here. SpatialReference objects
# created from proj4 text get a name of "unnamed" and an SRID of
# 4326, and there's nothing you can do about it (the relevant
# attributes are read-only). Our solution is to create an object,
# dump its WKT, munge that, and create a new object (can't switch to
# WKT because SpatiaLite doesn't grok it). Excuse me while I vomit.
wkt = gdal.SpatialReference(proj4).wkt
wkt = re.sub(r'unnamed', name, wkt)
wkt = re.sub(r'AUTHORITY\["EPSG","4326"\]',
'AUTHORITY["LOCAL","%d"]' % (srid), wkt)
self[srid] = gdal.SpatialReference(wkt)
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
self[key] = gdal.SpatialReference(key)
return self[key]
CUSTOM_SRS = {
# FIXME: If you need a km- or Mm-based version of a meter-based SRS with
# SRID=x, number the new one x*10+3 or x*10+6, respectively. There is code
# that relies on this numbering scheme in base.py.
# no proj4 on spatialreference.org for EPSG 32663; omitted
54003: ('Miller', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +units=m +no_defs'),
540033: ('Miller_Km', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +to_meter=1000 +no_defs'),
540036: ('Miller_Mm', '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +datum=WGS84 +to_meter=1000000 +no_defs'),
54009: ('Mollweide', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs'),
540093: ('Mollweide_Km', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +to_meter=1000 +no_defs'),
540096: ('Mollweide_Mm', '+proj=moll +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +to_meter=1000000 +no_defs'),
}
SRS = Magic_SRS_Dict()
SRID_WGS84 = u.WGS84_SRID # FIXME: eliminate redundant definition
SRS_WGS84 = SRS[SRID_WGS84]
SRID_EQAREA = 54009 # Mollweide. Units must be meters.
SRS_EQAREA = SRS[SRID_EQAREA]
### Geodesic computations ###
GEOD = pyproj.Geod(ellps='WGS84')
EARTH_RADIUS_KM = 6371.009 # http://en.wikipedia.org/wiki/Earth_radius
DEG2RAD = 0.017453293
# We have a couple of options for computing geodesic distances. We can use an
# ellipsoidal computation, which is accurate but slow (with the libraries we
# are using), or a spherical one, which is faster and can operate on vectors
# but is less accurate (up to about 0.3% error). Set the aliases below to
# choose which you prefer.
def geodesic_distance_ell(a, b):
'''Return the ellipsoidal geodesic distance in kilometers from geos.Point a
to geos.Point b, which can be in any SRS. For example, to compute
distance from BNA to LAX
(http://en.wikipedia.org/wiki/Great-circle_distance):
>>> geodesic_distance_ell(geos.Point(-86.67, 36.12, srid=4326),
... geos.Point(-118.40, 33.94, srid=4326))
2892.77...'''
return geodesic_distance_mp_ell(a, geos.MultiPoint([b], srid=b.srid))[0]
def geodesic_distance_sph(a, b):
'''Return the spherical geodesic distance in kilometers from geos.Point a
to geos.Point b. E.g. (this is in error by about 0.2%):
>>> geodesic_distance_sph(geos.Point(-86.67, 36.12, srid=4326),
... geos.Point(-118.40, 33.94, srid=4326))
2886.44...'''
return geodesic_distance_mp_sph(a, geos.MultiPoint([b], srid=b.srid))[0]
def geodesic_distance_mp_ell(a, b):
a = transform(a, SRID_WGS84)
b = transform(b, SRID_WGS84)
dist_m = np.array([GEOD.inv(a.x, a.y, bx, by)[2] for (bx, by) in b.coords])
return dist_m / 1000
def geodesic_distance_mp_sph(a, b):
# Formula from <http://williams.best.vwh.net/avform.htm#Dist>.
def c2as(seq):
xys = DEG2RAD * np.array(seq)
return (xys[:,0], xys[:,1])
assert (a.geom_type == 'Point' and b.geom_type == 'MultiPoint')
a = transform(a, SRID_WGS84)
b = transform(b, SRID_WGS84)
(alon, alat) = c2as([a.coords] * len(b))
(blon, blat) = c2as(b.coords)
return (EARTH_RADIUS_KM
* 2 * np.arcsin(np.sqrt((np.sin((alat - blat) / 2))**2
+ (np.cos(alat) * np.cos(blat)
* (np.sin((alon - blon) / 2))**2))))
geodesic_distance = geodesic_distance_sph
geodesic_distance_mp = geodesic_distance_mp_sph
def geodesic_area(p):
'''Return the geodesic area of Polygon or MultiPolygon p in km^2. This
simply projects to an equal-area projection and then computes the
geometric area; a notable alternative is the Chamberlain & Duquette
formula. For example, to compute the approximate area of Colorado:
>>> co = geos.Polygon([(-109.05, 41), (-102.05, 41), (-102.05, 37),
... (-109.05, 37), (-109.05, 41)], srid=4326)
>>> geodesic_area(co)
269492.44...
Wikipedia says the area is 269,837 km^2, so we're off by about 0.1%.'''
if (p.geom_type == 'Polygon'):
mp = geos.MultiPolygon([p])
elif (p.geom_type == 'MultiPolygon'):
mp = p
else:
raise TypeError('need Polygon or MultiPolygon, not %s' % (p.geom_type))
return (transform(p, SRID_EQAREA).area / 1e6)
### Transforming from one SRS to another ###
TRANSFORMERS = {}
def transform(geom, srid, always_copy=False):
'''Return geom transformed to SRID srid. The returned object may be geom
itself, if no transformation is needed, unless always_copy==True, in
which case a copy is always made.'''
# NOTE: This function works around a Django bug that prevents transforming
# from a custom SRID (https://code.djangoproject.com/ticket/19171). The
# workaround is to set a fake SRID on the source object, do the
# transformation, and the put the real SRID back.
if (geom.srid == srid and not always_copy):
return geom
try:
ct = TRANSFORMERS[(geom.srid, srid)]
except KeyError:
ct = gdal.CoordTransform(SRS[geom.srid], SRS[srid])
TRANSFORMERS[(geom.srid, srid)] = ct
source_srid_real = geom.srid
geom.srid = SRID_WGS84
result = geom.transform(ct, clone=True)
geom.srid = source_srid_real
return result
### Trimming geometries that slop off the globe ###
LATMAX = 89.99
LATMIN = -LATMAX
LONMAX = 180
LONMIN = -LONMAX
LON_BUFFER = 12
@u.memoize
def bounding_box_srid(srid):
'''Return a bounding box for given SRID as a polygon in that SRID. E.g.:
>>> bounding_box_srid(54003).coords
(((240181312.2..., -14671436.0...), (240181312.2..., 14671436.0...), (-240181312.2..., 14671436.03...), (-240181312.2..., -14671436.0...), (240181312.2..., -14671436.0...)),)'''
(xmin, xmax) = lon_bounds_srid(srid)
(ymin, ymax) = lat_bounds_srid(srid)
return geos.Polygon([(xmin, ymin), (xmin, ymax), (xmax, ymax),
(xmax, ymin), (xmin, ymin)], srid=srid)
def inbounds_p(geom):
'''Return True if geom is entirely in-bounds (i.e., geom == trim(geom)),
False otherwise. For example:
>>> inbounds_p(geos.Point(0, 90.1, srid=SRID_WGS84))
False'''
assert (geom.geom_type == 'Point'), 'untested for non-Points'
(s, n) = lat_bounds_srid(geom.srid)
return (geom.extent[1] > s and geom.extent[3] < n)
@u.memoize
def lat_bounds_srid(srid):
'''Return a tuple containing the Y coordinates of the (south, north) poles.
For example:
>>> lat_bounds_srid(4326)
(-89.99, 89.99)
>>> lat_bounds_srid(54003)
(-14671436.0..., 14671436.0...)'''
return (transform(geos.Point(0, LATMIN, srid=SRID_WGS84), srid).y,
transform(geos.Point(0, LATMAX, srid=SRID_WGS84), srid).y)
@u.memoize
def lon_bounds_srid(srid):
'''Return a tuple containing the X coordinates of the (west, east)
"boundaries" of the given SRID. This is a little arbitrary, because
there's no boundary (all the trigonometry works fine even if you go
around and around the world), but it's useful to be able to make a
boundary rectangle. Thus, we add a generous buffer.'''
xmin = transform(geos.Point(LONMIN, 0, srid=SRID_WGS84), srid).x
xmax = transform(geos.Point(LONMAX, 0, srid=SRID_WGS84), srid).x
return (xmin * LON_BUFFER, xmax * LON_BUFFER)
def trim(geom):
'''Given geometry geom in any SRS, remove any portions that extend off the
globe (i.e., latitude further north than the North Pole or south than
the South Pole) and return what's left. E.g.:
# 90 degrees latitude is roughly 14,675,057 meters in this SRS
>>> y = 15e6 # latitude
>>> x = 10e6 # longitude
>>> pl = geos.Polygon([(0,y), (x,0), (0,-y), (-x,0), (0,y)], srid=54003)
>>> trim(pl).coords
(((-219042.6..., 14671436.0...), (219042.6..., 14671436.0...), (10000000.0, 0.0), (219042.6..., -14671436.0...), (-219042.6..., -14671436.0...), (-10000000.0, 0.0), (-219042.6..., 14671436.0...)),)
WARNING: This function only works correctly on projections with straight
lines of latitude. E.g., azimuthal and conic projections will silently
return incorrect results. (FIXME: add an assertion for this.)
(FIXME: This doesn't test that the returned geometry makes any sense, is
not null, or is valid.)'''
# This actually requires a little finesse. If one defines a "trim polygon"
# in WGS84, it can only extend to +/- 180 degrees longitude if we are to
# transform it correctly to geom's SRS. On the other hand, geom can't be
# transformed to WGS84 because (by definition) it may contain vertices
# invalid in WGS84. So, we do more focused transformation of components.
return bounding_box_srid(geom.srid).intersection(geom)
### Input and output ###
def dump_geojson(basename, geoms):
'''Write a GeoJSON representation of geoms (transformed to WGS84, and
cannot be a GeometryCollection because mixed types in a layer are
unsupported in QGIS) to the file basename.geojson.'''
assert (geoms.geom_type != 'GeometryCollection')
d = { 'type': 'FeatureCollection',
'crs': { 'type': 'name',
'properties': { 'name': 'EPSG:%d' % (SRID_WGS84) } },
'features': [] }
geoms = transform(geoms, SRID_WGS84)
if (geoms.num_geom == 1):
geoms = geos.GeometryCollection([geoms])
for geom in geoms:
# FIXME: It is super lame that we can't get a datastructure instead of a
# JSON string from geometries.
d['features'].append({ 'type': 'Feature',
'properties': {},
'geometry': json.loads(geom.json) })
# can't dump directly to file b/c "TypeError: must be unicode, not str"
json_ = json.dumps(d, ensure_ascii=False, indent=2)
assert (isinstance(json_, str))
fp = io.open(basename + '.geojson', mode='wt', encoding='utf8')
fp.write(json_)
### Tests ###
# Test-Depends: geo
testable.register('''
# Make sure the SRIDs we're interested in are available.
>>> for srid in (4326, 54003, 540033, 540036, 54009, 540093, 540096):
... if not isinstance(SRS[srid], gdal.SpatialReference): srid
# Test that we can transform to and from the custom SRSes.
>>> a = geos.Point(1, 2, srid=SRID_WGS84)
>>> b = transform(a, 540036)
>>> a.srid
4326
>>> b.coords
(0.111..., 0.220...)
>>> b.srid
540036
>>> c = transform(b, 4326)
>>> c.srid
4326
>>> [round(x, 4) for x in c.coords]
[1.0, 2.0]
# geodesic_area() should except if we give it a bogus geometry type.
>>> geodesic_area(geos.Point(0,0))
Traceback (most recent call last):
...
TypeError: need Polygon or MultiPolygon, not Point
# inbounds_p() should work north/sound and on SRS that requires transform
>>> inbounds_p(geos.Point(0, 89.98, srid=SRID_WGS84))
True
>>> inbounds_p(geos.Point(0, 90.01, srid=SRID_WGS84))
False
>>> inbounds_p(geos.Point(0, -89.98, srid=SRID_WGS84))
True
>>> inbounds_p(geos.Point(0, -90.01, srid=SRID_WGS84))
False
>>> inbounds_p(geos.Point(0, 14671436.0, srid=54003))
True
>>> inbounds_p(geos.Point(0, 14671436.1, srid=54003))
False
>>> inbounds_p(geos.Point(0, -14671436.0, srid=54003))
True
>>> inbounds_p(geos.Point(0, -14671436.1, srid=54003))
False
# Ensure that trim() works on multipolygons.
>>> yo = 15e6
>>> yi = 14e6
>>> mp = geos.MultiPoint([geos.Point(0, yi), geos.Point(0, yo)], srid=54003)
>>> trim(mp).coords
(0.0, 14000000.0)
''')
| en | 0.738892 | This module contains various geographic utilities related to spatial reference systems. # Copyright (c) Los Alamos National Security, LLC, and others. ### Custom spatial reference systems ### This class is a hack to make custom spatial reference systems available to Python code. The basic idea is: we look like a dict, and lookups are by SRID; values are gdal.SpatialReference objects. The reason we return such objects is to avoid repeatedly parsing PROJ4 strings for our custom SRSes. Example: >>> srs = Magic_SRS_Dict() >>> srs[4326].srid 4326 >>> srs[540036].srid 540036 >>> srs[540036].name 'Miller_Mm' >>> srs[540036].proj '+proj=mill +lat_0=0 +lon_0=0 +x_0=0 +y_0=0 +R_A +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +to_meter=1000000 +no_defs ' # An explanation of the extreme fail here. SpatialReference objects # created from proj4 text get a name of "unnamed" and an SRID of # 4326, and there's nothing you can do about it (the relevant # attributes are read-only). Our solution is to create an object, # dump its WKT, munge that, and create a new object (can't switch to # WKT because SpatiaLite doesn't grok it). Excuse me while I vomit. # FIXME: If you need a km- or Mm-based version of a meter-based SRS with # SRID=x, number the new one x*10+3 or x*10+6, respectively. There is code # that relies on this numbering scheme in base.py. # no proj4 on spatialreference.org for EPSG 32663; omitted # FIXME: eliminate redundant definition # Mollweide. Units must be meters. ### Geodesic computations ### # http://en.wikipedia.org/wiki/Earth_radius # We have a couple of options for computing geodesic distances. We can use an # ellipsoidal computation, which is accurate but slow (with the libraries we # are using), or a spherical one, which is faster and can operate on vectors # but is less accurate (up to about 0.3% error). Set the aliases below to # choose which you prefer. Return the ellipsoidal geodesic distance in kilometers from geos.Point a to geos.Point b, which can be in any SRS. For example, to compute distance from BNA to LAX (http://en.wikipedia.org/wiki/Great-circle_distance): >>> geodesic_distance_ell(geos.Point(-86.67, 36.12, srid=4326), ... geos.Point(-118.40, 33.94, srid=4326)) 2892.77... Return the spherical geodesic distance in kilometers from geos.Point a to geos.Point b. E.g. (this is in error by about 0.2%): >>> geodesic_distance_sph(geos.Point(-86.67, 36.12, srid=4326), ... geos.Point(-118.40, 33.94, srid=4326)) 2886.44... # Formula from <http://williams.best.vwh.net/avform.htm#Dist>. Return the geodesic area of Polygon or MultiPolygon p in km^2. This simply projects to an equal-area projection and then computes the geometric area; a notable alternative is the Chamberlain & Duquette formula. For example, to compute the approximate area of Colorado: >>> co = geos.Polygon([(-109.05, 41), (-102.05, 41), (-102.05, 37), ... (-109.05, 37), (-109.05, 41)], srid=4326) >>> geodesic_area(co) 269492.44... Wikipedia says the area is 269,837 km^2, so we're off by about 0.1%. ### Transforming from one SRS to another ### Return geom transformed to SRID srid. The returned object may be geom itself, if no transformation is needed, unless always_copy==True, in which case a copy is always made. # NOTE: This function works around a Django bug that prevents transforming # from a custom SRID (https://code.djangoproject.com/ticket/19171). The # workaround is to set a fake SRID on the source object, do the # transformation, and the put the real SRID back. ### Trimming geometries that slop off the globe ### Return a bounding box for given SRID as a polygon in that SRID. E.g.: >>> bounding_box_srid(54003).coords (((240181312.2..., -14671436.0...), (240181312.2..., 14671436.0...), (-240181312.2..., 14671436.03...), (-240181312.2..., -14671436.0...), (240181312.2..., -14671436.0...)),) Return True if geom is entirely in-bounds (i.e., geom == trim(geom)), False otherwise. For example: >>> inbounds_p(geos.Point(0, 90.1, srid=SRID_WGS84)) False Return a tuple containing the Y coordinates of the (south, north) poles. For example: >>> lat_bounds_srid(4326) (-89.99, 89.99) >>> lat_bounds_srid(54003) (-14671436.0..., 14671436.0...) Return a tuple containing the X coordinates of the (west, east) "boundaries" of the given SRID. This is a little arbitrary, because there's no boundary (all the trigonometry works fine even if you go around and around the world), but it's useful to be able to make a boundary rectangle. Thus, we add a generous buffer. Given geometry geom in any SRS, remove any portions that extend off the globe (i.e., latitude further north than the North Pole or south than the South Pole) and return what's left. E.g.: # 90 degrees latitude is roughly 14,675,057 meters in this SRS >>> y = 15e6 # latitude >>> x = 10e6 # longitude >>> pl = geos.Polygon([(0,y), (x,0), (0,-y), (-x,0), (0,y)], srid=54003) >>> trim(pl).coords (((-219042.6..., 14671436.0...), (219042.6..., 14671436.0...), (10000000.0, 0.0), (219042.6..., -14671436.0...), (-219042.6..., -14671436.0...), (-10000000.0, 0.0), (-219042.6..., 14671436.0...)),) WARNING: This function only works correctly on projections with straight lines of latitude. E.g., azimuthal and conic projections will silently return incorrect results. (FIXME: add an assertion for this.) (FIXME: This doesn't test that the returned geometry makes any sense, is not null, or is valid.) # This actually requires a little finesse. If one defines a "trim polygon" # in WGS84, it can only extend to +/- 180 degrees longitude if we are to # transform it correctly to geom's SRS. On the other hand, geom can't be # transformed to WGS84 because (by definition) it may contain vertices # invalid in WGS84. So, we do more focused transformation of components. ### Input and output ### Write a GeoJSON representation of geoms (transformed to WGS84, and cannot be a GeometryCollection because mixed types in a layer are unsupported in QGIS) to the file basename.geojson. # FIXME: It is super lame that we can't get a datastructure instead of a # JSON string from geometries. # can't dump directly to file b/c "TypeError: must be unicode, not str" ### Tests ### # Test-Depends: geo # Make sure the SRIDs we're interested in are available. >>> for srid in (4326, 54003, 540033, 540036, 54009, 540093, 540096): ... if not isinstance(SRS[srid], gdal.SpatialReference): srid # Test that we can transform to and from the custom SRSes. >>> a = geos.Point(1, 2, srid=SRID_WGS84) >>> b = transform(a, 540036) >>> a.srid 4326 >>> b.coords (0.111..., 0.220...) >>> b.srid 540036 >>> c = transform(b, 4326) >>> c.srid 4326 >>> [round(x, 4) for x in c.coords] [1.0, 2.0] # geodesic_area() should except if we give it a bogus geometry type. >>> geodesic_area(geos.Point(0,0)) Traceback (most recent call last): ... TypeError: need Polygon or MultiPolygon, not Point # inbounds_p() should work north/sound and on SRS that requires transform >>> inbounds_p(geos.Point(0, 89.98, srid=SRID_WGS84)) True >>> inbounds_p(geos.Point(0, 90.01, srid=SRID_WGS84)) False >>> inbounds_p(geos.Point(0, -89.98, srid=SRID_WGS84)) True >>> inbounds_p(geos.Point(0, -90.01, srid=SRID_WGS84)) False >>> inbounds_p(geos.Point(0, 14671436.0, srid=54003)) True >>> inbounds_p(geos.Point(0, 14671436.1, srid=54003)) False >>> inbounds_p(geos.Point(0, -14671436.0, srid=54003)) True >>> inbounds_p(geos.Point(0, -14671436.1, srid=54003)) False # Ensure that trim() works on multipolygons. >>> yo = 15e6 >>> yi = 14e6 >>> mp = geos.MultiPoint([geos.Point(0, yi), geos.Point(0, yo)], srid=54003) >>> trim(mp).coords (0.0, 14000000.0) | 2.534754 | 3 |
S4/S4 Decompiler/Old Libraries/xdis/dropbox/decrypt25.py | NeonOcean/Environment | 1 | 6615299 | #!/usr/bin/env python
# Copyright <NAME>, 2012, License: GPL-2.0
# Adaption and generalization for xdis use by <NAME>
# Dropbox Python bytecode decryption tool for Dropbox versions of 1.1x
# (and possibly earlier) which uses Python bytecode 2.5.
from __future__ import print_function
import types
import struct
from xdis import PYTHON3
import xdis.marsh as xmarshal
from xdis.code import Code2Compat
def rng(a, b):
b = ((b << 13) ^ b) & 0xffffffff
c = (b ^ (b >> 17))
c = (c ^ (c << 5))
return (a * 69069 + c + 0x6611CB3B) & 0xffffffff
# this is replaced by mersenne in newer versions
def get_keys(a, b):
ka = rng(a,b)
kb = rng(ka, a)
kc = rng(kb, ka)
kd = rng(kc, kb)
ke = rng(kd, kc)
return (kb,kc,kd,ke)
def MX(z, y, sum, key, p, e):
return (((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4)) ^ ((sum ^ y) + (key[(p & 3)^e] ^ z)))
def tea_decipher(v, key):
"""
Tiny Decryption Algorithm decription (TEA)
See https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm
"""
DELTA = 0x9e3779b9
n = len(v)
rounds = 6 + 52//n
sum = (rounds*DELTA)
y = v[0]
while sum != 0:
e = (sum >> 2) & 3
for p in range(n-1, -1, -1):
z = v[(n + p - 1) % n]
v[p] = (v[p] - MX(z, y, sum, key, p, e)) & 0xffffffff
y = v[p]
sum -= DELTA
return v
def load_code(self):
"""
Returns a Python code object like xdis.unmarshal.load_code(),
but in we decrypt the data in self.bufstr.
That is:
* calculate the TEA key,
* decrypt self.bufstr
* create and return a Python code-object
"""
a = self.load_int()
b = self.load_int()
key = get_keys(a, b)
padsize = (b + 15) & ~0xf
intsize = padsize/4
data = self.bufstr[self.bufpos:self.bufpos+padsize]
# print("%d: %d (%d=%d)" % (self.bufpos, b, padsize, len(data)))
data = list(struct.unpack('<%dL' % intsize, data))
tea_decipher(data, key)
self.bufpos += padsize
obj = xmarshal._FastUnmarshaller(struct.pack('<%dL' % intsize, *data))
code = obj.load_code()
co_code = patch(code.co_code)
if PYTHON3:
return Code2Compat(code.co_argcount, code.co_nlocals, code.co_stacksize,
code.co_flags,
co_code, code.co_consts, code.co_names, code.co_varnames,
code.co_filename, code.co_name, code.co_firstlineno,
code.co_lnotab, code.co_freevars, code.co_cellvars)
else:
return types.CodeType(code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags,
co_code, code.co_consts, code.co_names, code.co_varnames,
code.co_filename, code.co_name, code.co_firstlineno,
code.co_lnotab, code.co_freevars, code.co_cellvars)
try:
a = bytearray()
except:
class bytearray(object):
def __init__(self, s):
self.l = map(ord, s)
def __setitem__(self, idx, val):
self.l[idx] = val
def __getitem__(self, idx):
return self.l[idx]
def __str__(self):
return ''.join(map(chr, self.l))
def __len__(self):
return len(self.l)
# Automatically generated opcode substitution table for v1
# A different dropbox table for v.2 table is at
# https://github.com/kholia/dedrop/blob/master/src/dedrop-v2/dedrop-v2.py
# They are similar but not the same.
table = { 0: 0, 1: 87, 2: 66, 4: 25, 6: 55, 7: 62,
9: 71, 10: 79, 12: 21, 13: 4, 14: 72, 15: 1, 16: 30,
17: 31, 18: 32, 19: 33, 22: 63,
26: 86, 29: 56, 31: 60,
33: 73, 34: 15, 35: 74, 36: 20, 38: 12, 39: 68, 40: 80,
41: 22, 42: 89, 43: 26, 50: 64, 51: 82, 52: 23, 54: 11,
55: 24,
56: 84, 59: 2, 60: 3, 61: 40, 62: 41, 63: 42, 64: 43,
65: 85, 66: 83, 67: 88, 68: 18, 69: 61, 70: 116, 71: 126,
72: 100,
73: 110, 74: 120, 75: 122, 76: 132,
77: 133, 78: 104, 79: 101, 80: 102,
81: 93, 82: 125, 83: 111, 84: 95, 85: 134, 86: 105,
88: 107, 89: 108, 90: 112, 91: 130, 92: 124,
93: 92, 94: 91, 95: 90,
97: 135, 99: 136, 100: 137, 101: 106,
102: 131, 103: 113, 104: 99,
105: 97, 106: 121, 107: 103, 111: 140,
112: 141,
113: 142}
# manual mapping of the rest
table[37] = 81
table[28] = 19
table[87] = 96
table[21] = 65
table[96] = 119
table[8] = 57
table[32] = 28
table[44] = 50
table[45] = 51
table[46] = 52
table[47] = 53
table[23] = 78
table[24] = 77
table[3] = 59
table[11] = 75
table[58] = 76
misses = {}
def patch(code):
code = bytearray(code)
i = 0
n = len(code)
while i < n:
op = code[i]
if not op in table:
print("missing opcode %d. code: " % op, repr(str(code)))
misses[op] = misses.get(op, 0) + 1
code[i] = table.get(op,op)
i += 1
if table.get(op,op) >= 90: # opcode.HAVE_ARGUMENT:
i += 2
return bytes(code) if PYTHON3 else str(code)
try: from __pypy__ import builtinify
except ImportError: builtinify = lambda f: f
@builtinify
def loads(s):
"""
xdis.marshal.load() but with its dispatch load_code() function replaced
with our decoding version.
"""
um = xmarshal._FastUnmarshaller(s)
um.dispatch[xmarshal.TYPE_CODE] = load_code
return um.load()
def fix_dropbox_pyc(fp, fixed_pyc='/tmp/test.pyc'):
source_size = struct.unpack("I", fp.read(4))[0] # size mod 2**32
ts = fp.read(4)
timestamp = struct.unpack("I", ts)[0]
b = fp.read()
co = loads(b)
return 2.5, timestamp, 62131, co, False, source_size
def fix_dir(path):
import os
for root, dirs, files in os.walk(path):
for name in files:
if not name.endswith('pyc'): continue
name = os.path.join(root, name)
print("fixing", name)
data = open(name).read()
try:
c = xmarshal.loads(data[8:])
except Exception as e:
print("error", e, repr(e))
# print repr(data[8:])
continue
# fix the version indicator and save
open(name, "w").write(
"\xb3\xf2\r\n" + data[4:8] + xmarshal.dumps(c))
if __name__ == '__main__':
import os, sys
if sys.argv != 2:
print("Usage: %s python-file" % os.path.basename(sys.argv[0]))
sys.exit(1)
fix_dir(sys.argv[1])
# for the sake of completeness, here are the code-fragments to automatically generate
# the opcode substitution table
if 0:
import marshal
def fill(c, d):
if len(c.co_code) != len(d.co_code):
print("len mismatch", c, d)
return
for i, j in zip(c.co_code, d.co_code):
# if i in m and not table[i] == j:
# print "mismatch %c (%x) => %c (%x)" % (ord(i),ord(i),ord(j),ord(j))
v = table.setdefault(i, {})
v[j] = v.get(j, 0) + 1
pass
return
for root, dirs, files in os.walk('library'):
for name in files:
name = os.path.join(root, name)
if not name.endswith('pyc'): continue
f2 = os.path.join('/tmp/python-defaults-2.7.2/Python-2.5.4/Lib', name[8:])
if not os.path.exists(f2): continue
print("loading", name)
try:
c = xmarshal.loads(open(name).read()[8:])
except:
print("error", name)
continue
d = marshal.loads(open(f2).read()[8:])
fill(c, d)
codes_c = filter(lambda x: type(x) == type(c), c.co_consts)
codes_d = filter(lambda x: type(x) == type(c), d.co_consts)
for i, j in zip(codes_c, codes_d):
fill(i, j)
pass
pass
def print_table(m):
k = m.keys()
k.sort()
table = {}
for i in k:
print("%c (%02x %s) =>" %
(ord(i), ord(i), bin(ord(i))), end='')
for j, count in m[i].iteritems():
if j == i: continue
table[ord(i)] = ord(j)
print("\t%c (%02x %s) [%d]" %
(ord(j), ord(j), bin(ord(j)), count), end="")
# print("%c (%02x %s) => %c (%02x %s)\t%d\t%s" % (ord(i),ord(i),bin(ord(i)),ord(j),ord(j),bin(ord(j)),ord(j)-ord(i),bin(ord(i)^ord(j)|0x100).replace('0', ' ')))
print()
return table
import re
re.compile('offset loc_(\w+)').findall('dd offset loc_8096DC4, offset loc_8096963, offset loc_8095462')
def load(name):
a = re.compile('offset loc_(\w+)').findall(open(name).read())
a = [int(i, 16) for i in a]
c = a[:]
c.sort()
c = [(i, c.index(i)) for i in a]
d = {}
for i, (addr, pos) in enumerate(c):
d[addr] = (i, pos)
return c, d
| #!/usr/bin/env python
# Copyright <NAME>, 2012, License: GPL-2.0
# Adaption and generalization for xdis use by <NAME>
# Dropbox Python bytecode decryption tool for Dropbox versions of 1.1x
# (and possibly earlier) which uses Python bytecode 2.5.
from __future__ import print_function
import types
import struct
from xdis import PYTHON3
import xdis.marsh as xmarshal
from xdis.code import Code2Compat
def rng(a, b):
b = ((b << 13) ^ b) & 0xffffffff
c = (b ^ (b >> 17))
c = (c ^ (c << 5))
return (a * 69069 + c + 0x6611CB3B) & 0xffffffff
# this is replaced by mersenne in newer versions
def get_keys(a, b):
ka = rng(a,b)
kb = rng(ka, a)
kc = rng(kb, ka)
kd = rng(kc, kb)
ke = rng(kd, kc)
return (kb,kc,kd,ke)
def MX(z, y, sum, key, p, e):
return (((z >> 5 ^ y << 2) + (y >> 3 ^ z << 4)) ^ ((sum ^ y) + (key[(p & 3)^e] ^ z)))
def tea_decipher(v, key):
"""
Tiny Decryption Algorithm decription (TEA)
See https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm
"""
DELTA = 0x9e3779b9
n = len(v)
rounds = 6 + 52//n
sum = (rounds*DELTA)
y = v[0]
while sum != 0:
e = (sum >> 2) & 3
for p in range(n-1, -1, -1):
z = v[(n + p - 1) % n]
v[p] = (v[p] - MX(z, y, sum, key, p, e)) & 0xffffffff
y = v[p]
sum -= DELTA
return v
def load_code(self):
"""
Returns a Python code object like xdis.unmarshal.load_code(),
but in we decrypt the data in self.bufstr.
That is:
* calculate the TEA key,
* decrypt self.bufstr
* create and return a Python code-object
"""
a = self.load_int()
b = self.load_int()
key = get_keys(a, b)
padsize = (b + 15) & ~0xf
intsize = padsize/4
data = self.bufstr[self.bufpos:self.bufpos+padsize]
# print("%d: %d (%d=%d)" % (self.bufpos, b, padsize, len(data)))
data = list(struct.unpack('<%dL' % intsize, data))
tea_decipher(data, key)
self.bufpos += padsize
obj = xmarshal._FastUnmarshaller(struct.pack('<%dL' % intsize, *data))
code = obj.load_code()
co_code = patch(code.co_code)
if PYTHON3:
return Code2Compat(code.co_argcount, code.co_nlocals, code.co_stacksize,
code.co_flags,
co_code, code.co_consts, code.co_names, code.co_varnames,
code.co_filename, code.co_name, code.co_firstlineno,
code.co_lnotab, code.co_freevars, code.co_cellvars)
else:
return types.CodeType(code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags,
co_code, code.co_consts, code.co_names, code.co_varnames,
code.co_filename, code.co_name, code.co_firstlineno,
code.co_lnotab, code.co_freevars, code.co_cellvars)
try:
a = bytearray()
except:
class bytearray(object):
def __init__(self, s):
self.l = map(ord, s)
def __setitem__(self, idx, val):
self.l[idx] = val
def __getitem__(self, idx):
return self.l[idx]
def __str__(self):
return ''.join(map(chr, self.l))
def __len__(self):
return len(self.l)
# Automatically generated opcode substitution table for v1
# A different dropbox table for v.2 table is at
# https://github.com/kholia/dedrop/blob/master/src/dedrop-v2/dedrop-v2.py
# They are similar but not the same.
table = { 0: 0, 1: 87, 2: 66, 4: 25, 6: 55, 7: 62,
9: 71, 10: 79, 12: 21, 13: 4, 14: 72, 15: 1, 16: 30,
17: 31, 18: 32, 19: 33, 22: 63,
26: 86, 29: 56, 31: 60,
33: 73, 34: 15, 35: 74, 36: 20, 38: 12, 39: 68, 40: 80,
41: 22, 42: 89, 43: 26, 50: 64, 51: 82, 52: 23, 54: 11,
55: 24,
56: 84, 59: 2, 60: 3, 61: 40, 62: 41, 63: 42, 64: 43,
65: 85, 66: 83, 67: 88, 68: 18, 69: 61, 70: 116, 71: 126,
72: 100,
73: 110, 74: 120, 75: 122, 76: 132,
77: 133, 78: 104, 79: 101, 80: 102,
81: 93, 82: 125, 83: 111, 84: 95, 85: 134, 86: 105,
88: 107, 89: 108, 90: 112, 91: 130, 92: 124,
93: 92, 94: 91, 95: 90,
97: 135, 99: 136, 100: 137, 101: 106,
102: 131, 103: 113, 104: 99,
105: 97, 106: 121, 107: 103, 111: 140,
112: 141,
113: 142}
# manual mapping of the rest
table[37] = 81
table[28] = 19
table[87] = 96
table[21] = 65
table[96] = 119
table[8] = 57
table[32] = 28
table[44] = 50
table[45] = 51
table[46] = 52
table[47] = 53
table[23] = 78
table[24] = 77
table[3] = 59
table[11] = 75
table[58] = 76
misses = {}
def patch(code):
code = bytearray(code)
i = 0
n = len(code)
while i < n:
op = code[i]
if not op in table:
print("missing opcode %d. code: " % op, repr(str(code)))
misses[op] = misses.get(op, 0) + 1
code[i] = table.get(op,op)
i += 1
if table.get(op,op) >= 90: # opcode.HAVE_ARGUMENT:
i += 2
return bytes(code) if PYTHON3 else str(code)
try: from __pypy__ import builtinify
except ImportError: builtinify = lambda f: f
@builtinify
def loads(s):
"""
xdis.marshal.load() but with its dispatch load_code() function replaced
with our decoding version.
"""
um = xmarshal._FastUnmarshaller(s)
um.dispatch[xmarshal.TYPE_CODE] = load_code
return um.load()
def fix_dropbox_pyc(fp, fixed_pyc='/tmp/test.pyc'):
source_size = struct.unpack("I", fp.read(4))[0] # size mod 2**32
ts = fp.read(4)
timestamp = struct.unpack("I", ts)[0]
b = fp.read()
co = loads(b)
return 2.5, timestamp, 62131, co, False, source_size
def fix_dir(path):
import os
for root, dirs, files in os.walk(path):
for name in files:
if not name.endswith('pyc'): continue
name = os.path.join(root, name)
print("fixing", name)
data = open(name).read()
try:
c = xmarshal.loads(data[8:])
except Exception as e:
print("error", e, repr(e))
# print repr(data[8:])
continue
# fix the version indicator and save
open(name, "w").write(
"\xb3\xf2\r\n" + data[4:8] + xmarshal.dumps(c))
if __name__ == '__main__':
import os, sys
if sys.argv != 2:
print("Usage: %s python-file" % os.path.basename(sys.argv[0]))
sys.exit(1)
fix_dir(sys.argv[1])
# for the sake of completeness, here are the code-fragments to automatically generate
# the opcode substitution table
if 0:
import marshal
def fill(c, d):
if len(c.co_code) != len(d.co_code):
print("len mismatch", c, d)
return
for i, j in zip(c.co_code, d.co_code):
# if i in m and not table[i] == j:
# print "mismatch %c (%x) => %c (%x)" % (ord(i),ord(i),ord(j),ord(j))
v = table.setdefault(i, {})
v[j] = v.get(j, 0) + 1
pass
return
for root, dirs, files in os.walk('library'):
for name in files:
name = os.path.join(root, name)
if not name.endswith('pyc'): continue
f2 = os.path.join('/tmp/python-defaults-2.7.2/Python-2.5.4/Lib', name[8:])
if not os.path.exists(f2): continue
print("loading", name)
try:
c = xmarshal.loads(open(name).read()[8:])
except:
print("error", name)
continue
d = marshal.loads(open(f2).read()[8:])
fill(c, d)
codes_c = filter(lambda x: type(x) == type(c), c.co_consts)
codes_d = filter(lambda x: type(x) == type(c), d.co_consts)
for i, j in zip(codes_c, codes_d):
fill(i, j)
pass
pass
def print_table(m):
k = m.keys()
k.sort()
table = {}
for i in k:
print("%c (%02x %s) =>" %
(ord(i), ord(i), bin(ord(i))), end='')
for j, count in m[i].iteritems():
if j == i: continue
table[ord(i)] = ord(j)
print("\t%c (%02x %s) [%d]" %
(ord(j), ord(j), bin(ord(j)), count), end="")
# print("%c (%02x %s) => %c (%02x %s)\t%d\t%s" % (ord(i),ord(i),bin(ord(i)),ord(j),ord(j),bin(ord(j)),ord(j)-ord(i),bin(ord(i)^ord(j)|0x100).replace('0', ' ')))
print()
return table
import re
re.compile('offset loc_(\w+)').findall('dd offset loc_8096DC4, offset loc_8096963, offset loc_8095462')
def load(name):
a = re.compile('offset loc_(\w+)').findall(open(name).read())
a = [int(i, 16) for i in a]
c = a[:]
c.sort()
c = [(i, c.index(i)) for i in a]
d = {}
for i, (addr, pos) in enumerate(c):
d[addr] = (i, pos)
return c, d
| en | 0.559261 | #!/usr/bin/env python # Copyright <NAME>, 2012, License: GPL-2.0 # Adaption and generalization for xdis use by <NAME> # Dropbox Python bytecode decryption tool for Dropbox versions of 1.1x # (and possibly earlier) which uses Python bytecode 2.5. # this is replaced by mersenne in newer versions Tiny Decryption Algorithm decription (TEA) See https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm Returns a Python code object like xdis.unmarshal.load_code(), but in we decrypt the data in self.bufstr. That is: * calculate the TEA key, * decrypt self.bufstr * create and return a Python code-object # print("%d: %d (%d=%d)" % (self.bufpos, b, padsize, len(data))) # Automatically generated opcode substitution table for v1 # A different dropbox table for v.2 table is at # https://github.com/kholia/dedrop/blob/master/src/dedrop-v2/dedrop-v2.py # They are similar but not the same. # manual mapping of the rest # opcode.HAVE_ARGUMENT: xdis.marshal.load() but with its dispatch load_code() function replaced with our decoding version. # size mod 2**32 # print repr(data[8:]) # fix the version indicator and save # for the sake of completeness, here are the code-fragments to automatically generate # the opcode substitution table # if i in m and not table[i] == j: # print "mismatch %c (%x) => %c (%x)" % (ord(i),ord(i),ord(j),ord(j)) # print("%c (%02x %s) => %c (%02x %s)\t%d\t%s" % (ord(i),ord(i),bin(ord(i)),ord(j),ord(j),bin(ord(j)),ord(j)-ord(i),bin(ord(i)^ord(j)|0x100).replace('0', ' '))) | 2.734572 | 3 |
tsuru_autoscale/action/tests.py | cezarsa/tsuru-autoscale-dashboard | 11 | 6615300 | <filename>tsuru_autoscale/action/tests.py
from django.test import TestCase
from django.core.urlresolvers import reverse
from tsuru_autoscale.action.forms import ActionForm
from tsuru_autoscale.action import client
import httpretty
import mock
import os
class RemoveTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.list")
@mock.patch("tsuru_autoscale.action.client.remove")
def test_new_post(self, remove_mock, list_mock):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-remove", args=["name"]))
response = self.client.delete(url)
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
self.assertRedirects(response, url)
remove_mock.assert_called_with("name", "bla")
class NewTestCase(TestCase):
def test_new(self):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/new.html")
self.assertIsInstance(response.context['form'], ActionForm)
self.assertFalse(response.context['form'].is_bound)
def test_new_invalid_post(self):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.post(url, {})
self.assertFalse(response.context['form'].is_valid())
@mock.patch("tsuru_autoscale.action.client.list")
@mock.patch("tsuru_autoscale.action.client.new")
def test_new_post(self, new_mock, list_mock):
data = {
'url': u'someurl',
'body': u'',
'headers': u'',
'name': u'name',
'method': u'GET',
}
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.post(url, data)
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
self.assertRedirects(response, url)
new_mock.assert_called_with(data, "bla")
class ListTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.list")
def test_list(self, list_mock):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/list.html")
self.assertIn('list', response.context)
list_mock.assert_called_with("bla")
class GetTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.get")
def test_get(self, get_mock):
result_mock = mock.Mock()
result_mock.json.return_value = {"Name": "ble"}
get_mock.return_value = result_mock
url = "{}?TSURU_TOKEN=bla".format(reverse("action-get", args=["name"]))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/get.html")
self.assertIn('item', response.context)
get_mock.assert_called_with("name", "bla")
class ClientTestCase(TestCase):
def setUp(self):
httpretty.enable()
def tearDown(self):
httpretty.disable()
httpretty.reset()
def test_list(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.GET,
"http://autoscalehost.com/action",
)
client.list("token")
def test_new(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.POST,
"http://autoscalehost.com/action",
)
client.new({}, "token")
def test_remove(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.DELETE,
"http://autoscalehost.com/action/name",
)
client.remove("name", "token")
def test_get(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.GET,
"http://autoscalehost.com/action/name",
"result",
)
result = client.get("name", "token")
self.assertEqual(result.text, "result")
class ActionFormTestCase(TestCase):
def test_required_fields(self):
fields = {
"url": True,
"method": True,
"name": True,
"body": False,
"headers": False,
}
form = ActionForm()
for field, required in fields.items():
self.assertEqual(form.fields[field].required, required)
| <filename>tsuru_autoscale/action/tests.py
from django.test import TestCase
from django.core.urlresolvers import reverse
from tsuru_autoscale.action.forms import ActionForm
from tsuru_autoscale.action import client
import httpretty
import mock
import os
class RemoveTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.list")
@mock.patch("tsuru_autoscale.action.client.remove")
def test_new_post(self, remove_mock, list_mock):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-remove", args=["name"]))
response = self.client.delete(url)
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
self.assertRedirects(response, url)
remove_mock.assert_called_with("name", "bla")
class NewTestCase(TestCase):
def test_new(self):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/new.html")
self.assertIsInstance(response.context['form'], ActionForm)
self.assertFalse(response.context['form'].is_bound)
def test_new_invalid_post(self):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.post(url, {})
self.assertFalse(response.context['form'].is_valid())
@mock.patch("tsuru_autoscale.action.client.list")
@mock.patch("tsuru_autoscale.action.client.new")
def test_new_post(self, new_mock, list_mock):
data = {
'url': u'someurl',
'body': u'',
'headers': u'',
'name': u'name',
'method': u'GET',
}
url = "{}?TSURU_TOKEN=bla".format(reverse("action-new"))
response = self.client.post(url, data)
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
self.assertRedirects(response, url)
new_mock.assert_called_with(data, "bla")
class ListTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.list")
def test_list(self, list_mock):
url = "{}?TSURU_TOKEN=bla".format(reverse("action-list"))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/list.html")
self.assertIn('list', response.context)
list_mock.assert_called_with("bla")
class GetTestCase(TestCase):
@mock.patch("tsuru_autoscale.action.client.get")
def test_get(self, get_mock):
result_mock = mock.Mock()
result_mock.json.return_value = {"Name": "ble"}
get_mock.return_value = result_mock
url = "{}?TSURU_TOKEN=bla".format(reverse("action-get", args=["name"]))
response = self.client.get(url)
self.assertTemplateUsed(response, "action/get.html")
self.assertIn('item', response.context)
get_mock.assert_called_with("name", "bla")
class ClientTestCase(TestCase):
def setUp(self):
httpretty.enable()
def tearDown(self):
httpretty.disable()
httpretty.reset()
def test_list(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.GET,
"http://autoscalehost.com/action",
)
client.list("token")
def test_new(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.POST,
"http://autoscalehost.com/action",
)
client.new({}, "token")
def test_remove(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.DELETE,
"http://autoscalehost.com/action/name",
)
client.remove("name", "token")
def test_get(self):
os.environ["AUTOSCALE_HOST"] = "http://autoscalehost.com"
httpretty.register_uri(
httpretty.GET,
"http://autoscalehost.com/action/name",
"result",
)
result = client.get("name", "token")
self.assertEqual(result.text, "result")
class ActionFormTestCase(TestCase):
def test_required_fields(self):
fields = {
"url": True,
"method": True,
"name": True,
"body": False,
"headers": False,
}
form = ActionForm()
for field, required in fields.items():
self.assertEqual(form.fields[field].required, required)
| none | 1 | 2.22472 | 2 | |
chap07/phone_and_email.py | asadmoosvi/automate-the-boring-stuff-solutions | 0 | 6615301 | <filename>chap07/phone_and_email.py
import pyperclip, re
phone_regex = re.compile(
r'''(
(\d{3}|\(\d{3}\))? # area code
(\s|-|\.)? # seperator
(\d{3}) # first 3 digits
(\s|-|\.) # seperator
(\d{4}) # last 4 digits
(\s*(ext|x|ext.)\s*(\d{2,5}))? # extension
)''',
re.VERBOSE
)
email_regex = re.compile(
r'''(
[a-zA-Z0-9._%+-]+ # username
@ # @ symbol
[a-zA-Z0-9.-]+ # domain name
(\.[a-zA-Z]{2,4}) # dot something
)''',
re.VERBOSE
)
text = pyperclip.paste()
matches = []
for groups in phone_regex.findall(text):
phone_num = '-'.join([groups[1], groups[3], groups[5]])
if groups[8]:
phone_num += ' x' + groups[8]
matches.append(phone_num)
for groups in email_regex.findall(text):
matches.append(groups[0])
if len(matches) > 0:
pyperclip.copy('\n'.join(matches))
print(':: copied matched results to clipboard.')
print('\n'.join(matches))
else:
print(':: No phone numbers or email addresses found.')
| <filename>chap07/phone_and_email.py
import pyperclip, re
phone_regex = re.compile(
r'''(
(\d{3}|\(\d{3}\))? # area code
(\s|-|\.)? # seperator
(\d{3}) # first 3 digits
(\s|-|\.) # seperator
(\d{4}) # last 4 digits
(\s*(ext|x|ext.)\s*(\d{2,5}))? # extension
)''',
re.VERBOSE
)
email_regex = re.compile(
r'''(
[a-zA-Z0-9._%+-]+ # username
@ # @ symbol
[a-zA-Z0-9.-]+ # domain name
(\.[a-zA-Z]{2,4}) # dot something
)''',
re.VERBOSE
)
text = pyperclip.paste()
matches = []
for groups in phone_regex.findall(text):
phone_num = '-'.join([groups[1], groups[3], groups[5]])
if groups[8]:
phone_num += ' x' + groups[8]
matches.append(phone_num)
for groups in email_regex.findall(text):
matches.append(groups[0])
if len(matches) > 0:
pyperclip.copy('\n'.join(matches))
print(':: copied matched results to clipboard.')
print('\n'.join(matches))
else:
print(':: No phone numbers or email addresses found.')
| en | 0.359443 | ( (\d{3}|\(\d{3}\))? # area code (\s|-|\.)? # seperator (\d{3}) # first 3 digits (\s|-|\.) # seperator (\d{4}) # last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? # extension ) ( [a-zA-Z0-9._%+-]+ # username @ # @ symbol [a-zA-Z0-9.-]+ # domain name (\.[a-zA-Z]{2,4}) # dot something ) | 3.153047 | 3 |
operation/parallel_overhead.py | microsoft/dstoolkit-anomaly-detection-ijungle | 3 | 6615302 | # -*- coding: utf-8 -*-
from azureml.core import Model, Run
import argparse
import numpy as np
import iJungle
import joblib
run = Run.get_context()
print("iJungle version:", iJungle.__version__)
run.log('iJungle_version', iJungle.__version__)
parser = argparse.ArgumentParser()
# Input Data
parser.add_argument("--input-data", type=str, dest='input_data', help='Overhead dataset')
parser.add_argument("--id-feature", type=str, dest='id_feature', help='ID Freature')
# Hyper parameters
parser.add_argument('--trees', type=int, dest='trees', default=100, help='Number of trees')
parser.add_argument('--subsample-size', type=int, dest='subsample_size', default=8192, help='Subsample size')
# Add arguments to args collection
args = parser.parse_args()
id_feat = str(args.id_feature)
print('id feature', id_feat)
# Log Hyperparameter values
trees = np.int(args.trees)
subsample_size = np.int(args.subsample_size)
print('trees', trees)
print('subsample_size', subsample_size)
run.log('trees', trees)
run.log('subsample_size', subsample_size)
# Load training data
print("Loading Data...")
W = run.input_datasets['overhead_data'].to_pandas_dataframe() # Get the training data from the estimator input
W.set_index(id_feat, inplace=True)
# Load iFor_list pickle
print("Loading pickle...")
model_name = 'iJungle_light_' + str(trees) + '_' + str(subsample_size)
print(model_name)
model_path = Model.get_model_path(model_name)
print(model_path)
iFor_list = joblib.load(model_path)
# Evaluation
print("Starting evaluation ...")
os.makedirs(iJungle._MODEL_DIR, exist_ok=True)
results = iJungle.model_eval_fun(W, iFor_list)
results_filename = os.path.join(iJungle._MODEL_DIR, model_name + '_results.pkl')
print("Writing results:", results_filename)
joblib.dump(value=results, filename=results_filename)
# Log dummy metric
run.log('Dummy', np.float(0))
run.complete()
| # -*- coding: utf-8 -*-
from azureml.core import Model, Run
import argparse
import numpy as np
import iJungle
import joblib
run = Run.get_context()
print("iJungle version:", iJungle.__version__)
run.log('iJungle_version', iJungle.__version__)
parser = argparse.ArgumentParser()
# Input Data
parser.add_argument("--input-data", type=str, dest='input_data', help='Overhead dataset')
parser.add_argument("--id-feature", type=str, dest='id_feature', help='ID Freature')
# Hyper parameters
parser.add_argument('--trees', type=int, dest='trees', default=100, help='Number of trees')
parser.add_argument('--subsample-size', type=int, dest='subsample_size', default=8192, help='Subsample size')
# Add arguments to args collection
args = parser.parse_args()
id_feat = str(args.id_feature)
print('id feature', id_feat)
# Log Hyperparameter values
trees = np.int(args.trees)
subsample_size = np.int(args.subsample_size)
print('trees', trees)
print('subsample_size', subsample_size)
run.log('trees', trees)
run.log('subsample_size', subsample_size)
# Load training data
print("Loading Data...")
W = run.input_datasets['overhead_data'].to_pandas_dataframe() # Get the training data from the estimator input
W.set_index(id_feat, inplace=True)
# Load iFor_list pickle
print("Loading pickle...")
model_name = 'iJungle_light_' + str(trees) + '_' + str(subsample_size)
print(model_name)
model_path = Model.get_model_path(model_name)
print(model_path)
iFor_list = joblib.load(model_path)
# Evaluation
print("Starting evaluation ...")
os.makedirs(iJungle._MODEL_DIR, exist_ok=True)
results = iJungle.model_eval_fun(W, iFor_list)
results_filename = os.path.join(iJungle._MODEL_DIR, model_name + '_results.pkl')
print("Writing results:", results_filename)
joblib.dump(value=results, filename=results_filename)
# Log dummy metric
run.log('Dummy', np.float(0))
run.complete()
| en | 0.357605 | # -*- coding: utf-8 -*- # Input Data # Hyper parameters # Add arguments to args collection # Log Hyperparameter values # Load training data # Get the training data from the estimator input # Load iFor_list pickle # Evaluation # Log dummy metric | 2.26811 | 2 |
flows/builtin/helpers/capabilities.py | vdloo/jobrunner | 1 | 6615303 | <reponame>vdloo/jobrunner<gh_stars>1-10
from time import time
from platform import uname
from jobrunner.plugins import register_capability
from jobrunner.logbook import get_flow_details_by_uuid
from jobrunner.utils import check_nonzero_exit
cached_is_x86_64 = None
cached_port_is_free = None
cached_port_is_free_timestamp = None
def set_cached_is_x86_64(capable):
"""
Set the memoized is_x86_64 result
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:param bool capable: True if capable, False if not
:return None:
"""
global cached_is_x86_64
cached_is_x86_64 = capable
@register_capability()
def is_x86_64(_):
"""
Check if the conductor is an x86_64 host
:param obj _: The jobboard job. Unused in the capability
:return bool capable: True if host is capable of running jobs with this
capability as a requirement, False if not
"""
global cached_is_x86_64
if cached_is_x86_64 is None:
set_cached_is_x86_64(uname()[4] == 'x86_64')
return cached_is_x86_64
def set_cached_port_is_free(capable):
"""
Set the memoized port_is_free result
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:param bool capable: True if capable, False if not
:return None:
"""
global cached_port_is_free
cached_port_is_free = capable
def reset_cached_port_is_free_timestamp():
"""
Reset the memoized port_is_free timestamp to None
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:return None:
"""
global cached_port_is_free_timestamp
cached_port_is_free_timestamp = None
@register_capability()
def port_is_free(job):
"""
Check if the specified port is free on the host
running this conductor.
:param obj job: TaskFlow jobboard job
:return bool capable: True if host is capable of running jobs with this
capability as a requirement, False if not
"""
global cached_port_is_free
global cached_port_is_free_timestamp
flow_details = get_flow_details_by_uuid(job.details['flow_uuid'])
port_to_check = flow_details.meta['store']['port']
check_port_free_command = "netstat -tuna | grep -q {:d}".format(port_to_check)
should_recheck = cached_port_is_free_timestamp is None or time(
) - cached_port_is_free_timestamp > 10
if cached_port_is_free is None or \
cached_port_is_free is True or should_recheck:
set_cached_port_is_free(
not check_nonzero_exit(check_port_free_command)
)
cached_port_is_free_timestamp = time()
# NOTE: There is a race condition here. If the port becomes unavailable
# after the check but before the flow allocates the port, the flow will
# crash. This should be fine though since all jobs should ideally be
# re-posted if failed and still needed.
return cached_port_is_free
| from time import time
from platform import uname
from jobrunner.plugins import register_capability
from jobrunner.logbook import get_flow_details_by_uuid
from jobrunner.utils import check_nonzero_exit
cached_is_x86_64 = None
cached_port_is_free = None
cached_port_is_free_timestamp = None
def set_cached_is_x86_64(capable):
"""
Set the memoized is_x86_64 result
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:param bool capable: True if capable, False if not
:return None:
"""
global cached_is_x86_64
cached_is_x86_64 = capable
@register_capability()
def is_x86_64(_):
"""
Check if the conductor is an x86_64 host
:param obj _: The jobboard job. Unused in the capability
:return bool capable: True if host is capable of running jobs with this
capability as a requirement, False if not
"""
global cached_is_x86_64
if cached_is_x86_64 is None:
set_cached_is_x86_64(uname()[4] == 'x86_64')
return cached_is_x86_64
def set_cached_port_is_free(capable):
"""
Set the memoized port_is_free result
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:param bool capable: True if capable, False if not
:return None:
"""
global cached_port_is_free
cached_port_is_free = capable
def reset_cached_port_is_free_timestamp():
"""
Reset the memoized port_is_free timestamp to None
Function so the module level global can be overwritten
from other modules. The test case uses this to prevent
leaking the state across test methods.
:return None:
"""
global cached_port_is_free_timestamp
cached_port_is_free_timestamp = None
@register_capability()
def port_is_free(job):
"""
Check if the specified port is free on the host
running this conductor.
:param obj job: TaskFlow jobboard job
:return bool capable: True if host is capable of running jobs with this
capability as a requirement, False if not
"""
global cached_port_is_free
global cached_port_is_free_timestamp
flow_details = get_flow_details_by_uuid(job.details['flow_uuid'])
port_to_check = flow_details.meta['store']['port']
check_port_free_command = "netstat -tuna | grep -q {:d}".format(port_to_check)
should_recheck = cached_port_is_free_timestamp is None or time(
) - cached_port_is_free_timestamp > 10
if cached_port_is_free is None or \
cached_port_is_free is True or should_recheck:
set_cached_port_is_free(
not check_nonzero_exit(check_port_free_command)
)
cached_port_is_free_timestamp = time()
# NOTE: There is a race condition here. If the port becomes unavailable
# after the check but before the flow allocates the port, the flow will
# crash. This should be fine though since all jobs should ideally be
# re-posted if failed and still needed.
return cached_port_is_free | en | 0.779892 | Set the memoized is_x86_64 result Function so the module level global can be overwritten from other modules. The test case uses this to prevent leaking the state across test methods. :param bool capable: True if capable, False if not :return None: Check if the conductor is an x86_64 host :param obj _: The jobboard job. Unused in the capability :return bool capable: True if host is capable of running jobs with this capability as a requirement, False if not Set the memoized port_is_free result Function so the module level global can be overwritten from other modules. The test case uses this to prevent leaking the state across test methods. :param bool capable: True if capable, False if not :return None: Reset the memoized port_is_free timestamp to None Function so the module level global can be overwritten from other modules. The test case uses this to prevent leaking the state across test methods. :return None: Check if the specified port is free on the host running this conductor. :param obj job: TaskFlow jobboard job :return bool capable: True if host is capable of running jobs with this capability as a requirement, False if not # NOTE: There is a race condition here. If the port becomes unavailable # after the check but before the flow allocates the port, the flow will # crash. This should be fine though since all jobs should ideally be # re-posted if failed and still needed. | 2.349154 | 2 |
download_npo/sites_test.py | Carpetsmoker/download-npo | 20 | 6615304 | import unittest
import download_npo.sites
class Test_NPOPlayer(unittest.TestCase):
def test_find_video(self):
site = download_npo.sites.NPOPlayer()
req, player_id, extension = site.find_video('POW_03414349')
self.assertEqual(player_id, 'POW_03414349')
self.assertEqual(extension, 'mp4')
def test_meta(self):
site = download_npo.sites.NPOPlayer()
meta = site.meta('POW_03414349')
self.assertEqual(meta.get('STATUS'), 'OK')
| import unittest
import download_npo.sites
class Test_NPOPlayer(unittest.TestCase):
def test_find_video(self):
site = download_npo.sites.NPOPlayer()
req, player_id, extension = site.find_video('POW_03414349')
self.assertEqual(player_id, 'POW_03414349')
self.assertEqual(extension, 'mp4')
def test_meta(self):
site = download_npo.sites.NPOPlayer()
meta = site.meta('POW_03414349')
self.assertEqual(meta.get('STATUS'), 'OK')
| none | 1 | 2.531555 | 3 | |
pj1/myanalysis/part5.py | rcm2000/project-preparations | 0 | 6615305 | <reponame>rcm2000/project-preparations
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import folium as fol
from config.settings import DATA_DIRS, TEMPLATES
from sklearn import preprocessing
df = pd.read_excel(DATA_DIRS[0]+'//data1.xlsx',engine='openpyxl',
header=0)
df2 = pd.read_excel(DATA_DIRS[0]+'//data2.xlsx',engine='openpyxl')
df = df.fillna(method='ffill')
df3 = pd.read_csv(DATA_DIRS[0]+'//auto-mpg.csv',header=None)
df3.columns = ['mpg','cylinders','displacement','horsepower','weight','acceleration','modelyear','origin','carname']
tt = sns.load_dataset('titanic');
df4 = pd.read_excel(DATA_DIRS[0]+'//data3.xlsx',engine='openpyxl')
df5 = pd.read_excel(DATA_DIRS[0]+'//data4.xlsx',engine='openpyxl')
st = pd.read_csv(DATA_DIRS[0]+'//stock-data.csv')
class Part5:
def p172(self):
print(tt.info())
print(tt['deck'].value_counts(dropna=False))
print(tt.isnull().sum())
#thresh로 결측치가 n개 이상인 열을 삭제
ttage = tt.dropna(subset=['age'],how='any',axis=0)
tt1 = tt.dropna(axis=1,thresh=500)
# print(tt1)
# print(tt1.shape)
# print(tt1.isnull().sum())
print(ttage)
def p178(self):
mage = tt['age'].mean();
print(mage)
tt['age'].fillna(mage,inplace=True)
print(tt['age'])
def p180(self):
et = tt['embark_town'].value_counts().idxmax()
print(et)
tt['embark_town'].fillna(et,inplace=True)
def p181(self):
tt['embark_town'].fillna(method='ffill', inplace=True)
def p186(self):
print(df3)
mpg_to_kpl = 1.60934/3.78541
df3['kpl'] = (df3['mpg'] * mpg_to_kpl).round(2)
print(df3)
def p188(self):
print(df3.info())
df3['horsepower'].replace('?',np.NaN,inplace=True)
print(df3['horsepower'].unique())
df3.dropna(subset=['horsepower'],axis=0,inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
print(df3['horsepower'])
def p190(self):
print(df3.info())
print(df3['origin'].unique())
df3['origin'].replace({1:'USA',2:'EU',3:'JPN'},inplace=True)
df3['origin'] = df3['origin'].astype('category')
print(df3['origin'].dtypes)
def p192(self):
df3['horsepower'].replace('?', np.NaN, inplace=True)
print(df3['horsepower'].unique())
df3.dropna(subset=['horsepower'], axis=0, inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
cnt, bin_dividers = np.histogram(df3['horsepower'],bins = 3)
print(cnt,bin_dividers)
bin_names = ['고','중','저']
df3['hp_bin'] = pd.cut(
x=df3['horsepower'],
bins = bin_dividers,
labels=bin_names,
include_lowest=True
)
hp_dum = pd.get_dummies(df3['hp_bin'])
print(hp_dum)
lable_encoder = preprocessing.LabelEncoder()
onehot_encoder = preprocessing.OneHotEncoder()
onehot_labled = lable_encoder.fit_transform(df3['hp_bin'].head(15))
print(onehot_labled)
def p198(self):
df3['horsepower'].replace('?', np.NaN, inplace=True)
df3.dropna(subset=['horsepower'], axis=0, inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
df3['horsepower'] = df3['horsepower']/abs(df3['horsepower'].max())
print(df3['horsepower'])
def p201(self):
print(st)
print(st.info())
st['new_Date'] = pd.to_datetime(st['Date'])
print(st)
print(st.info())
print(st['new_Date'][0])
st.set_index(st['new_Date'],inplace=True)
print(st)
def p205(self):
dates = ['2019-01-01','2020-03-01','2021-06-01']
ts_dates = pd.to_datetime(dates)
print(ts_dates)
pr_day = ts_dates.to_period(freq='D')
pr_month = ts_dates.to_period(freq='M')
pr_year = ts_dates.to_period(freq='A')
print(pr_day)
print(pr_month)
print(pr_year)
def p206(self):
ts_ms = pd.date_range(
start = '2020-01-01',
end=None,
periods=6,
freq='MS',
tz='Asia/seoul')
print(ts_ms)
pr_ms = pd.period_range(
start='2020-01-01',
end=None,
periods=6,
freq='M')
print(pr_ms)
def p209(self):
st['new_Date'] = pd.to_datetime(st['Date'])
st.set_index(st['new_Date'],inplace=True)
print(st)
st['Year'] = st.index.dt
def p212(self):
st['new_Date'] = pd.to_datetime(st['Date'])
st.set_index(st['new_Date'], inplace=True)
print(st)
st_y = st.loc['2018-06-05':'2018-06-01','High':'Low']
print(st_y)
if __name__ == '__main__':
Part5().p212() | import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import folium as fol
from config.settings import DATA_DIRS, TEMPLATES
from sklearn import preprocessing
df = pd.read_excel(DATA_DIRS[0]+'//data1.xlsx',engine='openpyxl',
header=0)
df2 = pd.read_excel(DATA_DIRS[0]+'//data2.xlsx',engine='openpyxl')
df = df.fillna(method='ffill')
df3 = pd.read_csv(DATA_DIRS[0]+'//auto-mpg.csv',header=None)
df3.columns = ['mpg','cylinders','displacement','horsepower','weight','acceleration','modelyear','origin','carname']
tt = sns.load_dataset('titanic');
df4 = pd.read_excel(DATA_DIRS[0]+'//data3.xlsx',engine='openpyxl')
df5 = pd.read_excel(DATA_DIRS[0]+'//data4.xlsx',engine='openpyxl')
st = pd.read_csv(DATA_DIRS[0]+'//stock-data.csv')
class Part5:
def p172(self):
print(tt.info())
print(tt['deck'].value_counts(dropna=False))
print(tt.isnull().sum())
#thresh로 결측치가 n개 이상인 열을 삭제
ttage = tt.dropna(subset=['age'],how='any',axis=0)
tt1 = tt.dropna(axis=1,thresh=500)
# print(tt1)
# print(tt1.shape)
# print(tt1.isnull().sum())
print(ttage)
def p178(self):
mage = tt['age'].mean();
print(mage)
tt['age'].fillna(mage,inplace=True)
print(tt['age'])
def p180(self):
et = tt['embark_town'].value_counts().idxmax()
print(et)
tt['embark_town'].fillna(et,inplace=True)
def p181(self):
tt['embark_town'].fillna(method='ffill', inplace=True)
def p186(self):
print(df3)
mpg_to_kpl = 1.60934/3.78541
df3['kpl'] = (df3['mpg'] * mpg_to_kpl).round(2)
print(df3)
def p188(self):
print(df3.info())
df3['horsepower'].replace('?',np.NaN,inplace=True)
print(df3['horsepower'].unique())
df3.dropna(subset=['horsepower'],axis=0,inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
print(df3['horsepower'])
def p190(self):
print(df3.info())
print(df3['origin'].unique())
df3['origin'].replace({1:'USA',2:'EU',3:'JPN'},inplace=True)
df3['origin'] = df3['origin'].astype('category')
print(df3['origin'].dtypes)
def p192(self):
df3['horsepower'].replace('?', np.NaN, inplace=True)
print(df3['horsepower'].unique())
df3.dropna(subset=['horsepower'], axis=0, inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
cnt, bin_dividers = np.histogram(df3['horsepower'],bins = 3)
print(cnt,bin_dividers)
bin_names = ['고','중','저']
df3['hp_bin'] = pd.cut(
x=df3['horsepower'],
bins = bin_dividers,
labels=bin_names,
include_lowest=True
)
hp_dum = pd.get_dummies(df3['hp_bin'])
print(hp_dum)
lable_encoder = preprocessing.LabelEncoder()
onehot_encoder = preprocessing.OneHotEncoder()
onehot_labled = lable_encoder.fit_transform(df3['hp_bin'].head(15))
print(onehot_labled)
def p198(self):
df3['horsepower'].replace('?', np.NaN, inplace=True)
df3.dropna(subset=['horsepower'], axis=0, inplace=True)
df3['horsepower'] = df3['horsepower'].astype('float')
df3['horsepower'] = df3['horsepower']/abs(df3['horsepower'].max())
print(df3['horsepower'])
def p201(self):
print(st)
print(st.info())
st['new_Date'] = pd.to_datetime(st['Date'])
print(st)
print(st.info())
print(st['new_Date'][0])
st.set_index(st['new_Date'],inplace=True)
print(st)
def p205(self):
dates = ['2019-01-01','2020-03-01','2021-06-01']
ts_dates = pd.to_datetime(dates)
print(ts_dates)
pr_day = ts_dates.to_period(freq='D')
pr_month = ts_dates.to_period(freq='M')
pr_year = ts_dates.to_period(freq='A')
print(pr_day)
print(pr_month)
print(pr_year)
def p206(self):
ts_ms = pd.date_range(
start = '2020-01-01',
end=None,
periods=6,
freq='MS',
tz='Asia/seoul')
print(ts_ms)
pr_ms = pd.period_range(
start='2020-01-01',
end=None,
periods=6,
freq='M')
print(pr_ms)
def p209(self):
st['new_Date'] = pd.to_datetime(st['Date'])
st.set_index(st['new_Date'],inplace=True)
print(st)
st['Year'] = st.index.dt
def p212(self):
st['new_Date'] = pd.to_datetime(st['Date'])
st.set_index(st['new_Date'], inplace=True)
print(st)
st_y = st.loc['2018-06-05':'2018-06-01','High':'Low']
print(st_y)
if __name__ == '__main__':
Part5().p212() | ko | 0.820488 | #thresh로 결측치가 n개 이상인 열을 삭제 # print(tt1) # print(tt1.shape) # print(tt1.isnull().sum()) | 2.76863 | 3 |
python/server/app.py | ivan-alles/preference-model | 1 | 6615306 | # Copyright 2016-2020 <NAME>. See also the LICENSE file.
import base64
import io
import flask
from flask_cors import CORS
import numpy as np
from PIL import Image
from server import preference_model
rng = np.random.RandomState(0)
# If True, initialize tensorflow and generate images by the model.
# This can be slow. You can set it to False to test the basic
# functionality of the client.
USE_GENERATOR = False
if USE_GENERATOR:
from server import generator
generator = generator.Generator('karras2018iclr-celebahq-1024x1024.tf')
preference_model = preference_model.SphericalLinearPreferenceModel(rng=rng)
def encode_image(image):
"""
Convert an image to the format accepted by a browser.
:param image: image as numpy array.
:return: an encoded image.
"""
pil_img = Image.fromarray(image, 'RGB')
byte_arr = io.BytesIO()
pil_img.save(byte_arr, format='PNG') # convert the PIL image to byte array
encoded_img = base64.b64encode(byte_arr.getvalue()).decode('ascii')
encoded_img = "data:image/png;base64," + encoded_img
return encoded_img
# instantiate the app
app = flask.Flask(__name__)
app.config.from_object(__name__)
# enable CORS
CORS(app, resources={r'/*': {'origins': '*'}})
@app.route('/images', methods=['GET'])
def images():
"""
Generate the next portion of images based on current settings.
:return: an HTTP response containing a list of image objects.
"""
count = int(flask.request.args['count'])
variance = int(flask.request.args['variance'])
latents = preference_model.generate(count, variance)
if USE_GENERATOR:
images = generator.generate(latents)
else:
images = np.broadcast_to(rng.uniform(0, 255, (count, 1, 1, 3)).astype(np.uint8),
(count, 256, 256, 3))
image_objects = []
for i in range(len(images)):
image = images[i]
image_object = {
'picture': encode_image(image),
'latents': latents[i].tolist() # rng.randn(512).tolist()
}
image_objects.append(image_object)
response_object = {'status': 'success', 'images': image_objects}
return flask.jsonify(response_object)
@app.route('/learn', methods=['POST'])
def learn():
"""
Learn user preferences from likes.
:return: an HTTP response.
"""
post_data = flask.request.get_json()
latents = np.array(post_data)
preference_model.train(latents)
response_object = {'status': 'success'}
return flask.jsonify(response_object)
if __name__ == '__main__':
app.run()
| # Copyright 2016-2020 <NAME>. See also the LICENSE file.
import base64
import io
import flask
from flask_cors import CORS
import numpy as np
from PIL import Image
from server import preference_model
rng = np.random.RandomState(0)
# If True, initialize tensorflow and generate images by the model.
# This can be slow. You can set it to False to test the basic
# functionality of the client.
USE_GENERATOR = False
if USE_GENERATOR:
from server import generator
generator = generator.Generator('karras2018iclr-celebahq-1024x1024.tf')
preference_model = preference_model.SphericalLinearPreferenceModel(rng=rng)
def encode_image(image):
"""
Convert an image to the format accepted by a browser.
:param image: image as numpy array.
:return: an encoded image.
"""
pil_img = Image.fromarray(image, 'RGB')
byte_arr = io.BytesIO()
pil_img.save(byte_arr, format='PNG') # convert the PIL image to byte array
encoded_img = base64.b64encode(byte_arr.getvalue()).decode('ascii')
encoded_img = "data:image/png;base64," + encoded_img
return encoded_img
# instantiate the app
app = flask.Flask(__name__)
app.config.from_object(__name__)
# enable CORS
CORS(app, resources={r'/*': {'origins': '*'}})
@app.route('/images', methods=['GET'])
def images():
"""
Generate the next portion of images based on current settings.
:return: an HTTP response containing a list of image objects.
"""
count = int(flask.request.args['count'])
variance = int(flask.request.args['variance'])
latents = preference_model.generate(count, variance)
if USE_GENERATOR:
images = generator.generate(latents)
else:
images = np.broadcast_to(rng.uniform(0, 255, (count, 1, 1, 3)).astype(np.uint8),
(count, 256, 256, 3))
image_objects = []
for i in range(len(images)):
image = images[i]
image_object = {
'picture': encode_image(image),
'latents': latents[i].tolist() # rng.randn(512).tolist()
}
image_objects.append(image_object)
response_object = {'status': 'success', 'images': image_objects}
return flask.jsonify(response_object)
@app.route('/learn', methods=['POST'])
def learn():
"""
Learn user preferences from likes.
:return: an HTTP response.
"""
post_data = flask.request.get_json()
latents = np.array(post_data)
preference_model.train(latents)
response_object = {'status': 'success'}
return flask.jsonify(response_object)
if __name__ == '__main__':
app.run()
| en | 0.801254 | # Copyright 2016-2020 <NAME>. See also the LICENSE file. # If True, initialize tensorflow and generate images by the model. # This can be slow. You can set it to False to test the basic # functionality of the client. Convert an image to the format accepted by a browser. :param image: image as numpy array. :return: an encoded image. # convert the PIL image to byte array # instantiate the app # enable CORS Generate the next portion of images based on current settings. :return: an HTTP response containing a list of image objects. # rng.randn(512).tolist() Learn user preferences from likes. :return: an HTTP response. | 2.356865 | 2 |
tests/test_livepoint.py | Rodrigo-Tenorio/nessai | 16 | 6615307 | <filename>tests/test_livepoint.py
import numpy as np
import pandas as pd
import pytest
import nessai.livepoint as lp
@pytest.fixture
def live_point():
return np.array([(1., 2., 3., 0., 0.)],
dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
@pytest.fixture
def live_points():
return np.array([(1., 2., 3., 0., 0.),
(4., 5., 6., 0., 0.)],
dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
@pytest.fixture
def empty_live_point():
return np.empty(0, dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
def test_parameters_to_live_point(live_point):
"""
Test function that produces a single live point given the parameter
values for the live point as a live or an array
"""
x = lp.parameters_to_live_point([1., 2., 3.], ['x', 'y', 'z'])
np.testing.assert_array_equal(live_point, x)
def test_empty_parameters_to_live_point(empty_live_point):
"""
Test behaviour when an empty parameter is parsed
"""
np.testing.assert_array_equal(
empty_live_point,
lp.parameters_to_live_point([], ['x', 'y', 'z']))
def test_numpy_array_to_live_point(live_point):
"""
Test the function the produces an array of live points given numpy array
of shape [# dimensions]
"""
array = np.array([1., 2., 3.])
x = lp.numpy_array_to_live_points(array, names=['x', 'y', 'z'])
np.testing.assert_array_equal(live_point, x)
def test_numpy_array_multiple_to_live_points(live_points):
"""
Test the function the produces an array of live points given numpy array
of shape [# point, # dimensions]
"""
array = np.array([[1., 2., 3.], [4., 5., 6.]])
x = lp.numpy_array_to_live_points(array, names=['x', 'y', 'z'])
np.testing.assert_array_equal(live_points, x)
def test_empty_numpy_array_to_live_points(empty_live_point):
"""
Test the function the produces an array of live points given an empty
numpy array
"""
np.testing.assert_array_equal(
empty_live_point,
lp.numpy_array_to_live_points(np.array([]), names=['x', 'y', 'z']))
@pytest.mark.parametrize(
'd',
[
{'x': 1, 'y': 2, 'z': 3},
{'x': 1.0, 'y': 2.0, 'z': 3.0},
]
)
def test_dict_to_live_point(live_point, d):
"""
Test the function that converts a dictionary with a single live point to
a live point array
"""
x = lp.dict_to_live_points(d)
np.testing.assert_array_equal(live_point, x)
@pytest.mark.parametrize(
'd',
[
{'x': [1, 4], 'y': [2, 5], 'z': [3, 6]},
{'x': np.array([1, 4]), 'y': np.array([2, 5]), 'z': np.array([3, 6])},
]
)
def test_dict_multiple_to_live_points(live_points, d):
"""
Test the function that converts a dictionary with multiple live points to
a live point array
"""
x = lp.dict_to_live_points(d)
np.testing.assert_array_equal(live_points, x)
def test_empty_dict_to_live_points(empty_live_point):
"""
Test the function that converts a dictionary with empty lists to
a live point array
"""
np.testing.assert_array_equal(
empty_live_point,
lp.dict_to_live_points({'x': [], 'y': [], 'z': []}))
def test_dataframe_to_lve_points(live_points):
"""Test converting from a pandas dataframe to live points."""
df = pd.DataFrame({'x': [1, 4], 'y': [2, 5], 'z': [3, 6]})
out = lp.dataframe_to_live_points(df)
np.testing.assert_array_equal(out, live_points)
def test_live_point_to_numpy_array(live_point):
"""
Test conversion from a live point to an unstructured numpy array
"""
np.testing.assert_array_equal(
np.array([[1, 2, 3, 0, 0]]),
lp.live_points_to_array(live_point))
def test_live_point_to_numpy_array_with_names(live_point):
"""
Test conversion from a live point to an unstructured numpy array with
specific fields
"""
np.testing.assert_array_equal(
np.array([[1, 3, 0]]),
lp.live_points_to_array(live_point, names=['x', 'z', 'logP']))
def test_live_point_to_dict(live_point):
"""
Test conversion of a live point to a dictionary
"""
d = {'x': 1., 'y': 2., 'z': 3., 'logP': 0., 'logL': 0.}
assert d == lp.live_points_to_dict(live_point)
def test_live_point_to_dict_with_names(live_point):
"""
Test conversion of a live point to a dictionary
"""
d = {'x': 1., 'z': 3., 'logP': 0.}
assert d == lp.live_points_to_dict(live_point, names=['x', 'z', 'logP'])
def test_multiple_live_points_to_dict(live_points):
"""
Test conversion of multiple_live points to a dictionary
"""
d = {'x': [1, 4], 'y': [2, 5], 'z': [3, 6], 'logP': [0, 0], 'logL': [0, 0]}
d_out = lp.live_points_to_dict(live_points)
assert list(d.keys()) == list(d_out.keys())
np.testing.assert_array_equal(list(d.values()), list(d_out.values()))
| <filename>tests/test_livepoint.py
import numpy as np
import pandas as pd
import pytest
import nessai.livepoint as lp
@pytest.fixture
def live_point():
return np.array([(1., 2., 3., 0., 0.)],
dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
@pytest.fixture
def live_points():
return np.array([(1., 2., 3., 0., 0.),
(4., 5., 6., 0., 0.)],
dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
@pytest.fixture
def empty_live_point():
return np.empty(0, dtype=[('x', lp.DEFAULT_FLOAT_DTYPE),
('y', lp.DEFAULT_FLOAT_DTYPE),
('z', lp.DEFAULT_FLOAT_DTYPE),
('logP', lp.DEFAULT_FLOAT_DTYPE),
('logL', lp.LOGL_DTYPE)])
def test_parameters_to_live_point(live_point):
"""
Test function that produces a single live point given the parameter
values for the live point as a live or an array
"""
x = lp.parameters_to_live_point([1., 2., 3.], ['x', 'y', 'z'])
np.testing.assert_array_equal(live_point, x)
def test_empty_parameters_to_live_point(empty_live_point):
"""
Test behaviour when an empty parameter is parsed
"""
np.testing.assert_array_equal(
empty_live_point,
lp.parameters_to_live_point([], ['x', 'y', 'z']))
def test_numpy_array_to_live_point(live_point):
"""
Test the function the produces an array of live points given numpy array
of shape [# dimensions]
"""
array = np.array([1., 2., 3.])
x = lp.numpy_array_to_live_points(array, names=['x', 'y', 'z'])
np.testing.assert_array_equal(live_point, x)
def test_numpy_array_multiple_to_live_points(live_points):
"""
Test the function the produces an array of live points given numpy array
of shape [# point, # dimensions]
"""
array = np.array([[1., 2., 3.], [4., 5., 6.]])
x = lp.numpy_array_to_live_points(array, names=['x', 'y', 'z'])
np.testing.assert_array_equal(live_points, x)
def test_empty_numpy_array_to_live_points(empty_live_point):
"""
Test the function the produces an array of live points given an empty
numpy array
"""
np.testing.assert_array_equal(
empty_live_point,
lp.numpy_array_to_live_points(np.array([]), names=['x', 'y', 'z']))
@pytest.mark.parametrize(
'd',
[
{'x': 1, 'y': 2, 'z': 3},
{'x': 1.0, 'y': 2.0, 'z': 3.0},
]
)
def test_dict_to_live_point(live_point, d):
"""
Test the function that converts a dictionary with a single live point to
a live point array
"""
x = lp.dict_to_live_points(d)
np.testing.assert_array_equal(live_point, x)
@pytest.mark.parametrize(
'd',
[
{'x': [1, 4], 'y': [2, 5], 'z': [3, 6]},
{'x': np.array([1, 4]), 'y': np.array([2, 5]), 'z': np.array([3, 6])},
]
)
def test_dict_multiple_to_live_points(live_points, d):
"""
Test the function that converts a dictionary with multiple live points to
a live point array
"""
x = lp.dict_to_live_points(d)
np.testing.assert_array_equal(live_points, x)
def test_empty_dict_to_live_points(empty_live_point):
"""
Test the function that converts a dictionary with empty lists to
a live point array
"""
np.testing.assert_array_equal(
empty_live_point,
lp.dict_to_live_points({'x': [], 'y': [], 'z': []}))
def test_dataframe_to_lve_points(live_points):
"""Test converting from a pandas dataframe to live points."""
df = pd.DataFrame({'x': [1, 4], 'y': [2, 5], 'z': [3, 6]})
out = lp.dataframe_to_live_points(df)
np.testing.assert_array_equal(out, live_points)
def test_live_point_to_numpy_array(live_point):
"""
Test conversion from a live point to an unstructured numpy array
"""
np.testing.assert_array_equal(
np.array([[1, 2, 3, 0, 0]]),
lp.live_points_to_array(live_point))
def test_live_point_to_numpy_array_with_names(live_point):
"""
Test conversion from a live point to an unstructured numpy array with
specific fields
"""
np.testing.assert_array_equal(
np.array([[1, 3, 0]]),
lp.live_points_to_array(live_point, names=['x', 'z', 'logP']))
def test_live_point_to_dict(live_point):
"""
Test conversion of a live point to a dictionary
"""
d = {'x': 1., 'y': 2., 'z': 3., 'logP': 0., 'logL': 0.}
assert d == lp.live_points_to_dict(live_point)
def test_live_point_to_dict_with_names(live_point):
"""
Test conversion of a live point to a dictionary
"""
d = {'x': 1., 'z': 3., 'logP': 0.}
assert d == lp.live_points_to_dict(live_point, names=['x', 'z', 'logP'])
def test_multiple_live_points_to_dict(live_points):
"""
Test conversion of multiple_live points to a dictionary
"""
d = {'x': [1, 4], 'y': [2, 5], 'z': [3, 6], 'logP': [0, 0], 'logL': [0, 0]}
d_out = lp.live_points_to_dict(live_points)
assert list(d.keys()) == list(d_out.keys())
np.testing.assert_array_equal(list(d.values()), list(d_out.values()))
| en | 0.489434 | Test function that produces a single live point given the parameter values for the live point as a live or an array Test behaviour when an empty parameter is parsed Test the function the produces an array of live points given numpy array of shape [# dimensions] Test the function the produces an array of live points given numpy array of shape [# point, # dimensions] Test the function the produces an array of live points given an empty numpy array Test the function that converts a dictionary with a single live point to a live point array Test the function that converts a dictionary with multiple live points to a live point array Test the function that converts a dictionary with empty lists to a live point array Test converting from a pandas dataframe to live points. Test conversion from a live point to an unstructured numpy array Test conversion from a live point to an unstructured numpy array with specific fields Test conversion of a live point to a dictionary Test conversion of a live point to a dictionary Test conversion of multiple_live points to a dictionary | 2.824611 | 3 |
editor/serializers.py | CarlosMatheus/PyginWeb | 2 | 6615308 | from rest_framework import serializers
from editor import models
"""Project serializers"""
class ProjectSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'creation_date',
'user',
)
model = models.Project
class ProjectGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'creation_date',
'user',
)
model = models.Project
"""Scene serializer"""
class SceneSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'project',
)
model = models.Scene
class SceneGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'project',
)
model = models.Scene
"""Game object serializers"""
class GameObjectSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'type',
'scene',
)
model = models.GameObject
class GameObjectGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'type',
'scene',
)
model = models.GameObject
"""Transform serializer"""
class TransformSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'positionx',
'positiony',
'positionz',
'rotation',
'scalex',
'scaley',
'gameobject'
)
model = models.Transform
class TransformGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'positionx',
'positiony',
'positionz',
'rotation',
'scalex',
'scaley',
'gameobject'
)
model = models.Transform
| from rest_framework import serializers
from editor import models
"""Project serializers"""
class ProjectSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'creation_date',
'user',
)
model = models.Project
class ProjectGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'creation_date',
'user',
)
model = models.Project
"""Scene serializer"""
class SceneSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'project',
)
model = models.Scene
class SceneGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'project',
)
model = models.Scene
"""Game object serializers"""
class GameObjectSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'name',
'type',
'scene',
)
model = models.GameObject
class GameObjectGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'name',
'type',
'scene',
)
model = models.GameObject
"""Transform serializer"""
class TransformSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'positionx',
'positiony',
'positionz',
'rotation',
'scalex',
'scaley',
'gameobject'
)
model = models.Transform
class TransformGetSerializer(serializers.ModelSerializer):
class Meta:
fields = (
'id',
'positionx',
'positiony',
'positionz',
'rotation',
'scalex',
'scaley',
'gameobject'
)
model = models.Transform
| en | 0.48988 | Project serializers Scene serializer Game object serializers Transform serializer | 2.441263 | 2 |
expapp/calculations.py | Surajmahawar0707/my_project | 0 | 6615309 | import time, random
print("User ID", "\t\t", "Traced users", "\t\t", "")
while(True):
N = 100
user_id = []
traced_no_of_users = []
symptom_status = []
ans = [0]*N
for i in range(0, N):
user_id.append(i)
traced_no_of_users.append(random.randint(0, 5))
symptom_status.append(random.randint(0,1))
pro_of_traced_users = []
time_of_contact = []
for j in range(0, traced_no_of_users[i]):
pro_of_traced_users.append(round(random.uniform(0.00, 1.00),4))
time_of_contact.append(random.randint(0,10))
if time_of_contact[j] < 3:
ans[i] += (1 - ans[i])*(time_of_contact[j]/3)*pro_of_traced_users[j]
else:
ans[i] += (1 - ans[i])*pro_of_traced_users[j]
if i == 99:
print(ans[i], "#########", traced_no_of_users[i], "#########", pro_of_traced_users[j],"#######", time_of_contact[j])
print(round(ans[99],2))
print("\n")
time.sleep(25) | import time, random
print("User ID", "\t\t", "Traced users", "\t\t", "")
while(True):
N = 100
user_id = []
traced_no_of_users = []
symptom_status = []
ans = [0]*N
for i in range(0, N):
user_id.append(i)
traced_no_of_users.append(random.randint(0, 5))
symptom_status.append(random.randint(0,1))
pro_of_traced_users = []
time_of_contact = []
for j in range(0, traced_no_of_users[i]):
pro_of_traced_users.append(round(random.uniform(0.00, 1.00),4))
time_of_contact.append(random.randint(0,10))
if time_of_contact[j] < 3:
ans[i] += (1 - ans[i])*(time_of_contact[j]/3)*pro_of_traced_users[j]
else:
ans[i] += (1 - ans[i])*pro_of_traced_users[j]
if i == 99:
print(ans[i], "#########", traced_no_of_users[i], "#########", pro_of_traced_users[j],"#######", time_of_contact[j])
print(round(ans[99],2))
print("\n")
time.sleep(25) | en | 0.411988 | ########", traced_no_of_users[i], "#########", pro_of_traced_users[j],"#######", time_of_contact[j]) | 2.882243 | 3 |
devon/web/logCatalog.py | joehewitt/devon | 3 | 6615310 | <filename>devon/web/logCatalog.py
import devon.projects.run
import os.path
# **************************************************************************************************
def main(request):
devon.projects.run.writeProjectLogCatalog(request.project, request.out)
| <filename>devon/web/logCatalog.py
import devon.projects.run
import os.path
# **************************************************************************************************
def main(request):
devon.projects.run.writeProjectLogCatalog(request.project, request.out)
| hu | 0.295777 | # ************************************************************************************************** | 1.580857 | 2 |
TriEA_kernels.py | dinesh110598/Spin_glass_NN | 1 | 6615311 | <filename>TriEA_kernels.py
import math
import numpy as np
from numba import cuda
#The variable perm used all over these kernels describe the permutation of lattices in the entire ensemble
@cuda.jit
def update_red (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid (3)
z = perm[z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform ():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 0 and q == 0) or (p == 1 and q == 1):
calc()
@cuda.jit
def update_blue (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid(3)
z = perm [z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 1 and q == 0) or (p == 2 and q == 1):
calc()
@cuda.jit
def update_green (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid(3)
z = perm [z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 2 and q == 0) or (p == 0 and q == 1):
calc()
@cuda.jit
def calc_energy (spin, energy, J_nn): #No need to put perm map here because no T
z = cuda.grid (1)
n = int(math.floor (z / energy.shape[1]))
l = z % energy.shape[1]
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn_part(x, y, z): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (x % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
return value
ener = 0
if z < spin.shape[0]:
for x in range (spin.shape[1]):
for y in range (spin.shape[2]):
ener += spin[z,x,y]*sum_nn_part(x,y,z)
energy[n,l] = ener
@cuda.jit
def parallel_temper2 (T, seed, energy, perm):
z = cuda.grid(1)
m = T.shape[1]//2
n = int(math.floor (z/m))
l = z % m #Takes values between 0 and m//2
if z < seed.shape[0]//2:
rand_n = 0 if np.float32(seed[n, 0, 0]/2**31) < 0.5 else 1
ptr = 2*l + rand_n
z = 2*z + rand_n
if ptr < energy.shape[0]-1:
val0 = perm[z]
val1 = perm[z+1]
e0 = energy[n,ptr]
e1 = energy[n,ptr+1]
rand_unif = np.float32(seed[z, 1, 0] / 2**31)
arg = (e0 - e1)*((1./T[n,ptr]) - (1./T[n,ptr+1]))
if (arg < 0):
if rand_unif < math.exp(arg):
perm[z] = val1
perm[z+1] = val0
else:
perm[z] = val1
perm[z+1] = val0
| <filename>TriEA_kernels.py
import math
import numpy as np
from numba import cuda
#The variable perm used all over these kernels describe the permutation of lattices in the entire ensemble
@cuda.jit
def update_red (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid (3)
z = perm[z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform ():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 0 and q == 0) or (p == 1 and q == 1):
calc()
@cuda.jit
def update_blue (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid(3)
z = perm [z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 1 and q == 0) or (p == 2 and q == 1):
calc()
@cuda.jit
def update_green (spin, seed, T, J_nn, perm):
m = T.shape[1]
z, x, y = cuda.grid(3)
z = perm [z]
n = int(math.floor (z / m))
l = z % m
p, q = x % 3, y % 2
def random_uniform():
seed[z, x, y] = np.int32((seed[z ,x, y]*1664525 + 1013904223) % 2**31)
return seed[z, x, y] / (2**31)
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn(): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (y % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x-1, bvc(y+1),2]*spin[z, x-1, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
value += J_nn[n,x-1,y-1,0]*spin[z, x-1, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,bvc(y+1),2]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y-1,0]*spin[z, x, y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
value += J_nn[n,x-1,y,1]*spin[z, x-1, y]
return value
def calc():
probs = random_uniform()
if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
spin[z, x, y] *= np.int8(-1)
if (p == 2 and q == 0) or (p == 0 and q == 1):
calc()
@cuda.jit
def calc_energy (spin, energy, J_nn): #No need to put perm map here because no T
z = cuda.grid (1)
n = int(math.floor (z / energy.shape[1]))
l = z % energy.shape[1]
def bvc (x):
if x == spin.shape[1]:
x = 0
return x
def sum_nn_part(x, y, z): # This adds spins of six neighbours instead of 4 subject to
#many constraints characteristic of triangular lattices
value = 0.
if (x % 2 == 0):
value += J_nn[n,x,y,0]*spin[z, x, bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, x, y-1]
else:
value += J_nn[n,x,y,0]*spin[z, bvc(x+1), bvc(y+1)]
value += J_nn[n,x,y,2]*spin[z, bvc(x+1), y-1]
value += J_nn[n,x,y,1]*spin[z, bvc(x+1), y]
return value
ener = 0
if z < spin.shape[0]:
for x in range (spin.shape[1]):
for y in range (spin.shape[2]):
ener += spin[z,x,y]*sum_nn_part(x,y,z)
energy[n,l] = ener
@cuda.jit
def parallel_temper2 (T, seed, energy, perm):
z = cuda.grid(1)
m = T.shape[1]//2
n = int(math.floor (z/m))
l = z % m #Takes values between 0 and m//2
if z < seed.shape[0]//2:
rand_n = 0 if np.float32(seed[n, 0, 0]/2**31) < 0.5 else 1
ptr = 2*l + rand_n
z = 2*z + rand_n
if ptr < energy.shape[0]-1:
val0 = perm[z]
val1 = perm[z+1]
e0 = energy[n,ptr]
e1 = energy[n,ptr+1]
rand_unif = np.float32(seed[z, 1, 0] / 2**31)
arg = (e0 - e1)*((1./T[n,ptr]) - (1./T[n,ptr+1]))
if (arg < 0):
if rand_unif < math.exp(arg):
perm[z] = val1
perm[z+1] = val0
else:
perm[z] = val1
perm[z+1] = val0
| en | 0.847962 | #The variable perm used all over these kernels describe the permutation of lattices in the entire ensemble # This adds spins of six neighbours instead of 4 subject to #many constraints characteristic of triangular lattices # This adds spins of six neighbours instead of 4 subject to #many constraints characteristic of triangular lattices # This adds spins of six neighbours instead of 4 subject to #many constraints characteristic of triangular lattices #No need to put perm map here because no T # This adds spins of six neighbours instead of 4 subject to #many constraints characteristic of triangular lattices #Takes values between 0 and m//2 | 2.446669 | 2 |
leetcode/044_wildcard_matching.py | aiden0z/snippets | 0 | 6615312 | """Wildcard Matching
Given an input string (s) and a pattern (p), implement wildcard pattern matching with support for
'?' and '*'.
'?' Matches any single chracter.
'*' Matches any sequence of characters ( including the empty sequence).
The matching should cover the entire input string (not partial).
Note:
s could be empty and contains only lowercase letters a-z;
p could be empty and contains only lowercase letters a-z, and chracters like ? or *.
Example 1:
Input:
s = "aa"
p = "a"
Output: false
Explanation: "a" does not match the entire string "aa".
Example 2:
Input:
s = "aa"
p = "*"
Output: true
Explanation: '*' matches any sequence.
Example 3:
Input:
s = "cb"
p = "?a"
Output: false
Explanation: '?' matches 'c', but the second letter is 'a', which does not match 'b'.
Example 4:
Input:
s = "abceb"
p = "*a*b"
Output: true
Explanation: The first '*' matches the empty sequence, while the second '*' matches the
substring "dce".
Example 5:
Input:
s = "acdcb"
p = "a*c?b"
Output: false
"""
class Solution:
def isMatch(self, s: str, p: str) -> bool:
sp, pp = 0, 0
match = 0
start = -1
while sp < len(s):
if pp < len(p) and (s[sp] == p[pp] or p[pp] == '?'):
sp += 1
pp += 1
elif pp < len(p) and p[pp] == '*':
start = pp
match = sp
pp += 1
elif start != -1:
pp = start + 1
match += 1
sp = match
else:
return False
# 处理连续 * 情况
while pp < len(p) and p[pp] == '*':
pp += 1
return pp == len(p)
class SolutionDP:
def isMatch(self, s: str, p: str) -> bool:
m, n = len(s), len(p)
# 状态数组,dp[i][j] 表示字符串 s[0:i-1] 是否和 p[0:j-1] 匹配
dp = [[False for j in range(n + 1)] for i in range(m + 1)]
# s 和 p 都为空时表示匹配
dp[0][0] = True
# # 当 s 为空,p 为连续的星号时的情况。
# # 由于星号是可以代表空串的,所以只要 s 为空,那么连续的星号的位置都应该为 true,
# # 所以我们现将连续星号的位置都赋为 true。
# for i in range(1, n + 1):
# if p[i - 1] == '*':
# dp[0][i] = dp[0][i - 1]
# 当 s 为空时,p 必需有 * 才能匹配,且他的真值一定和去掉 * 后的前面字符串匹配情况相同
for i in range(1, n + 1):
dp[0][i] = p[i - 1] == '*' and dp[0][i - 1]
for i in range(1, m + 1):
for j in range(1, n + 1):
# 当 p 上一个字符串为 '?' 后者 p 上一个字符串等于 s 上一个字符串,则当前真值与上一位相同
if p[j - 1] == s[i - 1] or p[j - 1] == '?':
dp[i][j] = dp[i - 1][j - 1]
# p 上一个字符串为 * 时,则表示 p 往后走一位或者 s 往后走一位
elif p[j - 1] == '*':
dp[i][j] = dp[i - 1][j] or dp[i][j - 1]
return dp[m][n]
if __name__ == '__main__':
cases = [
('aa', 'a', False),
('aa', '*', True),
('cb', '?a', False),
('abceb', '*a*b', True),
('acdcb', 'a*c?b', False)
] # yapf: disable
for case in cases:
for S in [Solution, SolutionDP]:
assert S().isMatch(case[0], case[1]) == case[2]
| """Wildcard Matching
Given an input string (s) and a pattern (p), implement wildcard pattern matching with support for
'?' and '*'.
'?' Matches any single chracter.
'*' Matches any sequence of characters ( including the empty sequence).
The matching should cover the entire input string (not partial).
Note:
s could be empty and contains only lowercase letters a-z;
p could be empty and contains only lowercase letters a-z, and chracters like ? or *.
Example 1:
Input:
s = "aa"
p = "a"
Output: false
Explanation: "a" does not match the entire string "aa".
Example 2:
Input:
s = "aa"
p = "*"
Output: true
Explanation: '*' matches any sequence.
Example 3:
Input:
s = "cb"
p = "?a"
Output: false
Explanation: '?' matches 'c', but the second letter is 'a', which does not match 'b'.
Example 4:
Input:
s = "abceb"
p = "*a*b"
Output: true
Explanation: The first '*' matches the empty sequence, while the second '*' matches the
substring "dce".
Example 5:
Input:
s = "acdcb"
p = "a*c?b"
Output: false
"""
class Solution:
def isMatch(self, s: str, p: str) -> bool:
sp, pp = 0, 0
match = 0
start = -1
while sp < len(s):
if pp < len(p) and (s[sp] == p[pp] or p[pp] == '?'):
sp += 1
pp += 1
elif pp < len(p) and p[pp] == '*':
start = pp
match = sp
pp += 1
elif start != -1:
pp = start + 1
match += 1
sp = match
else:
return False
# 处理连续 * 情况
while pp < len(p) and p[pp] == '*':
pp += 1
return pp == len(p)
class SolutionDP:
def isMatch(self, s: str, p: str) -> bool:
m, n = len(s), len(p)
# 状态数组,dp[i][j] 表示字符串 s[0:i-1] 是否和 p[0:j-1] 匹配
dp = [[False for j in range(n + 1)] for i in range(m + 1)]
# s 和 p 都为空时表示匹配
dp[0][0] = True
# # 当 s 为空,p 为连续的星号时的情况。
# # 由于星号是可以代表空串的,所以只要 s 为空,那么连续的星号的位置都应该为 true,
# # 所以我们现将连续星号的位置都赋为 true。
# for i in range(1, n + 1):
# if p[i - 1] == '*':
# dp[0][i] = dp[0][i - 1]
# 当 s 为空时,p 必需有 * 才能匹配,且他的真值一定和去掉 * 后的前面字符串匹配情况相同
for i in range(1, n + 1):
dp[0][i] = p[i - 1] == '*' and dp[0][i - 1]
for i in range(1, m + 1):
for j in range(1, n + 1):
# 当 p 上一个字符串为 '?' 后者 p 上一个字符串等于 s 上一个字符串,则当前真值与上一位相同
if p[j - 1] == s[i - 1] or p[j - 1] == '?':
dp[i][j] = dp[i - 1][j - 1]
# p 上一个字符串为 * 时,则表示 p 往后走一位或者 s 往后走一位
elif p[j - 1] == '*':
dp[i][j] = dp[i - 1][j] or dp[i][j - 1]
return dp[m][n]
if __name__ == '__main__':
cases = [
('aa', 'a', False),
('aa', '*', True),
('cb', '?a', False),
('abceb', '*a*b', True),
('acdcb', 'a*c?b', False)
] # yapf: disable
for case in cases:
for S in [Solution, SolutionDP]:
assert S().isMatch(case[0], case[1]) == case[2]
| en | 0.391158 | Wildcard Matching Given an input string (s) and a pattern (p), implement wildcard pattern matching with support for '?' and '*'. '?' Matches any single chracter. '*' Matches any sequence of characters ( including the empty sequence). The matching should cover the entire input string (not partial). Note: s could be empty and contains only lowercase letters a-z; p could be empty and contains only lowercase letters a-z, and chracters like ? or *. Example 1: Input: s = "aa" p = "a" Output: false Explanation: "a" does not match the entire string "aa". Example 2: Input: s = "aa" p = "*" Output: true Explanation: '*' matches any sequence. Example 3: Input: s = "cb" p = "?a" Output: false Explanation: '?' matches 'c', but the second letter is 'a', which does not match 'b'. Example 4: Input: s = "abceb" p = "*a*b" Output: true Explanation: The first '*' matches the empty sequence, while the second '*' matches the substring "dce". Example 5: Input: s = "acdcb" p = "a*c?b" Output: false # 处理连续 * 情况 # 状态数组,dp[i][j] 表示字符串 s[0:i-1] 是否和 p[0:j-1] 匹配 # s 和 p 都为空时表示匹配 # # 当 s 为空,p 为连续的星号时的情况。 # # 由于星号是可以代表空串的,所以只要 s 为空,那么连续的星号的位置都应该为 true, # # 所以我们现将连续星号的位置都赋为 true。 # for i in range(1, n + 1): # if p[i - 1] == '*': # dp[0][i] = dp[0][i - 1] # 当 s 为空时,p 必需有 * 才能匹配,且他的真值一定和去掉 * 后的前面字符串匹配情况相同 # 当 p 上一个字符串为 '?' 后者 p 上一个字符串等于 s 上一个字符串,则当前真值与上一位相同 # p 上一个字符串为 * 时,则表示 p 往后走一位或者 s 往后走一位 # yapf: disable | 4.16564 | 4 |
tests/dhcpv4/classification/test_v4_classification_release.py | isc-projects/forge | 22 | 6615313 | """DHCPv4 Client Classification release process"""
# pylint: disable=invalid-name,line-too-long
import pytest
import srv_msg
import misc
import srv_control
@pytest.mark.v4
@pytest.mark.classification
@pytest.mark.release
def test_v4_client_classification_release_same_chaddr_client_id():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.config_client_classification(0, 'VENDOR_CLASS_my-own-class')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_sets_value('Client', 'ciaddr', '192.168.50.1')
srv_msg.client_send_msg('RELEASE')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:1F:D0:11:22:33')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
@pytest.mark.v4
@pytest.mark.classification
@pytest.mark.release
def test_v4_client_classification_release_different_chaddr_client_id():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.config_client_classification(0, 'VENDOR_CLASS_my-own-class')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:11:22:33')
srv_msg.client_does_include_with_value('client_id', '00010203123456')
srv_msg.client_copy_option('server_id')
srv_msg.client_sets_value('Client', 'ciaddr', '192.168.50.1')
srv_msg.client_send_msg('RELEASE')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:1F:D0:11:22:33')
# Client adds to the message client_id with value 00010203040506.
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
# we should check logs here..
| """DHCPv4 Client Classification release process"""
# pylint: disable=invalid-name,line-too-long
import pytest
import srv_msg
import misc
import srv_control
@pytest.mark.v4
@pytest.mark.classification
@pytest.mark.release
def test_v4_client_classification_release_same_chaddr_client_id():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.config_client_classification(0, 'VENDOR_CLASS_my-own-class')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_sets_value('Client', 'ciaddr', '192.168.50.1')
srv_msg.client_send_msg('RELEASE')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:1F:D0:11:22:33')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
@pytest.mark.v4
@pytest.mark.classification
@pytest.mark.release
def test_v4_client_classification_release_different_chaddr_client_id():
misc.test_setup()
srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.1-192.168.50.1')
srv_control.config_client_classification(0, 'VENDOR_CLASS_my-own-class')
srv_control.build_and_send_config_files()
srv_control.start_srv('DHCP', 'started')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'OFFER')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:00:00:00')
srv_msg.client_does_include_with_value('client_id', '00010203040506')
srv_msg.client_copy_option('server_id')
srv_msg.client_does_include_with_value('requested_addr', '192.168.50.1')
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('REQUEST')
misc.pass_criteria()
srv_msg.send_wait_for_message('MUST', 'ACK')
srv_msg.response_check_content('yiaddr', '192.168.50.1')
srv_msg.response_check_option_content(1, 'value', '255.255.255.0')
srv_msg.response_check_option_content(54, 'value', '$(SRV4_ADDR)')
srv_msg.response_check_option_content(61, 'value', '00010203040506')
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:00:00:11:22:33')
srv_msg.client_does_include_with_value('client_id', '00010203123456')
srv_msg.client_copy_option('server_id')
srv_msg.client_sets_value('Client', 'ciaddr', '192.168.50.1')
srv_msg.client_send_msg('RELEASE')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
misc.test_procedure()
srv_msg.client_sets_value('Client', 'chaddr', '00:1F:D0:11:22:33')
# Client adds to the message client_id with value 00010203040506.
srv_msg.client_does_include_with_value('vendor_class_id', 'my-own-class')
srv_msg.client_requests_option(1)
srv_msg.client_send_msg('DISCOVER')
misc.pass_criteria()
srv_msg.send_dont_wait_for_message()
# we should check logs here..
| en | 0.523177 | DHCPv4 Client Classification release process # pylint: disable=invalid-name,line-too-long # Client adds to the message client_id with value 00010203040506. # we should check logs here.. | 1.980654 | 2 |
waspy/app.py | colinchilds/waspy | 44 | 6615314 | <reponame>colinchilds/waspy
import asyncio
import logging
import signal
import sys
from functools import wraps
from typing import List, Union, Iterable
from http import HTTPStatus
from concurrent.futures import CancelledError
from contextvars import ContextVar, copy_context
from copy import copy
from .parser import ParserABC, JSONParser, parsers as app_parsers
from ._cors import CORSHandler
from .client import Client
from .webtypes import Request, Response
from .exceptions import ResponseError, UnsupportedMediaType
from .router import Router
from .transports.transportabc import TransportABC
from .transports.rabbitmqtransport import NackMePleaseError
from .configuration import Config, ConfigError
from .ctx import request_context
from . import errorlogging
logging.basicConfig(format='%(asctime)s %(levelname)s [%(module)s.%(funcName)s] %(message)s')
logger = logging.getLogger('waspy')
async def response_wrapper_factory(app, handler):
@wraps(handler)
async def wrap_response_middleware(request):
response = await handler(request)
if not isinstance(response, Response):
if isinstance(response, tuple):
body = response[0]
status = response[1]
response = Response(status=status, body=body)
elif isinstance(response, dict) or isinstance(response, str):
response = Response(body=response)
elif response is None:
response = Response(status=HTTPStatus.NO_CONTENT)
else:
raise ValueError('Request handler returned an invalid type.'
' Return types should be one of '
'[Response, dict, str, None, (dict, int)]')
return response
return wrap_response_middleware
class Application:
def __init__(self,
transport: Union[TransportABC,
Iterable[TransportABC]]=None,
*,
middlewares: List[callable]=None,
default_headers: dict=None,
debug: bool=False,
router: Router=None,
config: Config=None,
loop=None,
parsers=None,
default_content_type='application/json'):
if transport is None:
from waspy.transports.httptransport import HTTPTransport
transport = HTTPTransport()
if isinstance(transport, (list, set)):
transport = tuple(transport)
if not isinstance(transport, tuple):
transport = (transport,)
if middlewares is None:
middlewares = ()
middlewares = tuple(m for m in middlewares)
middlewares += (response_wrapper_factory,)
if router is None:
router = Router()
if default_headers is None:
default_headers = {'Server': 'waspy'}
if not config:
config = Config()
# Parser management
if not parsers:
parsers = [JSONParser()]
for parser in parsers:
self.add_parser(parser)
self.transport = transport
self.middlewares = middlewares
self.default_headers = default_headers
self.debug = debug
self.router = router
self.on_start = []
self.on_stop = []
self._client = None
self.config = config
self.raven = None
self.logger = None
self._cors_handler = None
self.loop = loop
self.default_content_type = default_content_type
@property
def client(self) -> Client:
if not self._client:
self._client = Client(transport=self.transport[0].get_client())
return self._client
def add_parser(self, parser: ParserABC):
app_parsers[parser.content_type] = parser
def start_shutdown(self, signum=None, frame=None):
# loop = asyncio.get_event_loop()
for t in self.transport:
t.shutdown()
def run(self):
if not self.loop:
self.loop = asyncio.get_event_loop()
loop = self.loop
if self.config['debug']:
logger.setLevel('DEBUG')
self.loop.set_debug(True)
# init logger
self._create_logger()
# add cors support if needed
self._cors_handler = CORSHandler.from_config(self.config)
if self._cors_handler:
self.router.add_generic_options_handler(self._cors_handler.options_handler)
# wrap handlers in middleware
loop.run_until_complete(self._wrap_handlers())
for t in self.transport:
t.listen(loop=loop, config=self.config)
# Call on-startup hooks
loop.run_until_complete(self.run_on_start_hooks())
# todo: fork/add processes?
tasks = []
for t in self.transport:
tasks.append(t.start(self.handle_request))
# register signals, so that stopping the service works correctly
loop.add_signal_handler(signal.SIGTERM, self.start_shutdown)
loop.add_signal_handler(signal.SIGINT, self.start_shutdown)
# Run all transports - they shouldn't return until shutdown
loop.run_until_complete(asyncio.gather(*tasks))
self.shutdown()
async def run_on_start_hooks(self):
"""
Run all hooks in on_start. Allows for coroutines and synchronous functions.
"""
logger.debug("Running on start hooks")
await self._run_hooks(self.on_start)
async def run_on_stop_hooks(self):
"""
Run all hooks in on_stop. Allows for coroutines and synchronous functions.
"""
logger.debug("Running on stop hooks")
await self._run_hooks(self.on_stop)
async def handle_request(self, request: Request) -> Response:
"""
coroutine: This method is called by Transport
implementation to handle the actual request.
It returns a webtype.Response object.
"""
# Get handler
try:
try:
self._set_ctx(request)
handler = self.router.get_handler_for_request(request)
request.app = self
response = await handler(request)
response.app = self
except ResponseError as r:
parser = app_parsers.get(request.content_type, None)
# Content-Type of an error response will be the same as the incoming request
# unless a parser for that content type is not found.
if not parser:
content_type = r.content_type
if not content_type:
content_type = self.default_content_type
else:
content_type = request.content_type
response = Response(
headers=r.headers, correlation_id=r.correlation_id, body=r.body,
status=r.status, content_type=content_type
)
response.app = self
if r.log:
exc_info = sys.exc_info()
self.logger.log_exception(request, exc_info, level='warning')
# invoke serialization (json) to make sure it works
_ = response.body
except CancelledError:
# This error can happen if a client closes the connection
# The response shouldnt really ever be used
return None
except asyncio.TimeoutError:
response = Response(status=HTTPStatus.GATEWAY_TIMEOUT,
body={'message': 'Gateway Timeout'})
response.app = self
except NackMePleaseError:
""" See message where this error is defined """
raise
except Exception:
exc_info = sys.exc_info()
self.logger.log_exception(request, exc_info)
response = Response(status=HTTPStatus.INTERNAL_SERVER_ERROR,
body={'message': 'Server Error'})
response.app = self
if not response.correlation_id:
response.correlation_id = request.correlation_id
if self._cors_handler is not None:
self._cors_handler.add_cors_headers(request, response)
# add default headers
response.headers = {**self.default_headers, **response.headers}
return response
def _set_ctx(self, request):
ctx = {'correlation_id': request.correlation_id,
'ctx_headers':
{k: v for k, v in request.headers.items() if k.startswith('ctx-')}}
request_context.set(ctx)
async def _wrap_handlers(self):
handler_gen = self.router._get_and_wrap_routes()
try:
handler = next(handler_gen)
while True:
wrapped = handler
for middleware in self.middlewares[::-1]:
wrapped = await middleware(self, wrapped)
handler = handler_gen.send(wrapped)
except StopIteration:
pass
def _create_logger(self):
try:
dsn = self.config['sentry']['dsn']
except (ConfigError, ValueError):
self.logger = errorlogging.ErrorLoggingBase()
else:
try:
env = self.config['app_env']
except (ConfigError):
env = 'waspy'
self.logger = errorlogging.SentryLogging(
dsn=dsn,
environment=env
)
async def _run_hooks(self, hooks):
coros = []
while len(hooks):
task = hooks.pop()
if asyncio.iscoroutinefunction(task):
coros.append(task(self))
else:
task(self)
await asyncio.gather(*coros)
def shutdown(self):
self.loop.run_until_complete(self.run_on_stop_hooks())
self.loop.close()
| import asyncio
import logging
import signal
import sys
from functools import wraps
from typing import List, Union, Iterable
from http import HTTPStatus
from concurrent.futures import CancelledError
from contextvars import ContextVar, copy_context
from copy import copy
from .parser import ParserABC, JSONParser, parsers as app_parsers
from ._cors import CORSHandler
from .client import Client
from .webtypes import Request, Response
from .exceptions import ResponseError, UnsupportedMediaType
from .router import Router
from .transports.transportabc import TransportABC
from .transports.rabbitmqtransport import NackMePleaseError
from .configuration import Config, ConfigError
from .ctx import request_context
from . import errorlogging
logging.basicConfig(format='%(asctime)s %(levelname)s [%(module)s.%(funcName)s] %(message)s')
logger = logging.getLogger('waspy')
async def response_wrapper_factory(app, handler):
@wraps(handler)
async def wrap_response_middleware(request):
response = await handler(request)
if not isinstance(response, Response):
if isinstance(response, tuple):
body = response[0]
status = response[1]
response = Response(status=status, body=body)
elif isinstance(response, dict) or isinstance(response, str):
response = Response(body=response)
elif response is None:
response = Response(status=HTTPStatus.NO_CONTENT)
else:
raise ValueError('Request handler returned an invalid type.'
' Return types should be one of '
'[Response, dict, str, None, (dict, int)]')
return response
return wrap_response_middleware
class Application:
def __init__(self,
transport: Union[TransportABC,
Iterable[TransportABC]]=None,
*,
middlewares: List[callable]=None,
default_headers: dict=None,
debug: bool=False,
router: Router=None,
config: Config=None,
loop=None,
parsers=None,
default_content_type='application/json'):
if transport is None:
from waspy.transports.httptransport import HTTPTransport
transport = HTTPTransport()
if isinstance(transport, (list, set)):
transport = tuple(transport)
if not isinstance(transport, tuple):
transport = (transport,)
if middlewares is None:
middlewares = ()
middlewares = tuple(m for m in middlewares)
middlewares += (response_wrapper_factory,)
if router is None:
router = Router()
if default_headers is None:
default_headers = {'Server': 'waspy'}
if not config:
config = Config()
# Parser management
if not parsers:
parsers = [JSONParser()]
for parser in parsers:
self.add_parser(parser)
self.transport = transport
self.middlewares = middlewares
self.default_headers = default_headers
self.debug = debug
self.router = router
self.on_start = []
self.on_stop = []
self._client = None
self.config = config
self.raven = None
self.logger = None
self._cors_handler = None
self.loop = loop
self.default_content_type = default_content_type
@property
def client(self) -> Client:
if not self._client:
self._client = Client(transport=self.transport[0].get_client())
return self._client
def add_parser(self, parser: ParserABC):
app_parsers[parser.content_type] = parser
def start_shutdown(self, signum=None, frame=None):
# loop = asyncio.get_event_loop()
for t in self.transport:
t.shutdown()
def run(self):
if not self.loop:
self.loop = asyncio.get_event_loop()
loop = self.loop
if self.config['debug']:
logger.setLevel('DEBUG')
self.loop.set_debug(True)
# init logger
self._create_logger()
# add cors support if needed
self._cors_handler = CORSHandler.from_config(self.config)
if self._cors_handler:
self.router.add_generic_options_handler(self._cors_handler.options_handler)
# wrap handlers in middleware
loop.run_until_complete(self._wrap_handlers())
for t in self.transport:
t.listen(loop=loop, config=self.config)
# Call on-startup hooks
loop.run_until_complete(self.run_on_start_hooks())
# todo: fork/add processes?
tasks = []
for t in self.transport:
tasks.append(t.start(self.handle_request))
# register signals, so that stopping the service works correctly
loop.add_signal_handler(signal.SIGTERM, self.start_shutdown)
loop.add_signal_handler(signal.SIGINT, self.start_shutdown)
# Run all transports - they shouldn't return until shutdown
loop.run_until_complete(asyncio.gather(*tasks))
self.shutdown()
async def run_on_start_hooks(self):
"""
Run all hooks in on_start. Allows for coroutines and synchronous functions.
"""
logger.debug("Running on start hooks")
await self._run_hooks(self.on_start)
async def run_on_stop_hooks(self):
"""
Run all hooks in on_stop. Allows for coroutines and synchronous functions.
"""
logger.debug("Running on stop hooks")
await self._run_hooks(self.on_stop)
async def handle_request(self, request: Request) -> Response:
"""
coroutine: This method is called by Transport
implementation to handle the actual request.
It returns a webtype.Response object.
"""
# Get handler
try:
try:
self._set_ctx(request)
handler = self.router.get_handler_for_request(request)
request.app = self
response = await handler(request)
response.app = self
except ResponseError as r:
parser = app_parsers.get(request.content_type, None)
# Content-Type of an error response will be the same as the incoming request
# unless a parser for that content type is not found.
if not parser:
content_type = r.content_type
if not content_type:
content_type = self.default_content_type
else:
content_type = request.content_type
response = Response(
headers=r.headers, correlation_id=r.correlation_id, body=r.body,
status=r.status, content_type=content_type
)
response.app = self
if r.log:
exc_info = sys.exc_info()
self.logger.log_exception(request, exc_info, level='warning')
# invoke serialization (json) to make sure it works
_ = response.body
except CancelledError:
# This error can happen if a client closes the connection
# The response shouldnt really ever be used
return None
except asyncio.TimeoutError:
response = Response(status=HTTPStatus.GATEWAY_TIMEOUT,
body={'message': 'Gateway Timeout'})
response.app = self
except NackMePleaseError:
""" See message where this error is defined """
raise
except Exception:
exc_info = sys.exc_info()
self.logger.log_exception(request, exc_info)
response = Response(status=HTTPStatus.INTERNAL_SERVER_ERROR,
body={'message': 'Server Error'})
response.app = self
if not response.correlation_id:
response.correlation_id = request.correlation_id
if self._cors_handler is not None:
self._cors_handler.add_cors_headers(request, response)
# add default headers
response.headers = {**self.default_headers, **response.headers}
return response
def _set_ctx(self, request):
ctx = {'correlation_id': request.correlation_id,
'ctx_headers':
{k: v for k, v in request.headers.items() if k.startswith('ctx-')}}
request_context.set(ctx)
async def _wrap_handlers(self):
handler_gen = self.router._get_and_wrap_routes()
try:
handler = next(handler_gen)
while True:
wrapped = handler
for middleware in self.middlewares[::-1]:
wrapped = await middleware(self, wrapped)
handler = handler_gen.send(wrapped)
except StopIteration:
pass
def _create_logger(self):
try:
dsn = self.config['sentry']['dsn']
except (ConfigError, ValueError):
self.logger = errorlogging.ErrorLoggingBase()
else:
try:
env = self.config['app_env']
except (ConfigError):
env = 'waspy'
self.logger = errorlogging.SentryLogging(
dsn=dsn,
environment=env
)
async def _run_hooks(self, hooks):
coros = []
while len(hooks):
task = hooks.pop()
if asyncio.iscoroutinefunction(task):
coros.append(task(self))
else:
task(self)
await asyncio.gather(*coros)
def shutdown(self):
self.loop.run_until_complete(self.run_on_stop_hooks())
self.loop.close() | en | 0.76845 | # Parser management # loop = asyncio.get_event_loop() # init logger # add cors support if needed # wrap handlers in middleware # Call on-startup hooks # todo: fork/add processes? # register signals, so that stopping the service works correctly # Run all transports - they shouldn't return until shutdown Run all hooks in on_start. Allows for coroutines and synchronous functions. Run all hooks in on_stop. Allows for coroutines and synchronous functions. coroutine: This method is called by Transport implementation to handle the actual request. It returns a webtype.Response object. # Get handler # Content-Type of an error response will be the same as the incoming request # unless a parser for that content type is not found. # invoke serialization (json) to make sure it works # This error can happen if a client closes the connection # The response shouldnt really ever be used See message where this error is defined # add default headers | 1.941769 | 2 |
func_tests/test_optout_page.py | ralphqq/dc-alerts-service | 0 | 6615315 | import os
import time
from selenium.webdriver.common.keys import Keys
from func_tests.base import FunctionalTest
MAX_WAIT = 30 # seconds
SAMPLE_RECIPIENT_EMAIL = os.environ.get('SAMPLE_RECIPIENT_EMAIL', '<EMAIL>')
class OptOutPageTest(FunctionalTest):
def test_user_can_request_unsubscribe_link(self):
# User visits homepage
self.browser.get(self.live_server_url)
# User sees the 'Unsubscribe' item in the top navbar
menu_item = self.browser.find_element_by_xpath(
'//a[@class="nav-link" and text()="Unsubscribe"]'
)
menu_item.click()
# She sees the title 'Unsubscribe from our mailing list'
text_to_find = 'Unsubscribe from our mailing list'
self.assertIn(text_to_find, self.browser.title)
header_text = self.browser.find_element_by_tag_name('h1').text
self.assertIn(text_to_find, header_text)
# She is asked to enter her email address
email_input_box = self.browser.find_element_by_id('id_email_optout')
self.assertEqual(
email_input_box.get_attribute('placeholder'),
'Enter your email address'
)
# She provides a valid email address
valid_email = SAMPLE_RECIPIENT_EMAIL
email_input_box.send_keys(valid_email)
# She hits enter and a page loads asking to click
# the link in the email we sent
email_input_box.send_keys(Keys.ENTER)
header_text = self.wait_for(
lambda: self.browser.find_element_by_xpath(
'//h1[contains(text(), "Click")]'
)
)
self.assertIn(
'Click the link we sent you',
header_text.text
)
# She also sees the email address she typed.
body_text = self.browser.find_element_by_tag_name('body').text
self.assertIn(valid_email, body_text)
| import os
import time
from selenium.webdriver.common.keys import Keys
from func_tests.base import FunctionalTest
MAX_WAIT = 30 # seconds
SAMPLE_RECIPIENT_EMAIL = os.environ.get('SAMPLE_RECIPIENT_EMAIL', '<EMAIL>')
class OptOutPageTest(FunctionalTest):
def test_user_can_request_unsubscribe_link(self):
# User visits homepage
self.browser.get(self.live_server_url)
# User sees the 'Unsubscribe' item in the top navbar
menu_item = self.browser.find_element_by_xpath(
'//a[@class="nav-link" and text()="Unsubscribe"]'
)
menu_item.click()
# She sees the title 'Unsubscribe from our mailing list'
text_to_find = 'Unsubscribe from our mailing list'
self.assertIn(text_to_find, self.browser.title)
header_text = self.browser.find_element_by_tag_name('h1').text
self.assertIn(text_to_find, header_text)
# She is asked to enter her email address
email_input_box = self.browser.find_element_by_id('id_email_optout')
self.assertEqual(
email_input_box.get_attribute('placeholder'),
'Enter your email address'
)
# She provides a valid email address
valid_email = SAMPLE_RECIPIENT_EMAIL
email_input_box.send_keys(valid_email)
# She hits enter and a page loads asking to click
# the link in the email we sent
email_input_box.send_keys(Keys.ENTER)
header_text = self.wait_for(
lambda: self.browser.find_element_by_xpath(
'//h1[contains(text(), "Click")]'
)
)
self.assertIn(
'Click the link we sent you',
header_text.text
)
# She also sees the email address she typed.
body_text = self.browser.find_element_by_tag_name('body').text
self.assertIn(valid_email, body_text)
| en | 0.942954 | # seconds # User visits homepage # User sees the 'Unsubscribe' item in the top navbar # She sees the title 'Unsubscribe from our mailing list' # She is asked to enter her email address # She provides a valid email address # She hits enter and a page loads asking to click # the link in the email we sent # She also sees the email address she typed. | 2.3653 | 2 |
Kooi_github/scripts/pre-processing/Kooi_theoretical_profiles.py | dlobelle/Local_postpro | 0 | 6615316 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 10 12:10:22 2020
@author: Lobel001
"""
# V3 COMBINES ALL PROFILES THAT ARE TIME DEPENDENT INTO ONE LOOP TO SAVE AS PICKLE
#------------ Kooi et al. 2017 kernals --------------
## TOTAL EQUATION FOR VERTICAL VELOCITY: Vs = -(((rho_tot - rho_sw)/rho_sw) * g * omega * upsilon_sw)**(1/3)
#from IPython import get_ipython
#get_ipython().magic('reset -sf')
#%matplotlib qt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import math as math
#import pandas as pd
#import operator
import pickle
from numpy import *
#import scipy.linalg
#import scipy.integrate as integrate
#from math import e
plt.close("all")
def cube(x):
if 0<=x: return x**(1./3.)
return -(-x)**(1./3.)
# depth profile used in NEMO [metres]
nemo_z = [0, 1.023907, 2.10319, 3.251309, 4.485053, 5.825238, 7.297443,
8.932686, 10.7679, 12.84599, 15.21527, 17.92792, 21.03757, 24.59599,
28.64965, 33.23697, 38.3871, 44.12101, 50.45447, 57.40257, 64.9846,
73.2287, 82.17556, 91.88141, 102.4202, 113.8852, 126.3909, 140.074,
155.095, 171.6402, 189.9228, 210.1845, 232.697, 257.7629, 285.7158,
316.9199, 351.768, 390.6786, 434.0905, 482.4563, 536.2332, 595.8721,
661.8052, 734.4321, 814.1057, 901.118, 995.6885, 1097.954, 1207.963,
1325.672, 1450.95, 1583.582, 1723.28, 1869.693, 2022.425, 2181.044,
2345.101, 2514.137, 2687.699, 2865.347, 3046.659, 3231.24, 3418.723,
3608.769, 3801.072, 3995.354, 4191.367, 4388.89, 4587.726, 4787.702,
4988.667, 5190.488, 5393.049, 5596.249, 5800]
# CHOOSE THESE:
rho_pl = 1050. # density of plastic (kg m-3): 840, 920, 940, 1050, 1380 (last 2 are initially non-buoyant)
r_pl = 10.**(-4) # radius of plastic (m): ranges from 10 mm to 0.1 um or 10-2 to 10-7 m
# CONSTANTS
#rho_tot = rho_pl + rho_bf # total density of plastic (kg m-3)
g = 7.32e10#/(86400**2) # gravitational acceleration (m d-2), now [s-2]
k = 1.0306E-13#/(86400**2) # Boltzmann constant [m2 kg d-2 K-1] now [s-2] (=1.3804E-23)
D_n = 2*r_pl # equivalent spherical diameter [m]
# INITIAL CONDITIONS
A_0 = 0 # initial number of attached algae [no. m-2] - changes dynamically with Runge-Kutta integration
z = 0 # initial depth [m]
t = 0 # time? [days?]
""" Temperature, salinity and density profiles: rho_sw [kg m-3] """ # using a Hill function (Hill et al. 1910)
# ------Temperature [C] (Eq. 22)------
surf_z = 0
tot_z = -4000 # total depth of ocean
z_step = -1
T_surf = 25.
T_bot = 1.5
p = 2 # steepness temp decrease
z_c = -300 # depth of thermocline (m)
T_z = []
for z_i in range(len(nemo_z)): #range (surf_z,tot_z,z_step):
z = nemo_z[z_i]
T_z.append(T_surf + ((T_bot - T_surf) * (z**p/(z**p + z_c**p))))
#y = int(-tot_z+surf_z/-z_step)
depth = nemo_z #range(0,y)
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(nrows=1, ncols=5) #, sharex=True, sharey=True)
ax1.scatter(T_z,depth)
ax1.title.set_text('Temperature [C]')
ax1.set_ylabel('Depth [m]')
ax1.set_ylim(ax1.get_ylim()[::-1])
ax1.set_xlim([min(T_z), max(T_z)])
# ------Salinity [g kg-1] (Eq. 24)------
# Using North Pacific coefficients for salinity: used in paper (Table S4)
c1 = 9.9979979767E-17
c2 = 1.0536246487E-12
c3 = 3.9968286066E-09
c4 = 6.5411526250E-06
c5 = 4.1954014008E-03
c6 = 3.5172984035E+01
s_fix = 34.6 # [g kg-1] constant salinity from z > zfix
z_fix = -1000 # NOTE: depth must be negative for equation to work
#ind = (enumerate(nemo_z)*-1>z_fix, key=operator.itemgetter(1))
ind = nonzero(np.array(nemo_z)*-1>np.array(z_fix))
z_i = []
S_z_g = []
for z_i in range (size(ind)): #surf_z,z_fix, z_step):
z = nemo_z[z_i]*-1
S_z_g.append((c1*(z**5)) + (c2*(z**4)) + (c3*(z**3)) + (c4*(z**2)) + (c5*z) + c6)
# to add a linear fit from 1000 to 2000 m
idx = (np.array(nemo_z)<2000)*(np.array(nemo_z)>1000)
n = np.where(idx)
S_end = S_z_g[-1]
s_ = np.linspace(S_end,s_fix,size(n)+2)
# to add the fixed salinity below 2000m (in paper it is 1000 but in excel, linear interp to 2000m)
S_rest = size(depth)- size(s_) - size(S_z_g)+1
s = np.array([s_fix] * S_rest)
S_z_g = np.concatenate((S_z_g[0:-1],s_,s))
#y2 = int(-z_fix/-z_step)
#depth2 = range(0,y2)
ax2.scatter(S_z_g,depth) # depth2
ax2.title.set_text('Salinity [g kg-1]')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlim([min(S_z_g), max(S_z_g)])
#------ Density profile [kg m-3] (Eq. 23)------
a1 = 9.999E2
a2 = 2.034E-2
a3 = -6.162E-3
a4 = 2.261E-5
a5 = -4.657E-8
b1 = 8.020E2
b2 = -2.001
b3 = 1.677E-2
b4 = -3.060E-5 #2.261E-5 # THIS WAS A TYPO IN TABLE S1 AND KOOI CODE
b5 = -4.657E-5
rho_sw = []
S_z = S_z_g/1000 # NOTE: salinity must be in kg/kg instead of g/kg for equation to work
for i in range(len(depth)):
rho_sw.append((a1 + (a2*T_z[i]) + (a3*(T_z[i]**2)) + (a4*(T_z[i]**3)) +
(a5*(T_z[i]**4))) + ((b1*S_z[i]) + (b2*S_z[i]*T_z[i]) +
(b3*S_z[i]*(T_z[i]**2)) + (b4*S_z[i]*(T_z[i]**3)) + (b5*(S_z[i]**2)*(T_z[i]**2))))
ax3.scatter(rho_sw,depth) # depth2
ax3.title.set_text('Density [kg m-3]')
ax3.set_ylim(ax3.get_ylim()[::-1])
ax3.set_xlim([min(rho_sw), max(rho_sw)])
mu_w = [] # dynamic viscosity [kg m-1 s-1]
A = []
B = []
mu_sw = []
upsilon_sw = []
""" Kinematic viscosity: upsilon_sw [m2 s-1]: Eq. 25 to 29"""
for ii in range(len(depth)):
mu_w.append(4.2844E-5 + (1/((0.157*(T_z[ii] + 64.993)**2)-91.296))) # kg m-1 s-1
A.append(1.541 + 1.998E-2*T_z[ii]- 9.52E-5*T_z[ii]**2) #
B.append(7.974 - 7.561E-2*T_z[ii] + 4.724E-4*T_z[ii]**2)
mu_sw.append(mu_w[ii]*(1+ A[ii]*S_z[ii] + B[ii]*S_z[ii]**2))
upsilon_sw.append(mu_sw[ii]/rho_sw[ii])
ax4.scatter(upsilon_sw,depth) # depth2
ax4.title.set_text('Kinematic viscosity [m2 s-1]')
ax4.set_ylim(ax4.get_ylim()[::-1])
ax4.set_xlim([min(upsilon_sw), max(upsilon_sw)])
ax4.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))
ax5.scatter(mu_sw,depth) # depth2
ax5.title.set_text('Seawater viscosity [m2 s-1]')
ax5.set_ylim(ax5.get_ylim()[::-1])
ax5.set_xlim([min(mu_sw), max(mu_sw)])
ax5.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))
plt.show()
# TO SAVE FIRST PROFILES IN SAME PICKLE
#with open('profiles.pickle', 'wb') as f:
# pickle.dump([depth,T_z,S_z,rho_sw,upsilon_sw,mu_sw], f)
""" Algal growth: Eq. 3 to 21"""
#------ Algae properties [Table S1]------
rho_bf = 1388. # density of biofilm (kg m-3)
V_A = 2.0E-16 # Volume of 1 algal cell [m-3]
R_A = 0.1 # respiration rate [d-1]
Q10 = 2. # temperature coefficient respiration
mu_max = 1.85#/86400 # maximum growth rate algae [d-1], now [s-1]
alpha = 0.12#/86400 # initial slope [d-1], now [s-1]
T_min = 0.2 # min temp algal growth [oC]
T_opt = 26.7 # optimal algal growth temp [oC]
T_max = 33.3 # max temp algal growth [oC]
I_opt = 1.75392E13#/86400 # optimal light intensity algal growth [uE m-2 d-1], now [s-1]
gamma = 1.728E5#/86400 # shear [d-1], now [s-1]
#------ Light profile [Table S1]: I tried different light extinction coefficients to check why the euphotic zone was so shall in Kooi------
I_m = 1.2E8#/86400 # Surface light intensity at noon [uE m-2 d-1], now [s-1]
k_w = 0.2 #0.1 #0.2 # extinction coefficient water [m-1]
k_p = 0.02 #0.12 # extinction coefficient algae [m-1 mg-1 chl L-1]
#------ Variables computed below using equations 3 to 21------
V_pl = [] # volume of plastic [m3]
theta_pl = [] # surface area of plastic particle [m2]
r_tot = [] # total radius of plastic + biofilm thickness
r_A = [] # radius of individual particle [m]
D_pl = [] # diffusivity of plastic particle [m2 s-1]
D_A = [] # diffusivity of algae [m2 s-1]
beta_Abrown = [] # Brownian motion [m3 s-1]
beta_Aset = [] # differential settling [m3 s-1]
beta_Ashear = [] # advective shear [m3 s-1]
deltaA = [] # attached algal growth
V_bf = [] # volume of biofilm [m3]
V_tot = [] # volume of total [m3]
t_bf = [] # biofilm thickness [m]
rho_tot = [] # total density [kg m-3]
Dstar = [] # dimensionless particle diameter
omega_star = [] # dimensionless settling velocity
epsilon = [] # light extinction coefficient [m-1]
I_0 = [] # light availability at the sea surface [uE m-2 s-1]
I_z = [] # light intensity at depth z [uE m-2 s-1]
phi = [] # temperature influence on algal growth
mu = [] # algal growth [s-1]
# ----------- Volumes -----------------
V_pl = (4/3)*math.pi*r_pl**3 # [m3]
theta_pl = 4*math.pi*r_pl**2 # surface area of plastic particle [m2]
V_bf = (V_A*A_0)*theta_pl # [m3]
V_tot = V_bf + V_pl # [m3]
t_bf = cube(V_tot*(3/(4*math.pi)))-r_pl # [m] #V_tot*(3/(4*math.pi))**(1/3) - r_pl
r_tot = r_pl + t_bf #[m]
rho_tot = ((r_pl**3) * rho_pl + ((r_pl + t_bf)**3 - (r_pl**3))*rho_bf)/((r_pl + t_bf)**3) # [kg m-3]
theta_tot = 4*math.pi*r_tot**2 #[m2]
#Dstar = ((rho_tot - rho_sw)*g*D_n**3)/(rho_sw*upsilon_sw**2)
#omega_star = 1.74E-4*Dstar**2
# --------------- Chlorophyll profile (same as Uitz et al. 2006), proxy for A_A (ambient algal concentrations [no. m-3]):
# using chl surf conc of 0.04-0.08 mg m-3 as default from table S2 -----------
Chla_Zbase = 0.151 # mg m-3
Cb = 0.533 # -
Cmax = 1.194 # -
zmax = 92.01 # m
deltaz = 43.46 # m
s2 = 1.72E-03 # m-1
euph_z = 120
C = np.zeros(len(depth))
depth2 = []
e_z2 = euph_z*2
d2 = np.array(depth)<e_z2
depth2 = np.array(depth)[d2]
C2 = []
for zz in range (len(depth2)):
z = depth2[zz]
C[zz] = ((Cb + Cmax*math.exp(-1*((z-zmax)/deltaz)**2)- s2*z)*Chla_Zbase)
C2.append((Cb + Cmax*math.exp(-1*((z-zmax)/deltaz)**2)- s2*z)*Chla_Zbase)
plt.figure(2)
plt.scatter(C,depth)
plt.xlabel('Kooi code Chla')
plt.ylabel('Depth [m]')
plt.ylabel('Depth [m]')
plt.gca().invert_yaxis()
#------------Profiles: light, carbon, ambient algae, algal growth (t,z): using 24 hours and 75 depth levels--------------
shift = 0 #0.5 #I_m/2 #0.5 # in the excel for Merel's test of changing num of hrs of light in a day
days = 1 #3
hours = 24 # mins = 24*60*days
time = np.linspace(0,days,hours) #mins)
# Defining the euphotic layer depth (1% of surf irradiance at midday)
epsilon = k_w + (k_p*0.05) # Merel only uses surface concentration Chla
Iz_max = []
for zz in range (len(depth)):
z = depth[zz]
Iz_max.append(I_m*math.exp(-1*epsilon*z))
idx = nonzero(np.array(Iz_max)<I_m*0.01)
euph_z_true = depth[idx[0][0]]
z =[]
zz = []
t = []
tt = []
I_0 = []
#I_z_t = []
#Carbon_t = []
#A_A_t = []
#mu_A_t = []
Carbon = np.zeros((len(time),len(depth)))
A_A = np.zeros((len(time),len(depth)))
phi = np.zeros((len(time),len(depth)))
mu_opt = np.zeros((len(time),len(depth)))
mu_A = np.zeros((len(time),len(depth)))
I_z = np.zeros((len(time),len(depth)))
for tt in range (0,hours): #mins):
t = time[tt]
e2 = 2*math.pi*t
if I_m*(math.sin(e2)+shift)<0:
I_0.append(0)
else:
I_0.append(I_m*(math.sin(2*math.pi*t)+shift))
for zz in range (len(depth2)): #0,euph_z[0]*2): #(len(depth)):
#Chla_int.append(np.trapz(C[0:z])) # np.cumsum(C) #
#epsilon.append(k_w + (k_p*Chla_int[z])) #C[z]))
z = depth2[zz]
I_z[tt,zz] = (I_0[tt]*math.exp(-1*epsilon*z))
Carbon[tt,zz] = (C[zz]/(0.003+1.0154*math.exp(0.050*T_z[zz])*math.exp(-0.059*I_z[tt,zz]/1E6)))
if Carbon[tt,zz]/(2726*1E-9)<0:
A_A[tt,zz] = 0
else:
A_A[tt,zz] = (Carbon[tt,zz]/(2726*1E-9))
phi[tt,zz] = (((T_z[zz] - T_max)*(T_z[zz] - T_min)**2)/((T_opt-T_min)*((T_opt-T_min)*
(T_z[zz]-T_opt)-(T_opt-T_max)*(T_opt+T_min-(2*T_z[zz])))))
if T_z[zz]<T_min:
mu_maxT = 0
elif T_z[zz]>T_max:
mu_maxT = 0
else:
mu_maxT = mu_max
#
if z > euph_z_true: #Iz_max[zz]<I_m*0.01: # since euphotic zone is defined as depth of 1% of midday surf light
mu_opt[tt,zz] = 0
else:
mu_opt[tt,zz] = (mu_maxT*(I_z[tt,zz]/(I_z[tt,zz] + (mu_maxT/alpha)*((I_z[tt,zz]/I_opt)-1)**2)))
mu_A[tt,zz] = (mu_opt[tt,zz]*phi[tt,zz])
plt.figure(4)
plt.scatter(I_z[tt,:],depth)
plt.figure(5)
plt.scatter(Carbon[tt,:],depth)
plt.figure(6)
plt.scatter(A_A[tt,:],depth)
plt.figure(7)
plt.scatter(mu_A[tt,:]/86400,depth)
# a2.scatter(I_z[tt,0:len(depth2)],depth2)
# a3.scatter(A_A[tt,0:len(depth2)],depth2)
# a4.scatter(mu_A[tt,0:len(depth2)],depth2)
#
#I_z_t.append(I_z)
# Carbon_t.append(Carbon)
# A_A_t.append(A_A)
# mu_A_t.append(mu_A)
# I_z_t2[tt,] = I_z
plt.figure(3)
plt.scatter(time,I_0)
plt.xlabel('Time [days]')
plt.ylabel('Light availability [uE m-2 min-1]') # double check units
plt.title('Light availability at the surface over 1 day [uE m-2 d-1]')
plt.figure(4)
plt.xlabel('Light availability [uE m-2 d-1]')
plt.ylabel('Depth [m]')
plt.title('Light availability for every surface I_0 [uE m-2 d-1]')
plt.gca().invert_yaxis()
plt.figure(5)
plt.xlabel('Carbon [mg C m-3]')
plt.ylabel('Depth [m]')
plt.title('Carbon')
plt.gca().invert_yaxis()
plt.figure(6)
plt.xlabel('Ambient algae [no. m-3]')
plt.ylabel('Depth [m]')
plt.title('Ambient algae')
plt.gca().invert_yaxis()
plt.figure(7)
#plt.scatter(mu_A,depth)
plt.xlabel('Algal growth [s-1]') #[d-1]
plt.ylabel('Depth [m]')
plt.title('Algal growth')
plt.gca().invert_yaxis()
A_A_t = A_A
mu_A_t = mu_A
#fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=1, ncols=4) #, sharex=True, sharey=True)
#C2 = np.array(C)[d2]
#a1.scatter(list(C2),list(depth2))
#ax1.scatter(T_z,depth)
#ax1.title.set_text('Temperature [C]')
#ax1.set_ylabel('Depth [m]')
#ax1.set_ylim(ax1.get_ylim()[::-1])
#ax1.set_xlim([min(T_z), max(T_z)])
#with open('profiles_t.pickle', 'wb') as p:
# pickle.dump([depth,time,A_A_t,mu_A_t], p)
####### STOPPED FROM HERE ON SINCE REALISED THAT THE TIME DERVIATIVE WILL BE EASIER TO COMPUTE FROM WITHIN PARCELS
| #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 10 12:10:22 2020
@author: Lobel001
"""
# V3 COMBINES ALL PROFILES THAT ARE TIME DEPENDENT INTO ONE LOOP TO SAVE AS PICKLE
#------------ Kooi et al. 2017 kernals --------------
## TOTAL EQUATION FOR VERTICAL VELOCITY: Vs = -(((rho_tot - rho_sw)/rho_sw) * g * omega * upsilon_sw)**(1/3)
#from IPython import get_ipython
#get_ipython().magic('reset -sf')
#%matplotlib qt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import math as math
#import pandas as pd
#import operator
import pickle
from numpy import *
#import scipy.linalg
#import scipy.integrate as integrate
#from math import e
plt.close("all")
def cube(x):
if 0<=x: return x**(1./3.)
return -(-x)**(1./3.)
# depth profile used in NEMO [metres]
nemo_z = [0, 1.023907, 2.10319, 3.251309, 4.485053, 5.825238, 7.297443,
8.932686, 10.7679, 12.84599, 15.21527, 17.92792, 21.03757, 24.59599,
28.64965, 33.23697, 38.3871, 44.12101, 50.45447, 57.40257, 64.9846,
73.2287, 82.17556, 91.88141, 102.4202, 113.8852, 126.3909, 140.074,
155.095, 171.6402, 189.9228, 210.1845, 232.697, 257.7629, 285.7158,
316.9199, 351.768, 390.6786, 434.0905, 482.4563, 536.2332, 595.8721,
661.8052, 734.4321, 814.1057, 901.118, 995.6885, 1097.954, 1207.963,
1325.672, 1450.95, 1583.582, 1723.28, 1869.693, 2022.425, 2181.044,
2345.101, 2514.137, 2687.699, 2865.347, 3046.659, 3231.24, 3418.723,
3608.769, 3801.072, 3995.354, 4191.367, 4388.89, 4587.726, 4787.702,
4988.667, 5190.488, 5393.049, 5596.249, 5800]
# CHOOSE THESE:
rho_pl = 1050. # density of plastic (kg m-3): 840, 920, 940, 1050, 1380 (last 2 are initially non-buoyant)
r_pl = 10.**(-4) # radius of plastic (m): ranges from 10 mm to 0.1 um or 10-2 to 10-7 m
# CONSTANTS
#rho_tot = rho_pl + rho_bf # total density of plastic (kg m-3)
g = 7.32e10#/(86400**2) # gravitational acceleration (m d-2), now [s-2]
k = 1.0306E-13#/(86400**2) # Boltzmann constant [m2 kg d-2 K-1] now [s-2] (=1.3804E-23)
D_n = 2*r_pl # equivalent spherical diameter [m]
# INITIAL CONDITIONS
A_0 = 0 # initial number of attached algae [no. m-2] - changes dynamically with Runge-Kutta integration
z = 0 # initial depth [m]
t = 0 # time? [days?]
""" Temperature, salinity and density profiles: rho_sw [kg m-3] """ # using a Hill function (Hill et al. 1910)
# ------Temperature [C] (Eq. 22)------
surf_z = 0
tot_z = -4000 # total depth of ocean
z_step = -1
T_surf = 25.
T_bot = 1.5
p = 2 # steepness temp decrease
z_c = -300 # depth of thermocline (m)
T_z = []
for z_i in range(len(nemo_z)): #range (surf_z,tot_z,z_step):
z = nemo_z[z_i]
T_z.append(T_surf + ((T_bot - T_surf) * (z**p/(z**p + z_c**p))))
#y = int(-tot_z+surf_z/-z_step)
depth = nemo_z #range(0,y)
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(nrows=1, ncols=5) #, sharex=True, sharey=True)
ax1.scatter(T_z,depth)
ax1.title.set_text('Temperature [C]')
ax1.set_ylabel('Depth [m]')
ax1.set_ylim(ax1.get_ylim()[::-1])
ax1.set_xlim([min(T_z), max(T_z)])
# ------Salinity [g kg-1] (Eq. 24)------
# Using North Pacific coefficients for salinity: used in paper (Table S4)
c1 = 9.9979979767E-17
c2 = 1.0536246487E-12
c3 = 3.9968286066E-09
c4 = 6.5411526250E-06
c5 = 4.1954014008E-03
c6 = 3.5172984035E+01
s_fix = 34.6 # [g kg-1] constant salinity from z > zfix
z_fix = -1000 # NOTE: depth must be negative for equation to work
#ind = (enumerate(nemo_z)*-1>z_fix, key=operator.itemgetter(1))
ind = nonzero(np.array(nemo_z)*-1>np.array(z_fix))
z_i = []
S_z_g = []
for z_i in range (size(ind)): #surf_z,z_fix, z_step):
z = nemo_z[z_i]*-1
S_z_g.append((c1*(z**5)) + (c2*(z**4)) + (c3*(z**3)) + (c4*(z**2)) + (c5*z) + c6)
# to add a linear fit from 1000 to 2000 m
idx = (np.array(nemo_z)<2000)*(np.array(nemo_z)>1000)
n = np.where(idx)
S_end = S_z_g[-1]
s_ = np.linspace(S_end,s_fix,size(n)+2)
# to add the fixed salinity below 2000m (in paper it is 1000 but in excel, linear interp to 2000m)
S_rest = size(depth)- size(s_) - size(S_z_g)+1
s = np.array([s_fix] * S_rest)
S_z_g = np.concatenate((S_z_g[0:-1],s_,s))
#y2 = int(-z_fix/-z_step)
#depth2 = range(0,y2)
ax2.scatter(S_z_g,depth) # depth2
ax2.title.set_text('Salinity [g kg-1]')
ax2.set_ylim(ax2.get_ylim()[::-1])
ax2.set_xlim([min(S_z_g), max(S_z_g)])
#------ Density profile [kg m-3] (Eq. 23)------
a1 = 9.999E2
a2 = 2.034E-2
a3 = -6.162E-3
a4 = 2.261E-5
a5 = -4.657E-8
b1 = 8.020E2
b2 = -2.001
b3 = 1.677E-2
b4 = -3.060E-5 #2.261E-5 # THIS WAS A TYPO IN TABLE S1 AND KOOI CODE
b5 = -4.657E-5
rho_sw = []
S_z = S_z_g/1000 # NOTE: salinity must be in kg/kg instead of g/kg for equation to work
for i in range(len(depth)):
rho_sw.append((a1 + (a2*T_z[i]) + (a3*(T_z[i]**2)) + (a4*(T_z[i]**3)) +
(a5*(T_z[i]**4))) + ((b1*S_z[i]) + (b2*S_z[i]*T_z[i]) +
(b3*S_z[i]*(T_z[i]**2)) + (b4*S_z[i]*(T_z[i]**3)) + (b5*(S_z[i]**2)*(T_z[i]**2))))
ax3.scatter(rho_sw,depth) # depth2
ax3.title.set_text('Density [kg m-3]')
ax3.set_ylim(ax3.get_ylim()[::-1])
ax3.set_xlim([min(rho_sw), max(rho_sw)])
mu_w = [] # dynamic viscosity [kg m-1 s-1]
A = []
B = []
mu_sw = []
upsilon_sw = []
""" Kinematic viscosity: upsilon_sw [m2 s-1]: Eq. 25 to 29"""
for ii in range(len(depth)):
mu_w.append(4.2844E-5 + (1/((0.157*(T_z[ii] + 64.993)**2)-91.296))) # kg m-1 s-1
A.append(1.541 + 1.998E-2*T_z[ii]- 9.52E-5*T_z[ii]**2) #
B.append(7.974 - 7.561E-2*T_z[ii] + 4.724E-4*T_z[ii]**2)
mu_sw.append(mu_w[ii]*(1+ A[ii]*S_z[ii] + B[ii]*S_z[ii]**2))
upsilon_sw.append(mu_sw[ii]/rho_sw[ii])
ax4.scatter(upsilon_sw,depth) # depth2
ax4.title.set_text('Kinematic viscosity [m2 s-1]')
ax4.set_ylim(ax4.get_ylim()[::-1])
ax4.set_xlim([min(upsilon_sw), max(upsilon_sw)])
ax4.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))
ax5.scatter(mu_sw,depth) # depth2
ax5.title.set_text('Seawater viscosity [m2 s-1]')
ax5.set_ylim(ax5.get_ylim()[::-1])
ax5.set_xlim([min(mu_sw), max(mu_sw)])
ax5.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))
plt.show()
# TO SAVE FIRST PROFILES IN SAME PICKLE
#with open('profiles.pickle', 'wb') as f:
# pickle.dump([depth,T_z,S_z,rho_sw,upsilon_sw,mu_sw], f)
""" Algal growth: Eq. 3 to 21"""
#------ Algae properties [Table S1]------
rho_bf = 1388. # density of biofilm (kg m-3)
V_A = 2.0E-16 # Volume of 1 algal cell [m-3]
R_A = 0.1 # respiration rate [d-1]
Q10 = 2. # temperature coefficient respiration
mu_max = 1.85#/86400 # maximum growth rate algae [d-1], now [s-1]
alpha = 0.12#/86400 # initial slope [d-1], now [s-1]
T_min = 0.2 # min temp algal growth [oC]
T_opt = 26.7 # optimal algal growth temp [oC]
T_max = 33.3 # max temp algal growth [oC]
I_opt = 1.75392E13#/86400 # optimal light intensity algal growth [uE m-2 d-1], now [s-1]
gamma = 1.728E5#/86400 # shear [d-1], now [s-1]
#------ Light profile [Table S1]: I tried different light extinction coefficients to check why the euphotic zone was so shall in Kooi------
I_m = 1.2E8#/86400 # Surface light intensity at noon [uE m-2 d-1], now [s-1]
k_w = 0.2 #0.1 #0.2 # extinction coefficient water [m-1]
k_p = 0.02 #0.12 # extinction coefficient algae [m-1 mg-1 chl L-1]
#------ Variables computed below using equations 3 to 21------
V_pl = [] # volume of plastic [m3]
theta_pl = [] # surface area of plastic particle [m2]
r_tot = [] # total radius of plastic + biofilm thickness
r_A = [] # radius of individual particle [m]
D_pl = [] # diffusivity of plastic particle [m2 s-1]
D_A = [] # diffusivity of algae [m2 s-1]
beta_Abrown = [] # Brownian motion [m3 s-1]
beta_Aset = [] # differential settling [m3 s-1]
beta_Ashear = [] # advective shear [m3 s-1]
deltaA = [] # attached algal growth
V_bf = [] # volume of biofilm [m3]
V_tot = [] # volume of total [m3]
t_bf = [] # biofilm thickness [m]
rho_tot = [] # total density [kg m-3]
Dstar = [] # dimensionless particle diameter
omega_star = [] # dimensionless settling velocity
epsilon = [] # light extinction coefficient [m-1]
I_0 = [] # light availability at the sea surface [uE m-2 s-1]
I_z = [] # light intensity at depth z [uE m-2 s-1]
phi = [] # temperature influence on algal growth
mu = [] # algal growth [s-1]
# ----------- Volumes -----------------
V_pl = (4/3)*math.pi*r_pl**3 # [m3]
theta_pl = 4*math.pi*r_pl**2 # surface area of plastic particle [m2]
V_bf = (V_A*A_0)*theta_pl # [m3]
V_tot = V_bf + V_pl # [m3]
t_bf = cube(V_tot*(3/(4*math.pi)))-r_pl # [m] #V_tot*(3/(4*math.pi))**(1/3) - r_pl
r_tot = r_pl + t_bf #[m]
rho_tot = ((r_pl**3) * rho_pl + ((r_pl + t_bf)**3 - (r_pl**3))*rho_bf)/((r_pl + t_bf)**3) # [kg m-3]
theta_tot = 4*math.pi*r_tot**2 #[m2]
#Dstar = ((rho_tot - rho_sw)*g*D_n**3)/(rho_sw*upsilon_sw**2)
#omega_star = 1.74E-4*Dstar**2
# --------------- Chlorophyll profile (same as Uitz et al. 2006), proxy for A_A (ambient algal concentrations [no. m-3]):
# using chl surf conc of 0.04-0.08 mg m-3 as default from table S2 -----------
Chla_Zbase = 0.151 # mg m-3
Cb = 0.533 # -
Cmax = 1.194 # -
zmax = 92.01 # m
deltaz = 43.46 # m
s2 = 1.72E-03 # m-1
euph_z = 120
C = np.zeros(len(depth))
depth2 = []
e_z2 = euph_z*2
d2 = np.array(depth)<e_z2
depth2 = np.array(depth)[d2]
C2 = []
for zz in range (len(depth2)):
z = depth2[zz]
C[zz] = ((Cb + Cmax*math.exp(-1*((z-zmax)/deltaz)**2)- s2*z)*Chla_Zbase)
C2.append((Cb + Cmax*math.exp(-1*((z-zmax)/deltaz)**2)- s2*z)*Chla_Zbase)
plt.figure(2)
plt.scatter(C,depth)
plt.xlabel('Kooi code Chla')
plt.ylabel('Depth [m]')
plt.ylabel('Depth [m]')
plt.gca().invert_yaxis()
#------------Profiles: light, carbon, ambient algae, algal growth (t,z): using 24 hours and 75 depth levels--------------
shift = 0 #0.5 #I_m/2 #0.5 # in the excel for Merel's test of changing num of hrs of light in a day
days = 1 #3
hours = 24 # mins = 24*60*days
time = np.linspace(0,days,hours) #mins)
# Defining the euphotic layer depth (1% of surf irradiance at midday)
epsilon = k_w + (k_p*0.05) # Merel only uses surface concentration Chla
Iz_max = []
for zz in range (len(depth)):
z = depth[zz]
Iz_max.append(I_m*math.exp(-1*epsilon*z))
idx = nonzero(np.array(Iz_max)<I_m*0.01)
euph_z_true = depth[idx[0][0]]
z =[]
zz = []
t = []
tt = []
I_0 = []
#I_z_t = []
#Carbon_t = []
#A_A_t = []
#mu_A_t = []
Carbon = np.zeros((len(time),len(depth)))
A_A = np.zeros((len(time),len(depth)))
phi = np.zeros((len(time),len(depth)))
mu_opt = np.zeros((len(time),len(depth)))
mu_A = np.zeros((len(time),len(depth)))
I_z = np.zeros((len(time),len(depth)))
for tt in range (0,hours): #mins):
t = time[tt]
e2 = 2*math.pi*t
if I_m*(math.sin(e2)+shift)<0:
I_0.append(0)
else:
I_0.append(I_m*(math.sin(2*math.pi*t)+shift))
for zz in range (len(depth2)): #0,euph_z[0]*2): #(len(depth)):
#Chla_int.append(np.trapz(C[0:z])) # np.cumsum(C) #
#epsilon.append(k_w + (k_p*Chla_int[z])) #C[z]))
z = depth2[zz]
I_z[tt,zz] = (I_0[tt]*math.exp(-1*epsilon*z))
Carbon[tt,zz] = (C[zz]/(0.003+1.0154*math.exp(0.050*T_z[zz])*math.exp(-0.059*I_z[tt,zz]/1E6)))
if Carbon[tt,zz]/(2726*1E-9)<0:
A_A[tt,zz] = 0
else:
A_A[tt,zz] = (Carbon[tt,zz]/(2726*1E-9))
phi[tt,zz] = (((T_z[zz] - T_max)*(T_z[zz] - T_min)**2)/((T_opt-T_min)*((T_opt-T_min)*
(T_z[zz]-T_opt)-(T_opt-T_max)*(T_opt+T_min-(2*T_z[zz])))))
if T_z[zz]<T_min:
mu_maxT = 0
elif T_z[zz]>T_max:
mu_maxT = 0
else:
mu_maxT = mu_max
#
if z > euph_z_true: #Iz_max[zz]<I_m*0.01: # since euphotic zone is defined as depth of 1% of midday surf light
mu_opt[tt,zz] = 0
else:
mu_opt[tt,zz] = (mu_maxT*(I_z[tt,zz]/(I_z[tt,zz] + (mu_maxT/alpha)*((I_z[tt,zz]/I_opt)-1)**2)))
mu_A[tt,zz] = (mu_opt[tt,zz]*phi[tt,zz])
plt.figure(4)
plt.scatter(I_z[tt,:],depth)
plt.figure(5)
plt.scatter(Carbon[tt,:],depth)
plt.figure(6)
plt.scatter(A_A[tt,:],depth)
plt.figure(7)
plt.scatter(mu_A[tt,:]/86400,depth)
# a2.scatter(I_z[tt,0:len(depth2)],depth2)
# a3.scatter(A_A[tt,0:len(depth2)],depth2)
# a4.scatter(mu_A[tt,0:len(depth2)],depth2)
#
#I_z_t.append(I_z)
# Carbon_t.append(Carbon)
# A_A_t.append(A_A)
# mu_A_t.append(mu_A)
# I_z_t2[tt,] = I_z
plt.figure(3)
plt.scatter(time,I_0)
plt.xlabel('Time [days]')
plt.ylabel('Light availability [uE m-2 min-1]') # double check units
plt.title('Light availability at the surface over 1 day [uE m-2 d-1]')
plt.figure(4)
plt.xlabel('Light availability [uE m-2 d-1]')
plt.ylabel('Depth [m]')
plt.title('Light availability for every surface I_0 [uE m-2 d-1]')
plt.gca().invert_yaxis()
plt.figure(5)
plt.xlabel('Carbon [mg C m-3]')
plt.ylabel('Depth [m]')
plt.title('Carbon')
plt.gca().invert_yaxis()
plt.figure(6)
plt.xlabel('Ambient algae [no. m-3]')
plt.ylabel('Depth [m]')
plt.title('Ambient algae')
plt.gca().invert_yaxis()
plt.figure(7)
#plt.scatter(mu_A,depth)
plt.xlabel('Algal growth [s-1]') #[d-1]
plt.ylabel('Depth [m]')
plt.title('Algal growth')
plt.gca().invert_yaxis()
A_A_t = A_A
mu_A_t = mu_A
#fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=1, ncols=4) #, sharex=True, sharey=True)
#C2 = np.array(C)[d2]
#a1.scatter(list(C2),list(depth2))
#ax1.scatter(T_z,depth)
#ax1.title.set_text('Temperature [C]')
#ax1.set_ylabel('Depth [m]')
#ax1.set_ylim(ax1.get_ylim()[::-1])
#ax1.set_xlim([min(T_z), max(T_z)])
#with open('profiles_t.pickle', 'wb') as p:
# pickle.dump([depth,time,A_A_t,mu_A_t], p)
####### STOPPED FROM HERE ON SINCE REALISED THAT THE TIME DERVIATIVE WILL BE EASIER TO COMPUTE FROM WITHIN PARCELS
| en | 0.555202 | #!/usr/bin/env python3 # -*- coding: utf-8 -*- Created on Fri Jan 10 12:10:22 2020 @author: Lobel001 # V3 COMBINES ALL PROFILES THAT ARE TIME DEPENDENT INTO ONE LOOP TO SAVE AS PICKLE #------------ Kooi et al. 2017 kernals -------------- ## TOTAL EQUATION FOR VERTICAL VELOCITY: Vs = -(((rho_tot - rho_sw)/rho_sw) * g * omega * upsilon_sw)**(1/3) #from IPython import get_ipython #get_ipython().magic('reset -sf') #%matplotlib qt #import pandas as pd #import operator #import scipy.linalg #import scipy.integrate as integrate #from math import e # depth profile used in NEMO [metres] # CHOOSE THESE: # density of plastic (kg m-3): 840, 920, 940, 1050, 1380 (last 2 are initially non-buoyant) # radius of plastic (m): ranges from 10 mm to 0.1 um or 10-2 to 10-7 m # CONSTANTS #rho_tot = rho_pl + rho_bf # total density of plastic (kg m-3) #/(86400**2) # gravitational acceleration (m d-2), now [s-2] #/(86400**2) # Boltzmann constant [m2 kg d-2 K-1] now [s-2] (=1.3804E-23) # equivalent spherical diameter [m] # INITIAL CONDITIONS # initial number of attached algae [no. m-2] - changes dynamically with Runge-Kutta integration # initial depth [m] # time? [days?] Temperature, salinity and density profiles: rho_sw [kg m-3] # using a Hill function (Hill et al. 1910) # ------Temperature [C] (Eq. 22)------ # total depth of ocean # steepness temp decrease # depth of thermocline (m) #range (surf_z,tot_z,z_step): #y = int(-tot_z+surf_z/-z_step) #range(0,y) #, sharex=True, sharey=True) # ------Salinity [g kg-1] (Eq. 24)------ # Using North Pacific coefficients for salinity: used in paper (Table S4) # [g kg-1] constant salinity from z > zfix # NOTE: depth must be negative for equation to work #ind = (enumerate(nemo_z)*-1>z_fix, key=operator.itemgetter(1)) #surf_z,z_fix, z_step): # to add a linear fit from 1000 to 2000 m # to add the fixed salinity below 2000m (in paper it is 1000 but in excel, linear interp to 2000m) #y2 = int(-z_fix/-z_step) #depth2 = range(0,y2) # depth2 #------ Density profile [kg m-3] (Eq. 23)------ #2.261E-5 # THIS WAS A TYPO IN TABLE S1 AND KOOI CODE # NOTE: salinity must be in kg/kg instead of g/kg for equation to work # depth2 # dynamic viscosity [kg m-1 s-1] Kinematic viscosity: upsilon_sw [m2 s-1]: Eq. 25 to 29 # kg m-1 s-1 # # depth2 # depth2 # TO SAVE FIRST PROFILES IN SAME PICKLE #with open('profiles.pickle', 'wb') as f: # pickle.dump([depth,T_z,S_z,rho_sw,upsilon_sw,mu_sw], f) Algal growth: Eq. 3 to 21 #------ Algae properties [Table S1]------ # density of biofilm (kg m-3) # Volume of 1 algal cell [m-3] # respiration rate [d-1] # temperature coefficient respiration #/86400 # maximum growth rate algae [d-1], now [s-1] #/86400 # initial slope [d-1], now [s-1] # min temp algal growth [oC] # optimal algal growth temp [oC] # max temp algal growth [oC] #/86400 # optimal light intensity algal growth [uE m-2 d-1], now [s-1] #/86400 # shear [d-1], now [s-1] #------ Light profile [Table S1]: I tried different light extinction coefficients to check why the euphotic zone was so shall in Kooi------ #/86400 # Surface light intensity at noon [uE m-2 d-1], now [s-1] #0.1 #0.2 # extinction coefficient water [m-1] #0.12 # extinction coefficient algae [m-1 mg-1 chl L-1] #------ Variables computed below using equations 3 to 21------ # volume of plastic [m3] # surface area of plastic particle [m2] # total radius of plastic + biofilm thickness # radius of individual particle [m] # diffusivity of plastic particle [m2 s-1] # diffusivity of algae [m2 s-1] # Brownian motion [m3 s-1] # differential settling [m3 s-1] # advective shear [m3 s-1] # attached algal growth # volume of biofilm [m3] # volume of total [m3] # biofilm thickness [m] # total density [kg m-3] # dimensionless particle diameter # dimensionless settling velocity # light extinction coefficient [m-1] # light availability at the sea surface [uE m-2 s-1] # light intensity at depth z [uE m-2 s-1] # temperature influence on algal growth # algal growth [s-1] # ----------- Volumes ----------------- # [m3] # surface area of plastic particle [m2] # [m3] # [m3] # [m] #V_tot*(3/(4*math.pi))**(1/3) - r_pl #[m] # [kg m-3] #[m2] #Dstar = ((rho_tot - rho_sw)*g*D_n**3)/(rho_sw*upsilon_sw**2) #omega_star = 1.74E-4*Dstar**2 # --------------- Chlorophyll profile (same as Uitz et al. 2006), proxy for A_A (ambient algal concentrations [no. m-3]): # using chl surf conc of 0.04-0.08 mg m-3 as default from table S2 ----------- # mg m-3 # - # - # m # m # m-1 #------------Profiles: light, carbon, ambient algae, algal growth (t,z): using 24 hours and 75 depth levels-------------- #0.5 #I_m/2 #0.5 # in the excel for Merel's test of changing num of hrs of light in a day #3 # mins = 24*60*days #mins) # Defining the euphotic layer depth (1% of surf irradiance at midday) # Merel only uses surface concentration Chla #I_z_t = [] #Carbon_t = [] #A_A_t = [] #mu_A_t = [] #mins): #0,euph_z[0]*2): #(len(depth)): #Chla_int.append(np.trapz(C[0:z])) # np.cumsum(C) # #epsilon.append(k_w + (k_p*Chla_int[z])) #C[z])) # #Iz_max[zz]<I_m*0.01: # since euphotic zone is defined as depth of 1% of midday surf light # a2.scatter(I_z[tt,0:len(depth2)],depth2) # a3.scatter(A_A[tt,0:len(depth2)],depth2) # a4.scatter(mu_A[tt,0:len(depth2)],depth2) # #I_z_t.append(I_z) # Carbon_t.append(Carbon) # A_A_t.append(A_A) # mu_A_t.append(mu_A) # I_z_t2[tt,] = I_z # double check units #plt.scatter(mu_A,depth) #[d-1] #fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=1, ncols=4) #, sharex=True, sharey=True) #C2 = np.array(C)[d2] #a1.scatter(list(C2),list(depth2)) #ax1.scatter(T_z,depth) #ax1.title.set_text('Temperature [C]') #ax1.set_ylabel('Depth [m]') #ax1.set_ylim(ax1.get_ylim()[::-1]) #ax1.set_xlim([min(T_z), max(T_z)]) #with open('profiles_t.pickle', 'wb') as p: # pickle.dump([depth,time,A_A_t,mu_A_t], p) ####### STOPPED FROM HERE ON SINCE REALISED THAT THE TIME DERVIATIVE WILL BE EASIER TO COMPUTE FROM WITHIN PARCELS | 2.044054 | 2 |
fp1/hardware/sdaccel_design/lib/scripts/create_sdaccel_metadata.py | fpga-accel/ctyun-fpga | 0 | 6615317 | #!/usr/bin/env python
#
#-------------------------------------------------------------------------------
# Copyright 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the Huawei Software License (the "License").
# A copy of the License is located in the "LICENSE" file accompanying
# this file.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Huawei Software License for more details.
#-------------------------------------------------------------------------------
'''create ocl metadata.json'''
import json
import xml.dom.minidom
import os,sys,re
import time
import hashlib
#*********************************
# Parse the xclbin file
#*********************************
xclbin_path=os.path.abspath(sys.argv[1])
xclbin_name=os.path.basename(os.path.splitext(xclbin_path)[0])
dcp_path=os.path.abspath(sys.argv[2])
bitstream=open(dcp_path,"rb").read()
dcp_sha256=hashlib.sha256(bitstream).hexdigest()
json_path=os.path.abspath(os.path.join(os.path.dirname(xclbin_path),"../bin/manifest.json"))
if os.path.exists(json_path):
os.remove(json_path)
#read xclbin get the xml index
f=open(xclbin_path,"rb")
bit_len=f.read().index("<?xml")
f.seek(bit_len,0)
xml_len=f.read().index("</project>")
f.seek(bit_len,0)
xml_len +=10
xml_stream=f.read(xml_len)
f.close()
#*********************************
# get xclbin_timestamp and clock_freq and clock_freqone
#*********************************
try:
dom = xml.dom.minidom.parseString(xml_stream)
item=dom.getElementsByTagName('platform')[0]
xclbin_timestamp=item.getAttribute('featureRomTime')
itemlist=dom.getElementsByTagName('clock')
except:
print "ERROR:get xclbin_timestamp and clock_freq fail"
sys.exit(1)
for x in itemlist:
#print x.getAttribute('port')
if x.getAttribute('port')=="DATA_CLK":
clockFreq=x.getAttribute('frequency')
if x.getAttribute('port')=="KERNEL_CLK":
clockFreqone=x.getAttribute('frequency')
clock_split=re.compile(r'([0-9]\d*)([a-zA-Z]*)').findall(clockFreq)
clock_freq=str(int(clock_split[0][0]))
clock_splitone=re.compile(r'([0-9]\d*)([a-zA-Z]*)').findall(clockFreqone)
clock_freqone=str(int(clock_splitone[0][0]))
file_time=os.path.getctime(xclbin_path)
create_time=time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(file_time))
#**********************************
#get shell id and hdk_version
#**********************************
version_path=os.path.abspath(os.path.join(sys.path[0],"../../../"))
with open(os.path.join(version_path,"version_note_sdaccel.txt")) as f:
f.seek(5,0)
file_content=f.read()
shell_id=re.compile(r'([0-9a-fA-F]\w*)').findall(file_content)[0]
with open(os.path.join(version_path,"version_hdk_tag.txt")) as f:
hdk_version=f.read().strip()
#**********************************
# metadata json
#**********************************
metadata={}
metadata['version']="2.0"
metadata['fpga_board']="vu9p_vb"
metadata['fpga_vendor']="xilinx"
metadata['fpga_chip']="vu9p"
metadata['bitstream_format']="bin"
metadata['flow_type']="sdaccel"
metadata['xclbin_timestamp']=xclbin_timestamp
metadata['clock_freq']=clock_freq
metadata['clock_freq2']=clock_freqone
metadata['shell_id']="0x"+str(shell_id)
metadata['pci_vendor_id']="0x19e5"
metadata['pci_device_id']="0xD512"
metadata['pci_subsystem_id']="0x4341"
metadata['pci_subsystem_vendor_id']="0x10ee"
metadata['tool_version']="2017.4.op"
metadata['hdk_version']=hdk_version
metadata['create_time']=create_time
metadata['dcp_sha256']=dcp_sha256
meta=json.dumps(metadata)
with open(json_path,"w") as f:
f.write(meta)
| #!/usr/bin/env python
#
#-------------------------------------------------------------------------------
# Copyright 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the Huawei Software License (the "License").
# A copy of the License is located in the "LICENSE" file accompanying
# this file.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Huawei Software License for more details.
#-------------------------------------------------------------------------------
'''create ocl metadata.json'''
import json
import xml.dom.minidom
import os,sys,re
import time
import hashlib
#*********************************
# Parse the xclbin file
#*********************************
xclbin_path=os.path.abspath(sys.argv[1])
xclbin_name=os.path.basename(os.path.splitext(xclbin_path)[0])
dcp_path=os.path.abspath(sys.argv[2])
bitstream=open(dcp_path,"rb").read()
dcp_sha256=hashlib.sha256(bitstream).hexdigest()
json_path=os.path.abspath(os.path.join(os.path.dirname(xclbin_path),"../bin/manifest.json"))
if os.path.exists(json_path):
os.remove(json_path)
#read xclbin get the xml index
f=open(xclbin_path,"rb")
bit_len=f.read().index("<?xml")
f.seek(bit_len,0)
xml_len=f.read().index("</project>")
f.seek(bit_len,0)
xml_len +=10
xml_stream=f.read(xml_len)
f.close()
#*********************************
# get xclbin_timestamp and clock_freq and clock_freqone
#*********************************
try:
dom = xml.dom.minidom.parseString(xml_stream)
item=dom.getElementsByTagName('platform')[0]
xclbin_timestamp=item.getAttribute('featureRomTime')
itemlist=dom.getElementsByTagName('clock')
except:
print "ERROR:get xclbin_timestamp and clock_freq fail"
sys.exit(1)
for x in itemlist:
#print x.getAttribute('port')
if x.getAttribute('port')=="DATA_CLK":
clockFreq=x.getAttribute('frequency')
if x.getAttribute('port')=="KERNEL_CLK":
clockFreqone=x.getAttribute('frequency')
clock_split=re.compile(r'([0-9]\d*)([a-zA-Z]*)').findall(clockFreq)
clock_freq=str(int(clock_split[0][0]))
clock_splitone=re.compile(r'([0-9]\d*)([a-zA-Z]*)').findall(clockFreqone)
clock_freqone=str(int(clock_splitone[0][0]))
file_time=os.path.getctime(xclbin_path)
create_time=time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(file_time))
#**********************************
#get shell id and hdk_version
#**********************************
version_path=os.path.abspath(os.path.join(sys.path[0],"../../../"))
with open(os.path.join(version_path,"version_note_sdaccel.txt")) as f:
f.seek(5,0)
file_content=f.read()
shell_id=re.compile(r'([0-9a-fA-F]\w*)').findall(file_content)[0]
with open(os.path.join(version_path,"version_hdk_tag.txt")) as f:
hdk_version=f.read().strip()
#**********************************
# metadata json
#**********************************
metadata={}
metadata['version']="2.0"
metadata['fpga_board']="vu9p_vb"
metadata['fpga_vendor']="xilinx"
metadata['fpga_chip']="vu9p"
metadata['bitstream_format']="bin"
metadata['flow_type']="sdaccel"
metadata['xclbin_timestamp']=xclbin_timestamp
metadata['clock_freq']=clock_freq
metadata['clock_freq2']=clock_freqone
metadata['shell_id']="0x"+str(shell_id)
metadata['pci_vendor_id']="0x19e5"
metadata['pci_device_id']="0xD512"
metadata['pci_subsystem_id']="0x4341"
metadata['pci_subsystem_vendor_id']="0x10ee"
metadata['tool_version']="2017.4.op"
metadata['hdk_version']=hdk_version
metadata['create_time']=create_time
metadata['dcp_sha256']=dcp_sha256
meta=json.dumps(metadata)
with open(json_path,"w") as f:
f.write(meta)
| en | 0.498374 | #!/usr/bin/env python # #------------------------------------------------------------------------------- # Copyright 2018 Huawei Technologies Co., Ltd. All Rights Reserved. # # This program is free software; you can redistribute it and/or modify # it under the terms of the Huawei Software License (the "License"). # A copy of the License is located in the "LICENSE" file accompanying # this file. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # Huawei Software License for more details. #------------------------------------------------------------------------------- create ocl metadata.json #********************************* # Parse the xclbin file #********************************* #read xclbin get the xml index #********************************* # get xclbin_timestamp and clock_freq and clock_freqone #********************************* #print x.getAttribute('port') #********************************** #get shell id and hdk_version #********************************** #********************************** # metadata json #********************************** | 1.87869 | 2 |
298/test_fasta_to_2line_fasta.py | jsh/pybites | 0 | 6615318 | from fasta_to_2line_fasta import fasta_to_2line_fasta, FASTA_FILE
EXPECTED_RECORDS = 59
def test_well_formed_fasta():
"""
Test if output is correct with well-formed input.
"""
CONVERTED_FASTA = f"{FASTA_FILE}-test.fasta"
assert fasta_to_2line_fasta(FASTA_FILE, CONVERTED_FASTA) == EXPECTED_RECORDS
with open(FASTA_FILE, "r") as f:
f.readline()
assert (
f.readline().strip()
== "MNLLSIQPLNRIAIQFGPLTVYWYGIIIGIGILLGLILATREGKKLQVPSNTFTDLVLYA"
)
with open(CONVERTED_FASTA, "r") as f_conv:
f_conv.readline()
assert (
f_conv.readline().strip()
== "MNLLSIQPLNRIAIQFGPLTVYWYGIIIGIGILLGLILATREGKKLQVPSNTFTDLVLYA"
"LPISILSARIYYVLFEWAYYKNHLNEIFAIWNGGIAIHGGLIGAIVTTIVFTKKRNISF"
"WKLADIAAPSLILGQAIGRWGNFMNQEAHGGPVSRTFLESLRLPDIIINQMYINGSYYH"
"PTFLYESIWNIIGFVTLLILRKGSLKRGEIFLSYLIWYSIGRFFVEGLRTDSLMLTSSL"
"RMAQVMSISLIIISLLLMIYRRRKGLATTRYNELNNNALE"
)
def test_malformed_fasta():
"""
Test if output is correct with mal-formed input.
"""
MALFORMED_FASTA = f"{FASTA_FILE}.malformed.fasta"
CONVERTED_FASTA = f"{FASTA_FILE}.malformed-test.fasta"
with open(FASTA_FILE, "r") as f_in, open(MALFORMED_FASTA, "w") as f_out:
f_out.write(f_in.read()[1:])
assert (
fasta_to_2line_fasta(MALFORMED_FASTA, CONVERTED_FASTA) == EXPECTED_RECORDS - 1
)
with open(CONVERTED_FASTA, "r") as f_conv:
assert (
f_conv.readline().strip()
== ">sp|Q74NT6|ARSC1_BACC1 Arsenate reductase 1 OS=Bacillus cereu"
"s (strain ATCC 10987 / NRS 248) OX=222523 GN=arsC1 PE=3 SV=1"
)
assert (
f_conv.readline().strip()
== "MENKKTIYFLCTGNSCRSQMAEAWGKKYLGDKWNVLSAGIEAHGVNPNAIKAMKEVDIDIT"
"DQTSDIIDRDILDKADLVVTLCGHANDVCPTTPPHVKRVHWGFDDPAGQEWSVFQRVRDE"
"IGARIKKYAETGE"
) | from fasta_to_2line_fasta import fasta_to_2line_fasta, FASTA_FILE
EXPECTED_RECORDS = 59
def test_well_formed_fasta():
"""
Test if output is correct with well-formed input.
"""
CONVERTED_FASTA = f"{FASTA_FILE}-test.fasta"
assert fasta_to_2line_fasta(FASTA_FILE, CONVERTED_FASTA) == EXPECTED_RECORDS
with open(FASTA_FILE, "r") as f:
f.readline()
assert (
f.readline().strip()
== "MNLLSIQPLNRIAIQFGPLTVYWYGIIIGIGILLGLILATREGKKLQVPSNTFTDLVLYA"
)
with open(CONVERTED_FASTA, "r") as f_conv:
f_conv.readline()
assert (
f_conv.readline().strip()
== "MNLLSIQPLNRIAIQFGPLTVYWYGIIIGIGILLGLILATREGKKLQVPSNTFTDLVLYA"
"LPISILSARIYYVLFEWAYYKNHLNEIFAIWNGGIAIHGGLIGAIVTTIVFTKKRNISF"
"WKLADIAAPSLILGQAIGRWGNFMNQEAHGGPVSRTFLESLRLPDIIINQMYINGSYYH"
"PTFLYESIWNIIGFVTLLILRKGSLKRGEIFLSYLIWYSIGRFFVEGLRTDSLMLTSSL"
"RMAQVMSISLIIISLLLMIYRRRKGLATTRYNELNNNALE"
)
def test_malformed_fasta():
"""
Test if output is correct with mal-formed input.
"""
MALFORMED_FASTA = f"{FASTA_FILE}.malformed.fasta"
CONVERTED_FASTA = f"{FASTA_FILE}.malformed-test.fasta"
with open(FASTA_FILE, "r") as f_in, open(MALFORMED_FASTA, "w") as f_out:
f_out.write(f_in.read()[1:])
assert (
fasta_to_2line_fasta(MALFORMED_FASTA, CONVERTED_FASTA) == EXPECTED_RECORDS - 1
)
with open(CONVERTED_FASTA, "r") as f_conv:
assert (
f_conv.readline().strip()
== ">sp|Q74NT6|ARSC1_BACC1 Arsenate reductase 1 OS=Bacillus cereu"
"s (strain ATCC 10987 / NRS 248) OX=222523 GN=arsC1 PE=3 SV=1"
)
assert (
f_conv.readline().strip()
== "MENKKTIYFLCTGNSCRSQMAEAWGKKYLGDKWNVLSAGIEAHGVNPNAIKAMKEVDIDIT"
"DQTSDIIDRDILDKADLVVTLCGHANDVCPTTPPHVKRVHWGFDDPAGQEWSVFQRVRDE"
"IGARIKKYAETGE"
) | en | 0.653317 | Test if output is correct with well-formed input. Test if output is correct with mal-formed input. | 3.179349 | 3 |
osmaxx/profile/forms.py | tyrasd/osmaxx | 27 | 6615319 | <gh_stars>10-100
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Submit, Layout
from django import forms
from django.utils.translation import ugettext_lazy as _
from osmaxx.profile.models import Profile
class ProfileForm(forms.ModelForm):
unverified_email = forms.EmailField(max_length=200, required=True, label=_('email address'))
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = False
self.helper.add_input(Submit('submit', 'Submit'))
self.helper.layout = Layout(
'unverified_email',
)
class Meta:
model = Profile
fields = ['unverified_email']
| from crispy_forms.helper import FormHelper
from crispy_forms.layout import Submit, Layout
from django import forms
from django.utils.translation import ugettext_lazy as _
from osmaxx.profile.models import Profile
class ProfileForm(forms.ModelForm):
unverified_email = forms.EmailField(max_length=200, required=True, label=_('email address'))
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = False
self.helper.add_input(Submit('submit', 'Submit'))
self.helper.layout = Layout(
'unverified_email',
)
class Meta:
model = Profile
fields = ['unverified_email'] | none | 1 | 2.289199 | 2 | |
cyberbattle/samples/active_directory/tiny_ad.py | ghost-pep/CyberBattleSim | 0 | 6615320 | <gh_stars>0
from cyberbattle.simulation.model import FirewallConfiguration, FirewallRule, RulePermission
from cyberbattle.simulation import model as m
from typing import Dict
firewall_conf = FirewallConfiguration(
[FirewallRule("SMB", RulePermission.ALLOW), FirewallRule("AD", RulePermission.ALLOW), FirewallRule("SHELL", RulePermission.ALLOW)],
[FirewallRule("SMB", RulePermission.ALLOW), FirewallRule("AD", RulePermission.ALLOW), FirewallRule("SHELL", RulePermission.ALLOW)])
def default_vulnerabilities() -> m.VulnerabilityLibrary:
lib = {}
lib['FindDomainControllers'] = m.VulnerabilityInfo(
description="Search for valid domain controllers in the current machines environment.",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedNodesId(nodes=["domain_controller_1"]),
reward_string="Found domain controllers"
)
lib['EnumerateFileShares'] = m.VulnerabilityInfo(
description="Find all SMB shares this machine knows about.",
type=m.VulnerabilityType.LOCAL,
outcome=m.ExploitFailed(),
reward_string="Found shares"
)
lib["ProbeAdmin"] = m.VulnerabilityInfo(
description="Probe a workstation to see if you have admin creds on it",
type=m.VulnerabilityType.LOCAL,
outcome=m.ProbeFailed(),
reward_string="No admin creds."
)
lib['ScanForCreds'] = m.VulnerabilityInfo(
description="Scan the local security managers for credentials. Need to be admin on the box.",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node="domain_controller_1", port="AD", credential="dc_1")]),
precondition=m.Precondition("admin"),
reward_string="DA credentials found"
)
return lib
def breach_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib['AuthorizationSpoofAndCrack'] = m.VulnerabilityInfo(
description="Spoof an authoritative source on the network to get a crackable hash, then try to crack it",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node="workstation_1", port="SHELL", credential="user_1")])
)
return lib
def admin_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib["ProbeAdmin"] = m.VulnerabilityInfo(
description="Probe a workstation to see if you have admin creds on it",
type=m.VulnerabilityType.LOCAL,
outcome=m.ProbeSucceeded(discovered_properties=["admin"]),
reward_string="Admin creds verified."
)
return lib
def dc_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib['DumpNTDS'] = m.VulnerabilityInfo(
description="Dump the NTDS file from AD",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node=f"workstation_{wid}", port='SHELL', credential=f'user_{uid}') for wid in range(0, 1) for uid in range(0, 20)]),
precondition=m.Precondition("domain_controller"),
reward_string="Dumped all user hashes. Get crackin'"
)
return lib
nodes = {
"domain_controller_1": m.NodeInfo(services=[m.ListeningService(name="AD", allowedCredentials=["dc_1"])],
properties=["domain_controller"],
value=100,
firewall=firewall_conf,
vulnerabilities=dc_vulnerabilities(default_vulnerabilities())),
"workstation_0": m.NodeInfo(services=[m.ListeningService(name="SHELL", allowedCredentials=[f"user_{uid}" for uid in range(0, 20)])],
value=0,
properties=["breach_node"],
vulnerabilities=breach_vulnerabilities(default_vulnerabilities()),
agent_installed=True,
firewall=firewall_conf,
reimagable=False),
"workstation_1": m.NodeInfo(services=[m.ListeningService(name="SHELL", allowedCredentials=[f"user_{uid}" for uid in range(0, 20)])],
properties=["admin"],
value=1,
firewall=firewall_conf,
vulnerabilities=admin_vulnerabilities(default_vulnerabilities()))
}
global_vulnerability_library: Dict[m.VulnerabilityID, m.VulnerabilityInfo] = dict([])
# Environment constants
ENV_IDENTIFIERS = m.Identifiers(
properties=[
'breach_node',
'domain_controller',
"admin" # whether or not the users of this machine are admins
],
ports=['SMB', 'AD', 'SHELL'],
local_vulnerabilities=[
'FindDomainControllers',
'EnumerateFileShares',
'AuthorizationSpoofAndCrack',
'ScanForCreds',
'DumpNTDS',
'ProbeAdmin'
],
remote_vulnerabilities=[
'PasswordSpray'
]
)
def new_environment() -> m.Environment:
return m.Environment(
network=m.create_network(nodes),
vulnerability_library=global_vulnerability_library,
identifiers=ENV_IDENTIFIERS
)
| from cyberbattle.simulation.model import FirewallConfiguration, FirewallRule, RulePermission
from cyberbattle.simulation import model as m
from typing import Dict
firewall_conf = FirewallConfiguration(
[FirewallRule("SMB", RulePermission.ALLOW), FirewallRule("AD", RulePermission.ALLOW), FirewallRule("SHELL", RulePermission.ALLOW)],
[FirewallRule("SMB", RulePermission.ALLOW), FirewallRule("AD", RulePermission.ALLOW), FirewallRule("SHELL", RulePermission.ALLOW)])
def default_vulnerabilities() -> m.VulnerabilityLibrary:
lib = {}
lib['FindDomainControllers'] = m.VulnerabilityInfo(
description="Search for valid domain controllers in the current machines environment.",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedNodesId(nodes=["domain_controller_1"]),
reward_string="Found domain controllers"
)
lib['EnumerateFileShares'] = m.VulnerabilityInfo(
description="Find all SMB shares this machine knows about.",
type=m.VulnerabilityType.LOCAL,
outcome=m.ExploitFailed(),
reward_string="Found shares"
)
lib["ProbeAdmin"] = m.VulnerabilityInfo(
description="Probe a workstation to see if you have admin creds on it",
type=m.VulnerabilityType.LOCAL,
outcome=m.ProbeFailed(),
reward_string="No admin creds."
)
lib['ScanForCreds'] = m.VulnerabilityInfo(
description="Scan the local security managers for credentials. Need to be admin on the box.",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node="domain_controller_1", port="AD", credential="dc_1")]),
precondition=m.Precondition("admin"),
reward_string="DA credentials found"
)
return lib
def breach_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib['AuthorizationSpoofAndCrack'] = m.VulnerabilityInfo(
description="Spoof an authoritative source on the network to get a crackable hash, then try to crack it",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node="workstation_1", port="SHELL", credential="user_1")])
)
return lib
def admin_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib["ProbeAdmin"] = m.VulnerabilityInfo(
description="Probe a workstation to see if you have admin creds on it",
type=m.VulnerabilityType.LOCAL,
outcome=m.ProbeSucceeded(discovered_properties=["admin"]),
reward_string="Admin creds verified."
)
return lib
def dc_vulnerabilities(lib: m.VulnerabilityLibrary) -> m.VulnerabilityLibrary:
lib['DumpNTDS'] = m.VulnerabilityInfo(
description="Dump the NTDS file from AD",
type=m.VulnerabilityType.LOCAL,
outcome=m.LeakedCredentials(credentials=[m.CachedCredential(node=f"workstation_{wid}", port='SHELL', credential=f'user_{uid}') for wid in range(0, 1) for uid in range(0, 20)]),
precondition=m.Precondition("domain_controller"),
reward_string="Dumped all user hashes. Get crackin'"
)
return lib
nodes = {
"domain_controller_1": m.NodeInfo(services=[m.ListeningService(name="AD", allowedCredentials=["dc_1"])],
properties=["domain_controller"],
value=100,
firewall=firewall_conf,
vulnerabilities=dc_vulnerabilities(default_vulnerabilities())),
"workstation_0": m.NodeInfo(services=[m.ListeningService(name="SHELL", allowedCredentials=[f"user_{uid}" for uid in range(0, 20)])],
value=0,
properties=["breach_node"],
vulnerabilities=breach_vulnerabilities(default_vulnerabilities()),
agent_installed=True,
firewall=firewall_conf,
reimagable=False),
"workstation_1": m.NodeInfo(services=[m.ListeningService(name="SHELL", allowedCredentials=[f"user_{uid}" for uid in range(0, 20)])],
properties=["admin"],
value=1,
firewall=firewall_conf,
vulnerabilities=admin_vulnerabilities(default_vulnerabilities()))
}
global_vulnerability_library: Dict[m.VulnerabilityID, m.VulnerabilityInfo] = dict([])
# Environment constants
ENV_IDENTIFIERS = m.Identifiers(
properties=[
'breach_node',
'domain_controller',
"admin" # whether or not the users of this machine are admins
],
ports=['SMB', 'AD', 'SHELL'],
local_vulnerabilities=[
'FindDomainControllers',
'EnumerateFileShares',
'AuthorizationSpoofAndCrack',
'ScanForCreds',
'DumpNTDS',
'ProbeAdmin'
],
remote_vulnerabilities=[
'PasswordSpray'
]
)
def new_environment() -> m.Environment:
return m.Environment(
network=m.create_network(nodes),
vulnerability_library=global_vulnerability_library,
identifiers=ENV_IDENTIFIERS
) | en | 0.713872 | # Environment constants # whether or not the users of this machine are admins | 2.248941 | 2 |
util/utils.py | KerimovEmil/ProjectEuler | 1 | 6615321 | import numpy as np
import time
from itertools import accumulate
from functools import lru_cache, reduce
from math import gcd
from typing import List, Union, Dict, Generator, Optional
class Hungarian:
"""
Implementation of the Hungarian (Munkres) Algorithm using np.
Usage:
hungarian = Hungarian(cost_matrix)
hungarian.calculate()
or
hungarian = Hungarian()
hungarian.calculate(cost_matrix)
Handle Profit matrix:
hungarian = Hungarian(profit_matrix, is_profit_matrix=True)
or
cost_matrix = Hungarian.make_cost_matrix(profit_matrix)
The matrix will be automatically padded if it is not square.
For that numpy's resize function is used, which automatically adds 0's to any row/column that is added
Get results and total potential after calculation:
hungarian.get_results()
hungarian.get_total_potential()
Implementation of the Hungarian (Munkres) Algorithm using Python and NumPy
References:
http://www.ams.jhu.edu/~castello/362/Handouts/hungarian.pdf
http://weber.ucsd.edu/~vcrawfor/hungar.pdf
http://en.wikipedia.org/wiki/Hungarian_algorithm
http://www.public.iastate.edu/~ddoty/HungarianAlgorithm.html
http://www.clapper.org/software/python/munkres/
# Module Information.
__version__ = "1.1.1"
__author__ = "<NAME>"
__url__ = "http://github.com/tdedecko/hungarian-algorithm"
__copyright__ = "(c) 2010 <NAME>"
__license__ = "MIT License"
"""
def __init__(self, input_matrix=None, is_profit_matrix=False):
"""
input_matrix is a List of Lists.
input_matrix is assumed to be a cost matrix unless is_profit_matrix is True.
"""
if input_matrix is not None:
# Save input
my_matrix = np.array(input_matrix)
self._input_matrix = np.array(input_matrix)
self._maxColumn = my_matrix.shape[1]
self._maxRow = my_matrix.shape[0]
# Adds 0s if any columns/rows are added. Otherwise stays unaltered
matrix_size = max(self._maxColumn, self._maxRow)
my_matrix.resize(matrix_size, matrix_size)
# Convert matrix to profit matrix if necessary
if is_profit_matrix:
my_matrix = self.make_cost_matrix(my_matrix)
self._cost_matrix = my_matrix
self._size = len(my_matrix)
self._shape = my_matrix.shape
# Results from algorithm.
self._results = []
self._totalPotential = 0
else:
self._cost_matrix = None
def get_results(self):
"""Get results after calculation."""
return self._results
def get_total_potential(self):
"""Returns expected value after calculation."""
return self._totalPotential
def calculate(self, input_matrix=None, is_profit_matrix=False):
"""
Implementation of the Hungarian (Munkres) Algorithm.
input_matrix is a List of Lists.
input_matrix is assumed to be a cost matrix unless is_profit_matrix is True.
"""
# Handle invalid and new matrix inputs.
if input_matrix is None and self._cost_matrix is None:
raise TypeError("Invalid input")
elif input_matrix is not None:
self.__init__(input_matrix, is_profit_matrix)
result_matrix = self._cost_matrix.copy()
# Step 1: Subtract row mins from each row.
for index, row in enumerate(result_matrix):
result_matrix[index] -= row.min()
# Step 2: Subtract column mins from each column.
for index, column in enumerate(result_matrix.T):
result_matrix[:, index] -= column.min()
# Step 3: Use minimum number of lines to cover all zeros in the matrix.
# If the total covered rows+columns is not equal to the matrix size then adjust matrix and repeat.
total_covered = 0
while total_covered < self._size:
# Find minimum number of lines to cover all zeros in the matrix and find total covered rows and columns.
cover_zeros = CoverZeros(result_matrix)
covered_rows = cover_zeros.get_covered_rows()
covered_columns = cover_zeros.get_covered_columns()
total_covered = len(covered_rows) + len(covered_columns)
# if the total covered rows+columns is not equal to the matrix size then adjust it by min uncovered num (m).
if total_covered < self._size:
result_matrix = self._adjust_matrix_by_min_uncovered_num(result_matrix, covered_rows, covered_columns)
# Step 4: Starting with the top row, work your way downwards as you make assignments.
# Find single zeros in rows or columns.
# Add them to final result and remove them and their associated row/column from the matrix.
expected_results = min(self._maxColumn, self._maxRow)
zero_locations = (result_matrix == 0)
while len(self._results) != expected_results:
# If number of zeros in the matrix is zero before finding all the results then an error has occurred.
if not zero_locations.any():
raise TypeError("Unable to find results. Algorithm has failed.")
# Find results and mark rows and columns for deletion
matched_rows, matched_columns = self.__find_matches(zero_locations)
# Make arbitrary selection
total_matched = len(matched_rows) + len(matched_columns)
if total_matched == 0:
matched_rows, matched_columns = self.select_arbitrary_match(zero_locations)
# Delete rows and columns
for row in matched_rows:
zero_locations[row] = False
for column in matched_columns:
zero_locations[:, column] = False
# Save Results
self.__set_results(zip(matched_rows, matched_columns))
# Calculate total potential
value = 0
for row, column in self._results:
value += self._input_matrix[row, column]
self._totalPotential = value
@staticmethod
def make_cost_matrix(profit_matrix):
"""
Converts a profit matrix into a cost matrix.
Expects NumPy objects as input.
"""
# subtract profit matrix from a matrix made of the max value of the profit matrix
matrix_shape = profit_matrix.shape
offset_matrix = np.ones(matrix_shape) * profit_matrix.max()
cost_matrix = offset_matrix - profit_matrix
return cost_matrix
def _adjust_matrix_by_min_uncovered_num(self, result_matrix, covered_rows, covered_columns):
"""Subtract m from every uncovered number and add m to every element covered with two lines."""
# Calculate minimum uncovered number (m)
elements = []
for row_index, row in enumerate(result_matrix):
if row_index not in covered_rows:
for index, element in enumerate(row):
if index not in covered_columns:
elements.append(element)
min_uncovered_num = min(elements)
# Add m to every covered element
adjusted_matrix = result_matrix
for row in covered_rows:
adjusted_matrix[row] += min_uncovered_num
for column in covered_columns:
adjusted_matrix[:, column] += min_uncovered_num
# Subtract m from every element
m_matrix = np.ones(self._shape) * min_uncovered_num
adjusted_matrix -= m_matrix
return adjusted_matrix
def __find_matches(self, zero_locations):
"""Returns rows and columns with matches in them."""
marked_rows = np.array([], dtype=int)
marked_columns = np.array([], dtype=int)
# Mark rows and columns with matches
# Iterate over rows
for index, row in enumerate(zero_locations):
row_index = np.array([index])
if np.sum(row) == 1:
column_index, = np.where(row)
marked_rows, marked_columns = self.__mark_rows_and_columns(marked_rows, marked_columns, row_index,
column_index)
# Iterate over columns
for index, column in enumerate(zero_locations.T):
column_index = np.array([index])
if np.sum(column) == 1:
row_index, = np.where(column)
marked_rows, marked_columns = self.__mark_rows_and_columns(marked_rows, marked_columns, row_index,
column_index)
return marked_rows, marked_columns
@staticmethod
def __mark_rows_and_columns(marked_rows, marked_columns, row_index, column_index):
"""Check if column or row is marked. If not marked then mark it."""
new_marked_rows = marked_rows
new_marked_columns = marked_columns
if not (marked_rows == row_index).any() and not (marked_columns == column_index).any():
new_marked_rows = np.insert(marked_rows, len(marked_rows), row_index)
new_marked_columns = np.insert(marked_columns, len(marked_columns), column_index)
return new_marked_rows, new_marked_columns
@staticmethod
def select_arbitrary_match(zero_locations):
"""Selects row column combination with minimum number of zeros in it."""
# Count number of zeros in row and column combinations
rows, columns = np.where(zero_locations)
zero_count = []
for index, row in enumerate(rows):
total_zeros = np.sum(zero_locations[row]) + np.sum(zero_locations[:, columns[index]])
zero_count.append(total_zeros)
# Get the row column combination with the minimum number of zeros.
indices = zero_count.index(min(zero_count))
row = np.array([rows[indices]])
column = np.array([columns[indices]])
return row, column
def __set_results(self, result_lists):
"""Set results during calculation."""
# Check if results values are out of bound from input matrix (because of matrix being padded).
# Add results to results list.
for result in result_lists:
row, column = result
if row < self._maxRow and column < self._maxColumn:
new_result = (int(row), int(column))
self._results.append(new_result)
class CoverZeros:
"""
Use minimum number of lines to cover all zeros in the matrix.
Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf
"""
def __init__(self, matrix):
"""
Input a matrix and save it as a boolean matrix to designate zero locations.
Run calculation procedure to generate results.
"""
# Find zeros in matrix
self._zero_locations = (matrix == 0)
self._shape = matrix.shape
# Choices starts without any choices made.
self._choices = np.zeros(self._shape, dtype=bool)
self._marked_rows = []
self._marked_columns = []
# marks rows and columns
self.__calculate()
# Draw lines through all unmarked rows and all marked columns.
self._covered_rows = list(set(range(self._shape[0])) - set(self._marked_rows))
self._covered_columns = self._marked_columns
def get_covered_rows(self):
"""Return list of covered rows."""
return self._covered_rows
def get_covered_columns(self):
"""Return list of covered columns."""
return self._covered_columns
def __calculate(self):
"""
Calculates minimum number of lines necessary to cover all zeros in a matrix.
Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf
"""
while True:
# Erase all marks.
self._marked_rows = []
self._marked_columns = []
# Mark all rows in which no choice has been made.
for index, row in enumerate(self._choices):
if not row.any():
self._marked_rows.append(index)
# If no marked rows then finish.
if not self._marked_rows:
return True
# Mark all columns not already marked which have zeros in marked rows.
num_marked_columns = self.__mark_new_columns_with_zeros_in_marked_rows()
# If no new marked columns then finish.
if num_marked_columns == 0:
return True
# While there is some choice in every marked column.
while self.__choice_in_all_marked_columns():
# Some Choice in every marked column.
# Mark all rows not already marked which have choices in marked columns.
num_marked_rows = self.__mark_new_rows_with_choices_in_marked_columns()
# If no new marks then Finish.
if num_marked_rows == 0:
return True
# Mark all columns not already marked which have zeros in marked rows.
num_marked_columns = self.__mark_new_columns_with_zeros_in_marked_rows()
# If no new marked columns then finish.
if num_marked_columns == 0:
return True
# No choice in one or more marked columns.
# Find a marked column that does not have a choice.
choice_column_index = self.__find_marked_column_without_choice()
while choice_column_index is not None:
# Find a zero in the column indexed that does not have a row with a choice.
choice_row_index = self.__find_row_without_choice(choice_column_index)
# Check if an available row was found.
new_choice_column_index = None
if choice_row_index is None:
# Find a good row to accomodate swap. Find its column pair.
choice_row_index, new_choice_column_index = \
self.__find_best_choice_row_and_new_column(choice_column_index)
# Delete old choice.
self._choices[choice_row_index, new_choice_column_index] = False
# Set zero to choice.
self._choices[choice_row_index, choice_column_index] = True
# Loop again if choice is added to a row with a choice already in it.
choice_column_index = new_choice_column_index
def __mark_new_columns_with_zeros_in_marked_rows(self):
"""Mark all columns not already marked which have zeros in marked rows."""
num_marked_columns = 0
for index, column in enumerate(self._zero_locations.T):
if index not in self._marked_columns:
if column.any():
row_indices, = np.where(column)
zeros_in_marked_rows = (set(self._marked_rows) & set(row_indices)) != set([])
if zeros_in_marked_rows:
self._marked_columns.append(index)
num_marked_columns += 1
return num_marked_columns
def __mark_new_rows_with_choices_in_marked_columns(self):
"""Mark all rows not already marked which have choices in marked columns."""
num_marked_rows = 0
for index, row in enumerate(self._choices):
if index not in self._marked_rows:
if row.any():
column_index, = np.where(row)
if column_index in self._marked_columns:
self._marked_rows.append(index)
num_marked_rows += 1
return num_marked_rows
def __choice_in_all_marked_columns(self):
"""Return Boolean True if there is a choice in all marked columns. Returns boolean False otherwise."""
for column_index in self._marked_columns:
if not self._choices[:, column_index].any():
return False
return True
def __find_marked_column_without_choice(self):
"""Find a marked column that does not have a choice."""
for column_index in self._marked_columns:
if not self._choices[:, column_index].any():
return column_index
raise TypeError(
"Could not find a column without a choice. Failed to cover matrix zeros. Algorithm has failed.")
def __find_row_without_choice(self, choice_column_index):
"""Find a row without a choice in it for the column indexed. If a row does not exist then return None."""
row_indices, = np.where(self._zero_locations[:, choice_column_index])
for row_index in row_indices:
if not self._choices[row_index].any():
return row_index
# All rows have choices. Return None.
return None
def __find_best_choice_row_and_new_column(self, choice_column_index):
"""
Find a row index to use for the choice so that the column that needs to be changed is optimal.
Return a random row and column if unable to find an optimal selection.
"""
row_indices, = np.where(self._zero_locations[:, choice_column_index])
for row_index in row_indices:
column_indices, = np.where(self._choices[row_index])
column_index = column_indices[0]
if self.__find_row_without_choice(column_index) is not None:
return row_index, column_index
# Cannot find optimal row and column. Return a random row and column.
from random import shuffle
shuffle(row_indices)
column_index, = np.where(self._choices[row_indices[0]])
return row_indices[0], column_index[0]
def basic_factorial(x):
"""Returns the factorial of the integer x."""
ans = 1
while x:
ans *= x
x -= 1
return ans
def basic_falling_factorial(high, low):
"""Returns the high! / low! """
if low == high:
return 1
if high < low:
return 0
i = low + 1
ans = 1
while i <= high:
ans *= i
i += 1
return ans
def lcm(x, y):
return x * y // gcd(x, y)
class Matrix:
def __init__(self, entries):
self.entries = entries
def __mul__(self, other):
result = [[0 for j in range(len(other.entries[0]))] for i in range(len(self.entries))]
for i in range(len(self.entries)):
for j in range(len(other.entries[0])):
for k in range(len(other.entries)):
result[i][j] += self.entries[i][k] * other.entries[k][j]
return Matrix(result)
def __mod__(self, mod):
if mod:
for i in range(len(self.entries)):
for j in range(len(self.entries[0])):
self.entries[i][j] %= mod
return self
def __pow__(self, n, mod=None):
assert (n > 0)
if n == 1:
return self.__mod__(mod)
half = self.__pow__(n >> 1, mod)
if n & 1 == 1: # if odd
return half.__mul__(half).__mul__(self).__mod__(mod)
else: # if even
return half.__mul__(half).__mod__(mod)
def __str__(self):
return str(self.entries)
class LinearHomogeneousRecurrence:
"""
Solve f(n+1) = c(n) f(n) + c(n-1) f(n-1) + ... + c(n-k) f(n-k) with
f(0) = a(0), f(1) = a(1), ..., f(k) = a(k).
Input:
coefficients = [c(n), c(n-1), ..., c(n-k)]
initial_values = [a(k), a(k-1), ..., a(0)]
"""
def __init__(self, coefficients, initial_values):
assert (len(coefficients) == len(initial_values))
self.dim = len(coefficients)
self.companion_matrix = self.__init__companion_matrix(coefficients)
self.initial_state = self.__init__initial_state(initial_values)
def __init__companion_matrix(self, coefficients):
entries = [[0 for j in range(self.dim)] for i in range(self.dim)]
for i in range(self.dim):
entries[0][i] = coefficients[i]
for i in range(1, self.dim):
entries[i][i - 1] = 1
return Matrix(entries)
def __init__initial_state(self, initial_values):
entries = [[value] for value in initial_values]
return Matrix(entries)
def get(self, n, mod=None):
if n < self.dim:
value = self.initial_state.entries[self.dim - n - 1][0]
return value % mod if mod else value
else:
return ((pow(self.companion_matrix, n - self.dim + 1, mod) * self.initial_state) % mod).entries[0][0]
class BaseConverter:
def convert_decimal(self, n, base):
reversed_rep = []
d = n
while d:
d, r = divmod(d, base)
reversed_rep.append(r)
return reversed_rep[::-1]
def convert_rep(self, rep, base):
result = 0
for digit in rep:
result = result * base + digit
return result
class BinomialCoefficient:
def __init__(self, prime):
self.prime = prime
self.base_values = self.__init_base_values(prime)
self.cache_values = {}
self.base_converter = BaseConverter()
def __init_base_values(self, prime):
curr = [1]
result = [curr]
for n in range(2, prime + 1):
next = [1]
for k in range(1, n - 1):
next.append(curr[k - 1] + curr[k])
next.append(1)
curr = next
result.append(curr)
return result
def get(self, m, n):
if m not in self.cache_values:
self.cache_values[m] = {}
if n not in self.cache_values[m]:
m_rep = self.base_converter.convert_decimal(m, self.prime)
n_rep = self.base_converter.convert_decimal(n, self.prime)
offset = len(m_rep) - len(n_rep)
result = 1
for i in range(len(n_rep)):
m_i = m_rep[offset + i]
n_i = n_rep[i]
if m_i < n_i:
return 0
result = (result * self.base_values[m_i][n_i]) % self.prime
self.cache_values[m][n] = result
return self.cache_values[m][n]
class EulerNumber:
def __init__(self, prime):
self.prime = prime
self.binomial_coefficient = BinomialCoefficient(prime)
self.factorial_mod = self.__init_factorial_mod(prime)
self.values = {0: (1, prime - 1)}
def __init_factorial_mod(self, prime):
result = [1]
for i in range(1, prime):
result.append((result[-1] * i) % prime)
return result
def get(self, n):
if n not in self.values:
a = self.__factorial_mod(n)
b = -1
for k in range(n):
c = self.binomial_coefficient.get(n, k)
a_k, b_k = self.get(k)
a += c * a_k
b += c * b_k
b -= c * self.__factorial_mod(n - k)
self.values[n] = (a % self.prime, b % self.prime)
return self.values[n]
def __factorial_mod(self, n):
if n >= self.prime:
return 0
return self.factorial_mod[n]
def sieve(n):
"""Return all primes <= n."""
np1 = n + 1
s = list(range(np1))
s[1] = 0
sqrtn = int(round(n ** 0.5))
for i in range(2, sqrtn + 1):
if s[i]:
s[i * i: np1: i] = [0] * len(range(i * i, np1, i))
return filter(None, s)
def timeit(method):
def timed(*args, **kw):
ts = time.time()
result = method(*args, **kw)
te = time.time()
print('{} took: {:.3f} seconds'.format(method.__name__, (te - ts)))
return result
return timed
def is_pandigital(num):
"""Return true if integer num uses all of the digits from 1 to n exactly once. False otherwise."""
str_num = str(num)
if str_num.count('0') > 0:
return False
n_digits = len(str_num)
for i in range(1, n_digits+1):
if str_num.count(str(i)) != 1:
return False
return True
def is_palindrome(n: int) -> bool:
ls = list(str(n))
return ls == ls[::-1]
def new_mod(str_a, m): # todo: test for bugs
"""
Returns a mod m.
Works well for m=0,1,2,3,4,5,8,9,10,11
Args:
str_a: <str>
m: <num>
Returns: a mod m
"""
int_a = int(str_a)
if len(str_a) > 2:
if m == 0 or m == 1:
return 0
if int_a == m:
return 0
if m == 2:
last = str_a[-1:]
return new_mod(last, m)
if m == 3 or m == 9:
sum_of_digits = sum([int(d) for d in str_a])
return new_mod(str(sum_of_digits), m)
if m == 4:
last = int(str_a[-1])
second_last = int(str_a[-2:-1])
answer = 2 * second_last + last
return new_mod(str(answer), m)
if m == 5:
last = str_a[-1]
return new_mod(last, m)
if m == 7:
last = int(str_a[-1:])
first = int(str_a[:-1])
answer = new_mod(str(first - 2 * last), m)
if answer == 0:
return 0
else:
return int_a % m
if m == 8:
last = int(str_a[-1:])
second_last = int(str_a[-2:-1])
third_last = int(str_a[-3:-2])
answer = 4 * third_last + 2 * second_last + last
return new_mod(str(answer), m)
if m == 10:
last = int(str_a[-1:])
return last
if m == 11:
new_a = 0
for i, digit in enumerate(str_a):
if not i % 2:
new_a += int(digit)
else:
new_a -= int(digit)
return new_mod(str(new_a), m)
if m == 13:
last = int(str_a[-1:])
first = int(str_a[:-1])
answer = new_mod(str(first - 9 * last), m)
if answer == 0:
return 0
else:
return int_a % m
return int_a % m
else:
return int_a % m
def combin(n, r):
"""A fast way to calculate binomial coefficients by <NAME> (contrib)."""
if 0 <= r <= n:
ntok = 1
rtok = 1
for t in range(1, min(r, n - r) + 1):
ntok *= n
rtok *= t
n -= 1
return ntok // rtok # bit-wise operation
else:
return 0
def square_free_sieve(limit):
"""Generator that yields all square free numbers less than limit"""
a = [True] * limit
# Needed so we don't mark off multiples of 1^2
yield 1
a[0] = a[1] = False
for i, is_square_free in enumerate(a):
if is_square_free:
yield i
i2 = i * i
for n in range(i2, limit, i2):
a[n] = False
def square_primes_sieve(limit, primes=None):
"""Returns a list all prime squares less than limit"""
if primes is None:
primes = sieve(int(limit))
return [i**2 for i in primes]
def primes_of_n(n, ls_prime=None):
"""
Given an integer n, return the prime factorization.
Args:
n: <int> integer
ls_prime: <list> optional parameter to specify a list of possible primes
Returns: <dict> of prime factors with the keys being the prime number, and the values
being the multiplicity of that factor.
"""
factors = {}
if ls_prime is None:
i = 2
p = 2
def next_prime(j):
return j
else:
i = 0
p = ls_prime[i]
def next_prime(j):
return ls_prime[j]
while p * p <= n:
while n % p == 0:
if p not in factors:
factors[p] = 0
factors[p] += 1
n //= p
i += 1
p = next_prime(i)
if n > 1:
factors[n] = 1
return factors
def cumsum(ls):
"""
Given a list, return the cumulative sum of the list
Args:
ls: list of numbers
Returns: <list>
"""
return list(accumulate(ls))
def generate_ascending_sub_sequence(options, num):
"""
Args:
options: <list> of objects, ordered in ascending order
num: <int> the size of the sub-sequence to return
Returns: an generator of sub-sequences of options in ascending order
e.g.
options = ['0', '1', '2']
num = 3
Returns:
('0', '0', '0')
('0', '0', '1')
('0', '0', '2')
('0', '1', '1')
('0', '1', '2')
('0', '2', '2')
('1', '1', '1')
('1', '1', '2')
('1', '2', '2')
('2', '2', '2')
"""
if num == 1:
for i in options:
yield (i, )
else:
for idx, j in enumerate(options):
for k in generate_ascending_sub_sequence(options[idx:], num - 1):
yield (j, *k)
@lru_cache(maxsize=None, typed=False)
def partition_number(n):
"""
Compute the partition number of n.
Using recursive equation found here: http://www.cs.utsa.edu/~wagner/python/fp/part.html
p(n) = sum_{k=1}^{n} (-1)^{k+1} (p(x) + p(y))
x = n - k*(3k-1)/2
y = n - k*(3k+1)/2
"""
if n < 0:
return 0
if n == 0:
return 1
sign = 1
summation = 0
for k in range(1, n+1):
x = n - int(k*(3*k-1)/2)
y = n - int(k*(3*k+1)/2)
summation += sign*(partition_number(x) + partition_number(y))
sign *= -1
return summation
def euler_totient_function(n):
dc_factors = primes_of_n(n)
iter_primes = ((1-1/p) for p in dc_factors.keys())
output = n
for p in iter_primes:
output *= p
return int(output)
@lru_cache(maxsize=None)
def sum_phi(n):
"""Returns sum_{i=1 to n} phi(i) where phi is the euler totient function"""
if n == 0:
return 0
if n == 1:
return 1
v = int(n**0.5)
nv = n//(v+1)
return n*(n+1)//2 - sum(sum_phi(x) * (n//x - n//(x+1)) for x in range(1, v+1)) - sum(sum_phi(n//k) for k in range(2, nv+1))
def farey(n, descending=False):
"""Print the n'th Farey sequence. Allow for either ascending or descending."""
a, b, c, d = 0, 1, 1, n
if descending:
a, c = 1, n - 1
ls_farey = [(a, b)]
while (c <= n and not descending) or (a > 0 and descending):
k = int((n + b) / d)
a, b, c, d = c, d, k * c - a, k * d - b
ls_farey.append((a, b))
return ls_farey
@lru_cache(maxsize=None, typed=False)
def len_faray_seq(n):
"""
Calculates the length of the n'th Faray Sequence.
Args:
n: <int>
Returns: <int>
Using the recursive relation |F_{n}| = |F_{n-1}| + euler_totient(n),
Expanding for all n and then inverting the relation, after using |F_1| = 2 we get
|F_{n}| = 1/2 * (n+3) * n - sum_{d=2}^{n} |F_{floor(n/d)}|
See Also: https://en.wikipedia.org/wiki/Farey_sequence
"""
if n == 1:
return 2
elif n < 1:
raise NotImplementedError("What happened??")
else:
return int(0.5*(n+3)*n) - sum(len_faray_seq(int(n/d)) for d in range(2, n+1))
def sign(x):
if x < 0:
return -1
elif x > 0:
return 1
else:
return 0
@timeit
def mobius_sieve(n: int, ls_prime: Union[List[int], None]) -> List[int]:
"""
Returns a list of all mobius function values.
mobius(n) = 1 if i is square-free with even number of primes,
-1 if odd number,
0 if contains square
"""
ls_m = [1]*n
if ls_prime is None:
ls_p = list(sieve(n))
else:
ls_p = ls_prime
for p in ls_p:
ls_m[p:n:p] = [-1 * x for x in ls_m[p:n:p]]
p2 = p ** 2
ls_m[p2:n:p2] = [0] * ((n-1)//p2) # len(ls_m[p2:n:p2]) == (n-1)//p2
return ls_m
# @lru_cache(maxsize=None)
def num_of_divisors(n):
"""
Return the number of positive divisors of n.
e.g. sigma(12) = 6
"""
dc_primes = primes_of_n(n)
return reduce(lambda a, b: a * b, (v + 1 for v in dc_primes.values()))
def divisors(prime_factors: Dict[int, int]) -> Generator[int, None, None]:
"""
Given the prime factorization of a number, return a generator of all of it's divisors.
Args:
prime_factors: a dictionary with the key being the prime and the value being the multiplicity of the prime.
For example if n=12 then then input would be {2:2, 3:1} since 12 = 2*2*3, and the generator would return
1,2,4,3,6,12
"""
ls_primes = list(prime_factors.keys())
# generates factors from ls_primes[k:] subset
def generate(k):
if k == len(ls_primes):
yield 1
else:
rest = generate(k + 1)
prime = ls_primes[k]
for factor in rest:
prime_to_i = 1
# prime_to_i iterates prime**i values, i being all possible exponents
for _ in range(prime_factors[prime] + 1):
yield factor * prime_to_i
prime_to_i *= prime
yield from generate(0)
@lru_cache(maxsize=None)
def binomial_recursive(n: int, k: int, mod_m: int) -> int:
if k == 0:
return 1
if n == 0:
return 0
if n == k:
return 1
if k > (n-k):
return binomial_recursive(n, n-k, mod_m)
return (binomial_recursive(n-1, k, mod_m) + binomial_recursive(n-1, k-1, mod_m)) % mod_m
@lru_cache(maxsize=None)
def fib(n: int, mod_m: int = int(1e12)) -> int:
if n < 2:
return n
return (fib(n-1, mod_m) + fib(n-2, mod_m)) % mod_m
def fibonacci_n_term(n: int) -> Union[int, NotImplementedError]:
"""Returns the nth fibonacci number"""
if n < 0:
return NotImplementedError('negative n is not implemented')
sq_5 = 5**0.5
phi_pos = (1 + sq_5) / 2
return round(phi_pos**n / sq_5)
def fibonacci_k_n_term(n: int, k: int) -> Union[int, NotImplementedError]:
"""
Returns the nth fibonacci_k number.
Where F_{k,n+1} = k*F_{k,n} + F_{k,n−1} for n ≥ 1
"""
if n < 0:
return NotImplementedError('negative n is not implemented')
if n in [0, 1]:
return n
root = (k+(k**2 + 4)**0.5) / 2
return round((root**n - (-root)**(-n)) / (root + 1/root))
def catalan_transform(n: int = 1, seq: List[int] = None, mod_m: Optional[int] = None) -> int:
"""http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf"""
if n == 0:
return 0
if mod_m is not None:
return int(sum((i * combin(2*n-i, n-i) * seq[i] // (2*n-i)) % mod_m for i in range(1, n+1)))
else:
return int(sum(i * combin(2*n-i, n-i) * seq[i] // (2*n-i) for i in range(1, n+1)))
def inv_catalan_transform(n: int = 1, seq: List[int] = None) -> int:
"""http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf"""
return sum(combin(i, n-i) * (-1)**(n-i) * seq[i] for i in range(n+1))
def get_all_mod_inverse_dict(m: int, max_n: int) -> Dict[int, int]:
"""
Computes a^-1 mod m for all a in [1, a-1]
https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num
Taking the key * value mod m for each key value would result in 1
"""
dc_inv = {1: 1}
for i in range(2, max_n+1):
dc_inv[i] = m - (m // i) * dc_inv[m % i] % m
return dc_inv
def get_all_mod_inverse_list(m: int, max_n: int) -> List[int]:
"""
Computes a^-1 mod m for all a in [1, a-1]
https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num
"""
ls_inv = [0, 1]
for i in range(2, max_n+1):
ls_inv.append(m - (m // i) * ls_inv[m % i] % m)
return ls_inv
def cycle_length(k: int) -> int:
"""
Computes the repeated cycle length of the decimal expansion of 1/k.
e.g.
1/6 = 0.1(6) -> 1
1/7 = 0.(142857) -> 6
For k not equal to a multiple of 2 or 5,
1/k has a cycle of d digits if 10^d == 1 mod k = 0
"""
while k % 2 == 0:
k //= 2 # remove factors of 2
while k % 5 == 0:
k //= 5 # remove factors of 5
if k == 1:
return 0 # this is not a repeating decimal
d = 1
x = 10 % k
while x != 1:
x = (x*10) % k
d += 1
return d
| import numpy as np
import time
from itertools import accumulate
from functools import lru_cache, reduce
from math import gcd
from typing import List, Union, Dict, Generator, Optional
class Hungarian:
"""
Implementation of the Hungarian (Munkres) Algorithm using np.
Usage:
hungarian = Hungarian(cost_matrix)
hungarian.calculate()
or
hungarian = Hungarian()
hungarian.calculate(cost_matrix)
Handle Profit matrix:
hungarian = Hungarian(profit_matrix, is_profit_matrix=True)
or
cost_matrix = Hungarian.make_cost_matrix(profit_matrix)
The matrix will be automatically padded if it is not square.
For that numpy's resize function is used, which automatically adds 0's to any row/column that is added
Get results and total potential after calculation:
hungarian.get_results()
hungarian.get_total_potential()
Implementation of the Hungarian (Munkres) Algorithm using Python and NumPy
References:
http://www.ams.jhu.edu/~castello/362/Handouts/hungarian.pdf
http://weber.ucsd.edu/~vcrawfor/hungar.pdf
http://en.wikipedia.org/wiki/Hungarian_algorithm
http://www.public.iastate.edu/~ddoty/HungarianAlgorithm.html
http://www.clapper.org/software/python/munkres/
# Module Information.
__version__ = "1.1.1"
__author__ = "<NAME>"
__url__ = "http://github.com/tdedecko/hungarian-algorithm"
__copyright__ = "(c) 2010 <NAME>"
__license__ = "MIT License"
"""
def __init__(self, input_matrix=None, is_profit_matrix=False):
"""
input_matrix is a List of Lists.
input_matrix is assumed to be a cost matrix unless is_profit_matrix is True.
"""
if input_matrix is not None:
# Save input
my_matrix = np.array(input_matrix)
self._input_matrix = np.array(input_matrix)
self._maxColumn = my_matrix.shape[1]
self._maxRow = my_matrix.shape[0]
# Adds 0s if any columns/rows are added. Otherwise stays unaltered
matrix_size = max(self._maxColumn, self._maxRow)
my_matrix.resize(matrix_size, matrix_size)
# Convert matrix to profit matrix if necessary
if is_profit_matrix:
my_matrix = self.make_cost_matrix(my_matrix)
self._cost_matrix = my_matrix
self._size = len(my_matrix)
self._shape = my_matrix.shape
# Results from algorithm.
self._results = []
self._totalPotential = 0
else:
self._cost_matrix = None
def get_results(self):
"""Get results after calculation."""
return self._results
def get_total_potential(self):
"""Returns expected value after calculation."""
return self._totalPotential
def calculate(self, input_matrix=None, is_profit_matrix=False):
"""
Implementation of the Hungarian (Munkres) Algorithm.
input_matrix is a List of Lists.
input_matrix is assumed to be a cost matrix unless is_profit_matrix is True.
"""
# Handle invalid and new matrix inputs.
if input_matrix is None and self._cost_matrix is None:
raise TypeError("Invalid input")
elif input_matrix is not None:
self.__init__(input_matrix, is_profit_matrix)
result_matrix = self._cost_matrix.copy()
# Step 1: Subtract row mins from each row.
for index, row in enumerate(result_matrix):
result_matrix[index] -= row.min()
# Step 2: Subtract column mins from each column.
for index, column in enumerate(result_matrix.T):
result_matrix[:, index] -= column.min()
# Step 3: Use minimum number of lines to cover all zeros in the matrix.
# If the total covered rows+columns is not equal to the matrix size then adjust matrix and repeat.
total_covered = 0
while total_covered < self._size:
# Find minimum number of lines to cover all zeros in the matrix and find total covered rows and columns.
cover_zeros = CoverZeros(result_matrix)
covered_rows = cover_zeros.get_covered_rows()
covered_columns = cover_zeros.get_covered_columns()
total_covered = len(covered_rows) + len(covered_columns)
# if the total covered rows+columns is not equal to the matrix size then adjust it by min uncovered num (m).
if total_covered < self._size:
result_matrix = self._adjust_matrix_by_min_uncovered_num(result_matrix, covered_rows, covered_columns)
# Step 4: Starting with the top row, work your way downwards as you make assignments.
# Find single zeros in rows or columns.
# Add them to final result and remove them and their associated row/column from the matrix.
expected_results = min(self._maxColumn, self._maxRow)
zero_locations = (result_matrix == 0)
while len(self._results) != expected_results:
# If number of zeros in the matrix is zero before finding all the results then an error has occurred.
if not zero_locations.any():
raise TypeError("Unable to find results. Algorithm has failed.")
# Find results and mark rows and columns for deletion
matched_rows, matched_columns = self.__find_matches(zero_locations)
# Make arbitrary selection
total_matched = len(matched_rows) + len(matched_columns)
if total_matched == 0:
matched_rows, matched_columns = self.select_arbitrary_match(zero_locations)
# Delete rows and columns
for row in matched_rows:
zero_locations[row] = False
for column in matched_columns:
zero_locations[:, column] = False
# Save Results
self.__set_results(zip(matched_rows, matched_columns))
# Calculate total potential
value = 0
for row, column in self._results:
value += self._input_matrix[row, column]
self._totalPotential = value
@staticmethod
def make_cost_matrix(profit_matrix):
"""
Converts a profit matrix into a cost matrix.
Expects NumPy objects as input.
"""
# subtract profit matrix from a matrix made of the max value of the profit matrix
matrix_shape = profit_matrix.shape
offset_matrix = np.ones(matrix_shape) * profit_matrix.max()
cost_matrix = offset_matrix - profit_matrix
return cost_matrix
def _adjust_matrix_by_min_uncovered_num(self, result_matrix, covered_rows, covered_columns):
"""Subtract m from every uncovered number and add m to every element covered with two lines."""
# Calculate minimum uncovered number (m)
elements = []
for row_index, row in enumerate(result_matrix):
if row_index not in covered_rows:
for index, element in enumerate(row):
if index not in covered_columns:
elements.append(element)
min_uncovered_num = min(elements)
# Add m to every covered element
adjusted_matrix = result_matrix
for row in covered_rows:
adjusted_matrix[row] += min_uncovered_num
for column in covered_columns:
adjusted_matrix[:, column] += min_uncovered_num
# Subtract m from every element
m_matrix = np.ones(self._shape) * min_uncovered_num
adjusted_matrix -= m_matrix
return adjusted_matrix
def __find_matches(self, zero_locations):
"""Returns rows and columns with matches in them."""
marked_rows = np.array([], dtype=int)
marked_columns = np.array([], dtype=int)
# Mark rows and columns with matches
# Iterate over rows
for index, row in enumerate(zero_locations):
row_index = np.array([index])
if np.sum(row) == 1:
column_index, = np.where(row)
marked_rows, marked_columns = self.__mark_rows_and_columns(marked_rows, marked_columns, row_index,
column_index)
# Iterate over columns
for index, column in enumerate(zero_locations.T):
column_index = np.array([index])
if np.sum(column) == 1:
row_index, = np.where(column)
marked_rows, marked_columns = self.__mark_rows_and_columns(marked_rows, marked_columns, row_index,
column_index)
return marked_rows, marked_columns
@staticmethod
def __mark_rows_and_columns(marked_rows, marked_columns, row_index, column_index):
"""Check if column or row is marked. If not marked then mark it."""
new_marked_rows = marked_rows
new_marked_columns = marked_columns
if not (marked_rows == row_index).any() and not (marked_columns == column_index).any():
new_marked_rows = np.insert(marked_rows, len(marked_rows), row_index)
new_marked_columns = np.insert(marked_columns, len(marked_columns), column_index)
return new_marked_rows, new_marked_columns
@staticmethod
def select_arbitrary_match(zero_locations):
"""Selects row column combination with minimum number of zeros in it."""
# Count number of zeros in row and column combinations
rows, columns = np.where(zero_locations)
zero_count = []
for index, row in enumerate(rows):
total_zeros = np.sum(zero_locations[row]) + np.sum(zero_locations[:, columns[index]])
zero_count.append(total_zeros)
# Get the row column combination with the minimum number of zeros.
indices = zero_count.index(min(zero_count))
row = np.array([rows[indices]])
column = np.array([columns[indices]])
return row, column
def __set_results(self, result_lists):
"""Set results during calculation."""
# Check if results values are out of bound from input matrix (because of matrix being padded).
# Add results to results list.
for result in result_lists:
row, column = result
if row < self._maxRow and column < self._maxColumn:
new_result = (int(row), int(column))
self._results.append(new_result)
class CoverZeros:
"""
Use minimum number of lines to cover all zeros in the matrix.
Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf
"""
def __init__(self, matrix):
"""
Input a matrix and save it as a boolean matrix to designate zero locations.
Run calculation procedure to generate results.
"""
# Find zeros in matrix
self._zero_locations = (matrix == 0)
self._shape = matrix.shape
# Choices starts without any choices made.
self._choices = np.zeros(self._shape, dtype=bool)
self._marked_rows = []
self._marked_columns = []
# marks rows and columns
self.__calculate()
# Draw lines through all unmarked rows and all marked columns.
self._covered_rows = list(set(range(self._shape[0])) - set(self._marked_rows))
self._covered_columns = self._marked_columns
def get_covered_rows(self):
"""Return list of covered rows."""
return self._covered_rows
def get_covered_columns(self):
"""Return list of covered columns."""
return self._covered_columns
def __calculate(self):
"""
Calculates minimum number of lines necessary to cover all zeros in a matrix.
Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf
"""
while True:
# Erase all marks.
self._marked_rows = []
self._marked_columns = []
# Mark all rows in which no choice has been made.
for index, row in enumerate(self._choices):
if not row.any():
self._marked_rows.append(index)
# If no marked rows then finish.
if not self._marked_rows:
return True
# Mark all columns not already marked which have zeros in marked rows.
num_marked_columns = self.__mark_new_columns_with_zeros_in_marked_rows()
# If no new marked columns then finish.
if num_marked_columns == 0:
return True
# While there is some choice in every marked column.
while self.__choice_in_all_marked_columns():
# Some Choice in every marked column.
# Mark all rows not already marked which have choices in marked columns.
num_marked_rows = self.__mark_new_rows_with_choices_in_marked_columns()
# If no new marks then Finish.
if num_marked_rows == 0:
return True
# Mark all columns not already marked which have zeros in marked rows.
num_marked_columns = self.__mark_new_columns_with_zeros_in_marked_rows()
# If no new marked columns then finish.
if num_marked_columns == 0:
return True
# No choice in one or more marked columns.
# Find a marked column that does not have a choice.
choice_column_index = self.__find_marked_column_without_choice()
while choice_column_index is not None:
# Find a zero in the column indexed that does not have a row with a choice.
choice_row_index = self.__find_row_without_choice(choice_column_index)
# Check if an available row was found.
new_choice_column_index = None
if choice_row_index is None:
# Find a good row to accomodate swap. Find its column pair.
choice_row_index, new_choice_column_index = \
self.__find_best_choice_row_and_new_column(choice_column_index)
# Delete old choice.
self._choices[choice_row_index, new_choice_column_index] = False
# Set zero to choice.
self._choices[choice_row_index, choice_column_index] = True
# Loop again if choice is added to a row with a choice already in it.
choice_column_index = new_choice_column_index
def __mark_new_columns_with_zeros_in_marked_rows(self):
"""Mark all columns not already marked which have zeros in marked rows."""
num_marked_columns = 0
for index, column in enumerate(self._zero_locations.T):
if index not in self._marked_columns:
if column.any():
row_indices, = np.where(column)
zeros_in_marked_rows = (set(self._marked_rows) & set(row_indices)) != set([])
if zeros_in_marked_rows:
self._marked_columns.append(index)
num_marked_columns += 1
return num_marked_columns
def __mark_new_rows_with_choices_in_marked_columns(self):
"""Mark all rows not already marked which have choices in marked columns."""
num_marked_rows = 0
for index, row in enumerate(self._choices):
if index not in self._marked_rows:
if row.any():
column_index, = np.where(row)
if column_index in self._marked_columns:
self._marked_rows.append(index)
num_marked_rows += 1
return num_marked_rows
def __choice_in_all_marked_columns(self):
"""Return Boolean True if there is a choice in all marked columns. Returns boolean False otherwise."""
for column_index in self._marked_columns:
if not self._choices[:, column_index].any():
return False
return True
def __find_marked_column_without_choice(self):
"""Find a marked column that does not have a choice."""
for column_index in self._marked_columns:
if not self._choices[:, column_index].any():
return column_index
raise TypeError(
"Could not find a column without a choice. Failed to cover matrix zeros. Algorithm has failed.")
def __find_row_without_choice(self, choice_column_index):
"""Find a row without a choice in it for the column indexed. If a row does not exist then return None."""
row_indices, = np.where(self._zero_locations[:, choice_column_index])
for row_index in row_indices:
if not self._choices[row_index].any():
return row_index
# All rows have choices. Return None.
return None
def __find_best_choice_row_and_new_column(self, choice_column_index):
"""
Find a row index to use for the choice so that the column that needs to be changed is optimal.
Return a random row and column if unable to find an optimal selection.
"""
row_indices, = np.where(self._zero_locations[:, choice_column_index])
for row_index in row_indices:
column_indices, = np.where(self._choices[row_index])
column_index = column_indices[0]
if self.__find_row_without_choice(column_index) is not None:
return row_index, column_index
# Cannot find optimal row and column. Return a random row and column.
from random import shuffle
shuffle(row_indices)
column_index, = np.where(self._choices[row_indices[0]])
return row_indices[0], column_index[0]
def basic_factorial(x):
"""Returns the factorial of the integer x."""
ans = 1
while x:
ans *= x
x -= 1
return ans
def basic_falling_factorial(high, low):
"""Returns the high! / low! """
if low == high:
return 1
if high < low:
return 0
i = low + 1
ans = 1
while i <= high:
ans *= i
i += 1
return ans
def lcm(x, y):
return x * y // gcd(x, y)
class Matrix:
def __init__(self, entries):
self.entries = entries
def __mul__(self, other):
result = [[0 for j in range(len(other.entries[0]))] for i in range(len(self.entries))]
for i in range(len(self.entries)):
for j in range(len(other.entries[0])):
for k in range(len(other.entries)):
result[i][j] += self.entries[i][k] * other.entries[k][j]
return Matrix(result)
def __mod__(self, mod):
if mod:
for i in range(len(self.entries)):
for j in range(len(self.entries[0])):
self.entries[i][j] %= mod
return self
def __pow__(self, n, mod=None):
assert (n > 0)
if n == 1:
return self.__mod__(mod)
half = self.__pow__(n >> 1, mod)
if n & 1 == 1: # if odd
return half.__mul__(half).__mul__(self).__mod__(mod)
else: # if even
return half.__mul__(half).__mod__(mod)
def __str__(self):
return str(self.entries)
class LinearHomogeneousRecurrence:
"""
Solve f(n+1) = c(n) f(n) + c(n-1) f(n-1) + ... + c(n-k) f(n-k) with
f(0) = a(0), f(1) = a(1), ..., f(k) = a(k).
Input:
coefficients = [c(n), c(n-1), ..., c(n-k)]
initial_values = [a(k), a(k-1), ..., a(0)]
"""
def __init__(self, coefficients, initial_values):
assert (len(coefficients) == len(initial_values))
self.dim = len(coefficients)
self.companion_matrix = self.__init__companion_matrix(coefficients)
self.initial_state = self.__init__initial_state(initial_values)
def __init__companion_matrix(self, coefficients):
entries = [[0 for j in range(self.dim)] for i in range(self.dim)]
for i in range(self.dim):
entries[0][i] = coefficients[i]
for i in range(1, self.dim):
entries[i][i - 1] = 1
return Matrix(entries)
def __init__initial_state(self, initial_values):
entries = [[value] for value in initial_values]
return Matrix(entries)
def get(self, n, mod=None):
if n < self.dim:
value = self.initial_state.entries[self.dim - n - 1][0]
return value % mod if mod else value
else:
return ((pow(self.companion_matrix, n - self.dim + 1, mod) * self.initial_state) % mod).entries[0][0]
class BaseConverter:
def convert_decimal(self, n, base):
reversed_rep = []
d = n
while d:
d, r = divmod(d, base)
reversed_rep.append(r)
return reversed_rep[::-1]
def convert_rep(self, rep, base):
result = 0
for digit in rep:
result = result * base + digit
return result
class BinomialCoefficient:
def __init__(self, prime):
self.prime = prime
self.base_values = self.__init_base_values(prime)
self.cache_values = {}
self.base_converter = BaseConverter()
def __init_base_values(self, prime):
curr = [1]
result = [curr]
for n in range(2, prime + 1):
next = [1]
for k in range(1, n - 1):
next.append(curr[k - 1] + curr[k])
next.append(1)
curr = next
result.append(curr)
return result
def get(self, m, n):
if m not in self.cache_values:
self.cache_values[m] = {}
if n not in self.cache_values[m]:
m_rep = self.base_converter.convert_decimal(m, self.prime)
n_rep = self.base_converter.convert_decimal(n, self.prime)
offset = len(m_rep) - len(n_rep)
result = 1
for i in range(len(n_rep)):
m_i = m_rep[offset + i]
n_i = n_rep[i]
if m_i < n_i:
return 0
result = (result * self.base_values[m_i][n_i]) % self.prime
self.cache_values[m][n] = result
return self.cache_values[m][n]
class EulerNumber:
def __init__(self, prime):
self.prime = prime
self.binomial_coefficient = BinomialCoefficient(prime)
self.factorial_mod = self.__init_factorial_mod(prime)
self.values = {0: (1, prime - 1)}
def __init_factorial_mod(self, prime):
result = [1]
for i in range(1, prime):
result.append((result[-1] * i) % prime)
return result
def get(self, n):
if n not in self.values:
a = self.__factorial_mod(n)
b = -1
for k in range(n):
c = self.binomial_coefficient.get(n, k)
a_k, b_k = self.get(k)
a += c * a_k
b += c * b_k
b -= c * self.__factorial_mod(n - k)
self.values[n] = (a % self.prime, b % self.prime)
return self.values[n]
def __factorial_mod(self, n):
if n >= self.prime:
return 0
return self.factorial_mod[n]
def sieve(n):
"""Return all primes <= n."""
np1 = n + 1
s = list(range(np1))
s[1] = 0
sqrtn = int(round(n ** 0.5))
for i in range(2, sqrtn + 1):
if s[i]:
s[i * i: np1: i] = [0] * len(range(i * i, np1, i))
return filter(None, s)
def timeit(method):
def timed(*args, **kw):
ts = time.time()
result = method(*args, **kw)
te = time.time()
print('{} took: {:.3f} seconds'.format(method.__name__, (te - ts)))
return result
return timed
def is_pandigital(num):
"""Return true if integer num uses all of the digits from 1 to n exactly once. False otherwise."""
str_num = str(num)
if str_num.count('0') > 0:
return False
n_digits = len(str_num)
for i in range(1, n_digits+1):
if str_num.count(str(i)) != 1:
return False
return True
def is_palindrome(n: int) -> bool:
ls = list(str(n))
return ls == ls[::-1]
def new_mod(str_a, m): # todo: test for bugs
"""
Returns a mod m.
Works well for m=0,1,2,3,4,5,8,9,10,11
Args:
str_a: <str>
m: <num>
Returns: a mod m
"""
int_a = int(str_a)
if len(str_a) > 2:
if m == 0 or m == 1:
return 0
if int_a == m:
return 0
if m == 2:
last = str_a[-1:]
return new_mod(last, m)
if m == 3 or m == 9:
sum_of_digits = sum([int(d) for d in str_a])
return new_mod(str(sum_of_digits), m)
if m == 4:
last = int(str_a[-1])
second_last = int(str_a[-2:-1])
answer = 2 * second_last + last
return new_mod(str(answer), m)
if m == 5:
last = str_a[-1]
return new_mod(last, m)
if m == 7:
last = int(str_a[-1:])
first = int(str_a[:-1])
answer = new_mod(str(first - 2 * last), m)
if answer == 0:
return 0
else:
return int_a % m
if m == 8:
last = int(str_a[-1:])
second_last = int(str_a[-2:-1])
third_last = int(str_a[-3:-2])
answer = 4 * third_last + 2 * second_last + last
return new_mod(str(answer), m)
if m == 10:
last = int(str_a[-1:])
return last
if m == 11:
new_a = 0
for i, digit in enumerate(str_a):
if not i % 2:
new_a += int(digit)
else:
new_a -= int(digit)
return new_mod(str(new_a), m)
if m == 13:
last = int(str_a[-1:])
first = int(str_a[:-1])
answer = new_mod(str(first - 9 * last), m)
if answer == 0:
return 0
else:
return int_a % m
return int_a % m
else:
return int_a % m
def combin(n, r):
"""A fast way to calculate binomial coefficients by <NAME> (contrib)."""
if 0 <= r <= n:
ntok = 1
rtok = 1
for t in range(1, min(r, n - r) + 1):
ntok *= n
rtok *= t
n -= 1
return ntok // rtok # bit-wise operation
else:
return 0
def square_free_sieve(limit):
"""Generator that yields all square free numbers less than limit"""
a = [True] * limit
# Needed so we don't mark off multiples of 1^2
yield 1
a[0] = a[1] = False
for i, is_square_free in enumerate(a):
if is_square_free:
yield i
i2 = i * i
for n in range(i2, limit, i2):
a[n] = False
def square_primes_sieve(limit, primes=None):
"""Returns a list all prime squares less than limit"""
if primes is None:
primes = sieve(int(limit))
return [i**2 for i in primes]
def primes_of_n(n, ls_prime=None):
"""
Given an integer n, return the prime factorization.
Args:
n: <int> integer
ls_prime: <list> optional parameter to specify a list of possible primes
Returns: <dict> of prime factors with the keys being the prime number, and the values
being the multiplicity of that factor.
"""
factors = {}
if ls_prime is None:
i = 2
p = 2
def next_prime(j):
return j
else:
i = 0
p = ls_prime[i]
def next_prime(j):
return ls_prime[j]
while p * p <= n:
while n % p == 0:
if p not in factors:
factors[p] = 0
factors[p] += 1
n //= p
i += 1
p = next_prime(i)
if n > 1:
factors[n] = 1
return factors
def cumsum(ls):
"""
Given a list, return the cumulative sum of the list
Args:
ls: list of numbers
Returns: <list>
"""
return list(accumulate(ls))
def generate_ascending_sub_sequence(options, num):
"""
Args:
options: <list> of objects, ordered in ascending order
num: <int> the size of the sub-sequence to return
Returns: an generator of sub-sequences of options in ascending order
e.g.
options = ['0', '1', '2']
num = 3
Returns:
('0', '0', '0')
('0', '0', '1')
('0', '0', '2')
('0', '1', '1')
('0', '1', '2')
('0', '2', '2')
('1', '1', '1')
('1', '1', '2')
('1', '2', '2')
('2', '2', '2')
"""
if num == 1:
for i in options:
yield (i, )
else:
for idx, j in enumerate(options):
for k in generate_ascending_sub_sequence(options[idx:], num - 1):
yield (j, *k)
@lru_cache(maxsize=None, typed=False)
def partition_number(n):
"""
Compute the partition number of n.
Using recursive equation found here: http://www.cs.utsa.edu/~wagner/python/fp/part.html
p(n) = sum_{k=1}^{n} (-1)^{k+1} (p(x) + p(y))
x = n - k*(3k-1)/2
y = n - k*(3k+1)/2
"""
if n < 0:
return 0
if n == 0:
return 1
sign = 1
summation = 0
for k in range(1, n+1):
x = n - int(k*(3*k-1)/2)
y = n - int(k*(3*k+1)/2)
summation += sign*(partition_number(x) + partition_number(y))
sign *= -1
return summation
def euler_totient_function(n):
dc_factors = primes_of_n(n)
iter_primes = ((1-1/p) for p in dc_factors.keys())
output = n
for p in iter_primes:
output *= p
return int(output)
@lru_cache(maxsize=None)
def sum_phi(n):
"""Returns sum_{i=1 to n} phi(i) where phi is the euler totient function"""
if n == 0:
return 0
if n == 1:
return 1
v = int(n**0.5)
nv = n//(v+1)
return n*(n+1)//2 - sum(sum_phi(x) * (n//x - n//(x+1)) for x in range(1, v+1)) - sum(sum_phi(n//k) for k in range(2, nv+1))
def farey(n, descending=False):
"""Print the n'th Farey sequence. Allow for either ascending or descending."""
a, b, c, d = 0, 1, 1, n
if descending:
a, c = 1, n - 1
ls_farey = [(a, b)]
while (c <= n and not descending) or (a > 0 and descending):
k = int((n + b) / d)
a, b, c, d = c, d, k * c - a, k * d - b
ls_farey.append((a, b))
return ls_farey
@lru_cache(maxsize=None, typed=False)
def len_faray_seq(n):
"""
Calculates the length of the n'th Faray Sequence.
Args:
n: <int>
Returns: <int>
Using the recursive relation |F_{n}| = |F_{n-1}| + euler_totient(n),
Expanding for all n and then inverting the relation, after using |F_1| = 2 we get
|F_{n}| = 1/2 * (n+3) * n - sum_{d=2}^{n} |F_{floor(n/d)}|
See Also: https://en.wikipedia.org/wiki/Farey_sequence
"""
if n == 1:
return 2
elif n < 1:
raise NotImplementedError("What happened??")
else:
return int(0.5*(n+3)*n) - sum(len_faray_seq(int(n/d)) for d in range(2, n+1))
def sign(x):
if x < 0:
return -1
elif x > 0:
return 1
else:
return 0
@timeit
def mobius_sieve(n: int, ls_prime: Union[List[int], None]) -> List[int]:
"""
Returns a list of all mobius function values.
mobius(n) = 1 if i is square-free with even number of primes,
-1 if odd number,
0 if contains square
"""
ls_m = [1]*n
if ls_prime is None:
ls_p = list(sieve(n))
else:
ls_p = ls_prime
for p in ls_p:
ls_m[p:n:p] = [-1 * x for x in ls_m[p:n:p]]
p2 = p ** 2
ls_m[p2:n:p2] = [0] * ((n-1)//p2) # len(ls_m[p2:n:p2]) == (n-1)//p2
return ls_m
# @lru_cache(maxsize=None)
def num_of_divisors(n):
"""
Return the number of positive divisors of n.
e.g. sigma(12) = 6
"""
dc_primes = primes_of_n(n)
return reduce(lambda a, b: a * b, (v + 1 for v in dc_primes.values()))
def divisors(prime_factors: Dict[int, int]) -> Generator[int, None, None]:
"""
Given the prime factorization of a number, return a generator of all of it's divisors.
Args:
prime_factors: a dictionary with the key being the prime and the value being the multiplicity of the prime.
For example if n=12 then then input would be {2:2, 3:1} since 12 = 2*2*3, and the generator would return
1,2,4,3,6,12
"""
ls_primes = list(prime_factors.keys())
# generates factors from ls_primes[k:] subset
def generate(k):
if k == len(ls_primes):
yield 1
else:
rest = generate(k + 1)
prime = ls_primes[k]
for factor in rest:
prime_to_i = 1
# prime_to_i iterates prime**i values, i being all possible exponents
for _ in range(prime_factors[prime] + 1):
yield factor * prime_to_i
prime_to_i *= prime
yield from generate(0)
@lru_cache(maxsize=None)
def binomial_recursive(n: int, k: int, mod_m: int) -> int:
if k == 0:
return 1
if n == 0:
return 0
if n == k:
return 1
if k > (n-k):
return binomial_recursive(n, n-k, mod_m)
return (binomial_recursive(n-1, k, mod_m) + binomial_recursive(n-1, k-1, mod_m)) % mod_m
@lru_cache(maxsize=None)
def fib(n: int, mod_m: int = int(1e12)) -> int:
if n < 2:
return n
return (fib(n-1, mod_m) + fib(n-2, mod_m)) % mod_m
def fibonacci_n_term(n: int) -> Union[int, NotImplementedError]:
"""Returns the nth fibonacci number"""
if n < 0:
return NotImplementedError('negative n is not implemented')
sq_5 = 5**0.5
phi_pos = (1 + sq_5) / 2
return round(phi_pos**n / sq_5)
def fibonacci_k_n_term(n: int, k: int) -> Union[int, NotImplementedError]:
"""
Returns the nth fibonacci_k number.
Where F_{k,n+1} = k*F_{k,n} + F_{k,n−1} for n ≥ 1
"""
if n < 0:
return NotImplementedError('negative n is not implemented')
if n in [0, 1]:
return n
root = (k+(k**2 + 4)**0.5) / 2
return round((root**n - (-root)**(-n)) / (root + 1/root))
def catalan_transform(n: int = 1, seq: List[int] = None, mod_m: Optional[int] = None) -> int:
"""http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf"""
if n == 0:
return 0
if mod_m is not None:
return int(sum((i * combin(2*n-i, n-i) * seq[i] // (2*n-i)) % mod_m for i in range(1, n+1)))
else:
return int(sum(i * combin(2*n-i, n-i) * seq[i] // (2*n-i) for i in range(1, n+1)))
def inv_catalan_transform(n: int = 1, seq: List[int] = None) -> int:
"""http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf"""
return sum(combin(i, n-i) * (-1)**(n-i) * seq[i] for i in range(n+1))
def get_all_mod_inverse_dict(m: int, max_n: int) -> Dict[int, int]:
"""
Computes a^-1 mod m for all a in [1, a-1]
https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num
Taking the key * value mod m for each key value would result in 1
"""
dc_inv = {1: 1}
for i in range(2, max_n+1):
dc_inv[i] = m - (m // i) * dc_inv[m % i] % m
return dc_inv
def get_all_mod_inverse_list(m: int, max_n: int) -> List[int]:
"""
Computes a^-1 mod m for all a in [1, a-1]
https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num
"""
ls_inv = [0, 1]
for i in range(2, max_n+1):
ls_inv.append(m - (m // i) * ls_inv[m % i] % m)
return ls_inv
def cycle_length(k: int) -> int:
"""
Computes the repeated cycle length of the decimal expansion of 1/k.
e.g.
1/6 = 0.1(6) -> 1
1/7 = 0.(142857) -> 6
For k not equal to a multiple of 2 or 5,
1/k has a cycle of d digits if 10^d == 1 mod k = 0
"""
while k % 2 == 0:
k //= 2 # remove factors of 2
while k % 5 == 0:
k //= 5 # remove factors of 5
if k == 1:
return 0 # this is not a repeating decimal
d = 1
x = 10 % k
while x != 1:
x = (x*10) % k
d += 1
return d
| en | 0.766389 | Implementation of the Hungarian (Munkres) Algorithm using np. Usage: hungarian = Hungarian(cost_matrix) hungarian.calculate() or hungarian = Hungarian() hungarian.calculate(cost_matrix) Handle Profit matrix: hungarian = Hungarian(profit_matrix, is_profit_matrix=True) or cost_matrix = Hungarian.make_cost_matrix(profit_matrix) The matrix will be automatically padded if it is not square. For that numpy's resize function is used, which automatically adds 0's to any row/column that is added Get results and total potential after calculation: hungarian.get_results() hungarian.get_total_potential() Implementation of the Hungarian (Munkres) Algorithm using Python and NumPy References: http://www.ams.jhu.edu/~castello/362/Handouts/hungarian.pdf http://weber.ucsd.edu/~vcrawfor/hungar.pdf http://en.wikipedia.org/wiki/Hungarian_algorithm http://www.public.iastate.edu/~ddoty/HungarianAlgorithm.html http://www.clapper.org/software/python/munkres/ # Module Information. __version__ = "1.1.1" __author__ = "<NAME>" __url__ = "http://github.com/tdedecko/hungarian-algorithm" __copyright__ = "(c) 2010 <NAME>" __license__ = "MIT License" input_matrix is a List of Lists. input_matrix is assumed to be a cost matrix unless is_profit_matrix is True. # Save input # Adds 0s if any columns/rows are added. Otherwise stays unaltered # Convert matrix to profit matrix if necessary # Results from algorithm. Get results after calculation. Returns expected value after calculation. Implementation of the Hungarian (Munkres) Algorithm. input_matrix is a List of Lists. input_matrix is assumed to be a cost matrix unless is_profit_matrix is True. # Handle invalid and new matrix inputs. # Step 1: Subtract row mins from each row. # Step 2: Subtract column mins from each column. # Step 3: Use minimum number of lines to cover all zeros in the matrix. # If the total covered rows+columns is not equal to the matrix size then adjust matrix and repeat. # Find minimum number of lines to cover all zeros in the matrix and find total covered rows and columns. # if the total covered rows+columns is not equal to the matrix size then adjust it by min uncovered num (m). # Step 4: Starting with the top row, work your way downwards as you make assignments. # Find single zeros in rows or columns. # Add them to final result and remove them and their associated row/column from the matrix. # If number of zeros in the matrix is zero before finding all the results then an error has occurred. # Find results and mark rows and columns for deletion # Make arbitrary selection # Delete rows and columns # Save Results # Calculate total potential Converts a profit matrix into a cost matrix. Expects NumPy objects as input. # subtract profit matrix from a matrix made of the max value of the profit matrix Subtract m from every uncovered number and add m to every element covered with two lines. # Calculate minimum uncovered number (m) # Add m to every covered element # Subtract m from every element Returns rows and columns with matches in them. # Mark rows and columns with matches # Iterate over rows # Iterate over columns Check if column or row is marked. If not marked then mark it. Selects row column combination with minimum number of zeros in it. # Count number of zeros in row and column combinations # Get the row column combination with the minimum number of zeros. Set results during calculation. # Check if results values are out of bound from input matrix (because of matrix being padded). # Add results to results list. Use minimum number of lines to cover all zeros in the matrix. Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf Input a matrix and save it as a boolean matrix to designate zero locations. Run calculation procedure to generate results. # Find zeros in matrix # Choices starts without any choices made. # marks rows and columns # Draw lines through all unmarked rows and all marked columns. Return list of covered rows. Return list of covered columns. Calculates minimum number of lines necessary to cover all zeros in a matrix. Algorithm based on: http://weber.ucsd.edu/~vcrawfor/hungar.pdf # Erase all marks. # Mark all rows in which no choice has been made. # If no marked rows then finish. # Mark all columns not already marked which have zeros in marked rows. # If no new marked columns then finish. # While there is some choice in every marked column. # Some Choice in every marked column. # Mark all rows not already marked which have choices in marked columns. # If no new marks then Finish. # Mark all columns not already marked which have zeros in marked rows. # If no new marked columns then finish. # No choice in one or more marked columns. # Find a marked column that does not have a choice. # Find a zero in the column indexed that does not have a row with a choice. # Check if an available row was found. # Find a good row to accomodate swap. Find its column pair. # Delete old choice. # Set zero to choice. # Loop again if choice is added to a row with a choice already in it. Mark all columns not already marked which have zeros in marked rows. Mark all rows not already marked which have choices in marked columns. Return Boolean True if there is a choice in all marked columns. Returns boolean False otherwise. Find a marked column that does not have a choice. Find a row without a choice in it for the column indexed. If a row does not exist then return None. # All rows have choices. Return None. Find a row index to use for the choice so that the column that needs to be changed is optimal. Return a random row and column if unable to find an optimal selection. # Cannot find optimal row and column. Return a random row and column. Returns the factorial of the integer x. Returns the high! / low! # if odd # if even Solve f(n+1) = c(n) f(n) + c(n-1) f(n-1) + ... + c(n-k) f(n-k) with f(0) = a(0), f(1) = a(1), ..., f(k) = a(k). Input: coefficients = [c(n), c(n-1), ..., c(n-k)] initial_values = [a(k), a(k-1), ..., a(0)] Return all primes <= n. Return true if integer num uses all of the digits from 1 to n exactly once. False otherwise. # todo: test for bugs Returns a mod m. Works well for m=0,1,2,3,4,5,8,9,10,11 Args: str_a: <str> m: <num> Returns: a mod m A fast way to calculate binomial coefficients by <NAME> (contrib). # bit-wise operation Generator that yields all square free numbers less than limit # Needed so we don't mark off multiples of 1^2 Returns a list all prime squares less than limit Given an integer n, return the prime factorization. Args: n: <int> integer ls_prime: <list> optional parameter to specify a list of possible primes Returns: <dict> of prime factors with the keys being the prime number, and the values being the multiplicity of that factor. Given a list, return the cumulative sum of the list Args: ls: list of numbers Returns: <list> Args: options: <list> of objects, ordered in ascending order num: <int> the size of the sub-sequence to return Returns: an generator of sub-sequences of options in ascending order e.g. options = ['0', '1', '2'] num = 3 Returns: ('0', '0', '0') ('0', '0', '1') ('0', '0', '2') ('0', '1', '1') ('0', '1', '2') ('0', '2', '2') ('1', '1', '1') ('1', '1', '2') ('1', '2', '2') ('2', '2', '2') Compute the partition number of n. Using recursive equation found here: http://www.cs.utsa.edu/~wagner/python/fp/part.html p(n) = sum_{k=1}^{n} (-1)^{k+1} (p(x) + p(y)) x = n - k*(3k-1)/2 y = n - k*(3k+1)/2 Returns sum_{i=1 to n} phi(i) where phi is the euler totient function Print the n'th Farey sequence. Allow for either ascending or descending. Calculates the length of the n'th Faray Sequence. Args: n: <int> Returns: <int> Using the recursive relation |F_{n}| = |F_{n-1}| + euler_totient(n), Expanding for all n and then inverting the relation, after using |F_1| = 2 we get |F_{n}| = 1/2 * (n+3) * n - sum_{d=2}^{n} |F_{floor(n/d)}| See Also: https://en.wikipedia.org/wiki/Farey_sequence Returns a list of all mobius function values. mobius(n) = 1 if i is square-free with even number of primes, -1 if odd number, 0 if contains square # len(ls_m[p2:n:p2]) == (n-1)//p2 # @lru_cache(maxsize=None) Return the number of positive divisors of n. e.g. sigma(12) = 6 Given the prime factorization of a number, return a generator of all of it's divisors. Args: prime_factors: a dictionary with the key being the prime and the value being the multiplicity of the prime. For example if n=12 then then input would be {2:2, 3:1} since 12 = 2*2*3, and the generator would return 1,2,4,3,6,12 # generates factors from ls_primes[k:] subset # prime_to_i iterates prime**i values, i being all possible exponents Returns the nth fibonacci number Returns the nth fibonacci_k number. Where F_{k,n+1} = k*F_{k,n} + F_{k,n−1} for n ≥ 1 http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf http://www.kurims.kyoto-u.ac.jp/EMIS/journals/JIS/VOL8/Barry/barry84.pdf Computes a^-1 mod m for all a in [1, a-1] https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num Taking the key * value mod m for each key value would result in 1 Computes a^-1 mod m for all a in [1, a-1] https://cp-algorithms.com/algebra/module-inverse.html#mod-inv-all-num Computes the repeated cycle length of the decimal expansion of 1/k. e.g. 1/6 = 0.1(6) -> 1 1/7 = 0.(142857) -> 6 For k not equal to a multiple of 2 or 5, 1/k has a cycle of d digits if 10^d == 1 mod k = 0 # remove factors of 2 # remove factors of 5 # this is not a repeating decimal | 3.5167 | 4 |
cli/parser.py | JashG/dupe-deleter | 0 | 6615322 | import argparse
import sys
def _create_parser():
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(title='Commands',
description='Actions to perform',
help='Commands')
# 'Delete' command
parser_delete = subparsers.add_parser('delete')
# List of directories from which to delete duplicates
parser.add_argument('dirs',
metavar='Directory List',
nargs='*',
help='Path(s) to directories to search, separated by space',
action='store')
# Optional recursive delete
parser.add_argument('-r',
'--recursive',
metavar='Directory List',
type=list,
nargs='*',
help='Enable recursive search for all provided path(s).'
'If you only want to recursively search some path(s), but not all,'
'include them here',
action='store')
# Optional dry run (prints duplicates but doesn't delete them)
parser.add_argument('-dry',
'--dry',
help='Executes as dry run; prints duplicate files but does not make any changes',
action='store_true')
return parser
def main():
args = _create_parser().parse_args(sys.argv)
print(args)
if __name__ == "__main__":
main()
| import argparse
import sys
def _create_parser():
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(title='Commands',
description='Actions to perform',
help='Commands')
# 'Delete' command
parser_delete = subparsers.add_parser('delete')
# List of directories from which to delete duplicates
parser.add_argument('dirs',
metavar='Directory List',
nargs='*',
help='Path(s) to directories to search, separated by space',
action='store')
# Optional recursive delete
parser.add_argument('-r',
'--recursive',
metavar='Directory List',
type=list,
nargs='*',
help='Enable recursive search for all provided path(s).'
'If you only want to recursively search some path(s), but not all,'
'include them here',
action='store')
# Optional dry run (prints duplicates but doesn't delete them)
parser.add_argument('-dry',
'--dry',
help='Executes as dry run; prints duplicate files but does not make any changes',
action='store_true')
return parser
def main():
args = _create_parser().parse_args(sys.argv)
print(args)
if __name__ == "__main__":
main()
| en | 0.987338 | # 'Delete' command # List of directories from which to delete duplicates # Optional recursive delete # Optional dry run (prints duplicates but doesn't delete them) | 3.500495 | 4 |
main.py | Nicolinho/ACC | 2 | 6615323 | import numpy as np
import torch
import gym
import argparse
import os
import copy
from pathlib import Path
import time
try:
from torch.utils.tensorboard import SummaryWriter
TB_AVAILABLE = True
except ImportError:
TB_AVAILABLE = False
from tqc import structures, DEVICE
from tqc.trainer import Trainer
from tqc.structures import Actor, Critic, RescaleAction
from tqc.functions import eval_policy
EPISODE_LENGTH = 1000
def rl(args, results_dir, models_dir, prefix):
print(' ' * 10 + 'Options')
for k, v in vars(args).items():
print(' ' * 10 + k + ': ' + str(v))
use_acc = args.use_acc
# remove TimeLimit
env = gym.make(args.env).unwrapped
eval_env = gym.make(args.env).unwrapped
# policy outputs values in [-1,1], this is rescaled to actual action space
# which is [-1,1] for the gym cont. contr. envs except humanoid: [-0.4, 0.4]
env = RescaleAction(env, -1., 1.)
eval_env = RescaleAction(eval_env, -1., 1.)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
env.seed(args.seed)
eval_env.seed(10 + args.seed)
torch.manual_seed(args.seed)
np.random.seed(args.seed)
replay_buffer = structures.ReplayBuffer(state_dim, action_dim)
actor = Actor(state_dim, action_dim).to(DEVICE)
critic = Critic(state_dim, action_dim, args.n_quantiles, args.n_nets).to(DEVICE)
critic_target = copy.deepcopy(critic)
top_quantiles_to_drop = args.top_quantiles_to_drop_per_net * args.n_nets
if len(prefix) == 0:
file_name = f"{args.env}_{args.seed}"
else:
file_name = f"{prefix}_{args.env}_{args.seed}"
if TB_AVAILABLE:
writer = SummaryWriter(results_dir / file_name)
else:
class DummyWriter():
def add_scalar(self, *args, **kwargs):
pass
writer = DummyWriter()
trainer = Trainer(actor=actor,
critic=critic,
critic_target=critic_target,
top_quantiles_to_drop=top_quantiles_to_drop,
discount=args.discount,
tau=args.tau,
target_entropy=-np.prod(env.action_space.shape).item(),
use_acc=args.use_acc,
lr_dropped_quantiles=args.lr_dropped_quantiles,
adjusted_dropped_quantiles_init=args.adjusted_dropped_quantiles_init,
adjusted_dropped_quantiles_max=args.adjusted_dropped_quantiles_max,
diff_ma_coef=args.diff_ma_coef,
num_critic_updates=args.num_critic_updates,
writer=writer)
evaluations = []
state, done = env.reset(), False
episode_return, last_episode_return = 0, 0
episode_timesteps = 0
episode_num = 0
start_time = time.time()
if use_acc:
reward_list = []
start_ptr = replay_buffer.ptr
ptr_list = []
disc_return = []
time_since_beta_update = 0
do_beta_update = False
actor.train()
for t in range(int(args.max_timesteps)):
action = actor.select_action(state)
next_state, reward, done, _ = env.step(action)
episode_timesteps += 1
if use_acc:
reward_list.append(reward)
time_since_beta_update += 1
replay_buffer.add(state.astype('float32'), action, next_state, reward, done)
state = next_state
episode_return += reward
if done or episode_timesteps >= EPISODE_LENGTH:
if use_acc:
ptr_list.append([start_ptr, replay_buffer.ptr])
start_ptr = replay_buffer.ptr
if t > 1:
for i in range(episode_timesteps):
disc_return.append(
np.sum(np.array(reward_list)[i:] * (args.discount ** np.arange(0, episode_timesteps - i))))
if time_since_beta_update >= args.beta_udate_rate and t >= args.init_num_steps_before_beta_updates:
do_beta_update = True
reward_list = []
# Train agent after collecting sufficient data
if t >= args.init_expl_steps:
if use_acc and do_beta_update:
trainer.train(replay_buffer, args.batch_size, ptr_list, disc_return, do_beta_update)
do_beta_update= False
for ii, ptr_pair in enumerate(copy.deepcopy(ptr_list)):
if (ptr_pair[0] < replay_buffer.ptr - args.size_limit_beta_update_batch):
disc_return = disc_return[ptr_pair[1] - ptr_pair[0]:]
ptr_list.pop(0)
elif (ptr_pair[0] > replay_buffer.ptr and
replay_buffer.max_size - ptr_pair[0] + replay_buffer.ptr > args.size_limit_beta_update_batch):
if ptr_pair[1] > ptr_pair[0]:
disc_return = disc_return[ptr_pair[1] - ptr_pair[0]:]
else:
disc_return = disc_return[replay_buffer.max_size - ptr_pair[0] + ptr_pair[1]:]
ptr_list.pop(0)
else:
break
time_since_beta_update = 0
else:
trainer.train(replay_buffer, args.batch_size)
if done or episode_timesteps >= EPISODE_LENGTH:
# +1 to account for 0 indexing. +0 on ep_timesteps since it will increment +1 even if done=True
print(f"Seed: {args.seed} Total T: {t + 1} Episode Num: {episode_num + 1} Episode T: {episode_timesteps} Reward: {episode_return:.1f}")
# Reset environment
state, done = env.reset(), False
last_episode_return = episode_return
episode_return = 0
episode_timesteps = 0
episode_num += 1
# Evaluate episode
if (t + 1) % args.eval_freq == 0:
file_name = f"{prefix}_{args.env}_{args.seed}"
avg_reward = eval_policy(actor, eval_env, EPISODE_LENGTH)
evaluations.append(avg_reward)
np.save(results_dir / file_name, evaluations)
writer.add_scalar('evaluator_return', evaluations[-1], t)
print( f"EVALUATION: {results_dir.parts[-1] + '/' + file_name} | Seed: {args.seed} Total T: {t + 1} Reward: {evaluations[-1]:.1f}")
if args.save_model: trainer.save(models_dir / file_name)
if t % 1000 == 0 and t > 0:
writer.add_scalar('exploration_return', last_episode_return, t)
if t % 5000 == 0 and t > 0:
if t >=10000:
writer.add_scalar('fps', 5000 / round(time.time() - start_time), t)
start_time = time.time()
if __name__ == "__main__":
def str2bool(v):
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
parser = argparse.ArgumentParser()
parser.add_argument("--env", default="HalfCheetah-v3") # OpenAI gym environment name
parser.add_argument("--eval_freq", default=5e3, type=int) # How often (time steps) we evaluate
parser.add_argument("--max_timesteps", default=1e6, type=int) # Max time steps to run environment
parser.add_argument("--init_expl_steps", default=5000, type=int) # num of exploration steps before training starts
parser.add_argument("--seed", default=0, type=int) # random seed
parser.add_argument("--n_quantiles", default=25, type=int) # number of quantiles for TQC
parser.add_argument("--use_acc", default=True, type=str2bool) # if acc for automatic tuning of beta shall be used, o/w top_quantiles_to_drop_per_net will be used
parser.add_argument("--top_quantiles_to_drop_per_net",
default=2, type=int) # how many quantiles to drop per net. Parameter has no effect if: use_acc = True
parser.add_argument("--beta_udate_rate", default=1000, type=int)# num of steps between beta/dropped_quantiles updates
parser.add_argument("--init_num_steps_before_beta_updates",
default=25000, type=int) # num steps before updates to dropped_quantiles are started
parser.add_argument("--size_limit_beta_update_batch",
default=5000, type=int) # size of most recent state-action pairs stored for dropped_quantiles updates
parser.add_argument("--lr_dropped_quantiles",
default=0.1, type=float) # learning rate for dropped_quantiles
parser.add_argument("--adjusted_dropped_quantiles_init",
default=2.5, type=float) # initial value of dropped_quantiles
parser.add_argument("--adjusted_dropped_quantiles_max",
default=5.0, type=float) # maximal value for dropped_quantiles
parser.add_argument("--diff_ma_coef", default=0.05, type=float) # moving average param. for normalization of dropped_quantiles loss
parser.add_argument("--num_critic_updates", default=1, type=int) # number of critic updates per environment step
parser.add_argument("--n_nets", default=5, type=int) # number of critic networks
parser.add_argument("--batch_size", default=256, type=int) # Batch size for both actor and critic
parser.add_argument("--discount", default=0.99, type=float) # Discount factor
parser.add_argument("--tau", default=0.005, type=float) # Target network update rate
parser.add_argument("--log_dir", default='results') # results directory
parser.add_argument("--exp_name", default='eval_run') # name of experiment
parser.add_argument("--prefix", default='') # optional prefix to the name of the experiments
parser.add_argument("--save_model", default=True, type=str2bool) # if the model weights shall be saved
args = parser.parse_args()
log_dir = Path(args.log_dir)
results_dir = log_dir / args.exp_name
models_dir = results_dir / 'models'
if not os.path.exists(results_dir):
os.makedirs(results_dir)
if args.save_model and not os.path.exists(models_dir):
os.makedirs(models_dir)
rl(args, results_dir, models_dir, args.prefix)
| import numpy as np
import torch
import gym
import argparse
import os
import copy
from pathlib import Path
import time
try:
from torch.utils.tensorboard import SummaryWriter
TB_AVAILABLE = True
except ImportError:
TB_AVAILABLE = False
from tqc import structures, DEVICE
from tqc.trainer import Trainer
from tqc.structures import Actor, Critic, RescaleAction
from tqc.functions import eval_policy
EPISODE_LENGTH = 1000
def rl(args, results_dir, models_dir, prefix):
print(' ' * 10 + 'Options')
for k, v in vars(args).items():
print(' ' * 10 + k + ': ' + str(v))
use_acc = args.use_acc
# remove TimeLimit
env = gym.make(args.env).unwrapped
eval_env = gym.make(args.env).unwrapped
# policy outputs values in [-1,1], this is rescaled to actual action space
# which is [-1,1] for the gym cont. contr. envs except humanoid: [-0.4, 0.4]
env = RescaleAction(env, -1., 1.)
eval_env = RescaleAction(eval_env, -1., 1.)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
env.seed(args.seed)
eval_env.seed(10 + args.seed)
torch.manual_seed(args.seed)
np.random.seed(args.seed)
replay_buffer = structures.ReplayBuffer(state_dim, action_dim)
actor = Actor(state_dim, action_dim).to(DEVICE)
critic = Critic(state_dim, action_dim, args.n_quantiles, args.n_nets).to(DEVICE)
critic_target = copy.deepcopy(critic)
top_quantiles_to_drop = args.top_quantiles_to_drop_per_net * args.n_nets
if len(prefix) == 0:
file_name = f"{args.env}_{args.seed}"
else:
file_name = f"{prefix}_{args.env}_{args.seed}"
if TB_AVAILABLE:
writer = SummaryWriter(results_dir / file_name)
else:
class DummyWriter():
def add_scalar(self, *args, **kwargs):
pass
writer = DummyWriter()
trainer = Trainer(actor=actor,
critic=critic,
critic_target=critic_target,
top_quantiles_to_drop=top_quantiles_to_drop,
discount=args.discount,
tau=args.tau,
target_entropy=-np.prod(env.action_space.shape).item(),
use_acc=args.use_acc,
lr_dropped_quantiles=args.lr_dropped_quantiles,
adjusted_dropped_quantiles_init=args.adjusted_dropped_quantiles_init,
adjusted_dropped_quantiles_max=args.adjusted_dropped_quantiles_max,
diff_ma_coef=args.diff_ma_coef,
num_critic_updates=args.num_critic_updates,
writer=writer)
evaluations = []
state, done = env.reset(), False
episode_return, last_episode_return = 0, 0
episode_timesteps = 0
episode_num = 0
start_time = time.time()
if use_acc:
reward_list = []
start_ptr = replay_buffer.ptr
ptr_list = []
disc_return = []
time_since_beta_update = 0
do_beta_update = False
actor.train()
for t in range(int(args.max_timesteps)):
action = actor.select_action(state)
next_state, reward, done, _ = env.step(action)
episode_timesteps += 1
if use_acc:
reward_list.append(reward)
time_since_beta_update += 1
replay_buffer.add(state.astype('float32'), action, next_state, reward, done)
state = next_state
episode_return += reward
if done or episode_timesteps >= EPISODE_LENGTH:
if use_acc:
ptr_list.append([start_ptr, replay_buffer.ptr])
start_ptr = replay_buffer.ptr
if t > 1:
for i in range(episode_timesteps):
disc_return.append(
np.sum(np.array(reward_list)[i:] * (args.discount ** np.arange(0, episode_timesteps - i))))
if time_since_beta_update >= args.beta_udate_rate and t >= args.init_num_steps_before_beta_updates:
do_beta_update = True
reward_list = []
# Train agent after collecting sufficient data
if t >= args.init_expl_steps:
if use_acc and do_beta_update:
trainer.train(replay_buffer, args.batch_size, ptr_list, disc_return, do_beta_update)
do_beta_update= False
for ii, ptr_pair in enumerate(copy.deepcopy(ptr_list)):
if (ptr_pair[0] < replay_buffer.ptr - args.size_limit_beta_update_batch):
disc_return = disc_return[ptr_pair[1] - ptr_pair[0]:]
ptr_list.pop(0)
elif (ptr_pair[0] > replay_buffer.ptr and
replay_buffer.max_size - ptr_pair[0] + replay_buffer.ptr > args.size_limit_beta_update_batch):
if ptr_pair[1] > ptr_pair[0]:
disc_return = disc_return[ptr_pair[1] - ptr_pair[0]:]
else:
disc_return = disc_return[replay_buffer.max_size - ptr_pair[0] + ptr_pair[1]:]
ptr_list.pop(0)
else:
break
time_since_beta_update = 0
else:
trainer.train(replay_buffer, args.batch_size)
if done or episode_timesteps >= EPISODE_LENGTH:
# +1 to account for 0 indexing. +0 on ep_timesteps since it will increment +1 even if done=True
print(f"Seed: {args.seed} Total T: {t + 1} Episode Num: {episode_num + 1} Episode T: {episode_timesteps} Reward: {episode_return:.1f}")
# Reset environment
state, done = env.reset(), False
last_episode_return = episode_return
episode_return = 0
episode_timesteps = 0
episode_num += 1
# Evaluate episode
if (t + 1) % args.eval_freq == 0:
file_name = f"{prefix}_{args.env}_{args.seed}"
avg_reward = eval_policy(actor, eval_env, EPISODE_LENGTH)
evaluations.append(avg_reward)
np.save(results_dir / file_name, evaluations)
writer.add_scalar('evaluator_return', evaluations[-1], t)
print( f"EVALUATION: {results_dir.parts[-1] + '/' + file_name} | Seed: {args.seed} Total T: {t + 1} Reward: {evaluations[-1]:.1f}")
if args.save_model: trainer.save(models_dir / file_name)
if t % 1000 == 0 and t > 0:
writer.add_scalar('exploration_return', last_episode_return, t)
if t % 5000 == 0 and t > 0:
if t >=10000:
writer.add_scalar('fps', 5000 / round(time.time() - start_time), t)
start_time = time.time()
if __name__ == "__main__":
def str2bool(v):
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
parser = argparse.ArgumentParser()
parser.add_argument("--env", default="HalfCheetah-v3") # OpenAI gym environment name
parser.add_argument("--eval_freq", default=5e3, type=int) # How often (time steps) we evaluate
parser.add_argument("--max_timesteps", default=1e6, type=int) # Max time steps to run environment
parser.add_argument("--init_expl_steps", default=5000, type=int) # num of exploration steps before training starts
parser.add_argument("--seed", default=0, type=int) # random seed
parser.add_argument("--n_quantiles", default=25, type=int) # number of quantiles for TQC
parser.add_argument("--use_acc", default=True, type=str2bool) # if acc for automatic tuning of beta shall be used, o/w top_quantiles_to_drop_per_net will be used
parser.add_argument("--top_quantiles_to_drop_per_net",
default=2, type=int) # how many quantiles to drop per net. Parameter has no effect if: use_acc = True
parser.add_argument("--beta_udate_rate", default=1000, type=int)# num of steps between beta/dropped_quantiles updates
parser.add_argument("--init_num_steps_before_beta_updates",
default=25000, type=int) # num steps before updates to dropped_quantiles are started
parser.add_argument("--size_limit_beta_update_batch",
default=5000, type=int) # size of most recent state-action pairs stored for dropped_quantiles updates
parser.add_argument("--lr_dropped_quantiles",
default=0.1, type=float) # learning rate for dropped_quantiles
parser.add_argument("--adjusted_dropped_quantiles_init",
default=2.5, type=float) # initial value of dropped_quantiles
parser.add_argument("--adjusted_dropped_quantiles_max",
default=5.0, type=float) # maximal value for dropped_quantiles
parser.add_argument("--diff_ma_coef", default=0.05, type=float) # moving average param. for normalization of dropped_quantiles loss
parser.add_argument("--num_critic_updates", default=1, type=int) # number of critic updates per environment step
parser.add_argument("--n_nets", default=5, type=int) # number of critic networks
parser.add_argument("--batch_size", default=256, type=int) # Batch size for both actor and critic
parser.add_argument("--discount", default=0.99, type=float) # Discount factor
parser.add_argument("--tau", default=0.005, type=float) # Target network update rate
parser.add_argument("--log_dir", default='results') # results directory
parser.add_argument("--exp_name", default='eval_run') # name of experiment
parser.add_argument("--prefix", default='') # optional prefix to the name of the experiments
parser.add_argument("--save_model", default=True, type=str2bool) # if the model weights shall be saved
args = parser.parse_args()
log_dir = Path(args.log_dir)
results_dir = log_dir / args.exp_name
models_dir = results_dir / 'models'
if not os.path.exists(results_dir):
os.makedirs(results_dir)
if args.save_model and not os.path.exists(models_dir):
os.makedirs(models_dir)
rl(args, results_dir, models_dir, args.prefix)
| en | 0.77046 | # remove TimeLimit # policy outputs values in [-1,1], this is rescaled to actual action space # which is [-1,1] for the gym cont. contr. envs except humanoid: [-0.4, 0.4] # Train agent after collecting sufficient data # +1 to account for 0 indexing. +0 on ep_timesteps since it will increment +1 even if done=True # Reset environment # Evaluate episode # OpenAI gym environment name # How often (time steps) we evaluate # Max time steps to run environment # num of exploration steps before training starts # random seed # number of quantiles for TQC # if acc for automatic tuning of beta shall be used, o/w top_quantiles_to_drop_per_net will be used # how many quantiles to drop per net. Parameter has no effect if: use_acc = True # num of steps between beta/dropped_quantiles updates # num steps before updates to dropped_quantiles are started # size of most recent state-action pairs stored for dropped_quantiles updates # learning rate for dropped_quantiles # initial value of dropped_quantiles # maximal value for dropped_quantiles # moving average param. for normalization of dropped_quantiles loss # number of critic updates per environment step # number of critic networks # Batch size for both actor and critic # Discount factor # Target network update rate # results directory # name of experiment # optional prefix to the name of the experiments # if the model weights shall be saved | 2.043513 | 2 |
maintemplate.py | Kasimir123/agarclone | 0 | 6615324 | import random
import pygame
from pygame import *
# Init pygame
pygame.init()
clock = pygame.time.Clock()
# Defines Constants
# Width and Height of Screen
W, H = 1000, 500
# Frames Per Second
FPS = 30
# Check if the game is running or not
IS_RUNNING = True
# Color Constants
WHITE = (255,255,255)
BLACK = (0,0,0)
# Direction Constants
UP = 'up()'
DOWN = 'down()'
LEFT = 'left()'
RIGHT = 'right()'
# Open game window
size = (W, H)
screen = pygame.display.set_mode(size)
pygame.display.set_caption("NAME")
background = pygame.surface.Surface(size).convert()
background.fill(BLACK)
# Check for input
def input():
for event in pygame.event.get():
if event.type == QUIT:
exit()
keys = pygame.key.get_pressed()
if keys[pygame.K_UP]:
pass
if keys[pygame.K_DOWN]:
pass
if keys[pygame.K_LEFT]:
pass
if keys[pygame.K_RIGHT]:
pass
# Main Loop Method
def run():
time_passed = clock.tick(FPS) / 1000.0
input()
drawScreen()
# Drawing method
def drawScreen():
screen.blit(background, (0 ,0))
pygame.display.update()
# Main Loop
while IS_RUNNING:
run() | import random
import pygame
from pygame import *
# Init pygame
pygame.init()
clock = pygame.time.Clock()
# Defines Constants
# Width and Height of Screen
W, H = 1000, 500
# Frames Per Second
FPS = 30
# Check if the game is running or not
IS_RUNNING = True
# Color Constants
WHITE = (255,255,255)
BLACK = (0,0,0)
# Direction Constants
UP = 'up()'
DOWN = 'down()'
LEFT = 'left()'
RIGHT = 'right()'
# Open game window
size = (W, H)
screen = pygame.display.set_mode(size)
pygame.display.set_caption("NAME")
background = pygame.surface.Surface(size).convert()
background.fill(BLACK)
# Check for input
def input():
for event in pygame.event.get():
if event.type == QUIT:
exit()
keys = pygame.key.get_pressed()
if keys[pygame.K_UP]:
pass
if keys[pygame.K_DOWN]:
pass
if keys[pygame.K_LEFT]:
pass
if keys[pygame.K_RIGHT]:
pass
# Main Loop Method
def run():
time_passed = clock.tick(FPS) / 1000.0
input()
drawScreen()
# Drawing method
def drawScreen():
screen.blit(background, (0 ,0))
pygame.display.update()
# Main Loop
while IS_RUNNING:
run() | en | 0.695115 | # Init pygame # Defines Constants # Width and Height of Screen # Frames Per Second # Check if the game is running or not # Color Constants # Direction Constants # Open game window # Check for input # Main Loop Method # Drawing method # Main Loop | 3.484036 | 3 |
InvenTree/order/api.py | cednore/InvenTree | 1 | 6615325 | <gh_stars>1-10
"""
JSON API for the Order app
"""
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.urls import include, path, re_path
from django.db.models import Q, F
from django_filters import rest_framework as rest_filters
from rest_framework import generics
from rest_framework import filters, status
from rest_framework.response import Response
from company.models import SupplierPart
from InvenTree.filters import InvenTreeOrderingFilter
from InvenTree.helpers import str2bool, DownloadFile
from InvenTree.api import AttachmentMixin
from InvenTree.status_codes import PurchaseOrderStatus, SalesOrderStatus
from order.admin import PurchaseOrderLineItemResource
import order.models as models
import order.serializers as serializers
from part.models import Part
from users.models import Owner
class GeneralExtraLineList:
"""
General template for ExtraLine API classes
"""
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'order',
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
filters.OrderingFilter
]
ordering_fields = [
'title',
'quantity',
'note',
'reference',
]
search_fields = [
'title',
'quantity',
'note',
'reference'
]
filter_fields = [
'order',
]
class PurchaseOrderFilter(rest_filters.FilterSet):
"""
Custom API filters for the PurchaseOrderList endpoint
"""
assigned_to_me = rest_filters.BooleanFilter(label='assigned_to_me', method='filter_assigned_to_me')
def filter_assigned_to_me(self, queryset, name, value):
"""
Filter by orders which are assigned to the current user
"""
value = str2bool(value)
# Work out who "me" is!
owners = Owner.get_owners_matching_user(self.request.user)
if value:
queryset = queryset.filter(responsible__in=owners)
else:
queryset = queryset.exclude(responsible__in=owners)
return queryset
class Meta:
model = models.PurchaseOrder
fields = [
'supplier',
]
class PurchaseOrderList(generics.ListCreateAPIView):
""" API endpoint for accessing a list of PurchaseOrder objects
- GET: Return list of PurchaseOrder objects (with filters)
- POST: Create a new PurchaseOrder object
"""
queryset = models.PurchaseOrder.objects.all()
serializer_class = serializers.PurchaseOrderSerializer
filterset_class = PurchaseOrderFilter
def create(self, request, *args, **kwargs):
"""
Save user information on create
"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
item = serializer.save()
item.created_by = request.user
item.save()
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def get_serializer(self, *args, **kwargs):
try:
kwargs['supplier_detail'] = str2bool(self.request.query_params.get('supplier_detail', False))
except AttributeError:
pass
# Ensure the request context is passed through
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'supplier',
'lines',
)
queryset = serializers.PurchaseOrderSerializer.annotate_queryset(queryset)
return queryset
def filter_queryset(self, queryset):
# Perform basic filtering
queryset = super().filter_queryset(queryset)
params = self.request.query_params
# Filter by 'outstanding' status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
queryset = queryset.filter(status__in=PurchaseOrderStatus.OPEN)
else:
queryset = queryset.exclude(status__in=PurchaseOrderStatus.OPEN)
# Filter by 'overdue' status
overdue = params.get('overdue', None)
if overdue is not None:
overdue = str2bool(overdue)
if overdue:
queryset = queryset.filter(models.PurchaseOrder.OVERDUE_FILTER)
else:
queryset = queryset.exclude(models.PurchaseOrder.OVERDUE_FILTER)
# Special filtering for 'status' field
status = params.get('status', None)
if status is not None:
# First attempt to filter by integer value
queryset = queryset.filter(status=status)
# Attempt to filter by part
part = params.get('part', None)
if part is not None:
try:
part = Part.objects.get(pk=part)
queryset = queryset.filter(id__in=[p.id for p in part.purchase_orders()])
except (Part.DoesNotExist, ValueError):
pass
# Attempt to filter by supplier part
supplier_part = params.get('supplier_part', None)
if supplier_part is not None:
try:
supplier_part = SupplierPart.objects.get(pk=supplier_part)
queryset = queryset.filter(id__in=[p.id for p in supplier_part.purchase_orders()])
except (ValueError, SupplierPart.DoesNotExist):
pass
# Filter by 'date range'
min_date = params.get('min_date', None)
max_date = params.get('max_date', None)
if min_date is not None and max_date is not None:
queryset = models.PurchaseOrder.filterByDate(queryset, min_date, max_date)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter,
]
ordering_field_aliases = {
'reference': ['reference_int', 'reference'],
}
search_fields = [
'reference',
'supplier__name',
'supplier_reference',
'description',
]
ordering_fields = [
'creation_date',
'reference',
'supplier__name',
'target_date',
'line_items',
'status',
]
ordering = '-creation_date'
class PurchaseOrderDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a PurchaseOrder object """
queryset = models.PurchaseOrder.objects.all()
serializer_class = serializers.PurchaseOrderSerializer
def get_serializer(self, *args, **kwargs):
try:
kwargs['supplier_detail'] = str2bool(self.request.query_params.get('supplier_detail', False))
except AttributeError:
pass
# Ensure the request context is passed through
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'supplier',
'lines',
)
queryset = serializers.PurchaseOrderSerializer.annotate_queryset(queryset)
return queryset
class PurchaseOrderReceive(generics.CreateAPIView):
"""
API endpoint to receive stock items against a purchase order.
- The purchase order is specified in the URL.
- Items to receive are specified as a list called "items" with the following options:
- supplier_part: pk value of the supplier part
- quantity: quantity to receive
- status: stock item status
- location: destination for stock item (optional)
- A global location can also be specified
"""
queryset = models.PurchaseOrderLineItem.objects.none()
serializer_class = serializers.PurchaseOrderReceiveSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
# Pass the purchase order through to the serializer for validation
try:
context['order'] = models.PurchaseOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
context['request'] = self.request
return context
class PurchaseOrderLineItemFilter(rest_filters.FilterSet):
"""
Custom filters for the PurchaseOrderLineItemList endpoint
"""
class Meta:
model = models.PurchaseOrderLineItem
fields = [
'order',
'part',
]
pending = rest_filters.BooleanFilter(label='pending', method='filter_pending')
def filter_pending(self, queryset, name, value):
"""
Filter by "pending" status (order status = pending)
"""
value = str2bool(value)
if value:
queryset = queryset.filter(order__status__in=PurchaseOrderStatus.OPEN)
else:
queryset = queryset.exclude(order__status__in=PurchaseOrderStatus.OPEN)
return queryset
order_status = rest_filters.NumberFilter(label='order_status', field_name='order__status')
received = rest_filters.BooleanFilter(label='received', method='filter_received')
def filter_received(self, queryset, name, value):
"""
Filter by lines which are "received" (or "not" received)
A line is considered "received" when received >= quantity
"""
value = str2bool(value)
q = Q(received__gte=F('quantity'))
if value:
queryset = queryset.filter(q)
else:
# Only count "pending" orders
queryset = queryset.exclude(q).filter(order__status__in=PurchaseOrderStatus.OPEN)
return queryset
class PurchaseOrderLineItemList(generics.ListCreateAPIView):
""" API endpoint for accessing a list of PurchaseOrderLineItem objects
- GET: Return a list of PurchaseOrder Line Item objects
- POST: Create a new PurchaseOrderLineItem object
"""
queryset = models.PurchaseOrderLineItem.objects.all()
serializer_class = serializers.PurchaseOrderLineItemSerializer
filterset_class = PurchaseOrderLineItemFilter
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = serializers.PurchaseOrderLineItemSerializer.annotate_queryset(queryset)
return queryset
def get_serializer(self, *args, **kwargs):
try:
kwargs['part_detail'] = str2bool(self.request.query_params.get('part_detail', False))
kwargs['order_detail'] = str2bool(self.request.query_params.get('order_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def filter_queryset(self, queryset):
"""
Additional filtering options
"""
params = self.request.query_params
queryset = super().filter_queryset(queryset)
base_part = params.get('base_part', None)
if base_part:
try:
base_part = Part.objects.get(pk=base_part)
queryset = queryset.filter(part__part=base_part)
except (ValueError, Part.DoesNotExist):
pass
return queryset
def list(self, request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
# Check if we wish to export the queried data to a file
export_format = request.query_params.get('export', None)
if export_format:
export_format = str(export_format).strip().lower()
if export_format in ['csv', 'tsv', 'xls', 'xlsx']:
dataset = PurchaseOrderLineItemResource().export(queryset=queryset)
filedata = dataset.export(export_format)
filename = f"InvenTree_PurchaseOrderData.{export_format}"
return DownloadFile(filedata, filename)
page = self.paginate_queryset(queryset)
if page is not None:
serializer = self.get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
serializer = self.get_serializer(queryset, many=True)
return Response(serializer.data)
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter
]
ordering_field_aliases = {
'MPN': 'part__manufacturer_part__MPN',
'SKU': 'part__SKU',
'part_name': 'part__part__name',
}
ordering_fields = [
'MPN',
'part_name',
'purchase_price',
'quantity',
'received',
'reference',
'SKU',
'total_price',
'target_date',
]
search_fields = [
'part__part__name',
'part__part__description',
'part__MPN',
'part__SKU',
'reference',
]
class PurchaseOrderLineItemDetail(generics.RetrieveUpdateDestroyAPIView):
"""
Detail API endpoint for PurchaseOrderLineItem object
"""
queryset = models.PurchaseOrderLineItem.objects.all()
serializer_class = serializers.PurchaseOrderLineItemSerializer
def get_queryset(self):
queryset = super().get_queryset()
queryset = serializers.PurchaseOrderLineItemSerializer.annotate_queryset(queryset)
return queryset
class PurchaseOrderExtraLineList(GeneralExtraLineList, generics.ListCreateAPIView):
"""
API endpoint for accessing a list of PurchaseOrderExtraLine objects.
"""
queryset = models.PurchaseOrderExtraLine.objects.all()
serializer_class = serializers.PurchaseOrderExtraLineSerializer
class PurchaseOrderExtraLineDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a PurchaseOrderExtraLine object """
queryset = models.PurchaseOrderExtraLine.objects.all()
serializer_class = serializers.PurchaseOrderExtraLineSerializer
class SalesOrderAttachmentList(generics.ListCreateAPIView, AttachmentMixin):
"""
API endpoint for listing (and creating) a SalesOrderAttachment (file upload)
"""
queryset = models.SalesOrderAttachment.objects.all()
serializer_class = serializers.SalesOrderAttachmentSerializer
filter_backends = [
rest_filters.DjangoFilterBackend,
]
filter_fields = [
'order',
]
class SalesOrderAttachmentDetail(generics.RetrieveUpdateDestroyAPIView, AttachmentMixin):
"""
Detail endpoint for SalesOrderAttachment
"""
queryset = models.SalesOrderAttachment.objects.all()
serializer_class = serializers.SalesOrderAttachmentSerializer
class SalesOrderList(generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrder objects.
- GET: Return list of SalesOrder objects (with filters)
- POST: Create a new SalesOrder
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderSerializer
def create(self, request, *args, **kwargs):
"""
Save user information on create
"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
item = serializer.save()
item.created_by = request.user
item.save()
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def get_serializer(self, *args, **kwargs):
try:
kwargs['customer_detail'] = str2bool(self.request.query_params.get('customer_detail', False))
except AttributeError:
pass
# Ensure the context is passed through to the serializer
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'customer',
'lines'
)
queryset = serializers.SalesOrderSerializer.annotate_queryset(queryset)
return queryset
def filter_queryset(self, queryset):
"""
Perform custom filtering operations on the SalesOrder queryset.
"""
queryset = super().filter_queryset(queryset)
params = self.request.query_params
# Filter by 'outstanding' status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
queryset = queryset.filter(status__in=models.SalesOrderStatus.OPEN)
else:
queryset = queryset.exclude(status__in=models.SalesOrderStatus.OPEN)
# Filter by 'overdue' status
overdue = params.get('overdue', None)
if overdue is not None:
overdue = str2bool(overdue)
if overdue:
queryset = queryset.filter(models.SalesOrder.OVERDUE_FILTER)
else:
queryset = queryset.exclude(models.SalesOrder.OVERDUE_FILTER)
status = params.get('status', None)
if status is not None:
queryset = queryset.filter(status=status)
# Filter by "Part"
# Only return SalesOrder which have LineItem referencing the part
part = params.get('part', None)
if part is not None:
try:
part = Part.objects.get(pk=part)
queryset = queryset.filter(id__in=[so.id for so in part.sales_orders()])
except (Part.DoesNotExist, ValueError):
pass
# Filter by 'date range'
min_date = params.get('min_date', None)
max_date = params.get('max_date', None)
if min_date is not None and max_date is not None:
queryset = models.SalesOrder.filterByDate(queryset, min_date, max_date)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter,
]
ordering_field_aliases = {
'reference': ['reference_int', 'reference'],
}
filter_fields = [
'customer',
]
ordering_fields = [
'creation_date',
'reference',
'customer__name',
'customer_reference',
'status',
'target_date',
'line_items',
'shipment_date',
]
search_fields = [
'customer__name',
'reference',
'description',
'customer_reference',
]
ordering = '-creation_date'
class SalesOrderDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API endpoint for detail view of a SalesOrder object.
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderSerializer
def get_serializer(self, *args, **kwargs):
try:
kwargs['customer_detail'] = str2bool(self.request.query_params.get('customer_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related('customer', 'lines')
queryset = serializers.SalesOrderSerializer.annotate_queryset(queryset)
return queryset
class SalesOrderLineItemFilter(rest_filters.FilterSet):
"""
Custom filters for SalesOrderLineItemList endpoint
"""
class Meta:
model = models.SalesOrderLineItem
fields = [
'order',
'part',
]
completed = rest_filters.BooleanFilter(label='completed', method='filter_completed')
def filter_completed(self, queryset, name, value):
"""
Filter by lines which are "completed"
A line is completed when shipped >= quantity
"""
value = str2bool(value)
q = Q(shipped__gte=F('quantity'))
if value:
queryset = queryset.filter(q)
else:
queryset = queryset.exclude(q)
return queryset
class SalesOrderLineItemList(generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrderLineItem objects.
"""
queryset = models.SalesOrderLineItem.objects.all()
serializer_class = serializers.SalesOrderLineItemSerializer
filterset_class = SalesOrderLineItemFilter
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['part_detail'] = str2bool(params.get('part_detail', False))
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
kwargs['allocations'] = str2bool(params.get('allocations', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'part',
'part__stock_items',
'allocations',
'allocations__item__location',
'order',
'order__stock_items',
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
filters.OrderingFilter
]
ordering_fields = [
'part__name',
'quantity',
'reference',
'target_date',
]
search_fields = [
'part__name',
'quantity',
'reference',
]
filter_fields = [
'order',
'part',
]
class SalesOrderExtraLineList(GeneralExtraLineList, generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrderExtraLine objects.
"""
queryset = models.SalesOrderExtraLine.objects.all()
serializer_class = serializers.SalesOrderExtraLineSerializer
class SalesOrderExtraLineDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a SalesOrderExtraLine object """
queryset = models.SalesOrderExtraLine.objects.all()
serializer_class = serializers.SalesOrderExtraLineSerializer
class SalesOrderLineItemDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a SalesOrderLineItem object """
queryset = models.SalesOrderLineItem.objects.all()
serializer_class = serializers.SalesOrderLineItemSerializer
class SalesOrderComplete(generics.CreateAPIView):
"""
API endpoint for manually marking a SalesOrder as "complete".
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderCompleteSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
ctx['request'] = self.request
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
return ctx
class SalesOrderAllocateSerials(generics.CreateAPIView):
"""
API endpoint to allocation stock items against a SalesOrder,
by specifying serial numbers.
"""
queryset = models.SalesOrder.objects.none()
serializer_class = serializers.SalesOrderSerialAllocationSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
# Pass through the SalesOrder object to the serializer
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
ctx['request'] = self.request
return ctx
class SalesOrderAllocate(generics.CreateAPIView):
"""
API endpoint to allocate stock items against a SalesOrder
- The SalesOrder is specified in the URL
- See the SalesOrderShipmentAllocationSerializer class
"""
queryset = models.SalesOrder.objects.none()
serializer_class = serializers.SalesOrderShipmentAllocationSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
# Pass through the SalesOrder object to the serializer
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
ctx['request'] = self.request
return ctx
class SalesOrderAllocationDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API endpoint for detali view of a SalesOrderAllocation object
"""
queryset = models.SalesOrderAllocation.objects.all()
serializer_class = serializers.SalesOrderAllocationSerializer
class SalesOrderAllocationList(generics.ListAPIView):
"""
API endpoint for listing SalesOrderAllocation objects
"""
queryset = models.SalesOrderAllocation.objects.all()
serializer_class = serializers.SalesOrderAllocationSerializer
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['part_detail'] = str2bool(params.get('part_detail', False))
kwargs['item_detail'] = str2bool(params.get('item_detail', False))
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
kwargs['location_detail'] = str2bool(params.get('location_detail', False))
kwargs['customer_detail'] = str2bool(params.get('customer_detail', False))
except AttributeError:
pass
return self.serializer_class(*args, **kwargs)
def filter_queryset(self, queryset):
queryset = super().filter_queryset(queryset)
# Filter by order
params = self.request.query_params
# Filter by "part" reference
part = params.get('part', None)
if part is not None:
queryset = queryset.filter(item__part=part)
# Filter by "order" reference
order = params.get('order', None)
if order is not None:
queryset = queryset.filter(line__order=order)
# Filter by "stock item"
item = params.get('item', params.get('stock_item', None))
if item is not None:
queryset = queryset.filter(item=item)
# Filter by "outstanding" order status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
# Filter only "open" orders
# Filter only allocations which have *not* shipped
queryset = queryset.filter(
line__order__status__in=SalesOrderStatus.OPEN,
shipment__shipment_date=None,
)
else:
queryset = queryset.exclude(
line__order__status__in=SalesOrderStatus.OPEN,
shipment__shipment_date=None
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
]
# Default filterable fields
filter_fields = [
]
class SalesOrderShipmentFilter(rest_filters.FilterSet):
"""
Custom filterset for the SalesOrderShipmentList endpoint
"""
shipped = rest_filters.BooleanFilter(label='shipped', method='filter_shipped')
def filter_shipped(self, queryset, name, value):
value = str2bool(value)
if value:
queryset = queryset.exclude(shipment_date=None)
else:
queryset = queryset.filter(shipment_date=None)
return queryset
class Meta:
model = models.SalesOrderShipment
fields = [
'order',
]
class SalesOrderShipmentList(generics.ListCreateAPIView):
"""
API list endpoint for SalesOrderShipment model
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentSerializer
filterset_class = SalesOrderShipmentFilter
filter_backends = [
rest_filters.DjangoFilterBackend,
]
class SalesOrderShipmentDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API detail endpooint for SalesOrderShipment model
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentSerializer
class SalesOrderShipmentComplete(generics.CreateAPIView):
"""
API endpoint for completing (shipping) a SalesOrderShipment
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentCompleteSerializer
def get_serializer_context(self):
"""
Pass the request object to the serializer
"""
ctx = super().get_serializer_context()
ctx['request'] = self.request
try:
ctx['shipment'] = models.SalesOrderShipment.objects.get(
pk=self.kwargs.get('pk', None)
)
except:
pass
return ctx
class PurchaseOrderAttachmentList(generics.ListCreateAPIView, AttachmentMixin):
"""
API endpoint for listing (and creating) a PurchaseOrderAttachment (file upload)
"""
queryset = models.PurchaseOrderAttachment.objects.all()
serializer_class = serializers.PurchaseOrderAttachmentSerializer
filter_backends = [
rest_filters.DjangoFilterBackend,
]
filter_fields = [
'order',
]
class PurchaseOrderAttachmentDetail(generics.RetrieveUpdateDestroyAPIView, AttachmentMixin):
"""
Detail endpoint for a PurchaseOrderAttachment
"""
queryset = models.PurchaseOrderAttachment.objects.all()
serializer_class = serializers.PurchaseOrderAttachmentSerializer
order_api_urls = [
# API endpoints for purchase orders
re_path(r'^po/', include([
# Purchase order attachments
re_path(r'attachment/', include([
path('<int:pk>/', PurchaseOrderAttachmentDetail.as_view(), name='api-po-attachment-detail'),
re_path(r'^.*$', PurchaseOrderAttachmentList.as_view(), name='api-po-attachment-list'),
])),
# Individual purchase order detail URLs
re_path(r'^(?P<pk>\d+)/', include([
re_path(r'^receive/', PurchaseOrderReceive.as_view(), name='api-po-receive'),
re_path(r'.*$', PurchaseOrderDetail.as_view(), name='api-po-detail'),
])),
# Purchase order list
re_path(r'^.*$', PurchaseOrderList.as_view(), name='api-po-list'),
])),
# API endpoints for purchase order line items
re_path(r'^po-line/', include([
path('<int:pk>/', PurchaseOrderLineItemDetail.as_view(), name='api-po-line-detail'),
re_path(r'^.*$', PurchaseOrderLineItemList.as_view(), name='api-po-line-list'),
])),
# API endpoints for purchase order extra line
re_path(r'^po-extra-line/', include([
path('<int:pk>/', PurchaseOrderExtraLineDetail.as_view(), name='api-po-extra-line-detail'),
path('', PurchaseOrderExtraLineList.as_view(), name='api-po-extra-line-list'),
])),
# API endpoints for sales ordesr
re_path(r'^so/', include([
re_path(r'attachment/', include([
path('<int:pk>/', SalesOrderAttachmentDetail.as_view(), name='api-so-attachment-detail'),
re_path(r'^.*$', SalesOrderAttachmentList.as_view(), name='api-so-attachment-list'),
])),
re_path(r'^shipment/', include([
re_path(r'^(?P<pk>\d+)/', include([
path('ship/', SalesOrderShipmentComplete.as_view(), name='api-so-shipment-ship'),
re_path(r'^.*$', SalesOrderShipmentDetail.as_view(), name='api-so-shipment-detail'),
])),
re_path(r'^.*$', SalesOrderShipmentList.as_view(), name='api-so-shipment-list'),
])),
# Sales order detail view
re_path(r'^(?P<pk>\d+)/', include([
re_path(r'^complete/', SalesOrderComplete.as_view(), name='api-so-complete'),
re_path(r'^allocate/', SalesOrderAllocate.as_view(), name='api-so-allocate'),
re_path(r'^allocate-serials/', SalesOrderAllocateSerials.as_view(), name='api-so-allocate-serials'),
re_path(r'^.*$', SalesOrderDetail.as_view(), name='api-so-detail'),
])),
# Sales order list view
re_path(r'^.*$', SalesOrderList.as_view(), name='api-so-list'),
])),
# API endpoints for sales order line items
re_path(r'^so-line/', include([
path('<int:pk>/', SalesOrderLineItemDetail.as_view(), name='api-so-line-detail'),
path('', SalesOrderLineItemList.as_view(), name='api-so-line-list'),
])),
# API endpoints for sales order extra line
re_path(r'^so-extra-line/', include([
path('<int:pk>/', SalesOrderExtraLineDetail.as_view(), name='api-so-extra-line-detail'),
path('', SalesOrderExtraLineList.as_view(), name='api-so-extra-line-list'),
])),
# API endpoints for sales order allocations
re_path(r'^so-allocation/', include([
path('<int:pk>/', SalesOrderAllocationDetail.as_view(), name='api-so-allocation-detail'),
re_path(r'^.*$', SalesOrderAllocationList.as_view(), name='api-so-allocation-list'),
])),
]
| """
JSON API for the Order app
"""
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.urls import include, path, re_path
from django.db.models import Q, F
from django_filters import rest_framework as rest_filters
from rest_framework import generics
from rest_framework import filters, status
from rest_framework.response import Response
from company.models import SupplierPart
from InvenTree.filters import InvenTreeOrderingFilter
from InvenTree.helpers import str2bool, DownloadFile
from InvenTree.api import AttachmentMixin
from InvenTree.status_codes import PurchaseOrderStatus, SalesOrderStatus
from order.admin import PurchaseOrderLineItemResource
import order.models as models
import order.serializers as serializers
from part.models import Part
from users.models import Owner
class GeneralExtraLineList:
"""
General template for ExtraLine API classes
"""
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'order',
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
filters.OrderingFilter
]
ordering_fields = [
'title',
'quantity',
'note',
'reference',
]
search_fields = [
'title',
'quantity',
'note',
'reference'
]
filter_fields = [
'order',
]
class PurchaseOrderFilter(rest_filters.FilterSet):
"""
Custom API filters for the PurchaseOrderList endpoint
"""
assigned_to_me = rest_filters.BooleanFilter(label='assigned_to_me', method='filter_assigned_to_me')
def filter_assigned_to_me(self, queryset, name, value):
"""
Filter by orders which are assigned to the current user
"""
value = str2bool(value)
# Work out who "me" is!
owners = Owner.get_owners_matching_user(self.request.user)
if value:
queryset = queryset.filter(responsible__in=owners)
else:
queryset = queryset.exclude(responsible__in=owners)
return queryset
class Meta:
model = models.PurchaseOrder
fields = [
'supplier',
]
class PurchaseOrderList(generics.ListCreateAPIView):
""" API endpoint for accessing a list of PurchaseOrder objects
- GET: Return list of PurchaseOrder objects (with filters)
- POST: Create a new PurchaseOrder object
"""
queryset = models.PurchaseOrder.objects.all()
serializer_class = serializers.PurchaseOrderSerializer
filterset_class = PurchaseOrderFilter
def create(self, request, *args, **kwargs):
"""
Save user information on create
"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
item = serializer.save()
item.created_by = request.user
item.save()
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def get_serializer(self, *args, **kwargs):
try:
kwargs['supplier_detail'] = str2bool(self.request.query_params.get('supplier_detail', False))
except AttributeError:
pass
# Ensure the request context is passed through
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'supplier',
'lines',
)
queryset = serializers.PurchaseOrderSerializer.annotate_queryset(queryset)
return queryset
def filter_queryset(self, queryset):
# Perform basic filtering
queryset = super().filter_queryset(queryset)
params = self.request.query_params
# Filter by 'outstanding' status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
queryset = queryset.filter(status__in=PurchaseOrderStatus.OPEN)
else:
queryset = queryset.exclude(status__in=PurchaseOrderStatus.OPEN)
# Filter by 'overdue' status
overdue = params.get('overdue', None)
if overdue is not None:
overdue = str2bool(overdue)
if overdue:
queryset = queryset.filter(models.PurchaseOrder.OVERDUE_FILTER)
else:
queryset = queryset.exclude(models.PurchaseOrder.OVERDUE_FILTER)
# Special filtering for 'status' field
status = params.get('status', None)
if status is not None:
# First attempt to filter by integer value
queryset = queryset.filter(status=status)
# Attempt to filter by part
part = params.get('part', None)
if part is not None:
try:
part = Part.objects.get(pk=part)
queryset = queryset.filter(id__in=[p.id for p in part.purchase_orders()])
except (Part.DoesNotExist, ValueError):
pass
# Attempt to filter by supplier part
supplier_part = params.get('supplier_part', None)
if supplier_part is not None:
try:
supplier_part = SupplierPart.objects.get(pk=supplier_part)
queryset = queryset.filter(id__in=[p.id for p in supplier_part.purchase_orders()])
except (ValueError, SupplierPart.DoesNotExist):
pass
# Filter by 'date range'
min_date = params.get('min_date', None)
max_date = params.get('max_date', None)
if min_date is not None and max_date is not None:
queryset = models.PurchaseOrder.filterByDate(queryset, min_date, max_date)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter,
]
ordering_field_aliases = {
'reference': ['reference_int', 'reference'],
}
search_fields = [
'reference',
'supplier__name',
'supplier_reference',
'description',
]
ordering_fields = [
'creation_date',
'reference',
'supplier__name',
'target_date',
'line_items',
'status',
]
ordering = '-creation_date'
class PurchaseOrderDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a PurchaseOrder object """
queryset = models.PurchaseOrder.objects.all()
serializer_class = serializers.PurchaseOrderSerializer
def get_serializer(self, *args, **kwargs):
try:
kwargs['supplier_detail'] = str2bool(self.request.query_params.get('supplier_detail', False))
except AttributeError:
pass
# Ensure the request context is passed through
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'supplier',
'lines',
)
queryset = serializers.PurchaseOrderSerializer.annotate_queryset(queryset)
return queryset
class PurchaseOrderReceive(generics.CreateAPIView):
"""
API endpoint to receive stock items against a purchase order.
- The purchase order is specified in the URL.
- Items to receive are specified as a list called "items" with the following options:
- supplier_part: pk value of the supplier part
- quantity: quantity to receive
- status: stock item status
- location: destination for stock item (optional)
- A global location can also be specified
"""
queryset = models.PurchaseOrderLineItem.objects.none()
serializer_class = serializers.PurchaseOrderReceiveSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
# Pass the purchase order through to the serializer for validation
try:
context['order'] = models.PurchaseOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
context['request'] = self.request
return context
class PurchaseOrderLineItemFilter(rest_filters.FilterSet):
"""
Custom filters for the PurchaseOrderLineItemList endpoint
"""
class Meta:
model = models.PurchaseOrderLineItem
fields = [
'order',
'part',
]
pending = rest_filters.BooleanFilter(label='pending', method='filter_pending')
def filter_pending(self, queryset, name, value):
"""
Filter by "pending" status (order status = pending)
"""
value = str2bool(value)
if value:
queryset = queryset.filter(order__status__in=PurchaseOrderStatus.OPEN)
else:
queryset = queryset.exclude(order__status__in=PurchaseOrderStatus.OPEN)
return queryset
order_status = rest_filters.NumberFilter(label='order_status', field_name='order__status')
received = rest_filters.BooleanFilter(label='received', method='filter_received')
def filter_received(self, queryset, name, value):
"""
Filter by lines which are "received" (or "not" received)
A line is considered "received" when received >= quantity
"""
value = str2bool(value)
q = Q(received__gte=F('quantity'))
if value:
queryset = queryset.filter(q)
else:
# Only count "pending" orders
queryset = queryset.exclude(q).filter(order__status__in=PurchaseOrderStatus.OPEN)
return queryset
class PurchaseOrderLineItemList(generics.ListCreateAPIView):
""" API endpoint for accessing a list of PurchaseOrderLineItem objects
- GET: Return a list of PurchaseOrder Line Item objects
- POST: Create a new PurchaseOrderLineItem object
"""
queryset = models.PurchaseOrderLineItem.objects.all()
serializer_class = serializers.PurchaseOrderLineItemSerializer
filterset_class = PurchaseOrderLineItemFilter
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = serializers.PurchaseOrderLineItemSerializer.annotate_queryset(queryset)
return queryset
def get_serializer(self, *args, **kwargs):
try:
kwargs['part_detail'] = str2bool(self.request.query_params.get('part_detail', False))
kwargs['order_detail'] = str2bool(self.request.query_params.get('order_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def filter_queryset(self, queryset):
"""
Additional filtering options
"""
params = self.request.query_params
queryset = super().filter_queryset(queryset)
base_part = params.get('base_part', None)
if base_part:
try:
base_part = Part.objects.get(pk=base_part)
queryset = queryset.filter(part__part=base_part)
except (ValueError, Part.DoesNotExist):
pass
return queryset
def list(self, request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
# Check if we wish to export the queried data to a file
export_format = request.query_params.get('export', None)
if export_format:
export_format = str(export_format).strip().lower()
if export_format in ['csv', 'tsv', 'xls', 'xlsx']:
dataset = PurchaseOrderLineItemResource().export(queryset=queryset)
filedata = dataset.export(export_format)
filename = f"InvenTree_PurchaseOrderData.{export_format}"
return DownloadFile(filedata, filename)
page = self.paginate_queryset(queryset)
if page is not None:
serializer = self.get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
serializer = self.get_serializer(queryset, many=True)
return Response(serializer.data)
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter
]
ordering_field_aliases = {
'MPN': 'part__manufacturer_part__MPN',
'SKU': 'part__SKU',
'part_name': 'part__part__name',
}
ordering_fields = [
'MPN',
'part_name',
'purchase_price',
'quantity',
'received',
'reference',
'SKU',
'total_price',
'target_date',
]
search_fields = [
'part__part__name',
'part__part__description',
'part__MPN',
'part__SKU',
'reference',
]
class PurchaseOrderLineItemDetail(generics.RetrieveUpdateDestroyAPIView):
"""
Detail API endpoint for PurchaseOrderLineItem object
"""
queryset = models.PurchaseOrderLineItem.objects.all()
serializer_class = serializers.PurchaseOrderLineItemSerializer
def get_queryset(self):
queryset = super().get_queryset()
queryset = serializers.PurchaseOrderLineItemSerializer.annotate_queryset(queryset)
return queryset
class PurchaseOrderExtraLineList(GeneralExtraLineList, generics.ListCreateAPIView):
"""
API endpoint for accessing a list of PurchaseOrderExtraLine objects.
"""
queryset = models.PurchaseOrderExtraLine.objects.all()
serializer_class = serializers.PurchaseOrderExtraLineSerializer
class PurchaseOrderExtraLineDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a PurchaseOrderExtraLine object """
queryset = models.PurchaseOrderExtraLine.objects.all()
serializer_class = serializers.PurchaseOrderExtraLineSerializer
class SalesOrderAttachmentList(generics.ListCreateAPIView, AttachmentMixin):
"""
API endpoint for listing (and creating) a SalesOrderAttachment (file upload)
"""
queryset = models.SalesOrderAttachment.objects.all()
serializer_class = serializers.SalesOrderAttachmentSerializer
filter_backends = [
rest_filters.DjangoFilterBackend,
]
filter_fields = [
'order',
]
class SalesOrderAttachmentDetail(generics.RetrieveUpdateDestroyAPIView, AttachmentMixin):
"""
Detail endpoint for SalesOrderAttachment
"""
queryset = models.SalesOrderAttachment.objects.all()
serializer_class = serializers.SalesOrderAttachmentSerializer
class SalesOrderList(generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrder objects.
- GET: Return list of SalesOrder objects (with filters)
- POST: Create a new SalesOrder
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderSerializer
def create(self, request, *args, **kwargs):
"""
Save user information on create
"""
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
item = serializer.save()
item.created_by = request.user
item.save()
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def get_serializer(self, *args, **kwargs):
try:
kwargs['customer_detail'] = str2bool(self.request.query_params.get('customer_detail', False))
except AttributeError:
pass
# Ensure the context is passed through to the serializer
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'customer',
'lines'
)
queryset = serializers.SalesOrderSerializer.annotate_queryset(queryset)
return queryset
def filter_queryset(self, queryset):
"""
Perform custom filtering operations on the SalesOrder queryset.
"""
queryset = super().filter_queryset(queryset)
params = self.request.query_params
# Filter by 'outstanding' status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
queryset = queryset.filter(status__in=models.SalesOrderStatus.OPEN)
else:
queryset = queryset.exclude(status__in=models.SalesOrderStatus.OPEN)
# Filter by 'overdue' status
overdue = params.get('overdue', None)
if overdue is not None:
overdue = str2bool(overdue)
if overdue:
queryset = queryset.filter(models.SalesOrder.OVERDUE_FILTER)
else:
queryset = queryset.exclude(models.SalesOrder.OVERDUE_FILTER)
status = params.get('status', None)
if status is not None:
queryset = queryset.filter(status=status)
# Filter by "Part"
# Only return SalesOrder which have LineItem referencing the part
part = params.get('part', None)
if part is not None:
try:
part = Part.objects.get(pk=part)
queryset = queryset.filter(id__in=[so.id for so in part.sales_orders()])
except (Part.DoesNotExist, ValueError):
pass
# Filter by 'date range'
min_date = params.get('min_date', None)
max_date = params.get('max_date', None)
if min_date is not None and max_date is not None:
queryset = models.SalesOrder.filterByDate(queryset, min_date, max_date)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
InvenTreeOrderingFilter,
]
ordering_field_aliases = {
'reference': ['reference_int', 'reference'],
}
filter_fields = [
'customer',
]
ordering_fields = [
'creation_date',
'reference',
'customer__name',
'customer_reference',
'status',
'target_date',
'line_items',
'shipment_date',
]
search_fields = [
'customer__name',
'reference',
'description',
'customer_reference',
]
ordering = '-creation_date'
class SalesOrderDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API endpoint for detail view of a SalesOrder object.
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderSerializer
def get_serializer(self, *args, **kwargs):
try:
kwargs['customer_detail'] = str2bool(self.request.query_params.get('customer_detail', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related('customer', 'lines')
queryset = serializers.SalesOrderSerializer.annotate_queryset(queryset)
return queryset
class SalesOrderLineItemFilter(rest_filters.FilterSet):
"""
Custom filters for SalesOrderLineItemList endpoint
"""
class Meta:
model = models.SalesOrderLineItem
fields = [
'order',
'part',
]
completed = rest_filters.BooleanFilter(label='completed', method='filter_completed')
def filter_completed(self, queryset, name, value):
"""
Filter by lines which are "completed"
A line is completed when shipped >= quantity
"""
value = str2bool(value)
q = Q(shipped__gte=F('quantity'))
if value:
queryset = queryset.filter(q)
else:
queryset = queryset.exclude(q)
return queryset
class SalesOrderLineItemList(generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrderLineItem objects.
"""
queryset = models.SalesOrderLineItem.objects.all()
serializer_class = serializers.SalesOrderLineItemSerializer
filterset_class = SalesOrderLineItemFilter
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['part_detail'] = str2bool(params.get('part_detail', False))
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
kwargs['allocations'] = str2bool(params.get('allocations', False))
except AttributeError:
pass
kwargs['context'] = self.get_serializer_context()
return self.serializer_class(*args, **kwargs)
def get_queryset(self, *args, **kwargs):
queryset = super().get_queryset(*args, **kwargs)
queryset = queryset.prefetch_related(
'part',
'part__stock_items',
'allocations',
'allocations__item__location',
'order',
'order__stock_items',
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
filters.SearchFilter,
filters.OrderingFilter
]
ordering_fields = [
'part__name',
'quantity',
'reference',
'target_date',
]
search_fields = [
'part__name',
'quantity',
'reference',
]
filter_fields = [
'order',
'part',
]
class SalesOrderExtraLineList(GeneralExtraLineList, generics.ListCreateAPIView):
"""
API endpoint for accessing a list of SalesOrderExtraLine objects.
"""
queryset = models.SalesOrderExtraLine.objects.all()
serializer_class = serializers.SalesOrderExtraLineSerializer
class SalesOrderExtraLineDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a SalesOrderExtraLine object """
queryset = models.SalesOrderExtraLine.objects.all()
serializer_class = serializers.SalesOrderExtraLineSerializer
class SalesOrderLineItemDetail(generics.RetrieveUpdateDestroyAPIView):
""" API endpoint for detail view of a SalesOrderLineItem object """
queryset = models.SalesOrderLineItem.objects.all()
serializer_class = serializers.SalesOrderLineItemSerializer
class SalesOrderComplete(generics.CreateAPIView):
"""
API endpoint for manually marking a SalesOrder as "complete".
"""
queryset = models.SalesOrder.objects.all()
serializer_class = serializers.SalesOrderCompleteSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
ctx['request'] = self.request
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
return ctx
class SalesOrderAllocateSerials(generics.CreateAPIView):
"""
API endpoint to allocation stock items against a SalesOrder,
by specifying serial numbers.
"""
queryset = models.SalesOrder.objects.none()
serializer_class = serializers.SalesOrderSerialAllocationSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
# Pass through the SalesOrder object to the serializer
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
ctx['request'] = self.request
return ctx
class SalesOrderAllocate(generics.CreateAPIView):
"""
API endpoint to allocate stock items against a SalesOrder
- The SalesOrder is specified in the URL
- See the SalesOrderShipmentAllocationSerializer class
"""
queryset = models.SalesOrder.objects.none()
serializer_class = serializers.SalesOrderShipmentAllocationSerializer
def get_serializer_context(self):
ctx = super().get_serializer_context()
# Pass through the SalesOrder object to the serializer
try:
ctx['order'] = models.SalesOrder.objects.get(pk=self.kwargs.get('pk', None))
except:
pass
ctx['request'] = self.request
return ctx
class SalesOrderAllocationDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API endpoint for detali view of a SalesOrderAllocation object
"""
queryset = models.SalesOrderAllocation.objects.all()
serializer_class = serializers.SalesOrderAllocationSerializer
class SalesOrderAllocationList(generics.ListAPIView):
"""
API endpoint for listing SalesOrderAllocation objects
"""
queryset = models.SalesOrderAllocation.objects.all()
serializer_class = serializers.SalesOrderAllocationSerializer
def get_serializer(self, *args, **kwargs):
try:
params = self.request.query_params
kwargs['part_detail'] = str2bool(params.get('part_detail', False))
kwargs['item_detail'] = str2bool(params.get('item_detail', False))
kwargs['order_detail'] = str2bool(params.get('order_detail', False))
kwargs['location_detail'] = str2bool(params.get('location_detail', False))
kwargs['customer_detail'] = str2bool(params.get('customer_detail', False))
except AttributeError:
pass
return self.serializer_class(*args, **kwargs)
def filter_queryset(self, queryset):
queryset = super().filter_queryset(queryset)
# Filter by order
params = self.request.query_params
# Filter by "part" reference
part = params.get('part', None)
if part is not None:
queryset = queryset.filter(item__part=part)
# Filter by "order" reference
order = params.get('order', None)
if order is not None:
queryset = queryset.filter(line__order=order)
# Filter by "stock item"
item = params.get('item', params.get('stock_item', None))
if item is not None:
queryset = queryset.filter(item=item)
# Filter by "outstanding" order status
outstanding = params.get('outstanding', None)
if outstanding is not None:
outstanding = str2bool(outstanding)
if outstanding:
# Filter only "open" orders
# Filter only allocations which have *not* shipped
queryset = queryset.filter(
line__order__status__in=SalesOrderStatus.OPEN,
shipment__shipment_date=None,
)
else:
queryset = queryset.exclude(
line__order__status__in=SalesOrderStatus.OPEN,
shipment__shipment_date=None
)
return queryset
filter_backends = [
rest_filters.DjangoFilterBackend,
]
# Default filterable fields
filter_fields = [
]
class SalesOrderShipmentFilter(rest_filters.FilterSet):
"""
Custom filterset for the SalesOrderShipmentList endpoint
"""
shipped = rest_filters.BooleanFilter(label='shipped', method='filter_shipped')
def filter_shipped(self, queryset, name, value):
value = str2bool(value)
if value:
queryset = queryset.exclude(shipment_date=None)
else:
queryset = queryset.filter(shipment_date=None)
return queryset
class Meta:
model = models.SalesOrderShipment
fields = [
'order',
]
class SalesOrderShipmentList(generics.ListCreateAPIView):
"""
API list endpoint for SalesOrderShipment model
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentSerializer
filterset_class = SalesOrderShipmentFilter
filter_backends = [
rest_filters.DjangoFilterBackend,
]
class SalesOrderShipmentDetail(generics.RetrieveUpdateDestroyAPIView):
"""
API detail endpooint for SalesOrderShipment model
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentSerializer
class SalesOrderShipmentComplete(generics.CreateAPIView):
"""
API endpoint for completing (shipping) a SalesOrderShipment
"""
queryset = models.SalesOrderShipment.objects.all()
serializer_class = serializers.SalesOrderShipmentCompleteSerializer
def get_serializer_context(self):
"""
Pass the request object to the serializer
"""
ctx = super().get_serializer_context()
ctx['request'] = self.request
try:
ctx['shipment'] = models.SalesOrderShipment.objects.get(
pk=self.kwargs.get('pk', None)
)
except:
pass
return ctx
class PurchaseOrderAttachmentList(generics.ListCreateAPIView, AttachmentMixin):
"""
API endpoint for listing (and creating) a PurchaseOrderAttachment (file upload)
"""
queryset = models.PurchaseOrderAttachment.objects.all()
serializer_class = serializers.PurchaseOrderAttachmentSerializer
filter_backends = [
rest_filters.DjangoFilterBackend,
]
filter_fields = [
'order',
]
class PurchaseOrderAttachmentDetail(generics.RetrieveUpdateDestroyAPIView, AttachmentMixin):
"""
Detail endpoint for a PurchaseOrderAttachment
"""
queryset = models.PurchaseOrderAttachment.objects.all()
serializer_class = serializers.PurchaseOrderAttachmentSerializer
order_api_urls = [
# API endpoints for purchase orders
re_path(r'^po/', include([
# Purchase order attachments
re_path(r'attachment/', include([
path('<int:pk>/', PurchaseOrderAttachmentDetail.as_view(), name='api-po-attachment-detail'),
re_path(r'^.*$', PurchaseOrderAttachmentList.as_view(), name='api-po-attachment-list'),
])),
# Individual purchase order detail URLs
re_path(r'^(?P<pk>\d+)/', include([
re_path(r'^receive/', PurchaseOrderReceive.as_view(), name='api-po-receive'),
re_path(r'.*$', PurchaseOrderDetail.as_view(), name='api-po-detail'),
])),
# Purchase order list
re_path(r'^.*$', PurchaseOrderList.as_view(), name='api-po-list'),
])),
# API endpoints for purchase order line items
re_path(r'^po-line/', include([
path('<int:pk>/', PurchaseOrderLineItemDetail.as_view(), name='api-po-line-detail'),
re_path(r'^.*$', PurchaseOrderLineItemList.as_view(), name='api-po-line-list'),
])),
# API endpoints for purchase order extra line
re_path(r'^po-extra-line/', include([
path('<int:pk>/', PurchaseOrderExtraLineDetail.as_view(), name='api-po-extra-line-detail'),
path('', PurchaseOrderExtraLineList.as_view(), name='api-po-extra-line-list'),
])),
# API endpoints for sales ordesr
re_path(r'^so/', include([
re_path(r'attachment/', include([
path('<int:pk>/', SalesOrderAttachmentDetail.as_view(), name='api-so-attachment-detail'),
re_path(r'^.*$', SalesOrderAttachmentList.as_view(), name='api-so-attachment-list'),
])),
re_path(r'^shipment/', include([
re_path(r'^(?P<pk>\d+)/', include([
path('ship/', SalesOrderShipmentComplete.as_view(), name='api-so-shipment-ship'),
re_path(r'^.*$', SalesOrderShipmentDetail.as_view(), name='api-so-shipment-detail'),
])),
re_path(r'^.*$', SalesOrderShipmentList.as_view(), name='api-so-shipment-list'),
])),
# Sales order detail view
re_path(r'^(?P<pk>\d+)/', include([
re_path(r'^complete/', SalesOrderComplete.as_view(), name='api-so-complete'),
re_path(r'^allocate/', SalesOrderAllocate.as_view(), name='api-so-allocate'),
re_path(r'^allocate-serials/', SalesOrderAllocateSerials.as_view(), name='api-so-allocate-serials'),
re_path(r'^.*$', SalesOrderDetail.as_view(), name='api-so-detail'),
])),
# Sales order list view
re_path(r'^.*$', SalesOrderList.as_view(), name='api-so-list'),
])),
# API endpoints for sales order line items
re_path(r'^so-line/', include([
path('<int:pk>/', SalesOrderLineItemDetail.as_view(), name='api-so-line-detail'),
path('', SalesOrderLineItemList.as_view(), name='api-so-line-list'),
])),
# API endpoints for sales order extra line
re_path(r'^so-extra-line/', include([
path('<int:pk>/', SalesOrderExtraLineDetail.as_view(), name='api-so-extra-line-detail'),
path('', SalesOrderExtraLineList.as_view(), name='api-so-extra-line-list'),
])),
# API endpoints for sales order allocations
re_path(r'^so-allocation/', include([
path('<int:pk>/', SalesOrderAllocationDetail.as_view(), name='api-so-allocation-detail'),
re_path(r'^.*$', SalesOrderAllocationList.as_view(), name='api-so-allocation-list'),
])),
] | en | 0.802262 | JSON API for the Order app # -*- coding: utf-8 -*- General template for ExtraLine API classes Custom API filters for the PurchaseOrderList endpoint Filter by orders which are assigned to the current user # Work out who "me" is! API endpoint for accessing a list of PurchaseOrder objects - GET: Return list of PurchaseOrder objects (with filters) - POST: Create a new PurchaseOrder object Save user information on create # Ensure the request context is passed through # Perform basic filtering # Filter by 'outstanding' status # Filter by 'overdue' status # Special filtering for 'status' field # First attempt to filter by integer value # Attempt to filter by part # Attempt to filter by supplier part # Filter by 'date range' API endpoint for detail view of a PurchaseOrder object # Ensure the request context is passed through API endpoint to receive stock items against a purchase order. - The purchase order is specified in the URL. - Items to receive are specified as a list called "items" with the following options: - supplier_part: pk value of the supplier part - quantity: quantity to receive - status: stock item status - location: destination for stock item (optional) - A global location can also be specified # Pass the purchase order through to the serializer for validation Custom filters for the PurchaseOrderLineItemList endpoint Filter by "pending" status (order status = pending) Filter by lines which are "received" (or "not" received) A line is considered "received" when received >= quantity # Only count "pending" orders API endpoint for accessing a list of PurchaseOrderLineItem objects - GET: Return a list of PurchaseOrder Line Item objects - POST: Create a new PurchaseOrderLineItem object Additional filtering options # Check if we wish to export the queried data to a file Detail API endpoint for PurchaseOrderLineItem object API endpoint for accessing a list of PurchaseOrderExtraLine objects. API endpoint for detail view of a PurchaseOrderExtraLine object API endpoint for listing (and creating) a SalesOrderAttachment (file upload) Detail endpoint for SalesOrderAttachment API endpoint for accessing a list of SalesOrder objects. - GET: Return list of SalesOrder objects (with filters) - POST: Create a new SalesOrder Save user information on create # Ensure the context is passed through to the serializer Perform custom filtering operations on the SalesOrder queryset. # Filter by 'outstanding' status # Filter by 'overdue' status # Filter by "Part" # Only return SalesOrder which have LineItem referencing the part # Filter by 'date range' API endpoint for detail view of a SalesOrder object. Custom filters for SalesOrderLineItemList endpoint Filter by lines which are "completed" A line is completed when shipped >= quantity API endpoint for accessing a list of SalesOrderLineItem objects. API endpoint for accessing a list of SalesOrderExtraLine objects. API endpoint for detail view of a SalesOrderExtraLine object API endpoint for detail view of a SalesOrderLineItem object API endpoint for manually marking a SalesOrder as "complete". API endpoint to allocation stock items against a SalesOrder, by specifying serial numbers. # Pass through the SalesOrder object to the serializer API endpoint to allocate stock items against a SalesOrder - The SalesOrder is specified in the URL - See the SalesOrderShipmentAllocationSerializer class # Pass through the SalesOrder object to the serializer API endpoint for detali view of a SalesOrderAllocation object API endpoint for listing SalesOrderAllocation objects # Filter by order # Filter by "part" reference # Filter by "order" reference # Filter by "stock item" # Filter by "outstanding" order status # Filter only "open" orders # Filter only allocations which have *not* shipped # Default filterable fields Custom filterset for the SalesOrderShipmentList endpoint API list endpoint for SalesOrderShipment model API detail endpooint for SalesOrderShipment model API endpoint for completing (shipping) a SalesOrderShipment Pass the request object to the serializer API endpoint for listing (and creating) a PurchaseOrderAttachment (file upload) Detail endpoint for a PurchaseOrderAttachment # API endpoints for purchase orders # Purchase order attachments # Individual purchase order detail URLs # Purchase order list # API endpoints for purchase order line items # API endpoints for purchase order extra line # API endpoints for sales ordesr # Sales order detail view # Sales order list view # API endpoints for sales order line items # API endpoints for sales order extra line # API endpoints for sales order allocations | 2.305458 | 2 |
shorty/custom_shorty/custom.py | andrewpap22/URL-Shortener_Rest-API | 0 | 6615326 | <filename>shorty/custom_shorty/custom.py
from base64 import b64encode
from hashlib import blake2b
import random
import re
from flask import Blueprint, jsonify, request, abort, render_template
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def url_valid(url):
"""Validates a url by parsing it with a regular expression.
Parameters:
url - string representing a url to be validated.
Return values:
Boolean, indicating the validity of the url.
"""
return re.match(regex, url) is not None
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def provider_valid(provider):
providers = ["bitly", "tinyurl"]
if provider in providers:
return True
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def shorten(url):
"""Shortens a url by generating a 9 byte hash, and then
converting it to a 12 character long base 64 url friendly string. (gets called only if both bitly and tinyurl are unavailable.)
Parameters:
url - the url to be shortened.
Return values:
String, the unique shortened url, acting as a key for the entered long url.
"""
url_hash = blake2b(str.encode(url), digest_size=DIGEST_SIZE)
while url_hash in shortened:
url += str(random.randint(0, 9))
url_hash = blake2b(str.encode(url), digest_size=DIGEST_SIZE)
b64 = b64encode(url_hash.digest(), altchars=b'-_')
string_b64 = b64.decode('utf-8')
# append http for testing purposes.
if string_b64[:4] != 'http':
string_b64 = 'http://' + string_b64
return (string_b64)
#return b64.decode('utf-8')
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def bad_request(message):
"""Takes a supplied message and attaches it to a HttpResponse with code 400.
Parameters:
message - string containing the error message.
Return values:
An object with a message string and a status_code set to 400.
"""
response = jsonify({'message': message})
response.status_code = 400
return response
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# From https://stackoverflow.com/questions/7160737/python-how-to-validate-a-url-in-python-malformed-or-not#7160778
# Slightly modified to not use ftp.
regex = re.compile(
r'^(?:http)s?://'
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
r'localhost|'
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
r'(?::\d+)?'
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
DIGEST_SIZE = 9 # 72 bits of entropy.
shortened = {} | <filename>shorty/custom_shorty/custom.py
from base64 import b64encode
from hashlib import blake2b
import random
import re
from flask import Blueprint, jsonify, request, abort, render_template
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def url_valid(url):
"""Validates a url by parsing it with a regular expression.
Parameters:
url - string representing a url to be validated.
Return values:
Boolean, indicating the validity of the url.
"""
return re.match(regex, url) is not None
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def provider_valid(provider):
providers = ["bitly", "tinyurl"]
if provider in providers:
return True
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def shorten(url):
"""Shortens a url by generating a 9 byte hash, and then
converting it to a 12 character long base 64 url friendly string. (gets called only if both bitly and tinyurl are unavailable.)
Parameters:
url - the url to be shortened.
Return values:
String, the unique shortened url, acting as a key for the entered long url.
"""
url_hash = blake2b(str.encode(url), digest_size=DIGEST_SIZE)
while url_hash in shortened:
url += str(random.randint(0, 9))
url_hash = blake2b(str.encode(url), digest_size=DIGEST_SIZE)
b64 = b64encode(url_hash.digest(), altchars=b'-_')
string_b64 = b64.decode('utf-8')
# append http for testing purposes.
if string_b64[:4] != 'http':
string_b64 = 'http://' + string_b64
return (string_b64)
#return b64.decode('utf-8')
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def bad_request(message):
"""Takes a supplied message and attaches it to a HttpResponse with code 400.
Parameters:
message - string containing the error message.
Return values:
An object with a message string and a status_code set to 400.
"""
response = jsonify({'message': message})
response.status_code = 400
return response
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# From https://stackoverflow.com/questions/7160737/python-how-to-validate-a-url-in-python-malformed-or-not#7160778
# Slightly modified to not use ftp.
regex = re.compile(
r'^(?:http)s?://'
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
r'localhost|'
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
r'(?::\d+)?'
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
DIGEST_SIZE = 9 # 72 bits of entropy.
shortened = {} | en | 0.318946 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Validates a url by parsing it with a regular expression. Parameters: url - string representing a url to be validated. Return values: Boolean, indicating the validity of the url. # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Shortens a url by generating a 9 byte hash, and then converting it to a 12 character long base 64 url friendly string. (gets called only if both bitly and tinyurl are unavailable.) Parameters: url - the url to be shortened. Return values: String, the unique shortened url, acting as a key for the entered long url. # append http for testing purposes. #return b64.decode('utf-8') # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Takes a supplied message and attaches it to a HttpResponse with code 400. Parameters: message - string containing the error message. Return values: An object with a message string and a status_code set to 400. # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # From https://stackoverflow.com/questions/7160737/python-how-to-validate-a-url-in-python-malformed-or-not#7160778 # Slightly modified to not use ftp. # 72 bits of entropy. | 2.658389 | 3 |
src/zerohunger/orders/serializers.py | BuildForSDG/Team-250-Backends | 0 | 6615327 | from rest_framework import serializers
from .models import Orders, ItemsOrdered
from accounts.serializers import UserSerializer
from product.serializers import ProduceSerializer
class ItemOrderedSerializer(serializers.ModelSerializer):
produce = ProduceSerializer(read_only=True)
class Meta:
model = ItemsOrdered
fields = ['produce', 'quantity']
class OrderSerializer(serializers.ModelSerializer):
items_ordered = serializers.SerializerMethodField()
customer_id = UserSerializer(read_only=True)
class Meta:
model = Orders
fields = [
'id',
'customer_id',
'amount_due',
'items_ordered',
'dateAndTimeOfOrder'
]
def get_items_ordered(self, obj):
qset = ItemsOrdered.objects.filter(orders=obj)
return [ItemOrderedSerializer(m).data for m in qset]
| from rest_framework import serializers
from .models import Orders, ItemsOrdered
from accounts.serializers import UserSerializer
from product.serializers import ProduceSerializer
class ItemOrderedSerializer(serializers.ModelSerializer):
produce = ProduceSerializer(read_only=True)
class Meta:
model = ItemsOrdered
fields = ['produce', 'quantity']
class OrderSerializer(serializers.ModelSerializer):
items_ordered = serializers.SerializerMethodField()
customer_id = UserSerializer(read_only=True)
class Meta:
model = Orders
fields = [
'id',
'customer_id',
'amount_due',
'items_ordered',
'dateAndTimeOfOrder'
]
def get_items_ordered(self, obj):
qset = ItemsOrdered.objects.filter(orders=obj)
return [ItemOrderedSerializer(m).data for m in qset]
| none | 1 | 2.155897 | 2 | |
app/models.py | rajatomar788/pyblog | 0 | 6615328 | <reponame>rajatomar788/pyblog
from app import app
from datetime import *
from app import db
from flask import url_for
from werkzeug.security import generate_password_hash, check_password_hash
from flask_login import UserMixin
from app import login
from hashlib import md5
from flask_sqlalchemy import SQLAlchemy
import sys
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64), index=True, unique=True)
email = db.Column(db.String(120), index=True, unique=True)
password_hash = db.Column(db.String(128))
fullname = db.Column(db.String(50))
occupation = db.Column(db.String(50))
hobby = db.Column(db.String(50))
about_me = db.Column(db.String(140))
last_seen = db.Column(db.DateTime, default=datetime.utcnow)
def __repr__(self):
return '<User {}>'.format(self.username)
def set_password(self, password):
self.password_hash = generate_password_hash(password)
def check_password(self, password):
return check_password_hash(self.password_hash, password)
# this function can also take different sizes
def avatar(self, size=128):
digest = md5(self.email.lower().encode('utf-8')).hexdigest()
return 'https://www.gravatar.com/avatar/{}?d=identicon&s={}'.format(
digest, size)
class Contact(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100))
email = db.Column(db.String(140))
timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
company = db.Column(db.String(100))
message = db.Column(db.String(500))
def __repr__(self):
return '< Name : {}, Message : {}>'.format(self.name, self.message)
@login.user_loader
def load_user(id):
return User.query.get(int(id))
class PostCategory(db.Model):
__tablename__ = 'postcategory'
id = db.Column(db.Integer, primary_key=True)
category_name = db.Column(db.String, nullable=False)
posts = db.relationship('Post', backref='postcategory')
def __repr__(self):
return '<Post Category %s>' %(self.category_name)
def len_category(self,categoryID):
cat = PostCategory().query.get(int(categoryID))
return len(cat.posts)
post_tags = db.Table('post_tags',
db.Column('post_id', db.Integer, db.ForeignKey('post.id')),
db.Column('tag_id', db.Integer, db.ForeignKey('tag.id'))
)
class Post(db.Model):
__tablename__ = 'post'
id = db.Column(db.Integer, primary_key=True)
author = db.Column(db.String(50))
heading = db.Column(db.String)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
body = db.Column(db.String)
post_url = db.Column(db.String(140))
comments = db.relationship('Comment', backref='comment_on_page', lazy='dynamic')
category_id = db.Column(db.Integer, db.ForeignKey('postcategory.id'))
likes = db.Column(db.Integer, default=1)
tags = db.relationship('Tag', secondary=post_tags,
backref=db.backref('post_tags', lazy='dynamic')
)
thumbnail = db.Column(db.String)
def get_post_url(self):
return self.post_url
def get_post_id(self):
return self.id
def latest_posts(self):
return Post.query.order_by(Post.timestamp.desc())
def post_author(self, user):
return Post.query.filter_by(author=user).first()
def post_author_avatar(self, user, size):
author = User.query.filter_by(username=user).first()
return author.avatar(size)
def post_thumbnail(self, postID):
return Post().query.get(postID).thumbnail
def category(self, postID):
post = Post().query.get(int(postID))
category = post.postcategory.category_name
return category
def categoryID(self, name):
return PostCategory().query.filter_by(category_name=name).first().id
def related(self, postID):
post= Post().query.get(int(postID))
postcategory = post.postcategory
return postcategory.posts[:5]
def remove_tags(self, postID):
tags = post_tags.query.filter_by(post_id=postID).all()
db.session.delete(tags)
db.session.commit()
class Tag(db.Model):
__tablename__ = 'tag'
id = db.Column(db.Integer, primary_key=True)
tag_name = db.Column(db.String)
posts = db.relationship('Post', secondary=post_tags,
backref=db.backref('post_tags', lazy='dynamic')
)
def new_tag(self, tag):
if len(tag) >=1:
new_tag = Tag(tag_name=tag)
db.session.add(new_tag)
db.session.commit()
return 'tag added'
def __repr__(self):
return '<Tag: {}>'.format(self.tag_name)
class NewsLetter(db.Model):
id = db.Column(db.Integer, primary_key=True, index=True)
email = db.Column(db.String(50), nullable=False)
def __repr__(self):
return '<subscriber {}>'.format(self.email)
class Comment(db.Model):
id = db.Column(db.Integer, primary_key=True)
comment = db.Column(db.String(400))
post_id = db.Column(db.Integer, db.ForeignKey('post.id'), index=True)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
approved = db.Column(db.Integer, default='0')
def __repr__(self):
return '<Comment %r >' %(self.comment)
def username(self, id):
return User.query.get(int(id)).username
def get_post_heading(self, postID):
post = Post.query.get(int(postID))
return post.heading
def get_user_avatar(self, userID, size):
user = User.query.get(int(userID))
return User.avatar(user, size)
def comments_on_user_posts(self, userID):
user = User.query.get(int(userID))
user_posts = Post().query.filter_by(author=user.username).all()
pending_comments = []
for post in user_posts:
for comment in Comment().query.filter_by(post_id=post.id).all():
if comment.approved == 0:
pending_comments.append(comment)
return pending_comments
class File(db.Model):
id = db.Column(db.Integer, primary_key=True)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
filename = db.Column(db.String)
filedata = db.Column(db.LargeBinary)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
def username(self, id):
return User.query.get(int(id)).username
def filetype(self, filename):
filetype = filename.split('.')[1].upper()
return str(filetype)
| from app import app
from datetime import *
from app import db
from flask import url_for
from werkzeug.security import generate_password_hash, check_password_hash
from flask_login import UserMixin
from app import login
from hashlib import md5
from flask_sqlalchemy import SQLAlchemy
import sys
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64), index=True, unique=True)
email = db.Column(db.String(120), index=True, unique=True)
password_hash = db.Column(db.String(128))
fullname = db.Column(db.String(50))
occupation = db.Column(db.String(50))
hobby = db.Column(db.String(50))
about_me = db.Column(db.String(140))
last_seen = db.Column(db.DateTime, default=datetime.utcnow)
def __repr__(self):
return '<User {}>'.format(self.username)
def set_password(self, password):
self.password_hash = generate_password_hash(password)
def check_password(self, password):
return check_password_hash(self.password_hash, password)
# this function can also take different sizes
def avatar(self, size=128):
digest = md5(self.email.lower().encode('utf-8')).hexdigest()
return 'https://www.gravatar.com/avatar/{}?d=identicon&s={}'.format(
digest, size)
class Contact(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100))
email = db.Column(db.String(140))
timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
company = db.Column(db.String(100))
message = db.Column(db.String(500))
def __repr__(self):
return '< Name : {}, Message : {}>'.format(self.name, self.message)
@login.user_loader
def load_user(id):
return User.query.get(int(id))
class PostCategory(db.Model):
__tablename__ = 'postcategory'
id = db.Column(db.Integer, primary_key=True)
category_name = db.Column(db.String, nullable=False)
posts = db.relationship('Post', backref='postcategory')
def __repr__(self):
return '<Post Category %s>' %(self.category_name)
def len_category(self,categoryID):
cat = PostCategory().query.get(int(categoryID))
return len(cat.posts)
post_tags = db.Table('post_tags',
db.Column('post_id', db.Integer, db.ForeignKey('post.id')),
db.Column('tag_id', db.Integer, db.ForeignKey('tag.id'))
)
class Post(db.Model):
__tablename__ = 'post'
id = db.Column(db.Integer, primary_key=True)
author = db.Column(db.String(50))
heading = db.Column(db.String)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
body = db.Column(db.String)
post_url = db.Column(db.String(140))
comments = db.relationship('Comment', backref='comment_on_page', lazy='dynamic')
category_id = db.Column(db.Integer, db.ForeignKey('postcategory.id'))
likes = db.Column(db.Integer, default=1)
tags = db.relationship('Tag', secondary=post_tags,
backref=db.backref('post_tags', lazy='dynamic')
)
thumbnail = db.Column(db.String)
def get_post_url(self):
return self.post_url
def get_post_id(self):
return self.id
def latest_posts(self):
return Post.query.order_by(Post.timestamp.desc())
def post_author(self, user):
return Post.query.filter_by(author=user).first()
def post_author_avatar(self, user, size):
author = User.query.filter_by(username=user).first()
return author.avatar(size)
def post_thumbnail(self, postID):
return Post().query.get(postID).thumbnail
def category(self, postID):
post = Post().query.get(int(postID))
category = post.postcategory.category_name
return category
def categoryID(self, name):
return PostCategory().query.filter_by(category_name=name).first().id
def related(self, postID):
post= Post().query.get(int(postID))
postcategory = post.postcategory
return postcategory.posts[:5]
def remove_tags(self, postID):
tags = post_tags.query.filter_by(post_id=postID).all()
db.session.delete(tags)
db.session.commit()
class Tag(db.Model):
__tablename__ = 'tag'
id = db.Column(db.Integer, primary_key=True)
tag_name = db.Column(db.String)
posts = db.relationship('Post', secondary=post_tags,
backref=db.backref('post_tags', lazy='dynamic')
)
def new_tag(self, tag):
if len(tag) >=1:
new_tag = Tag(tag_name=tag)
db.session.add(new_tag)
db.session.commit()
return 'tag added'
def __repr__(self):
return '<Tag: {}>'.format(self.tag_name)
class NewsLetter(db.Model):
id = db.Column(db.Integer, primary_key=True, index=True)
email = db.Column(db.String(50), nullable=False)
def __repr__(self):
return '<subscriber {}>'.format(self.email)
class Comment(db.Model):
id = db.Column(db.Integer, primary_key=True)
comment = db.Column(db.String(400))
post_id = db.Column(db.Integer, db.ForeignKey('post.id'), index=True)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
approved = db.Column(db.Integer, default='0')
def __repr__(self):
return '<Comment %r >' %(self.comment)
def username(self, id):
return User.query.get(int(id)).username
def get_post_heading(self, postID):
post = Post.query.get(int(postID))
return post.heading
def get_user_avatar(self, userID, size):
user = User.query.get(int(userID))
return User.avatar(user, size)
def comments_on_user_posts(self, userID):
user = User.query.get(int(userID))
user_posts = Post().query.filter_by(author=user.username).all()
pending_comments = []
for post in user_posts:
for comment in Comment().query.filter_by(post_id=post.id).all():
if comment.approved == 0:
pending_comments.append(comment)
return pending_comments
class File(db.Model):
id = db.Column(db.Integer, primary_key=True)
timestamp = db.Column(db.DateTime, default=datetime.utcnow)
filename = db.Column(db.String)
filedata = db.Column(db.LargeBinary)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
def username(self, id):
return User.query.get(int(id)).username
def filetype(self, filename):
filetype = filename.split('.')[1].upper()
return str(filetype) | en | 0.91111 | # this function can also take different sizes | 2.707888 | 3 |
utils/flow_plotly.py | GEANT-DataPlaneProgramming/int-analytics | 3 | 6615329 | from influxdb import InfluxDBClient
import time
import pandas as pd
import numpy as np
from pprint import pprint
import plotly.graph_objs as go
import plotly.io as pio
from datetime import datetime, timedelta
import pandas as pd
import os
host = 'hs-04.ipa.psnc.pl'
port = 8086
user = 'root'
password = '<PASSWORD>'
dbname = 'int_telemetry_db'
dbuser = 'int'
dbuser_password = '<PASSWORD>'
pio.kaleido.scope.default_width = 1000
pio.kaleido.scope.default_height = 0.6 * pio.kaleido.scope.default_width
client = InfluxDBClient(host, port, user, password, dbname)
def get_datatime(int_reports):
if len(int_reports) == 0:
return []
timestamps = [r['dstts'] for r in int_reports]
timestamps = pd.DataFrame(timestamps, dtype='float64')
return timestamps[0].astype('datetime64[ns]').tolist()
def get_flow_from_influx(flow, duration, starttime=''):
if flow is None:
return []
timestamp = time.time()
src_ip, dst_ip = flow.split('_')
influxdb = InfluxDBClient(host=host, port=port, username=user, password=password, database=dbname)
if starttime == "":
q = '''SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > now() - %sms''' % (src_ip, dst_ip, duration)
else:
endtime = datetime.strptime(starttime, "%Y-%m-%dT%H:%M:%S.%f") + timedelta(milliseconds=duration)
q = '''SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > '%s' and time < '%s' ''' % (src_ip, dst_ip, starttime+'Z', endtime.isoformat()+'Z')
print(q)
query_resp = influxdb.query(q)
int_reports = list(query_resp.get_points())
pprint(int_reports[0])
print("query time is", time.time()-timestamp)
print("Numer of points queries", len(int_reports))
return int_reports
def get_flow_rate_from_influx(flow, duration, starttime=''):
if flow is None:
return []
timestamp = time.time()
src_ip, dst_ip = flow.split('_')
influxdb = InfluxDBClient(host=host, port=port, username=user, password=password, database=dbname)
if starttime == "":
q = ''' SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > now() - %sms GROUP BY time(1ms) ''' % (src_ip, dst_ip, duration)
else:
endtime = datetime.strptime(starttime, "%Y-%m-%dT%H:%M:%S.%f") + timedelta(milliseconds=duration)
q = ''' SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > '%s' and time < '%s' GROUP BY time(1ms) ''' % (src_ip, dst_ip, starttime+'Z', endtime.isoformat()+'Z')
print(q)
query_resp = influxdb.query(q)
int_reports = list(query_resp.get_points())
pprint(int_reports[0])
print("query time is", time.time()-timestamp)
print("Numer of points queries", len(int_reports))
return int_reports
MILION = 1000000.0
def create_delay(int_reports, flow, starttime, duration):
timestamp = time.time()
delays = [(r['dstts'] - r['origts'])/MILION for r in int_reports]
min_delay = min(delays)
max_dalay = max(delays)
shift = 10 #(max_dalay - min_delay)/10.0
delays = [d - min_delay + shift for d in delays]
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(delays),
mode='markers',
#marker_color=np.array(delays),
#marker_colorscale=["blue", "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Timestamps difference",
xaxis_title="time",
yaxis_title="diff (ms)",
#yaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_delay.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=4)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(delays),
nbinsx=100)
)
fig.update_layout(
title="Timestamps difference histogram",
xaxis_title="diff (ms)",
yaxis_title="count",
#xaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_delay_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_jitter(int_reports, flow, starttime, duration):
timestamp = time.time()
last_dstts = 0
jitter = []
for r in int_reports:
jitter.append(r['dstts']/MILION - last_dstts)
last_dstts = r['dstts']/MILION
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(jitter[1:]),
mode='markers',
#marker_color=np.array(jitter[1:]),
#marker_colorscale=["blue"], # "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Packet Inter-Arrival Time",
xaxis_title="time",
yaxis_title="IAT (ms)",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_iat.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=4)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(jitter[1:]),
nbinsx=100)
)
fig.update_layout(
title="Packet Inter-Arrival Time histogram",
xaxis_title="IAT (ms)",
yaxis_title="count",
yaxis_type="log",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_iat_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_ipvd(int_reports, flow, starttime, duration):
timestamp = time.time()
delays = [(r['dstts'] - r['origts'])/MILION for r in int_reports]
last_delay = 0
ipvd = []
for d in delays:
ipvd.append(d - last_delay)
last_delay = d
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(ipvd[1:]),
mode='markers',
#marker_color=np.array(delays),
#marker_colorscale=["blue", "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Packet Delay Variation (PDV)",
xaxis_title="time",
yaxis_title="PDV (ms)",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_pvd.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=2)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(ipvd[1:]),
nbinsx=100)
)
fig.update_layout(
title="Packet Delay Variation histogram",
xaxis_title="PDV (ms)",
yaxis_title="count",
#xaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_dpv_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_packet_rate(flow, duration, starttime):
int_reports = get_flow_rate_from_influx(flow, duration, starttime)
timestamps = [r['time'] for r in int_reports]
rates = [r['count'] for r in int_reports]
timestamp = time.time()
fig = go.Figure(data = go.Scatter(
x= timestamps,
y= np.array(rates),
mode='markers',
marker_size=2)
)
fig.update_layout(
title="Packet rate",
xaxis_title="time",
yaxis_title="Pkt/ms",
template='plotly_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_rate.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
save(flow+"_rate", starttime, duration, int_reports)
def save(flow, starttime, duration, int_reports):
filename = "flow_%s_%s_%sms.csv" % (flow, starttime.replace(':', '.'), duration)
df = pd.DataFrame(int_reports)
df.to_csv(filename, index=False)
os.system("zip %s %s" % (filename.replace('csv', 'zip'), filename))
os.system("rm %s" % filename)
# https://plotly.com/python/static-image-export/
#https://plotly.com/python/datashader/
for duration in [1000*60]:
starttime = "2020-12-08T17:30:00.00"
#starttime = datetime.utcnow().isoformat()
#flow="192.168.3.11_195.113.172.46"
flow="192.168.127.12_195.113.172.46"
#duration=1000 #miliseconds
#int_reports = get_flow_from_influx(flow=flow, duration=duration, starttime=starttime)
#int_reports = get_flow_from_influx(flow=flow, duration=duration)
create_packet_rate(flow, duration, starttime)
#if len(int_reports) > 0:
#save(flow, starttime, duration, int_reports)
#~ create_delay(int_reports, flow, starttime, duration)
#~ create_jitter(int_reports, flow, starttime, duration)
#~ create_ipvd(int_reports, flow, starttime, duration)
| from influxdb import InfluxDBClient
import time
import pandas as pd
import numpy as np
from pprint import pprint
import plotly.graph_objs as go
import plotly.io as pio
from datetime import datetime, timedelta
import pandas as pd
import os
host = 'hs-04.ipa.psnc.pl'
port = 8086
user = 'root'
password = '<PASSWORD>'
dbname = 'int_telemetry_db'
dbuser = 'int'
dbuser_password = '<PASSWORD>'
pio.kaleido.scope.default_width = 1000
pio.kaleido.scope.default_height = 0.6 * pio.kaleido.scope.default_width
client = InfluxDBClient(host, port, user, password, dbname)
def get_datatime(int_reports):
if len(int_reports) == 0:
return []
timestamps = [r['dstts'] for r in int_reports]
timestamps = pd.DataFrame(timestamps, dtype='float64')
return timestamps[0].astype('datetime64[ns]').tolist()
def get_flow_from_influx(flow, duration, starttime=''):
if flow is None:
return []
timestamp = time.time()
src_ip, dst_ip = flow.split('_')
influxdb = InfluxDBClient(host=host, port=port, username=user, password=password, database=dbname)
if starttime == "":
q = '''SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > now() - %sms''' % (src_ip, dst_ip, duration)
else:
endtime = datetime.strptime(starttime, "%Y-%m-%dT%H:%M:%S.%f") + timedelta(milliseconds=duration)
q = '''SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > '%s' and time < '%s' ''' % (src_ip, dst_ip, starttime+'Z', endtime.isoformat()+'Z')
print(q)
query_resp = influxdb.query(q)
int_reports = list(query_resp.get_points())
pprint(int_reports[0])
print("query time is", time.time()-timestamp)
print("Numer of points queries", len(int_reports))
return int_reports
def get_flow_rate_from_influx(flow, duration, starttime=''):
if flow is None:
return []
timestamp = time.time()
src_ip, dst_ip = flow.split('_')
influxdb = InfluxDBClient(host=host, port=port, username=user, password=password, database=dbname)
if starttime == "":
q = ''' SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > now() - %sms GROUP BY time(1ms) ''' % (src_ip, dst_ip, duration)
else:
endtime = datetime.strptime(starttime, "%Y-%m-%dT%H:%M:%S.%f") + timedelta(milliseconds=duration)
q = ''' SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > '%s' and time < '%s' GROUP BY time(1ms) ''' % (src_ip, dst_ip, starttime+'Z', endtime.isoformat()+'Z')
print(q)
query_resp = influxdb.query(q)
int_reports = list(query_resp.get_points())
pprint(int_reports[0])
print("query time is", time.time()-timestamp)
print("Numer of points queries", len(int_reports))
return int_reports
MILION = 1000000.0
def create_delay(int_reports, flow, starttime, duration):
timestamp = time.time()
delays = [(r['dstts'] - r['origts'])/MILION for r in int_reports]
min_delay = min(delays)
max_dalay = max(delays)
shift = 10 #(max_dalay - min_delay)/10.0
delays = [d - min_delay + shift for d in delays]
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(delays),
mode='markers',
#marker_color=np.array(delays),
#marker_colorscale=["blue", "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Timestamps difference",
xaxis_title="time",
yaxis_title="diff (ms)",
#yaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_delay.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=4)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(delays),
nbinsx=100)
)
fig.update_layout(
title="Timestamps difference histogram",
xaxis_title="diff (ms)",
yaxis_title="count",
#xaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_delay_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_jitter(int_reports, flow, starttime, duration):
timestamp = time.time()
last_dstts = 0
jitter = []
for r in int_reports:
jitter.append(r['dstts']/MILION - last_dstts)
last_dstts = r['dstts']/MILION
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(jitter[1:]),
mode='markers',
#marker_color=np.array(jitter[1:]),
#marker_colorscale=["blue"], # "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Packet Inter-Arrival Time",
xaxis_title="time",
yaxis_title="IAT (ms)",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_iat.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=4)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(jitter[1:]),
nbinsx=100)
)
fig.update_layout(
title="Packet Inter-Arrival Time histogram",
xaxis_title="IAT (ms)",
yaxis_title="count",
yaxis_type="log",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_iat_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_ipvd(int_reports, flow, starttime, duration):
timestamp = time.time()
delays = [(r['dstts'] - r['origts'])/MILION for r in int_reports]
last_delay = 0
ipvd = []
for d in delays:
ipvd.append(d - last_delay)
last_delay = d
fig = go.Figure(data = go.Scatter(
x= get_datatime(int_reports),
y= np.array(ipvd[1:]),
mode='markers',
#marker_color=np.array(delays),
#marker_colorscale=["blue", "green", "red"], # 'Rainbow',
marker_size=2)
)
fig.update_layout(
title="Packet Delay Variation (PDV)",
xaxis_title="time",
yaxis_title="PDV (ms)",
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_pvd.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=2)
print("png time is", time.time()-timestamp)
fig = go.Figure(data = go.Histogram(
x = np.array(ipvd[1:]),
nbinsx=100)
)
fig.update_layout(
title="Packet Delay Variation histogram",
xaxis_title="PDV (ms)",
yaxis_title="count",
#xaxis_tickformat='.1e',
template='plotly_white', #'simple_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_dpv_hist.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
def create_packet_rate(flow, duration, starttime):
int_reports = get_flow_rate_from_influx(flow, duration, starttime)
timestamps = [r['time'] for r in int_reports]
rates = [r['count'] for r in int_reports]
timestamp = time.time()
fig = go.Figure(data = go.Scatter(
x= timestamps,
y= np.array(rates),
mode='markers',
marker_size=2)
)
fig.update_layout(
title="Packet rate",
xaxis_title="time",
yaxis_title="Pkt/ms",
template='plotly_white'
)
print("scatter time is", time.time()-timestamp)
filename = "%s_%sms_flow_rate.png" % (starttime.replace(':', '.'), duration)
pio.write_image(fig, filename, scale=1)
print("png time is", time.time()-timestamp)
save(flow+"_rate", starttime, duration, int_reports)
def save(flow, starttime, duration, int_reports):
filename = "flow_%s_%s_%sms.csv" % (flow, starttime.replace(':', '.'), duration)
df = pd.DataFrame(int_reports)
df.to_csv(filename, index=False)
os.system("zip %s %s" % (filename.replace('csv', 'zip'), filename))
os.system("rm %s" % filename)
# https://plotly.com/python/static-image-export/
#https://plotly.com/python/datashader/
for duration in [1000*60]:
starttime = "2020-12-08T17:30:00.00"
#starttime = datetime.utcnow().isoformat()
#flow="192.168.3.11_195.113.172.46"
flow="192.168.127.12_195.113.172.46"
#duration=1000 #miliseconds
#int_reports = get_flow_from_influx(flow=flow, duration=duration, starttime=starttime)
#int_reports = get_flow_from_influx(flow=flow, duration=duration)
create_packet_rate(flow, duration, starttime)
#if len(int_reports) > 0:
#save(flow, starttime, duration, int_reports)
#~ create_delay(int_reports, flow, starttime, duration)
#~ create_jitter(int_reports, flow, starttime, duration)
#~ create_ipvd(int_reports, flow, starttime, duration)
| en | 0.382281 | SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > now() - %sms SELECT * FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' and time > '%s' and time < '%s' SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > now() - %sms GROUP BY time(1ms) SELECT count("dstts") FROM int_telemetry WHERE "srcip" = '%s' and "dstip" = '%s' AND time > '%s' and time < '%s' GROUP BY time(1ms) #(max_dalay - min_delay)/10.0 #marker_color=np.array(delays), #marker_colorscale=["blue", "green", "red"], # 'Rainbow', #yaxis_tickformat='.1e', #'simple_white' #xaxis_tickformat='.1e', #'simple_white' #marker_color=np.array(jitter[1:]), #marker_colorscale=["blue"], # "green", "red"], # 'Rainbow', #'simple_white' #'simple_white' #marker_color=np.array(delays), #marker_colorscale=["blue", "green", "red"], # 'Rainbow', #'simple_white' #xaxis_tickformat='.1e', #'simple_white' # https://plotly.com/python/static-image-export/ #https://plotly.com/python/datashader/ #starttime = datetime.utcnow().isoformat() #flow="192.168.3.11_195.113.172.46" #duration=1000 #miliseconds #int_reports = get_flow_from_influx(flow=flow, duration=duration, starttime=starttime) #int_reports = get_flow_from_influx(flow=flow, duration=duration) #if len(int_reports) > 0: #save(flow, starttime, duration, int_reports) #~ create_delay(int_reports, flow, starttime, duration) #~ create_jitter(int_reports, flow, starttime, duration) #~ create_ipvd(int_reports, flow, starttime, duration) | 2.765938 | 3 |
reminderbot.py | DoggieLicc/ReminderFriend | 3 | 6615330 | import os
import sys
try:
os.chdir(os.path.dirname(sys.argv[0]))
except OSError:
pass
import discord
from classes import CustomBot
__cogs__ = ['help', 'remind', 'misc']
bot = CustomBot(case_insensitive=True,
command_prefix='$',
activity=discord.Game(name='Bot has been rewritten, please use $help'),
strip_after_prefix=True,
max_messages=None)
@bot.event
async def on_guild_remove(guild):
if not guild: return
async with bot.db.cursor() as cursor:
await cursor.execute('DELETE FROM prefixes WHERE guild_id = (?)', (guild.id,))
await bot.db.commit()
if __name__ == '__main__':
for cog in __cogs__:
bot.load_extension(f'cogs.{cog}')
bot.run('token-here')
| import os
import sys
try:
os.chdir(os.path.dirname(sys.argv[0]))
except OSError:
pass
import discord
from classes import CustomBot
__cogs__ = ['help', 'remind', 'misc']
bot = CustomBot(case_insensitive=True,
command_prefix='$',
activity=discord.Game(name='Bot has been rewritten, please use $help'),
strip_after_prefix=True,
max_messages=None)
@bot.event
async def on_guild_remove(guild):
if not guild: return
async with bot.db.cursor() as cursor:
await cursor.execute('DELETE FROM prefixes WHERE guild_id = (?)', (guild.id,))
await bot.db.commit()
if __name__ == '__main__':
for cog in __cogs__:
bot.load_extension(f'cogs.{cog}')
bot.run('token-here')
| none | 1 | 2.225371 | 2 | |
week_1/homework/find_prime_numbers.py | khh180cm/algorithm | 0 | 6615331 | <reponame>khh180cm/algorithm<filename>week_1/homework/find_prime_numbers.py<gh_stars>0
input_num = 2
def find_prime_list_under_number(number):
"""
number 미만의 소수들을 구하는 헬퍼 함수
ex) 7 -> [2, 3, 5]
:param number: 양의 정수
:return: 소수들로 이루어진 리스트
"""
prime_list = []
for divisor in range(2, number):
for divided_by in range(2, divisor):
# 나머지가 0일 때 -> 소수 조건 미충족
if divisor % divided_by == 0:
break
else:
prime_list.append(divisor)
return prime_list
result = find_prime_list_under_number(input_num)
print(result)
| input_num = 2
def find_prime_list_under_number(number):
"""
number 미만의 소수들을 구하는 헬퍼 함수
ex) 7 -> [2, 3, 5]
:param number: 양의 정수
:return: 소수들로 이루어진 리스트
"""
prime_list = []
for divisor in range(2, number):
for divided_by in range(2, divisor):
# 나머지가 0일 때 -> 소수 조건 미충족
if divisor % divided_by == 0:
break
else:
prime_list.append(divisor)
return prime_list
result = find_prime_list_under_number(input_num)
print(result) | ko | 1.000038 | number 미만의 소수들을 구하는 헬퍼 함수 ex) 7 -> [2, 3, 5] :param number: 양의 정수 :return: 소수들로 이루어진 리스트 # 나머지가 0일 때 -> 소수 조건 미충족 | 3.919611 | 4 |
nowtrade/neural_network.py | integracore2/NowTrade | 87 | 6615332 | """
Module that enables the use of neural networks with NowTrade.
"""
import cPickle
import numpy as np
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets.supervised import SupervisedDataSet
from pybrain.supervised.trainers.backprop import BackpropTrainer
from nowtrade import logger
# Networks
FEED_FORWARD_NETWORK = 0
RECURRENT_NETWORK = 1
# Datasets
SUPERVISED_DATASET = 0
SEQUENTIAL_DATASET = 1
CLASSIFICATION_DATASET = 2
SEQUENTIAL_CLASSIFICATION_DATASET = 3
IMPORTANCE_DATASET = 4
# Trainers
BACKPROP_TRAINER = 0
RPROP_TRAINER = 1
def load(network, dataset=None):
"""
Load a previously pickled neural network.
"""
network = cPickle.loads(network)
if dataset:
network.build_network(dataset, new=False)
return network
def load_from_file(filename, dataset=None):
"""
Load a neural network from a previous one saved to file.
"""
file_handler = open(filename, 'rb')
network = cPickle.load(file_handler)
file_handler.close()
if dataset:
network.build_network(dataset, new=False)
return network
class InvalidNetworkType(Exception):
"""
Exception raised when an invalid network type is specified.
"""
pass
class InvalidTrainerType(Exception):
"""
Exception raised when an invalid trainer type is specified.
"""
pass
class InvalidNetworkDatasetType(Exception):
"""
Exception raised when an invalid network dataset type is specified.
"""
pass
class InvalidDataset(Exception):
"""
Exception raised when a invalid dataset is specified.
"""
pass
class NeuralNetwork(object):
"""
The neural network class does all the heavy lifting to incorporate pybrain
neural networks into the NowTrade ecosystem.
"""
def __init__(self, train_data, prediction_data, network_type=FEED_FORWARD_NETWORK,
network_dataset_type=SUPERVISED_DATASET,
trainer_type=BACKPROP_TRAINER):
self.train_data = train_data
self.prediction_data = prediction_data
self.network_type = network_type
self.network_dataset_type = network_dataset_type
self.trainer_type = trainer_type
self.network = None
self.network_dataset = None
self.dataset = None
self.trainer = None
self.trained_iterations = 0
self.momentum = None
self.learning_rate = None
self.hidden_layers = None
self.prediction_window = None
self.logger = logger.Logger(self.__class__.__name__)
self.logger.info('train_data: %s prediction_data: %s, network_type: %s, \
network_dataset_type: %s, trainer_type: %s'
%(train_data, prediction_data, network_type, \
network_dataset_type, trainer_type))
def save(self):
"""
Returns the pickled trained/tested neural network as a string.
"""
return cPickle.dumps(self)
def save_to_file(self, filename):
"""
Saves a neural network to file for later use.
Look into pybrain.datasets.supervised.SupervisedDataSet.saveToFile()
http://pybrain.org/docs/api/datasets/superviseddataset.html
"""
file_handler = open(filename, 'wb')
cPickle.dump(self, file_handler)
file_handler.close()
def build_network(self, dataset, new=True, **kwargs):
"""
Builds a neural network using the dataset provided.
Expected keyword args:
- 'hidden_layers'
- 'prediction_window'
- 'learning_rate'
- 'momentum'
"""
self.hidden_layers = kwargs.get('hidden_layers', 3)
self.prediction_window = kwargs.get('prediction_window', 1)
self.learning_rate = kwargs.get('learning_rate', 0.1)
self.momentum = kwargs.get('momentum', 0.01)
if not new:
self.network.sorted = False
self.network.sortModules()
if self.network_dataset_type == SUPERVISED_DATASET:
self.ready_supervised_dataset(dataset)
else: raise InvalidNetworkDatasetType()
else:
if self.network_type == FEED_FORWARD_NETWORK:
self.network = buildNetwork(len(self.train_data), self.hidden_layers, 1)
else: raise InvalidNetworkType()
if self.network_dataset_type == SUPERVISED_DATASET:
self.ready_supervised_dataset(dataset)
else: raise InvalidNetworkDatasetType()
if self.trainer_type == BACKPROP_TRAINER:
self.trainer = BackpropTrainer(self.network,
learningrate=self.learning_rate,
momentum=self.momentum,
verbose=True)
self.trainer.setData(self.network_dataset)
else: raise InvalidTrainerType()
def ready_supervised_dataset(self, dataset):
"""
Ready the supervised dataset for training.
@TODO: Need to randomize the data being fed to the network.
See randomBatches() here: http://pybrain.org/docs/api/datasets/superviseddataset.html
"""
self.network_dataset = SupervisedDataSet(len(self.train_data), 1)
# Currently only supports log function for normalizing data
training_values = np.log(dataset.data_frame[self.train_data])
results = np.log(dataset.data_frame[self.prediction_data].shift(-self.prediction_window))
training_values['PREDICTION_%s' %self.prediction_data[0]] = results
training_values = training_values.dropna()
for _, row_data in enumerate(training_values.iterrows()):
_, data = row_data
sample = list(data[:-1])
result = [data[-1]]
self.network_dataset.addSample(sample, result)
def train(self, cycles=1):
"""
Trains the network the number of iteration specified in the cycles parameter.
"""
for _ in range(cycles):
res = self.trainer.train()
self.trained_iterations += 1
return res
def train_until_convergence(self, max_cycles=1000, continue_cycles=10,
validation_proportion=0.25):
"""
Wrapper around the pybrain BackpropTrainer trainUntilConvergence method.
@see: http://pybrain.org/docs/api/supervised/trainers.html
"""
self.trainer = \
self.trainer.trainUntilConvergence(maxEpochs=max_cycles,
continueEpochs=continue_cycles,
validationProportion=validation_proportion)
def _activate(self, data):
"""
Activates the network using the data specified.
Returns the network's prediction.
"""
return self.network.activate(data)[0]
def activate_all(self, data_frame):
"""
Activates the network for all values in the dataframe specified.
"""
dataframe = np.log(data_frame[self.train_data])
res = []
for _, row_data in enumerate(dataframe.iterrows()):
_, data = row_data
sample = list(data)
res.append(self._activate(sample))
return np.exp(res)
| """
Module that enables the use of neural networks with NowTrade.
"""
import cPickle
import numpy as np
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets.supervised import SupervisedDataSet
from pybrain.supervised.trainers.backprop import BackpropTrainer
from nowtrade import logger
# Networks
FEED_FORWARD_NETWORK = 0
RECURRENT_NETWORK = 1
# Datasets
SUPERVISED_DATASET = 0
SEQUENTIAL_DATASET = 1
CLASSIFICATION_DATASET = 2
SEQUENTIAL_CLASSIFICATION_DATASET = 3
IMPORTANCE_DATASET = 4
# Trainers
BACKPROP_TRAINER = 0
RPROP_TRAINER = 1
def load(network, dataset=None):
"""
Load a previously pickled neural network.
"""
network = cPickle.loads(network)
if dataset:
network.build_network(dataset, new=False)
return network
def load_from_file(filename, dataset=None):
"""
Load a neural network from a previous one saved to file.
"""
file_handler = open(filename, 'rb')
network = cPickle.load(file_handler)
file_handler.close()
if dataset:
network.build_network(dataset, new=False)
return network
class InvalidNetworkType(Exception):
"""
Exception raised when an invalid network type is specified.
"""
pass
class InvalidTrainerType(Exception):
"""
Exception raised when an invalid trainer type is specified.
"""
pass
class InvalidNetworkDatasetType(Exception):
"""
Exception raised when an invalid network dataset type is specified.
"""
pass
class InvalidDataset(Exception):
"""
Exception raised when a invalid dataset is specified.
"""
pass
class NeuralNetwork(object):
"""
The neural network class does all the heavy lifting to incorporate pybrain
neural networks into the NowTrade ecosystem.
"""
def __init__(self, train_data, prediction_data, network_type=FEED_FORWARD_NETWORK,
network_dataset_type=SUPERVISED_DATASET,
trainer_type=BACKPROP_TRAINER):
self.train_data = train_data
self.prediction_data = prediction_data
self.network_type = network_type
self.network_dataset_type = network_dataset_type
self.trainer_type = trainer_type
self.network = None
self.network_dataset = None
self.dataset = None
self.trainer = None
self.trained_iterations = 0
self.momentum = None
self.learning_rate = None
self.hidden_layers = None
self.prediction_window = None
self.logger = logger.Logger(self.__class__.__name__)
self.logger.info('train_data: %s prediction_data: %s, network_type: %s, \
network_dataset_type: %s, trainer_type: %s'
%(train_data, prediction_data, network_type, \
network_dataset_type, trainer_type))
def save(self):
"""
Returns the pickled trained/tested neural network as a string.
"""
return cPickle.dumps(self)
def save_to_file(self, filename):
"""
Saves a neural network to file for later use.
Look into pybrain.datasets.supervised.SupervisedDataSet.saveToFile()
http://pybrain.org/docs/api/datasets/superviseddataset.html
"""
file_handler = open(filename, 'wb')
cPickle.dump(self, file_handler)
file_handler.close()
def build_network(self, dataset, new=True, **kwargs):
"""
Builds a neural network using the dataset provided.
Expected keyword args:
- 'hidden_layers'
- 'prediction_window'
- 'learning_rate'
- 'momentum'
"""
self.hidden_layers = kwargs.get('hidden_layers', 3)
self.prediction_window = kwargs.get('prediction_window', 1)
self.learning_rate = kwargs.get('learning_rate', 0.1)
self.momentum = kwargs.get('momentum', 0.01)
if not new:
self.network.sorted = False
self.network.sortModules()
if self.network_dataset_type == SUPERVISED_DATASET:
self.ready_supervised_dataset(dataset)
else: raise InvalidNetworkDatasetType()
else:
if self.network_type == FEED_FORWARD_NETWORK:
self.network = buildNetwork(len(self.train_data), self.hidden_layers, 1)
else: raise InvalidNetworkType()
if self.network_dataset_type == SUPERVISED_DATASET:
self.ready_supervised_dataset(dataset)
else: raise InvalidNetworkDatasetType()
if self.trainer_type == BACKPROP_TRAINER:
self.trainer = BackpropTrainer(self.network,
learningrate=self.learning_rate,
momentum=self.momentum,
verbose=True)
self.trainer.setData(self.network_dataset)
else: raise InvalidTrainerType()
def ready_supervised_dataset(self, dataset):
"""
Ready the supervised dataset for training.
@TODO: Need to randomize the data being fed to the network.
See randomBatches() here: http://pybrain.org/docs/api/datasets/superviseddataset.html
"""
self.network_dataset = SupervisedDataSet(len(self.train_data), 1)
# Currently only supports log function for normalizing data
training_values = np.log(dataset.data_frame[self.train_data])
results = np.log(dataset.data_frame[self.prediction_data].shift(-self.prediction_window))
training_values['PREDICTION_%s' %self.prediction_data[0]] = results
training_values = training_values.dropna()
for _, row_data in enumerate(training_values.iterrows()):
_, data = row_data
sample = list(data[:-1])
result = [data[-1]]
self.network_dataset.addSample(sample, result)
def train(self, cycles=1):
"""
Trains the network the number of iteration specified in the cycles parameter.
"""
for _ in range(cycles):
res = self.trainer.train()
self.trained_iterations += 1
return res
def train_until_convergence(self, max_cycles=1000, continue_cycles=10,
validation_proportion=0.25):
"""
Wrapper around the pybrain BackpropTrainer trainUntilConvergence method.
@see: http://pybrain.org/docs/api/supervised/trainers.html
"""
self.trainer = \
self.trainer.trainUntilConvergence(maxEpochs=max_cycles,
continueEpochs=continue_cycles,
validationProportion=validation_proportion)
def _activate(self, data):
"""
Activates the network using the data specified.
Returns the network's prediction.
"""
return self.network.activate(data)[0]
def activate_all(self, data_frame):
"""
Activates the network for all values in the dataframe specified.
"""
dataframe = np.log(data_frame[self.train_data])
res = []
for _, row_data in enumerate(dataframe.iterrows()):
_, data = row_data
sample = list(data)
res.append(self._activate(sample))
return np.exp(res)
| en | 0.753044 | Module that enables the use of neural networks with NowTrade. # Networks # Datasets # Trainers Load a previously pickled neural network. Load a neural network from a previous one saved to file. Exception raised when an invalid network type is specified. Exception raised when an invalid trainer type is specified. Exception raised when an invalid network dataset type is specified. Exception raised when a invalid dataset is specified. The neural network class does all the heavy lifting to incorporate pybrain neural networks into the NowTrade ecosystem. Returns the pickled trained/tested neural network as a string. Saves a neural network to file for later use. Look into pybrain.datasets.supervised.SupervisedDataSet.saveToFile() http://pybrain.org/docs/api/datasets/superviseddataset.html Builds a neural network using the dataset provided. Expected keyword args: - 'hidden_layers' - 'prediction_window' - 'learning_rate' - 'momentum' Ready the supervised dataset for training. @TODO: Need to randomize the data being fed to the network. See randomBatches() here: http://pybrain.org/docs/api/datasets/superviseddataset.html # Currently only supports log function for normalizing data Trains the network the number of iteration specified in the cycles parameter. Wrapper around the pybrain BackpropTrainer trainUntilConvergence method. @see: http://pybrain.org/docs/api/supervised/trainers.html Activates the network using the data specified. Returns the network's prediction. Activates the network for all values in the dataframe specified. | 3.090055 | 3 |
mgnify_util/parser/interproscan_parser.py | EBI-Metagenomics/ebi-metagenomics-libs | 0 | 6615333 | <reponame>EBI-Metagenomics/ebi-metagenomics-libs
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 EMBL - European Bioinformatics Institute
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
__author__ = "<NAME>"
__version__ = "0.1"
__status__ = "Development"
class FunctionalAnnotation:
"""
Describes a function annotation for instance InterPro:IPR004361 or GO:0004462.
"""
def __init__(self, database, identifier):
self.database = database
self.identifier = identifier
def __hash__(self):
return hash((self.database, self.identifier))
def __eq__(self, other):
return self.database == other.database and self.identifier == other.identifier
class Annotations:
"""
Describes a set function annotation which can be easily added by the add_annotation function.
"""
def __init__(self):
self._annotations = set()
def add_annotation(self, database: str, identifier: str):
annotation = FunctionalAnnotation(database, identifier)
self._annotations.add(annotation)
@staticmethod
def _get_identifiers(annotations: set) -> set:
"""
Converts set of objects into a set of string.
:param annotations:
:return:
"""
result = set()
for annotation in annotations:
result.add(annotation.identifier)
return result
def get_all_annotations(self):
return self._annotations
class InterProScanTSVResultParser:
"""
Parses TSV formatted input file and stores mappings between
sequence accessions and functional annotations.
"""
def __init__(self, input_tsv_file):
self.input_tsv_file = input_tsv_file
self.annotations = {} # map of sequence accessions and functional annotations
def parse_file(self):
with open(self.input_tsv_file) as file:
rows = csv.reader(file, delimiter="\t", quotechar='"')
for row in rows:
seq_id = row[0]
if seq_id not in self.annotations:
self.annotations[seq_id] = Annotations()
for x in range(11, len(row)):
if "IPR" in row[x]:
self.annotations.get(seq_id).add_annotation("InterPro",
row[x])
elif "GO" in row[x]:
go_entries = row[x].split('|')
for go_entry in go_entries:
self.annotations.get(seq_id). \
add_annotation("GO",
go_entry.replace('GO:', ''))
elif "KEGG" in row[x]:
pathway_entries = row[x].split('|')
for pathway_entry in pathway_entries:
if "KEGG" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"KEGG",
pathway_entry.replace('KEGG: ', ''))
elif "MetaCyc" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"MetaCyc",
pathway_entry.replace('MetaCyc: ', ''))
elif "Reactome" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"Reactome",
pathway_entry.replace('Reactome: ', ''))
| #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 EMBL - European Bioinformatics Institute
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
__author__ = "<NAME>"
__version__ = "0.1"
__status__ = "Development"
class FunctionalAnnotation:
"""
Describes a function annotation for instance InterPro:IPR004361 or GO:0004462.
"""
def __init__(self, database, identifier):
self.database = database
self.identifier = identifier
def __hash__(self):
return hash((self.database, self.identifier))
def __eq__(self, other):
return self.database == other.database and self.identifier == other.identifier
class Annotations:
"""
Describes a set function annotation which can be easily added by the add_annotation function.
"""
def __init__(self):
self._annotations = set()
def add_annotation(self, database: str, identifier: str):
annotation = FunctionalAnnotation(database, identifier)
self._annotations.add(annotation)
@staticmethod
def _get_identifiers(annotations: set) -> set:
"""
Converts set of objects into a set of string.
:param annotations:
:return:
"""
result = set()
for annotation in annotations:
result.add(annotation.identifier)
return result
def get_all_annotations(self):
return self._annotations
class InterProScanTSVResultParser:
"""
Parses TSV formatted input file and stores mappings between
sequence accessions and functional annotations.
"""
def __init__(self, input_tsv_file):
self.input_tsv_file = input_tsv_file
self.annotations = {} # map of sequence accessions and functional annotations
def parse_file(self):
with open(self.input_tsv_file) as file:
rows = csv.reader(file, delimiter="\t", quotechar='"')
for row in rows:
seq_id = row[0]
if seq_id not in self.annotations:
self.annotations[seq_id] = Annotations()
for x in range(11, len(row)):
if "IPR" in row[x]:
self.annotations.get(seq_id).add_annotation("InterPro",
row[x])
elif "GO" in row[x]:
go_entries = row[x].split('|')
for go_entry in go_entries:
self.annotations.get(seq_id). \
add_annotation("GO",
go_entry.replace('GO:', ''))
elif "KEGG" in row[x]:
pathway_entries = row[x].split('|')
for pathway_entry in pathway_entries:
if "KEGG" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"KEGG",
pathway_entry.replace('KEGG: ', ''))
elif "MetaCyc" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"MetaCyc",
pathway_entry.replace('MetaCyc: ', ''))
elif "Reactome" in pathway_entry:
self.annotations.get(seq_id).add_annotation(
"Reactome",
pathway_entry.replace('Reactome: ', '')) | en | 0.808329 | #!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright 2018 EMBL - European Bioinformatics Institute # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Describes a function annotation for instance InterPro:IPR004361 or GO:0004462. Describes a set function annotation which can be easily added by the add_annotation function. Converts set of objects into a set of string. :param annotations: :return: Parses TSV formatted input file and stores mappings between sequence accessions and functional annotations. # map of sequence accessions and functional annotations | 2.362571 | 2 |
Library/libData/models.py | himangshuishere/Library-Management-System | 1 | 6615334 | import uuid
from django.urls import reverse
from django.db import models
# Create your models here.
class UsersModel(models.Model):
userId = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
userName = models.CharField(max_length=50)
userEmail = models.EmailField(max_length=50, unique=True)
userPassword = models.CharField(max_length=50)
def __str__(self):
return self.userName
class Meta:
verbose_name_plural = "Users"
class BooksModel(models.Model):
bookID = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
bookName = models.CharField(max_length=50)
bookAuthor = models.CharField(max_length=50)
bookDescription = models.TextField(max_length=500)
def __str__(self):
return self.bookName
def get_absolute_url(self):
return reverse("book-Detail", args=[self.bookID])
class Meta:
verbose_name_plural = "Books"
class IssuedBooksModel(models.Model):
issuedBy = models.ForeignKey(UsersModel, on_delete=models.CASCADE)
issuedBook = models.ForeignKey(BooksModel, on_delete=models.CASCADE)
issuedDate = models.DateField(auto_now_add=True)
def __str__(self):
return self.issuedBook.bookName
class Meta:
verbose_name_plural = "Issued Books" | import uuid
from django.urls import reverse
from django.db import models
# Create your models here.
class UsersModel(models.Model):
userId = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
userName = models.CharField(max_length=50)
userEmail = models.EmailField(max_length=50, unique=True)
userPassword = models.CharField(max_length=50)
def __str__(self):
return self.userName
class Meta:
verbose_name_plural = "Users"
class BooksModel(models.Model):
bookID = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
bookName = models.CharField(max_length=50)
bookAuthor = models.CharField(max_length=50)
bookDescription = models.TextField(max_length=500)
def __str__(self):
return self.bookName
def get_absolute_url(self):
return reverse("book-Detail", args=[self.bookID])
class Meta:
verbose_name_plural = "Books"
class IssuedBooksModel(models.Model):
issuedBy = models.ForeignKey(UsersModel, on_delete=models.CASCADE)
issuedBook = models.ForeignKey(BooksModel, on_delete=models.CASCADE)
issuedDate = models.DateField(auto_now_add=True)
def __str__(self):
return self.issuedBook.bookName
class Meta:
verbose_name_plural = "Issued Books" | en | 0.963489 | # Create your models here. | 2.69804 | 3 |
CIM14/IEC61968/AssetModels/WindingInfo.py | MaximeBaudette/PyCIM | 58 | 6615335 | <filename>CIM14/IEC61968/AssetModels/WindingInfo.py
# Copyright (C) 2010-2011 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from CIM14.IEC61970.Core.IdentifiedObject import IdentifiedObject
class WindingInfo(IdentifiedObject):
"""Winding data.
"""
def __init__(self, connectionKind="Yn", r=0.0, phaseAngle=0, emergencyS=0.0, ratedU=0.0, insulationU=0.0, ratedS=0.0, sequenceNumber=0, shortTermS=0.0, Windings=None, WindingTests=None, TransformerInfo=None, ToWindingSpecs=None, *args, **kw_args):
"""Initialises a new 'WindingInfo' instance.
@param connectionKind: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn"
@param r: DC resistance of this winding.
@param phaseAngle: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11.
@param emergencyS: Apparent power that the winding can carry under emergency conditions.
@param ratedU: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings.
@param insulationU: Basic insulation level voltage rating.
@param ratedS: Normal apparent power rating of this winding.
@param sequenceNumber: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'.
@param shortTermS: Apparent power that this winding can carry for a short period of time.
@param Windings: All windings described by this winding data.
@param WindingTests: All winding tests during which voltage or current was applied to this winding.
@param TransformerInfo: Transformer data that this winding description is part of.
@param ToWindingSpecs: Tap steps and induced voltage/angle measurements for tests in which this winding was not excited.
"""
#: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn"
self.connectionKind = connectionKind
#: DC resistance of this winding.
self.r = r
#: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11.
self.phaseAngle = phaseAngle
#: Apparent power that the winding can carry under emergency conditions.
self.emergencyS = emergencyS
#: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings.
self.ratedU = ratedU
#: Basic insulation level voltage rating.
self.insulationU = insulationU
#: Normal apparent power rating of this winding.
self.ratedS = ratedS
#: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'.
self.sequenceNumber = sequenceNumber
#: Apparent power that this winding can carry for a short period of time.
self.shortTermS = shortTermS
self._Windings = []
self.Windings = [] if Windings is None else Windings
self._WindingTests = []
self.WindingTests = [] if WindingTests is None else WindingTests
self._TransformerInfo = None
self.TransformerInfo = TransformerInfo
self._ToWindingSpecs = []
self.ToWindingSpecs = [] if ToWindingSpecs is None else ToWindingSpecs
super(WindingInfo, self).__init__(*args, **kw_args)
_attrs = ["connectionKind", "r", "phaseAngle", "emergencyS", "ratedU", "insulationU", "ratedS", "sequenceNumber", "shortTermS"]
_attr_types = {"connectionKind": str, "r": float, "phaseAngle": int, "emergencyS": float, "ratedU": float, "insulationU": float, "ratedS": float, "sequenceNumber": int, "shortTermS": float}
_defaults = {"connectionKind": "Yn", "r": 0.0, "phaseAngle": 0, "emergencyS": 0.0, "ratedU": 0.0, "insulationU": 0.0, "ratedS": 0.0, "sequenceNumber": 0, "shortTermS": 0.0}
_enums = {"connectionKind": "WindingConnection"}
_refs = ["Windings", "WindingTests", "TransformerInfo", "ToWindingSpecs"]
_many_refs = ["Windings", "WindingTests", "ToWindingSpecs"]
def getWindings(self):
"""All windings described by this winding data.
"""
return self._Windings
def setWindings(self, value):
for x in self._Windings:
x.WindingInfo = None
for y in value:
y._WindingInfo = self
self._Windings = value
Windings = property(getWindings, setWindings)
def addWindings(self, *Windings):
for obj in Windings:
obj.WindingInfo = self
def removeWindings(self, *Windings):
for obj in Windings:
obj.WindingInfo = None
def getWindingTests(self):
"""All winding tests during which voltage or current was applied to this winding.
"""
return self._WindingTests
def setWindingTests(self, value):
for x in self._WindingTests:
x.FromWinding = None
for y in value:
y._FromWinding = self
self._WindingTests = value
WindingTests = property(getWindingTests, setWindingTests)
def addWindingTests(self, *WindingTests):
for obj in WindingTests:
obj.FromWinding = self
def removeWindingTests(self, *WindingTests):
for obj in WindingTests:
obj.FromWinding = None
def getTransformerInfo(self):
"""Transformer data that this winding description is part of.
"""
return self._TransformerInfo
def setTransformerInfo(self, value):
if self._TransformerInfo is not None:
filtered = [x for x in self.TransformerInfo.WindingInfos if x != self]
self._TransformerInfo._WindingInfos = filtered
self._TransformerInfo = value
if self._TransformerInfo is not None:
if self not in self._TransformerInfo._WindingInfos:
self._TransformerInfo._WindingInfos.append(self)
TransformerInfo = property(getTransformerInfo, setTransformerInfo)
def getToWindingSpecs(self):
"""Tap steps and induced voltage/angle measurements for tests in which this winding was not excited.
"""
return self._ToWindingSpecs
def setToWindingSpecs(self, value):
for x in self._ToWindingSpecs:
x.ToWinding = None
for y in value:
y._ToWinding = self
self._ToWindingSpecs = value
ToWindingSpecs = property(getToWindingSpecs, setToWindingSpecs)
def addToWindingSpecs(self, *ToWindingSpecs):
for obj in ToWindingSpecs:
obj.ToWinding = self
def removeToWindingSpecs(self, *ToWindingSpecs):
for obj in ToWindingSpecs:
obj.ToWinding = None
| <filename>CIM14/IEC61968/AssetModels/WindingInfo.py
# Copyright (C) 2010-2011 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from CIM14.IEC61970.Core.IdentifiedObject import IdentifiedObject
class WindingInfo(IdentifiedObject):
"""Winding data.
"""
def __init__(self, connectionKind="Yn", r=0.0, phaseAngle=0, emergencyS=0.0, ratedU=0.0, insulationU=0.0, ratedS=0.0, sequenceNumber=0, shortTermS=0.0, Windings=None, WindingTests=None, TransformerInfo=None, ToWindingSpecs=None, *args, **kw_args):
"""Initialises a new 'WindingInfo' instance.
@param connectionKind: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn"
@param r: DC resistance of this winding.
@param phaseAngle: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11.
@param emergencyS: Apparent power that the winding can carry under emergency conditions.
@param ratedU: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings.
@param insulationU: Basic insulation level voltage rating.
@param ratedS: Normal apparent power rating of this winding.
@param sequenceNumber: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'.
@param shortTermS: Apparent power that this winding can carry for a short period of time.
@param Windings: All windings described by this winding data.
@param WindingTests: All winding tests during which voltage or current was applied to this winding.
@param TransformerInfo: Transformer data that this winding description is part of.
@param ToWindingSpecs: Tap steps and induced voltage/angle measurements for tests in which this winding was not excited.
"""
#: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn"
self.connectionKind = connectionKind
#: DC resistance of this winding.
self.r = r
#: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11.
self.phaseAngle = phaseAngle
#: Apparent power that the winding can carry under emergency conditions.
self.emergencyS = emergencyS
#: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings.
self.ratedU = ratedU
#: Basic insulation level voltage rating.
self.insulationU = insulationU
#: Normal apparent power rating of this winding.
self.ratedS = ratedS
#: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'.
self.sequenceNumber = sequenceNumber
#: Apparent power that this winding can carry for a short period of time.
self.shortTermS = shortTermS
self._Windings = []
self.Windings = [] if Windings is None else Windings
self._WindingTests = []
self.WindingTests = [] if WindingTests is None else WindingTests
self._TransformerInfo = None
self.TransformerInfo = TransformerInfo
self._ToWindingSpecs = []
self.ToWindingSpecs = [] if ToWindingSpecs is None else ToWindingSpecs
super(WindingInfo, self).__init__(*args, **kw_args)
_attrs = ["connectionKind", "r", "phaseAngle", "emergencyS", "ratedU", "insulationU", "ratedS", "sequenceNumber", "shortTermS"]
_attr_types = {"connectionKind": str, "r": float, "phaseAngle": int, "emergencyS": float, "ratedU": float, "insulationU": float, "ratedS": float, "sequenceNumber": int, "shortTermS": float}
_defaults = {"connectionKind": "Yn", "r": 0.0, "phaseAngle": 0, "emergencyS": 0.0, "ratedU": 0.0, "insulationU": 0.0, "ratedS": 0.0, "sequenceNumber": 0, "shortTermS": 0.0}
_enums = {"connectionKind": "WindingConnection"}
_refs = ["Windings", "WindingTests", "TransformerInfo", "ToWindingSpecs"]
_many_refs = ["Windings", "WindingTests", "ToWindingSpecs"]
def getWindings(self):
"""All windings described by this winding data.
"""
return self._Windings
def setWindings(self, value):
for x in self._Windings:
x.WindingInfo = None
for y in value:
y._WindingInfo = self
self._Windings = value
Windings = property(getWindings, setWindings)
def addWindings(self, *Windings):
for obj in Windings:
obj.WindingInfo = self
def removeWindings(self, *Windings):
for obj in Windings:
obj.WindingInfo = None
def getWindingTests(self):
"""All winding tests during which voltage or current was applied to this winding.
"""
return self._WindingTests
def setWindingTests(self, value):
for x in self._WindingTests:
x.FromWinding = None
for y in value:
y._FromWinding = self
self._WindingTests = value
WindingTests = property(getWindingTests, setWindingTests)
def addWindingTests(self, *WindingTests):
for obj in WindingTests:
obj.FromWinding = self
def removeWindingTests(self, *WindingTests):
for obj in WindingTests:
obj.FromWinding = None
def getTransformerInfo(self):
"""Transformer data that this winding description is part of.
"""
return self._TransformerInfo
def setTransformerInfo(self, value):
if self._TransformerInfo is not None:
filtered = [x for x in self.TransformerInfo.WindingInfos if x != self]
self._TransformerInfo._WindingInfos = filtered
self._TransformerInfo = value
if self._TransformerInfo is not None:
if self not in self._TransformerInfo._WindingInfos:
self._TransformerInfo._WindingInfos.append(self)
TransformerInfo = property(getTransformerInfo, setTransformerInfo)
def getToWindingSpecs(self):
"""Tap steps and induced voltage/angle measurements for tests in which this winding was not excited.
"""
return self._ToWindingSpecs
def setToWindingSpecs(self, value):
for x in self._ToWindingSpecs:
x.ToWinding = None
for y in value:
y._ToWinding = self
self._ToWindingSpecs = value
ToWindingSpecs = property(getToWindingSpecs, setToWindingSpecs)
def addToWindingSpecs(self, *ToWindingSpecs):
for obj in ToWindingSpecs:
obj.ToWinding = self
def removeToWindingSpecs(self, *ToWindingSpecs):
for obj in ToWindingSpecs:
obj.ToWinding = None
| en | 0.875883 | # Copyright (C) 2010-2011 <NAME> # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. Winding data. Initialises a new 'WindingInfo' instance. @param connectionKind: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn" @param r: DC resistance of this winding. @param phaseAngle: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11. @param emergencyS: Apparent power that the winding can carry under emergency conditions. @param ratedU: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings. @param insulationU: Basic insulation level voltage rating. @param ratedS: Normal apparent power rating of this winding. @param sequenceNumber: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'. @param shortTermS: Apparent power that this winding can carry for a short period of time. @param Windings: All windings described by this winding data. @param WindingTests: All winding tests during which voltage or current was applied to this winding. @param TransformerInfo: Transformer data that this winding description is part of. @param ToWindingSpecs: Tap steps and induced voltage/angle measurements for tests in which this winding was not excited. #: Kind of connection of this winding. Values are: "Yn", "Y", "D", "I", "Z", "A", "Zn" #: DC resistance of this winding. #: Winding phase angle where 360 degrees are represented with clock hours, so the valid values are {0, ..., 11}. For example, to express winding code 'Dyn11', set attributes as follows: 'connectionKind' = Yn and 'phaseAngle' = 11. #: Apparent power that the winding can carry under emergency conditions. #: Rated voltage of this winding: phase-phase for three-phase windings, and either phase-phase or phase-neutral for single-phase windings. #: Basic insulation level voltage rating. #: Normal apparent power rating of this winding. #: Sequence number for this winding, corresponding to the winding's order in the TransformerBank.vectorGroup attribute. Highest voltage winding should be '1'. #: Apparent power that this winding can carry for a short period of time. All windings described by this winding data. All winding tests during which voltage or current was applied to this winding. Transformer data that this winding description is part of. Tap steps and induced voltage/angle measurements for tests in which this winding was not excited. | 2.037833 | 2 |
Scraper/pull_books.py | ColinB19/BookRecommender | 1 | 6615336 | <reponame>ColinB19/BookRecommender<filename>Scraper/pull_books.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 14 18:51:50 2020
@author: colin
This will be a simple script that will just pull all of the book titles,
authors, and numbers from a single list on goodreads. Check out
https://www.goodreads.com/list to find a list you want to scrape! I will
add the functionality to scrape the entire list (which requires going from
page to page) at a later date. This is just the first step in building a
GoodReads webscraper.
Also check out https://github.com/OmarEinea/GoodReadsScraper for a
fully thought out scraper! I will be using this as a reference throughout
this process.
We will use BeautifulSoup, requests, and pandas!
pip install beautifulsoup4
pip install requests
"""
import requests
from bs4 import BeautifulSoup
import pandas as pd
#setting up scraper
URL = 'https://www.goodreads.com/list/show/50.The_Best_Epic_Fantasy_fiction?page=1'
page = requests.get(URL)
number_of_pages = 35
soup = BeautifulSoup(page.content,'html.parser')
results = soup.find(id = 'all_votes')
# I'm just getting the elements I want here, only the Title, Author, rating,
# and id are necessary.
books = results.find_all('tr')
scraped = []
scraped_collection = [] #this is so I can go back and pull the whole series of individual books later
for el in books :
title_elem = el.find('a', class_='bookTitle')
author_elem = el.find('a', class_='authorName')
av_rating_elem = el.find('span', class_='minirating')
url_elem = el.find('div', class_ = 'js-tooltipTrigger tooltipTrigger')['data-resource-id']
# This just checks for unique books. I don't want to recommend series
# or bodies of work in the final product, just books.
if None in (title_elem, author_elem , av_rating_elem, url_elem):
continue
if ('Boxed Set' in title_elem.text.strip()
or 'Collection' in title_elem.text.strip()
or 'Anthology' in title_elem.text.strip()
or 'Complete Set' in title_elem.text.strip()):
scraped_collection.append([url_elem,
title_elem.text.strip(),
author_elem.text.strip(),
av_rating_elem.text.strip()])
continue
scraped.append([url_elem,
title_elem.text.strip(),
author_elem.text.strip(),
av_rating_elem.text.strip()])
scraped_clean = []
for el in scraped:
ID = int(el[0])
# This just checks if the element is a series or not
if ('(' in el[1]):
title = el[1].split('(')[0][:-1]
series_name = el[1].split('(')[1].split('#')[0][:-2]
# Again, filtering out enitre series. I'm only keeping individual books.
try:
volume = int(el[1].split('(')[1].split('#')[1][:-1])
except ValueError:
continue
else:
title = el[1]
series_name = 'Stand Alone Novel'
volume = '0'
author = el[2]
rate_string = el[3].split(' ')
temp = []
# the ratings string has an non-constant structure, this fixes that.
for word in rate_string:
try:
temp.append(float(word.replace(',','')))
except ValueError:
pass
av_rating = temp[0]
num_of_ratings = int(temp[1])
scraped_clean.append([ID,
title,
author,
series_name,
volume,
av_rating,
num_of_ratings])
#%%
print(scraped_collection)
#%%
# this is just for ease of exporting, also for visualization if you wanted to add that.
df = pd.DataFrame(scraped_clean,columns = ['id',
'Title',
'Author',
'Series Title',
'Volume No.',
'Av. Rating',
'No. of Reviews'])
df.to_csv('Data/gr_book_list_no_'+URL.split('/')[-1]+'.csv',index = False) | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 14 18:51:50 2020
@author: colin
This will be a simple script that will just pull all of the book titles,
authors, and numbers from a single list on goodreads. Check out
https://www.goodreads.com/list to find a list you want to scrape! I will
add the functionality to scrape the entire list (which requires going from
page to page) at a later date. This is just the first step in building a
GoodReads webscraper.
Also check out https://github.com/OmarEinea/GoodReadsScraper for a
fully thought out scraper! I will be using this as a reference throughout
this process.
We will use BeautifulSoup, requests, and pandas!
pip install beautifulsoup4
pip install requests
"""
import requests
from bs4 import BeautifulSoup
import pandas as pd
#setting up scraper
URL = 'https://www.goodreads.com/list/show/50.The_Best_Epic_Fantasy_fiction?page=1'
page = requests.get(URL)
number_of_pages = 35
soup = BeautifulSoup(page.content,'html.parser')
results = soup.find(id = 'all_votes')
# I'm just getting the elements I want here, only the Title, Author, rating,
# and id are necessary.
books = results.find_all('tr')
scraped = []
scraped_collection = [] #this is so I can go back and pull the whole series of individual books later
for el in books :
title_elem = el.find('a', class_='bookTitle')
author_elem = el.find('a', class_='authorName')
av_rating_elem = el.find('span', class_='minirating')
url_elem = el.find('div', class_ = 'js-tooltipTrigger tooltipTrigger')['data-resource-id']
# This just checks for unique books. I don't want to recommend series
# or bodies of work in the final product, just books.
if None in (title_elem, author_elem , av_rating_elem, url_elem):
continue
if ('Boxed Set' in title_elem.text.strip()
or 'Collection' in title_elem.text.strip()
or 'Anthology' in title_elem.text.strip()
or 'Complete Set' in title_elem.text.strip()):
scraped_collection.append([url_elem,
title_elem.text.strip(),
author_elem.text.strip(),
av_rating_elem.text.strip()])
continue
scraped.append([url_elem,
title_elem.text.strip(),
author_elem.text.strip(),
av_rating_elem.text.strip()])
scraped_clean = []
for el in scraped:
ID = int(el[0])
# This just checks if the element is a series or not
if ('(' in el[1]):
title = el[1].split('(')[0][:-1]
series_name = el[1].split('(')[1].split('#')[0][:-2]
# Again, filtering out enitre series. I'm only keeping individual books.
try:
volume = int(el[1].split('(')[1].split('#')[1][:-1])
except ValueError:
continue
else:
title = el[1]
series_name = 'Stand Alone Novel'
volume = '0'
author = el[2]
rate_string = el[3].split(' ')
temp = []
# the ratings string has an non-constant structure, this fixes that.
for word in rate_string:
try:
temp.append(float(word.replace(',','')))
except ValueError:
pass
av_rating = temp[0]
num_of_ratings = int(temp[1])
scraped_clean.append([ID,
title,
author,
series_name,
volume,
av_rating,
num_of_ratings])
#%%
print(scraped_collection)
#%%
# this is just for ease of exporting, also for visualization if you wanted to add that.
df = pd.DataFrame(scraped_clean,columns = ['id',
'Title',
'Author',
'Series Title',
'Volume No.',
'Av. Rating',
'No. of Reviews'])
df.to_csv('Data/gr_book_list_no_'+URL.split('/')[-1]+'.csv',index = False) | en | 0.920092 | #!/usr/bin/env python3 # -*- coding: utf-8 -*- Created on Mon Sep 14 18:51:50 2020 @author: colin This will be a simple script that will just pull all of the book titles, authors, and numbers from a single list on goodreads. Check out https://www.goodreads.com/list to find a list you want to scrape! I will add the functionality to scrape the entire list (which requires going from page to page) at a later date. This is just the first step in building a GoodReads webscraper. Also check out https://github.com/OmarEinea/GoodReadsScraper for a fully thought out scraper! I will be using this as a reference throughout this process. We will use BeautifulSoup, requests, and pandas! pip install beautifulsoup4 pip install requests #setting up scraper # I'm just getting the elements I want here, only the Title, Author, rating, # and id are necessary. #this is so I can go back and pull the whole series of individual books later # This just checks for unique books. I don't want to recommend series # or bodies of work in the final product, just books. # This just checks if the element is a series or not # Again, filtering out enitre series. I'm only keeping individual books. # the ratings string has an non-constant structure, this fixes that. #%% #%% # this is just for ease of exporting, also for visualization if you wanted to add that. | 3.499482 | 3 |
src/wavelet-FE/class_weighted_bce_loss.py | Omekaago101/Intracranial-Hemorrhage-Classification | 22 | 6615337 | import torch
import torch.nn as nn
import torch.nn.functional as F
class WeightedBCEWithLogitsLoss(nn.Module):
"""
Log-loss for RSNA Intracranial Hemorhage Competition.
Args:
class_weights (tensor): weights for 6 classes any, intraparenchymal,
intraventricular, subarachnoid, subdural, epidural.
reduction (str): Specifies the reduction to apply to the output.
"""
def __init__(self, class_weights, reduction='none'):
super(WeightedBCEWithLogitsLoss, self).__init__()
self.class_weights = class_weights
self.reduction = reduction
def forward(self, input, target):
self.class_weights = self.class_weights.to(input.device)
if input.dtype == torch.float16:
self.class_weights = self.class_weights.half()
class_losses = F.binary_cross_entropy_with_logits(input, target, reduction='none')
class_losses = class_losses.mean(0) * self.class_weights
if self.reduction == 'none':
return class_losses
elif self.reduction == 'mean':
return class_losses.sum() / self.class_weights.sum()
| import torch
import torch.nn as nn
import torch.nn.functional as F
class WeightedBCEWithLogitsLoss(nn.Module):
"""
Log-loss for RSNA Intracranial Hemorhage Competition.
Args:
class_weights (tensor): weights for 6 classes any, intraparenchymal,
intraventricular, subarachnoid, subdural, epidural.
reduction (str): Specifies the reduction to apply to the output.
"""
def __init__(self, class_weights, reduction='none'):
super(WeightedBCEWithLogitsLoss, self).__init__()
self.class_weights = class_weights
self.reduction = reduction
def forward(self, input, target):
self.class_weights = self.class_weights.to(input.device)
if input.dtype == torch.float16:
self.class_weights = self.class_weights.half()
class_losses = F.binary_cross_entropy_with_logits(input, target, reduction='none')
class_losses = class_losses.mean(0) * self.class_weights
if self.reduction == 'none':
return class_losses
elif self.reduction == 'mean':
return class_losses.sum() / self.class_weights.sum()
| en | 0.679236 | Log-loss for RSNA Intracranial Hemorhage Competition. Args: class_weights (tensor): weights for 6 classes any, intraparenchymal, intraventricular, subarachnoid, subdural, epidural. reduction (str): Specifies the reduction to apply to the output. | 2.570919 | 3 |
data/get_menu.py | BrianCottrell/theta-fire | 0 | 6615338 | import requests
import json
url = 'https://api.theta.tv/v1/theta/channel/list?number=100'
data = requests.get(url).json()
channels = data['body']
data = []
for i, c in enumerate(channels):
s = c['live_stream']
title = s['title'].encode('ascii', 'ignore').decode('ascii')
obj = {
'id': str(i + 1),
'title': title,
'description': title,
'duration': str(60),
'thumbURL': s['game']['thumbnail_url'],
'imgURL': s['game']['logo_url'],
'videoURL': s['video_url_map']['2d']['master'],
'categories': [s['game']['name']],
'channel_id': "123456"
}
data.append(obj)
break
with open('data.json', 'w') as f:
s = json.dumps(data)
f.write(s)
| import requests
import json
url = 'https://api.theta.tv/v1/theta/channel/list?number=100'
data = requests.get(url).json()
channels = data['body']
data = []
for i, c in enumerate(channels):
s = c['live_stream']
title = s['title'].encode('ascii', 'ignore').decode('ascii')
obj = {
'id': str(i + 1),
'title': title,
'description': title,
'duration': str(60),
'thumbURL': s['game']['thumbnail_url'],
'imgURL': s['game']['logo_url'],
'videoURL': s['video_url_map']['2d']['master'],
'categories': [s['game']['name']],
'channel_id': "123456"
}
data.append(obj)
break
with open('data.json', 'w') as f:
s = json.dumps(data)
f.write(s)
| none | 1 | 2.841446 | 3 | |
recognition/lpr_net.py | shuxin-qin/eulpr | 0 | 6615339 | import tensorflow as tf
from recognition.lpr_util import NUM_CHARS
class LPRNet(object):
def __init__(self, training=False):
self._training = training
def build_net(self, inputs, version='v2'):
if version == 'v1':
return self.get_train_model_multitask(inputs)
elif version == 'v2':
return self.get_train_model_multitask_v2(inputs)
elif version == 'v3':
return self.get_train_model_multitask_v3(inputs)
def small_inception_block(self, x, im, om, scope='incep_block'):
#参考squeezenet的fire module,先squeeze,然后expand
with tf.variable_scope(scope):
x = self.conv(x,im,int(om/4),ksize=[1,1])
x = tf.nn.relu(x)
#参考inception v3
#branch1
x1 = self.conv(x, int(om/4), int(om/4), ksize=[1,1], layer_name='conv1')
x1 = tf.nn.relu(x1)
#branch2
x2 = self.conv(x, int(om/4), int(om/4), ksize=[3,1], pad='SAME', layer_name='conv2_1')
x2 = tf.nn.relu(x2)
x2 = self.conv(x2, int(om/4), int(om/4), ksize=[1,3], pad='SAME', layer_name='conv2_2')
x2 = tf.nn.relu(x2)
#branch3
x3 = self.conv(x, int(om/4), int(om/4), ksize=[5,1], pad='SAME', layer_name='conv3_1')
x3 = tf.nn.relu(x3)
x3 = self.conv(x3, int(om/4), int(om/4), ksize=[1,5], pad='SAME', layer_name='conv3_2')
x3 = tf.nn.relu(x3)
#branch4
x4 = self.conv(x, int(om/4), int(om/4), ksize=[7,1], pad='SAME', layer_name='conv4_1')
x4 = tf.nn.relu(x4)
x4 = self.conv(x4, int(om/4), int(om/4), ksize=[1,7], pad='SAME', layer_name='conv4_2')
x4 = tf.nn.relu(x4)
#x4 = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
x = tf.concat([x1,x2,x3,x4], 3)
x = self.conv(x, om, om, ksize=[1,1], layer_name='conv5')
return x
def conv(self, x, im, om, ksize, stride=[1,1,1,1], pad='SAME', layer_name='conv'):
with tf.variable_scope(layer_name):
conv_weights = tf.Variable(
tf.truncated_normal([ksize[0], ksize[1], im, om], stddev=0.1,
seed=None, dtype=tf.float32), name='weight')
conv_biases = tf.Variable(tf.zeros([om], dtype=tf.float32), name='biase')
out = tf.nn.conv2d(x, conv_weights, strides=stride, padding=pad)
relu = tf.nn.bias_add(out, conv_biases)
return relu
#1107417个变量
def get_train_model_multitask(self, inputs):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3])
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
#输出:96*36*128
x = self.small_inception_block(x, 64, 128)
x2 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256)
x3 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(x)
x_classify = x
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1])
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1])
#函数默认的drop rate=0.5
x = tf.layers.dropout(x)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x,256,NUM_CHARS+1,ksize=[1,13],pad='SAME')
x = tf.nn.relu(x)
cx = tf.reduce_mean(tf.square(x))
x = tf.div(x,cx)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(inputs, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx1 = tf.reduce_mean(tf.square(x1))
x1 = tf.div(x1, cx1)
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx2 = tf.reduce_mean(tf.square(x2))
x2 = tf.div(x2, cx2)
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
cx3 = tf.reduce_mean(tf.square(x3))
x3 = tf.div(x3, cx3)
#通道合并:输入24*36*(67+3+128+256),输出:24*36*454
x = tf.concat([x,x1,x2,x3],3)
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1))
x_shape = x.get_shape().as_list()
x_up = tf.slice(x, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
#1168387个变量
def get_train_model_multitask_v2(self, inputs):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3], layer_name='conv1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
#输出:96*36*128
x = self.small_inception_block(x, 64, 128, scope='incep_block1')
x2 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256, scope='incep_block2')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256, scope='incep_block3')
x3 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
with tf.variable_scope('classify'):
x_classify = x
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1], layer_name='conv1')
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID', layer_name='conv2')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1], layer_name='conv2')
#函数默认的drop rate=0.2
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x,256,NUM_CHARS+1,ksize=[1,13],pad='SAME', layer_name='conv3')
x = tf.nn.relu(x)
cx = tf.reduce_mean(tf.square(x))
x = tf.div(x,cx)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(inputs, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx1 = tf.reduce_mean(tf.square(x1))
x1 = tf.div(x1, cx1)
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx2 = tf.reduce_mean(tf.square(x2))
x2 = tf.div(x2, cx2)
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
cx3 = tf.reduce_mean(tf.square(x3))
x3 = tf.div(x3, cx3)
#通道合并:输入24*36*(67+3+128+256),输出:24*36*454
x = tf.concat([x,x1,x2,x3],3)
with tf.variable_scope('two_line'):
x_shape = x.get_shape().as_list()
x_up = tf.slice(x, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
x_up = self.conv(x_up, x_up.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv1')
x_down = self.conv(x_down, x_down.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv2')
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv4')
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
#1180648个变量
def get_train_model_multitask_v3(self, inputs):
with tf.variable_scope('base'):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3], layer_name='conv1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
x1 = x
#输出:96*36*128
x = self.small_inception_block(x, 64, 128, scope='incep_block1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x2 = x
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256, scope='incep_block2')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256, scope='incep_block3')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x3 = x
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x_classify = x
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1], layer_name='conv2')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x, 256, NUM_CHARS+1, ksize=[1,13], pad='SAME', layer_name='conv3')
x = tf.nn.relu(x)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(x1, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
#通道合并:输入24*36*(67+64+128+256),输出:24*36*515
x = tf.concat([x,x1,x2,x3],3)
x_twoline = x
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv4')
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
with tf.variable_scope('classify'):
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1], layer_name='conv1')
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID', layer_name='conv2')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
with tf.variable_scope('two_line'):
x_shape = x_twoline.get_shape().as_list()
x_up = tf.slice(x_twoline, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x_twoline, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
x_up = self.conv(x_up, x_up.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv1')
x_down = self.conv(x_down, x_down.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv2')
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
| import tensorflow as tf
from recognition.lpr_util import NUM_CHARS
class LPRNet(object):
def __init__(self, training=False):
self._training = training
def build_net(self, inputs, version='v2'):
if version == 'v1':
return self.get_train_model_multitask(inputs)
elif version == 'v2':
return self.get_train_model_multitask_v2(inputs)
elif version == 'v3':
return self.get_train_model_multitask_v3(inputs)
def small_inception_block(self, x, im, om, scope='incep_block'):
#参考squeezenet的fire module,先squeeze,然后expand
with tf.variable_scope(scope):
x = self.conv(x,im,int(om/4),ksize=[1,1])
x = tf.nn.relu(x)
#参考inception v3
#branch1
x1 = self.conv(x, int(om/4), int(om/4), ksize=[1,1], layer_name='conv1')
x1 = tf.nn.relu(x1)
#branch2
x2 = self.conv(x, int(om/4), int(om/4), ksize=[3,1], pad='SAME', layer_name='conv2_1')
x2 = tf.nn.relu(x2)
x2 = self.conv(x2, int(om/4), int(om/4), ksize=[1,3], pad='SAME', layer_name='conv2_2')
x2 = tf.nn.relu(x2)
#branch3
x3 = self.conv(x, int(om/4), int(om/4), ksize=[5,1], pad='SAME', layer_name='conv3_1')
x3 = tf.nn.relu(x3)
x3 = self.conv(x3, int(om/4), int(om/4), ksize=[1,5], pad='SAME', layer_name='conv3_2')
x3 = tf.nn.relu(x3)
#branch4
x4 = self.conv(x, int(om/4), int(om/4), ksize=[7,1], pad='SAME', layer_name='conv4_1')
x4 = tf.nn.relu(x4)
x4 = self.conv(x4, int(om/4), int(om/4), ksize=[1,7], pad='SAME', layer_name='conv4_2')
x4 = tf.nn.relu(x4)
#x4 = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
x = tf.concat([x1,x2,x3,x4], 3)
x = self.conv(x, om, om, ksize=[1,1], layer_name='conv5')
return x
def conv(self, x, im, om, ksize, stride=[1,1,1,1], pad='SAME', layer_name='conv'):
with tf.variable_scope(layer_name):
conv_weights = tf.Variable(
tf.truncated_normal([ksize[0], ksize[1], im, om], stddev=0.1,
seed=None, dtype=tf.float32), name='weight')
conv_biases = tf.Variable(tf.zeros([om], dtype=tf.float32), name='biase')
out = tf.nn.conv2d(x, conv_weights, strides=stride, padding=pad)
relu = tf.nn.bias_add(out, conv_biases)
return relu
#1107417个变量
def get_train_model_multitask(self, inputs):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3])
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
#输出:96*36*128
x = self.small_inception_block(x, 64, 128)
x2 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256)
x3 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(x)
x_classify = x
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1])
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1])
#函数默认的drop rate=0.5
x = tf.layers.dropout(x)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x,256,NUM_CHARS+1,ksize=[1,13],pad='SAME')
x = tf.nn.relu(x)
cx = tf.reduce_mean(tf.square(x))
x = tf.div(x,cx)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(inputs, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx1 = tf.reduce_mean(tf.square(x1))
x1 = tf.div(x1, cx1)
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx2 = tf.reduce_mean(tf.square(x2))
x2 = tf.div(x2, cx2)
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
cx3 = tf.reduce_mean(tf.square(x3))
x3 = tf.div(x3, cx3)
#通道合并:输入24*36*(67+3+128+256),输出:24*36*454
x = tf.concat([x,x1,x2,x3],3)
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1))
x_shape = x.get_shape().as_list()
x_up = tf.slice(x, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
#1168387个变量
def get_train_model_multitask_v2(self, inputs):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3], layer_name='conv1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
#输出:96*36*128
x = self.small_inception_block(x, 64, 128, scope='incep_block1')
x2 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256, scope='incep_block2')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256, scope='incep_block3')
x3 = x
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
with tf.variable_scope('classify'):
x_classify = x
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1], layer_name='conv1')
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID', layer_name='conv2')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1], layer_name='conv2')
#函数默认的drop rate=0.2
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x,256,NUM_CHARS+1,ksize=[1,13],pad='SAME', layer_name='conv3')
x = tf.nn.relu(x)
cx = tf.reduce_mean(tf.square(x))
x = tf.div(x,cx)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(inputs, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx1 = tf.reduce_mean(tf.square(x1))
x1 = tf.div(x1, cx1)
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
cx2 = tf.reduce_mean(tf.square(x2))
x2 = tf.div(x2, cx2)
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
cx3 = tf.reduce_mean(tf.square(x3))
x3 = tf.div(x3, cx3)
#通道合并:输入24*36*(67+3+128+256),输出:24*36*454
x = tf.concat([x,x1,x2,x3],3)
with tf.variable_scope('two_line'):
x_shape = x.get_shape().as_list()
x_up = tf.slice(x, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
x_up = self.conv(x_up, x_up.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv1')
x_down = self.conv(x_down, x_down.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv2')
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv4')
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
#1180648个变量
def get_train_model_multitask_v3(self, inputs):
with tf.variable_scope('base'):
#输入:96*36*3
x = inputs
#卷积核:3*3*3*64,输出:96*36*64
x = self.conv(x, 3, 64, ksize=[3,3], layer_name='conv1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
x1 = x
#输出:96*36*128
x = self.small_inception_block(x, 64, 128, scope='incep_block1')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x2 = x
#输出:48*36*64
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
#输出:48*36*256
x = self.small_inception_block(x, 128, 256, scope='incep_block2')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#输出:48*36*256
x = self.small_inception_block(x, 256, 256, scope='incep_block3')
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
x3 = x
#输出:24*36*256
x = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 2, 1, 1], padding='SAME')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x_classify = x
#卷积核:4*1*256*256,输出:24*36*256
x = self.conv(x, 256, 256, ksize=[4, 1], layer_name='conv2')
x = tf.layers.dropout(inputs=x, rate=0.3, training=self._training)
x = tf.layers.batch_normalization(x, training=self._training)
x = tf.nn.relu(x)
#卷积核:1*13*256*67,输出:24*36*67
x = self.conv(x, 256, NUM_CHARS+1, ksize=[1,13], pad='SAME', layer_name='conv3')
x = tf.nn.relu(x)
#池化:输入:96*36*3,输出:x1 = 24*36*3
x1 = tf.nn.avg_pool(x1, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
#池化:输入:96*36*128,输出:x2 = 24*36*128
x2 = tf.nn.avg_pool(x2, ksize=[1, 4, 1, 1], strides=[1, 4, 1, 1], padding='SAME')
#池化:输入:48*36*256,输出:x3 = 24*36*256
x3 = tf.nn.avg_pool(x3, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
#通道合并:输入24*36*(67+64+128+256),输出:24*36*515
x = tf.concat([x,x1,x2,x3],3)
x_twoline = x
#卷积核:1*1*454*67,输出:24*36*67
x = self.conv(x, x.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv4')
#降维:输入:b*24*36*67,输出:b*24*67
logits = tf.reduce_mean(x, axis=2)
with tf.variable_scope('classify'):
#输出:24*36*64
x_classify = self.conv(x_classify, 256, 32, ksize=[1, 1], layer_name='conv1')
#输出:12*12*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 3, 1], padding='SAME')
#输出:10*10*32
x_classify = self.conv(x_classify, 32, 32, ksize=[3, 3], stride=[1,1,1,1], pad='VALID', layer_name='conv2')
#输出:5*5*32
x_classify = tf.nn.max_pool(x_classify, ksize=[1, 3, 3, 1], strides = [1, 2, 2, 1], padding='SAME')
#输出:800
cl_shape = x_classify.get_shape().as_list()
#nodes = cl_shape[1]*cl_shape[2]*cl_shape[3]
x_classify = tf.reshape(x_classify, [-1, cl_shape[1]*cl_shape[2]*cl_shape[3]])
dense = tf.layers.dense(inputs=x_classify, units=128, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
dense = tf.layers.dense(inputs=dense, units=32, activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.003))
logits_classify = tf.layers.dense(inputs=dense, units=1, activation=tf.nn.sigmoid)
with tf.variable_scope('two_line'):
x_shape = x_twoline.get_shape().as_list()
x_up = tf.slice(x_twoline, [0, 0, 0, 0], [x_shape[0], x_shape[1], int(x_shape[2]/3), x_shape[3]])
x_down = tf.slice(x_twoline, [0, 0, int(x_shape[2]/3), 0], [x_shape[0], x_shape[1], int(x_shape[2]/3*2), x_shape[3]])
x_up = self.conv(x_up, x_up.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv1')
x_down = self.conv(x_down, x_down.get_shape().as_list()[3], NUM_CHARS + 1, ksize=(1, 1), layer_name='conv2')
logits_up = tf.reduce_mean(x_up, axis=2)
logits_down = tf.reduce_mean(x_down, axis=2)
#返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n)
return logits, logits_up, logits_down, logits_classify
| zh | 0.487504 | #参考squeezenet的fire module,先squeeze,然后expand #参考inception v3 #branch1 #branch2 #branch3 #branch4 #x4 = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME') #1107417个变量 #输入:96*36*3 #卷积核:3*3*3*64,输出:96*36*64 #输出:96*36*128 #输出:48*36*64 #输出:48*36*256 #输出:48*36*256 #输出:24*36*256 #输出:24*36*64 #输出:12*12*32 #输出:10*10*32 #输出:5*5*32 #输出:800 #nodes = cl_shape[1]*cl_shape[2]*cl_shape[3] #卷积核:4*1*256*256,输出:24*36*256 #函数默认的drop rate=0.5 #卷积核:1*13*256*67,输出:24*36*67 #池化:输入:96*36*3,输出:x1 = 24*36*3 #池化:输入:96*36*128,输出:x2 = 24*36*128 #池化:输入:48*36*256,输出:x3 = 24*36*256 #通道合并:输入24*36*(67+3+128+256),输出:24*36*454 #卷积核:1*1*454*67,输出:24*36*67 #降维:输入:b*24*36*67,输出:b*24*67 #返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n) #1168387个变量 #输入:96*36*3 #卷积核:3*3*3*64,输出:96*36*64 #输出:96*36*128 #输出:48*36*64 #输出:48*36*256 #输出:48*36*256 #输出:24*36*256 #输出:24*36*64 #输出:12*12*32 #输出:10*10*32 #输出:5*5*32 #输出:800 #nodes = cl_shape[1]*cl_shape[2]*cl_shape[3] #卷积核:4*1*256*256,输出:24*36*256 #函数默认的drop rate=0.2 #卷积核:1*13*256*67,输出:24*36*67 #池化:输入:96*36*3,输出:x1 = 24*36*3 #池化:输入:96*36*128,输出:x2 = 24*36*128 #池化:输入:48*36*256,输出:x3 = 24*36*256 #通道合并:输入24*36*(67+3+128+256),输出:24*36*454 #卷积核:1*1*454*67,输出:24*36*67 #降维:输入:b*24*36*67,输出:b*24*67 #返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n) #1180648个变量 #输入:96*36*3 #卷积核:3*3*3*64,输出:96*36*64 #输出:96*36*128 #输出:48*36*64 #输出:48*36*256 #输出:48*36*256 #输出:24*36*256 #卷积核:4*1*256*256,输出:24*36*256 #卷积核:1*13*256*67,输出:24*36*67 #池化:输入:96*36*3,输出:x1 = 24*36*3 #池化:输入:96*36*128,输出:x2 = 24*36*128 #池化:输入:48*36*256,输出:x3 = 24*36*256 #通道合并:输入24*36*(67+64+128+256),输出:24*36*515 #卷积核:1*1*454*67,输出:24*36*67 #降维:输入:b*24*36*67,输出:b*24*67 #输出:24*36*64 #输出:12*12*32 #输出:10*10*32 #输出:5*5*32 #输出:800 #nodes = cl_shape[1]*cl_shape[2]*cl_shape[3] #返回值:logits:(b*24*67), inputs(b*96*36*3), targets(1), seq_len(n) | 2.622622 | 3 |
dmriprep/workflows/dwi/__init__.py | j1c/dmriprep | 0 | 6615340 | <gh_stars>0
#!/usr/bin/env python
"""
Pre-processing dMRI workflows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: dmriprep.workflows.dwi.base
.. automodule:: dmriprep.workflows.dwi.util
"""
from .base import init_dwi_preproc_wf
from .util import init_dwi_concat_wf
__all__ = [
'init_dwi_preproc_wf',
'init_dwi_concat_wf'
]
| #!/usr/bin/env python
"""
Pre-processing dMRI workflows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: dmriprep.workflows.dwi.base
.. automodule:: dmriprep.workflows.dwi.util
"""
from .base import init_dwi_preproc_wf
from .util import init_dwi_concat_wf
__all__ = [
'init_dwi_preproc_wf',
'init_dwi_concat_wf'
] | my | 0.121978 | #!/usr/bin/env python Pre-processing dMRI workflows ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. automodule:: dmriprep.workflows.dwi.base .. automodule:: dmriprep.workflows.dwi.util | 1.078636 | 1 |
src/onegov/org/views/external_link.py | politbuero-kampagnen/onegov-cloud | 0 | 6615341 | from onegov.org import _
from onegov.core.security import Private
from onegov.org import OrgApp
from onegov.org.forms.external_link import ExternalLinkForm
from onegov.org.layout import ExternalLinkLayout, DefaultLayout
from onegov.org.models.external_link import (
ExternalLinkCollection, ExternalLink
)
from morepath import redirect
def get_external_link_form(model, request):
if isinstance(model, ExternalLinkCollection):
model = model.model_class()
return model.with_content_extensions(ExternalLinkForm, request)
@OrgApp.form(
model=ExternalLinkCollection, name='new', template='form.pt',
permission=Private, form=get_external_link_form
)
def handle_new_external_link(self, request, form, layout=None):
if form.submitted(request):
external_link = self.add_by_form(form)
request.success(_("Added a new external form"))
return redirect(request.class_link(
ExternalLinkCollection.target(external_link)
))
layout = layout or DefaultLayout(self, request)
return {
'layout': layout,
'title': request.params.get('title', _("New external link")),
'form': form,
}
@OrgApp.form(model=ExternalLink, name='edit', template='form.pt',
permission=Private, form=get_external_link_form)
def edit_external_link(self, request, form, layout=None):
if form.submitted(request):
form.populate_obj(self)
request.success(_("Your changes were saved"))
to = request.params.get('to')
return redirect(to or request.link(request.app.org))
form.process(obj=self)
layout = layout or ExternalLinkLayout(self, request)
return {
'layout': layout,
'title': request.params.get('title', _("Edit external link")),
'form': form,
}
@OrgApp.view(model=ExternalLink, permission=Private, request_method='DELETE')
def delete_external_link(self, request):
request.assert_valid_csrf_token()
request.session.delete(self)
| from onegov.org import _
from onegov.core.security import Private
from onegov.org import OrgApp
from onegov.org.forms.external_link import ExternalLinkForm
from onegov.org.layout import ExternalLinkLayout, DefaultLayout
from onegov.org.models.external_link import (
ExternalLinkCollection, ExternalLink
)
from morepath import redirect
def get_external_link_form(model, request):
if isinstance(model, ExternalLinkCollection):
model = model.model_class()
return model.with_content_extensions(ExternalLinkForm, request)
@OrgApp.form(
model=ExternalLinkCollection, name='new', template='form.pt',
permission=Private, form=get_external_link_form
)
def handle_new_external_link(self, request, form, layout=None):
if form.submitted(request):
external_link = self.add_by_form(form)
request.success(_("Added a new external form"))
return redirect(request.class_link(
ExternalLinkCollection.target(external_link)
))
layout = layout or DefaultLayout(self, request)
return {
'layout': layout,
'title': request.params.get('title', _("New external link")),
'form': form,
}
@OrgApp.form(model=ExternalLink, name='edit', template='form.pt',
permission=Private, form=get_external_link_form)
def edit_external_link(self, request, form, layout=None):
if form.submitted(request):
form.populate_obj(self)
request.success(_("Your changes were saved"))
to = request.params.get('to')
return redirect(to or request.link(request.app.org))
form.process(obj=self)
layout = layout or ExternalLinkLayout(self, request)
return {
'layout': layout,
'title': request.params.get('title', _("Edit external link")),
'form': form,
}
@OrgApp.view(model=ExternalLink, permission=Private, request_method='DELETE')
def delete_external_link(self, request):
request.assert_valid_csrf_token()
request.session.delete(self)
| none | 1 | 2.182773 | 2 | |
test/test-gnuplot.py | Cam2337/snap-python | 242 | 6615342 | <reponame>Cam2337/snap-python
import snap
G = snap.GenPrefAttach(100000, 3)
snap.PlotInDegDistr(G, "pref-attach", "PrefAttach(100000, 3) in Degree")
| import snap
G = snap.GenPrefAttach(100000, 3)
snap.PlotInDegDistr(G, "pref-attach", "PrefAttach(100000, 3) in Degree") | none | 1 | 1.440493 | 1 | |
SourceCode/gemini_exchange.py | ahrenstein/docker-cryptodip-bot | 5 | 6615343 | <filename>SourceCode/gemini_exchange.py
#!/usr/bin/env python3
"""Functions to use with the Gemini Exchange"""
#
# Python Script:: gemini_exchange.py
#
# Linter:: pylint
#
# Copyright 2021, <NAME>, All Rights Reserved.
#
# Maintainers:
# - <NAME>: <EMAIL>
#
# See LICENSE
#
import base64
import datetime
import json
import time
import hmac
import hashlib
import requests
# Create custom api call for Gemini
# as per https://docs.gemini.com/rest-api/#private-api-invocation
def gemini_api_call(api_url: str, gemini_api_key: str,
gemini_api_secret: str, api_query: str) -> dict:
"""Make a post to the Gemini Exchange API
Args:
api_url: The API URL for the Gemini Exchange
gemini_api_key: An API key for Gemini Exhcange
gemini_api_secret: An API secret for Gemini Exhcange
api_query: The query to be posted to the API
Returns:
api_response: The API response
"""
full_query_url = api_url + api_query
# Using POSIX timestamps in UTC tp avoid repeating nonce issues.
# This avoids the bad design of the API reference sample code
current_time = datetime.datetime.now(datetime.timezone.utc)
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc) # use POSIX epoch
posix_timestamp_micros = (current_time - epoch) // datetime.timedelta(microseconds=1)
payload_nonce = str(posix_timestamp_micros)
payload = {"request": api_query, "nonce": payload_nonce}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
signature = hmac.new(gemini_api_secret.encode(), b64, hashlib.sha384).hexdigest()
request_headers = {
'Content-Type': "text/plain",
'Content-Length': "0",
'X-GEMINI-APIKEY': gemini_api_key,
'X-GEMINI-PAYLOAD': b64,
'X-GEMINI-SIGNATURE': signature,
'Cache-Control': "no-cache"
}
response = requests.post(full_query_url, headers=request_headers)
return response.json()
def get_gemini_creds_from_file(config_file: str) -> [str, str]:
"""Open a JSON file and get Gemini credentials out of it
Args:
config_file: Path to the JSON file containing credentials and config options
Returns:
gemini_api_key: An API key for Gemini Exhcange
gemini_api_secret: An API secret for Gemini Exhcange
"""
with open(config_file) as creds_file:
data = json.load(creds_file)
gemini_api_key = data['gemini']['api_key']
gemini_api_secret = data['gemini']['api_secret']
return gemini_api_key, gemini_api_secret
def get_coin_price(api_url: str, currency: str) -> float:
"""
Get the USD price of a coin from Gemini
Args:
api_url: The API URL for Gemini
currency: The cryptocurrency the bot is monitoring
Returns:
coin_price: The price the coin currently holds in USD
"""
# Instantiate Gemini and query the price
coin_price = -1
api_query = "/v1/pricefeed"
try:
price_feeds = requests.get(api_url + api_query).json()
for feed in price_feeds:
if feed.get('pair') == currency + "USD":
coin_price = float(feed.get('price'))
except Exception as err:
print("ERROR: Unable to get price due to %s" % err)
print("Price feed: %s" % price_feeds)
return coin_price
def verify_balance(api_url: str, config_file: str, buy_amount: float) -> bool:
"""Check if enough money exists in the account
Args:
api_url: The API URL for Gemini
config_file: Path to the JSON file containing credentials and config options
buy_amount: The amount of $USD the bot plans to spend
Returns:
all_clear: A bool that returns true if there is enough money to transact
"""
# Instantiate Gemini and query the price
gemini_creds = get_gemini_creds_from_file(config_file)
api_query = "/v1/balances"
try:
result = gemini_api_call(api_url, gemini_creds[0], gemini_creds[1], api_query)
for account in result:
if account.get('currency') == "USD":
balance = float(account.get('amount'))
if balance >= buy_amount:
return True
except Exception as err:
print("ERROR: Unable to get current balance!")
print(err)
return False
# Return false by default
return False
def get_decimal_max(api_url: str, currency: str) -> int:
"""Get the maximum amount of decimals permitted for a currency
Args:
api_url: The API URL for the Gemini Exchange
currency: The cryptocurrency the bot is monitoring
Returns:
tick_size: An integer of decimal places permitted
"""
# Instantiate Gemini and query the price
api_query = "/v1/symbols/details/%s" % (currency + "usd").lower()
symbol_details = requests.get(api_url + api_query).json()
tick_size = str(symbol_details.get('tick_size'))[3:]
return int(tick_size)
def buy_currency(api_url: str, config_file: str,
currency: str, buy_amount: float) -> bool:
"""Conduct a trade on Gemini to trade a currency with USD
Args:
api_url: The API URL for the Gemini Exchange
config_file: Path to the JSON file containing credentials and config options
currency: The cryptocurrency the bot is monitoring
buy_amount: The amount of $USD the bot plans to spend
Returns:
api_response: The API response
"""
# Gemini's API doesn't support market orders in an effort to protect you from yourself
# So we just do a limit order at the current price multipled by 1.2
coin_current_price = get_coin_price(api_url, currency)
# Gemini also denominates purchases in the coin amount not USD so we have to do math
tick_size = get_decimal_max(api_url, currency)
coin_amount = round(buy_amount / coin_current_price, tick_size)
market_price_fix = round(coin_current_price * 1.2, 2)
# Instantiate Gemini and query the price
gemini_creds = get_gemini_creds_from_file(config_file)
full_query_url = api_url + "/v1/order/new"
# Using POSIX timestamps in UTC tp avoid repeating nonce issues.
# This avoids the bad design of the API reference sample code
current_time = datetime.datetime.now(datetime.timezone.utc)
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc) # use POSIX epoch
posix_timestamp_micros = (current_time - epoch) // datetime.timedelta(microseconds=1)
payload_nonce = str(posix_timestamp_micros)
payload = {
"request": "/v1/order/new",
"nonce": payload_nonce,
"symbol": currency + "usd",
"amount": str(coin_amount),
"price": str(market_price_fix),
"side": "buy",
"type": "exchange limit",
"options": ["immediate-or-cancel"]
}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
signature = hmac.new(gemini_creds[1].encode(), b64, hashlib.sha384).hexdigest()
request_headers = {
'Content-Type': "text/plain",
'Content-Length': "0",
'X-GEMINI-APIKEY': gemini_creds[0],
'X-GEMINI-PAYLOAD': b64,
'X-GEMINI-SIGNATURE': signature,
'Cache-Control': "no-cache"
}
response = requests.post(full_query_url, data=None, headers=request_headers)
order_result = response.json()
if 'executed_amount' in order_result.keys():
print("LOG: Buy order succeeded.")
print("LOG: Buy Results: %s" % json.dumps(order_result, indent=2))
return True
else:
print("LOG: Buy order failed.")
print("LOG: Reason: %s" % json.dumps(order_result, indent=2))
return False
| <filename>SourceCode/gemini_exchange.py
#!/usr/bin/env python3
"""Functions to use with the Gemini Exchange"""
#
# Python Script:: gemini_exchange.py
#
# Linter:: pylint
#
# Copyright 2021, <NAME>, All Rights Reserved.
#
# Maintainers:
# - <NAME>: <EMAIL>
#
# See LICENSE
#
import base64
import datetime
import json
import time
import hmac
import hashlib
import requests
# Create custom api call for Gemini
# as per https://docs.gemini.com/rest-api/#private-api-invocation
def gemini_api_call(api_url: str, gemini_api_key: str,
gemini_api_secret: str, api_query: str) -> dict:
"""Make a post to the Gemini Exchange API
Args:
api_url: The API URL for the Gemini Exchange
gemini_api_key: An API key for Gemini Exhcange
gemini_api_secret: An API secret for Gemini Exhcange
api_query: The query to be posted to the API
Returns:
api_response: The API response
"""
full_query_url = api_url + api_query
# Using POSIX timestamps in UTC tp avoid repeating nonce issues.
# This avoids the bad design of the API reference sample code
current_time = datetime.datetime.now(datetime.timezone.utc)
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc) # use POSIX epoch
posix_timestamp_micros = (current_time - epoch) // datetime.timedelta(microseconds=1)
payload_nonce = str(posix_timestamp_micros)
payload = {"request": api_query, "nonce": payload_nonce}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
signature = hmac.new(gemini_api_secret.encode(), b64, hashlib.sha384).hexdigest()
request_headers = {
'Content-Type': "text/plain",
'Content-Length': "0",
'X-GEMINI-APIKEY': gemini_api_key,
'X-GEMINI-PAYLOAD': b64,
'X-GEMINI-SIGNATURE': signature,
'Cache-Control': "no-cache"
}
response = requests.post(full_query_url, headers=request_headers)
return response.json()
def get_gemini_creds_from_file(config_file: str) -> [str, str]:
"""Open a JSON file and get Gemini credentials out of it
Args:
config_file: Path to the JSON file containing credentials and config options
Returns:
gemini_api_key: An API key for Gemini Exhcange
gemini_api_secret: An API secret for Gemini Exhcange
"""
with open(config_file) as creds_file:
data = json.load(creds_file)
gemini_api_key = data['gemini']['api_key']
gemini_api_secret = data['gemini']['api_secret']
return gemini_api_key, gemini_api_secret
def get_coin_price(api_url: str, currency: str) -> float:
"""
Get the USD price of a coin from Gemini
Args:
api_url: The API URL for Gemini
currency: The cryptocurrency the bot is monitoring
Returns:
coin_price: The price the coin currently holds in USD
"""
# Instantiate Gemini and query the price
coin_price = -1
api_query = "/v1/pricefeed"
try:
price_feeds = requests.get(api_url + api_query).json()
for feed in price_feeds:
if feed.get('pair') == currency + "USD":
coin_price = float(feed.get('price'))
except Exception as err:
print("ERROR: Unable to get price due to %s" % err)
print("Price feed: %s" % price_feeds)
return coin_price
def verify_balance(api_url: str, config_file: str, buy_amount: float) -> bool:
"""Check if enough money exists in the account
Args:
api_url: The API URL for Gemini
config_file: Path to the JSON file containing credentials and config options
buy_amount: The amount of $USD the bot plans to spend
Returns:
all_clear: A bool that returns true if there is enough money to transact
"""
# Instantiate Gemini and query the price
gemini_creds = get_gemini_creds_from_file(config_file)
api_query = "/v1/balances"
try:
result = gemini_api_call(api_url, gemini_creds[0], gemini_creds[1], api_query)
for account in result:
if account.get('currency') == "USD":
balance = float(account.get('amount'))
if balance >= buy_amount:
return True
except Exception as err:
print("ERROR: Unable to get current balance!")
print(err)
return False
# Return false by default
return False
def get_decimal_max(api_url: str, currency: str) -> int:
"""Get the maximum amount of decimals permitted for a currency
Args:
api_url: The API URL for the Gemini Exchange
currency: The cryptocurrency the bot is monitoring
Returns:
tick_size: An integer of decimal places permitted
"""
# Instantiate Gemini and query the price
api_query = "/v1/symbols/details/%s" % (currency + "usd").lower()
symbol_details = requests.get(api_url + api_query).json()
tick_size = str(symbol_details.get('tick_size'))[3:]
return int(tick_size)
def buy_currency(api_url: str, config_file: str,
currency: str, buy_amount: float) -> bool:
"""Conduct a trade on Gemini to trade a currency with USD
Args:
api_url: The API URL for the Gemini Exchange
config_file: Path to the JSON file containing credentials and config options
currency: The cryptocurrency the bot is monitoring
buy_amount: The amount of $USD the bot plans to spend
Returns:
api_response: The API response
"""
# Gemini's API doesn't support market orders in an effort to protect you from yourself
# So we just do a limit order at the current price multipled by 1.2
coin_current_price = get_coin_price(api_url, currency)
# Gemini also denominates purchases in the coin amount not USD so we have to do math
tick_size = get_decimal_max(api_url, currency)
coin_amount = round(buy_amount / coin_current_price, tick_size)
market_price_fix = round(coin_current_price * 1.2, 2)
# Instantiate Gemini and query the price
gemini_creds = get_gemini_creds_from_file(config_file)
full_query_url = api_url + "/v1/order/new"
# Using POSIX timestamps in UTC tp avoid repeating nonce issues.
# This avoids the bad design of the API reference sample code
current_time = datetime.datetime.now(datetime.timezone.utc)
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc) # use POSIX epoch
posix_timestamp_micros = (current_time - epoch) // datetime.timedelta(microseconds=1)
payload_nonce = str(posix_timestamp_micros)
payload = {
"request": "/v1/order/new",
"nonce": payload_nonce,
"symbol": currency + "usd",
"amount": str(coin_amount),
"price": str(market_price_fix),
"side": "buy",
"type": "exchange limit",
"options": ["immediate-or-cancel"]
}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
signature = hmac.new(gemini_creds[1].encode(), b64, hashlib.sha384).hexdigest()
request_headers = {
'Content-Type': "text/plain",
'Content-Length': "0",
'X-GEMINI-APIKEY': gemini_creds[0],
'X-GEMINI-PAYLOAD': b64,
'X-GEMINI-SIGNATURE': signature,
'Cache-Control': "no-cache"
}
response = requests.post(full_query_url, data=None, headers=request_headers)
order_result = response.json()
if 'executed_amount' in order_result.keys():
print("LOG: Buy order succeeded.")
print("LOG: Buy Results: %s" % json.dumps(order_result, indent=2))
return True
else:
print("LOG: Buy order failed.")
print("LOG: Reason: %s" % json.dumps(order_result, indent=2))
return False
| en | 0.730548 | #!/usr/bin/env python3 Functions to use with the Gemini Exchange # # Python Script:: gemini_exchange.py # # Linter:: pylint # # Copyright 2021, <NAME>, All Rights Reserved. # # Maintainers: # - <NAME>: <EMAIL> # # See LICENSE # # Create custom api call for Gemini # as per https://docs.gemini.com/rest-api/#private-api-invocation Make a post to the Gemini Exchange API Args: api_url: The API URL for the Gemini Exchange gemini_api_key: An API key for Gemini Exhcange gemini_api_secret: An API secret for Gemini Exhcange api_query: The query to be posted to the API Returns: api_response: The API response # Using POSIX timestamps in UTC tp avoid repeating nonce issues. # This avoids the bad design of the API reference sample code # use POSIX epoch Open a JSON file and get Gemini credentials out of it Args: config_file: Path to the JSON file containing credentials and config options Returns: gemini_api_key: An API key for Gemini Exhcange gemini_api_secret: An API secret for Gemini Exhcange Get the USD price of a coin from Gemini Args: api_url: The API URL for Gemini currency: The cryptocurrency the bot is monitoring Returns: coin_price: The price the coin currently holds in USD # Instantiate Gemini and query the price Check if enough money exists in the account Args: api_url: The API URL for Gemini config_file: Path to the JSON file containing credentials and config options buy_amount: The amount of $USD the bot plans to spend Returns: all_clear: A bool that returns true if there is enough money to transact # Instantiate Gemini and query the price # Return false by default Get the maximum amount of decimals permitted for a currency Args: api_url: The API URL for the Gemini Exchange currency: The cryptocurrency the bot is monitoring Returns: tick_size: An integer of decimal places permitted # Instantiate Gemini and query the price Conduct a trade on Gemini to trade a currency with USD Args: api_url: The API URL for the Gemini Exchange config_file: Path to the JSON file containing credentials and config options currency: The cryptocurrency the bot is monitoring buy_amount: The amount of $USD the bot plans to spend Returns: api_response: The API response # Gemini's API doesn't support market orders in an effort to protect you from yourself # So we just do a limit order at the current price multipled by 1.2 # Gemini also denominates purchases in the coin amount not USD so we have to do math # Instantiate Gemini and query the price # Using POSIX timestamps in UTC tp avoid repeating nonce issues. # This avoids the bad design of the API reference sample code # use POSIX epoch | 2.518693 | 3 |
tests/test_sortable.py | pyexcel/pyexcel-sortable | 5 | 6615344 | import uuid
import pyexcel as p
from nose.tools import raises
from pyexcel_sortable.sortable import Sortable
def test_render_sheet():
test_header = uuid.uuid4().hex
sheet = p.Sheet([[test_header], [1]])
sortable = Sortable('sortable.html')
stream = sortable.get_io()
sortable.set_output_stream(stream)
sortable.render_sheet(sheet)
assert test_header in stream.getvalue()
@raises(Exception)
def test_render_book():
sortable = Sortable('sortable.html')
sortable.render_book("not supported yet")
| import uuid
import pyexcel as p
from nose.tools import raises
from pyexcel_sortable.sortable import Sortable
def test_render_sheet():
test_header = uuid.uuid4().hex
sheet = p.Sheet([[test_header], [1]])
sortable = Sortable('sortable.html')
stream = sortable.get_io()
sortable.set_output_stream(stream)
sortable.render_sheet(sheet)
assert test_header in stream.getvalue()
@raises(Exception)
def test_render_book():
sortable = Sortable('sortable.html')
sortable.render_book("not supported yet")
| none | 1 | 2.155071 | 2 | |
petrarch/utilities.py | johnb30/petrarch | 40 | 6615345 | # -*- coding: utf-8 -*-
## utilities.py [module]
##
# Utilities for the PETRARCH event data coder
##
# SYSTEM REQUIREMENTS
# This program has been successfully run under Mac OS 10.10; it is standard Python 2.7
# so it should also run in Unix or Windows.
#
# INITIAL PROVENANCE:
# Programmer:
# <NAME>
# Caerus Associates/Penn State University
# Washington, DC / State College, PA, 16801 U.S.A.
# http://caerusassociates.com
# http://bdss.psu.edu
#
# GitHub repository: https://github.com/openeventdata/petrarch
#
# Copyright (c) 2014 <NAME>. All rights reserved.
#
# This project is part of the Open Event Data Alliance tool set
#
# This code is covered under the MIT license
#
# Report bugs to: <EMAIL>
#
# REVISION HISTORY:
# Summer-14: Initial version
# ------------------------------------------------------------------------
from __future__ import print_function
from __future__ import unicode_literals
import os
import logging
import corenlp
import dateutil.parser
import PETRglobals
from collections import defaultdict, Counter
def stanford_parse(event_dict):
logger = logging.getLogger('petr_log')
# What is dead can never die...
print("\nSetting up StanfordNLP. The program isn't dead. Promise.")
logger.info('Setting up StanfordNLP')
core = corenlp.StanfordCoreNLP(PETRglobals.stanfordnlp,
properties=_get_data('data/config/',
'petrarch.properties'),
memory='2g')
total = len(list(event_dict.keys()))
print(
"Stanford setup complete. Starting parse of {} stories...".format(total))
logger.info(
'Stanford setup complete. Starting parse of {} stories.'.format(total))
for i, key in enumerate(event_dict.keys()):
if (i / float(total)) * 100 in [10.0, 25.0, 50, 75.0]:
print('Parse is {}% complete...'.format((i / float(total)) * 100))
for sent in event_dict[key]['sents']:
logger.info('StanfordNLP parsing {}_{}...'.format(key, sent))
sent_dict = event_dict[key]['sents'][sent]
if len(sent_dict['content']) > 512 or len(
sent_dict['content']) < 64:
logger.warning(
'\tText length wrong. Either too long or too short.')
pass
else:
try:
stanford_result = core.raw_parse(sent_dict['content'])
s_parsetree = stanford_result['sentences'][0]['parsetree']
if 'coref' in stanford_result:
sent_dict['coref'] = stanford_result['coref']
# TODO: To go backwards you'd do str.replace(' ) ', ')')
sent_dict['parsed'] = _format_parsed_str(s_parsetree)
except Exception as e:
print('Something went wrong. ¯\_(ツ)_/¯. See log file.')
logger.warning(
'Error on {}_{}. ¯\_(ツ)_/¯. {}'.format(key, sent, e))
print('Done with StanfordNLP parse...\n\n')
logger.info('Done with StanfordNLP parse.')
return event_dict
def story_filter(story_dict, story_id):
"""
One-a-story filter for the events. There can only be only one unique
(DATE, SRC, TGT, EVENT) tuple per story.
Parameters
----------
story_dict: Dictionary.
Story-level dictionary as stored in the main event-holding
dictionary within PETRARCH.
story_id: String.
Unique StoryID in standard PETRARCH format.
Returns
-------
filtered: Dictionary.
Holder for filtered events with the format
{(EVENT TUPLE): {'issues': [], 'ids': []}} where the 'issues'
list is optional.
"""
filtered = defaultdict(dict)
story_date = story_dict['meta']['date']
for sent in story_dict['sents']:
sent_dict = story_dict['sents'][sent]
sent_id = '{}_{}'.format(story_id, sent)
if 'events' in sent_dict:
events = story_dict['sents'][sent]['events']
for event in events:
# do not print unresolved agents
try:
if event[0][0] != '-' and event[1][0] != '-':
alist = [story_date]
alist.extend(event)
event_tuple = tuple(alist)
filtered[event_tuple]
if 'issues' in sent_dict:
filtered[event_tuple]['issues'] = Counter()
issues = sent_dict['issues']
for issue in issues:
filtered[event_tuple]['issues'][
issue[0]] += issue[1]
# Will keep track of this info, but not necessarily write it
# out
filtered[event_tuple]['ids'] = []
filtered[event_tuple]['ids'].append(sent_id)
except IndexError:
pass
else:
pass
return filtered
def _format_parsed_str(parsed_str):
parsed = parsed_str.split('\n')
parsed = [line.strip() + ' ' for line in [line1.strip() for line1 in
parsed if line1] if line]
parsed = [line.replace(')', ' ) ').upper() for line in parsed]
treestr = ''.join(parsed)
return treestr
def _format_datestr(date):
datetime = dateutil.parser.parse(date)
date = '{}{:02}{:02}'.format(datetime.year, datetime.month, datetime.day)
return date
def _get_data(dir_path, path):
"""Private function to get the absolute path to the installed files."""
cwd = os.path.abspath(os.path.dirname(__file__))
joined = os.path.join(dir_path, path)
out_dir = os.path.join(cwd, joined)
return out_dir
def _get_config(config_name):
cwd = os.path.abspath(os.path.dirname(__file__))
out_dir = os.path.join(cwd, config_name)
return out_dir
def init_logger(logger_filename):
logger = logging.getLogger('petr_log')
logger.setLevel(logging.INFO)
cwd = os.getcwd()
logger_filepath = os.path.join(cwd, logger_filename)
fh = logging.FileHandler(logger_filepath, 'w')
formatter = logging.Formatter('%(levelname)s %(asctime)s: %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.info('Running')
| # -*- coding: utf-8 -*-
## utilities.py [module]
##
# Utilities for the PETRARCH event data coder
##
# SYSTEM REQUIREMENTS
# This program has been successfully run under Mac OS 10.10; it is standard Python 2.7
# so it should also run in Unix or Windows.
#
# INITIAL PROVENANCE:
# Programmer:
# <NAME>
# Caerus Associates/Penn State University
# Washington, DC / State College, PA, 16801 U.S.A.
# http://caerusassociates.com
# http://bdss.psu.edu
#
# GitHub repository: https://github.com/openeventdata/petrarch
#
# Copyright (c) 2014 <NAME>. All rights reserved.
#
# This project is part of the Open Event Data Alliance tool set
#
# This code is covered under the MIT license
#
# Report bugs to: <EMAIL>
#
# REVISION HISTORY:
# Summer-14: Initial version
# ------------------------------------------------------------------------
from __future__ import print_function
from __future__ import unicode_literals
import os
import logging
import corenlp
import dateutil.parser
import PETRglobals
from collections import defaultdict, Counter
def stanford_parse(event_dict):
logger = logging.getLogger('petr_log')
# What is dead can never die...
print("\nSetting up StanfordNLP. The program isn't dead. Promise.")
logger.info('Setting up StanfordNLP')
core = corenlp.StanfordCoreNLP(PETRglobals.stanfordnlp,
properties=_get_data('data/config/',
'petrarch.properties'),
memory='2g')
total = len(list(event_dict.keys()))
print(
"Stanford setup complete. Starting parse of {} stories...".format(total))
logger.info(
'Stanford setup complete. Starting parse of {} stories.'.format(total))
for i, key in enumerate(event_dict.keys()):
if (i / float(total)) * 100 in [10.0, 25.0, 50, 75.0]:
print('Parse is {}% complete...'.format((i / float(total)) * 100))
for sent in event_dict[key]['sents']:
logger.info('StanfordNLP parsing {}_{}...'.format(key, sent))
sent_dict = event_dict[key]['sents'][sent]
if len(sent_dict['content']) > 512 or len(
sent_dict['content']) < 64:
logger.warning(
'\tText length wrong. Either too long or too short.')
pass
else:
try:
stanford_result = core.raw_parse(sent_dict['content'])
s_parsetree = stanford_result['sentences'][0]['parsetree']
if 'coref' in stanford_result:
sent_dict['coref'] = stanford_result['coref']
# TODO: To go backwards you'd do str.replace(' ) ', ')')
sent_dict['parsed'] = _format_parsed_str(s_parsetree)
except Exception as e:
print('Something went wrong. ¯\_(ツ)_/¯. See log file.')
logger.warning(
'Error on {}_{}. ¯\_(ツ)_/¯. {}'.format(key, sent, e))
print('Done with StanfordNLP parse...\n\n')
logger.info('Done with StanfordNLP parse.')
return event_dict
def story_filter(story_dict, story_id):
"""
One-a-story filter for the events. There can only be only one unique
(DATE, SRC, TGT, EVENT) tuple per story.
Parameters
----------
story_dict: Dictionary.
Story-level dictionary as stored in the main event-holding
dictionary within PETRARCH.
story_id: String.
Unique StoryID in standard PETRARCH format.
Returns
-------
filtered: Dictionary.
Holder for filtered events with the format
{(EVENT TUPLE): {'issues': [], 'ids': []}} where the 'issues'
list is optional.
"""
filtered = defaultdict(dict)
story_date = story_dict['meta']['date']
for sent in story_dict['sents']:
sent_dict = story_dict['sents'][sent]
sent_id = '{}_{}'.format(story_id, sent)
if 'events' in sent_dict:
events = story_dict['sents'][sent]['events']
for event in events:
# do not print unresolved agents
try:
if event[0][0] != '-' and event[1][0] != '-':
alist = [story_date]
alist.extend(event)
event_tuple = tuple(alist)
filtered[event_tuple]
if 'issues' in sent_dict:
filtered[event_tuple]['issues'] = Counter()
issues = sent_dict['issues']
for issue in issues:
filtered[event_tuple]['issues'][
issue[0]] += issue[1]
# Will keep track of this info, but not necessarily write it
# out
filtered[event_tuple]['ids'] = []
filtered[event_tuple]['ids'].append(sent_id)
except IndexError:
pass
else:
pass
return filtered
def _format_parsed_str(parsed_str):
parsed = parsed_str.split('\n')
parsed = [line.strip() + ' ' for line in [line1.strip() for line1 in
parsed if line1] if line]
parsed = [line.replace(')', ' ) ').upper() for line in parsed]
treestr = ''.join(parsed)
return treestr
def _format_datestr(date):
datetime = dateutil.parser.parse(date)
date = '{}{:02}{:02}'.format(datetime.year, datetime.month, datetime.day)
return date
def _get_data(dir_path, path):
"""Private function to get the absolute path to the installed files."""
cwd = os.path.abspath(os.path.dirname(__file__))
joined = os.path.join(dir_path, path)
out_dir = os.path.join(cwd, joined)
return out_dir
def _get_config(config_name):
cwd = os.path.abspath(os.path.dirname(__file__))
out_dir = os.path.join(cwd, config_name)
return out_dir
def init_logger(logger_filename):
logger = logging.getLogger('petr_log')
logger.setLevel(logging.INFO)
cwd = os.getcwd()
logger_filepath = os.path.join(cwd, logger_filename)
fh = logging.FileHandler(logger_filepath, 'w')
formatter = logging.Formatter('%(levelname)s %(asctime)s: %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.info('Running')
| en | 0.747643 | # -*- coding: utf-8 -*- ## utilities.py [module] ## # Utilities for the PETRARCH event data coder ## # SYSTEM REQUIREMENTS # This program has been successfully run under Mac OS 10.10; it is standard Python 2.7 # so it should also run in Unix or Windows. # # INITIAL PROVENANCE: # Programmer: # <NAME> # Caerus Associates/Penn State University # Washington, DC / State College, PA, 16801 U.S.A. # http://caerusassociates.com # http://bdss.psu.edu # # GitHub repository: https://github.com/openeventdata/petrarch # # Copyright (c) 2014 <NAME>. All rights reserved. # # This project is part of the Open Event Data Alliance tool set # # This code is covered under the MIT license # # Report bugs to: <EMAIL> # # REVISION HISTORY: # Summer-14: Initial version # ------------------------------------------------------------------------ # What is dead can never die... # TODO: To go backwards you'd do str.replace(' ) ', ')') One-a-story filter for the events. There can only be only one unique (DATE, SRC, TGT, EVENT) tuple per story. Parameters ---------- story_dict: Dictionary. Story-level dictionary as stored in the main event-holding dictionary within PETRARCH. story_id: String. Unique StoryID in standard PETRARCH format. Returns ------- filtered: Dictionary. Holder for filtered events with the format {(EVENT TUPLE): {'issues': [], 'ids': []}} where the 'issues' list is optional. # do not print unresolved agents # Will keep track of this info, but not necessarily write it # out Private function to get the absolute path to the installed files. | 2.522081 | 3 |
utils/make_voc2freq.py | retrieva/WordGCN | 0 | 6615346 | import argparse
import MeCab
import spacy
from collections import Counter
from file_loader import Fileloader
from pathlib import Path
from tqdm import tqdm
class Tokenizer:
def __init__(self, name):
self.name = name.lower()
if self.name == 'mecab':
wakati = MeCab.Tagger("-O wakati")
wakati.parse("")
self.parser = wakati
elif self.name == 'sudachi':
nlp = spacy.load('ja_ginza')
self.parser = nlp
else:
raise('{} is not prepared as tokenizer'.format(name))
def tokenize(self, text: str) -> list:
if self.name == 'mecab':
return self.parser.parse(text).strip().split()
elif self.name == 'sudachi':
doc = self.parser.make_doc(text)
return [token.text for token in doc]
def counter(file_pathes: list, loader: Fileloader, tokenizer_name: str) -> list:
voc2freq = Counter()
tokenizer = Tokenizer(tokenizer_name)
for file_path in tqdm(file_pathes):
texts = loader.load(file_path)
for text in texts:
voc2freq.update(tokenizer.tokenize(text))
return voc2freq
def save(voc2freq: Counter, output_file: Path):
with output_file.open(mode='w') as f:
for voc, freq in voc2freq.items():
if voc:
print('{}\t{}'.format(voc, freq), file=f)
def main(args):
target_dir = Path(args.indir)
output_file = Path(args.output_file)
file_format = args.format
tokenizer_name = args.tokenizer
text_fields = args.text_fields.strip().split(',')
file_pathes = list(target_dir.glob("**/*"))
loader = Fileloader(file_format, text_fields)
voc2freq = counter(file_pathes, loader, tokenizer_name)
save(voc2freq, output_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-i', dest='indir', help='input dir')
parser.add_argument('-o', dest='output_file', default='voc2freq.txt')
parser.add_argument('-t', dest='tokenizer', default='mecab', help='mecab or sucachi')
parser.add_argument('--format', default='txt', help="select file format txt or jsonl")
parser.add_argument('--text_fields', help="set json's textfields as csv")
args = parser.parse_args()
main(args)
| import argparse
import MeCab
import spacy
from collections import Counter
from file_loader import Fileloader
from pathlib import Path
from tqdm import tqdm
class Tokenizer:
def __init__(self, name):
self.name = name.lower()
if self.name == 'mecab':
wakati = MeCab.Tagger("-O wakati")
wakati.parse("")
self.parser = wakati
elif self.name == 'sudachi':
nlp = spacy.load('ja_ginza')
self.parser = nlp
else:
raise('{} is not prepared as tokenizer'.format(name))
def tokenize(self, text: str) -> list:
if self.name == 'mecab':
return self.parser.parse(text).strip().split()
elif self.name == 'sudachi':
doc = self.parser.make_doc(text)
return [token.text for token in doc]
def counter(file_pathes: list, loader: Fileloader, tokenizer_name: str) -> list:
voc2freq = Counter()
tokenizer = Tokenizer(tokenizer_name)
for file_path in tqdm(file_pathes):
texts = loader.load(file_path)
for text in texts:
voc2freq.update(tokenizer.tokenize(text))
return voc2freq
def save(voc2freq: Counter, output_file: Path):
with output_file.open(mode='w') as f:
for voc, freq in voc2freq.items():
if voc:
print('{}\t{}'.format(voc, freq), file=f)
def main(args):
target_dir = Path(args.indir)
output_file = Path(args.output_file)
file_format = args.format
tokenizer_name = args.tokenizer
text_fields = args.text_fields.strip().split(',')
file_pathes = list(target_dir.glob("**/*"))
loader = Fileloader(file_format, text_fields)
voc2freq = counter(file_pathes, loader, tokenizer_name)
save(voc2freq, output_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-i', dest='indir', help='input dir')
parser.add_argument('-o', dest='output_file', default='voc2freq.txt')
parser.add_argument('-t', dest='tokenizer', default='mecab', help='mecab or sucachi')
parser.add_argument('--format', default='txt', help="select file format txt or jsonl")
parser.add_argument('--text_fields', help="set json's textfields as csv")
args = parser.parse_args()
main(args)
| none | 1 | 2.74124 | 3 | |
neoload/commands/status.py | adamleskis/neoload-cli | 0 | 6615347 | import click
from neoload_cli_lib import user_data
@click.command()
def cli():
"""get status of NeoLoad cli Settings"""
login = user_data.get_user_data(False)
if login is None:
print("No settings is stored. Please use \"neoload login\" to start.")
else:
print(login)
pass
| import click
from neoload_cli_lib import user_data
@click.command()
def cli():
"""get status of NeoLoad cli Settings"""
login = user_data.get_user_data(False)
if login is None:
print("No settings is stored. Please use \"neoload login\" to start.")
else:
print(login)
pass
| en | 0.558613 | get status of NeoLoad cli Settings | 2.472686 | 2 |
PhytonScript.py | bastib83/Machine-learing-2 | 0 | 6615348 | # Quelle: https://data-science-blog.com/blog/2016/04/26/machine-learning-mit-python-tutorial-minimalbeispiel/
import numpy as numpy
import matplotlib.pyplot as pyplot
from mpl_toolkits.mplot3d import Axes3D #Erweiterung für die Matplotlib - siehe: http://matplotlib.org/mpl_toolkits/
def readDataSet(filename):
fr = open(filename) # Datei-Stream vorbereiten
numberOfLines = len(fr.readlines()) # Anzahl der Zeilen ermitteln
returnMat = numpy.zeros((numberOfLines-1,3)) # Eine Numpy-Matrix in Höhe der Zeilenanzahl (minus Kopfzeile) und in Breite der drei Merkmal-Spalten
classLabelVector = [] # Hier werden die tatsächlichen Kategorien (Haus, Wohnung, Büro) vermerkt
classColorVector = [] # Hier werden die Kategorien über Farben vermerkt (zur späteren Unterscheidung im 3D-Plot!)
#print(returnMat) # Ggf. mal die noch die ausge-null-te Matrix anzeigen lassen (bei Python 2.7: die Klammern weglassen!)
fr = open(filename) # Datei-Stream öffnen
index = 0
for line in fr.readlines(): # Zeile für Zeile der Datei lesen
if index != 0: # Kopfzeile überspringen
line = line.strip()
listFromLine = line.split('\t') # Jede Zeile wird zur temporären Liste (Tabulator als Trennzeichen)
returnMat[index-1,:] = listFromLine[1:4] #Liste in die entsprechende Zeile der Matrix überführen
classLabel = listFromLine[4] # Kategorie (Haus, Wohnung, Büro) für diese Zeile merken
if classLabel == "Buero":
color = 'yellow'
elif classLabel == "Wohnung":
color = 'red'
else:
color = 'blue'
classLabelVector.append(classLabel) # Kategorie (Haus, Wohnung, Büro) als Text-Label speichern
classColorVector.append(color) # Kategorie als Farbe speichern (Büro = gelb, Wohnung = rot, Haus = Blau)
index += 1
return returnMat, classLabelVector, classColorVector
dataSet, classLabelVector, classColorVector = readDataSet("K-Nearst_Neighbour-DataSet.txt")
fig = pyplot.figure()
ax = fig.add_subplot(111)
ax.scatter(dataSet[:,0], dataSet[:,1], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("Wandhohe")
ax.set_xlim(xmin=0)
ax.set_ylim(ymin=0)
pyplot.show()
fig = pyplot.figure()
ax = fig.add_subplot(111)
ax.scatter(dataSet[:,0], dataSet[:,2], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("IA_Ratio")
ax.set_xlim(xmin=0)
ax.set_ylim(ymin=0)
pyplot.show()
fig = pyplot.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(dataSet[:,0], dataSet[:,2], dataSet[:,1], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("IA_Ratio")
ax.set_zlabel("Wandhoehe in Meter")
ax.set_xlim(left=0)
ax.set_ylim(bottom=0)
ax.set_zlim(bottom=0)
pyplot.show()
def normalizeDataSet(dataSet):
dataSet_n = numpy.zeros(numpy.shape(dataSet)) #[[ 0. 0. 0.]
# [ 0. 0. 0.]
# [ 0. 0. 0.]
# ...,
# [ 0. 0. 0.]
# [ 0. 0. 0.]
# [ 0. 0. 0.]]
minValues = dataSet.min(0) # [ 10. 2.6 0.]
ranges = dataSet.max(0) - dataSet.min(0) # [ 1775. 2.4 68.]
minValues = dataSet.min(0) # [ 10. 2.6 0.]
maxValues = dataSet.max(0) # [ 1785. 5. 68.]
ranges = maxValues - minValues # [ 1775. 2.4 68.]
rowCount = dataSet.shape[0] # 1039
# numpy.tile() wiederholt Sequenzen (hier: [[ 10. 2.6 0. ], ..., [ 10. 2.6 0. ]]
dataSet_n = dataSet - numpy.tile(minValues, (rowCount, 1)) #[[ 2.56000000e+02 9.00000000e-01 1.80000000e+01]
# [ 6.60000000e+01 2.00000000e-01 5.40000000e+01]
# [ 3.32000000e+02 1.50000000e-01 1.00000000e+01]
# ...,
# [ 1.58000000e+02 6.00000000e-01 0.00000000e+00]
# [ 5.70000000e+01 1.00000000e-01 5.20000000e+01]
# [ 1.68000000e+02 2.00000000e-01 0.00000000e+00]]
dataSet_n = dataSet_n / numpy.tile(ranges, (rowCount, 1)) #[[ 0.14422535 0.375 0.26470588]
# [ 0.0371831 0.08333333 0.79411765]
# [ 0.18704225 0.0625 0.14705882]
# ...,
# [ 0.08901408 0.25 0.]
# [ 0.03211268 0.04166667 0.76470588]
# [ 0.09464789 0.08333333 0.]]
#print(dataSet_n)
return dataSet_n, ranges, minValues
dataSet_n, ranges, minValues = normalizeDataSet(dataSet)
def classify(inX, dataSet, labels, k):
rowCount = dataSet.shape[0] # Anzahl an Zeilen bestimmen
diffMat = numpy.tile(inX, (rowCount,1)) - dataSet # Berechnung der Katheten
# (über tile() wird der Eingangsdatensatz über die Zeilenanzahl des dataSet vervielfacht,
# der dataSet davon substrahiert)
sqDiffMat = diffMat**2 # Quadrat der Katheten
sqDistances = sqDiffMat.sum(axis=1) # Aufsummieren der Differenzpaare
distances = sqDistances**0.5 # Quadratwurzel über alle Werte
sortedDistIndicies = distances.argsort() # Aufsteigende Sortierung
classCount = {}
#print("inX = %s, k = %s" % (inX, k))
#print(sortedDistIndicies)
for i in range(k): # Eingrenzung auf k-Werte in der sortierten Liste
closest = labels[sortedDistIndicies[i]] # Label (Kategorie [Büro, Wohnung, Haus] entsprechend der Sortierung aufnehmen
classCount[closest] = classCount.get(closest, 0) + 1 # Aufbau eines Dictionary über die
sortedClassCount = sorted(classCount, key = classCount.get, reverse=True) # Absteigende Sortierung der gesammelten Labels in k-Reichweite
# wobei die Sortierung über den Count (Value) erfolgt
#print(classCount)
#print(sortedClassCount[0])
return sortedClassCount[0] # Liefere das erste Label zurück
# also das Label mit der höchsten Anzahl innerhalb der k-Reichweite
errorCount = 0
k = 5 # k-Eingrenzung (hier: auf 5 Nachbarn einschränken)
rowCount = dataSet_n.shape[0] # Anzahl der Zeilen im gesamten Datensatz
numTestVectors = 30 # Datensätze 0 - 29 werden zum testen von k verwendet,
# die Datensätze ab Zeile 30 werden zur Klassifikation verwendet
for i in range(0, numTestVectors): # Aufruf des Klassifikators von 0 bis 29
result = classify(dataSet_n[i,:], dataSet_n[numTestVectors:rowCount,:], classLabelVector[numTestVectors:rowCount], k)
print("%s - the classifier came back with: %s, the real answer is: %s" %(i, result, classLabelVector[i]))
if (result != classLabelVector[i]):
errorCount += 1.0
print("Error Count: %d" % errorCount)
| # Quelle: https://data-science-blog.com/blog/2016/04/26/machine-learning-mit-python-tutorial-minimalbeispiel/
import numpy as numpy
import matplotlib.pyplot as pyplot
from mpl_toolkits.mplot3d import Axes3D #Erweiterung für die Matplotlib - siehe: http://matplotlib.org/mpl_toolkits/
def readDataSet(filename):
fr = open(filename) # Datei-Stream vorbereiten
numberOfLines = len(fr.readlines()) # Anzahl der Zeilen ermitteln
returnMat = numpy.zeros((numberOfLines-1,3)) # Eine Numpy-Matrix in Höhe der Zeilenanzahl (minus Kopfzeile) und in Breite der drei Merkmal-Spalten
classLabelVector = [] # Hier werden die tatsächlichen Kategorien (Haus, Wohnung, Büro) vermerkt
classColorVector = [] # Hier werden die Kategorien über Farben vermerkt (zur späteren Unterscheidung im 3D-Plot!)
#print(returnMat) # Ggf. mal die noch die ausge-null-te Matrix anzeigen lassen (bei Python 2.7: die Klammern weglassen!)
fr = open(filename) # Datei-Stream öffnen
index = 0
for line in fr.readlines(): # Zeile für Zeile der Datei lesen
if index != 0: # Kopfzeile überspringen
line = line.strip()
listFromLine = line.split('\t') # Jede Zeile wird zur temporären Liste (Tabulator als Trennzeichen)
returnMat[index-1,:] = listFromLine[1:4] #Liste in die entsprechende Zeile der Matrix überführen
classLabel = listFromLine[4] # Kategorie (Haus, Wohnung, Büro) für diese Zeile merken
if classLabel == "Buero":
color = 'yellow'
elif classLabel == "Wohnung":
color = 'red'
else:
color = 'blue'
classLabelVector.append(classLabel) # Kategorie (Haus, Wohnung, Büro) als Text-Label speichern
classColorVector.append(color) # Kategorie als Farbe speichern (Büro = gelb, Wohnung = rot, Haus = Blau)
index += 1
return returnMat, classLabelVector, classColorVector
dataSet, classLabelVector, classColorVector = readDataSet("K-Nearst_Neighbour-DataSet.txt")
fig = pyplot.figure()
ax = fig.add_subplot(111)
ax.scatter(dataSet[:,0], dataSet[:,1], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("Wandhohe")
ax.set_xlim(xmin=0)
ax.set_ylim(ymin=0)
pyplot.show()
fig = pyplot.figure()
ax = fig.add_subplot(111)
ax.scatter(dataSet[:,0], dataSet[:,2], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("IA_Ratio")
ax.set_xlim(xmin=0)
ax.set_ylim(ymin=0)
pyplot.show()
fig = pyplot.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(dataSet[:,0], dataSet[:,2], dataSet[:,1], marker='o', color=classColorVector)
ax.set_xlabel("Raumflaeche in Quadratmeter")
ax.set_ylabel("IA_Ratio")
ax.set_zlabel("Wandhoehe in Meter")
ax.set_xlim(left=0)
ax.set_ylim(bottom=0)
ax.set_zlim(bottom=0)
pyplot.show()
def normalizeDataSet(dataSet):
dataSet_n = numpy.zeros(numpy.shape(dataSet)) #[[ 0. 0. 0.]
# [ 0. 0. 0.]
# [ 0. 0. 0.]
# ...,
# [ 0. 0. 0.]
# [ 0. 0. 0.]
# [ 0. 0. 0.]]
minValues = dataSet.min(0) # [ 10. 2.6 0.]
ranges = dataSet.max(0) - dataSet.min(0) # [ 1775. 2.4 68.]
minValues = dataSet.min(0) # [ 10. 2.6 0.]
maxValues = dataSet.max(0) # [ 1785. 5. 68.]
ranges = maxValues - minValues # [ 1775. 2.4 68.]
rowCount = dataSet.shape[0] # 1039
# numpy.tile() wiederholt Sequenzen (hier: [[ 10. 2.6 0. ], ..., [ 10. 2.6 0. ]]
dataSet_n = dataSet - numpy.tile(minValues, (rowCount, 1)) #[[ 2.56000000e+02 9.00000000e-01 1.80000000e+01]
# [ 6.60000000e+01 2.00000000e-01 5.40000000e+01]
# [ 3.32000000e+02 1.50000000e-01 1.00000000e+01]
# ...,
# [ 1.58000000e+02 6.00000000e-01 0.00000000e+00]
# [ 5.70000000e+01 1.00000000e-01 5.20000000e+01]
# [ 1.68000000e+02 2.00000000e-01 0.00000000e+00]]
dataSet_n = dataSet_n / numpy.tile(ranges, (rowCount, 1)) #[[ 0.14422535 0.375 0.26470588]
# [ 0.0371831 0.08333333 0.79411765]
# [ 0.18704225 0.0625 0.14705882]
# ...,
# [ 0.08901408 0.25 0.]
# [ 0.03211268 0.04166667 0.76470588]
# [ 0.09464789 0.08333333 0.]]
#print(dataSet_n)
return dataSet_n, ranges, minValues
dataSet_n, ranges, minValues = normalizeDataSet(dataSet)
def classify(inX, dataSet, labels, k):
rowCount = dataSet.shape[0] # Anzahl an Zeilen bestimmen
diffMat = numpy.tile(inX, (rowCount,1)) - dataSet # Berechnung der Katheten
# (über tile() wird der Eingangsdatensatz über die Zeilenanzahl des dataSet vervielfacht,
# der dataSet davon substrahiert)
sqDiffMat = diffMat**2 # Quadrat der Katheten
sqDistances = sqDiffMat.sum(axis=1) # Aufsummieren der Differenzpaare
distances = sqDistances**0.5 # Quadratwurzel über alle Werte
sortedDistIndicies = distances.argsort() # Aufsteigende Sortierung
classCount = {}
#print("inX = %s, k = %s" % (inX, k))
#print(sortedDistIndicies)
for i in range(k): # Eingrenzung auf k-Werte in der sortierten Liste
closest = labels[sortedDistIndicies[i]] # Label (Kategorie [Büro, Wohnung, Haus] entsprechend der Sortierung aufnehmen
classCount[closest] = classCount.get(closest, 0) + 1 # Aufbau eines Dictionary über die
sortedClassCount = sorted(classCount, key = classCount.get, reverse=True) # Absteigende Sortierung der gesammelten Labels in k-Reichweite
# wobei die Sortierung über den Count (Value) erfolgt
#print(classCount)
#print(sortedClassCount[0])
return sortedClassCount[0] # Liefere das erste Label zurück
# also das Label mit der höchsten Anzahl innerhalb der k-Reichweite
errorCount = 0
k = 5 # k-Eingrenzung (hier: auf 5 Nachbarn einschränken)
rowCount = dataSet_n.shape[0] # Anzahl der Zeilen im gesamten Datensatz
numTestVectors = 30 # Datensätze 0 - 29 werden zum testen von k verwendet,
# die Datensätze ab Zeile 30 werden zur Klassifikation verwendet
for i in range(0, numTestVectors): # Aufruf des Klassifikators von 0 bis 29
result = classify(dataSet_n[i,:], dataSet_n[numTestVectors:rowCount,:], classLabelVector[numTestVectors:rowCount], k)
print("%s - the classifier came back with: %s, the real answer is: %s" %(i, result, classLabelVector[i]))
if (result != classLabelVector[i]):
errorCount += 1.0
print("Error Count: %d" % errorCount)
| de | 0.912511 | # Quelle: https://data-science-blog.com/blog/2016/04/26/machine-learning-mit-python-tutorial-minimalbeispiel/ #Erweiterung für die Matplotlib - siehe: http://matplotlib.org/mpl_toolkits/ # Datei-Stream vorbereiten # Anzahl der Zeilen ermitteln # Eine Numpy-Matrix in Höhe der Zeilenanzahl (minus Kopfzeile) und in Breite der drei Merkmal-Spalten # Hier werden die tatsächlichen Kategorien (Haus, Wohnung, Büro) vermerkt # Hier werden die Kategorien über Farben vermerkt (zur späteren Unterscheidung im 3D-Plot!) #print(returnMat) # Ggf. mal die noch die ausge-null-te Matrix anzeigen lassen (bei Python 2.7: die Klammern weglassen!) # Datei-Stream öffnen # Zeile für Zeile der Datei lesen # Kopfzeile überspringen # Jede Zeile wird zur temporären Liste (Tabulator als Trennzeichen) #Liste in die entsprechende Zeile der Matrix überführen # Kategorie (Haus, Wohnung, Büro) für diese Zeile merken # Kategorie (Haus, Wohnung, Büro) als Text-Label speichern # Kategorie als Farbe speichern (Büro = gelb, Wohnung = rot, Haus = Blau) #[[ 0. 0. 0.] # [ 0. 0. 0.] # [ 0. 0. 0.] # ..., # [ 0. 0. 0.] # [ 0. 0. 0.] # [ 0. 0. 0.]] # [ 10. 2.6 0.] # [ 1775. 2.4 68.] # [ 10. 2.6 0.] # [ 1785. 5. 68.] # [ 1775. 2.4 68.] # 1039 # numpy.tile() wiederholt Sequenzen (hier: [[ 10. 2.6 0. ], ..., [ 10. 2.6 0. ]] #[[ 2.56000000e+02 9.00000000e-01 1.80000000e+01] # [ 6.60000000e+01 2.00000000e-01 5.40000000e+01] # [ 3.32000000e+02 1.50000000e-01 1.00000000e+01] # ..., # [ 1.58000000e+02 6.00000000e-01 0.00000000e+00] # [ 5.70000000e+01 1.00000000e-01 5.20000000e+01] # [ 1.68000000e+02 2.00000000e-01 0.00000000e+00]] #[[ 0.14422535 0.375 0.26470588] # [ 0.0371831 0.08333333 0.79411765] # [ 0.18704225 0.0625 0.14705882] # ..., # [ 0.08901408 0.25 0.] # [ 0.03211268 0.04166667 0.76470588] # [ 0.09464789 0.08333333 0.]] #print(dataSet_n) # Anzahl an Zeilen bestimmen # Berechnung der Katheten # (über tile() wird der Eingangsdatensatz über die Zeilenanzahl des dataSet vervielfacht, # der dataSet davon substrahiert) # Quadrat der Katheten # Aufsummieren der Differenzpaare # Quadratwurzel über alle Werte # Aufsteigende Sortierung #print("inX = %s, k = %s" % (inX, k)) #print(sortedDistIndicies) # Eingrenzung auf k-Werte in der sortierten Liste # Label (Kategorie [Büro, Wohnung, Haus] entsprechend der Sortierung aufnehmen # Aufbau eines Dictionary über die # Absteigende Sortierung der gesammelten Labels in k-Reichweite # wobei die Sortierung über den Count (Value) erfolgt #print(classCount) #print(sortedClassCount[0]) # Liefere das erste Label zurück # also das Label mit der höchsten Anzahl innerhalb der k-Reichweite # k-Eingrenzung (hier: auf 5 Nachbarn einschränken) # Anzahl der Zeilen im gesamten Datensatz # Datensätze 0 - 29 werden zum testen von k verwendet, # die Datensätze ab Zeile 30 werden zur Klassifikation verwendet # Aufruf des Klassifikators von 0 bis 29 | 3.291955 | 3 |
imagecut.py | changsuchoi/cspy | 0 | 6615349 | <filename>imagecut.py<gh_stars>0
# code for image cut and copy from original image to small images
# in rectangluar region of input size
# and input center position (RA,DEC) ind degree unit
# python imagecut.py test.fits
# if you want to cut image in pixel coordinates, then use 'image' NOT 'physical' in DS9 window.
# test.fits -> test-cut.fits
# <NAME> 2017/8/22
import astropy.io.fits as fits
import numpy as np
from astropy.wcs import WCS
from pyraf import iraf
import astropy.wcs.utils as wcsutils
import os,sys
import glob
from astropy.nddata import Cutout2D
from astropy.coordinates import SkyCoord
from astropy.coordinates import ICRS, Galactic, FK4, FK5
from astropy.coordinates import Angle, Latitude, Longitude
import astropy.units as u
import astropy.coordinates as coord
#im=sys.argv[1]
#infile=sorted(glob.glob(im))
#size=sys.argv[2] # arcmin
# input
#im= 'test.fits'
#ra, dec = 308.71805, 60.15368 #NGC6946
#ra, dec = 178.20602, 44.12025
#ra, dec = 161.64562673, 13.75085444
#ra, dec = 14.95871, -07.57796
#197.44879, -23.38383, #'13:09:47.71', '-23:23:01.8' # NGC4993 center coordinates J2000 deg unit
#ra, dec = 197.47311, -23.368377
size = 10 # arcmin unit, length of square side
ra,dec=161.645641, 13.750859
positions=(False,False)
positions=(ra,dec)
def imcopy(im,size=size,positions=positions):
'''
size=10
position=(ra,dec), default (False,False)
'''
outname=os.path.splitext(im)[0]+'_'+str(size)+'min_cut.fits'
if os.path.isfile(outname) : os.system('rm '+outname)
hdr= fits.getheader(im)
w = WCS(hdr)
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*3600
pixscale=(xpscale+ypscale)/2.
if positions==(False,False) :
print('RA or DEC input, False, position will be center of',im)
px, py = hdr['NAXIS1']/2., hdr['NAXIS2']/2.
ax, bx = px-size/2/pixscale*60, px+size/2/pixscale*60
ay, by = py-size/2/pixscale*60, py+size/2/pixscale*60
else:
ra,dec=positions[0],positions[1]
px, py = w.wcs_world2pix(ra, dec, 1)
print ('center pixel coordinates', int(px), int(py) )
ax, bx = px-size/2/pixscale*60, px+size/2/pixscale*60
ay, by = py-size/2/pixscale*60, py+size/2/pixscale*60
print ('pixel scale =', '%.3f'% (pixscale), size,
'arcmin rectangular cut =',int(bx - ax),'pixels')
region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
print (outname,'will be created')
#region='[200:2048,60:2048]'
chinim=im+region
iraf.imcopy(chinim,output=outname)
return 'Done'
def radec_center(im):
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from astropy.coordinates import ICRS, Galactic, FK4, FK5
from astropy.coordinates import Angle, Latitude, Longitude
from astropy.io import fits
import astropy.units as u
import astropy.coordinates as coord
import numpy as np
hdr = fits.getheader(im)
# RA, Dec center for reference catalog query
xcent, ycent= hdr['NAXIS1']/2., hdr['NAXIS2']/2.
w = WCS(im)
racent, deccent = w.all_pix2world(xcent, ycent, 1)
c=SkyCoord(racent,deccent,unit="deg")
rastr=c.ra.to_string(unit=u.hourangle,sep=':')
decstr=c.dec.to_string(unit=u.deg,sep=':')
racent, deccent = racent.item(), deccent.item()
return rastr,decstr,racent,deccent
pos=(False,False)
pos=(161.645641, 13.750859)
sz=(10,10)
def trim(inim, positions=pos, sizes=sz):
'''
positions=(ra,dec) in deg unit
sizes=(px,py) in pixel unit
'''
hdu = fits.open(inim)[0]
hdr=hdu.header
w = WCS(hdu.header)
outim=os.path.splitext(inim)[0]+'_'+'{0:02d}'.format(sizes[0])+'min_cut.fits'
if positions==(False,False):
px, py= hdr['NAXIS1']/2., hdr['NAXIS2']/2.
cra,cdec=w.all_pix2world(px,py,1)
print('Image Center position used,', cra, cdec, '\n')
positions=SkyCoord(cra*u.deg,cdec*u.deg)
else : print('Input center position(deg)',positions)
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*3600
pixscale=(xpscale+ypscale)/2.
size=u.Quantity(sizes,u.arcmin)
#sizes=(int(round(sizes[0]*60/pixscale)),int(round(sizes[1]*60/pixscale)))
print('sizes',size)
positions=SkyCoord(positions[0]*u.deg, positions[1]*u.deg)
# Load the image and the WCS
# Make the cutout, including the WCS
cutout = Cutout2D(hdu.data, position=positions, size=size, wcs=w,
mode='trim',fill_value=1.0e-30)
# Put the cutout image in the FITS HDU
hdu.data = cutout.data
# Update the FITS header with the cutout WCS
hdu.header.update(cutout.wcs.to_header())
# Write the cutout to a new FITS file
hdu.writeto(outim, overwrite=True)
return 'Done'
def xyimcopy(inim,sizex,sizey):
hdr= fits.getheader(inim)
w = WCS(hdr)
px, py = w.wcs_world2pix(ra, dec, 1)
print ('center pixel coordinates', int(px), int(py) )
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*60 # pixel scale armin unit
pixscale=(xpscale+ypscale)/2.
ax,bx=px-sizex/2/pixscale,px+sizex/2/pixscale
ay,by=py-sizey/2/pixscale,py+sizey/2/pixscale
print ('pixel scale =', '%.3f'% (pixscale*60), sizex, 'arcmin rectangular cut =',int(bx - ax),'pixels')
print ('pixel scale =', '%.3f'% (pixscale*60), sizey, 'arcmin rectangular cut =',int(by - ay),'pixels')
region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
outname=inim[:-5]+'-'+str(sizex)+'+'+str(sizey)+'-arcmin-cut.fits'
print (outname,'will be created')
#region='[200:2048,60:2048]'
chinim=inim+region
iraf.imcopy(chinim,output=outname)
'''
def imcopypix(inim,region):
#hdr= fits.getheader(inim)
#w = WCS(hdr)
#px, py = w.wcs_world2pix(ra, dec, 1)
#print 'center pixel coordinates', int(px), int(py)
#xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*60 # pixel scale armin unit
#pixscale=(xpscale+ypscale)/2.
#ax,bx=px-size/2/pixscale,px+size/2/pixscale
#ay,by=py-size/2/pixscale,py+size/2/pixscale
#print 'pixel scale =', '%.3f'% (pixscale*60), size, 'arcmin rectangular cut =',int(bx - ax),'pixels'
#region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
outname=inim[:-5]+'-cut.fits'
print outname,'will be created'
#region='[200:2048,60:2048]'
chinim=inim+region
iraf.imcopy(chinim,output=outname)
'''
#2D cutout WCS example
'''
>>> from astropy.coordinates import SkyCoord
>>> from astropy.wcs import WCS
>>> position = SkyCoord('13h11m29.96s -01d19m18.7s', frame='icrs')
>>> wcs = WCS(naxis=2)
>>> rho = np.pi / 3.
>>> scale = 0.05 / 3600.
>>> wcs.wcs.cd = [[scale*np.cos(rho), -scale*np.sin(rho)],
... [scale*np.sin(rho), scale*np.cos(rho)]]
>>> wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN']
>>> wcs.wcs.crval = [position.ra.to_value(u.deg),
... position.dec.to_value(u.deg)]
>>> wcs.wcs.crpix = [50, 100]
# Download an example FITS file, create a 2D cutout, and save it to a
# new FITS file, including the updated cutout WCS.
from astropy.io import fits
from astropy.nddata import Cutout2D
from astropy.utils.data import download_file
from astropy.wcs import WCS
def download_image_save_cutout(url, position, size):
# Download the image
filename = download_file(url)
# Load the image and the WCS
hdu = fits.open(filename)[0]
wcs = WCS(hdu.header)
# Make the cutout, including the WCS
cutout = Cutout2D(hdu.data, position=position, size=size, wcs=wcs)
# Put the cutout image in the FITS HDU
hdu.data = cutout.data
# Update the FITS header with the cutout WCS
hdu.header.update(cutout.wcs.to_header())
# Write the cutout to a new FITS file
cutout_filename = 'example_cutout.fits'
hdu.writeto(cutout_filename, overwrite=True)
if __name__ == '__main__':
url = 'https://astropy.stsci.edu/data/photometry/spitzer_example_image.fits'
position = (500, 300)
size = (400, 400)
download_image_save_cutout(url, position, size)
'''
| <filename>imagecut.py<gh_stars>0
# code for image cut and copy from original image to small images
# in rectangluar region of input size
# and input center position (RA,DEC) ind degree unit
# python imagecut.py test.fits
# if you want to cut image in pixel coordinates, then use 'image' NOT 'physical' in DS9 window.
# test.fits -> test-cut.fits
# <NAME> 2017/8/22
import astropy.io.fits as fits
import numpy as np
from astropy.wcs import WCS
from pyraf import iraf
import astropy.wcs.utils as wcsutils
import os,sys
import glob
from astropy.nddata import Cutout2D
from astropy.coordinates import SkyCoord
from astropy.coordinates import ICRS, Galactic, FK4, FK5
from astropy.coordinates import Angle, Latitude, Longitude
import astropy.units as u
import astropy.coordinates as coord
#im=sys.argv[1]
#infile=sorted(glob.glob(im))
#size=sys.argv[2] # arcmin
# input
#im= 'test.fits'
#ra, dec = 308.71805, 60.15368 #NGC6946
#ra, dec = 178.20602, 44.12025
#ra, dec = 161.64562673, 13.75085444
#ra, dec = 14.95871, -07.57796
#197.44879, -23.38383, #'13:09:47.71', '-23:23:01.8' # NGC4993 center coordinates J2000 deg unit
#ra, dec = 197.47311, -23.368377
size = 10 # arcmin unit, length of square side
ra,dec=161.645641, 13.750859
positions=(False,False)
positions=(ra,dec)
def imcopy(im,size=size,positions=positions):
'''
size=10
position=(ra,dec), default (False,False)
'''
outname=os.path.splitext(im)[0]+'_'+str(size)+'min_cut.fits'
if os.path.isfile(outname) : os.system('rm '+outname)
hdr= fits.getheader(im)
w = WCS(hdr)
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*3600
pixscale=(xpscale+ypscale)/2.
if positions==(False,False) :
print('RA or DEC input, False, position will be center of',im)
px, py = hdr['NAXIS1']/2., hdr['NAXIS2']/2.
ax, bx = px-size/2/pixscale*60, px+size/2/pixscale*60
ay, by = py-size/2/pixscale*60, py+size/2/pixscale*60
else:
ra,dec=positions[0],positions[1]
px, py = w.wcs_world2pix(ra, dec, 1)
print ('center pixel coordinates', int(px), int(py) )
ax, bx = px-size/2/pixscale*60, px+size/2/pixscale*60
ay, by = py-size/2/pixscale*60, py+size/2/pixscale*60
print ('pixel scale =', '%.3f'% (pixscale), size,
'arcmin rectangular cut =',int(bx - ax),'pixels')
region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
print (outname,'will be created')
#region='[200:2048,60:2048]'
chinim=im+region
iraf.imcopy(chinim,output=outname)
return 'Done'
def radec_center(im):
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from astropy.coordinates import ICRS, Galactic, FK4, FK5
from astropy.coordinates import Angle, Latitude, Longitude
from astropy.io import fits
import astropy.units as u
import astropy.coordinates as coord
import numpy as np
hdr = fits.getheader(im)
# RA, Dec center for reference catalog query
xcent, ycent= hdr['NAXIS1']/2., hdr['NAXIS2']/2.
w = WCS(im)
racent, deccent = w.all_pix2world(xcent, ycent, 1)
c=SkyCoord(racent,deccent,unit="deg")
rastr=c.ra.to_string(unit=u.hourangle,sep=':')
decstr=c.dec.to_string(unit=u.deg,sep=':')
racent, deccent = racent.item(), deccent.item()
return rastr,decstr,racent,deccent
pos=(False,False)
pos=(161.645641, 13.750859)
sz=(10,10)
def trim(inim, positions=pos, sizes=sz):
'''
positions=(ra,dec) in deg unit
sizes=(px,py) in pixel unit
'''
hdu = fits.open(inim)[0]
hdr=hdu.header
w = WCS(hdu.header)
outim=os.path.splitext(inim)[0]+'_'+'{0:02d}'.format(sizes[0])+'min_cut.fits'
if positions==(False,False):
px, py= hdr['NAXIS1']/2., hdr['NAXIS2']/2.
cra,cdec=w.all_pix2world(px,py,1)
print('Image Center position used,', cra, cdec, '\n')
positions=SkyCoord(cra*u.deg,cdec*u.deg)
else : print('Input center position(deg)',positions)
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*3600
pixscale=(xpscale+ypscale)/2.
size=u.Quantity(sizes,u.arcmin)
#sizes=(int(round(sizes[0]*60/pixscale)),int(round(sizes[1]*60/pixscale)))
print('sizes',size)
positions=SkyCoord(positions[0]*u.deg, positions[1]*u.deg)
# Load the image and the WCS
# Make the cutout, including the WCS
cutout = Cutout2D(hdu.data, position=positions, size=size, wcs=w,
mode='trim',fill_value=1.0e-30)
# Put the cutout image in the FITS HDU
hdu.data = cutout.data
# Update the FITS header with the cutout WCS
hdu.header.update(cutout.wcs.to_header())
# Write the cutout to a new FITS file
hdu.writeto(outim, overwrite=True)
return 'Done'
def xyimcopy(inim,sizex,sizey):
hdr= fits.getheader(inim)
w = WCS(hdr)
px, py = w.wcs_world2pix(ra, dec, 1)
print ('center pixel coordinates', int(px), int(py) )
xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*60 # pixel scale armin unit
pixscale=(xpscale+ypscale)/2.
ax,bx=px-sizex/2/pixscale,px+sizex/2/pixscale
ay,by=py-sizey/2/pixscale,py+sizey/2/pixscale
print ('pixel scale =', '%.3f'% (pixscale*60), sizex, 'arcmin rectangular cut =',int(bx - ax),'pixels')
print ('pixel scale =', '%.3f'% (pixscale*60), sizey, 'arcmin rectangular cut =',int(by - ay),'pixels')
region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
outname=inim[:-5]+'-'+str(sizex)+'+'+str(sizey)+'-arcmin-cut.fits'
print (outname,'will be created')
#region='[200:2048,60:2048]'
chinim=inim+region
iraf.imcopy(chinim,output=outname)
'''
def imcopypix(inim,region):
#hdr= fits.getheader(inim)
#w = WCS(hdr)
#px, py = w.wcs_world2pix(ra, dec, 1)
#print 'center pixel coordinates', int(px), int(py)
#xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*60 # pixel scale armin unit
#pixscale=(xpscale+ypscale)/2.
#ax,bx=px-size/2/pixscale,px+size/2/pixscale
#ay,by=py-size/2/pixscale,py+size/2/pixscale
#print 'pixel scale =', '%.3f'% (pixscale*60), size, 'arcmin rectangular cut =',int(bx - ax),'pixels'
#region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']'
outname=inim[:-5]+'-cut.fits'
print outname,'will be created'
#region='[200:2048,60:2048]'
chinim=inim+region
iraf.imcopy(chinim,output=outname)
'''
#2D cutout WCS example
'''
>>> from astropy.coordinates import SkyCoord
>>> from astropy.wcs import WCS
>>> position = SkyCoord('13h11m29.96s -01d19m18.7s', frame='icrs')
>>> wcs = WCS(naxis=2)
>>> rho = np.pi / 3.
>>> scale = 0.05 / 3600.
>>> wcs.wcs.cd = [[scale*np.cos(rho), -scale*np.sin(rho)],
... [scale*np.sin(rho), scale*np.cos(rho)]]
>>> wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN']
>>> wcs.wcs.crval = [position.ra.to_value(u.deg),
... position.dec.to_value(u.deg)]
>>> wcs.wcs.crpix = [50, 100]
# Download an example FITS file, create a 2D cutout, and save it to a
# new FITS file, including the updated cutout WCS.
from astropy.io import fits
from astropy.nddata import Cutout2D
from astropy.utils.data import download_file
from astropy.wcs import WCS
def download_image_save_cutout(url, position, size):
# Download the image
filename = download_file(url)
# Load the image and the WCS
hdu = fits.open(filename)[0]
wcs = WCS(hdu.header)
# Make the cutout, including the WCS
cutout = Cutout2D(hdu.data, position=position, size=size, wcs=wcs)
# Put the cutout image in the FITS HDU
hdu.data = cutout.data
# Update the FITS header with the cutout WCS
hdu.header.update(cutout.wcs.to_header())
# Write the cutout to a new FITS file
cutout_filename = 'example_cutout.fits'
hdu.writeto(cutout_filename, overwrite=True)
if __name__ == '__main__':
url = 'https://astropy.stsci.edu/data/photometry/spitzer_example_image.fits'
position = (500, 300)
size = (400, 400)
download_image_save_cutout(url, position, size)
'''
| en | 0.488383 | # code for image cut and copy from original image to small images # in rectangluar region of input size # and input center position (RA,DEC) ind degree unit # python imagecut.py test.fits # if you want to cut image in pixel coordinates, then use 'image' NOT 'physical' in DS9 window. # test.fits -> test-cut.fits # <NAME> 2017/8/22 #im=sys.argv[1] #infile=sorted(glob.glob(im)) #size=sys.argv[2] # arcmin # input #im= 'test.fits' #ra, dec = 308.71805, 60.15368 #NGC6946 #ra, dec = 178.20602, 44.12025 #ra, dec = 161.64562673, 13.75085444 #ra, dec = 14.95871, -07.57796 #197.44879, -23.38383, #'13:09:47.71', '-23:23:01.8' # NGC4993 center coordinates J2000 deg unit #ra, dec = 197.47311, -23.368377 # arcmin unit, length of square side size=10 position=(ra,dec), default (False,False) #region='[200:2048,60:2048]' # RA, Dec center for reference catalog query positions=(ra,dec) in deg unit sizes=(px,py) in pixel unit #sizes=(int(round(sizes[0]*60/pixscale)),int(round(sizes[1]*60/pixscale))) # Load the image and the WCS # Make the cutout, including the WCS # Put the cutout image in the FITS HDU # Update the FITS header with the cutout WCS # Write the cutout to a new FITS file # pixel scale armin unit #region='[200:2048,60:2048]' def imcopypix(inim,region): #hdr= fits.getheader(inim) #w = WCS(hdr) #px, py = w.wcs_world2pix(ra, dec, 1) #print 'center pixel coordinates', int(px), int(py) #xpscale,ypscale=wcsutils.proj_plane_pixel_scales(w)*60 # pixel scale armin unit #pixscale=(xpscale+ypscale)/2. #ax,bx=px-size/2/pixscale,px+size/2/pixscale #ay,by=py-size/2/pixscale,py+size/2/pixscale #print 'pixel scale =', '%.3f'% (pixscale*60), size, 'arcmin rectangular cut =',int(bx - ax),'pixels' #region='['+str(int(ax))+':'+str(int(bx))+','+str(int(ay))+':'+str(int(by))+']' outname=inim[:-5]+'-cut.fits' print outname,'will be created' #region='[200:2048,60:2048]' chinim=inim+region iraf.imcopy(chinim,output=outname) #2D cutout WCS example >>> from astropy.coordinates import SkyCoord >>> from astropy.wcs import WCS >>> position = SkyCoord('13h11m29.96s -01d19m18.7s', frame='icrs') >>> wcs = WCS(naxis=2) >>> rho = np.pi / 3. >>> scale = 0.05 / 3600. >>> wcs.wcs.cd = [[scale*np.cos(rho), -scale*np.sin(rho)], ... [scale*np.sin(rho), scale*np.cos(rho)]] >>> wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN'] >>> wcs.wcs.crval = [position.ra.to_value(u.deg), ... position.dec.to_value(u.deg)] >>> wcs.wcs.crpix = [50, 100] # Download an example FITS file, create a 2D cutout, and save it to a # new FITS file, including the updated cutout WCS. from astropy.io import fits from astropy.nddata import Cutout2D from astropy.utils.data import download_file from astropy.wcs import WCS def download_image_save_cutout(url, position, size): # Download the image filename = download_file(url) # Load the image and the WCS hdu = fits.open(filename)[0] wcs = WCS(hdu.header) # Make the cutout, including the WCS cutout = Cutout2D(hdu.data, position=position, size=size, wcs=wcs) # Put the cutout image in the FITS HDU hdu.data = cutout.data # Update the FITS header with the cutout WCS hdu.header.update(cutout.wcs.to_header()) # Write the cutout to a new FITS file cutout_filename = 'example_cutout.fits' hdu.writeto(cutout_filename, overwrite=True) if __name__ == '__main__': url = 'https://astropy.stsci.edu/data/photometry/spitzer_example_image.fits' position = (500, 300) size = (400, 400) download_image_save_cutout(url, position, size) | 2.603445 | 3 |
src/product/models.py | oussamabouchikhi/Bigdeals | 2 | 6615350 | from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.utils.text import slugify
from django.urls import reverse
# Create your models here.
class Product(models.Model):
prodName = models.CharField(max_length=100, verbose_name=_("Name"))
"""
- class Category is below, so we have to put it inside single quotes
- Each product have a category
- to avoid cross import erro we use 'appName.modelName' trick
"""
prodCategory = models.ForeignKey('Category', on_delete=models.CASCADE, blank=True, null=True, verbose_name=_("Category"))
prodBrand = models.ForeignKey('settings.Brand', on_delete=models.CASCADE, blank=True, null=True, verbose_name=_("Brand"))
prodDesc = models.TextField(verbose_name=_("Description"))
prodImage = models.ImageField(upload_to='product/', verbose_name=_("Image"), blank=True, null=True)
prodPrice = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Price"))
prodOldPrice = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Old Price"))
prodCost = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Cost"))
prodCreated = models.DateTimeField(verbose_name=_("Created at"))
prodSlug = models.SlugField(blank=True, null=True, verbose_name=_("URL"))
prodIsNew = models.BooleanField(default=True, verbose_name=_("New"))
prodIsBestseller = models.BooleanField(default=True, verbose_name=_("Bestseller"))
prodIsLimited = models.BooleanField(default=True, verbose_name=_("Limited"))
"""
because Product model is a Class each producSt is an object(instance)
so we need to define __str__ function to show the name of each product
"""
class Meta:
verbose_name = _("Product")
verbose_name_plural = _("Products")
"""
Override save method
"""
def save(self, *args, **kwargs):
# if there is no slug
if not self.prodSlug :
# Generate a slug from product name
self.prodSlug = slugify(self.prodName)
super(Product, self).save(*args, **kwargs)
def get_absolute_url(self):
return reverse('products:product_detail', kwargs={'slug': self.prodSlug})
def __str__(self):
return str(self.prodName)
class ProductImage(models.Model):
# Every product has an image. delete image if product deleted
PIProduct = models.ForeignKey(Product, on_delete=models.CASCADE, verbose_name=_("Product"))
PIImage = models.ImageField(upload_to='product/', verbose_name=_("Image"))
def __str__(self):
return str(self.PIProduct)
class Category(models.Model):
CATName = models.CharField(max_length=50, verbose_name=_("Name"))
"""
- Main category & sub category are same so Parent references to itself(Recursive relation).
- if there's no sub category it will be null
- Subcateogries are two levels down so a subcateogry can't have another subcateogry
- [limit_choices_to]: show only parent categories (its parent is null)
"""
CATParent = models.ForeignKey('self', on_delete=models.CASCADE, limit_choices_to={'CATParent__isnull': True}, verbose_name=_("Main Category"), blank=True, null=True)
CATDesc = models.TextField(verbose_name=_("Description"))
CATImg = models.ImageField(upload_to='category/', verbose_name=_("Image"))
class Meta:
verbose_name = _("Category")
verbose_name_plural = _("Categories")
def __str__(self):
return str(self.CATName)
class Product_Alternative(models.Model):
"""
- Since both PAName & PAAlternative are related to the same model
Django will see them as one, so we have to give them a related_name.
"""
PAName = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='main_product', verbose_name=_("Product"))
PAAlternative = models.ManyToManyField(Product, related_name='alternative_products', verbose_name=_("Alternatives"))
class Meta:
verbose_name = _("Product Alternative")
verbose_name_plural = _("Product Alternatives")
def __str__(self):
return str(self.PAName)
class Product_Accessory(models.Model):
"""
- Since both PAName & PAAlternative are related to the same model
Django will see them as one, so we have to give them a related_name.
"""
PACCName = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='main_accessory_product', verbose_name=_("Product"))
PACCAlternative = models.ManyToManyField(Product, related_name='accessories_products', verbose_name=_("Accessories"))
class Meta:
verbose_name = _("Product Accessory")
verbose_name_plural = _("Product Accessories")
def __str__(self):
return str(self.PACCName)
| from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.utils.text import slugify
from django.urls import reverse
# Create your models here.
class Product(models.Model):
prodName = models.CharField(max_length=100, verbose_name=_("Name"))
"""
- class Category is below, so we have to put it inside single quotes
- Each product have a category
- to avoid cross import erro we use 'appName.modelName' trick
"""
prodCategory = models.ForeignKey('Category', on_delete=models.CASCADE, blank=True, null=True, verbose_name=_("Category"))
prodBrand = models.ForeignKey('settings.Brand', on_delete=models.CASCADE, blank=True, null=True, verbose_name=_("Brand"))
prodDesc = models.TextField(verbose_name=_("Description"))
prodImage = models.ImageField(upload_to='product/', verbose_name=_("Image"), blank=True, null=True)
prodPrice = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Price"))
prodOldPrice = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Old Price"))
prodCost = models.DecimalField(max_digits=5, decimal_places=2, verbose_name=_("Cost"))
prodCreated = models.DateTimeField(verbose_name=_("Created at"))
prodSlug = models.SlugField(blank=True, null=True, verbose_name=_("URL"))
prodIsNew = models.BooleanField(default=True, verbose_name=_("New"))
prodIsBestseller = models.BooleanField(default=True, verbose_name=_("Bestseller"))
prodIsLimited = models.BooleanField(default=True, verbose_name=_("Limited"))
"""
because Product model is a Class each producSt is an object(instance)
so we need to define __str__ function to show the name of each product
"""
class Meta:
verbose_name = _("Product")
verbose_name_plural = _("Products")
"""
Override save method
"""
def save(self, *args, **kwargs):
# if there is no slug
if not self.prodSlug :
# Generate a slug from product name
self.prodSlug = slugify(self.prodName)
super(Product, self).save(*args, **kwargs)
def get_absolute_url(self):
return reverse('products:product_detail', kwargs={'slug': self.prodSlug})
def __str__(self):
return str(self.prodName)
class ProductImage(models.Model):
# Every product has an image. delete image if product deleted
PIProduct = models.ForeignKey(Product, on_delete=models.CASCADE, verbose_name=_("Product"))
PIImage = models.ImageField(upload_to='product/', verbose_name=_("Image"))
def __str__(self):
return str(self.PIProduct)
class Category(models.Model):
CATName = models.CharField(max_length=50, verbose_name=_("Name"))
"""
- Main category & sub category are same so Parent references to itself(Recursive relation).
- if there's no sub category it will be null
- Subcateogries are two levels down so a subcateogry can't have another subcateogry
- [limit_choices_to]: show only parent categories (its parent is null)
"""
CATParent = models.ForeignKey('self', on_delete=models.CASCADE, limit_choices_to={'CATParent__isnull': True}, verbose_name=_("Main Category"), blank=True, null=True)
CATDesc = models.TextField(verbose_name=_("Description"))
CATImg = models.ImageField(upload_to='category/', verbose_name=_("Image"))
class Meta:
verbose_name = _("Category")
verbose_name_plural = _("Categories")
def __str__(self):
return str(self.CATName)
class Product_Alternative(models.Model):
"""
- Since both PAName & PAAlternative are related to the same model
Django will see them as one, so we have to give them a related_name.
"""
PAName = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='main_product', verbose_name=_("Product"))
PAAlternative = models.ManyToManyField(Product, related_name='alternative_products', verbose_name=_("Alternatives"))
class Meta:
verbose_name = _("Product Alternative")
verbose_name_plural = _("Product Alternatives")
def __str__(self):
return str(self.PAName)
class Product_Accessory(models.Model):
"""
- Since both PAName & PAAlternative are related to the same model
Django will see them as one, so we have to give them a related_name.
"""
PACCName = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='main_accessory_product', verbose_name=_("Product"))
PACCAlternative = models.ManyToManyField(Product, related_name='accessories_products', verbose_name=_("Accessories"))
class Meta:
verbose_name = _("Product Accessory")
verbose_name_plural = _("Product Accessories")
def __str__(self):
return str(self.PACCName)
| en | 0.90449 | # Create your models here. - class Category is below, so we have to put it inside single quotes - Each product have a category - to avoid cross import erro we use 'appName.modelName' trick because Product model is a Class each producSt is an object(instance) so we need to define __str__ function to show the name of each product Override save method # if there is no slug # Generate a slug from product name # Every product has an image. delete image if product deleted - Main category & sub category are same so Parent references to itself(Recursive relation). - if there's no sub category it will be null - Subcateogries are two levels down so a subcateogry can't have another subcateogry - [limit_choices_to]: show only parent categories (its parent is null) - Since both PAName & PAAlternative are related to the same model Django will see them as one, so we have to give them a related_name. - Since both PAName & PAAlternative are related to the same model Django will see them as one, so we have to give them a related_name. | 2.381969 | 2 |