hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
625a19aeeb78d1a163e46b551accd53b6ef2d20c | 532 | py | Python | torch2trt/__init__.py | SnowMasaya/torch2trt | d526b2473805f9b9a704a201bef3ce5be25d284f | [
"MIT"
] | 2 | 2020-07-10T06:26:03.000Z | 2020-07-10T07:38:08.000Z | torch2trt/__init__.py | SnowMasaya/torch2trt | d526b2473805f9b9a704a201bef3ce5be25d284f | [
"MIT"
] | 1 | 2020-02-16T09:43:35.000Z | 2020-02-16T09:43:35.000Z | torch2trt/__init__.py | SnowMasaya/torch2trt | d526b2473805f9b9a704a201bef3ce5be25d284f | [
"MIT"
] | 1 | 2019-10-14T01:11:23.000Z | 2019-10-14T01:11:23.000Z | from .torch2trt import *
from .converters import *
import tensorrt as trt
def load_plugins():
    import os
    import ctypes

    ctypes.CDLL(os.path.join(os.path.dirname(__file__), 'libtorch2trt.so'))

    registry = trt.get_plugin_registry()
    torch2trt_creators = [c for c in registry.plugin_creator_list if c.plugin_namespace == 'torch2trt']
    for c in torch2trt_creators:
        registry.register_creator(c, 'torch2trt')


try:
    load_plugins()
    PLUGINS_LOADED = True
except OSError:
    PLUGINS_LOADED = False
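# The try/except above makes the compiled plugins optional: importing the
# package still succeeds when libtorch2trt.so is absent. A minimal usage
# sketch (illustrative only; it relies solely on the PLUGINS_LOADED flag
# defined above):
#
#     import torch2trt
#
#     if torch2trt.PLUGINS_LOADED:
#         print('compiled torch2trt plugins are available')
#     else:
#         print('libtorch2trt.so not found; plugin-backed layers are unavailable')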
| 24.181818 | 103 | 0.716165 | 69 | 532 | 5.289855 | 0.536232 | 0.060274 | 0.032877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013986 | 0.193609 | 532 | 21 | 104 | 25.333333 | 0.83683 | 0 | 0 | 0 | 0 | 0 | 0.06203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
6261f7a9c9b18a89ffbec87fba08c79cb2839e13 | 1,151 | py | Python | code/glucocheck/homepage/migrations/0007_auto_20210315_1807.py | kmcgreg5/Glucocheck | 4ab4ada7f967ae41c1241c94523d14e693e05dd4 | [
"FSFAP"
] | null | null | null | code/glucocheck/homepage/migrations/0007_auto_20210315_1807.py | kmcgreg5/Glucocheck | 4ab4ada7f967ae41c1241c94523d14e693e05dd4 | [
"FSFAP"
] | null | null | null | code/glucocheck/homepage/migrations/0007_auto_20210315_1807.py | kmcgreg5/Glucocheck | 4ab4ada7f967ae41c1241c94523d14e693e05dd4 | [
"FSFAP"
] | null | null | null | # Generated by Django 3.1.7 on 2021-03-15 22:07
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('homepage', '0006_carbohydrate_glucose_insulin_recordingcategory'),
    ]

    operations = [
        migrations.RenameField(
            model_name='carbohydrate',
            old_name='reading',
            new_name='carb_reading',
        ),
        migrations.RemoveField(
            model_name='glucose',
            name='categories',
        ),
        migrations.AddField(
            model_name='glucose',
            name='categories',
            field=models.ManyToManyField(to='homepage.RecordingCategory'),
        ),
        migrations.AlterField(
            model_name='recordingcategory',
            name='name',
            field=models.CharField(choices=[('fasting', 'Fasting'), ('before breakfast', 'Before Breakfast'), ('after breakfast', 'After Breakfast'), ('before lunch', 'Before Lunch'), ('after lunch', 'After Lunch'), ('snacks', 'Snacks'), ('before dinner', 'Before Dinner'), ('after dinner', 'After Dinner')], max_length=255, unique=True),
        ),
    ]
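# For reference, a sketch of applying this migration programmatically (the
# app label and migration name come from the file path and dependencies
# above; normally one would simply run `manage.py migrate`):
#
#     from django.core.management import call_command
#     call_command('migrate', 'homepage', '0007_auto_20210315_1807')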
| 34.878788 | 338 | 0.600348 | 106 | 1,151 | 6.40566 | 0.518868 | 0.053019 | 0.047128 | 0.05891 | 0.088365 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025791 | 0.258905 | 1,151 | 32 | 339 | 35.96875 | 0.770223 | 0.039096 | 0 | 0.307692 | 1 | 0 | 0.321558 | 0.069746 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6262bae7dfc3df2c02ba7e5efae6983d3daa02cb | 1,826 | py | Python | models/SnapshotTeam.py | Fa1c0n35/RootTheBoxs | 4f2a9886c8eedca3039604b93929c8c09866115e | [
"Apache-2.0"
] | 1 | 2019-06-29T08:40:54.000Z | 2019-06-29T08:40:54.000Z | models/SnapshotTeam.py | Fa1c0n35/RootTheBoxs | 4f2a9886c8eedca3039604b93929c8c09866115e | [
"Apache-2.0"
] | null | null | null | models/SnapshotTeam.py | Fa1c0n35/RootTheBoxs | 4f2a9886c8eedca3039604b93929c8c09866115e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mar 11, 2012
@author: moloch
Copyright 2012 Root the Box
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from sqlalchemy import Column, ForeignKey
from sqlalchemy.orm import relationship, backref
from sqlalchemy.types import Integer
from models import dbsession
from models.Team import Team
from models.Relationships import snapshot_team_to_flag, snapshot_team_to_game_level
from models.BaseModels import DatabaseObject
class SnapshotTeam(DatabaseObject):

    """
    Used by game history; snapshot of a single team in history
    """

    team_id = Column(Integer, ForeignKey("team.id"), nullable=False)
    money = Column(Integer, nullable=False)
    bots = Column(Integer, nullable=False)

    game_levels = relationship(
        "GameLevel",
        secondary=snapshot_team_to_game_level,
        backref=backref("snapshot_team", lazy="select"),
    )

    flags = relationship(
        "Flag",
        secondary=snapshot_team_to_flag,
        backref=backref("snapshot_team", lazy="select"),
    )

    @property
    def name(self):
        return dbsession.query(Team._name).filter_by(id=self.team_id).first()[0]

    @classmethod
    def all(cls):
        """ Returns a list of all objects in the database """
        return dbsession.query(cls).all()
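# Minimal usage sketch (assumes a populated game database; it only uses the
# name property and the all() classmethod defined above):
#
#     for snapshot in SnapshotTeam.all():
#         print("%s: $%d, %d bots" % (snapshot.name, snapshot.money, snapshot.bots))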
| 29.451613 | 83 | 0.7092 | 242 | 1,826 | 5.264463 | 0.504132 | 0.047096 | 0.043956 | 0.025118 | 0.092622 | 0.056515 | 0 | 0 | 0 | 0 | 0 | 0.011065 | 0.208105 | 1,826 | 61 | 84 | 29.934426 | 0.869986 | 0.417853 | 0 | 0.074074 | 0 | 0 | 0.056093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.259259 | 0.037037 | 0.62963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
62643e087525aca4ccc614812b7bfd674336652f | 411 | py | Python | pythonexercicios/ex101-funcvotacao.py | marroni1103/exercicios-pyton | 734162cc4b63ed30d754a6efe4c5622baaa1a50b | [
"MIT"
] | null | null | null | pythonexercicios/ex101-funcvotacao.py | marroni1103/exercicios-pyton | 734162cc4b63ed30d754a6efe4c5622baaa1a50b | [
"MIT"
] | null | null | null | pythonexercicios/ex101-funcvotacao.py | marroni1103/exercicios-pyton | 734162cc4b63ed30d754a6efe4c5622baaa1a50b | [
"MIT"
] | null | null | null | def voto(num):
    from datetime import date
    anoatual = date.today().year
    idade = anoatual - num
    if idade < 16:
        return f"Com {idade} anos: NÃO VOTA"
    elif 16 <= idade < 18 or idade > 65:
        return f'Com {idade} anos: VOTO OPCIONAL'
    else:
        return f"Com {idade} anos: VOTO OBRIGATORIO"


print('-' * 30)
anonasc = int(input('Em que ano você nasceu? '))
print(voto(anonasc))
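# Example behaviour of voto(), assuming the current year is 2023:
#   voto(2010) -> 'Com 13 anos: NÃO VOTA'          (under 16)
#   voto(2006) -> 'Com 17 anos: VOTO OPCIONAL'     (16-17: optional)
#   voto(1990) -> 'Com 33 anos: VOTO OBRIGATORIO'  (18-65: mandatory)
#   voto(1950) -> 'Com 73 anos: VOTO OPCIONAL'     (over 65: optional)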
| 25.6875 | 52 | 0.610706 | 59 | 411 | 4.254237 | 0.610169 | 0.083665 | 0.119522 | 0.179283 | 0.258964 | 0.183267 | 0 | 0 | 0 | 0 | 0 | 0.033113 | 0.265207 | 411 | 15 | 53 | 27.4 | 0.798013 | 0 | 0 | 0 | 0 | 0 | 0.282238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.384615 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
626e4c17d238ffdd4b719fcf03cef903734ecb10 | 201 | py | Python | secondstring.py | Kokouvi/reversorder | 157e39eaf424d816715080dbce0850670836e8fd | [
"MIT"
] | null | null | null | secondstring.py | Kokouvi/reversorder | 157e39eaf424d816715080dbce0850670836e8fd | [
"MIT"
] | null | null | null | secondstring.py | Kokouvi/reversorder | 157e39eaf424d816715080dbce0850670836e8fd | [
"MIT"
] | null | null | null | text = "The quick brown fox jumps over the lazy dog."  # initial string (renamed from `str` to avoid shadowing the built-in)
reversed_text = "".join(reversed(text))  # reversed() yields the characters in reverse order; .join() merges them back into a string
print(reversed_text[0:43:2])  # print every second character of the reversed string
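# For reference, how the extended slice [start:stop:step] behaves
# (an illustrative sketch, not part of the original exercise):
#
#     sample = "abcdef"
#     sample[::-1]   # 'fedcba' - the idiomatic way to reverse a string
#     sample[0:6:2]  # 'ace'    - every second character of the first six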
| 50.25 | 78 | 0.731343 | 32 | 201 | 4.59375 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023392 | 0.149254 | 201 | 3 | 79 | 67 | 0.836257 | 0.412935 | 0 | 0 | 0 | 0 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6280695be38110adda77e21a75a8350fbff3df45 | 9,433 | py | Python | fabfile/data.py | nprapps/linklater | 9ba8fbefcbe9148253e5d5c47572e8b887ce9485 | [
"FSFAP"
] | null | null | null | fabfile/data.py | nprapps/linklater | 9ba8fbefcbe9148253e5d5c47572e8b887ce9485 | [
"FSFAP"
] | 47 | 2015-01-22T16:12:16.000Z | 2015-01-28T18:51:58.000Z | fabfile/data.py | nprapps/linklater | 9ba8fbefcbe9148253e5d5c47572e8b887ce9485 | [
"FSFAP"
] | 1 | 2021-02-18T11:26:35.000Z | 2021-02-18T11:26:35.000Z | #!/usr/bin/env python
"""
Commands that update or process the application data.
"""
from datetime import datetime
import json
from bs4 import BeautifulSoup
from flask import render_template
from fabric.api import task
from fabric.state import env
from facebook import GraphAPI
from twitter import Twitter, OAuth
from jinja2 import Environment, FileSystemLoader
import app_config
import copytext
import os
import requests
TWITTER_BATCH_SIZE = 200
@task(default=True)
def update():
    """
    Stub function for updating app-specific data.
    """
    #update_featured_social()
@task
def make_tumblr_draft_html():
    links = fetch_tweets(env.twitter_handle, env.twitter_timeframe)
    template = env.jinja_env.get_template('tumblr.html')
    output = template.render(links=links)
    return output
@task
def fetch_tweets(username, days):
    """
    Get tweets of a specific user
    """
    current_time = datetime.now()
    secrets = app_config.get_secrets()

    twitter_api = Twitter(
        auth=OAuth(
            secrets['TWITTER_API_OAUTH_TOKEN'],
            secrets['TWITTER_API_OAUTH_SECRET'],
            secrets['TWITTER_API_CONSUMER_KEY'],
            secrets['TWITTER_API_CONSUMER_SECRET']
        )
    )

    out = []
    tweets = twitter_api.statuses.user_timeline(screen_name=username, count=TWITTER_BATCH_SIZE)
    i = 0
    while True:
        if i > (len(tweets) - 1):
            break
        tweet = tweets[i]
        created_time = datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y')
        time_difference = (current_time - created_time).days
        if time_difference > int(days):
            break
        out.extend(_process_tweet(tweet, username))
        i += 1
        if i > (TWITTER_BATCH_SIZE - 1):
            tweets = twitter_api.statuses.user_timeline(screen_name=username, count=TWITTER_BATCH_SIZE, max_id=tweet['id'])
            i = 0
    out = _dedupe_links(out)
    return out
def _process_tweet(tweet, username):
    out = []
    for url in tweet['entities']['urls']:
        if url['display_url'].startswith('pic.twitter.com'):
            continue
        row = _grab_url(url['expanded_url'])
        if row:
            row['tweet_text'] = tweet['text']
            if tweet.get('retweeted_status'):
                row['tweet_url'] = 'http://twitter.com/%s/status/%s' % (tweet['retweeted_status']['user']['screen_name'], tweet['id'])
                row['tweeted_by'] = tweet['retweeted_status']['user']['screen_name']
                out.append(row)
            else:
                row['tweet_url'] = 'http://twitter.com/%s/status/%s' % (username, tweet['id'])
                out.append(row)
    return out
def _grab_url(url):
    """
    Returns data of the form:
    {
        'title': <TITLE>,
        'description': <DESCRIPTION>,
        'type': <page/image/download>,
        'image': <IMAGE_URL>,
        'tweet_url': <TWEET_URL>,
        'tweet_text': <TWEET_TEXT>,
        'tweeted_by': <USERNAME>
    }
    """
    data = None

    try:
        resp = requests.get(url, timeout=5)
    except requests.exceptions.Timeout:
        print '%s timed out.' % url
        return None

    real_url = resp.url

    if resp.status_code == 200 and resp.headers.get('content-type').startswith('text/html'):
        data = {}
        data['url'] = real_url
        soup = BeautifulSoup(resp.content)
        og_tags = ('image', 'title', 'description')
        for og_tag in og_tags:
            match = soup.find(attrs={'property': 'og:%s' % og_tag})
            if match and match.attrs.get('content'):
                data[og_tag] = match.attrs.get('content')
    else:
        print "There was an error accessing %s (%s)" % (real_url, resp.status_code)

    return data
def _dedupe_links(links):
    """
    Get rid of duplicate URLs
    """
    out = []
    urls_seen = []
    for link in links:
        if link['url'] not in urls_seen:
            urls_seen.append(link['url'])
            out.append(link)
        else:
            print "%s is a duplicate, skipping" % link['url']
    return out
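# Illustrative behaviour of _dedupe_links() on hypothetical toy data:
#
#     links = [{'url': 'http://a.example'}, {'url': 'http://b.example'},
#              {'url': 'http://a.example'}]
#     _dedupe_links(links)
#     # prints:  http://a.example is a duplicate, skipping
#     # returns: [{'url': 'http://a.example'}, {'url': 'http://b.example'}]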
@task
def update_featured_social():
    """
    Update featured tweets
    """
    COPY = copytext.Copy(app_config.COPY_PATH)
    secrets = app_config.get_secrets()

    # Twitter
    print 'Fetching tweets...'

    twitter_api = Twitter(
        auth=OAuth(
            secrets['TWITTER_API_OAUTH_TOKEN'],
            secrets['TWITTER_API_OAUTH_SECRET'],
            secrets['TWITTER_API_CONSUMER_KEY'],
            secrets['TWITTER_API_CONSUMER_SECRET']
        )
    )

    tweets = []

    for i in range(1, 4):
        tweet_url = COPY['share']['featured_tweet%i' % i]

        if isinstance(tweet_url, copytext.Error) or unicode(tweet_url).strip() == '':
            continue

        tweet_id = unicode(tweet_url).split('/')[-1]

        tweet = twitter_api.statuses.show(id=tweet_id)

        creation_date = datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y')
        creation_date = '%s %i' % (creation_date.strftime('%b'), creation_date.day)

        tweet_url = 'http://twitter.com/%s/status/%s' % (tweet['user']['screen_name'], tweet['id'])

        photo = None
        html = tweet['text']
        subs = {}

        for media in tweet['entities'].get('media', []):
            original = tweet['text'][media['indices'][0]:media['indices'][1]]
            replacement = '<a href="%s" target="_blank" onclick="_gaq.push([\'_trackEvent\', \'%s\', \'featured-tweet-action\', \'link\', 0, \'%s\']);">%s</a>' % (media['url'], app_config.PROJECT_SLUG, tweet_url, media['display_url'])

            subs[original] = replacement

            if media['type'] == 'photo' and not photo:
                photo = {
                    'url': media['media_url']
                }

        for url in tweet['entities'].get('urls', []):
            original = tweet['text'][url['indices'][0]:url['indices'][1]]
            replacement = '<a href="%s" target="_blank" onclick="_gaq.push([\'_trackEvent\', \'%s\', \'featured-tweet-action\', \'link\', 0, \'%s\']);">%s</a>' % (url['url'], app_config.PROJECT_SLUG, tweet_url, url['display_url'])

            subs[original] = replacement

        for hashtag in tweet['entities'].get('hashtags', []):
            original = tweet['text'][hashtag['indices'][0]:hashtag['indices'][1]]
            replacement = '<a href="https://twitter.com/hashtag/%s" target="_blank" onclick="_gaq.push([\'_trackEvent\', \'%s\', \'featured-tweet-action\', \'hashtag\', 0, \'%s\']);">%s</a>' % (hashtag['text'], app_config.PROJECT_SLUG, tweet_url, '#%s' % hashtag['text'])

            subs[original] = replacement

        for original, replacement in subs.items():
            html = html.replace(original, replacement)

        # https://dev.twitter.com/docs/api/1.1/get/statuses/show/%3Aid
        tweets.append({
            'id': tweet['id'],
            'url': tweet_url,
            'html': html,
            'favorite_count': tweet['favorite_count'],
            'retweet_count': tweet['retweet_count'],
            'user': {
                'id': tweet['user']['id'],
                'name': tweet['user']['name'],
                'screen_name': tweet['user']['screen_name'],
                'profile_image_url': tweet['user']['profile_image_url'],
                'url': tweet['user']['url'],
            },
            'creation_date': creation_date,
            'photo': photo
        })

    # Facebook
    print 'Fetching Facebook posts...'

    fb_api = GraphAPI(secrets['FACEBOOK_API_APP_TOKEN'])
    facebook_posts = []

    for i in range(1, 4):
        fb_url = COPY['share']['featured_facebook%i' % i]

        if isinstance(fb_url, copytext.Error) or unicode(fb_url).strip() == '':
            continue

        fb_id = unicode(fb_url).split('/')[-1]

        post = fb_api.get_object(fb_id)
        user = fb_api.get_object(post['from']['id'])
        user_picture = fb_api.get_object('%s/picture' % post['from']['id'])
        likes = fb_api.get_object('%s/likes' % fb_id, summary='true')
        comments = fb_api.get_object('%s/comments' % fb_id, summary='true')
        #shares = fb_api.get_object('%s/sharedposts' % fb_id)

        creation_date = datetime.strptime(post['created_time'], '%Y-%m-%dT%H:%M:%S+0000')
        creation_date = '%s %i' % (creation_date.strftime('%b'), creation_date.day)

        # https://developers.facebook.com/docs/graph-api/reference/v2.0/post
        facebook_posts.append({
            'id': post['id'],
            'message': post['message'],
            'link': {
                'url': post['link'],
                'name': post['name'],
                'caption': (post['caption'] if 'caption' in post else None),
                'description': post['description'],
                'picture': post['picture']
            },
            'from': {
                'name': user['name'],
                'link': user['link'],
                'picture': user_picture['url']
            },
            'likes': likes['summary']['total_count'],
            'comments': comments['summary']['total_count'],
            #'shares': shares['summary']['total_count'],
            'creation_date': creation_date
        })

    # Render to JSON
    output = {
        'tweets': tweets,
        'facebook_posts': facebook_posts
    }

    with open('data/featured.json', 'w') as f:
        json.dump(output, f)
| 31.029605 | 271 | 0.56578 | 1,105 | 9,433 | 4.647059 | 0.213575 | 0.025316 | 0.029796 | 0.016358 | 0.292502 | 0.231353 | 0.197079 | 0.185005 | 0.185005 | 0.164362 | 0 | 0.006714 | 0.27372 | 9,433 | 303 | 272 | 31.132013 | 0.742811 | 0.031697 | 0 | 0.222222 | 0 | 0.005051 | 0.204075 | 0.035274 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.065657 | null | null | 0.025253 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6289d050b39c8fee926a510fc4214e7ee940d801 | 4,723 | py | Python | dust/admin.py | MerlinEmris/eBazar | f159314183a8a95afd97d36b0d3d8cf22015a512 | [
"MIT"
] | null | null | null | dust/admin.py | MerlinEmris/eBazar | f159314183a8a95afd97d36b0d3d8cf22015a512 | [
"MIT"
] | null | null | null | dust/admin.py | MerlinEmris/eBazar | f159314183a8a95afd97d36b0d3d8cf22015a512 | [
"MIT"
] | null | null | null | from django.utils.html import format_html
def full_address(self):
    return format_html('%s - <b>%s,%s</b>' % (self.address, self.city, self.state))


# text displayed in place of null values
admin.site.empty_value_display = '???'
admin.site.register(Item)


# show the __str__ output next to the name
class ItemAdmin(admin.ModelAdmin):
    list_display = ['name', '__str__']


# display a custom, computed column
class StoreAdmin(admin.ModelAdmin):
    list_display = ['name', 'address', 'upper_case_city_state']

    def upper_case_city_state(self, obj):
        return ("%s %s" % (obj.city, obj.state)).upper()
    upper_case_city_state.short_description = 'City/State'


# return the domain part of the email address
class Store(models.Model):
    name = models.CharField(max_length=30)
    email = models.EmailField()

    def email_domain(self):
        return self.email.split("@")[-1]
    email_domain.short_description = 'Email domain'

class StoreAdmin(admin.ModelAdmin):
    list_display = ['name', 'email_domain']


# how to sort a manually created field that is backed by the db
# models.py
from django.db import models
from django.utils.html import format_html

class Store(models.Model):
    name = models.CharField(max_length=30)
    address = models.CharField(max_length=30, unique=True)
    city = models.CharField(max_length=30)
    state = models.CharField(max_length=2)

    def full_address(self):
        return format_html('%s - <b>%s,%s</b>' % (self.address, self.city, self.state))
    full_address.admin_order_field = '-city'

# admin.py
from django.contrib import admin
from coffeehouse.stores.models import Store

class StoreAdmin(admin.ModelAdmin):
    list_display = ['name', 'full_address']


# turn the needed columns into links to the change page
list_display_links = ['name', 'user', 'location', 'price']

# when False, coming back from a detail page shows the list without the active filters
preserve_filters = False

# to allow drill-down filtering by creation date
date_hierarchy = 'created'

# removes the actions menu located at the top
actions_on_top = False

# show only these fields
fields = ['address', 'city', 'state', 'email']

# change the widget used for a field type
formfield_overrides = {
    models.CharField: {'widget': forms.Textarea}
}

# fills the address field with a slugged combination of the city and state fields
prepopulated_fields = {'address': ['city', 'state']}

# adds a "Save as new" button that clones the record
save_as = True
save_as_continue = False  # after cloning, go back to the main page

# removes the "view on site" link
view_on_site = False

# if you want to manually enter foreign key and many-to-many field values
raw_id_fields = ["menu"]

# show foreign keys and many-to-many fields as radio buttons
radio_fields = {"location": admin.HORIZONTAL}


# change the admin form depending on the user type
class MyModelAdmin(admin.ModelAdmin):
    def get_form(self, request, obj=None, **kwargs):
        if request.user.is_superuser:
            kwargs['form'] = MySuperuserForm
        return super(MyModelAdmin, self).get_form(request, obj, **kwargs)


# restrict foreign key choices according to the user
class MyModelAdmin(admin.ModelAdmin):
    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        if db_field.name == "car":
            kwargs["queryset"] = Car.objects.filter(owner=request.user)
        return super(MyModelAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)


# restrict many-to-many choices according to the user
class MyModelAdmin(admin.ModelAdmin):
    def formfield_for_manytomany(self, db_field, request, **kwargs):
        if db_field.name == "cars":
            kwargs["queryset"] = Car.objects.filter(owner=request.user)
        return super(MyModelAdmin, self).formfield_for_manytomany(db_field, request, **kwargs)


# the admin calls this after a delete
def response_delete(request, obj_display, obj_id):
    """
    Determines the HttpResponse for the delete_view() stage.

    response_delete is called after the object has been deleted.
    You can override it to change the default behavior after the object
    has been deleted.

    obj_display is a string with the name of the deleted object.
    obj_id is the serialized identifier used to retrieve the object to
    be deleted.
    """


# colored admin field
from django.db import models
from django.contrib import admin
from django.utils.html import format_html

class Person(models.Model):
    first_name = models.CharField(max_length=50)
    color_code = models.CharField(max_length=6)

    def colored_first_name(self):
        return format_html(
            '<span style="color: #{};">{}</span>',
            self.color_code,
            self.first_name,
        )
    colored_first_name.admin_order_field = 'first_name'

class PersonAdmin(admin.ModelAdmin):
    list_display = ('first_name', 'colored_first_name')


list_select_related = ('organization', 'user')
| 27.782353 | 94 | 0.724328 | 632 | 4,723 | 5.254747 | 0.332278 | 0.036134 | 0.03794 | 0.050587 | 0.367359 | 0.308642 | 0.276724 | 0.20807 | 0.185486 | 0.163204 | 0 | 0.003319 | 0.170654 | 4,723 | 169 | 95 | 27.946746 | 0.844524 | 0.176583 | 0 | 0.270588 | 0 | 0 | 0.089165 | 0.005443 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.094118 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
628f60f6980f2fba69cda100a9a49fdeb649e134 | 1,226 | py | Python | notebook/dict_keys_values_items.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 174 | 2018-05-30T21:14:50.000Z | 2022-03-25T07:59:37.000Z | notebook/dict_keys_values_items.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 5 | 2019-08-10T03:22:02.000Z | 2021-07-12T20:31:17.000Z | notebook/dict_keys_values_items.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 53 | 2018-04-27T05:26:35.000Z | 2022-03-25T07:59:37.000Z | d = {'key1': 1, 'key2': 2, 'key3': 3}
for k in d:
    print(k)
# key1
# key2
# key3
for k in d.keys():
    print(k)
# key1
# key2
# key3
keys = d.keys()
print(keys)
print(type(keys))
# dict_keys(['key1', 'key2', 'key3'])
# <class 'dict_keys'>
k_list = list(d.keys())
print(k_list)
print(type(k_list))
# ['key1', 'key2', 'key3']
# <class 'list'>
for v in d.values():
    print(v)
# 1
# 2
# 3
values = d.values()
print(values)
print(type(values))
# dict_values([1, 2, 3])
# <class 'dict_values'>
v_list = list(d.values())
print(v_list)
print(type(v_list))
# [1, 2, 3]
# <class 'list'>
for k, v in d.items():
    print(k, v)
# key1 1
# key2 2
# key3 3
for t in d.items():
    print(t)
    print(type(t))
    print(t[0])
    print(t[1])
    print('---')
# ('key1', 1)
# <class 'tuple'>
# key1
# 1
# ---
# ('key2', 2)
# <class 'tuple'>
# key2
# 2
# ---
# ('key3', 3)
# <class 'tuple'>
# key3
# 3
# ---
items = d.items()
print(items)
print(type(items))
# dict_items([('key1', 1), ('key2', 2), ('key3', 3)])
# <class 'dict_items'>
i_list = list(d.items())
print(i_list)
print(type(i_list))
# [('key1', 1), ('key2', 2), ('key3', 3)]
# <class 'list'>
print(i_list[0])
print(type(i_list[0]))
# ('key1', 1)
# <class 'tuple'>
| 13.775281 | 53 | 0.546493 | 204 | 1,226 | 3.20098 | 0.112745 | 0.11026 | 0.068913 | 0.07657 | 0.194487 | 0.116386 | 0.116386 | 0 | 0 | 0 | 0 | 0.063636 | 0.192496 | 1,226 | 88 | 54 | 13.931818 | 0.59596 | 0.403752 | 0 | 0.057143 | 0 | 0 | 0.021771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.657143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
65599b2db0af8388cda22867211e56c2902c85cb | 3,815 | py | Python | experiments/vitchyr/icml2017/watermaze_memory/generate_bellman_ablation_figure_data.py | Asap7772/rail-rl-franka-eval | 4bf99072376828193d05b53cf83c7e8f4efbd3ba | [
"MIT"
] | null | null | null | experiments/vitchyr/icml2017/watermaze_memory/generate_bellman_ablation_figure_data.py | Asap7772/rail-rl-franka-eval | 4bf99072376828193d05b53cf83c7e8f4efbd3ba | [
"MIT"
] | null | null | null | experiments/vitchyr/icml2017/watermaze_memory/generate_bellman_ablation_figure_data.py | Asap7772/rail-rl-franka-eval | 4bf99072376828193d05b53cf83c7e8f4efbd3ba | [
"MIT"
] | null | null | null | """
Generate data for ablation analysis for ICML 2017 workshop paper.
"""
import random
from torch.nn import functional as F
from railrl.envs.pygame.water_maze import (
    WaterMazeMemory,
)
from railrl.exploration_strategies.ou_strategy import OUStrategy
from railrl.launchers.launcher_util import (
    run_experiment,
)
from railrl.launchers.memory_bptt_launchers import bptt_ddpg_launcher
from railrl.pythonplusplus import identity
from railrl.memory_states.qfunctions import MemoryQFunction
from railrl.torch.rnn import GRUCell
if __name__ == '__main__':
    n_seeds = 1
    mode = "here"
    exp_prefix = "dev-generate-bellman-ablation-figure-data"
    run_mode = 'none'

    n_seeds = 5
    mode = "ec2"
    exp_prefix = "generate-bellman_ablation-figure-data"

    use_gpu = True
    if mode != "here":
        use_gpu = False

    H = 25
    subtraj_length = None
    num_steps_per_iteration = 1000
    num_steps_per_eval = 1000
    num_iterations = 100
    batch_size = 100
    memory_dim = 100
    version = "Our Method"

    # noinspection PyTypeChecker
    variant = dict(
        memory_dim=memory_dim,
        env_class=WaterMazeMemory,
        env_params=dict(
            horizon=H,
            give_time=True,
        ),
        memory_aug_params=dict(
            max_magnitude=1,
        ),
        algo_params=dict(
            subtraj_length=subtraj_length,
            batch_size=batch_size,
            num_epochs=num_iterations,
            num_steps_per_epoch=num_steps_per_iteration,
            num_steps_per_eval=num_steps_per_eval,
            discount=0.9,
            use_action_policy_params_for_entire_policy=False,
            action_policy_optimize_bellman=False,
            write_policy_optimizes='bellman',
            action_policy_learning_rate=0.001,
            write_policy_learning_rate=0.0005,
            qf_learning_rate=0.002,
            max_path_length=H,
            refresh_entire_buffer_period=None,
            save_new_memories_back_to_replay_buffer=True,
            write_policy_weight_decay=0,
            action_policy_weight_decay=0,
            do_not_load_initial_memories=False,
            save_memory_gradients=False,
        ),
        qf_class=MemoryQFunction,
        qf_params=dict(
            output_activation=identity,
            fc1_size=400,
            fc2_size=300,
            ignore_memory=False,
        ),
        policy_params=dict(
            fc1_size=400,
            fc2_size=300,
            cell_class=GRUCell,
            output_activation=F.tanh,
            only_one_fc_for_action=False,
        ),
        es_params=dict(
            env_es_class=OUStrategy,
            env_es_params=dict(
                max_sigma=1,
                min_sigma=None,
            ),
            memory_es_class=OUStrategy,
            memory_es_params=dict(
                max_sigma=1,
                min_sigma=None,
            ),
        ),
        version=version,
    )

    for subtraj_length in [1, 5, 10, 15, 20, 25]:
        variant['algo_params']['subtraj_length'] = subtraj_length
        for exp_id, (
            write_policy_optimizes,
            version,
        ) in enumerate([
            ("bellman", "Bellman Error"),
            ("qf", "Q-Function"),
            ("both", "Both"),
        ]):
            variant['algo_params']['write_policy_optimizes'] = (
                write_policy_optimizes
            )
            variant['version'] = version
            for _ in range(n_seeds):
                seed = random.randint(0, 10000)
                run_experiment(
                    bptt_ddpg_launcher,
                    exp_prefix=exp_prefix,
                    seed=seed,
                    mode=mode,
                    variant=variant,
                    exp_id=exp_id,
                )
| 30.03937 | 69 | 0.5827 | 417 | 3,815 | 4.961631 | 0.378897 | 0.038666 | 0.031899 | 0.02175 | 0.083132 | 0.051232 | 0.031899 | 0.031899 | 0.031899 | 0 | 0 | 0.031051 | 0.341547 | 3,815 | 126 | 70 | 30.277778 | 0.792596 | 0.024377 | 0 | 0.13913 | 1 | 0 | 0.060043 | 0.026925 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.078261 | 0 | 0.078261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
655e6832b1d17e6f32fa6e6e3a8b08c294e9b6de | 4,146 | py | Python | jspp_imageutils/image/chunking.py | jspaezp/jspp_imageutils | 6376e274a1b0675622a7979c181b9effc125aa09 | [
"Apache-2.0"
] | null | null | null | jspp_imageutils/image/chunking.py | jspaezp/jspp_imageutils | 6376e274a1b0675622a7979c181b9effc125aa09 | [
"Apache-2.0"
] | null | null | null | jspp_imageutils/image/chunking.py | jspaezp/jspp_imageutils | 6376e274a1b0675622a7979c181b9effc125aa09 | [
"Apache-2.0"
] | null | null | null | import itertools
import numpy as np
from jspp_imageutils.image.types import GenImgArray, GenImgBatch
from typing import Tuple, Iterable, Iterator
# TODO: fix everywhere the x and y axis nomenclature
"""
chunk_image_on_position -> returns images
chunk_image_generator -> returns images
chunk_data_image_generator -> returns batches of data
"""
def chunk_image_on_position(arr_img: GenImgArray,
                            x_pos: Iterable[int], y_pos: Iterable[int],
                            dimensions: Tuple[int, int] = (50, 50),
                            warn_leftovers=True) -> \
        Iterator[Tuple[int, int, GenImgArray]]:
    # TODO decide if this should handle centering the points ...
    x_ends = [x + dimensions[0] for x in x_pos]
    y_ends = [y + dimensions[1] for y in y_pos]

    i = 0
    # TODO find a better way to indent this ...
    for y_start, y_end, x_start, x_end in \
            zip(y_pos, y_ends, x_pos, x_ends):
        temp_arr_img = arr_img[x_start:x_end, y_start:y_end, ]

        if temp_arr_img.shape[0:2] == dimensions:
            yield x_start, y_start, temp_arr_img
            i += 1
        else:
            if warn_leftovers:
                print("skipping chunk due to weird size",
                      str(temp_arr_img.shape))

    print("Image generator yielded ", str(i), " images")


def chunk_image_generator(img,
                          chunk_size: Tuple[int, int] = (500, 500),
                          displacement: Tuple[int, int] = (250, 250),
                          warn_leftovers=True) -> \
        Iterator[Tuple[int, int, GenImgArray]]:
    """
    Gets an image read with tensorflow.keras.preprocessing.image.load_img
    and returns a generator that iterates over rectangular areas of it.

    Chunks are of dims (chunk_size, colors).
    """
    # TODO unify the input for this guy ...
    arr_img = np.asarray(img)
    dims = arr_img.shape

    x_starts = [
        displacement[0] * x for x in range(dims[0] // displacement[0])
    ]
    x_starts = [x for x in x_starts if
                x >= 0 & (x + chunk_size[0]) < dims[0]]

    y_starts = [
        displacement[1] * y for y in range(dims[1] // displacement[1])
    ]
    y_starts = [y for y in y_starts if
                y >= 0 & (y + chunk_size[1]) < dims[1]]

    coord_pairs = itertools.product(x_starts, y_starts)
    coord_pairs = np.array(list(coord_pairs))

    my_gen = chunk_image_on_position(
        arr_img, coord_pairs[:, 0], coord_pairs[:, 1],
        dimensions=chunk_size, warn_leftovers=warn_leftovers)

    for chunk in my_gen:
        yield(chunk)


def chunk_data_image_generator(img: GenImgArray,
                               chunk_size: Tuple[int, int] = (500, 500),
                               displacement: Tuple[int, int] = (250, 250),
                               batch: int = 16) -> GenImgBatch:
    """
    chunk_data_image_generator [summary]

    Gets an image read with tensorflow.keras.preprocessing.image.load_img
    and returns a generator that iterates over BATCHES of rectangular
    areas of it.

    Dimensions are (batch, chunk_size, colors).

    :param img: [description]
    :type img: GenImgArray
    :param chunk_size: [description], defaults to (500, 500)
    :type chunk_size: Tuple[int, int], optional
    :param displacement: [description], defaults to (250, 250)
    :type displacement: Tuple[int, int], optional
    :param batch: [description], defaults to 16
    :type batch: int, optional
    :return: [description]
    :rtype: GenImgBatch
    """
    # np.concatenate((a1, a2))
    img_generator = chunk_image_generator(
        img=img, chunk_size=chunk_size,
        displacement=displacement)

    counter = 0
    img_buffer = []
    for _, _, temp_arr_img in img_generator:
        tmp_arr_dims = temp_arr_img.shape
        temp_arr_img = temp_arr_img.reshape(1, *tmp_arr_dims)
        img_buffer.append(temp_arr_img)
        counter += 1

        if counter == batch:
            yield(np.concatenate(img_buffer))
            counter = 0
            img_buffer = []

    yield(np.concatenate(img_buffer))
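# Minimal usage sketch of the batch generator (illustrative only; a random
# RGB array stands in for an image loaded with keras):
#
#     import numpy as np
#     fake_img = np.random.randint(0, 255, size=(1000, 1000, 3), dtype=np.uint8)
#     for batch in chunk_data_image_generator(fake_img, chunk_size=(500, 500),
#                                             displacement=(250, 250), batch=4):
#         print(batch.shape)  # e.g. (4, 500, 500, 3); the last batch may be smaller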
| 33.168 | 74 | 0.605885 | 542 | 4,146 | 4.424354 | 0.239852 | 0.035029 | 0.041284 | 0.025021 | 0.232277 | 0.185988 | 0.164304 | 0.164304 | 0.125104 | 0.125104 | 0 | 0.023989 | 0.296189 | 4,146 | 124 | 75 | 33.435484 | 0.797807 | 0.239749 | 0 | 0.212121 | 0 | 0 | 0.021649 | 0 | 0 | 0 | 0 | 0.024194 | 0 | 1 | 0.045455 | false | 0 | 0.060606 | 0 | 0.106061 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
655fb86e683d32b0ac543bc333e3c26a2dfbef4d | 1,256 | py | Python | modules/transfer/scripts/info.py | sishuiliunian/falcon-plus | eb6e2a5c29b26812601535cec602b33ee42b0632 | [
"Apache-2.0"
] | 7,208 | 2017-01-15T08:32:54.000Z | 2022-03-31T14:09:04.000Z | modules/transfer/scripts/info.py | sishuiliunian/falcon-plus | eb6e2a5c29b26812601535cec602b33ee42b0632 | [
"Apache-2.0"
] | 745 | 2017-01-17T06:55:21.000Z | 2022-03-28T03:33:45.000Z | modules/transfer/scripts/info.py | sishuiliunian/falcon-plus | eb6e2a5c29b26812601535cec602b33ee42b0632 | [
"Apache-2.0"
] | 1,699 | 2017-01-11T09:16:44.000Z | 2022-03-29T10:40:31.000Z | import requests
# Copyright 2017 Xiaomi, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
d = [
    {
        "endpoint": "hh-op-mon-tran01.bj",
        "counter": "load.15min",
    },
    {
        "endpoint": "hh-op-mon-tran01.bj",
        "counter": "net.if.in.bytes/iface=eth0",
    },
    {
        "endpoint": "10.202.31.14:7934",
        "counter": "p2-com.xiaomi.miui.mibi.service.MibiService-method-createTradeV1",
    },
]
url = "http://query.falcon.miliao.srv:9966/graph/info"
r = requests.post(url, data=json.dumps(d))
print r.text
#curl "localhost:9966/graph/info/one?endpoint=`hostname`&counter=load.1min" |python -m json.tool
| 32.205128 | 96 | 0.630573 | 168 | 1,256 | 4.714286 | 0.690476 | 0.075758 | 0.032828 | 0.040404 | 0.075758 | 0.075758 | 0.075758 | 0 | 0 | 0 | 0 | 0.041314 | 0.248408 | 1,256 | 38 | 97 | 33.052632 | 0.797669 | 0.512739 | 0 | 0.105263 | 0 | 0 | 0.41206 | 0.150754 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.105263 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65710b598ac66d11ae7f738f8a51d26c406a0a31 | 6,525 | py | Python | cron/sync_cs_schedule.py | vovagalchenko/onsite-inflight | 7acd4bc6a12b89ab09b465a81ae495bef35bab0a | [
"MIT"
] | null | null | null | cron/sync_cs_schedule.py | vovagalchenko/onsite-inflight | 7acd4bc6a12b89ab09b465a81ae495bef35bab0a | [
"MIT"
] | 1 | 2016-05-24T00:00:10.000Z | 2016-05-24T00:00:10.000Z | cron/sync_cs_schedule.py | vovagalchenko/onsite-inflight | 7acd4bc6a12b89ab09b465a81ae495bef35bab0a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
import pprint
from model.cs_rep import CS_Rep
from pytz import timezone, utc
from datetime import datetime, timedelta
from lib.calendar import Google_Calendar, google_ts_to_datetime, DEFAULT_DATE, LOS_ANGELES_TZ
from lib.conf import CFG
from model.db_session import DB_Session_Factory
from json import dumps, loads
from oauth2client.client import AccessTokenRefreshError
from apiclient.errors import HttpError
from pytz import timezone
import pdb
import re
target_weekday = 3 # Thursday
target_timerange = [12, 14]
target_calendar_id = "box.com_gk9hfef9s7fulrq0t3mftrvevk@group.calendar.google.com"
def get_ts_from_event(event, ts_key):
    return google_ts_to_datetime(event.get(ts_key, {}).get('dateTime', DEFAULT_DATE))

def main(argv):
    calendar = Google_Calendar.get_calendar()
    db_session = DB_Session_Factory.get_db_session()
    now = datetime.now()
    today_weekday = now.weekday()
    next_target_weekday = now + timedelta(days = (target_weekday - today_weekday + 6)%7 + 1)
    la_timezone = timezone(LOS_ANGELES_TZ)
    start_period_naive = datetime(next_target_weekday.year, next_target_weekday.month, next_target_weekday.day, target_timerange[0])
    start_period = la_timezone.localize(start_period_naive)
    end_period_naive = datetime(next_target_weekday.year, next_target_weekday.month, next_target_weekday.day, target_timerange[1])
    end_period = la_timezone.localize(end_period_naive)
    print str(start_period) + " - " + str(end_period)
    try:
        cs_rep_list = db_session.query(CS_Rep).order_by(CS_Rep.email)
        source_events = {}
        for cs_rep in cs_rep_list:
            current_period_start = start_period_naive
            current_period_end = start_period_naive + timedelta(hours = 1)
            print "Checking calendar for " + cs_rep.name
            source_events_request = calendar.service.events().list(calendarId = cs_rep.email, timeZone = LOS_ANGELES_TZ, timeMin = start_period.isoformat(), timeMax = end_period.isoformat(), orderBy = 'startTime', singleEvents = True, maxAttendees = 1000)
            while (source_events_request != None):
                response = source_events_request.execute(calendar.http)
                for event in response.get('items', []):
                    summary = event.get('summary', '')
                    start_time = get_ts_from_event(event, 'start')
                    end_time = get_ts_from_event(event, 'end')
                    if start_time < start_period_naive or end_time > end_period_naive or start_time < current_period_start or end_time - start_time > timedelta(hours=1):
                        continue
                    while current_period_end < end_time:
                        current_period_start = current_period_start + timedelta(hours = 1)
                        current_period_end = current_period_end + timedelta(hours = 1)
                    match = re.search("\*$", summary)
                    if match:
                        source_events[event['id']] = event
                        current_period_start = current_period_start + timedelta(hours = 1)
                        current_period_end = current_period_end + timedelta(hours = 1)
                    else:
                        print "no match: " + summary
                source_events_request = calendar.service.events().list_next(source_events_request, response)
        to_delete = []
        to_update = {}
        target_events_request = calendar.service.events().list(calendarId = target_calendar_id, timeZone = LOS_ANGELES_TZ, timeMin = start_period.isoformat(), timeMax = end_period.isoformat(), orderBy = 'startTime', singleEvents = True)
        while (target_events_request != None):
            response = target_events_request.execute(calendar.http)
            for event in response.get('items', []):
                source_event = source_events.get(event['id'], None)
                if source_event is None:
                    to_delete.append(event)
                else:
                    to_update[event['id']] = {'before' : event, 'after' : source_events[event['id']].copy()}
                    del source_events[event['id']]
            target_events_request = calendar.service.events().list_next(target_events_request, response)
        for event in to_delete:
            print "Removing: " + event.get('summary', "")
            calendar.service.events().delete(calendarId = target_calendar_id, eventId = event['id']).execute(calendar.http)
        for event_id in to_update:
            original_event = to_update[event_id]['before']
            original_start = get_ts_from_event(original_event, 'start')
            original_end = get_ts_from_event(original_event, 'end')
            after_event = to_update[event_id]['after']
            after_start = get_ts_from_event(after_event, 'start')
            after_end = get_ts_from_event(after_event, 'end')
            if original_start != after_start or original_end != after_end:
                original_event['start'] = after_event['start']
                original_event['end'] = after_event['end']
                print "Updating: " + original_event.get('summary', "")
                calendar.service.events().update(calendarId = target_calendar_id, eventId = event_id, body = original_event).execute(calendar.http)
        for event_id in source_events:
            source_event = source_events[event_id]
            print "Adding: " + source_event.get('summary', "")
            source_event['organizer'] = {'self' : True}
            source_event['location'] = '4440-3-4 The Marina'
            while True:
                try:
                    calendar.service.events().import_(calendarId = target_calendar_id, body = source_event).execute(calendar.http)
                    break
                except HttpError as e:
                    error_data = loads(e.content)
                    print error_data['error']['code']
                    if error_data.get('error', {'code' : None}).get('code', None) == 400:
                        source_event['sequence'] += 1
                    else:
                        sys.stderr.write("HTTP Error: " + e.content)
                        exit(1)
    except AccessTokenRefreshError:
        print ("The credentials have been revoked or expired, please re-run "
               "the application to re-authorize")

if __name__ == '__main__':
    main(sys.argv)
| 50.976563 | 255 | 0.630192 | 760 | 6,525 | 5.098684 | 0.215789 | 0.037161 | 0.016258 | 0.02529 | 0.365419 | 0.330323 | 0.273548 | 0.184258 | 0.184258 | 0.184258 | 0 | 0.007813 | 0.274176 | 6,525 | 127 | 256 | 51.377953 | 0.810389 | 0.004444 | 0 | 0.101852 | 0 | 0 | 0.068371 | 0.009239 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.138889 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
657d1df2ec7237f7821e8629ba8c0b4d674b5456 | 2,406 | py | Python | app/core/tests/test_admin.py | ido777/newish | 298a3d5babf411ba1eb777101eb6e8f70b9e495f | [
"MIT"
] | null | null | null | app/core/tests/test_admin.py | ido777/newish | 298a3d5babf411ba1eb777101eb6e8f70b9e495f | [
"MIT"
] | null | null | null | app/core/tests/test_admin.py | ido777/newish | 298a3d5babf411ba1eb777101eb6e8f70b9e495f | [
"MIT"
] | null | null | null | import pytest
from django.urls import reverse
@pytest.mark.skip(reason="WIP moving to pytest tests")
def test_with_authenticated_client(client, django_user_model):
    email = 'admin@somewhere.com'
    password = 'password123'
    admin_user = django_user_model.objects.create_superuser(
        email, password)
    client.force_login(user=admin_user)
    user = django_user_model.objects.create_user('user@somewhere.com', password='password123',
                                                 name='Test user full name')
    url = reverse('admin:core_user_changelist')
    res = client.get(url)

    assert user.name in res
    assert user.email in res


def test_user_page_change(client, django_user_model):
    """Test that the user edit page works"""
    email = 'admin@somewhere.com'
    password = 'password123'
    admin_user = django_user_model.objects.create_superuser(
        email, password)
    client.force_login(user=admin_user)
    user = django_user_model.objects.create_user('user@somewhere.com', password='password123',
                                                 name='Test user full name')
    url = reverse('admin:core_user_change', args=[user.id])
    res = client.get(url)

    assert res.status_code == 200


def test_create_user_page(client, django_user_model):
    """Test that the create user page works"""
    email = 'admin@somewhere.com'
    password = 'password123'
    admin_user = django_user_model.objects.create_superuser(
        email, password)
    client.force_login(user=admin_user)
    url = reverse('admin:core_user_add')
    res = client.get(url)

    assert res.status_code == 200


'''
@pytest.mark.django_db
def test_user_create():
    User.objects.create_user('user@somewhere.com', password='password123', name='Test user full name')
    assert User.objects.count() == 1


@pytest.mark.parametrize(
    'admin, user, client',
    get_user_model().objects.create_superuser(
        'admin@somewhere.com', password='password123'),
    get_user_model().objects.create_user(
        'user@somewhere.com', password='password123', name='Test user full name'),
    Client()
)
@pytest.mark.db
def test_users_listed(admin, user, client):
    """Test that users are listed on the user page """
    url = reverse('admin:core_user_changelist')
    res = client.get(url)

    assert user.name in res
    assert user.email in res
'''
| 31.246753 | 102 | 0.676226 | 310 | 2,406 | 5.051613 | 0.193548 | 0.057471 | 0.076628 | 0.158365 | 0.747765 | 0.686462 | 0.686462 | 0.645594 | 0.645594 | 0.59834 | 0 | 0.016342 | 0.211554 | 2,406 | 76 | 103 | 31.657895 | 0.809172 | 0.02951 | 0 | 0.685714 | 0 | 0 | 0.178961 | 0.030789 | 0 | 0 | 0 | 0 | 0.114286 | 1 | 0.085714 | false | 0.228571 | 0.057143 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6582ec795f9be718fba1c563c5c66e44261c6ce1 | 3,053 | py | Python | tests/bugs/core_4160_test.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | tests/bugs/core_4160_test.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | tests/bugs/core_4160_test.py | reevespaul/firebird-qa | 98f16f425aa9ab8ee63b86172f959d63a2d76f21 | [
"MIT"
] | null | null | null | #coding:utf-8
#
# id: bugs.core_4160
# title: Parameterized exception does not accept not ASCII characters as parameter
# decription:
# tracker_id: CORE-4160
# min_versions: ['3.0']
# versions: 3.0
# qmid: None
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = [('-At procedure.*', '')]
init_script_1 = """
create or alter procedure sp_alert(a_lang char(2), a_new_amount int) as begin end;
commit;
recreate exception ex_negative_remainder ' @1 (@2)';
commit;
"""
db_1 = db_factory(page_size=4096, charset='UTF8', sql_dialect=3, init=init_script_1)
test_script_1 = """
set term ^;
create or alter procedure sp_alert(a_lang char(2), a_new_amount int) as
begin
    if (a_lang = 'cz') then
        exception ex_negative_remainder using ('Czech: New Balance bude menší než nula', a_new_amount);
    else if (a_lang = 'pt') then
        exception ex_negative_remainder using ('Portuguese: New saldo será menor do que zero', a_new_amount);
    else if (a_lang = 'dm') then
        exception ex_negative_remainder using ('Danish: New Balance vil være mindre end nul', a_new_amount);
    else if (a_lang = 'gc') then
        exception ex_negative_remainder using ('Greek: Νέα ισορροπία θα είναι κάτω από το μηδέν', a_new_amount);
    else if (a_lang = 'fr') then
        exception ex_negative_remainder using ('French: Nouveau solde sera inférieur à zéro', a_new_amount);
    else
        exception ex_negative_remainder using ('Russian: Новый остаток будет меньше нуля', a_new_amount);
end
^
set term ;^
commit;
execute procedure sp_alert('cz', -1);
execute procedure sp_alert('pt', -2);
execute procedure sp_alert('dm', -3);
execute procedure sp_alert('gc', -4);
execute procedure sp_alert('fr', -5);
execute procedure sp_alert('jp', -6);
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stderr_1 = """
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- Czech: New Balance bude menší než nula (-1)
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- Portuguese: New saldo será menor do que zero (-2)
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- Danish: New Balance vil være mindre end nul (-3)
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- Greek: Νέα ισορροπία θα είναι κάτω από το μηδέν (-4)
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- French: Nouveau solde sera inférieur à zéro (-5)
Statement failed, SQLSTATE = HY000
exception 1
-EX_NEGATIVE_REMAINDER
- Russian: Новый остаток будет меньше нуля (-6)
"""
@pytest.mark.version('>=3.0')
def test_1(act_1: Action):
    act_1.expected_stderr = expected_stderr_1
    act_1.execute()
    assert act_1.clean_expected_stderr == act_1.clean_stderr
| 31.802083 | 113 | 0.681625 | 432 | 3,053 | 4.601852 | 0.314815 | 0.065392 | 0.124245 | 0.098592 | 0.595573 | 0.578974 | 0.45171 | 0.342052 | 0.270624 | 0.117706 | 0 | 0.033684 | 0.222077 | 3,053 | 95 | 114 | 32.136842 | 0.803368 | 0.083852 | 0 | 0.347826 | 0 | 0 | 0.814722 | 0.10018 | 0 | 0 | 0 | 0 | 0.014493 | 1 | 0.014493 | false | 0 | 0.028986 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6584791fe17e82f5787899fa97ce0db3fa35bfb0 | 1,535 | py | Python | uhelpers/tests/test_archive_helpers.py | Johannes-Sahlmann/uhelpers | 58f8e25ef8644ab5b24a5be76fd58a338a400912 | [
"BSD-3-Clause"
] | null | null | null | uhelpers/tests/test_archive_helpers.py | Johannes-Sahlmann/uhelpers | 58f8e25ef8644ab5b24a5be76fd58a338a400912 | [
"BSD-3-Clause"
] | 2 | 2020-12-21T18:08:48.000Z | 2021-01-26T01:24:39.000Z | uhelpers/tests/test_archive_helpers.py | Johannes-Sahlmann/uhelpers | 58f8e25ef8644ab5b24a5be76fd58a338a400912 | [
"BSD-3-Clause"
] | 5 | 2019-10-02T14:16:15.000Z | 2021-12-27T18:46:18.000Z | #!/usr/bin/env python
"""Tests for the jwcf hawki module.
Authors
-------
Johannes Sahlmann
"""
import netrc
import os
from astropy.table import Table
import pytest
from ..archive_helpers import get_exoplanet_orbit_database, gacs_list_query
local_dir = os.path.dirname(os.path.abspath(__file__))
ON_TRAVIS = os.environ.get('TRAVIS') == 'true'
@pytest.mark.skipif(ON_TRAVIS, reason='timeout issue.')
def test_eod():
    """Test the access to the exoplanet orbit database."""
    catalog = get_exoplanet_orbit_database(local_dir, verbose=False)
    assert len(catalog) > 100


@pytest.mark.skipif(ON_TRAVIS, reason='Requires access to .netrc file.')
def test_gacs_list_query():
    # print('test gacs list query')
    # Define which host in the .netrc file to use
    HOST = 'http://gea.esac.esa.int'

    # Read from the .netrc file in your home directory
    secrets = netrc.netrc()
    username, account, password = secrets.authenticators(HOST)

    out_dir = os.path.dirname(__file__)
    T = Table()
    id_str_input_table = 'ID_HIP'
    T[id_str_input_table] = [1, 2, 3, 4, 5, 6, 7]
    gacs_table_name = 'tgas_source'
    id_str_gacs_table = 'hip'
    input_table_name = 'hip_star_list'
    input_table = os.path.join(out_dir, 'hip_star_list.vot')
    T[[id_str_input_table]].write(input_table, format='votable', overwrite=1)

    T_out = gacs_list_query(username, password, out_dir, input_table, input_table_name, gacs_table_name,
                            id_str_gacs_table, id_str_input_table)
    T_out.pprint()
| 29.519231 | 104 | 0.70684 | 230 | 1,535 | 4.421739 | 0.434783 | 0.088496 | 0.051131 | 0.058997 | 0.129794 | 0.058997 | 0 | 0 | 0 | 0 | 0 | 0.008703 | 0.176547 | 1,535 | 51 | 105 | 30.098039 | 0.795886 | 0.171987 | 0 | 0 | 0 | 0 | 0.10757 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 1 | 0.071429 | false | 0.071429 | 0.178571 | 0 | 0.25 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6584f2d684176e56a028fa83fba17e1495411607 | 1,264 | py | Python | TP3/test.py | paul-arthurthiery/IAMethodesAlgos | f49fe17c278424588df263ab0e6778721cbc4394 | [
"MIT"
] | null | null | null | TP3/test.py | paul-arthurthiery/IAMethodesAlgos | f49fe17c278424588df263ab0e6778721cbc4394 | [
"MIT"
] | null | null | null | TP3/test.py | paul-arthurthiery/IAMethodesAlgos | f49fe17c278424588df263ab0e6778721cbc4394 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Dec 2 14:33:13 2018
@author: Nathan
"""
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# load dataset
data, target = load_iris().data, load_iris().target

# split data in train/test sets
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.33, random_state=42)
# standardize columns using normal distribution
# fit on X_train and not on X_test to avoid Data Leakage
s = StandardScaler()
X_train = s.fit_transform(X_train)
X_test = s.transform(X_test)
from SoftmaxClassifier import SoftmaxClassifier
# import the custom classifier
cl = SoftmaxClassifier()
# train on X_train and not on X_test to avoid overfitting
train_p = cl.fit_predict(X_train,y_train)
test_p = cl.predict(X_test)
from sklearn.metrics import precision_recall_fscore_support
# display precision, recall and f1-score on train/test set
print("train : "+ str(precision_recall_fscore_support(y_train, train_p,average = "macro")))
print("test : "+ str(precision_recall_fscore_support(y_test, test_p,average = "macro")))
import matplotlib.pyplot as plt
plt.plot(cl.losses_)
plt.show() | 26.333333 | 99 | 0.77769 | 202 | 1,264 | 4.653465 | 0.425743 | 0.038298 | 0.067021 | 0.089362 | 0.12766 | 0.12766 | 0.059574 | 0.059574 | 0.059574 | 0.059574 | 0 | 0.01721 | 0.126582 | 1,264 | 48 | 100 | 26.333333 | 0.834239 | 0.302215 | 0 | 0 | 0 | 0 | 0.028835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.368421 | 0 | 0.368421 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
6587d36784219790a446003c11e770c4bed4d07f | 8,409 | py | Python | ratin_cpython/common/common.py | openearth/eo-rivers | 752f90aed92fa862a2c107bb58bcae298c1bf313 | [
"MIT"
] | 2 | 2018-10-19T03:20:08.000Z | 2020-05-06T22:56:20.000Z | ratin_cpython/common/common.py | openearth/eo-river | 752f90aed92fa862a2c107bb58bcae298c1bf313 | [
"MIT"
] | 11 | 2018-06-05T09:41:15.000Z | 2021-11-15T17:47:27.000Z | ratin_cpython/common/common.py | openearth/eo-rivers | 752f90aed92fa862a2c107bb58bcae298c1bf313 | [
"MIT"
] | 2 | 2020-10-15T12:29:36.000Z | 2021-12-13T22:53:58.000Z | import numpy as np
from math import factorial
import scipy.signal
#Gaussian filter with convolution - faster and easier to handle
## Degree is equal to the number of values left and right of the central value
## of the gaussian window:
## ie degree=3 yields a window of length 7
## It uses normalized weights (sum of weights = 1)
## Based on:
## http://en.wikipedia.org/wiki/Gaussian_filter
## http://en.wikipedia.org/wiki/Standard_deviation
## http://en.wikipedia.org/wiki/Window_function#Gaussian_window
def smooth(array_in, degree=5):
'''
Gaussian smooth line using a window of specified degree (=half-length)
'''
degree = int(degree) #make sure it is of integer type
n = 2*degree+1
if degree <= 0:
return array_in
if type(array_in) == type(np.array([])) and len(array_in.shape)>1:
array_in = array_in.flatten()
array_in = list(array_in)
# If degree is larger than twice the original data, make it smaller
    if len(array_in) < n:
        degree = len(array_in) // 2  # integer division keeps degree an int
        n = 2*degree+1
        print("Changed smoothing degree to:", degree)
#extend the array's initial and ending values with equal ones, accordingly
array_in = np.array( [array_in[0]]*degree + array_in + [array_in[-1]]*degree )
#TODO: These parameters are subject to change - depends on the implementation
# Gaussian parameters:
x = np.linspace(-degree,degree,n)
sigma = np.sqrt( sum( (x-np.mean(x))**2 ) / n )
alpha = 1.0 / (2.0 * sigma**2)
weight = np.sqrt(alpha/np.pi) * np.exp(-alpha*x**2 ) #gaussian
weights = weight / sum(weight) #normalize
return np.convolve(array_in, weights, 'valid')
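# Illustrative usage sketch (not part of the original module). Because the
# input is padded with its edge values before the 'valid' convolution, the
# output has the same length as the input:
#
#   >>> noisy = np.sin(np.linspace(0, 2*np.pi, 100)) + 0.1*np.random.randn(100)
#   >>> smoothed = smooth(noisy, degree=5)
#   >>> len(smoothed) == len(noisy)
#   True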
#TODO: revise
#Gaussian 2D smoothing, anisotropic
## http://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm
def smooth2D(matrix_in, fill, degree=5, sigma=2.0, a=1.0, b=1.0):
'''
Gaussian smooth matrix using a window of specified degree
'''
kx, ky = np.arange(-degree,degree+1.0),np.arange(-degree,degree+1.0)
kernel = np.zeros([kx.shape[0],ky.shape[0]])
for i in range(len(kx)):
for j in range(len(ky)):
kernel[i,j] = 1./(2*np.pi*sigma**2) * np.exp( -(b*kx[i]**2+a*ky[j]**2)/(2*sigma**2) )
kernel /= kernel.sum()
matrix_out = scipy.signal.convolve2d(matrix_in, kernel, mode='same', fillvalue=fill)
return matrix_out
def get_direction(x, y, smoothdegree=0, units='degrees'):
'''
Return direction (cartesian reference) of point
The direction of each point is calculated as the mean of directions
on both sides
'''
#Calculate direction in RADIANS
direction = np.array([])
#first point: Can determine direction only based on next point
direction = np.append(direction,np.angle((x[1]-x[0])+(y[1]-y[0])*1j))
for j in range(1, len(x)-1):
# Base direction on points before and after current point
direction = np.append(direction,np.angle((x[j+1]-x[j-1])+(y[j+1]-y[j-1])*1j))
#last point: Can determine direction only based on previous point
direction = np.append(direction,np.angle((x[-1]-x[-2])+(y[-1]-y[-2])*1j))
#fix 'jumps' in data
direction = fix_angle_vector(direction)
#Smoothing - do not perform if input degree is equal/less than 0.0
if smoothdegree <= 0.0:
pass
else:
direction = smooth(direction, degree=smoothdegree)
#TODO: Review! Do we need to confine it?
#Limit the representation in the space of [0,2*pi]
gaps = np.where(np.abs(direction) > np.radians(360.0))[0]
direction[gaps] -= np.radians(360.0)
if units=='radians':
pass
elif units == 'degrees':
direction = np.degrees(direction)
return direction
def distance(p1, p2):
"""
Distance in between two points (given as tuples)
"""
dist = np.sqrt( (p2[0]-p1[0])**2 + (p2[1]-p1[1])**2 )
return dist
def distance_matrix(x0, y0, x1, y1, aniso):
"""
Returns distances between points in a matrix formation.
An anisotropy factor is set as input. If >1, the points in
x direction shift closer. If <1, the points in x direction
shift further apart. If =1, normal distances are computed.
"""
aniso = float(aniso)
x0 = np.array(x0).flatten()
y0 = np.array(y0).flatten()
x1 = np.array(x1).flatten()
y1 = np.array(y1).flatten()
#transpose observations
vertical = np.vstack((x0, y0)).T
horizontal = np.vstack((x1, y1)).T
# Make a distance matrix between pairwise observations
# Note: from <http://stackoverflow.com/questions/1871536>
    if aniso <= 0.0:
        print("Warning: Anisotropy factor cannot be 0 or negative; set to 1.0.")
        aniso = 1.0
d0 = np.subtract.outer(vertical[:,0], horizontal[:,0]) * (1./aniso)
d1 = np.subtract.outer(vertical[:,1], horizontal[:,1])
return np.hypot(d0, d1)
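# Illustrative sketch of the anisotropy factor (hypothetical values, not part
# of the original module): with aniso=2.0 the x-offsets are halved before
# np.hypot, so points separated only in x appear twice as close:
#
#   >>> distance_matrix([0.], [0.], [4.], [0.], aniso=1.0)
#   array([[4.]])
#   >>> distance_matrix([0.], [0.], [4.], [0.], aniso=2.0)
#   array([[2.]])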
#retrieve s values streamwise
def get_chainage(x, y):
"""
Get chain distances for a set of continuous points
"""
s = np.array([0.0]) #start
for j in range(1,len(x)):
s = np.append( s, s[j-1] + distance([x[j-1],y[j-1]], [x[j],y[j]]) )
return s
def to_sn(Gx, Gy):
"""
Transform (Gx,Gy) Cartesian coordinates to flow-oriented ones (Gs,Gn),
where Gx and Gy stand for gridded x and gridded y, and Gs and Gn are their
transformed counterparts.
Gx,Gy,Gs,Gn are all numpy arrays in the form of matrices.
"""
rows, cols = Gx.shape
#find s-direction coordinates
midrow = int(rows/2)
c_x = Gx[midrow,:]
c_y = Gy[midrow,:]
Salong = get_chainage(c_x,c_y)
#all s-direction points have the same spacing
Gs = np.tile(Salong, (rows,1)) #"stretch" all longitudinals
#find n-direction coordinates
Gn = np.zeros([rows,cols])
for j in range(cols): #for each column
Gn[midrow::-1,j] = -get_chainage(Gx[midrow::-1,j],Gy[midrow::-1,j])
Gn[midrow:,j] = get_chainage(Gx[midrow:,j],Gy[midrow:,j])
return Gs, Gn
def to_grid(data, rows, cols):
"""
Transform a list of data to a grid-like (matrix) form of specified shape
"""
data = np.array(data).flatten()
return data.reshape(rows,cols)
# 'Brute-force' approach, but works correctly
def fix_angle_vector(theta):
'''
Fixes a vector of angles (in radians) that show 'jumps' because of changes
between 360 and 0 degrees
'''
thetadiff = np.diff(theta)
gaps = np.where(np.abs(thetadiff) > np.radians(180))[0]
while len(gaps)>0:
gap = gaps[0]
if thetadiff[gap]<0:
theta[gap+1:] += np.radians(360)
else:
theta[gap+1:] -= np.radians(360)
thetadiff = np.diff(theta)
gaps = np.where(np.abs(thetadiff) > np.radians(180))[0]
return theta
def get_parallel_line(x, y, direction, distance, units = 'degrees'):
'''
Create parallel lines for representation of MAT path.
'''
if units == 'degrees':
direction = np.radians(direction)
perpendicular_direction = np.array(direction)+0.5*np.pi
xn = np.array(x)+np.array(distance)*np.array(np.cos(perpendicular_direction))
yn = np.array(y)+np.array(distance)*np.array(np.sin(perpendicular_direction))
return xn, yn
#http://wiki.scipy.org/Cookbook/SavitzkyGolay
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
    try:
        window_size = abs(int(window_size))
        order = abs(int(order))
    except ValueError as msg:
        raise ValueError("window_size and order have to be of type int", msg)
if window_size % 2 != 1 or window_size < 1:
raise TypeError("window_size size must be a positive odd number")
if window_size < order + 2:
raise TypeError("window_size is too small for the polynomials order")
order_range = range(order+1)
half_window = (window_size -1) // 2
# precompute coefficients
b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
# pad the signal at the extremes with
# values taken from the signal itself
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
return np.convolve( m[::-1], y, mode='valid')
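# Illustrative usage sketch (not part of the original module): away from the
# padded edges, a Savitzky-Golay filter reproduces polynomials up to the chosen
# order exactly, since the local least-squares fit is exact for them:
#
#   >>> t = np.linspace(-2, 2, 101)
#   >>> y = t**3 - t
#   >>> y_sg = savitzky_golay(y, window_size=11, order=3)
#   >>> np.allclose(y[10:-10], y_sg[10:-10])
#   True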
| 35.331933 | 97 | 0.631704 | 1,292 | 8,409 | 4.063467 | 0.26548 | 0.021333 | 0.004571 | 0.008381 | 0.139048 | 0.115238 | 0.076381 | 0.05619 | 0.037714 | 0.037714 | 0 | 0.028968 | 0.228208 | 8,409 | 238 | 98 | 35.331933 | 0.779969 | 0.199905 | 0 | 0.081301 | 0 | 0 | 0.05148 | 0 | 0 | 0 | 0 | 0.008403 | 0 | 0 | null | null | 0.01626 | 0.02439 | null | null | 0.01626 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
658c87d29e07d35154d2bbcefbc473d8ad660860 | 1,152 | py | Python | renovation_core_graphql/auth/otp.py | e-lobo/renovation_core_graphql | 31e464e00badc308bf03c70364331b08ad9d1b1d | [
"MIT"
] | 1 | 2021-12-15T06:05:06.000Z | 2021-12-15T06:05:06.000Z | renovation_core_graphql/auth/otp.py | e-lobo/renovation_core_graphql | 31e464e00badc308bf03c70364331b08ad9d1b1d | [
"MIT"
] | 5 | 2021-06-09T19:00:56.000Z | 2022-01-23T09:51:13.000Z | renovation_core_graphql/auth/otp.py | e-lobo/renovation_core_graphql | 31e464e00badc308bf03c70364331b08ad9d1b1d | [
"MIT"
] | 1 | 2021-06-01T05:22:41.000Z | 2021-06-01T05:22:41.000Z | from graphql import GraphQLResolveInfo
import frappe
from renovation_core.utils.auth import generate_otp, verify_otp
VERIFY_OTP_STATUS_MAP = {
"no_linked_user": "NO_LINKED_USER",
"no_otp_for_mobile": "NO_OTP_GENERATED",
"invalid_otp": "INVALID_OTP",
"verified": "VERIFIED",
}
def generate_otp_resolver(obj, info: GraphQLResolveInfo, **kwargs):
r = generate_otp(**kwargs)
r.status = "SUCCESS" if r.status == "success" else "FAILED"
return r
def verify_otp_resolver(obj, info: GraphQLResolveInfo, **kwargs):
kwargs["login_to_user"] = 1 if kwargs.get("login_to_user") else 0
if kwargs["login_to_user"] and kwargs["use_jwt"]:
frappe.local.form_dict.use_jwt = 1
del kwargs["use_jwt"]
status_dict = verify_otp(**kwargs)
status_dict.update(frappe.local.response)
if status_dict.get("user"):
status_dict["user"] = frappe._dict(doctype="User", name=status_dict["user"])
status = status_dict.get("status")
if status in VERIFY_OTP_STATUS_MAP:
status_dict.status = VERIFY_OTP_STATUS_MAP[status]
else:
status_dict.status = "FAILED"
return status_dict
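# Illustrative wiring sketch (assumption -- the field names below are
# hypothetical, not taken from this repo): these resolvers are meant to be
# bound to GraphQL mutation fields, e.g.
#   {"generateOTP": generate_otp_resolver, "verifyOTP": verify_otp_resolver}
# so that **kwargs carries the field arguments (user/mobile, otp, login_to_user, ...).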
| 29.538462 | 84 | 0.703993 | 158 | 1,152 | 4.822785 | 0.316456 | 0.11811 | 0.059055 | 0.070866 | 0.173228 | 0.110236 | 0 | 0 | 0 | 0 | 0 | 0.003161 | 0.176215 | 1,152 | 38 | 85 | 30.315789 | 0.799789 | 0 | 0 | 0 | 1 | 0 | 0.173611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.107143 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65923c69268087aca7de1d2a3dc4a13663164289 | 5,813 | py | Python | imutils/big/make_shards.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | imutils/big/make_shards.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | imutils/big/make_shards.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | """
imutils/big/make_shards.py
Generate one or more webdataset-compatible tar archive shards from an image classification dataset.
Based on script: https://github.com/tmbdev-archive/webdataset-examples/blob/7f56e9a8b978254c06aa0a98572a1331968b0eb3/makeshards.py
Added on: Sunday March 6th, 2022
Example usage:
python "/media/data/jacob/GitHub/image-utils/imutils/big/make_shards.py" \
--subsets=train,val,test \
--maxsize='1e9' \
--maxcount=50000 \
--shard_dir="/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/webdataset" \
--catalog_dir="/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/catalog" \
--debug
"""
import sys
import os
import os.path
import random
import argparse
from torchvision import datasets
import webdataset as wds
import numpy as np
import os
from typing import Optional, Tuple, Any, Dict
from tqdm import trange, tqdm
import tarfile
tarfile.DEFAULT_FORMAT = tarfile.GNU_FORMAT
import webdataset as wds
# from imutils.big.datamodule import Herbarium2022DataModule, Herbarium2022Dataset
from imutils.ml.data.datamodule import Herbarium2022DataModule, Herbarium2022Dataset
def read_file_binary(fname):
"Read a binary file from disk."
with open(fname, "rb") as stream:
return stream.read()
all_keys = set()
def prepare_sample(dataset, index, subset: str="train", filekey: bool=False) -> Dict[str, Any]:
image_binary, label, metadata = dataset[index]
key = metadata["catalog_number"]
assert key not in all_keys
all_keys.add(key)
xkey = key if filekey else "%07d" % index
sample = {"__key__": xkey,
"image.jpg": image_binary}
if subset != "test":
assert label == dataset.targets[index]
sample["label.cls"] = int(label)
return sample
def write_dataset(catalog_dir: Optional[str]=None,
shard_dir: Optional[str]=None,
subset="train",
maxsize=1e9,
maxcount=100000,
limit_num_samples: Optional[int]=np.inf,
filekey: bool=False,
dataset=None):
if dataset is None:
datamodule = Herbarium2022DataModule(catalog_dir=catalog_dir,
num_workers=4,
image_reader=read_file_binary,
remove_transforms=True)
datamodule.setup()
dataset = datamodule.get_dataset(subset=subset)
num_samples = len(dataset)
print(f"With subset={subset}, Total num_samples: {num_samples}")
if limit_num_samples < num_samples:
num_samples = limit_num_samples
print(f"Limiting this run to num_samples: {num_samples}")
indices = list(range(num_samples))
os.makedirs(shard_dir, exist_ok=True)
pattern = os.path.join(shard_dir, f"herbarium_2022-{subset}-%06d.tar")
with wds.ShardWriter(pattern, maxsize=maxsize, maxcount=maxcount) as sink:
for i in tqdm(indices, desc=f"idx(Total={num_samples})"):
sample = prepare_sample(dataset, index=i, subset=subset, filekey=filekey)
sink.write(sample)
return dataset, indices
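# Illustrative read-back sketch (assumption: this helper is not part of the
# original script). Shards written above can be consumed with the standard
# webdataset pipeline; the filename pattern mirrors the template used in
# write_dataset, and the shard index range {000000..000009} is hypothetical.
def example_load_shards(shard_dir: str, subset: str = "train"):
    """Minimal sketch of reading shards back (train/val shards carry label.cls)."""
    pattern = os.path.join(shard_dir, f"herbarium_2022-{subset}-{{000000..000009}}.tar")
    dataset = wds.WebDataset(pattern).decode("pil").to_tuple("image.jpg", "label.cls")
    return dataset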
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser("""Generate sharded dataset from supervised image dataset.""")
parser.add_argument("--subsets", default="train,val,test", help="which subsets to write")
parser.add_argument(
"--filekey", action="store_true", help="use file as key (default: index)"
)
parser.add_argument("--maxsize", type=float, default=1e9)
parser.add_argument("--maxcount", type=float, default=100000)
parser.add_argument(
"--shard_dir",
default="/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/webdataset",
help="directory where shards are written"
)
parser.add_argument(
"--catalog_dir",
default="/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/catalog",
help="directory containing csv versions of the original train & test metadata json files from herbarium 2022",
)
parser.add_argument("--debug", action="store_true", default=False,
help="Provide this boolean flag to produce a debugging shard dataset of only a maximum of 200 samples per data subset. [TODO] Switch to temp directories when this flag is passed.")
args = parser.parse_args()
return args
def main(args):
# args = parse_args()
assert args.maxsize > 10000000 # Shards must be a minimum of 10+ MB
assert args.maxcount < 1000000 # Shards must contain a maximum of 1,000,000 samples each
limit_num_samples = 200 if args.debug else np.inf
# if not os.path.isdir(os.path.join(args.data, "train")):
# print(f"{args.data}: should be directory containing ImageNet", file=sys.stderr)
# print(f"suitable as argument for torchvision.datasets.ImageNet(...)", file=sys.stderr)
# sys.exit(1)
# if not os.path.isdir(os.path.join(args.shards, ".")):
# print(f"{args.shards}: should be a writable destination directory for shards", file=sys.stderr)
# sys.exit(1)
subsets = args.subsets.split(",")
for subset in tqdm(subsets, leave=True, desc=f"Processing {len(subsets)} subsets"):
# print("# subset", subset)
dataset, indices = write_dataset(catalog_dir=args.catalog_dir,
shard_dir=args.shard_dir,
subset=subset,
maxsize=args.maxsize,
maxcount=args.maxcount,
limit_num_samples=limit_num_samples,
filekey=args.filekey)
CATALOG_DIR = "/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/catalog"
# SHARD_DIR = "/media/data_cifs/projects/prj_fossils/users/jacob/data/herbarium_2022/webdataset"
if __name__ == "__main__":
args = parse_args()
main(args)
written_files = os.listdir(args.shard_dir)
files_per_subset = {"train":[],
"val":[],
"test":[]}
for subset,v in files_per_subset.items():
files_per_subset[subset] = len([f for f in written_files if subset in f])
from rich import print as pp
print(f"SUCCESS! TARGET SHARD DIR CONTAINS THE FOLLOWING:")
pp(files_per_subset)
| 31.085561 | 188 | 0.732152 | 809 | 5,813 | 5.118665 | 0.295426 | 0.036223 | 0.028737 | 0.030427 | 0.157208 | 0.134509 | 0.124366 | 0.124366 | 0.124366 | 0.109877 | 0 | 0.029311 | 0.148976 | 5,813 | 186 | 189 | 31.252688 | 0.807762 | 0.244624 | 0 | 0.064815 | 1 | 0.009259 | 0.249547 | 0.06573 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.046296 | false | 0.009259 | 0.138889 | 0 | 0.222222 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6594ce65379700398a3a74c57669881f0dce9a22 | 1,182 | py | Python | linear.py | AliRzvn/HW1 | d6420c1656800372aae78e18327612df540b674e | [
"MIT"
] | null | null | null | linear.py | AliRzvn/HW1 | d6420c1656800372aae78e18327612df540b674e | [
"MIT"
] | null | null | null | linear.py | AliRzvn/HW1 | d6420c1656800372aae78e18327612df540b674e | [
"MIT"
] | null | null | null | import numpy as np
from module import Module
class Linear(Module):
def __init__(self, name, input_dim, output_dim, l2_coef=.0):
super(Linear, self).__init__(name)
self.l2_coef = l2_coef # coefficient of l2 regularization.
self.W = np.random.randn(input_dim, output_dim) # weights of the layer.
self.b = np.random.randn(output_dim, ) # biases of the layer.
self.dW = None # gradients of loss w.r.t. the weights.
self.db = None # gradients of loss w.r.t. the biases.
def forward(self, x, **kwargs):
"""
x: input array.
out: output of Linear module for input x.
**Save whatever you need for backward pass in self.cache.
"""
out = None
# todo: implement the forward propagation for Linear module.
return out
def backward(self, dout):
"""
dout: gradients of Loss w.r.t. this layer's output.
dx: gradients of Loss w.r.t. this layer's input.
"""
dx = None
# todo: implement the backward propagation for Linear module.
# don't forget to update self.dW and self.db.
return dx
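# One possible reference sketch for the TODOs above (illustrative only; it
# assumes row-major batches x of shape [N, input_dim] and the convention that
# l2 regularization adds l2_coef/2 * ||W||^2 to the loss):
#
#   forward:  self.cache = x
#             out = x.dot(self.W) + self.b
#   backward: x = self.cache
#             self.dW = x.T.dot(dout) + self.l2_coef * self.W
#             self.db = dout.sum(axis=0)
#             dx = dout.dot(self.W.T)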
| 31.945946 | 80 | 0.600677 | 168 | 1,182 | 4.130952 | 0.380952 | 0.069164 | 0.086455 | 0.092219 | 0.152738 | 0.152738 | 0.152738 | 0.152738 | 0.080692 | 0 | 0 | 0.006098 | 0.306261 | 1,182 | 36 | 81 | 32.833333 | 0.840244 | 0.450085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 1 | 0.1875 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6598ac2ebf4cb397f3e2b86a4a598e93fd0dbafd | 659 | py | Python | pages/login_page.py | 0verchenko/PageObject | b50ec33b6f511680e5be14b16c379df825b87285 | [
"Apache-2.0"
] | null | null | null | pages/login_page.py | 0verchenko/PageObject | b50ec33b6f511680e5be14b16c379df825b87285 | [
"Apache-2.0"
] | 1 | 2021-06-02T00:14:07.000Z | 2021-06-02T00:14:07.000Z | pages/login_page.py | 0verchenko/PageObject | b50ec33b6f511680e5be14b16c379df825b87285 | [
"Apache-2.0"
] | null | null | null | from .base_page import BasePage
from .locators import LoginPageLocators
class LoginPage(BasePage):
def should_be_login_page(self):
self.should_be_login_url()
self.should_be_login_form()
self.should_be_register_form()
def should_be_login_url(self):
assert "login" in self.browser.current_url
def should_be_login_form(self):
login_form = self.browser.find_element(*LoginPageLocators.LOGIN_FORM)
assert login_form.is_displayed()
def should_be_register_form(self):
register_form = self.browser.find_element(*LoginPageLocators.REGISTER_FORM)
assert register_form.is_displayed()
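# Illustrative usage sketch (hypothetical test, not part of the original file;
# the constructor and open() are assumed to come from BasePage):
#
#   def test_guest_should_see_login_page(browser):
#       page = LoginPage(browser, "http://selenium1py.pythonanywhere.com/login/")
#       page.open()
#       page.should_be_login_page()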
| 29.954545 | 83 | 0.740516 | 86 | 659 | 5.290698 | 0.290698 | 0.123077 | 0.142857 | 0.105495 | 0.369231 | 0.189011 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183612 | 659 | 21 | 84 | 31.380952 | 0.845725 | 0 | 0 | 0 | 0 | 0 | 0.007587 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.266667 | false | 0 | 0.133333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
659af9491af7136fafb0016f0624386d06bcfa4b | 3,280 | py | Python | demo/demo/settings.py | ikcam/django-boilerplate | d8253665d74f0f18cf9a5fd46772598a60f20c5c | [
"Apache-2.0"
] | 5 | 2016-10-02T04:57:10.000Z | 2019-08-12T22:22:39.000Z | demo/demo/settings.py | ikcam/django-boilerplate | d8253665d74f0f18cf9a5fd46772598a60f20c5c | [
"Apache-2.0"
] | null | null | null | demo/demo/settings.py | ikcam/django-boilerplate | d8253665d74f0f18cf9a5fd46772598a60f20c5c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Django settings for demo project.
For more information on this file, see
https://docs.djangoproject.com/en/1.9/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.9/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
from django.core.urlresolvers import reverse_lazy
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
DEBUG = True
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '__SHHH_ITS_A_SECRET__'
ALLOWED_HOSTS = []
ADMINS = []
MANAGERS = []
INTERNAL_IPS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
# To make it look nice
'bootstrap3',
# Boilerplate
'boilerplate',
# Apps
'account',
'store',
)
MIDDLEWARE = (
'django.middleware.common.BrokenLinkEmailsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'demo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates/'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.media',
'django.contrib.messages.context_processors.messages',
],
},
},
]
TEMPLATE_LOADERS = [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader'
]
LOCALE_PATHS = [
os.path.join(BASE_DIR, 'locale'),
]
WSGI_APPLICATION = 'demo.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.9/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Internationalization
# https://docs.djangoproject.com/en/1.9/topics/i18n/
LANGUAGE_CODE = 'en'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
MEDIA_URL = '/media/'
STATIC_URL = '/static/'
LOGIN_URL = reverse_lazy('account:login')
| 24.661654 | 71 | 0.690549 | 365 | 3,280 | 6.076712 | 0.421918 | 0.076195 | 0.027051 | 0.037872 | 0.143372 | 0.098287 | 0.079351 | 0.051849 | 0.036069 | 0 | 0 | 0.007383 | 0.174085 | 3,280 | 132 | 72 | 24.848485 | 0.81137 | 0.225 | 0 | 0.025641 | 0 | 0 | 0.511905 | 0.426984 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65a9792b2934e3a0bc3ead9a9eef72f6382f49c5 | 3,454 | py | Python | Important_data/Thesis figure scripts/six_sigmoids.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | 5 | 2018-09-06T12:52:12.000Z | 2020-05-09T01:40:12.000Z | Important_data/Thesis figure scripts/six_sigmoids.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | null | null | null | Important_data/Thesis figure scripts/six_sigmoids.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | 4 | 2018-02-06T08:42:06.000Z | 2019-04-16T11:23:06.000Z | from matplotlib import rc
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
rc('legend',**{'fontsize':11}) # Font size for legend
from mpl_toolkits.axes_grid.axislines import SubplotZero
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 2.5
import matplotlib.pyplot as plt
from math import erf,sqrt
import numpy as np
xmin = -4; xmax = 4
x = np.linspace(xmin,xmax,1001)
y1 = lambda x: np.array([erf(0.5*i*sqrt(np.pi)) for i in x])
y2 = lambda x: np.tanh(x)
y3 = lambda x: 4./np.pi*np.arctan(np.tanh(np.pi*x/4.))
y4 = lambda x: x/np.sqrt(1.+x**2)
y5 = lambda x: 2.0/np.pi*np.arctan(np.pi/2.0 * x)
y6 = lambda x: x/(1+np.abs(x))
fig = plt.figure(1)
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
plt.subplots_adjust(left = 0.125, # the left side of the subplots of the figure
right = 0.9, # the right side of the subplots of the figure
bottom = 0.1, # the bottom of the subplots of the figure
top = 0.9, # the top of the subplots of the figure
wspace = 0., # the amount of width reserved for blank space between subplots
hspace = 0.) # the amount of height reserved for white space between subplots
plt.setp(ax, xticks=[-3,-2,-1,1,2,3], xticklabels=[" "," "," "," "," "," ",], yticks=[-1,1], yticklabels=[" "," ",])
# Make coordinate axes with "arrows"
for direction in ["xzero", "yzero"]:
ax.axis[direction].set_visible(True)
# Coordinate axes with arrow (guess what, these are the arrows)
plt.arrow(2.65, 0.0, 0.5, 0.0, color="k", clip_on=False, head_length=0.06, head_width=0.08)
plt.arrow(0.0, 1.03, 0.0, 0.1, color="k", clip_on=False, head_length=0.06, head_width=0.08)
# Remove edge around the entire plot
for direction in ["left", "right", "bottom", "top"]:
ax.axis[direction].set_visible(False)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
colormap = plt.cm.Spectral #nipy_spectral # Other possible colormaps: Set1, Accent, nipy_spectral, Paired
colors = [colormap(i) for i in np.linspace(0, 1, 6)]
plt.title("Six sigmoid functions", fontsize=18, y=1.08)
leg_list = [r"$\mathrm{erf}\left(\frac{\sqrt{\pi}}{2}x \right)$",
r"$\tanh(x)$",
r"$\frac{2}{\pi}\mathrm{gd}\left( \frac{\pi}{2}x \right)$",
r"$x\left(1+x^2\right)^{-\frac{1}{2}}$",
r"$\frac{2}{\pi}\mathrm{arctan}\left( \frac{\pi}{2}x \right)$",
r"$x\left(1+|x|\right)^{-1}$"]
for i in range(1,7):
s = "ax.plot(x,y%s(x),color=colors[i-1])" %(str(i))
eval(s)
ax.legend(leg_list,loc="best", ncol=2, fancybox=True) # title="Legend", fontsize=12
# ax.grid(True, which='both')
ax.set_aspect('equal')
ax.set_xlim([-3.1,3.1])
ax.set_ylim([-1.1,1.1])
ax.annotate('1', xy=(0.08, 1-0.02))
ax.annotate('0', xy=(0.08, -0.2))
ax.annotate('-1', xy=(0.08, -1-0.03))
for i in [-3,-2,-1,1,2,3]:
ax.annotate('%s' %str(i), xy=(i-0.03, -0.2))
maybe = input("\nUpdate figure directly in master thesis?\nEnter 'YES' (anything else = ONLY show to screen) ")
if maybe == "YES": # Only save to disc if need to be updated
    filenameWithPath = "/Users/haakonvt/Dropbox/uio/master/latex-master/Illustrations/six_sigmoids.pdf"
    plt.savefig(filenameWithPath, bbox_inches='tight') #, pad_inches=0.2)
    print('Saved over previous file in location:\n "%s"' % filenameWithPath)
else:
    print('Figure was only shown on screen.')
| 40.635294 | 116 | 0.630573 | 600 | 3,454 | 3.591667 | 0.348333 | 0.018561 | 0.011137 | 0.027842 | 0.186543 | 0.132715 | 0.104872 | 0.078886 | 0.062181 | 0.062181 | 0 | 0.052632 | 0.17487 | 3,454 | 84 | 117 | 41.119048 | 0.703509 | 0.183555 | 0 | 0 | 0 | 0.046154 | 0.244381 | 0.10025 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.092308 | null | null | 0.030769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65aa73e15457005cd520549df842b9dc33211c7c | 3,820 | py | Python | src/web/modules/search/controllers/search/control.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | 2 | 2017-04-30T07:29:23.000Z | 2017-04-30T07:36:27.000Z | src/web/modules/search/controllers/search/control.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | src/web/modules/search/controllers/search/control.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | [
"MIT"
] | null | null | null | import json
import urllib2
import traceback
import cgi
from flask import render_template, request
import web.util.tools as tools
import lib.http as http
import lib.es as es
from web import app
from lib.read import readfile
def get(p):
    host = p['c']['host']
    index = p['c']['index']
# debug
p['debug'] = tools.get('debug', '')
# search keyword
p["q"] = tools.get('q', p['c']['query'])
# pagination
p["from"] = int(tools.get('from', 0))
p["size"] = int(tools.get('size', p['c']['page_size']))
# sort
p['sort_field'] = tools.get('sort_field', p['c']['sort_field'])
p['sort_dir'] = tools.get('sort_dir', p['c']['sort_dir'])
# selected app
p['selected_app'] = tools.get('app')
# search query
p["q"] = p["q"].replace('"', '\\"') # escape some special chars
p['search_query'] = render_template("search/search_query.html", p=p)
p["q"] = tools.get('q', p['c']['query']) # restore to what was entered originally
# send search request
try:
search_url = "{}/{}/post/_search".format(host, index)
p['response'] = http.http_req_json(search_url, "POST", p['search_query'])
    except urllib2.HTTPError as e:
        raise Exception("url: {}\nquery: {}\n{}".format(
            search_url, p['search_query'], e.read()))
# process the search result
p['post_list'] = []
for r in p['response']["hits"]["hits"]:
item = {}
# first take items from the fields
for k, v in r["_source"].items():
item[k] = v
# fetch highlight
if r.get('highlight'):
for k, v in r["highlight"].items():
if k == "url" or k == "_index" or k == "app":
continue
value = cgi.escape(v[0])
value = value.replace("::highlight::", "<font color=red>")
value = value.replace("::highlight_end::", "</font>")
item[k] = value
# produce standard fields
if r.get('_index') and not item.get('app'):
item['app'] = r.get('_index')
if not item.get('url'):
item['url'] = '{}/redirect?index={}&id={}'.format(
p.get('url'),
r.get('_index'),
r.get('_id'))
# Save to SearchResult
p['post_list'].append(item)
# Application Lists
p['applications'] = []
if p['response'].get('aggregations'):
internal = p['response']['aggregations']['internal']['buckets']
p['applications'].extend(
[item for item in internal if item.get('key') != 'search']
)
external = p['response']['aggregations']['external']['buckets']
p['applications'].extend(external)
# sort based on the count
p['applications'] = sorted(p['applications'],
key=lambda x: x['doc_count'], reverse=True)
# Feed Pagination
p["total"] = int(p['response']["hits"]["total"])
# Suggestion
p["suggestion"] = []; AnySuggestion = False;
# suggest.didyoumean[].options[].text
if p['response']["suggest"].get("didyoumean"):
for idx, term in enumerate(p['response']["suggest"].get("didyoumean")):
p["suggestion"].append(term["text"])
for o in term["options"]:
AnySuggestion = True
p["suggestion"][idx] = o["text"]
break # just take the first option
# if there are no suggestions then don't display
if not AnySuggestion: p["suggestion"] = []
# return json format
if tools.get("json"):
callback = tools.get("callback")
if not callback:
return json.dumps(p['response'])
else:
return "{}({})".format(callback, json.dumps(p['response']))
return render_template("search/default.html", p=p)
| 33.217391 | 85 | 0.546073 | 472 | 3,820 | 4.353814 | 0.311441 | 0.038929 | 0.017518 | 0.009732 | 0.053528 | 0.017518 | 0.017518 | 0.017518 | 0 | 0 | 0 | 0.001435 | 0.270157 | 3,820 | 114 | 86 | 33.508772 | 0.735653 | 0.121728 | 0 | 0.026316 | 0 | 0 | 0.235594 | 0.015006 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.131579 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65b8b4c75d35105b5ff106a11aa54530eaf30029 | 2,687 | py | Python | stellar_sdk/xdr/survey_response_body.py | Shaptic/py-stellar-base | f5fa47f4d96f215889d99249fb25c7be002f5cf3 | [
"Apache-2.0"
] | null | null | null | stellar_sdk/xdr/survey_response_body.py | Shaptic/py-stellar-base | f5fa47f4d96f215889d99249fb25c7be002f5cf3 | [
"Apache-2.0"
] | 27 | 2022-01-12T10:55:38.000Z | 2022-03-28T01:38:24.000Z | stellar_sdk/xdr/survey_response_body.py | Shaptic/py-stellar-base | f5fa47f4d96f215889d99249fb25c7be002f5cf3 | [
"Apache-2.0"
] | 2 | 2021-12-02T12:42:03.000Z | 2021-12-07T20:53:10.000Z | # This is an automatically generated file.
# DO NOT EDIT or your changes may be overwritten
import base64
from xdrlib import Packer, Unpacker
from ..type_checked import type_checked
from .survey_message_command_type import SurveyMessageCommandType
from .topology_response_body import TopologyResponseBody
__all__ = ["SurveyResponseBody"]
@type_checked
class SurveyResponseBody:
"""
XDR Source Code::
union SurveyResponseBody switch (SurveyMessageCommandType type)
{
case SURVEY_TOPOLOGY:
TopologyResponseBody topologyResponseBody;
};
"""
def __init__(
self,
type: SurveyMessageCommandType,
topology_response_body: TopologyResponseBody = None,
) -> None:
self.type = type
self.topology_response_body = topology_response_body
def pack(self, packer: Packer) -> None:
self.type.pack(packer)
if self.type == SurveyMessageCommandType.SURVEY_TOPOLOGY:
if self.topology_response_body is None:
raise ValueError("topology_response_body should not be None.")
self.topology_response_body.pack(packer)
return
@classmethod
def unpack(cls, unpacker: Unpacker) -> "SurveyResponseBody":
type = SurveyMessageCommandType.unpack(unpacker)
if type == SurveyMessageCommandType.SURVEY_TOPOLOGY:
topology_response_body = TopologyResponseBody.unpack(unpacker)
return cls(type=type, topology_response_body=topology_response_body)
return cls(type=type)
def to_xdr_bytes(self) -> bytes:
packer = Packer()
self.pack(packer)
return packer.get_buffer()
@classmethod
def from_xdr_bytes(cls, xdr: bytes) -> "SurveyResponseBody":
unpacker = Unpacker(xdr)
return cls.unpack(unpacker)
def to_xdr(self) -> str:
xdr_bytes = self.to_xdr_bytes()
return base64.b64encode(xdr_bytes).decode()
@classmethod
def from_xdr(cls, xdr: str) -> "SurveyResponseBody":
xdr_bytes = base64.b64decode(xdr.encode())
return cls.from_xdr_bytes(xdr_bytes)
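    # Illustrative round-trip sketch (assumption, not generated code;
    # `some_topology_body` is a placeholder TopologyResponseBody instance):
    #
    #   body = SurveyResponseBody(
    #       type=SurveyMessageCommandType.SURVEY_TOPOLOGY,
    #       topology_response_body=some_topology_body)
    #   assert SurveyResponseBody.from_xdr(body.to_xdr()) == body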
def __eq__(self, other: object):
if not isinstance(other, self.__class__):
return NotImplemented
return (
self.type == other.type
and self.topology_response_body == other.topology_response_body
)
def __str__(self):
out = []
out.append(f"type={self.type}")
out.append(
f"topology_response_body={self.topology_response_body}"
) if self.topology_response_body is not None else None
return f"<SurveyResponseBody {[', '.join(out)]}>"
| 32.373494 | 80 | 0.668031 | 287 | 2,687 | 5.996516 | 0.254355 | 0.139454 | 0.174317 | 0.083672 | 0.079024 | 0.079024 | 0 | 0 | 0 | 0 | 0 | 0.004948 | 0.24786 | 2,687 | 82 | 81 | 32.768293 | 0.846611 | 0.098623 | 0 | 0.051724 | 1 | 0 | 0.092662 | 0.031027 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155172 | false | 0 | 0.086207 | 0 | 0.431034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65b9bd2ad1163a0006a5a233a9d9d9cd5e6a3646 | 763 | py | Python | poll/migrations/0002_auto_20210114_2215.py | slk007/SahiGalat.com | 786688e07237f3554187b90e01149225efaa1713 | [
"MIT"
] | null | null | null | poll/migrations/0002_auto_20210114_2215.py | slk007/SahiGalat.com | 786688e07237f3554187b90e01149225efaa1713 | [
"MIT"
] | null | null | null | poll/migrations/0002_auto_20210114_2215.py | slk007/SahiGalat.com | 786688e07237f3554187b90e01149225efaa1713 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.5 on 2021-01-14 22:15
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('poll', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Topic',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('topic_name', models.CharField(max_length=50)),
('topic_descrption', models.CharField(max_length=255)),
],
),
migrations.AddField(
model_name='question',
name='topics',
field=models.ManyToManyField(related_name='questions', to='poll.Topic'),
),
]
| 28.259259 | 114 | 0.571429 | 76 | 763 | 5.605263 | 0.697368 | 0.070423 | 0.084507 | 0.112676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044693 | 0.296199 | 763 | 26 | 115 | 29.346154 | 0.748603 | 0.058978 | 0 | 0.1 | 1 | 0 | 0.117318 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65b9efe5fd413429042a21c46095ea299b352b7a | 370 | py | Python | Leetcode/Python/_1493.py | Xrenya/algorithms | aded82cacde2f4f2114241907861251e0e2e5638 | [
"MIT"
] | null | null | null | Leetcode/Python/_1493.py | Xrenya/algorithms | aded82cacde2f4f2114241907861251e0e2e5638 | [
"MIT"
] | null | null | null | Leetcode/Python/_1493.py | Xrenya/algorithms | aded82cacde2f4f2114241907861251e0e2e5638 | [
"MIT"
] | null | null | null | class Solution:
def longestSubarray(self, nums: List[int]) -> int:
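        # Sliding window keeping at most one zero inside [i, j]; since one
        # element must be deleted, a window of length j-i+1 yields an answer
        # of j-i. (Comment added for clarity; `List` is provided by `typing`
        # in the LeetCode environment.)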
k = 1
max_len, i = 0, 0
for j in range(len(nums)):
if nums[j] == 0:
k -= 1
if k < 0:
if nums[i] == 0:
k += 1
i += 1
max_len = max(max_len, j - i)
return max_len
| 23.125 | 54 | 0.37027 | 49 | 370 | 2.714286 | 0.408163 | 0.180451 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050279 | 0.516216 | 370 | 15 | 55 | 24.666667 | 0.692737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65c1e68e0dc7466b357152cbb876f5ad24ac99ef | 9,154 | py | Python | SaIL/envs/state_lattice_planner_env.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 12 | 2018-05-18T19:29:09.000Z | 2020-05-15T13:47:12.000Z | SaIL/envs/state_lattice_planner_env.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 1 | 2018-05-18T19:36:42.000Z | 2018-07-20T03:03:13.000Z | SaIL/envs/state_lattice_planner_env.py | yonetaniryo/SaIL | c7404024c7787184c3638e9730bd185373ed0bf6 | [
"BSD-3-Clause"
] | 10 | 2018-01-11T21:23:40.000Z | 2021-11-10T04:38:07.000Z | #!/usr/bin/env python
"""An environment that takes as input databases of environments and runs episodes,
where each episode is a search based planner. It then returns the average number of expansions,
and features (if training)
Author: Mohak Bhardwaj
"""
from collections import defaultdict
import numpy as np
import os
import pickle
import time
from SaIL.learners.supervised_regression_network import SupervisedRegressionNetwork
from planning_python.data_structures.priority_queue import PriorityQueue
from planning_python.planners.search_based_planner import SearchBasedPlanner
from planning_python.environment_interface.env_2d import Env2D
from planning_python.state_lattices.common_lattice.xy_analytic_lattice import XYAnalyticLattice
from planning_python.state_lattices.common_lattice.xyh_analytic_lattice import XYHAnalyticLattice
from planning_python.cost_functions.cost_function import PathLengthNoAng, DubinsPathLength
from planning_python.heuristic_functions.heuristic_function import EuclideanHeuristicNoAng, ManhattanHeuristicNoAng, DubinsHeuristic
from planning_python.data_structures.planning_problem import PlanningProblem
class StateLatticePlannerEnv(SearchBasedPlanner):
def __init__(self, env_params, lattice_type, lattice_params, cost_fn, learner_params):
self.env_params = env_params
self.cost_fn = cost_fn
self.lattice_type = lattice_type
if lattice_type == "XY":
self.lattice = XYAnalyticLattice(lattice_params)
self.start_n = self.lattice.state_to_node((lattice_params['x_lims'][0], lattice_params['y_lims'][0]))
self.goal_n = self.lattice.state_to_node((lattice_params['x_lims'][1]-1, lattice_params['y_lims'][0]-1))
elif lattice_type == "XYH":
self.lattice = XYHAnalyticLattice(lattice_params)
self.start_n = self.lattice.state_to_node((lattice_params['x_lims'][0], lattice_params['y_lims'][0], 0))
self.goal_n = self.lattice.state_to_node((lattice_params['x_lims'][1]-1, lattice_params['y_lims'][0]-1, 0))
self.lattice.precalc_costs(self.cost_fn) #Enumerate and cache successors and edge costs
self.learner_policy = None #This will be set prior to running a polciy using set_learner_policy
#Data structures for planning
self.frontier = [] #Frontier is un-sorted as it is sorted on demand (using heuristic)
self.oracle_frontier = PriorityQueue() #Frontier sorted according to oracle(for mixing)
self.visited = {} #Keep track of visited cells
self.c_obs = [] #Keep track of collision checks done so far
self.cost_so_far = defaultdict(lambda: np.inf) #Keep track of cost of path to the node
self.came_from = {} #Keep track of parent during search
self.learner = SupervisedRegressionNetwork(learner_params) #learner is a part of the environment
def initialize(self, env_folder, oracle_folder, num_envs, file_start_num, phase='train', visualize=False):
"""Initialize everything"""
self.env_folder = env_folder
self.oracle_folder = oracle_folder
self.num_envs = num_envs
self.phase = phase
self.visualize = visualize
self.curr_env_num = file_start_num - 1
def set_mixing_param(self, beta):
self.beta = beta
  def run_episode(self, k_tsteps=None, max_expansions=1000000):
assert self.initialized == True, "Planner has not been initialized properly. Please call initialize or reset_problem function before plan function"
start_t = time.time()
data = [] #Dataset that will be filled during training
self.came_from[self.start_n]= (None, None)
self.cost_so_far[self.start_n] = 0. #For each node, this is the cost of the shortest path to the start
    self.num_invalid_predecessors[self.start_n] = 0
    self.num_invalid_siblings[self.start_n] = 0
    self.depth_so_far[self.start_n] = 0
if self.phase == "train":
start_h_val = self.oracle[self.start_n]
self.oracle_frontier.put(self.start_n, start_h_val)
self.frontier.append(self.start_n) #This frontier is just a list
curr_expansions = 0 #Number of expansions done
num_rexpansions = 0
found_goal = False
path =[]
path_cost = np.inf
while len(self.frontier) > 0:
#Check 1: Stop search if frontier gets too large
if curr_expansions >= max_expansions:
print("Max Expansions Done.")
break
#Check 2: Stop search if open list gets too large
if len(self.frontier) > 500000:
print("Timeout.")
break
#################################################################################################
#Step 1: With probability beta, we select the oracle and (1-beta) we select the learner, also we collect data if
# curr_expansions is in one of the k timesteps
if phase == "train":
if curr_expansions in k_tsteps:
rand_idx = np.random.randint(len(self.frontier))
n = self.frontier[rand_idx] #Choose a random action
          data.append((self.get_feature_vec[n], self.curr_oracle[n])) #Query oracle for Q-value of that action and append to dataset
if np.random.random() <= self.beta:
h, curr_node = self.oracle_frontier.get()
        else:
curr_node = self.get_best_node()
else:
curr_node = self.get_best_node()
#################################################################################################
if curr_node in self.visited:
continue
#Step 3: Add to visited
self.visited[curr_node] = 1
#Check 3: Stop search if goal found
      if curr_node == self.goal_n:
        print("Found goal")
found_goal = True
break
#Step 4: If search has not ended, add neighbors of current node to frontier
neighbors, edge_costs, valid_edges, invalid_edges = self.get_successors(curr_node)
#Update the features of the parent and current node
n_invalid_edges = len(invalid_edges)
self.num_invalid_grand_children[self.came_from[curr_node][0]] += n_invalid_edges
self.num_invalid_children[curr_node] = n_invalid_edges
#Step 5: Update c_obs with collision checks performed
self.c_obs.append(invalid_edges)
g = self.cost_so_far[curr_node]
for i, neighbor in enumerate(neighbors):
new_g = g + edge_costs[i]
        if neighbor not in self.visited:
#Add neighbor to open only if it wasn't in open already (don't need duplicates) [Note: Only do this if ordering in the frontier doesn't matter]
if neighbor not in self.cost_so_far:
#Update the oracle frontier only during training (for mixing)
if self.phase == "train":
h_val = self.curr_oracle[neighbor]
self.oracle_frontier.put(neighbor, h_val)
self.frontier.append(neighbor)
#Keep track of cost of shortest path to neighbor and parent it came from (for features and reconstruct path)
if new_g < self.cost_so_far[neighbor]:
self.came_from[neighbor] = (curr_node, valid_edges[i])
self.cost_so_far[neighbor] = new_g
#Update feature dicts
self.learner.cost_so_far[neighbor] = new_g
self.learner.num_invalid_predecessors[neighbor] = self.num_invalid_predecessors[curr_node] + n_invalid_edges
self.learner.num_invalid_siblings[neighbor] = n_invalid_edges
self.learner.depth_so_far[neighbor] = self.depth_so_far[curr_node] + 1
#Step 6:increment number of expansions
curr_expansions += 1
if found_goal:
      path, path_cost = self.reconstruct_path(self.came_from, self.start_n, self.goal_n, self.cost_so_far)
else:
      print('Found no solution, priority queue empty')
time_taken = time.time()- start_t
return path, path_cost, curr_expansions, time_taken, self.came_from, self.cost_so_far, self.c_obs #Run planner on current env and return data seetn. Also, update current env to next env
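  # Illustrative driver sketch (hypothetical values, not part of the original
  # file): how one episode of the beta-mixed oracle/learner search might be run.
  #
  #   env = StateLatticePlannerEnv(env_params, "XY", lattice_params,
  #                                PathLengthNoAng(), learner_params)
  #   env.initialize(env_folder, oracle_folder, num_envs=200,
  #                  file_start_num=0, phase='train')
  #   env.set_mixing_param(beta=0.7)
  #   env.sample_world(); env.compute_oracle()
  #   path, cost, n_exp, t, came_from, g, c_obs = env.run_episode(
  #       k_tsteps=set(range(0, 1000, 50)))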
def get_heuristic(self, node, goal):
"""Given a node and goal, calculate features and get heuristic value"""
return 0
def get_best_node(self):
"""Evaluates all the nodes in the frontier and returns the best node"""
return None
def sample_world(self, mode='cycle'):
self.curr_env_num = (self.curr_env_num+1)%self.num_envs
file_path = os.path.join(os.path.abspath(self.env_folder), str(self.curr_env_num)+'.png')
    self.curr_env = self.initialize_env_from_file(file_path)
def compute_oracle(self, mode='cycle'):
file_path = os.path.join(os.path.abspath(self.oracle_folder), "oracle_"+str(self.curr_env_num)+'.p')
    self.curr_oracle = pickle.load(open(file_path, 'rb'))
def initialize_env_from_file(self, file_path):
env = Env2D()
env.initialize(file_path, self.env_params)
if self.visualize:
      env.initialize_plot(self.lattice.node_to_state(self.start_n), self.lattice.node_to_state(self.goal_n))
self.initialized = True
return env
def clear_planner(self):
self.frontier.clear()
self.visited = {}
self.c_obs = []
self.cost_so_far = {}
self.came_from = {}
| 45.093596 | 192 | 0.693358 | 1,285 | 9,154 | 4.719844 | 0.224903 | 0.012366 | 0.016323 | 0.019291 | 0.179885 | 0.101401 | 0.085903 | 0.06249 | 0.06249 | 0.050948 | 0 | 0.007256 | 0.202097 | 9,154 | 202 | 193 | 45.316832 | 0.823111 | 0.179266 | 0 | 0.108696 | 0 | 0 | 0.041685 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0 | null | null | 0 | 0.086957 | null | null | 0.028986 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65c266ffeb9dad82408ef950252b4d7368839fc3 | 966 | py | Python | opi_dragon_api/auth/__init__.py | CEAC33/opi-dragon-api | 8f050a0466dab4aaeec13151b9f49990bbd73640 | [
"MIT"
] | null | null | null | opi_dragon_api/auth/__init__.py | CEAC33/opi-dragon-api | 8f050a0466dab4aaeec13151b9f49990bbd73640 | [
"MIT"
] | null | null | null | opi_dragon_api/auth/__init__.py | CEAC33/opi-dragon-api | 8f050a0466dab4aaeec13151b9f49990bbd73640 | [
"MIT"
] | null | null | null | from sanic_jwt import exceptions
class User:
def __init__(self, id, username, password):
self.user_id = id
self.username = username
self.password = password
def __repr__(self):
return "User(id='{}')".format(self.user_id)
def to_dict(self):
return {"user_id": self.user_id, "username": self.username}
users = [User(1, "opi-user", "~Zñujh*B2D`9T!<j")]
username_table = {u.username: u for u in users}
userid_table = {u.user_id: u for u in users}
async def my_authenticate(request, *args, **kwargs):
username = request.json.get("username", None)
password = request.json.get("password", None)
if not username or not password:
raise exceptions.AuthenticationFailed("Missing username or password.")
user = username_table.get(username, None)
if user is None or password != user.password:
raise exceptions.AuthenticationFailed("Incorrect username or password")
return user | 29.272727 | 79 | 0.677019 | 129 | 966 | 4.922481 | 0.364341 | 0.056693 | 0.047244 | 0.050394 | 0.037795 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003901 | 0.203934 | 966 | 33 | 80 | 29.272727 | 0.821847 | 0 | 0 | 0 | 0 | 0 | 0.131334 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0.318182 | 0.045455 | 0.090909 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
65c64d0d6e346b2c86db0238e477f1aee46d6160 | 2,313 | py | Python | tensorflow/python/data/experimental/kernel_tests/serialization/textline_dataset_serialization_test.py | DanMitroshin/tensorflow | 74aa353842f1788bdb7506ecceaf6ba99140e165 | [
"Apache-2.0"
] | 4 | 2021-06-02T03:21:44.000Z | 2021-11-08T09:47:24.000Z | tensorflow/python/data/experimental/kernel_tests/serialization/textline_dataset_serialization_test.py | DanMitroshin/tensorflow | 74aa353842f1788bdb7506ecceaf6ba99140e165 | [
"Apache-2.0"
] | 7 | 2021-11-10T20:21:23.000Z | 2022-03-22T19:18:39.000Z | tensorflow/python/data/experimental/kernel_tests/serialization/textline_dataset_serialization_test.py | DanMitroshin/tensorflow | 74aa353842f1788bdb7506ecceaf6ba99140e165 | [
"Apache-2.0"
] | 3 | 2021-05-09T13:41:29.000Z | 2021-06-24T06:12:05.000Z | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for checkpointing the TextLineDataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
from tensorflow.python.data.experimental.kernel_tests import reader_dataset_ops_test_base
from tensorflow.python.data.kernel_tests import checkpoint_test_base
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.ops import readers as core_readers
from tensorflow.python.framework import combinations
from tensorflow.python.platform import test
class TextLineDatasetCheckpointTest(
reader_dataset_ops_test_base.TextLineDatasetTestBase,
checkpoint_test_base.CheckpointTestBase, parameterized.TestCase):
def _build_iterator_graph(self, test_filenames, compression_type=None):
return core_readers.TextLineDataset(
test_filenames, compression_type=compression_type, buffer_size=10)
@combinations.generate(test_base.default_test_combinations())
def testTextLineCore(self):
compression_types = [None, "GZIP", "ZLIB"]
num_files = 5
lines_per_file = 5
num_outputs = num_files * lines_per_file
for compression_type in compression_types:
test_filenames = self._createFiles(
num_files,
lines_per_file,
crlf=True,
compression_type=compression_type)
# pylint: disable=cell-var-from-loop
self.run_core_tests(
lambda: self._build_iterator_graph(test_filenames, compression_type),
num_outputs)
# pylint: enable=cell-var-from-loop
if __name__ == "__main__":
test.main()
| 39.20339 | 89 | 0.750108 | 291 | 2,313 | 5.697595 | 0.474227 | 0.063329 | 0.072376 | 0.057901 | 0.126659 | 0.078408 | 0.059107 | 0.059107 | 0.059107 | 0 | 0 | 0.006119 | 0.152183 | 2,313 | 58 | 90 | 39.87931 | 0.839368 | 0.335063 | 0 | 0 | 0 | 0 | 0.010547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.30303 | 0.030303 | 0.424242 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
65d0a80d19258c77b9d91fc06cfaa6455396ecc8 | 10,012 | py | Python | octopus_deploy_swagger_client/models/phase_resource.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | octopus_deploy_swagger_client/models/phase_resource.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | octopus_deploy_swagger_client/models/phase_resource.py | cvent/octopus-deploy-api-client | 0e03e842e1beb29b132776aee077df570b88366a | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Octopus Server API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 2019.6.7+Branch.tags-2019.6.7.Sha.aa18dc6809953218c66f57eff7d26481d9b23d6a
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from octopus_deploy_swagger_client.models.retention_period import RetentionPeriod # noqa: F401,E501
class PhaseResource(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'id': 'str',
'name': 'str',
'automatic_deployment_targets': 'list[str]',
'optional_deployment_targets': 'list[str]',
'minimum_environments_before_promotion': 'int',
'is_optional_phase': 'bool',
'release_retention_policy': 'RetentionPeriod',
'tentacle_retention_policy': 'RetentionPeriod'
}
attribute_map = {
'id': 'Id',
'name': 'Name',
'automatic_deployment_targets': 'AutomaticDeploymentTargets',
'optional_deployment_targets': 'OptionalDeploymentTargets',
'minimum_environments_before_promotion': 'MinimumEnvironmentsBeforePromotion',
'is_optional_phase': 'IsOptionalPhase',
'release_retention_policy': 'ReleaseRetentionPolicy',
'tentacle_retention_policy': 'TentacleRetentionPolicy'
}
def __init__(self, id=None, name=None, automatic_deployment_targets=None, optional_deployment_targets=None, minimum_environments_before_promotion=None, is_optional_phase=None, release_retention_policy=None, tentacle_retention_policy=None): # noqa: E501
"""PhaseResource - a model defined in Swagger""" # noqa: E501
self._id = None
self._name = None
self._automatic_deployment_targets = None
self._optional_deployment_targets = None
self._minimum_environments_before_promotion = None
self._is_optional_phase = None
self._release_retention_policy = None
self._tentacle_retention_policy = None
self.discriminator = None
if id is not None:
self.id = id
if name is not None:
self.name = name
if automatic_deployment_targets is not None:
self.automatic_deployment_targets = automatic_deployment_targets
if optional_deployment_targets is not None:
self.optional_deployment_targets = optional_deployment_targets
if minimum_environments_before_promotion is not None:
self.minimum_environments_before_promotion = minimum_environments_before_promotion
if is_optional_phase is not None:
self.is_optional_phase = is_optional_phase
if release_retention_policy is not None:
self.release_retention_policy = release_retention_policy
if tentacle_retention_policy is not None:
self.tentacle_retention_policy = tentacle_retention_policy
@property
def id(self):
"""Gets the id of this PhaseResource. # noqa: E501
:return: The id of this PhaseResource. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this PhaseResource.
:param id: The id of this PhaseResource. # noqa: E501
:type: str
"""
self._id = id
@property
def name(self):
"""Gets the name of this PhaseResource. # noqa: E501
:return: The name of this PhaseResource. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this PhaseResource.
:param name: The name of this PhaseResource. # noqa: E501
:type: str
"""
self._name = name
@property
def automatic_deployment_targets(self):
"""Gets the automatic_deployment_targets of this PhaseResource. # noqa: E501
:return: The automatic_deployment_targets of this PhaseResource. # noqa: E501
:rtype: list[str]
"""
return self._automatic_deployment_targets
@automatic_deployment_targets.setter
def automatic_deployment_targets(self, automatic_deployment_targets):
"""Sets the automatic_deployment_targets of this PhaseResource.
:param automatic_deployment_targets: The automatic_deployment_targets of this PhaseResource. # noqa: E501
:type: list[str]
"""
self._automatic_deployment_targets = automatic_deployment_targets
@property
def optional_deployment_targets(self):
"""Gets the optional_deployment_targets of this PhaseResource. # noqa: E501
:return: The optional_deployment_targets of this PhaseResource. # noqa: E501
:rtype: list[str]
"""
return self._optional_deployment_targets
@optional_deployment_targets.setter
def optional_deployment_targets(self, optional_deployment_targets):
"""Sets the optional_deployment_targets of this PhaseResource.
:param optional_deployment_targets: The optional_deployment_targets of this PhaseResource. # noqa: E501
:type: list[str]
"""
self._optional_deployment_targets = optional_deployment_targets
@property
def minimum_environments_before_promotion(self):
"""Gets the minimum_environments_before_promotion of this PhaseResource. # noqa: E501
:return: The minimum_environments_before_promotion of this PhaseResource. # noqa: E501
:rtype: int
"""
return self._minimum_environments_before_promotion
@minimum_environments_before_promotion.setter
def minimum_environments_before_promotion(self, minimum_environments_before_promotion):
"""Sets the minimum_environments_before_promotion of this PhaseResource.
:param minimum_environments_before_promotion: The minimum_environments_before_promotion of this PhaseResource. # noqa: E501
:type: int
"""
self._minimum_environments_before_promotion = minimum_environments_before_promotion
@property
def is_optional_phase(self):
"""Gets the is_optional_phase of this PhaseResource. # noqa: E501
:return: The is_optional_phase of this PhaseResource. # noqa: E501
:rtype: bool
"""
return self._is_optional_phase
@is_optional_phase.setter
def is_optional_phase(self, is_optional_phase):
"""Sets the is_optional_phase of this PhaseResource.
:param is_optional_phase: The is_optional_phase of this PhaseResource. # noqa: E501
:type: bool
"""
self._is_optional_phase = is_optional_phase
@property
def release_retention_policy(self):
"""Gets the release_retention_policy of this PhaseResource. # noqa: E501
:return: The release_retention_policy of this PhaseResource. # noqa: E501
:rtype: RetentionPeriod
"""
return self._release_retention_policy
@release_retention_policy.setter
def release_retention_policy(self, release_retention_policy):
"""Sets the release_retention_policy of this PhaseResource.
:param release_retention_policy: The release_retention_policy of this PhaseResource. # noqa: E501
:type: RetentionPeriod
"""
self._release_retention_policy = release_retention_policy
@property
def tentacle_retention_policy(self):
"""Gets the tentacle_retention_policy of this PhaseResource. # noqa: E501
:return: The tentacle_retention_policy of this PhaseResource. # noqa: E501
:rtype: RetentionPeriod
"""
return self._tentacle_retention_policy
@tentacle_retention_policy.setter
def tentacle_retention_policy(self, tentacle_retention_policy):
"""Sets the tentacle_retention_policy of this PhaseResource.
:param tentacle_retention_policy: The tentacle_retention_policy of this PhaseResource. # noqa: E501
:type: RetentionPeriod
"""
self._tentacle_retention_policy = tentacle_retention_policy
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(PhaseResource, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, PhaseResource):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 33.373333 | 257 | 0.663504 | 1,102 | 10,012 | 5.742287 | 0.129764 | 0.102086 | 0.096081 | 0.087231 | 0.588021 | 0.495575 | 0.45354 | 0.278919 | 0.246681 | 0.125158 | 0 | 0.017811 | 0.259788 | 10,012 | 299 | 258 | 33.48495 | 0.836055 | 0.326808 | 0 | 0.075758 | 1 | 0 | 0.09478 | 0.069606 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.030303 | 0 | 0.325758 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65d4761a181f8a12d33c2a0e4fbbb20be034782f | 309 | py | Python | project/server/main/modules/__init__.py | ardikabs/dnsmanager | 4d2f302ea9f54fd4d5416328dc46a1c47b573e5b | [
"MIT"
] | 1 | 2019-01-15T10:33:04.000Z | 2019-01-15T10:33:04.000Z | project/server/main/modules/__init__.py | ardikabs/dnsmanager | 4d2f302ea9f54fd4d5416328dc46a1c47b573e5b | [
"MIT"
] | null | null | null | project/server/main/modules/__init__.py | ardikabs/dnsmanager | 4d2f302ea9f54fd4d5416328dc46a1c47b573e5b | [
"MIT"
] | null | null | null | """ All Available Module on Server Belong to Here """
AVAILABLE_MODULES = (
"api",
)
def init_app(app, **kwargs):
from importlib import import_module
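    # dynamically import each submodule relative to this package and let it
    # register itself on the app via its own init_app hook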
for module in AVAILABLE_MODULES:
import_module(
f".{module}",
package=__name__
).init_app(app, **kwargs) | 23.769231 | 53 | 0.614887 | 36 | 309 | 5 | 0.611111 | 0.177778 | 0.111111 | 0.177778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275081 | 309 | 13 | 54 | 23.769231 | 0.803571 | 0.145631 | 0 | 0 | 0 | 0 | 0.046693 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65d5f60d4b7acc40612bcf45d7c9efe894269057 | 1,050 | py | Python | JSS Users Cleanup/setup.py | killahquam/JAMF | 77b003a72375b9b01bdb961cb466b7519c859116 | [
"MIT"
] | 34 | 2015-06-11T16:37:54.000Z | 2021-06-02T20:42:55.000Z | JSS Users Cleanup/setup.py | killahquam/JAMF | 77b003a72375b9b01bdb961cb466b7519c859116 | [
"MIT"
] | 1 | 2016-01-03T04:05:30.000Z | 2016-09-26T20:25:51.000Z | JSS Users Cleanup/setup.py | killahquam/JAMF | 77b003a72375b9b01bdb961cb466b7519c859116 | [
"MIT"
] | 6 | 2015-12-29T20:39:56.000Z | 2020-06-30T19:33:23.000Z | #!/usr/bin/python
#Quam Sodji 2015
#Setup script to install the needed python modules
#Installs kn/Slack and python-jss modules
#We assume you have Git installed.......
import subprocess
import os
import sys
import shutil
clone_jss = subprocess.check_output(['git','clone','git://github.com/sheagcraig/python-jss.git'])
clone_slack = subprocess.check_output(['git','clone','git://github.com/kn/slack.git'])
path = os.path.dirname(os.path.realpath(__file__))
#Installing Slack
print "Installing Slack"
slack_folder = os.chdir(path + '/slack')
install_slack = subprocess.check_output(['python','setup.py','install'])
print "slack module installed"
#Installing Python JSS
print "Installing Python JSS"
jss_folder = os.chdir(path + '/python-jss')
install_jss = subprocess.check_output(['python','setup.py','install'])
print "python-jss module installed"
#Cleaning up
print "Cleaning up"
change_location = os.chdir(path)
remove_slack_clone = shutil.rmtree(path + '/slack')
remove_jss_clone = shutil.rmtree(path + '/python-jss')
print "Done."
sys.exit(0) | 33.870968 | 97 | 0.75619 | 152 | 1,050 | 5.098684 | 0.355263 | 0.08129 | 0.108387 | 0.061935 | 0.224516 | 0.224516 | 0.224516 | 0.224516 | 0 | 0 | 0 | 0.005252 | 0.093333 | 1,050 | 31 | 98 | 33.870968 | 0.808824 | 0.197143 | 0 | 0 | 0 | 0 | 0.316986 | 0.084928 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.190476 | null | null | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65db99db18c44b4e940ff60964e5dae8b718ca83 | 3,988 | py | Python | datamining_assignments/datamining_assiment_3/nmf.py | xuerenlv/PaperWork | f096b57a80e8d771f080a02b925a22edbbee722a | [
"Apache-2.0"
] | 1 | 2015-10-15T12:26:07.000Z | 2015-10-15T12:26:07.000Z | datamining_assignments/datamining_assiment_3/nmf.py | xuerenlv/PaperWork | f096b57a80e8d771f080a02b925a22edbbee722a | [
"Apache-2.0"
] | null | null | null | datamining_assignments/datamining_assiment_3/nmf.py | xuerenlv/PaperWork | f096b57a80e8d771f080a02b925a22edbbee722a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Created on Oct 27, 2015
@author: nlp
'''
import numpy as np
import math
# main NMF clustering routine
def nmf(file_list, k):
X = np.array(file_list).transpose()
m_x, n_x = X.shape
    # randomly initialize the factor matrices U and V
U = np.random.rand(m_x, k)
V = np.random.rand(n_x, k)
is_convergence = False
count = 0
while not is_convergence:
count+=1
U_old = U.copy()
V_old = V.copy()
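        # Lee-Seung multiplicative update rules for NMF (X ~ U V^T):
        # U <- U * (X V) / (U V^T V), V <- V * (X^T U) / (V U^T U)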
X_V = np.dot(X, V)
U_VT_V = np.dot(U, np.dot(V.transpose(), V))
U = U * X_V / U_VT_V
XT_U = np.dot(X.transpose(), U)
V_UT_U = np.dot(V, np.dot(U.transpose(), U))
V = V * XT_U / V_UT_U
if abs((U - U_old).sum()) < 0.01 and abs((V - V_old).sum()) < 0.01:
is_convergence = True
# normalize U and V
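    # rescale each column of U to unit L2 norm and fold the scale into V,
    # so the product U V^T (and hence the clustering) is unchanged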
u_pow_2 = (U ** 2).sum(axis=0)
u_sqrt_pow_2 = [math.sqrt(w) for w in u_pow_2]
for i in range(m_x):
for j in range(k):
U[i, j] = U[i, j] / u_sqrt_pow_2[j]
for i in range(n_x):
for j in range(k):
V[i, j] *= u_sqrt_pow_2[j]
# restlt_example_map_cluster
restlt_example_map_cluster = {}
for i in range(n_x):
max_val = 0
for j in range(k):
if V[i][j] > max_val:
max_val = V[i][j]
restlt_example_map_cluster[i] = j
return restlt_example_map_cluster
# read the file into a list[list[]] where each inner list is one line of the file,
# plus a list[] holding the class label of row i
def read_file(file_name):
file_list = []
lable_list = []
for line in open(file_name).readlines():
arr_line = list(line.split(','));
lable_list.append(arr_line[-1][:-1]);
del arr_line[-1];
file_list.append([float(one) if not one=='0' else 0.000001 for one in arr_line])
return (file_list, lable_list)
#***************** evaluation metrics **********************************
# purity
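# purity = (1/N) * sum_j max_i m(i,j), where m(i,j) counts points of true
# class i assigned to cluster j; higher is better (1.0 = perfect clusters)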
def gen_purity(file_list, lable_list, restlt_example_map_cluster, cluster_num):
    # initialize the m(i,j) contingency matrix
gen_matrix = [[0 for j in range(cluster_num)] for i in range(cluster_num)]
    for index in range(len(file_list)):
lable = int(lable_list[index]) if int(lable_list[index]) > 0 else 0
gen_matrix[lable][restlt_example_map_cluster[index]] += 1
p_j = [0 for i in range(cluster_num)]
for j in range(cluster_num):
max_m_i_j = 0
for i in range(cluster_num):
if gen_matrix[i][j] > max_m_i_j:
max_m_i_j = gen_matrix[i][j]
p_j[j] = max_m_i_j
sum_val = 0
for x in p_j:
sum_val += x
return float(sum_val) / float(len(file_list))
# Gini
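# gini = sum_j M_j * g_j / N with g_j = 1 - sum_i (m(i,j)/M_j)^2,
# the impurity of cluster j (M_j = cluster size); lower is better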
def gen_gini(file_list, lable_list, restlt_example_map_cluster, cluster_num):
    # initialize the m(i,j) contingency matrix
gen_matrix = np.array([[0 for j in range(cluster_num)] for i in range(cluster_num)])
    for index in range(len(file_list)):
lable = int(lable_list[index]) if int(lable_list[index]) > 0 else 0
gen_matrix[lable][restlt_example_map_cluster[index]] += 1
M_j = gen_matrix.sum(axis=0)
g_j = [0 for i in range(cluster_num)]
for j in range(cluster_num):
for i in range(cluster_num):
g_j[j] += (float(gen_matrix[i][j]) / float(M_j[j])) ** 2
g_j[j] = 1 - g_j[j]
fenzi_sum = 0.0
for j in range(cluster_num):
fenzi_sum += g_j[j] * M_j[j]
return float(fenzi_sum) / float(len(file_list))
#****************************************************************************
def nmf_main(file_name,cluster_nums):
file_list, lable_list = read_file(file_name)
restlt_example_map_cluster = nmf(file_list, cluster_nums)
purity = gen_purity(file_list, lable_list, restlt_example_map_cluster, cluster_nums)
gini = gen_gini(file_list, lable_list, restlt_example_map_cluster, cluster_nums)
    print('%s purity: %s gini: %s' % (file_name, purity, gini))
if __name__ == '__main__':
nmf_main("german.txt", 2)
nmf_main("mnist.txt", 10)
pass
| 31.15625 | 89 | 0.570963 | 650 | 3,988 | 3.223077 | 0.170769 | 0.056802 | 0.08401 | 0.120764 | 0.436277 | 0.398568 | 0.367542 | 0.345585 | 0.334606 | 0.334606 | 0 | 0.018608 | 0.272317 | 3,988 | 127 | 90 | 31.401575 | 0.703308 | 0.082497 | 0 | 0.183908 | 0 | 0 | 0.011386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011494 | 0.022989 | null | null | 0.011494 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65ddc57bb1b73bd27f58c41a027c88ec873b6740 | 2,541 | py | Python | setup.py | jimbydamonk/jenkins-job-builder-addons | 172672e25089992ed94dc223c7e30f29c46719b0 | [
"Apache-2.0"
] | 8 | 2015-08-21T15:53:22.000Z | 2019-04-09T20:42:58.000Z | setup.py | jimbydamonk/jenkins-job-builder-addons | 172672e25089992ed94dc223c7e30f29c46719b0 | [
"Apache-2.0"
] | 5 | 2016-03-23T17:46:16.000Z | 2018-03-05T13:56:17.000Z | setup.py | jimbydamonk/jenkins-job-builder-addons | 172672e25089992ed94dc223c7e30f29c46719b0 | [
"Apache-2.0"
] | 11 | 2015-10-05T21:58:33.000Z | 2019-04-14T04:50:48.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from setuptools.command.test import test as TestCommand
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
with open('README.rst') as readme_file:
readme = readme_file.read()
with open('HISTORY.rst') as history_file:
history = history_file.read().replace('.. :changelog:', '')
requirements = [
# TODO: put package requirements here
]
test_requirements = [
# TODO: put package test requirements here
]
class Tox(TestCommand):
user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
def initialize_options(self):
TestCommand.initialize_options(self)
self.tox_args = None
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = []
self.test_suite = True
def run_tests(self):
#import here, cause outside the eggs aren't loaded
import tox
import shlex
args = self.tox_args
if args:
args = shlex.split(self.tox_args)
tox.cmdline(args=args)
setup(
name='jenkins-job-builder-addons',
version='1.0.5',
description="A suite of jenkins job builder addons",
long_description=readme + '\n\n' + history,
author="Mike Buzzetti",
author_email='mike.buzzetti@gmail.com',
url='https://github.com/jimbydamonk/jenkins-job-builder-addons',
packages=['jenkins_jobs_addons'],
include_package_data=True,
install_requires=requirements,
license="Apache",
zip_safe=False,
keywords='jenkins ',
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
],
test_suite='tests',
tests_require=['tox'] + test_requirements,
cmdclass={'test': Tox},
entry_points={
'jenkins_jobs.projects': [
'folder=jenkins_jobs_addons.folders:Folder',
],
'jenkins_jobs.views': [
'all=jenkins_jobs_addons.views:all_view',
'build_pipeline=jenkins_jobs_addons.views:build_pipeline_view',
'delivery_pipeline=jenkins_jobs_addons.'
'views:delivery_pipeline_view'
],
'jenkins_jobs.modules': [
'views=jenkins_jobs_addons.views:Views'
]
},
)
| 28.550562 | 75 | 0.637151 | 289 | 2,541 | 5.435986 | 0.449827 | 0.063017 | 0.064927 | 0.056015 | 0.038192 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005181 | 0.240457 | 2,541 | 88 | 76 | 28.875 | 0.808808 | 0.066116 | 0 | 0.042857 | 0 | 0 | 0.351351 | 0.131757 | 0 | 0 | 0 | 0.011364 | 0 | 1 | 0.042857 | false | 0.014286 | 0.085714 | 0 | 0.157143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65e1ff2eb00e84049f3aabe94179a02fc82570ba | 802 | py | Python | hw/scripts/__main__.py | jonasblixt/mongoose | 4f392353f42d9c9245cdb5d9511348ec40bd936f | [
"BSD-3-Clause"
] | 4 | 2019-07-31T17:59:14.000Z | 2019-10-06T11:46:28.000Z | hw/scripts/__main__.py | jonasblixt/mongoose | 4f392353f42d9c9245cdb5d9511348ec40bd936f | [
"BSD-3-Clause"
] | null | null | null | hw/scripts/__main__.py | jonasblixt/mongoose | 4f392353f42d9c9245cdb5d9511348ec40bd936f | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import kicad
import model
from stackups import JLCPCB6Layers
#from dram import lp4
# IMX8MM
# Diff pairs should be matched within 1ps
# CK_t/CK_c max 200 ps
# CA[5:0]
# CS[1:0] min: CK_t - 25ps, max: CK_t + 25ps
# CKE[1:0]
# DQS0_t/DQS0_c min: CK_t - 85ps, max CK_t + 85ps
# DQ[7:0] min: DQS0_t - 10ps, max DQS0_t + 10ps
# DM0
# DQS1_t/DQS1_c min: CK_t - 85ps, max CK_t + 85ps
# DQ[15:8] min: DQS1_t - 10ps, max DQS1_t + 10ps
# DM1
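# The loop below only prints per-net delays; the skew budgets listed above
# are checked by hand (automated DiffPair matching is stubbed out below).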
if __name__ == "__main__":
pcb = kicad.KicadPCB("../mongoose.kicad_pcb", JLCPCB6Layers())
# DiffPair(pcb, "_n","_p", max_delay_ps=200.0, max_skew_ps=1.0)
for net_index in pcb.get_nets().keys():
net = pcb.get_nets()[net_index]
print(net.get_name() + " dly: %.2f ps"%(net.get_delay_ps()))
| 21.675676 | 68 | 0.627182 | 144 | 802 | 3.229167 | 0.4375 | 0.045161 | 0.060215 | 0.030108 | 0.098925 | 0.098925 | 0.098925 | 0.098925 | 0.098925 | 0.098925 | 0 | 0.090909 | 0.218204 | 802 | 36 | 69 | 22.277778 | 0.650718 | 0.55611 | 0 | 0 | 0 | 0 | 0.123529 | 0.061765 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
65e2db02f151a8da25b3c6a7203333c4f0b917f2 | 4,795 | py | Python | scripts/runOptimizer.py | sschulz365/PhC_Optimization | 9a4add4eb638d797647cabbdf0f96b29b78114f2 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 2 | 2017-05-13T05:33:06.000Z | 2021-02-26T14:39:44.000Z | scripts/runOptimizer.py | sschulz365/PhC_Optimization | 9a4add4eb638d797647cabbdf0f96b29b78114f2 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | scripts/runOptimizer.py | sschulz365/PhC_Optimization | 9a4add4eb638d797647cabbdf0f96b29b78114f2 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | #Sean Billings, 2015
import random
import numpy
import subprocess
import constraints
from experiment import Experiment
from objectiveFunctions import WeightedSumObjectiveFunction, IdealDifferentialObjectiveFunction
from waveGuideMPBOptimizer import differentialEvolution, createPopulation, gradientDescentAlgorithm
import utilities
import math
paramMap = {}
paramMap["s1"] = 0 # First row vertical shift
paramMap["s2"] = 0 # Second row vertical shift
paramMap["s3"] = 0 # Third row vertical shift
paramMap["p1"] = 0 # First row horizontal shift
paramMap["p2"] = 0 # Second row horizontal shift
paramMap["p3"] = 0 # Third row horizontal shift
paramMap["r0"] = 0.3 # Default air-hole radius
paramMap["r1"] = 0.3 # Default first row radius
paramMap["r2"] = 0.3 # Default second row radius
paramMap["r3"] = 0.3 # Default third row radius
# absolute path to the mpb executable
mpb = "/Users/sean/documents/mpb-1.5/mpb/mpb"
# absolute path to the input ctl
inputFile = "/Users/sean/documents/W1_2D_v03.ctl.txt"
# absolute path to the output ctl
outputFile = "/Users/sean/documents/optimizerTestFile.txt"
# we define a general experiment object
# that we reuse whenever we need to make a command-line mpb call
# see experiment.py for functionality
experiment = Experiment(mpb, inputFile, outputFile)
# ex.setParams(paramVector)
experiment.setCalculationType('4') # accepts an int from 0 to 5
experiment.setBand(23)
# see constraints.py
constraintFunctions = [constraints.latticeConstraintsLD]
max_generation = 15 # number of iterations of the DE alg
population_size = 20 # number of solutions to consider in DE
random_update = 0.2 # chance of updating vector fields in DE alg
elite_size = 10 # number of solutions to store in DE, and use for GD
band = 23 # band of interest for MPB computations
# specify the weights for the IdealDifferentialObjectiveFunction
w1 = 0 #0.01 # bandwidth weight
w2 = 30 #100 # group index weight
w3 = 0 # average loss weight
w4 = 0 # BGP weight
w5 = 30 #0.002 # loss at ngo (group index) weight
w6 = 0
# these wights are use in the Objective Function to score mpb results
weights = [ w1, w2, w3, w4, w5, w6]
ideal_group_index = 30 #self.ideal_solution[0]
ideal_bandwidth = 0.007 #self.ideal_solution[1]
ideal_loss_at_group_index = 30 #self.ideal_solution[2]
ideal_bgp = 0.3 #self.ideal_solution[3]
ideal_delay = 300 #self.ideal_solution[4]
ideal = [ideal_group_index, ideal_bandwidth, ideal_loss_at_group_index, ideal_bgp, ideal_delay]
#Initialize objective function
#objFunc = IdealDifferentialObjectiveFunction(weights, experiment, ideal)
objFunc = WeightedSumObjectiveFunction(weights, experiment)
# Differential Evolution section
print "Starting Differential Evolution Optimizer"
# DEsolutions is an array of solutions generated by the DE alg
DEsolutions = differentialEvolution(constraintFunctions, objFunc,
max_generation, population_size, random_update,
paramMap, elite_size, experiment)
print "\nDifferential Evolution solutions generated"
population = DEsolutions
# test line
#population = createPopulation(constraintFunctions, population_size, paramMap)
descent_scaler = 0.2
completion_scaler = 0.1
alpha_scaler = 0.9
# Gradient Descent Section
print "\nStarting Gradient Descent Optimizer"
# GDsolutions is an array of solutions generated by the GD algorihtms
GDsolutions = gradientDescentAlgorithm(objFunc,
constraintFunctions,
population, descent_scaler,
completion_scaler, alpha_scaler)
population = GDsolutions
print "\nResults"
for solution in population:
print "\nSolution: " + str(solution)
results = objFunc.evaluate(solution)
solution_score = results[0]
bandwidth = results[1]
group_index = results[2]
avgLoss = results[3] # average loss
bandwidth_group_index_product = results[4] #BGP
loss_at_ng0 = results[5] # loss at group index
print "\nScore: " + str(solution_score)
print "\nNormalized Bandwidth: " + str(bandwidth)
print "\nGroup Index: " + str(group_index)
print "\nAverage Loss: " + str(avgLoss)
print "\nLoss at Group Index: " + str(loss_at_ng0)
print "\nBGP: " + str(bandwidth_group_index_product)
#print "\nComputing Fabrication Stability..."
#laplacian = utilities.computeLaplacian(weights, weightedSumObjectiveFunction, solution, experiment)
#fabrication_stability = 0
#for key in laplacian.keys():
# fabrication_stability = fabrication_stability + laplacian[key]**2
#fabrication_stability = math.sqrt(fabrication_stability)
#print "\nFabrication Stability " + str(fabrication_stability)
print "\nOptimization Complete"
| 33.531469 | 104 | 0.737018 | 594 | 4,795 | 5.848485 | 0.340067 | 0.034542 | 0.024467 | 0.020725 | 0.045481 | 0.036269 | 0.019574 | 0.019574 | 0 | 0 | 0 | 0.028834 | 0.18269 | 4,795 | 142 | 105 | 33.767606 | 0.857617 | 0.403962 | 0 | 0 | 0 | 0 | 0.142857 | 0.0425 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115385 | null | null | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65e31c331679c439236e3ccff96fa39b9166d6f4 | 435 | py | Python | setup.py | jigyasudhingra/music-recommendation-system | 09c66c4f207002b200d6394cf72e853741e44b6e | [
"MIT"
] | 2 | 2021-12-04T08:47:41.000Z | 2021-12-06T16:54:36.000Z | setup.py | jigyasudhingra/music-recommendation-system | 09c66c4f207002b200d6394cf72e853741e44b6e | [
"MIT"
] | null | null | null | setup.py | jigyasudhingra/music-recommendation-system | 09c66c4f207002b200d6394cf72e853741e44b6e | [
"MIT"
] | 1 | 2020-12-12T15:55:20.000Z | 2020-12-12T15:55:20.000Z | import os
import urllib.request
from zipfile import ZipFile
HOME_DIRECTORY = os.path.join('datasets','raw')
ROOT_URL = 'https://os.unil.cloud.switch.ch/fma/fma_metadata.zip'
if not os.path.isdir(HOME_DIRECTORY):
os.makedirs(HOME_DIRECTORY)
zip_path = os.path.join(HOME_DIRECTORY, 'data.zip')
urllib.request.urlretrieve(ROOT_URL, zip_path)
with ZipFile(zip_path, 'r') as zip_file:
    zip_file.extractall(HOME_DIRECTORY)
print("Done!") | 29 | 65 | 0.758621 | 68 | 435 | 4.691176 | 0.5 | 0.203762 | 0.094044 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 435 | 15 | 66 | 29 | 0.817949 | 0 | 0 | 0 | 0 | 0 | 0.176606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
65e8c2b06a56311edf49d920f21df0bd1cab027c | 708 | py | Python | StationeersSaveFileDebugTools.py | lostinplace/StationeersSaveFileDebugTools | 372a2fc86a9fc3af25044a56131271b577d4d97b | [
"MIT"
] | null | null | null | StationeersSaveFileDebugTools.py | lostinplace/StationeersSaveFileDebugTools | 372a2fc86a9fc3af25044a56131271b577d4d97b | [
"MIT"
] | 1 | 2021-01-10T21:12:41.000Z | 2021-01-10T21:14:49.000Z | StationeersSaveFileDebugTools.py | lostinplace/StationeersSaveFileDebugTools | 372a2fc86a9fc3af25044a56131271b577d4d97b | [
"MIT"
] | null | null | null | import click
@click.group()
def cli():
pass
@cli.command("restore_atmo")
# click derives parameter names by lowercasing the declaration, so the
# argument names must match the function's snake_case parameters
@click.argument('current_file')
@click.argument('backup_file')
@click.argument('new_file_path')
def restore_atmo(current_file, backup_file, new_file_path):
from Utils.AtmoFileProcessing.RestoreAtmo import create_restored_world_file
create_restored_world_file(current_file, backup_file, new_file_path)
@cli.command("generate_start_condition")
@click.argument('world')
def generate_start_condition(world):
from Utils.StartConditionProcessing.StartConditionGenerator import convert_world_file_to_startconditions
out = convert_world_file_to_startconditions(world)
print(out)
if __name__ == '__main__':
cli() | 26.222222 | 108 | 0.79661 | 86 | 708 | 6.139535 | 0.44186 | 0.098485 | 0.064394 | 0.079545 | 0.246212 | 0.121212 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0.103107 | 708 | 27 | 109 | 26.222222 | 0.831496 | 0 | 0 | 0 | 0 | 0 | 0.114245 | 0.03385 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.052632 | 0.157895 | 0 | 0.315789 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
028ccbb703922e522d9de79fd431e21d9aeac192 | 909 | py | Python | src/main/python/server/test.py | areichmann-tgm/client_travis | c00163e6d7630ff4efaf28605b134e356e02a9d1 | [
"MIT"
] | null | null | null | src/main/python/server/test.py | areichmann-tgm/client_travis | c00163e6d7630ff4efaf28605b134e356e02a9d1 | [
"MIT"
] | null | null | null | src/main/python/server/test.py | areichmann-tgm/client_travis | c00163e6d7630ff4efaf28605b134e356e02a9d1 | [
"MIT"
] | null | null | null | import pytest
from server import rest
@pytest.fixture
def client():
rest.app.testing = True
#rest.app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///server.MyStudents'
client = rest.app.test_client()
yield client
def test_get(client):
res = client.get('/schueler')
assert res.status_code == 200
def test_get_schueler_a(client):
res = client.get('/schuelerA')
assert res.status_code == 200
def test_delete(client):
res = client.delete('/schuelerA',data={'schueler_id':'1000'})
assert res.status_code == 200
def test_update(client):
"""res = client.put('/schuelerA',data={'schueler_id':'1000','usernameX':'Adrian','emailX':'adrian@new.at','picture':'-'})"""
assert True
def test_insert(client):
res = client.put('/schuelerA',data={'schueler_id': '10', 'usernameX': 'Nicht_Adrian', 'emailX': 'adrian@new.at', 'picture': '-'})
assert res.status_code == 200
| 25.971429 | 133 | 0.667767 | 119 | 909 | 4.966387 | 0.352941 | 0.059222 | 0.126904 | 0.128596 | 0.57022 | 0.490694 | 0.490694 | 0.138748 | 0 | 0 | 0 | 0.028497 | 0.150715 | 909 | 34 | 134 | 26.735294 | 0.737047 | 0.212321 | 0 | 0.285714 | 0 | 0 | 0.1622 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.285714 | false | 0 | 0.095238 | 0 | 0.380952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
028dbb898943de5745b9b0587b4aecb405f08834 | 3,143 | py | Python | helper-scripts/instrnocombine.py | felixaestheticus/realcode-validation | c599cc41797fc074bd2b71d205d6b2b904e1d64b | [
"BSD-3-Clause"
] | null | null | null | helper-scripts/instrnocombine.py | felixaestheticus/realcode-validation | c599cc41797fc074bd2b71d205d6b2b904e1d64b | [
"BSD-3-Clause"
] | null | null | null | helper-scripts/instrnocombine.py | felixaestheticus/realcode-validation | c599cc41797fc074bd2b71d205d6b2b904e1d64b | [
"BSD-3-Clause"
] | null | null | null | #instrnocombine.py, collects the per-run instruction-count dictionaries side by side (without combining them into one), and generates csv files for latex
#f = open('dict_random')
mem_ops = 'MOVS','MOV','LDR','LDRH','LDRB','LDRSH','LDRSB','LDM','STR','STRH','STRB','STM'
ari_ops = 'ADDS','ADD','ADC','ADCS','ADR','SUBS','SUB','SBCS','RSBS','MULS','MUL','RSB','SBC'
com_ops = 'CMP','CMN'
log_ops = 'ANDS','EORS','ORRS','BICS','MVNS','TST','EOR','MVN','ORR'
sys_ops = 'PUSH','POP','SVC','CPSID','CPSIE','MRS','MSR','BKPT','SEV','WFE','WFI','YIELD','NOP','ISB','DMB','DSB'
bra_ops = 'B','BL','BLX','BX','BCC','BCS','BEQ','BIC','BLS','BNE','BPL','BGE','BGT','BHI','BLE','BLT','BMI','BVC','BVS'
man_ops = 'SXTH','SXTB','UXTH','UXTB','REV','REV16','REVSH','LSLS','LSRS','RORS','ASR','ASRS','LSL','LSR','ROR'
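# category letters used in the csv output: M=memory, A=arithmetic, C=compare,
# L=logic, S=system, B=branch, R=register/data manipulation, O=uncategorized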
import os,sys
path = '.'
#files = []
#for i in os.listdir(path):
# if os.path.isfile(os.path.join(path,i)) and i.startswith('typelist') and not i.endswith('~'):
# files.append(i)
files = sys.argv[1:]
print(files)
dic_all = {}
print(dic_all)
for f in files:
f = open(f)
lines = f.readlines()
dic = {}
line = lines[0]
if(line!= ''):
dic = eval(line)
for key in dic:
if(key not in dic_all):
dic_all[key] = str(dic[key])
else:
dic_all[key] = str(dic_all[key]) + "," + str(dic[key])
for key in dic_all:
dic_all[key] = ''
for f in files:
f = open(f)
lines = f.readlines()
dic = {}
line = lines[0]
if(line!= ''):
dic = eval(line)
for key in dic:
#if(dic_all[key] != ''):
dic_all[key] = str(dic_all[key]) + str(dic[key])
for key in dic_all:
if(key not in dic):
dic_all[key] = str(dic_all[key]) +"0"
dic_all[key] = str(dic_all[key]) +","
print(dic_all)
ou = open('dict_nocomb','w')
ou.write(str(dic_all))
csv1 = open("tablenocomb1.csv","w")
csv2 = open("tablenocomb2.csv","w")
csv1.write("Instr. Name, Occur.(Random),Occur.(Real),Type\n")
csv2.write("Instr. Name, Occur.(Random),Occur.(Real),Type\n")
keylist = [key for key in dic_all]
keylist.sort()
nonempty = 0.0
nonemptyr = 0.0
for key in dic_all:
h= str(key)
if(h in mem_ops):
#print("1\n")
dic_all[key] = dic_all[key]+'M'
elif(h in ari_ops):
#print("2\n")
dic_all[key] = dic_all[key]+'A'
elif(h in com_ops):
#print("3\n")
dic_all[key] = dic_all[key]+'C'
elif(h in log_ops):
#print("4\n")
dic_all[key] = dic_all[key]+'L'
elif(h in sys_ops):
#print("5\n")
dic_all[key] = dic_all[key]+'S'
elif(h in bra_ops):
#print("6\n")
dic_all[key] = dic_all[key]+'B'
elif(h in man_ops):
#print("7\n")
dic_all[key] = dic_all[key]+'R'
else:
#print("no cat, sorry\n")
dic_all[key] = dic_all[key]+'O'
#for key in dic_all:
for i in range(len(keylist)):
key = keylist[i]
if(dic_all[key].split(",")[1]!='0'):
nonempty = nonempty+1
#print(str(i)+",")
if(dic_all[key].split(",")[0]!='0'):
nonemptyr = nonemptyr+1
if(i < len(keylist)/2):
csv1.write(str(key) + ',' + str(dic_all[key])+'\n')
else:
csv2.write(str(key) + ',' + str(dic_all[key])+'\n')
print( "Coverage rate -real:" + str(nonempty/len(keylist)))
print( "Coverage rate - random:" + str(nonemptyr/len(keylist)))
csv1.close()
csv2.close()
#print( "Success rate:" + str((nonempty/len(keylist)))
| 25.144 | 119 | 0.602291 | 547 | 3,143 | 3.35649 | 0.330896 | 0.133987 | 0.151961 | 0.058824 | 0.381264 | 0.355664 | 0.303922 | 0.198257 | 0.172113 | 0.12963 | 0 | 0.012588 | 0.14063 | 3,143 | 124 | 120 | 25.346774 | 0.66716 | 0.154311 | 0 | 0.289157 | 1 | 0 | 0.189087 | 0.025767 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012048 | 0 | 0.012048 | 0.060241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02974a7f2e55a4545889ad1727cb810be5d621b5 | 1,254 | py | Python | file/txt2bin.py | QPointNotebook/PythonSample | 53c2a54da2bf9a61449ed1c7d2864c5c0eedc5e0 | [
"MIT"
] | null | null | null | file/txt2bin.py | QPointNotebook/PythonSample | 53c2a54da2bf9a61449ed1c7d2864c5c0eedc5e0 | [
"MIT"
] | null | null | null | file/txt2bin.py | QPointNotebook/PythonSample | 53c2a54da2bf9a61449ed1c7d2864c5c0eedc5e0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from file.file import file
class txt2bin( file ):
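    # converts between a text file of hex strings (one value per line)
    # and a list of binary blobs, preserving empty lines as empty bytes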
def read( self, file ):
datas = []
with open( file, 'r', encoding='utf-8' ) as f:
lines = f.readlines()
for line in lines:
data = line.splitlines() # split by '\n'
if not data[0]:
d = b''
else:
val = int( data[0], 16 ) # txt -> int
leng = len( data[0] ) // 2
d = val.to_bytes( leng, byteorder='big' ) # int -> binary
datas.append( d )
return datas
def write( self, file, datas ):
with open( file, 'w', encoding='utf-8' ) as f:
for data in datas:
                val = int.from_bytes( data, byteorder='big' ) # binary -> int
                if len( data ) == 0:
                    f.write( '\n' )
                else:
                    # zero-pad to the original byte length so leading zero
                    # bytes are not lost in the int -> hex conversion
                    s = format( val, '0%dx' % ( 2 * len( data ) ) )
                    f.write( s + '\n' )
| 31.35 | 77 | 0.369219 | 138 | 1,254 | 3.34058 | 0.463768 | 0.05423 | 0.056399 | 0.073753 | 0.173536 | 0.10846 | 0 | 0 | 0 | 0 | 0 | 0.030547 | 0.503987 | 1,254 | 39 | 78 | 32.153846 | 0.710611 | 0.11244 | 0 | 0.071429 | 0 | 0 | 0.020833 | 0 | 0 | 0 | 0.003623 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.035714 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
029b069d68471e7fbe34c10e131ca57fcd80d3f5 | 892 | py | Python | blog/app/admin/views.py | web-user/flask-blog | 130f5dbcdb18b8f325c7aa8dd3d71cbc7190485a | [
"MIT"
] | null | null | null | blog/app/admin/views.py | web-user/flask-blog | 130f5dbcdb18b8f325c7aa8dd3d71cbc7190485a | [
"MIT"
] | null | null | null | blog/app/admin/views.py | web-user/flask-blog | 130f5dbcdb18b8f325c7aa8dd3d71cbc7190485a | [
"MIT"
] | null | null | null | from flask import Flask, render_template, session, redirect, url_for, request, flash, abort, current_app, make_response
from flask_login import login_user, logout_user, login_required, current_user
from . import admin
from .. import db
from ..models import User, Post
from ..form import PostForm
from functools import wraps
from flask import g, request, redirect, url_for
@admin.route('/admin', methods = ['GET', 'POST'])
@login_required
def admin():
form = PostForm()
error = None
if request.method == 'POST' and form.validate():
post = Post(body=form.body.data, title=form.title.data)
db.session.add(post)
db.session.commit()
return redirect(url_for('main.home'))
    if request.method == 'POST':
        flash('Invalid post data.')
return render_template('admin.html', title='Admin', form=form)
| 34.307692 | 119 | 0.690583 | 121 | 892 | 4.983471 | 0.454545 | 0.044776 | 0.069652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180493 | 892 | 25 | 120 | 35.68 | 0.824897 | 0 | 0 | 0 | 0 | 0 | 0.097753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.045455 | 0.363636 | 0 | 0.5 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
029ed725f1f2d111375bab605c9d49677c361f7c | 2,858 | py | Python | src/tests/crud/test_user.py | Behnam-sn/neat-backend | ba6e6356ee092eba27179f72fd2a15e25c68d1b8 | [
"MIT"
] | 1 | 2022-03-07T22:16:48.000Z | 2022-03-07T22:16:48.000Z | src/tests/crud/test_user.py | Behnam-sn/neat-backend | ba6e6356ee092eba27179f72fd2a15e25c68d1b8 | [
"MIT"
] | null | null | null | src/tests/crud/test_user.py | Behnam-sn/neat-backend | ba6e6356ee092eba27179f72fd2a15e25c68d1b8 | [
"MIT"
] | 1 | 2022-03-07T22:16:49.000Z | 2022-03-07T22:16:49.000Z | from sqlalchemy.orm import Session
from src import crud
from src.core.security import verify_password
from src.schemas.user import UserCreate, UserUpdate
from src.tests.utils.user import create_random_user_by_api
from src.tests.utils.utils import random_lower_string
def test_create_user(db: Session):
username = random_lower_string()
password = random_lower_string()
user_in = UserCreate(username=username, password=password)
user_obj = crud.create_user(db, user=user_in)
assert user_obj.username == username
assert hasattr(user_obj, "hashed_password")
def test_authenticate_user(db: Session):
username = random_lower_string()
password = random_lower_string()
create_random_user_by_api(username=username, password=password)
authenticated_user = crud.authenticate_user(
db,
username=username,
password=password
)
assert authenticated_user
assert authenticated_user.username == username
def test_not_authenticate_user(db: Session):
user = crud.authenticate_user(
db,
username=random_lower_string(),
password=random_lower_string()
)
assert user is None
def test_get_all_users(db: Session):
    users = crud.get_users(db)
assert users
def test_get_user(db: Session):
username = random_lower_string()
password = random_lower_string()
create_random_user_by_api(username=username, password=password)
user = crud.get_user_by_username(db, username=username)
assert user
assert user.username == username
def test_update_user(db: Session):
username = random_lower_string()
password = random_lower_string()
create_random_user_by_api(username=username, password=password)
new_username = random_lower_string()
full_name = random_lower_string()
user_in_update = UserUpdate(
username=new_username,
full_name=full_name,
)
crud.update_user(db, username=username, user_update=user_in_update)
user = crud.get_user_by_username(db, username=new_username)
assert user
assert username != new_username
assert user.full_name
def test_update_password(db: Session):
username = random_lower_string()
password = random_lower_string()
create_random_user_by_api(username=username, password=password)
new_password = random_lower_string()
crud.update_password(db, username=username, new_password=new_password)
user = crud.get_user_by_username(db, username=username)
assert user
assert verify_password(new_password, user.hashed_password)
def test_delete_user(db: Session):
username = random_lower_string()
password = random_lower_string()
create_random_user_by_api(username=username, password=password)
user = crud.remove_user(db, username=username)
assert user
assert user.username == username
| 26.220183 | 74 | 0.747726 | 369 | 2,858 | 5.457995 | 0.124661 | 0.098312 | 0.151936 | 0.099305 | 0.57994 | 0.495531 | 0.46574 | 0.46574 | 0.423535 | 0.386792 | 0 | 0 | 0.175647 | 2,858 | 108 | 75 | 26.462963 | 0.854839 | 0 | 0 | 0.388889 | 0 | 0 | 0.005248 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 1 | 0.111111 | false | 0.277778 | 0.083333 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02a4beb0015cd6725cf78ab2fb76439c197ecfc1 | 2,073 | py | Python | sims/s251/calc-err.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 1 | 2019-12-19T16:21:13.000Z | 2019-12-19T16:21:13.000Z | sims/s251/calc-err.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | null | null | null | sims/s251/calc-err.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 2 | 2020-01-08T06:23:33.000Z | 2020-01-08T07:06:50.000Z | from pylab import *
import tables
def exactSol(X, Y, t):
return exp(-2*t)*sin(X)*cos(Y)
fh = tables.openFile("s251-dg-diffuse-2d_q_1.h5")
q = fh.root.StructGridField
nx, ny, nc = q.shape
dx = 2*pi/nx
Xf = linspace(0, 2*pi-dx, nx)
dy = 2*pi/ny
Yf = linspace(0, 2*pi-dy, ny)
XX, YY = meshgrid(Xf, Yf)
Xhr = linspace(0, 2*pi, 101)
Yhr = linspace(0, 2*pi, 101)
XXhr, YYhr = meshgrid(Xhr, Yhr)
fhr = exactSol(XXhr, YYhr, 1.0)
figure(1)
pcolormesh(Xhr, Yhr, fhr)
colorbar()
figure(2)
pcolormesh(Xf, Yf, q[:,:,0])
colorbar()
# compute error
fex = exactSol(XX, YY, 1.0)
error = abs(fex.transpose()-q[:,:,0]).sum()/(nx*ny);
print "%g %g" % (dx, error)
def evalSum(coeff, fields):
res = 0.0*fields[0]
for i in range(len(coeff)):
res = res + coeff[i]*fields[i]
return res
def projectOnFinerGrid_f24(Xc, Yc, q):
dx = Xc[1]-Xc[0]
dy = Yc[1]-Yc[0]
nx = Xc.shape[0]
ny = Yc.shape[0]
# mesh coordinates
Xn = linspace(Xc[0]-0.5*dx, Xc[-1]+0.5*dx, 2*nx+1) # one more
Yn = linspace(Yc[0]-0.5*dy, Yc[-1]+0.5*dy, 2*ny+1) # one more
XXn, YYn = meshgrid(Xn, Yn)
# data
qn = zeros((2*Xc.shape[0], 2*Yc.shape[0]), float)
v1 = q[:,:,0]
v2 = q[:,:,1]
v3 = q[:,:,2]
v4 = q[:,:,3]
vList = [v1,v2,v3,v4]
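    # the coefficients below appear to be bilinear interpolation weights at the
    # quarter points of each cell: 0.5625 = 9/16, 0.1875 = 3/16, 0.0625 = 1/16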
# node 1
c1 = [0.5625,0.1875,0.0625,0.1875]
qn[0:2*nx:2, 0:2*ny:2] = evalSum(c1, vList)
# node 2
c2 = [0.1875,0.5625,0.1875,0.0625]
qn[1:2*nx:2, 0:2*ny:2] = evalSum(c2, vList)
# node 3
c3 = [0.1875,0.0625,0.1875,0.5625]
qn[0:2*nx:2, 1:2*ny:2] = evalSum(c3, vList)
# node 4
c4 = [0.0625,0.1875,0.5625,0.1875]
qn[1:2*nx:2, 1:2*ny:2] = evalSum(c4, vList)
return XXn, YYn, qn
Xc = linspace(0.5*dx, 2*pi-0.5*dx, nx)
Yc = linspace(0.5*dy, 2*pi-0.5*dy, ny)
Xp, Yp, qp = projectOnFinerGrid_f24(Xc, Yc, q)
figure(1)
subplot(1,2,1)
pcolormesh(Xp, Yp, transpose(qp))
title('RDG t=1')
colorbar(shrink=0.5)
axis('image')
subplot(1,2,2)
pcolormesh(Xhr, Yhr, fhr)
title('Exact t=1')
colorbar(shrink=0.5)
axis('image')
savefig('s251-exact-cmp.png')
show()
| 21.371134 | 65 | 0.578389 | 412 | 2,073 | 2.900485 | 0.269417 | 0.016736 | 0.030126 | 0.040167 | 0.244351 | 0.16569 | 0.098745 | 0.098745 | 0 | 0 | 0 | 0.131325 | 0.199228 | 2,073 | 96 | 66 | 21.59375 | 0.588554 | 0.039074 | 0 | 0.144928 | 0 | 0 | 0.037336 | 0.012614 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028986 | null | null | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02a600c96645d56c182f0a175380bb6948a7e4b5 | 973 | py | Python | PyCharm/Exercicios/Aula12/ex041.py | fabiodarice/Python | 15ec1c7428f138be875111ac98ba38cf2eec1a93 | [
"MIT"
] | null | null | null | PyCharm/Exercicios/Aula12/ex041.py | fabiodarice/Python | 15ec1c7428f138be875111ac98ba38cf2eec1a93 | [
"MIT"
] | null | null | null | PyCharm/Exercicios/Aula12/ex041.py | fabiodarice/Python | 15ec1c7428f138be875111ac98ba38cf2eec1a93 | [
"MIT"
] | null | null | null | # Import libraries
from datetime import date
# Program title
print('\033[1;34;40mCLASSIFICAÇÃO DE CATEGORIAS PARA NATAÇÃO\033[m')
# Objects
nascimento = int(input('\033[30mDigite o ano do seu nascimento:\033[m '))
idade = date.today().year - nascimento
mirim = 9
infantil = 14
junior = 19
senior = 20
# Logic
if idade <= mirim:
print('Sua idade é \033[1;33m{} anos\033[m, e sua categoria é a \033[1;34mMIRIM!\033[m'.format(idade))
elif idade <= infantil:
print('Sua idade é \033[1;33m{}\033[m anos, e sua categoria é a \033[1;34mINFANTIL!\033[m'.format(idade))
elif idade <= junior:
print('Sua idade é \033[1;33m{}\033[m anos, e sua categoria é a \033[1;34mJUNIOR!\033[m'.format(idade))
elif idade <= senior:
print('Sua idade é \033[1;33m{}\033[m anos, e sua categoria é a \033[1;34mSÊNIOR!\033[m'.format(idade))
elif idade > senior:
print('Sua idade é \033[1;33m{}\033[m anos, e sua categoria é \033[1;34mMASTER!\033[m'.format(idade)) | 38.92 | 109 | 0.693731 | 172 | 973 | 3.924419 | 0.313953 | 0.071111 | 0.044444 | 0.103704 | 0.496296 | 0.496296 | 0.425185 | 0.365926 | 0.365926 | 0.365926 | 0 | 0.139759 | 0.146968 | 973 | 25 | 110 | 38.92 | 0.673494 | 0.060637 | 0 | 0 | 0 | 0.277778 | 0.553846 | 0.156044 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02a632349d6da6f348ea1c189802c694c33a0241 | 1,681 | py | Python | github-bot/harvester_github_bot/config.py | futuretea/bot | 5f1f1a08e0fca6519e0126ff8f0b87fec23a38e3 | [
"Apache-2.0"
] | null | null | null | github-bot/harvester_github_bot/config.py | futuretea/bot | 5f1f1a08e0fca6519e0126ff8f0b87fec23a38e3 | [
"Apache-2.0"
] | null | null | null | github-bot/harvester_github_bot/config.py | futuretea/bot | 5f1f1a08e0fca6519e0126ff8f0b87fec23a38e3 | [
"Apache-2.0"
] | null | null | null | from everett.component import RequiredConfigMixin, ConfigOptions
from everett.manager import ConfigManager, ConfigOSEnv
class BotConfig(RequiredConfigMixin):
required_config = ConfigOptions()
required_config.add_option('flask_loglevel', parser=str, default='info', doc='Set the log level for Flask.')
required_config.add_option('flask_password', parser=str, doc='Password for HTTP authentication in Flask.')
required_config.add_option('flask_username', parser=str, doc='Username for HTTP authentication in Flask.')
required_config.add_option('github_owner', parser=str, default='harvester', doc='Set the owner of the target GitHub '
'repository.')
required_config.add_option('github_repository', parser=str, default='harvester', doc='Set the name of the target '
'GitHub repository.')
required_config.add_option('github_repository_test', parser=str, default='tests', doc='Set the name of the tests '
'GitHub repository.')
required_config.add_option('github_token', parser=str, doc='Set the token of the GitHub machine user.')
required_config.add_option('zenhub_pipeline', parser=str, default='Review', doc='Set the target ZenHub pipeline to '
'handle events for.')
def get_config():
config = ConfigManager(environments=[
ConfigOSEnv()
])
return config.with_options(BotConfig())
| 64.653846 | 121 | 0.606782 | 172 | 1,681 | 5.767442 | 0.325581 | 0.127016 | 0.137097 | 0.185484 | 0.444556 | 0.410282 | 0.349798 | 0.235887 | 0.235887 | 0.133065 | 0 | 0 | 0.303986 | 1,681 | 25 | 122 | 67.24 | 0.847863 | 0 | 0 | 0.095238 | 0 | 0 | 0.293278 | 0.013087 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.047619 | 0.095238 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02b2cd966c362b3581d56d85cfd72c1cf6dfa614 | 1,212 | py | Python | finetwork/plotter/_centrality_metrics.py | annakuchko/FinNetwork | 4566ff96b33fb5668f9b28f41a94791d1cf9249c | [
"MIT"
] | 5 | 2021-12-07T22:14:10.000Z | 2022-03-30T14:09:15.000Z | finetwork/plotter/_centrality_metrics.py | annakuchko/FinNetwork | 4566ff96b33fb5668f9b28f41a94791d1cf9249c | [
"MIT"
] | null | null | null | finetwork/plotter/_centrality_metrics.py | annakuchko/FinNetwork | 4566ff96b33fb5668f9b28f41a94791d1cf9249c | [
"MIT"
] | null | null | null | import networkx as nx
class _CentralityMetrics:
def __init__(self, G, metrics):
self.G = G
self.metrics = metrics
def _compute_metrics(self):
metrics = self.metrics
if metrics == 'degree_centrality':
c = self.degree_centrality()
elif metrics == 'betweenness_centrality':
c = self.betweenness_centrality()
elif metrics == 'closeness_centrality':
c = self.closeness_centrality()
elif metrics == 'eigenvector_centrality':
c = self.bonachi_eigenvector_centrality()
return c
def degree_centrality(self):
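        # note: networkx degree_centrality is unweighted and accepts no
        # weight argument, so edge weights on G are ignored here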
        centrality = nx.degree_centrality(self.G)
return centrality
def betweenness_centrality(self):
centrality = nx.betweenness_centrality(self.G, weight='weight')
return centrality
def closeness_centrality(self):
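        # networkx closeness_centrality takes edge weights via the
        # 'distance' keyword (treating weights as distances), not 'weight'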
        centrality = nx.closeness_centrality(self.G, distance='weight')
return centrality
def bonachi_eigenvector_centrality(self):
centrality = nx.eigenvector_centrality(self.G, weight='weight')
return centrality
| 32.756757 | 72 | 0.615512 | 116 | 1,212 | 6.215517 | 0.198276 | 0.15534 | 0.083218 | 0.144244 | 0.25104 | 0.25104 | 0.25104 | 0.191401 | 0 | 0 | 0 | 0 | 0.302805 | 1,212 | 36 | 73 | 33.666667 | 0.853254 | 0 | 0 | 0.142857 | 0 | 0 | 0.089286 | 0.037415 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.035714 | 0 | 0.464286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02b6a5972aef51ad1a07e6ff7ba0827ae6cad8a4 | 2,235 | py | Python | t/text_test.py | gsnedders/Template-Python | 4081e4d820c1be0c0448a8dcb79e0703066da099 | [
"Artistic-2.0"
] | null | null | null | t/text_test.py | gsnedders/Template-Python | 4081e4d820c1be0c0448a8dcb79e0703066da099 | [
"Artistic-2.0"
] | 6 | 2015-10-13T13:46:10.000Z | 2019-06-17T09:39:57.000Z | t/text_test.py | gsnedders/Template-Python | 4081e4d820c1be0c0448a8dcb79e0703066da099 | [
"Artistic-2.0"
] | 3 | 2018-12-03T13:15:21.000Z | 2019-03-13T09:12:09.000Z | from template import Template
from template.test import TestCase, main
class Stringy:
def __init__(self, text):
self.text = text
def asString(self):
return self.text
__str__ = asString
class TextTest(TestCase):
def testText(self):
tt = (("basic", Template()),
("interp", Template({ "INTERPOLATE": 1 })))
vars = self._callsign()
v2 = { "ref": lambda obj: "%s[%s]" % (obj, obj.__class__.__name__),
"sfoo": Stringy("foo"),
"sbar": Stringy("bar") }
vars.update(v2)
self.Expect(DATA, tt, vars)
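# DATA holds template/expected-output pairs separated by -- test -- and
# -- expect -- markers; -- use interp -- switches to the INTERPOLATE engine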
DATA = r"""
-- test --
This is a text block "hello" 'hello' 1/3 1\4 <html> </html>
$ @ { } @{ } ${ } # ~ ' ! % *foo
$a ${b} $c
-- expect --
This is a text block "hello" 'hello' 1/3 1\4 <html> </html>
$ @ { } @{ } ${ } # ~ ' ! % *foo
$a ${b} $c
-- test --
<table width=50%>©
-- expect --
<table width=50%>©
-- test --
[% foo = 'Hello World' -%]
start
[%
#
# [% foo %]
#
#
-%]
end
-- expect --
start
end
-- test --
pre
[%
# [% PROCESS foo %]
-%]
mid
[% BLOCK foo; "This is foo"; END %]
-- expect --
pre
mid
-- test --
-- use interp --
This is a text block "hello" 'hello' 1/3 1\4 <html> </html>
\$ @ { } @{ } \${ } # ~ ' ! % *foo
$a ${b} $c
-- expect --
This is a text block "hello" 'hello' 1/3 1\4 <html> </html>
$ @ { } @{ } ${ } # ~ ' ! % *foo
alpha bravo charlie
-- test --
<table width=50%>©
-- expect --
<table width=50%>©
-- test --
[% foo = 'Hello World' -%]
start
[%
#
# [% foo %]
#
#
-%]
end
-- expect --
start
end
-- test --
pre
[%
#
# [% PROCESS foo %]
#
-%]
mid
[% BLOCK foo; "This is foo"; END %]
-- expect --
pre
mid
-- test --
[% a = "C'est un test"; a %]
-- expect --
C'est un test
-- test --
[% META title = "C'est un test" -%]
[% component.title -%]
-- expect --
C'est un test
-- test --
[% META title = 'C\'est un autre test' -%]
[% component.title -%]
-- expect --
C'est un autre test
-- test --
[% META title = "C'est un \"test\"" -%]
[% component.title -%]
-- expect --
C'est un "test"
-- test --
[% sfoo %]/[% sbar %]
-- expect --
foo/bar
-- test --
[% s1 = "$sfoo"
s2 = "$sbar ";
s3 = sfoo;
ref(s1);
'/';
ref(s2);
'/';
ref(s3);
-%]
-- expect --
foo[str]/bar [str]/foo[Stringy]
"""
| 14.607843 | 71 | 0.503803 | 293 | 2,235 | 3.784983 | 0.242321 | 0.028855 | 0.043282 | 0.054103 | 0.58972 | 0.580703 | 0.580703 | 0.553652 | 0.553652 | 0.553652 | 0 | 0.019596 | 0.246532 | 2,235 | 152 | 72 | 14.703947 | 0.638955 | 0 | 0 | 0.640625 | 0 | 0.03125 | 0.760412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023438 | false | 0 | 0.015625 | 0.007813 | 0.070313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02bb2ad5f8635de13653c1ed22f4978ec39fcfc6 | 377 | py | Python | performance_test.py | alan-augustine/python_singly_linkedlist | f227a4154b22de8a273d319ecdd6329035d5d258 | [
"MIT"
] | null | null | null | performance_test.py | alan-augustine/python_singly_linkedlist | f227a4154b22de8a273d319ecdd6329035d5d258 | [
"MIT"
] | null | null | null | performance_test.py | alan-augustine/python_singly_linkedlist | f227a4154b22de8a273d319ecdd6329035d5d258 | [
"MIT"
] | null | null | null | from time import time
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), 'src'))
from singly_linkedlist.singly_linkedlist import SinglyLinkedList
start = time()
linked_list = SinglyLinkedList()
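# time 100,000 head insertions; insert_head on a singly linked list
# should be O(1) per call, so the total time should scale linearly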
for i in range(100000):
linked_list.insert_head(111111111111)
end = time()
print("Took {0} seconds".format(start-end))
# linked_list.print_elements()
| 23.5625 | 64 | 0.774536 | 54 | 377 | 5.203704 | 0.592593 | 0.106762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05638 | 0.106101 | 377 | 15 | 65 | 25.133333 | 0.777448 | 0.074271 | 0 | 0 | 0 | 0 | 0.054913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
02bea4753652cd78237dd184ed6e67ea923d42ea | 454 | py | Python | dataprocess/print_msg.py | lifelong-robotic-vision/openloris-scene-tools | ce6a4839f618bf036d3f3dbae14561bfc7413641 | [
"MIT"
] | 13 | 2021-03-27T15:49:21.000Z | 2022-03-19T13:26:30.000Z | dataprocess/print_msg.py | lifelong-robotic-vision/openloris-scene-tools | ce6a4839f618bf036d3f3dbae14561bfc7413641 | [
"MIT"
] | 4 | 2021-03-30T10:40:43.000Z | 2022-03-28T01:36:57.000Z | dataprocess/print_msg.py | lifelong-robotic-vision/openloris-scene-tools | ce6a4839f618bf036d3f3dbae14561bfc7413641 | [
"MIT"
] | 1 | 2022-02-16T13:42:32.000Z | 2022-02-16T13:42:32.000Z | #!/usr/bin/env python2
import rosbag
import sys
filename = sys.argv[1]
topics = sys.argv[2:]
with rosbag.Bag(filename) as bag:
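    # replay the matching messages one at a time, pausing for ENTER after each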
for topic, msg, t in bag.read_messages(topics):
print('%s @%.7f ----------------------------' % (topic, t.to_sec()))
print(msg)
print('Press ENTER to continue')
while True:
try:
raw_input()
break
except EOFError:
pass
| 25.222222 | 76 | 0.497797 | 54 | 454 | 4.12963 | 0.722222 | 0.06278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.330396 | 454 | 17 | 77 | 26.705882 | 0.720395 | 0.046256 | 0 | 0 | 0 | 0 | 0.138889 | 0.064815 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.133333 | 0 | 0.133333 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02beda4568a4663c141bf81401d0595971779e3a | 1,011 | py | Python | alegra/resources/invoice.py | okchaty/alegra | 6c423b23a24650c9121da5f165f6f03669b98468 | [
"MIT"
] | 1 | 2022-03-31T03:44:50.000Z | 2022-03-31T03:44:50.000Z | alegra/resources/invoice.py | okchaty/alegra | 6c423b23a24650c9121da5f165f6f03669b98468 | [
"MIT"
] | 4 | 2020-03-24T17:54:03.000Z | 2021-06-02T00:48:50.000Z | alegra/resources/invoice.py | okchaty/alegra | 6c423b23a24650c9121da5f165f6f03669b98468 | [
"MIT"
] | null | null | null | from alegra.api_requestor import APIRequestor
from alegra.resources.abstract import CreateableAPIResource
from alegra.resources.abstract import EmailableAPIResource
from alegra.resources.abstract import ListableAPIResource
from alegra.resources.abstract import UpdateableAPIResource
from alegra.resources.abstract import VoidableAPIResource
class Invoice(
CreateableAPIResource,
EmailableAPIResource,
ListableAPIResource,
UpdateableAPIResource,
VoidableAPIResource,
):
OBJECT_NAME = "invoices"
@classmethod
def open(cls, resource_id, user=None, token=None, api_base=None,
api_version=None, **json):
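        # Build an authenticated requestor, then POST to <class_url><resource_id>/open/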
requestor = APIRequestor(
user=user,
token=token,
api_base=api_base,
api_version=api_version,
)
url = cls.class_url() + str(resource_id) + "/open/"
response = requestor.request(
method="post",
url=url,
json=json,
)
return response
| 29.735294 | 68 | 0.681503 | 96 | 1,011 | 7.0625 | 0.385417 | 0.088496 | 0.140118 | 0.199115 | 0.243363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246291 | 1,011 | 33 | 69 | 30.636364 | 0.889764 | 0 | 0 | 0 | 0 | 0 | 0.017804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.2 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02c18f6d2d3ebb8100e01a783419de97602121b6 | 1,723 | py | Python | code/generateElevationFile.py | etcluvic/sme.altm | ffdb51d380a6b8cd8073d5ef3bd6fd15fa0779ea | [
"CC-BY-4.0"
] | null | null | null | code/generateElevationFile.py | etcluvic/sme.altm | ffdb51d380a6b8cd8073d5ef3bd6fd15fa0779ea | [
"CC-BY-4.0"
] | null | null | null | code/generateElevationFile.py | etcluvic/sme.altm | ffdb51d380a6b8cd8073d5ef3bd6fd15fa0779ea | [
"CC-BY-4.0"
] | null | null | null | from bs4 import BeautifulSoup
from datetime import datetime
from lxml import etree
import time
import codecs
import pickle
import os
def printSeparator(character, times):
print(character * times)
if __name__ == '__main__':
doiPrefix = '10.7202' #erudit's doi prefix
myTime = datetime.now().strftime('%Y-%m-%d_%H-%M-%S-%f')
referencedDocs = '/mnt/smeCode/altm/code/out/' + '2017-10-13_22-44-03-672976' + '.xml'
pickleFile = '/mnt/smeCode/parseMe2/code/pickles/keywords.p'
outputPath = '/mnt/smeCode/altm/code/elevation.files/'
outputFile = 'test.xml'
printSeparator('*',50)
print('loading pickle...')
keywords = pickle.load( open( pickleFile, "rb" ) )
print('pickle loaded!')
printSeparator('*',50)
#elevation file
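    # The generated XML maps query terms to document ids:
    # <elevate><query text="..."><doc id="..."/></query></elevate>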
rootElement = etree.Element("elevate")
f = codecs.open(referencedDocs,'r','utf-8')
markup = f.read()
f.close()
soup = BeautifulSoup(markup, "lxml-xml")
documents = soup.find_all('doi')
for d in documents:
doi = d.get_text().split('/')[1]
print(doi)
#print(d.get_text())
if doi in keywords.keys():
print(keywords[doi])
queryElement = etree.SubElement(rootElement, "query")
queryElement.set("text", ' '. join(list(keywords[doi]['terms'])))
docElement = etree.SubElement(queryElement, "doc")
docElement.set("id", doi)
printSeparator('*',50)
printSeparator('*', 50)
    print('Elevation - Saving xml file...')
xmlString = etree.tostring(rootElement, pretty_print=True, encoding='UTF-8')
fh = codecs.open(os.path.join(outputPath, myTime + '.xml'),'w', encoding='utf-8' )
fh.write(xmlString.decode('utf-8'))
fh.close()
    print('done')
printSeparator('*', 50)
print(xmlString)
print('bye')
| 22.671053 | 89 | 0.663958 | 218 | 1,723 | 5.183486 | 0.490826 | 0.070796 | 0.055752 | 0.031858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029799 | 0.162507 | 1,723 | 75 | 90 | 22.973333 | 0.753292 | 0.03018 | 0 | 0.106383 | 0 | 0 | 0.194245 | 0.082134 | 0.021277 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.148936 | null | null | 0.340426 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02c8b5302247d3f0de4a0fcfd8043adc64146600 | 1,564 | py | Python | setup.py | nilp0inter/threadedprocess | 0120d6e795782c9f527397490846cd214d9196e1 | [
"PSF-2.0"
] | 9 | 2018-03-21T22:19:10.000Z | 2021-06-08T12:10:15.000Z | setup.py | nilp0inter/threadedprocess | 0120d6e795782c9f527397490846cd214d9196e1 | [
"PSF-2.0"
] | 3 | 2019-09-18T19:57:28.000Z | 2020-07-17T08:06:54.000Z | setup.py | nilp0inter/threadedprocess | 0120d6e795782c9f527397490846cd214d9196e1 | [
"PSF-2.0"
] | 4 | 2018-03-24T23:10:38.000Z | 2020-06-18T02:26:24.000Z | import os
from setuptools import setup
try:
import concurrent.futures
except ImportError:
CONCURRENT_FUTURES_PRESENT = False
else:
CONCURRENT_FUTURES_PRESENT = True
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(
name="threadedprocess",
version="0.0.5",
author="Roberto Abdelkader Martinez Perez",
author_email="robertomartinezp@gmail.com",
description=(
"A `ThreadedProcessPoolExecutor` is formed by a modified "
"`ProcessPoolExecutor` that generates processes that use a "
"`ThreadPoolExecutor` instance to run the given tasks."),
license="BSD",
keywords="concurrent futures executor process thread",
url="https://github.com/nilp0inter/threadedprocess",
py_modules=['threadedprocess'],
long_description=read('README.rst'),
install_requires=[] if CONCURRENT_FUTURES_PRESENT else ["futures"],
classifiers=[
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: BSD License",
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy'
],
)
| 34 | 71 | 0.658568 | 160 | 1,564 | 6.35 | 0.56875 | 0.187008 | 0.246063 | 0.102362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.217391 | 1,564 | 45 | 72 | 34.755556 | 0.816176 | 0 | 0 | 0 | 0 | 0 | 0.535166 | 0.048593 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.1 | 0.025 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02c90b77315d19cadcdffd4cbada1b9dd920626e | 2,592 | py | Python | coremltools/converters/mil/mil/passes/const_elimination.py | VadimLevin/coremltools | 66c17b0fa040a0d8088d33590ab5c355478a9e5c | [
"BSD-3-Clause"
] | 3 | 2018-10-02T17:23:01.000Z | 2020-08-15T04:47:07.000Z | coremltools/converters/mil/mil/passes/const_elimination.py | holzschu/coremltools | 5ece9069a1487d5083f00f56afe07832d88e3dfa | [
"BSD-3-Clause"
] | null | null | null | coremltools/converters/mil/mil/passes/const_elimination.py | holzschu/coremltools | 5ece9069a1487d5083f00f56afe07832d88e3dfa | [
"BSD-3-Clause"
] | 1 | 2021-05-07T15:38:20.000Z | 2021-05-07T15:38:20.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2020, Apple Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-3-clause license that can be
# found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
import numpy as np
from coremltools.converters.mil.mil import Builder as mb
from coremltools.converters.mil.mil.passes.pass_registry import register_pass
def get_const_mode(val):
# Heuristics to determine if a val should be file value or immediate
# value.
if isinstance(val, (str, bool, int)):
return "immediate_value"
if isinstance(val, (np.generic, np.ndarray)):
if val.size > 10:
return "file_value"
return "immediate_value"
raise ValueError("val {} not recognized.".format(val))
def const_elimination_block(block):
# shallow copy hides changes on f.operations during the loop
for op in list(block.operations):
if op.op_type == "const":
continue
for b in op.blocks:
const_elimination_block(b)
all_outputs_are_const = True
for i, o in enumerate(op.outputs):
if o.val is not None:
with block:
res = mb.const(
val=o.val,
mode=get_const_mode(o.val),
before_op=op,
# same var name, but different python
# instance does not violate SSA property.
name=o.name,
)
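                    # Rewire every later use of the op's output var to the new const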
op.enclosing_block.replace_uses_of_var_after_op(
anchor_op=op, old_var=o, new_var=res
)
# rename the const output
o.set_name(o.name+'_ignored')
else:
all_outputs_are_const = False
if all_outputs_are_const:
op.remove_from_block()
@register_pass(namespace="common")
def const_elimination(prog):
"""
prog: Program
# Replace non-const ops that have const Var
# outputs replaced with const op. Example:
#
# Given:
# %2, %3 = non_const_op(...) # %2 is const, %3 isn't const
# %4 = other_op(%2, %3)
#
# Result:
# _, %3 = non_const_op(...) # _ is the ignored output
# %2_const = const(mode=m) # %2_const name is for illustration only
# %4 = other_op(%2_const, %3)
#
# where m is 'file_value' / 'immediate_value' depending on heuristics
# in get_const_mode.
"""
for f_name, f in prog.functions.items():
const_elimination_block(f)
| 32 | 83 | 0.58179 | 341 | 2,592 | 4.255132 | 0.451613 | 0.02481 | 0.02481 | 0.037216 | 0.082702 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012557 | 0.324074 | 2,592 | 80 | 84 | 32.4 | 0.815639 | 0.359182 | 0 | 0.051282 | 0 | 0 | 0.051298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.051282 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02d235dc4031cc79fd9ab325030c238874738554 | 2,232 | py | Python | epochCiCdApi/ita/viewsOperations.py | matsumoto-epoch/epoch | c4b1982e68aa8cb108e6ae9b1c0de489d40d4db5 | [
"Apache-2.0"
] | null | null | null | epochCiCdApi/ita/viewsOperations.py | matsumoto-epoch/epoch | c4b1982e68aa8cb108e6ae9b1c0de489d40d4db5 | [
"Apache-2.0"
] | null | null | null | epochCiCdApi/ita/viewsOperations.py | matsumoto-epoch/epoch | c4b1982e68aa8cb108e6ae9b1c0de489d40d4db5 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi  # import the CGI module
import cgitb
import sys
import requests
import json
import subprocess
import traceback
import os
import base64
import io
import logging
from django.shortcuts import render
from django.http import HttpResponse
from django.http.response import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods
ita_host = os.environ['EPOCH_ITA_HOST']
ita_port = os.environ['EPOCH_ITA_PORT']
ita_user = os.environ['EPOCH_ITA_USER']
ita_pass = os.environ['EPOCH_ITA_PASSWORD']
# Menu ID
ite_menu_operation = '2100000304'
ita_restapi_endpoint='http://' + ita_host + ':' + ita_port + '/default/menu/07_rest_api_ver1.php'
logger = logging.getLogger('apilog')
@require_http_methods(['GET'])
@csrf_exempt
def index(request):
# sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
logger.debug("CALL " + __name__ + ":{}".format(request.method))
if request.method == 'GET':
return get(request)
else:
return ""
@csrf_exempt
def get(request):
    # Build the HTTP headers
filter_headers = {
'host': ita_host + ':' + ita_port,
'Content-Type': 'application/json',
'Authorization': base64.b64encode((ita_user + ':' + ita_pass).encode()),
'X-Command': 'FILTER',
}
    #
    # Fetch the operation list from ITA
    #
opelist_resp = requests.post(ita_restapi_endpoint + '?no=' + ite_menu_operation, headers=filter_headers)
opelist_json = json.loads(opelist_resp.text)
logger.debug('---- Operation ----')
logger.debug(opelist_resp.text)
return JsonResponse(opelist_json, status=200)
| 28.987013 | 108 | 0.72043 | 297 | 2,232 | 5.255892 | 0.508418 | 0.038437 | 0.035874 | 0.043562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016757 | 0.171147 | 2,232 | 76 | 109 | 29.368421 | 0.827027 | 0.306004 | 0 | 0.045455 | 0 | 0 | 0.142202 | 0.02228 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.045455 | 0.363636 | 0 | 0.477273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
02d56efb28c0baac4d608dce2e0ed1e45b667e10 | 932 | py | Python | src/service/uri_generator.py | HalbardHobby/git-LFS-for-Lambda | d19ba6fc4605d5dc2dba52acb4236c68787f8bde | [
"MIT"
] | null | null | null | src/service/uri_generator.py | HalbardHobby/git-LFS-for-Lambda | d19ba6fc4605d5dc2dba52acb4236c68787f8bde | [
"MIT"
] | null | null | null | src/service/uri_generator.py | HalbardHobby/git-LFS-for-Lambda | d19ba6fc4605d5dc2dba52acb4236c68787f8bde | [
"MIT"
] | null | null | null | """Generates pre-signed uri's for blob handling."""
from boto3 import client
import os
s3_client = client('s3')
def create_uri(repo_name, resource_oid, upload=False, expires_in=300):
"""Create a download uri for the given oid and repo."""
action = 'get_object'
if upload:
action = 'put_object'
params = {'Bucket': os.environ['LFS_S3_BUCKET_NAME'],
'Key': repo_name + '/' + resource_oid}
return s3_client.generate_presigned_url(action, Params=params,
ExpiresIn=expires_in)
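# Example (hypothetical values): create_uri('my-repo', 'abc123', upload=True)
# returns a time-limited pre-signed PUT URL for <bucket>/my-repo/abc123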
def file_exists(repo_name, resource_oid):
"""Check if the file exists within the bucket."""
key = repo_name + '/' + resource_oid
response = s3_client.list_objects_v2(
Bucket=os.environ['LFS_S3_BUCKET_NAME'], Prefix=key)
for obj in response.get('Contents', []):
if obj['Key'] == key:
return True
return False
| 31.066667 | 71 | 0.626609 | 122 | 932 | 4.565574 | 0.467213 | 0.057451 | 0.114901 | 0.136445 | 0.186715 | 0.10772 | 0.10772 | 0 | 0 | 0 | 0 | 0.015896 | 0.257511 | 932 | 29 | 72 | 32.137931 | 0.789017 | 0.149142 | 0 | 0 | 1 | 0 | 0.10296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02d7c80b9c168487db13fab6edd36bd30ed15c3d | 4,919 | py | Python | rnn/chatbot/chatbot.py | llichengtong/yx4 | 17de7a6257a9f0c38e12089b2d1947927ec54c90 | [
"Apache-2.0"
] | 128 | 2017-03-04T08:53:44.000Z | 2020-06-05T11:19:16.000Z | rnn/chatbot/chatbot.py | github-jinwei/TensorFlowBook | 17de7a6257a9f0c38e12089b2d1947927ec54c90 | [
"Apache-2.0"
] | null | null | null | rnn/chatbot/chatbot.py | github-jinwei/TensorFlowBook | 17de7a6257a9f0c38e12089b2d1947927ec54c90 | [
"Apache-2.0"
] | 120 | 2017-02-07T09:41:25.000Z | 2022-03-17T00:57:59.000Z | # coding=utf8
import logging
import os
import random
import re
import numpy as np
import tensorflow as tf
from seq2seq_conversation_model import seq2seq_model
from seq2seq_conversation_model import data_utils
from seq2seq_conversation_model import tokenizer
from seq2seq_conversation_model.seq2seq_conversation_model import FLAGS, _buckets
from settings import SEQ2SEQ_MODEL_DIR
_LOGGER = logging.getLogger('track')
UNK_TOKEN_REPLACEMENT = [
'?',
'我不知道你在说什么',
'什么鬼。。。',
'宝宝不知道你在说什么呐。。。',
]
ENGLISHWORD_PATTERN = re.compile(r'[a-zA-Z0-9]')
def is_unichar_englishnum(char):
return ENGLISHWORD_PATTERN.match(char)
def trim(s):
"""
1. delete every space between chinese words
2. suppress extra spaces
:param s: some python string
:return: the trimmed string
"""
if not (isinstance(s, unicode) or isinstance(s, str)):
return s
unistr = s.decode('utf8') if type(s) != unicode else s
unistr = unistr.strip()
if not unistr:
return ''
trimmed_str = []
if unistr[0] != ' ':
trimmed_str.append(unistr[0])
for ind in xrange(1, len(unistr) - 1):
prev_char = unistr[ind - 1] if len(trimmed_str) == 0 else trimmed_str[-1]
cur_char = unistr[ind]
maybe_trim = cur_char == ' '
next_char = unistr[ind + 1]
if not maybe_trim:
trimmed_str.append(cur_char)
else:
if is_unichar_englishnum(prev_char) and is_unichar_englishnum(next_char):
trimmed_str.append(cur_char)
else:
continue
if unistr[-1] != ' ':
trimmed_str.append(unistr[-1])
return ''.join(trimmed_str)
class Chatbot():
"""
answer an enquiry using trained seq2seq model
"""
def __init__(self, model_dir):
# Create model and load parameters.
self.session = tf.InteractiveSession()
self.model = self.create_model(self.session, model_dir, True)
self.model.batch_size = 1
# Load vocabularies.
vocab_path = os.path.join(FLAGS.data_dir, "vocab%d" % FLAGS.vocab_size)
self.vocab, self.rev_vocab = data_utils.initialize_vocabulary(vocab_path)
def create_model(self, session, model_dir, forward_only):
"""Create conversation model and initialize or load parameters in session."""
model = seq2seq_model.Seq2SeqModel(
FLAGS.vocab_size, FLAGS.vocab_size, _buckets,
FLAGS.size, FLAGS.num_layers, FLAGS.max_gradient_norm,
FLAGS.batch_size,
FLAGS.learning_rate, FLAGS.learning_rate_decay_factor,
use_lstm=FLAGS.use_lstm,
forward_only=forward_only)
ckpt = tf.train.get_checkpoint_state(model_dir)
if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
_LOGGER.info("Reading model parameters from %s" % ckpt.model_checkpoint_path)
model.saver.restore(session, ckpt.model_checkpoint_path)
_LOGGER.info("Read model parameter succeed!")
else:
raise ValueError(
"Failed to find legal model checkpoint files in %s" % model_dir)
return model
def generate_answer(self, enquiry):
# Get token-ids for the input sentence.
token_ids = data_utils.sentence_to_token_ids(enquiry, self.vocab, tokenizer.fmm_tokenizer)
if len(token_ids) == 0:
_LOGGER.error('lens of token ids of sentence %s is 0' % enquiry)
# Which bucket does it belong to?
bucket_id = min([b for b in xrange(len(_buckets))
if _buckets[b][0] > len(token_ids)])
# Get a 1-element batch to feed the sentence to the model.
encoder_inputs, decoder_inputs, target_weights = self.model.get_batch(
{bucket_id: [(token_ids, [])]}, bucket_id)
# Get output logits for the sentence.
_, _, output_logits = self.model.step(self.session, encoder_inputs,
decoder_inputs,
target_weights, bucket_id, True)
# This is a greedy decoder - outputs are just argmaxes of output_logits.
outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
# If there is an EOS symbol in outputs, cut them at that point.
if tokenizer.EOS_ID in outputs:
outputs = outputs[:outputs.index(tokenizer.EOS_ID)]
# Print out response sentence corresponding to outputs.
answer = " ".join([self.rev_vocab[output] for output in outputs])
if tokenizer._UNK in answer:
answer = random.choice(UNK_TOKEN_REPLACEMENT)
answer = trim(answer)
return answer
def close(self):
self.session.close()
if __name__ == "__main__":
m = Chatbot(SEQ2SEQ_MODEL_DIR + '/train/')
response = m.generate_answer(u'我知道你不知道我知道你不知道我说的是什么意思')
print response
| 36.708955 | 98 | 0.645456 | 625 | 4,919 | 4.8608 | 0.328 | 0.026333 | 0.0395 | 0.036866 | 0.129032 | 0.084924 | 0 | 0 | 0 | 0 | 0 | 0.009101 | 0.262858 | 4,919 | 133 | 99 | 36.984962 | 0.828737 | 0.084163 | 0 | 0.052632 | 0 | 0 | 0.058292 | 0.005234 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115789 | null | null | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02d7c976dba252653f990cef7776c119996e55c4 | 5,986 | py | Python | chip8_pygame_integration/config_test.py | Artoooooor/chip8 | d5132348f3081aeb9af19814d8251084ae723379 | [
"MIT"
] | null | null | null | chip8_pygame_integration/config_test.py | Artoooooor/chip8 | d5132348f3081aeb9af19814d8251084ae723379 | [
"MIT"
] | null | null | null | chip8_pygame_integration/config_test.py | Artoooooor/chip8 | d5132348f3081aeb9af19814d8251084ae723379 | [
"MIT"
] | null | null | null | import unittest
import pygame
from chip8_pygame_integration.config import get_config, KeyBind, to_text
DEFAULT = [KeyBind(pygame.K_o, pygame.KMOD_CTRL, 'some_command')]
class ConfigLoadTest(unittest.TestCase):
def setUp(self):
self.default = None
def test_empty_pattern_returns_empty_array(self):
self.assertEqual([], get_config((), []))
def test_single_command_pattern_parses_single_key(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['A'])
self.expect_config([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1')])
def test_two_command_pattern_parses_2_keys(self):
self.when_pattern_is((('comm1', 'comm2',),))
self.when_lines_are(['A D'])
self.expect_config([
KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1'),
KeyBind(pygame.K_d, pygame.KMOD_NONE, 'comm2')])
def test_2_lines_pattern_parses_2_lines(self):
self.when_pattern_is((('comm1',), ('comm2',)))
self.when_lines_are(['A', 'D'])
self.expect_config([
KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1'),
KeyBind(pygame.K_d, pygame.KMOD_NONE, 'comm2')])
def test_too_little_elements_in_line_return_default(self):
self.when_pattern_is((('comm1', 'comm2'),))
self.when_lines_are(['A'])
self.when_default_is(DEFAULT)
self.expect_config(DEFAULT)
def test_ctrl_is_parsed_as_KMOD_CTRL(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['ctrl+A'])
self.expect_config([KeyBind(pygame.K_a, pygame.KMOD_CTRL, 'comm1')])
def test_two_modifiers_are_parsed(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['ctrl+lshift+A'])
kmods = pygame.KMOD_CTRL | pygame.KMOD_LSHIFT
self.expect_config([KeyBind(pygame.K_a, kmods, 'comm1')])
def test_lowercase_keys_are_parsed(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['a'])
self.expect_config([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1')])
def test_lowercase_special_keys_are_parsed(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['space'])
self.expect_config(
[KeyBind(pygame.K_SPACE, pygame.KMOD_NONE, 'comm1')])
def test_uppercase_modifiers_are_parsed(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['LCTRL+A'])
self.expect_config([KeyBind(pygame.K_a, pygame.KMOD_LCTRL, 'comm1')])
def test_invalid_key_results_in_default(self):
self.when_pattern_is((('comm1',),))
self.when_lines_are(['F42'])
self.when_default_is(DEFAULT)
self.expect_config(DEFAULT)
def when_pattern_is(self, pattern):
self.pattern = pattern
def when_lines_are(self, lines):
self.lines = lines
def when_default_is(self, default):
self.default = default
def expect_config(self, config):
result = get_config(self.pattern, self.lines, self.default)
self.assertEqual(config, result)
class ConfigSaveTest(unittest.TestCase):
def test_empty_pattern_generates_empty_file(self):
self.assertEqual([], to_text((), []))
def test_one_command_generates_1_line(self):
self.when_pattern_is((('comm1',),))
self.when_config_is([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1')])
self.expect_generated_text(['a'])
def test_two_commands_generate_line_with_2_elements(self):
self.when_pattern_is((('comm1', 'comm2'),))
self.when_config_is([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm1'),
KeyBind(pygame.K_b, pygame.KMOD_NONE, 'comm2')])
self.expect_generated_text(['a b'])
def test_commands_are_generated_in_order_of_pattern(self):
self.when_pattern_is((('comm1', 'comm2'),))
self.when_config_is([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm2'),
KeyBind(pygame.K_b, pygame.KMOD_NONE, 'comm1')])
self.expect_generated_text(['b a'])
def test_two_lines_generate_2_lines_(self):
self.when_pattern_is((('comm1',), ('comm2',),))
self.when_config_is([KeyBind(pygame.K_a, pygame.KMOD_NONE, 'comm2'),
KeyBind(pygame.K_b, pygame.KMOD_NONE, 'comm1')])
self.expect_generated_text(['b', 'a'])
def test_KMOD_CTRL_generates_output(self):
self.expect_3_mod_versions_handled('ctrl')
def test_KMOD_SHIFT_generates_output(self):
self.expect_3_mod_versions_handled('shift')
def test_KMOD_ALT_generates_output(self):
self.expect_3_mod_versions_handled('alt')
def test_KMOD_META_generates_output(self):
self.expect_3_mod_versions_handled('meta')
def test_KMOD_CAPS_generates_output(self):
self.expect_mod_handled('caps')
def test_KMOD_NUM_generates_output(self):
self.expect_mod_handled('num')
def test_KMOD_MODE_generates_output(self):
self.expect_mod_handled('mode')
def expect_3_mod_versions_handled(self, baseModName):
self.expect_mod_handled(baseModName)
self.expect_mod_handled('l' + baseModName)
self.expect_mod_handled('r' + baseModName)
def expect_mod_handled(self, modName):
self.when_pattern_is((('comm1',),))
fieldName = 'KMOD_' + modName.upper()
mod = getattr(pygame, fieldName)
self.when_config_is([KeyBind(pygame.K_a, mod, 'comm1')])
expected = '{}+a'.format(modName)
self.expect_generated_text([expected])
def when_pattern_is(self, pattern):
self.pattern = pattern
def when_config_is(self, config):
self.config = config
def expect_generated_text(self, text):
text = self.add_newlines(text)
self.assertEqual(text, to_text(self.pattern, self.config))
def add_newlines(self, lines):
return [l + '\n' for l in lines]
if __name__ == '__main__':
unittest.main()
| 36.278788 | 77 | 0.664885 | 791 | 5,986 | 4.646018 | 0.132743 | 0.06966 | 0.072381 | 0.069388 | 0.602449 | 0.557551 | 0.542313 | 0.49415 | 0.472381 | 0.420136 | 0 | 0.011072 | 0.200301 | 5,986 | 164 | 78 | 36.5 | 0.756633 | 0 | 0 | 0.274194 | 0 | 0 | 0.051119 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.274194 | false | 0 | 0.024194 | 0.008065 | 0.322581 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02db29d58f9fcbf982055980d5e6b51e86d8c020 | 2,419 | py | Python | Form-Filler.py | Zaidtech/AUTOMATION-SCRIPTS | 88c83e1edca02b0b86f3de4981a5f27f398b4441 | [
"MIT"
] | 4 | 2020-11-04T13:25:48.000Z | 2022-03-29T01:21:49.000Z | Form-Filler.py | Zaidtech/AUTOMATION-SCRIPTS | 88c83e1edca02b0b86f3de4981a5f27f398b4441 | [
"MIT"
] | null | null | null | Form-Filler.py | Zaidtech/AUTOMATION-SCRIPTS | 88c83e1edca02b0b86f3de4981a5f27f398b4441 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
This script has been tested on various custom Google Forms and other forms,
with a few alterations.
Google Forms that include the input type "token" attribute were found
to be safer than those that don't.
Any form contains various fields.
1. input text fields
2. radio
3. checkboxes
4. textareas
5. Uploads --- important, still a work in progress.
"""
import re
import requests
from urllib.request import urlopen
from bs4 import BeautifulSoup
params = {}
url = input("Enter the website url")
page = urlopen(url)
bs_obj = BeautifulSoup(page, 'html.parser')
# bs_obj.prettify() --> it's effects on the tags buried deep in the divs
requests.session()
input_tags = bs_obj.find_all('input')
# print(input_tags)
form_action = bs_obj.find('form') # some pages have multiple form tags ...
text_tags = bs_obj.find_all('textarea')
for text in text_tags:
try:
print(text['name'])
text['name'] = "Running around and fill this form"
except:
print('Key Error')
# if form_action.attrs['action'] == "" or None:
# print("Form action not specifies")
# else:
# print(form_action)
url = form_action.attrs['action']
print(f"Post request is send in here: {url}")
# There might be some custom fields that have to be inspected manually, as they skip the scraper,
# like params['entry.377191685'] = 'Faculty'
# params['tos'] = 'true'
# Vary these accordingly; at the very least, an attack is just not that easy. ;-)
for tag in input_tags:
try:
print(tag.attrs['aria-label'])
except:
pass
try:
if tag.attrs['value'] == "" or None:
tag.attrs['value'] = input(f"Enter the value of {tag.attrs['name']}")
params[tag.attrs['name']] = tag.attrs['value']
# except:
# value= input(f"Enter the value of {tag.attrs['name']}")
# params[tag.attrs['name']] = value
else:
params[tag.attrs['name']] = tag.attrs['value'].strip('\n')
except:
pass
print(params)
# getting the dicts as printed here... which is to be submitted
while True:
requests.session()
r = requests.post(url, data=params)
print(r.status_code)
# 200 OK ---> submitted
# 400 BAD REQUEST ERROR --> input data corrupt or server incompatible
# 401 UNAUTHORIZED ACCESS --> validation failed (need to deal with tokens and cookies)
| 27.804598 | 107 | 0.653162 | 345 | 2,419 | 4.530435 | 0.489855 | 0.051184 | 0.038388 | 0.034549 | 0.120282 | 0.099808 | 0.099808 | 0.071657 | 0.071657 | 0.071657 | 0 | 0.013362 | 0.22654 | 2,419 | 86 | 108 | 28.127907 | 0.82202 | 0.503514 | 0 | 0.263158 | 0 | 0 | 0.185532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.105263 | 0 | 0.105263 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02dbf3b5b09c9427c60b05103927121e020bab72 | 1,375 | py | Python | controllers/main.py | dduarte-odoogap/odoo_jenkins | 69bfcf088f75426c0e4b961a60b5c15a65b37979 | [
"BSD-2-Clause"
] | 5 | 2018-10-26T19:52:45.000Z | 2021-11-04T03:59:22.000Z | controllers/main.py | dduarte-odoogap/odoo_jenkins | 69bfcf088f75426c0e4b961a60b5c15a65b37979 | [
"BSD-2-Clause"
] | null | null | null | controllers/main.py | dduarte-odoogap/odoo_jenkins | 69bfcf088f75426c0e4b961a60b5c15a65b37979 | [
"BSD-2-Clause"
] | 6 | 2017-11-10T07:15:40.000Z | 2021-02-24T10:55:15.000Z | # -*- coding: utf-8 -*-
from odoo import http
from odoo.http import request
import jenkins
class JenkinsController(http.Controller):
@http.route('/web/jenkins/jobs', type='json', auth='user')
def jenkins_get_jobs(self, **kw):
params = request.env['ir.config_parameter']
jenkins_url = params.sudo().get_param('jenkins_ci.url', default='')
jenkins_user = params.sudo().get_param('jenkins_ci.user', default='')
jenkins_password = params.sudo().get_param('jenkins_ci.password', default='')
server = jenkins.Jenkins(jenkins_url, username=jenkins_user, password=jenkins_password)
res = []
jobs = server.get_jobs()
for job in jobs:
jid = {
"color": job['color'],
"name": job['name'],
"healthReport": server.get_job_info(job['name'])['healthReport']
}
res.append(jid)
return {
'jobs': res
}
@http.route('/web/jenkins/build', type='json', auth='user')
def jenkins_build_job(self, job, **kw):
        # Fetch credentials from config parameters; the controller defines no such attributes itself
        params = request.env['ir.config_parameter']
        jenkins_url = params.sudo().get_param('jenkins_ci.url', default='')
        jenkins_user = params.sudo().get_param('jenkins_ci.user', default='')
        jenkins_password = params.sudo().get_param('jenkins_ci.password', default='')
server = jenkins.Jenkins(jenkins_url, username=jenkins_user, password=jenkins_password)
res = server.build_job(job)
return {'result': res}
| 33.536585 | 95 | 0.611636 | 160 | 1,375 | 5.06875 | 0.30625 | 0.061652 | 0.048089 | 0.066584 | 0.348952 | 0.348952 | 0.184957 | 0.184957 | 0.184957 | 0.184957 | 0 | 0.00097 | 0.250182 | 1,375 | 40 | 96 | 34.375 | 0.785645 | 0.015273 | 0 | 0.064516 | 0 | 0 | 0.128793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.129032 | 0.096774 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02f4fc8fa710340e57d5ba18128bb096623e09a7 | 871 | py | Python | start_palpeo.py | RealDebian/Palpeo | 23be184831a3c529cf933277944e7aacda08cdad | [
"MIT"
] | null | null | null | start_palpeo.py | RealDebian/Palpeo | 23be184831a3c529cf933277944e7aacda08cdad | [
"MIT"
] | null | null | null | start_palpeo.py | RealDebian/Palpeo | 23be184831a3c529cf933277944e7aacda08cdad | [
"MIT"
] | null | null | null | from link_extractor import run_enumeration
from colorama import Fore
from utils.headers import HEADERS
from time import sleep
import requests
import database
import re
import json
from bs4 import BeautifulSoup
import colorama
print(Fore.GREEN + '-----------------------------------' + Fore.RESET, Fore.RED)
print('尸闩㇄尸㠪龱 - Website Link Extractor')
print(' by @RealDebian | V0.02')
print(Fore.GREEN + '-----------------------------------' + Fore.RESET)
print()
sleep(1)
print('Example:')
print()
target_host = str(input('Target Site: '))
print('Select the Protocol (http|https)')
sleep(.5)
protocol = str(input('http=0 | https=1: '))
while True:
if protocol == '0':
run_enumeration('http://' + target_host)
break
elif protocol == '1':
run_enumeration('https://' + target_host)
break
else:
print('Wrong option!')
| 24.194444 | 80 | 0.624569 | 108 | 871 | 4.981481 | 0.481481 | 0.078067 | 0.052045 | 0.066915 | 0.085502 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013908 | 0.174512 | 871 | 35 | 81 | 24.885714 | 0.732962 | 0 | 0 | 0.129032 | 0 | 0 | 0.263218 | 0.08046 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.322581 | 0 | 0.322581 | 0.290323 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
02f6d5351b6d28ac6a5a83e1bce309686a5a07fc | 833 | py | Python | src/backend/backend/shopit/migrations/0024_auto_20201028_2008.py | tejpratap545/E-Commerce-Application | c1aada5d86f231e5acd6ba4c6c9b88ff4b351f7a | [
"MIT"
] | null | null | null | src/backend/backend/shopit/migrations/0024_auto_20201028_2008.py | tejpratap545/E-Commerce-Application | c1aada5d86f231e5acd6ba4c6c9b88ff4b351f7a | [
"MIT"
] | 7 | 2021-08-13T23:05:47.000Z | 2022-02-27T10:23:46.000Z | src/backend/backend/shopit/migrations/0024_auto_20201028_2008.py | tejpratap545/E-Commerce-Application | c1aada5d86f231e5acd6ba4c6c9b88ff4b351f7a | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2020-10-28 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('shopit', '0023_availablefilterselectoptions_value'),
]
operations = [
migrations.RemoveField(
model_name='productinfo',
name='is_available',
),
migrations.RemoveField(
model_name='productinfo',
name='stock',
),
migrations.AddField(
model_name='product',
name='popularity',
field=models.SmallIntegerField(blank=True, default=5, null=True),
),
migrations.AddField(
model_name='product',
name='stock',
field=models.PositiveIntegerField(blank=True, default=1, null=True),
),
]
| 26.03125 | 80 | 0.57503 | 75 | 833 | 6.293333 | 0.573333 | 0.076271 | 0.110169 | 0.127119 | 0.351695 | 0.351695 | 0 | 0 | 0 | 0 | 0 | 0.036649 | 0.312125 | 833 | 31 | 81 | 26.870968 | 0.787086 | 0.054022 | 0 | 0.56 | 1 | 0 | 0.143766 | 0.049618 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02f79e3624d623adc544da46b4a6554d6c1bfa3b | 849 | py | Python | fileo/accounts/forms.py | Tiqur/Fileo | 0c663f3bb28985d2d7b4cb475a95b1592cfb2013 | [
"MIT"
] | null | null | null | fileo/accounts/forms.py | Tiqur/Fileo | 0c663f3bb28985d2d7b4cb475a95b1592cfb2013 | [
"MIT"
] | null | null | null | fileo/accounts/forms.py | Tiqur/Fileo | 0c663f3bb28985d2d7b4cb475a95b1592cfb2013 | [
"MIT"
] | null | null | null | from django import forms
from django.contrib.auth import authenticate
from django.contrib.auth.forms import UserCreationForm
from .models import FileoUser
User = FileoUser()
class UserLoginForm(forms.ModelForm):
password = forms.CharField(label='Password', widget=forms.PasswordInput)
class Meta:
model = FileoUser
fields = ('email', 'password')
def clean(self):
email = self.cleaned_data['email']
password = self.cleaned_data['password']
if not authenticate(email=email, password=password):
raise forms.ValidationError('Invalid login')
class UserRegisterForm(UserCreationForm):
email = forms.EmailField(max_length=60, help_text='Add a valid email address')
class Meta:
model = FileoUser
fields = ('email', 'username', 'password1', 'password2')
| 29.275862 | 82 | 0.69258 | 92 | 849 | 6.347826 | 0.521739 | 0.05137 | 0.058219 | 0.071918 | 0.116438 | 0.116438 | 0 | 0 | 0 | 0 | 0 | 0.005926 | 0.204947 | 849 | 28 | 83 | 30.321429 | 0.859259 | 0 | 0 | 0.2 | 0 | 0 | 0.121319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.25 | 0.2 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
02f8318053016bd127b7feb86e89f4c704276dce | 465 | py | Python | kagi/upper/west/_capital/four.py | jedhsu/kagi | 1301f7fc437bb445118b25ca92324dbd58d6ad2d | [
"MIT"
] | null | null | null | kagi/upper/west/_capital/four.py | jedhsu/kagi | 1301f7fc437bb445118b25ca92324dbd58d6ad2d | [
"MIT"
] | null | null | null | kagi/upper/west/_capital/four.py | jedhsu/kagi | 1301f7fc437bb445118b25ca92324dbd58d6ad2d | [
"MIT"
] | null | null | null | """
*Upper-West Capital 4* ⠨
The upper-west capital four gi.
"""
from dataclasses import dataclass
from ....._gi import Gi
from ....capital import CapitalGi
from ...._gi import StrismicGi
from ....west import WesternGi
from ...._number import FourGi
from ..._gi import UpperGi
__all__ = ["UpperWestCapital4"]
@dataclass
class UpperWestCapital4(
Gi,
StrismicGi,
UpperGi,
WesternGi,
CapitalGi,
FourGi,
):
symbol = "\u2828"
| 15 | 33 | 0.668817 | 52 | 465 | 5.846154 | 0.442308 | 0.059211 | 0.118421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019126 | 0.212903 | 465 | 30 | 34 | 15.5 | 0.808743 | 0.126882 | 0 | 0 | 0 | 0 | 0.058974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.388889 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
02f8b65e136d03ceacb32c0a454b3d2ad573a0cb | 191 | py | Python | acmicpc/5612.py | juseongkr/BOJ | 8f10a2bf9a7d695455493fbe7423347a8b648416 | [
"Apache-2.0"
] | 7 | 2020-02-03T10:00:19.000Z | 2021-11-16T11:03:57.000Z | acmicpc/5612.py | juseongkr/Algorithm-training | 8f10a2bf9a7d695455493fbe7423347a8b648416 | [
"Apache-2.0"
] | 1 | 2021-01-03T06:58:24.000Z | 2021-01-03T06:58:24.000Z | acmicpc/5612.py | juseongkr/Algorithm-training | 8f10a2bf9a7d695455493fbe7423347a8b648416 | [
"Apache-2.0"
] | 1 | 2020-01-22T14:34:03.000Z | 2020-01-22T14:34:03.000Z | n = int(input())
m = int(input())
r = m
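# m is the running total after each checkpoint; r records its maximum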
for i in range(n):
a, b = map(int, input().split())
m += a
m -= b
if m < 0:
print(0)
exit()
r = max(r, m)
print(r)
| 14.692308 | 36 | 0.418848 | 35 | 191 | 2.285714 | 0.514286 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.371728 | 191 | 12 | 37 | 15.916667 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02f942ae72f558610fdbd2e0d719bb8a1bc37d6c | 1,849 | py | Python | users/models.py | uoe-compsci-grp30/campusgame | d2d7ba99210f352a7b45a1db06cea0a09e3b8c31 | [
"MIT"
] | null | null | null | users/models.py | uoe-compsci-grp30/campusgame | d2d7ba99210f352a7b45a1db06cea0a09e3b8c31 | [
"MIT"
] | null | null | null | users/models.py | uoe-compsci-grp30/campusgame | d2d7ba99210f352a7b45a1db06cea0a09e3b8c31 | [
"MIT"
] | null | null | null | import uuid
from django.contrib.auth.models import AbstractUser
from django.db import models
"""
The user model that represents a user participating in the game.
Implemented using the built-in Django user model: AbstractUser.
"""
class User(AbstractUser):
""" The User class that represents a user that has created an account.
Implemented using the built-in Django user model 'AbstractUser'.
The User class consists of an id that uniquely identifies a user. It uses a uuid in order to be more secure.
It also contains a profile picture that is uploaded by the user.
"""
id = models.UUIDField(default=uuid.uuid4, primary_key=True) # id uniquely identifies a user
is_gamekeeper = models.BooleanField(default=False) # is the user a gamekeeper?
class GameParticipation(models.Model):
"""
    GameParticipation holds the state of a user currently taking part in a game,
    giving an easy way to store per-player data: the User playing, the Game being
    played, the Zone the user is currently in, and booleans for whether the user
    is alive and whether they have been eliminated.
"""
user = models.ForeignKey(User, on_delete=models.CASCADE) # User that is currently participating in a game
game = models.ForeignKey("games.Game", on_delete=models.CASCADE) # What game is the user currently participating in
current_zone = models.ForeignKey("games.Zone", on_delete=models.DO_NOTHING) # What zone is the user currently in
score = models.IntegerField(default=0) # User score
is_alive = models.BooleanField(default=False) # Is the player alive
is_eliminated = models.BooleanField(default=False) # Is the player eliminated
| 52.828571 | 449 | 0.760411 | 277 | 1,849 | 5.043321 | 0.31769 | 0.055118 | 0.068719 | 0.064424 | 0.245526 | 0.204009 | 0.178955 | 0.120258 | 0.075877 | 0 | 0 | 0.001318 | 0.179557 | 1,849 | 34 | 450 | 54.382353 | 0.919578 | 0.538129 | 0 | 0 | 0 | 0 | 0.029806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
02fcd2548a49becf32a01085ecf16e34635af225 | 32,807 | py | Python | train.py | EdwardLeeMacau/PFFNet | dfa6e45062627ce6ab7a1b1a37bada5cccae7167 | [
"MIT"
] | null | null | null | train.py | EdwardLeeMacau/PFFNet | dfa6e45062627ce6ab7a1b1a37bada5cccae7167 | [
"MIT"
] | null | null | null | train.py | EdwardLeeMacau/PFFNet | dfa6e45062627ce6ab7a1b1a37bada5cccae7167 | [
"MIT"
] | null | null | null | """
FileName [ train.py ]
PackageName [ PFFNet ]
Synopsis [ Train the model ]
Usage:
>>> python train.py --normalized --cuda
"""
import argparse
import os
import shutil
from datetime import date
import matplotlib
import numpy as np
import pandas as pd
import torch
import torchvision
import torchvision.models
from torchvision import transforms
from matplotlib import pyplot as plt
from matplotlib import gridspec
from skimage.measure import compare_psnr, compare_ssim
from torch import nn, optim
from torch.backends import cudnn
from torch.utils.data import DataLoader
from torchvision.transforms import (
CenterCrop, Compose, Normalize, RandomCrop, Resize, ToTensor)
from torchvision.utils import make_grid
import cmdparser
import graphs
import utils
from model import lossnet
from data import DatasetFromFolder
from model.rpnet import Net
from model.rpnet_improve import ImproveNet
from model.lossnet import LossNetwork
# Select Device
device = utils.selectDevice()
cudnn.benchmark = True
# Normalization(Mean Shift)
mean = torch.Tensor([0.485, 0.456, 0.406]).to(device)
std = torch.Tensor([0.229, 0.224, 0.225]).to(device)
def getDataset(opt, transform):
"""
Return the dataloader object
Parameters
----------
opt : namespace
transform : torchvision.transform
Return
------
train_loader, val_loader : torch.utils.data.DataLoader
"""
train_dataset = DatasetFromFolder(opt.train, transform=transform)
val_dataset = DatasetFromFolder(opt.val, transform=transform)
train_loader = DataLoader(
dataset=train_dataset,
num_workers=opt.threads,
batch_size=opt.batchsize,
pin_memory=True,
shuffle=True
)
val_loader = DataLoader(
dataset=val_dataset,
num_workers=opt.threads,
batch_size=opt.batchsize,
pin_memory=True,
shuffle=True
)
return train_loader, val_loader
def getOptimizer(model, opt):
"""
Return the optimizer (and schedular)
Parameters
----------
model : torch.nn.Model
opt : namespace
Return
------
optimizer : torch.optim
"""
if opt.optimizer == "Adam":
optimizer = optim.Adam(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "SGD":
optimizer = optim.SGD(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "ASGD":
optimizer = optim.ASGD(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
lambd=1e-4,
alpha=0.75,
t0=1000000.0,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "Adadelta":
optimizer = optim.Adadelta(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
rho=0.9,
eps=1e-06,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "Adagrad":
optimizer = optim.Adagrad(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
lr_decay=0,
weight_decay=opt.weight_decay,
initial_accumulator_value=0
)
elif opt.optimizer == "Adam":
optimizer = optim.Adam(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "SGD":
optimizer = optim.SGD(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "ASGD":
optimizer = optim.ASGD(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
lambd=1e-4,
alpha=0.75,
t0=1000000.0,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "Adadelta":
optimizer = optim.Adadelta(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
rho=0.9,
eps=1e-06,
weight_decay=opt.weight_decay
)
elif opt.optimizer == "Adagrad":
optimizer = optim.Adagrad(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
lr_decay=0,
weight_decay=opt.weight_decay,
initial_accumulator_value=0
)
elif opt.optimizer == "SparseAdam":
optimizer = optim.SparseAdam(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
betas=(opt.b1, opt.b2),
eps=1e-08
)
elif opt.optimizer == "Adamax":
optimizer = optim.Adamax(
filter(lambda p: p.requires_grad, model.parameters()),
lr=opt.lr,
betas=(opt.b1, opt.b2),
eps=1e-08,
            weight_decay=opt.weight_decay
)
else:
raise ValueError(opt.optimizer, " doesn't exist.")
return optimizer
# TODO: Developing
def logMsg(epoch, iteration, train_loader, perceptual, trainloss, perceloss):
msg = "===> [Epoch {}] [{:4d}/{:4d}] ImgLoss: (Mean: {:.6f}, Std: {:.6f})".format(
epoch, iteration, len(train_loader), np.mean(trainloss), np.std(trainloss)
)
if not perceptual is None:
msg = "\t".join([msg, "PerceptualLoss: (Mean: {:.6f}, Std: {:.6f})".format(np.mean(perceloss), np.std(perceloss))])
return msg
def getFigureSpec(iteration: int, perceptual: bool):
"""
Get 2x2 Figure And Axis
Parameters
----------
    iteration : int
        The number of iterations per epoch, shown in the x-axis label.
    perceptual : bool
        If True, add twin axes for the perceptual loss.
Return
------
fig, axis : matplotlib.figure.Figure, matplotlib.axes.Axes
The plotting instance.
"""
fig, grids = plt.figure(figsize=(19.2, 10.8)), gridspec.GridSpec(2, 2)
axis = [ fig.add_subplot(gs) for gs in grids ]
for ax in axis:
ax.set_xlabel("Epoch(s) / Iteration: {}".format(iteration))
# Linear scale of Loss
axis[0].set_ylabel("Image Loss")
axis[0].set_title("Loss")
# Log scale of Loss
axis[1].set_yscale("log")
axis[1].set_ylabel("Image Loss")
axis[1].set_title("Loss (Log scale)")
# PSNR
axis[2].set_title("Average PSNR")
# Learning Rate
axis[3].set_yscale('log')
axis[3].set_title("Learning Rate")
# Add TwinScale for Perceptual Loss
if perceptual:
axis.append( axis[0].twinx() )
axis[4].set_ylabel("Perceptual Loss")
axis.append( axis[1].twinx() )
axis[5].set_ylabel("Perceptual Loss")
return fig, axis
def getPerceptualModel(model):
"""
Return the Perceptual Model
Parameters
----------
model : str
The name of the perceptual Model.
Return
------
perceptual : {nn.Module, None}
Not None if the perceptual model is supported.
"""
perceptual = None
if opt.perceptual == 'vgg16':
print("==========> Using VGG16 as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.vgg16(pretrained=True),
lossnet.VGG16_Layer
)
if opt.perceptual == 'vgg16_bn':
print("==========> Using VGG16 with Batch Normalization as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.vgg16_bn(pretrained=True),
lossnet.VGG16_bn_Layer
)
if opt.perceptual == 'vgg19':
print("==========> Using VGG19 as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.vgg19(pretrained=True),
lossnet.VGG19_Layer
)
if opt.perceptual == 'vgg19_bn':
print("==========> Using VGG19 with Batch Normalization as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.vgg19_bn(pretrained=True),
lossnet.VGG19_bn_Layer
)
if opt.perceptual == "resnet18":
print("==========> Using Resnet18 as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.resnet18(pretrained=True),
lossnet.Resnet18_Layer
)
if opt.perceptual == "resnet34":
print("==========> Using Resnet34 as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.resnet34(pretrained=True),
lossnet.Resnet34_Layer
)
if opt.perceptual == "resnet50":
print("==========> Using Resnet50 as Perceptual Loss Model")
perceptual = LossNetwork(
torchvision.models.resnet50(pertrained=True),
lossnet.Resnet50_Layer
)
return perceptual
# TODO: Developing
def getTrainSpec(opt):
"""
Initialize the objects needs at Training.
Parameters
----------
opt : namespace
(...)
Return
------
model
optimizer
criterion
perceptual
train_loader, val_loader
scheduler
epoch,
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter
iterations, opt,
name,
fig,
axis,
saveCheckpoint
"""
if opt.fixrandomseed:
seed = 1334
torch.manual_seed(seed)
if opt.cuda: torch.cuda.manual_seed(seed)
print("==========> Loading datasets")
img_transform = Compose([ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) if opt.normalize else ToTensor()
# Dataset
train_loader, val_loader = getDataset(opt, img_transform)
# TODO: Parameters Selection
# TODO: Mean shift Layer Handling
# Load Model
print("==========> Building model")
model = ImproveNet(opt.rb)
# ----------------------------------------------- #
# Loss: L1 Norm / L2 Norm #
# Perceptual Model (Optional) #
# TODO Append Layer (Optional) #
# ----------------------------------------------- #
criterion = nn.MSELoss(reduction='mean')
perceptual = None if (opt.perceptual is None) else getPerceptualModel(opt.perceptual).eval()
# ----------------------------------------------- #
# Optimizer and learning rate scheduler #
# ----------------------------------------------- #
print("==========> Setting Optimizer: {}".format(opt.optimizer))
optimizer = getOptimizer(model, opt)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=opt.milestones, gamma=opt.gamma)
# ----------------------------------------------- #
# Option: resume training process from checkpoint #
# ----------------------------------------------- #
if opt.resume:
if os.path.isfile(opt.resume):
print("=> loading checkpoint '{}'".format(opt.resume))
model, optimizer, _, _, scheduler = utils.loadCheckpoint(opt.resume, model, optimizer, scheduler)
else:
raise Exception("=> no checkpoint found at '{}'".format(opt.resume))
# ----------------------------------------------- #
# Option: load weights from a pretrain network #
# ----------------------------------------------- #
if opt.pretrained:
if os.path.isfile(opt.pretrained):
print("=> loading pretrained model '{}'".format(opt.pretrained))
model = utils.loadModel(opt.pretrained, model, True)
else:
raise Exception("=> no pretrained model found at '{}'".format(opt.pretrained))
# Select training device
if opt.cuda:
print("==========> Setting GPU")
model = nn.DataParallel(model, device_ids=[i for i in range(opt.gpus)]).cuda()
criterion = criterion.cuda()
if perceptual is not None: perceptual = perceptual.cuda()
else:
print("==========> Setting CPU")
model = model.cpu()
criterion = criterion.cpu()
if perceptual is not None: perceptual = perceptual.cpu()
# Create container
length = opt.epochs * len(train_loader) // opt.val_interval
loss_iter = np.empty(length, dtype=float)
perc_iter = np.empty(length, dtype=float)
psnr_iter = np.empty(length, dtype=float)
ssim_iter = np.empty(length, dtype=float)
mse_iter = np.empty(length, dtype=float)
lr_iter = np.empty(length, dtype=float)
iterations = np.empty(length, dtype=float)
loss_iter[:] = np.nan
perc_iter[:] = np.nan
psnr_iter[:] = np.nan
ssim_iter[:] = np.nan
mse_iter[:] = np.nan
lr_iter[:] = np.nan
iterations[:] = np.nan
# Set plotter to plot the loss curves
twinx = (opt.perceptual is not None)
fig, axis = getFigureSpec(len(train_loader), twinx)
# Set Model Saving Function
if opt.save_item == "model":
print("==========> Save Function: saveModel()")
saveCheckpoint = utils.saveModel
elif opt.save_item == "checkpoint":
print("==========> Save Function: saveCheckpoint()")
saveCheckpoint = utils.saveCheckpoint
else:
raise ValueError("Save Checkpoint Function Error")
return (
model, optimizer, criterion, perceptual, train_loader, val_loader, scheduler, epoch,
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter, iterations, opt,
name, fig, axis, saveCheckpoint
)
def main(opt):
"""
Main process of train.py
Parameters
----------
opt : namespace
The option (hyperparameters) of these model
"""
if opt.fixrandomseed:
seed = 1334
torch.manual_seed(seed)
if opt.cuda:
torch.cuda.manual_seed(seed)
print("==========> Loading datasets")
img_transform = Compose([ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) if opt.normalize else ToTensor()
# Dataset
train_loader, val_loader = getDataset(opt, img_transform)
# TODO: Parameters Selection
# TODO: Mean shift Layer Handling
# Load Model
print("==========> Building model")
model = ImproveNet(opt.rb)
# ----------------------------------------------- #
# Loss: L1 Norm / L2 Norm #
# Perceptual Model (Optional) #
# TODO Append Layer (Optional) #
# ----------------------------------------------- #
criterion = nn.MSELoss(reduction='mean')
perceptual = None if (opt.perceptual is None) else getPerceptualModel(opt.perceptual).eval()
# ----------------------------------------------- #
# Optimizer and learning rate scheduler #
# ----------------------------------------------- #
print("==========> Setting Optimizer: {}".format(opt.optimizer))
optimizer = getOptimizer(model, opt)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=opt.milestones, gamma=opt.gamma)
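    # MultiStepLR decays the learning rate by opt.gamma at each milestone epoch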
# ----------------------------------------------- #
# Option: resume training process from checkpoint #
# ----------------------------------------------- #
if opt.resume:
if os.path.isfile(opt.resume):
print("=> loading checkpoint '{}'".format(opt.resume))
model, optimizer, _, _, scheduler = utils.loadCheckpoint(opt.resume, model, optimizer, scheduler)
else:
raise Exception("=> no checkpoint found at '{}'".format(opt.resume))
# ----------------------------------------------- #
# Option: load weights from a pretrain network #
# ----------------------------------------------- #
if opt.pretrained:
if os.path.isfile(opt.pretrained):
print("=> loading pretrained model '{}'".format(opt.pretrained))
model = utils.loadModel(opt.pretrained, model, True)
else:
raise Exception("=> no pretrained model found at '{}'".format(opt.pretrained))
# Select training device
if opt.cuda:
print("==========> Setting GPU")
model = nn.DataParallel(model, device_ids=[i for i in range(opt.gpus)]).cuda()
criterion = criterion.cuda()
if perceptual is not None:
perceptual = perceptual.cuda()
else:
print("==========> Setting CPU")
model = model.cpu()
criterion = criterion.cpu()
if perceptual is not None:
perceptual = perceptual.cpu()
# Create container
length = opt.epochs * len(train_loader) // opt.val_interval
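    # One record slot per validation step over the whole training run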
loss_iter = np.empty(length, dtype=float)
perc_iter = np.empty(length, dtype=float)
psnr_iter = np.empty(length, dtype=float)
ssim_iter = np.empty(length, dtype=float)
mse_iter = np.empty(length, dtype=float)
lr_iter = np.empty(length, dtype=float)
iterations = np.empty(length, dtype=float)
loss_iter[:] = np.nan
perc_iter[:] = np.nan
psnr_iter[:] = np.nan
ssim_iter[:] = np.nan
mse_iter[:] = np.nan
lr_iter[:] = np.nan
iterations[:] = np.nan
# Set plotter to plot the loss curves
twinx = (opt.perceptual is not None)
fig, axis = getFigureSpec(len(train_loader), twinx)
# Set Model Saving Function
if opt.save_item == "model":
print("==========> Save Function: saveModel()")
saveCheckpoint = utils.saveModel
elif opt.save_item == "checkpoint":
print("==========> Save Function: saveCheckpoint()")
saveCheckpoint = utils.saveCheckpoint
else:
raise ValueError("Save Checkpoint Function Error")
# Start Training
print("==========> Training")
for epoch in range(opt.starts, opt.epochs + 1):
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter, iterations, _, _ = train(
model, optimizer, criterion, perceptual, train_loader, val_loader, scheduler, epoch,
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter, iterations,
opt, name, fig, axis, saveCheckpoint
)
scheduler.step()
# Save the last checkpoint for resume training
utils.saveCheckpoint(os.path.join(opt.checkpoints, name, "final.pth"), model, optimizer, scheduler, epoch, len(train_loader))
# TODO: Fine tuning
return
def train(model, optimizer, criterion, perceptual, train_loader, val_loader,
scheduler: optim.lr_scheduler.MultiStepLR, epoch: int, loss_iter,
perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter, iters, opt, name,
fig: matplotlib.figure.Figure, ax: matplotlib.axes.Axes,
saveCheckpoint=utils.saveCheckpoint):
"""
    Main function for training and validation
Parameters
----------
model, optimizer, criterion : nn.Module, optim.Optimizer, nn.Module
The main elements of the Neural Network
perceptual : {nn.Module, None} optional
Pass None or a pretrained Neural Network to calculate perceptual loss
train_loader, val_loader : DataLoader
The training and validation dataset
scheduler : optim.lr_scheduler.MultiStepLR
Learning rate scheduler
epoch : int
        The current training epoch
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, iters : 1D-Array like
        The containers used to record training and validation performance
opt : namespace
The training option
name : str
        Tag used for the checkpoint and logging sub-directories of this run
fig, ax : matplotlib.figure.Figure, matplotlib.axes.Axes
        Figure and axes that the training curves are drawn on
saveCheckpoint : callable
        Function used to save the model or the full training checkpoint
"""
trainloss, perceloss = [], []
for iteration, (data, label) in enumerate(train_loader, 1):
steps = len(train_loader) * (epoch - 1) + iteration
model.train()
# ----------------------------------------------------- #
# Handling: #
# 1. Perceptual Loss #
# 2. Multiscaling #
# 2.0 Without Multiscaling (multiscaling = [1.0]) #
# 2.1 Regular Multiscaling #
# 2.2 Random Multiscaling #
# ----------------------------------------------------- #
# 2.0 Without Multiscaling
if opt.multiscale == [1.0]:
optimizer.zero_grad()
data, label = data.to(device), label.to(device)
output = model(data)
# Calculate loss
image_loss = criterion(output, label)
if perceptual is not None: perceptual_loss = perceptual(output, label)
# Backpropagation
            loss = image_loss if (perceptual is None) else image_loss + opt.perceptual_weight * perceptual_loss
loss.backward()
optimizer.step()
# Record the training loss
trainloss.append(image_loss.item())
if perceptual is not None: perceloss.append(perceptual_loss.item())
# TODO: Efficient Issue
# TODO: Resizing Loss
# 2.1 Regular Multiscaling
elif not opt.multiscaleShuffle:
data, label = data.to(device), label.to(device)
            originHeight, originWidth = data.shape[-2:]  # NCHW batches: the last two dims are (H, W)
for scale in opt.multiscale:
optimizer.zero_grad()
if scale != 1.0:
                    newSize = (int(originHeight * scale), int(originWidth * scale))  # Resize expects (h, w)
data, label = Resize(size=newSize)(data), Resize(size=newSize)(label)
output = model(data)
# Calculate loss
image_loss = criterion(output, label)
if perceptual is not None: perceptual_loss = perceptual(output, label)
# Backpropagation
                loss = image_loss if (perceptual is None) else image_loss + opt.perceptual_weight * perceptual_loss
loss.backward()
optimizer.step()
# Record the training loss
trainloss.append(image_loss.item())
if perceptual is not None: perceloss.append(perceptual_loss.item())
# TODO: Check Usage
# 2.2 Random Multiscaling
else:
optimizer.zero_grad()
data, label = data.to(device), label.to(device)
            originHeight, originWidth = data.shape[-2:]  # NCHW batches: the last two dims are (H, W)
            scale = float(np.random.choice(opt.multiscale))  # draw one scalar scale per batch
if scale != 1.0:
                newSize = (int(originHeight * scale), int(originWidth * scale))  # Resize expects (h, w)
data, label = Resize(size=newSize)(data), Resize(size=newSize)(label)
output = model(data)
# Calculate loss
image_loss = criterion(output, label)
if perceptual is not None: perceptual_loss = perceptual(output, label)
# Backpropagation
            loss = image_loss if (perceptual is None) else image_loss + opt.perceptual_weight * perceptual_loss
loss.backward()
optimizer.step()
# Record the training loss
trainloss.append(image_loss.item())
if perceptual is not None: perceloss.append(perceptual_loss.item())
# ----------------------------------------------------- #
# Execute for a period #
# 1. Print the training message #
# 2. Plot the gradient of each layer (Deprecated) #
# 3. Validate the model #
# 4. Saving the network #
# ----------------------------------------------------- #
# 1. Print the training message
if steps % opt.log_interval == 0:
msg = "===> [Epoch {}] [{:4d}/{:4d}] ImgLoss: (Mean: {:.6f}, Std: {:.6f})".format(
epoch, iteration, len(train_loader), np.mean(trainloss), np.std(trainloss)
)
            if perceptual is not None:
msg = "\t".join([msg, "PerceptualLoss: (Mean: {:.6f}, Std: {:.6f})".format(np.mean(perceloss), np.std(perceloss))])
print(msg)
# 2. Print the gradient statistic message for each layer
# graphs.draw_gradient()
# 3. Save the model
if steps % opt.save_interval == 0:
checkpoint_path = os.path.join(opt.checkpoints, name, "{}.pth".format(steps))
saveCheckpoint(checkpoint_path, model, optimizer, scheduler, epoch, iteration)
# 4. Validating the network
if steps % opt.val_interval == 0:
mse, psnr = validate(model, val_loader, criterion, epoch, iteration, normalize=opt.normalize)
idx = steps // opt.val_interval - 1
loss_iter[idx] = np.mean(trainloss)
mse_iter[idx] = mse
psnr_iter[idx] = psnr
lr_iter[idx] = optimizer.param_groups[0]["lr"]
iters[idx] = steps / len(train_loader)
if perceptual is not None: perc_iter[idx] = np.mean(perceloss)
# Clean up the list
            trainloss, perceloss = [], []
# Save the loss
df = pd.DataFrame(data={
'Iterations': iters * len(train_loader),
'TrainL2Loss': loss_iter,
'TrainPerceptual': perc_iter,
'ValidationLoss': mse_iter,
'ValidationPSNR': psnr_iter
})
            # Prepend the five best-PSNR rows, then write the full history to Excel
            df = pd.concat([df.nlargest(5, 'ValidationPSNR'), df])  # DataFrame.append() was removed in pandas 2.0
df.to_excel(os.path.join(opt.detail, name, "statistical.xlsx"))
# Show images in grid with validation set
# graphs.grid_show()
# Plot TrainLoss, ValidationLoss
fig, ax = training_curve(
loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, iters, lr_iter,
epoch, len(train_loader), fig, ax
)
plt.tight_layout()
plt.savefig(os.path.join(opt.detail, name, "loss.png"))
return loss_iter, perc_iter, mse_iter, psnr_iter, ssim_iter, lr_iter, iters, fig, ax
def training_curve(train_loss, perc_iter, val_loss, psnr, ssim, x, lr, epoch, iters_per_epoch,
fig: matplotlib.figure.Figure, axis: matplotlib.axes.Axes, linewidth=0.25):
"""
Plot out learning rate, training loss, validation loss and PSNR.
Parameters
----------
    train_loss, perc_iter, val_loss, psnr, ssim, lr, x : 1D-array like
        The recorded training curves and the x positions (in epochs) at which to plot them
iters_per_epoch : int
        Number of iterations per epoch (shown in the axis labels)
fig, axis : matplotlib.figure.Figure, matplotlib.axes.Axes
Matplotlib plotting object.
linewidth : float
Default linewidth
Return
------
fig, axis : matplotlib.figure.Figure, matplotlib.axes.Axes
The training curve
"""
# Linear scale of loss curve
ax = axis[0]
ax.clear()
line1, = ax.plot(x, val_loss, label="Validation Loss", color='red', linewidth=linewidth)
line2, = ax.plot(x, train_loss, label="Train Loss", color='blue', linewidth=linewidth)
ax.plot(x, np.repeat(np.amin(val_loss), len(x)), linestyle=':', linewidth=linewidth)
ax.set_xlabel("Epoch(s) / Iteration: {}".format(iters_per_epoch))
ax.set_ylabel("Image Loss")
ax.set_title("Loss")
    if not np.isnan(perc_iter).all():
        ax = axis[4]
        ax.clear()
        line4, = ax.plot(x, perc_iter, label="Perceptual Loss", color='green', linewidth=linewidth)
        ax.set_ylabel("Perceptual Loss")
        ax.legend(handles=(line1, line2, line4))
    else:
        ax.legend(handles=(line1, line2))
# Log scale of loss curve
ax = axis[1]
ax.clear()
line1, = ax.plot(x, val_loss, label="Validation Loss", color='red', linewidth=linewidth)
line2, = ax.plot(x, train_loss, label="Train Loss", color='blue', linewidth=linewidth)
ax.plot(x, np.repeat(np.amin(val_loss), len(x)), linestyle=':', linewidth=linewidth)
ax.set_xlabel("Epoch(s) / Iteration: {}".format(iters_per_epoch))
ax.set_yscale('log')
ax.set_title("Loss(Log scale)")
    if not np.isnan(perc_iter).all():
        ax = axis[5]
        ax.clear()
        line4, = ax.plot(x, perc_iter, label="Perceptual Loss", color='green', linewidth=linewidth)
        ax.set_ylabel("Perceptual Loss")
        ax.legend(handles=(line1, line2, line4))
    else:
        ax.legend(handles=(line1, line2))
# Linear scale of PSNR, SSIM
ax = axis[2]
ax.clear()
line1, = ax.plot(x, psnr, label="PSNR", color='blue', linewidth=linewidth)
ax.plot(x, np.repeat(np.amax(psnr), len(x)), linestyle=':', linewidth=linewidth)
    ax.set_xlabel("Epoch(s) / Iteration: {}".format(iters_per_epoch))
ax.set_ylabel("Average PSNR")
ax.set_title("Validation Performance")
ax.legend(handles=(line1, ))
# Learning Rate Curve
ax = axis[3]
ax.clear()
line1, = ax.plot(x, lr, label="Learning Rate", color='cyan', linewidth=linewidth)
    ax.set_xlabel("Epoch(s) / Iteration: {}".format(iters_per_epoch))
ax.set_title("Learning Rate")
ax.set_yscale('log')
ax.legend(handles=(line1, ))
return fig, axis
def validate(model: nn.Module, loader: DataLoader, criterion: nn.Module, epoch, iteration, normalize=False):
"""
Validate the model
Parameters
----------
model : nn.Module
        The neural network to validate
loader : torch.utils.data.DataLoader
        The validation data
epoch : int
The training epoch
criterion : nn.Module
Loss function
normalize : bool
        If True, undo the input normalization before computing MSE/PSNR.
Return
------
    mse, psnr : float
np.mean(mse) and np.mean(psnr)
"""
psnrs, mses = [], []
model.eval()
with torch.no_grad():
for index, (data, label) in enumerate(loader, 1):
data, label = data.to(device), label.to(device)
            output = model(data)
            if normalize:
                # Undo the input normalization so MSE/PSNR are measured on [0, 1] images
                data = data * std[:, None, None] + mean[:, None, None]
                label = label * std[:, None, None] + mean[:, None, None]
                output = output * std[:, None, None] + mean[:, None, None]
            mse = criterion(output, label).item()
            # PSNR = 10 * log10(MAX^2 / MSE) with MAX = 1.0 for [0, 1] images
            psnr = 10 * np.log10(1.0 / mse)
            mses.append(mse)
            psnrs.append(psnr)
print("===> [Epoch {}] [ Vaild ] MSE: {:.6f}, PSNR: {:.4f}".format(epoch, np.mean(mses), np.mean(psnrs)))
return np.mean(mses), np.mean(psnrs)
if __name__ == "__main__":
# Clean up OS screen
os.system('clear')
# Cmd Parser
parser = cmdparser.parser
opt = parser.parse_args()
# Check arguments
if opt.cuda and not torch.cuda.is_available():
raise Exception("No GPU found, please run without --cuda")
if opt.resume and opt.pretrained:
        raise ValueError("opt.resume and opt.pretrained must not be set at the same time.")
if opt.resume and (not os.path.isfile(opt.resume)):
        raise ValueError("{} doesn't exist".format(opt.resume))
if opt.pretrained and (not os.path.isfile(opt.pretrained)):
        raise ValueError("{} doesn't exist".format(opt.pretrained))
# Check training dataset directory
for path in opt.train:
if not os.path.exists(path):
raise ValueError("{} doesn't exist".format(path))
# Check validation dataset directory
for path in opt.val:
if not os.path.exists(path):
raise ValueError("{} doesn't exist".format(path))
# Make checkpoint storage directory
name = "{}_{}".format(opt.tag, date.today().strftime("%Y%m%d"))
os.makedirs(os.path.join(opt.checkpoints, name), exist_ok=True)
# Copy the code of model to logging file
if os.path.exists(os.path.join(opt.detail, name, 'model')):
shutil.rmtree(os.path.join(opt.detail, name, 'model'))
if os.path.exists(os.path.join(opt.checkpoints, name, 'model')):
shutil.rmtree(os.path.join(opt.checkpoints, name, 'model'))
shutil.copytree('./model', os.path.join(opt.detail, name, 'model'))
shutil.copytree('./model', os.path.join(opt.checkpoints, name, 'model'))
shutil.copyfile(__file__, os.path.join(opt.detail, name, os.path.basename(__file__)))
# Show Detail
print('==========> Training setting')
utils.details(opt, os.path.join(opt.detail, name, 'args.txt'))
# Execute main process
main(opt)
| 33.648205 | 140 | 0.572073 | 3,648 | 32,807 | 5.055921 | 0.124726 | 0.007862 | 0.010627 | 0.013663 | 0.645088 | 0.6099 | 0.58111 | 0.578562 | 0.548146 | 0.532422 | 0 | 0.013145 | 0.278843 | 32,807 | 974 | 141 | 33.682752 | 0.766431 | 0.113787 | 0 | 0.541744 | 0 | 0.003711 | 0.097176 | 0 | 0 | 0 | 0 | 0.005133 | 0 | 0 | null | null | 0 | 0.050093 | null | null | 0.053803 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02fe97635bdf12eb93fa73109a7854ea036f69bf | 546 | py | Python | python_high/chapter_3/3.1.py | Rolling-meatballs/deepshare | 47c1e599c915ccd0a123fa9ab26e1f20738252ef | [
"MIT"
] | null | null | null | python_high/chapter_3/3.1.py | Rolling-meatballs/deepshare | 47c1e599c915ccd0a123fa9ab26e1f20738252ef | [
"MIT"
] | null | null | null | python_high/chapter_3/3.1.py | Rolling-meatballs/deepshare | 47c1e599c915ccd0a123fa9ab26e1f20738252ef | [
"MIT"
] | null | null | null | name = " alberT"
one = name.rsplit()
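# rsplit() with no arguments splits on whitespace, so " alberT" yields ['alberT']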
print("one:", one)
two = name.index('al', 0)
print("two:", two)
three = name.index('T', -1)
print("three:", three)
four = name.replace('l', 'p')
print("four:", four)
five = name.split('l')
print("five:", five)
six = name.upper()
print("six:", six)
seven = name.lower()
print("seven:", seven)
eight = name[1]
print("eight:", eight )
nine = name[:3]
print("nine:", nine)
ten = name[-2:]
print("ten:", ten)
eleven = name.index("e")
print("eleven:", eleven)
twelve = name[:-1]
print("twelve:", twelve) | 14.756757 | 29 | 0.598901 | 82 | 546 | 3.987805 | 0.365854 | 0.082569 | 0.061162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012848 | 0.144689 | 546 | 37 | 30 | 14.756757 | 0.687366 | 0 | 0 | 0 | 0 | 0 | 0.140768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.48 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
02feb42fde4ca975bc72c9c78d9e0931c5f1d4a2 | 384 | py | Python | src/views/simplepage/models.py | svenvandescheur/svenv.nl-new | c448714853d96ad31d26c825d8b35c4890be40a1 | [
"MIT"
] | null | null | null | src/views/simplepage/models.py | svenvandescheur/svenv.nl-new | c448714853d96ad31d26c825d8b35c4890be40a1 | [
"MIT"
] | null | null | null | src/views/simplepage/models.py | svenvandescheur/svenv.nl-new | c448714853d96ad31d26c825d8b35c4890be40a1 | [
"MIT"
] | null | null | null | from cms.extensions import PageExtension
from cms.extensions.extension_pool import extension_pool
from django.utils.translation import ugettext as _
from filer.fields.image import FilerImageField
class SimplePageExtension(PageExtension):
"""
A generic website page.
"""
image = FilerImageField(verbose_name=_("image"))
extension_pool.register(SimplePageExtension)
| 25.6 | 56 | 0.796875 | 42 | 384 | 7.142857 | 0.595238 | 0.13 | 0.113333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130208 | 384 | 14 | 57 | 27.428571 | 0.898204 | 0.059896 | 0 | 0 | 0 | 0 | 0.014493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
f302cba30df57e2c4fa0a9201628774e666043a8 | 3,021 | py | Python | Ideas/cricket-umpire-assistance-master/visualization/test2.py | hsspratt/Nott-Hawkeye1 | 178f4f0fef62e8699f6057d9d50adfd61a851047 | [
"MIT"
] | null | null | null | Ideas/cricket-umpire-assistance-master/visualization/test2.py | hsspratt/Nott-Hawkeye1 | 178f4f0fef62e8699f6057d9d50adfd61a851047 | [
"MIT"
] | 1 | 2021-11-11T22:15:36.000Z | 2021-11-11T22:15:36.000Z | Ideas/cricket-umpire-assistance-master/visualization/test2.py | hsspratt/Nott-Hawkeye1 | 178f4f0fef62e8699f6057d9d50adfd61a851047 | [
"MIT"
] | null | null | null | ### INITIALIZE VPYTHON
# -----------------------------------------------------------------------
from __future__ import division
from visual import *
from physutil import *
from visual.graph import *
### SETUP ELEMENTS FOR GRAPHING, SIMULATION, VISUALIZATION, TIMING
# ------------------------------------------------------------------------
# Set window title
scene.title = "Projectile Motion Particle Model"
# Make scene background black
scene.background = color.black
# Define scene objects (units are in meters)
field = box(pos = vector(0, 0, 0), size = (300, 10, 100), color = color.green, opacity = 0.3)
ball = sphere(radius = 5, color = color.blue)
# Define axis marks the field with a specified number of tick marks
xaxis = PhysAxis(field, 10) # 10 tick marks
yaxis = PhysAxis(field, 5, # 5 tick marks
axisType = "y",
labelOrientation = "left",
startPos = vector(-150, 0, 0), # start the y axis at the left edge of the scene
length = 100) # units are in meters
# Set up graph with two plots
posgraph = PhysGraph(2)
# Set up trail to mark the ball's trajectory
trail = curve(color = color.yellow, radius = 1) # units are in meters
# Set up motion map for ball
motionMap = MotionMap(ball, 8.163, # expected end time in seconds
10, # number of markers to draw
labelMarkerOffset = vector(0, -20, 0),
dropTime = False)
# Set timer in top right of screen
timerDisplay = PhysTimer(140, 150) # timer position (units are in meters)
### SETUP PARAMETERS AND INITIAL CONDITIONS
# ----------------------------------------------------------------------------------------
# Define parameters
ball.m = 0.6 # mass of ball in kg
ball.pos = vector(-150, 0, 0) # initial position of the ball in(x, y, z) form, units are in meters
ball.v = vector(30, 40, 0) # initial velocity of car in (vx, vy, vz) form, units are m/s
g = vector(0, -9.8, 0) # acceleration due to gravity; units are m/s/s
# Define time parameters
t = 0 # starting time
deltat = 0.001 # time step units are s
### CALCULATION LOOP; perform physics updates and drawing
# ------------------------------------------------------------------------------------
while ball.pos.y >= 0 : #while the ball's y-position is greater than 0 (above the ground)
# Required to make animation visible / refresh smoothly (keeps program from running faster
# than 1000 frames/s)
rate(1000)
# Compute Net Force
Fnet = ball.m * g
# Newton's 2nd Law
ball.v = ball.v + (Fnet/ball.m * deltat)
# Position update
ball.pos = ball.pos + ball.v * deltat
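    # Updating the velocity first and then using the new velocity for the
    # position makes this the semi-implicit (Euler-Cromer) scheme rather than
    # plain explicit Euler, which drifts more at a given step size.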
# Update motion map, graph, timer, and trail
motionMap.update(t, ball.v)
posgraph.plot(t, ball.pos.x, ball.pos.y) # plot x and y position vs. time
trail.append(pos = ball.pos)
timerDisplay.update(t)
# Time update
t = t + deltat
### OUTPUT
# --------------------------------------------------------------------------------------
# Print the final time and the ball's final position
print t
print ball.pos | 32.138298 | 98 | 0.589209 | 411 | 3,021 | 4.321168 | 0.428224 | 0.036036 | 0.028153 | 0.045045 | 0.023649 | 0.023649 | 0 | 0 | 0 | 0 | 0 | 0.031967 | 0.19232 | 3,021 | 94 | 99 | 32.138298 | 0.695902 | 0.567693 | 0 | 0 | 0 | 0 | 0.029553 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.102564 | null | null | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3041c623ca233066149adf01d25baef21dbb909 | 727 | py | Python | parking_systems/models.py | InaraShalfei/parking_system | f1b326f12037808ab80e3b1d6b305235ba59a0db | [
"MIT"
] | null | null | null | parking_systems/models.py | InaraShalfei/parking_system | f1b326f12037808ab80e3b1d6b305235ba59a0db | [
"MIT"
] | null | null | null | parking_systems/models.py | InaraShalfei/parking_system | f1b326f12037808ab80e3b1d6b305235ba59a0db | [
"MIT"
] | null | null | null | from django.db import models
class Parking(models.Model):
def __str__(self):
return f'Парковочное место №{self.id}'
class Reservation(models.Model):
parking_space = models.ForeignKey(Parking, on_delete=models.CASCADE, related_name='reservations',
verbose_name='Номер парковочного места')
start_time = models.DateTimeField(verbose_name='Время начала брони')
finish_time = models.DateTimeField(verbose_name='Время окончания брони')
class Meta:
ordering = ['-start_time']
def __str__(self):
        fmt = "%d.%m.%y %H:%M"  # avoid shadowing the built-in format()
        return f'Бронирование №{self.id} (c {self.start_time.strftime(fmt)} по {self.finish_time.strftime(fmt)})'
| 33.045455 | 119 | 0.671252 | 90 | 727 | 5.255556 | 0.544444 | 0.069767 | 0.042283 | 0.12685 | 0.164905 | 0.164905 | 0 | 0 | 0 | 0 | 0 | 0 | 0.207703 | 727 | 21 | 120 | 34.619048 | 0.814236 | 0 | 0 | 0.142857 | 0 | 0.071429 | 0.314993 | 0.096286 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0.071429 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f30640fd7966c16ad8a70aa7a32537803f35f977 | 3,172 | py | Python | src/dummy/toga_dummy/widgets/canvas.py | Donyme/toga | 2647c7dc5db248025847e3a60b115ff51d4a0d4a | [
"BSD-3-Clause"
] | null | null | null | src/dummy/toga_dummy/widgets/canvas.py | Donyme/toga | 2647c7dc5db248025847e3a60b115ff51d4a0d4a | [
"BSD-3-Clause"
] | null | null | null | src/dummy/toga_dummy/widgets/canvas.py | Donyme/toga | 2647c7dc5db248025847e3a60b115ff51d4a0d4a | [
"BSD-3-Clause"
] | null | null | null | import re
from .base import Widget
class Canvas(Widget):
def create(self):
self._action('create Canvas')
def set_on_draw(self, handler):
self._set_value('on_draw', handler)
def set_context(self, context):
self._set_value('context', context)
def line_width(self, width=2.0):
self._set_value('line_width', width)
def fill_style(self, color=None):
if color is not None:
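            # Parse CSS-style colors, e.g. 'rgba(255, 0, 0, 1.0)' yields the
            # groups ('255', '0', '0', '1.0')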
            num = re.search(r'^rgba\((\d*\.?\d*), (\d*\.?\d*), (\d*\.?\d*), (\d*\.?\d*)\)$', color)
if num is not None:
r = num.group(1)
g = num.group(2)
b = num.group(3)
a = num.group(4)
                rgba = ', '.join((r, g, b, a))
self._set_value('fill_style', rgba)
else:
pass
# Support future colosseum versions
# for named_color, rgb in colors.NAMED_COLOR.items():
# if named_color == color:
# exec('self._set_value('fill_style', color)
else:
# set color to black
self._set_value('fill_style', '0, 0, 0, 1')
def stroke_style(self, color=None):
self.fill_style(color)
def close_path(self):
self._action('close path')
def closed_path(self, x, y):
self._action('closed path', x=x, y=y)
def move_to(self, x, y):
self._action('move to', x=x, y=y)
def line_to(self, x, y):
self._action('line to', x=x, y=y)
def bezier_curve_to(self, cp1x, cp1y, cp2x, cp2y, x, y):
self._action('bezier curve to', cp1x=cp1x, cp1y=cp1y, cp2x=cp2x, cp2y=cp2y, x=x, y=y)
def quadratic_curve_to(self, cpx, cpy, x, y):
self._action('quadratic curve to', cpx=cpx, cpy=cpy, x=x, y=y)
def arc(self, x, y, radius, startangle, endangle, anticlockwise):
self._action('arc', x=x, y=y, radius=radius, startangle=startangle, endangle=endangle, anticlockwise=anticlockwise)
def ellipse(self, x, y, radiusx, radiusy, rotation, startangle, endangle, anticlockwise):
self._action('ellipse', x=x, y=y, radiusx=radiusx, radiusy=radiusy, rotation=rotation, startangle=startangle, endangle=endangle, anticlockwise=anticlockwise)
def rect(self, x, y, width, height):
self._action('rect', x=x, y=y, width=width, height=height)
# Drawing Paths
def fill(self, fill_rule, preserve):
self._set_value('fill rule', fill_rule)
if preserve:
self._action('fill preserve')
else:
self._action('fill')
def stroke(self):
self._action('stroke')
# Transformations
def rotate(self, radians):
self._action('rotate', radians=radians)
def scale(self, sx, sy):
self._action('scale', sx=sx, sy=sy)
def translate(self, tx, ty):
self._action('translate', tx=tx, ty=ty)
def reset_transform(self):
self._action('reset transform')
def write_text(self, text, x, y, font):
self._action('write text', text=text, x=x, y=y, font=font)
def rehint(self):
self._action('rehint Canvas')
| 31.72 | 165 | 0.573455 | 430 | 3,172 | 4.090698 | 0.237209 | 0.108016 | 0.01535 | 0.020466 | 0.212621 | 0.109153 | 0.078454 | 0 | 0 | 0 | 0 | 0.00962 | 0.279004 | 3,172 | 99 | 166 | 32.040404 | 0.75951 | 0.067465 | 0 | 0.046154 | 0 | 0.015385 | 0.10339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.353846 | false | 0.015385 | 0.030769 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f309247f76f7d18c28aea4b2f1973377cd29af7f | 5,470 | py | Python | Objected-Oriented Systems/Python_OOP_SDA/Task1.py | syedwaleedhyder/Freelance_Projects | 7e2b85fc968850fc018014667b5ce9af0f00cb09 | [
"MIT"
] | 1 | 2020-08-13T17:26:13.000Z | 2020-08-13T17:26:13.000Z | Objected-Oriented Systems/Python_OOP_SDA/Task1.py | syedwaleedhyder/Freelance_Projects | 7e2b85fc968850fc018014667b5ce9af0f00cb09 | [
"MIT"
] | null | null | null | Objected-Oriented Systems/Python_OOP_SDA/Task1.py | syedwaleedhyder/Freelance_Projects | 7e2b85fc968850fc018014667b5ce9af0f00cb09 | [
"MIT"
] | null | null | null | from abc import ABCMeta, abstractmethod, abstractproperty
from datetime import datetime, date
class Item(metaclass=ABCMeta):
def __init__(self, code, name, quantity, cost, offer):
        self.item_code = code
        self.item_name = name
        self.quantity_on_hand = quantity
        self.cost_price = cost
        self.on_offer = offer
@property
def quantity_on_hand(self): # implements the get - this name is *the* name
return self._quantity_on_hand
#
@quantity_on_hand.setter
def quantity_on_hand(self, value): # name must be the same
self._quantity_on_hand = value
@property
def cost_price(self): # implements the get - this name is *the* name
return self._cost_price
#
@cost_price.setter
def cost_price(self, value): # name must be the same
self._cost_price = value
    def changeOffer(self):
        if self.on_offer == "Yes":
            self.on_offer = "No"
        elif self.on_offer == "No":
            self.on_offer = "Yes"
@abstractmethod
def selling_price(self):
pass
@abstractmethod
def offer_price(self):
pass
@abstractmethod
def profit_margin(self):
pass
@abstractmethod
def discount_rate(self):
pass
def to_string(self):
if(self.on_offer == "Yes"):
offer = "**Offer"
else:
offer = "(No Offer)"
        string = self.item_code + " " + self.item_name + " Available= " + str(self.quantity_on_hand) + " " + offer
return string
class Perishable(Item):
def __init__(self, code, name, quantity, cost, offer, expiry):
Item.__init__(self, code, name, quantity, cost, offer)
self.expiry_date = expiry
def profit_margin(self):
return self.cost_price * 0.25
def selling_price(self):
return self.cost_price + self.profit_margin()
def days_before_expiry(self):
now = datetime.now().date()
days = self.expiry_date- now
return days.days
def discount_rate(self):
days = self.days_before_expiry()
price = self.selling_price()
if(days < 15):
return price * 0.3
elif(days < 30):
return price * 0.2
elif (days > 29):
return price * 0.1
def offer_price(self):
        if self.on_offer == "No":
            return self.selling_price()
return self.selling_price() - self.discount_rate()
def to_string(self):
if(self.on_offer == "Yes"):
offer = "**Offer**"
else:
offer = "(No Offer)"
string = self.item_code + " " + self.item_name + " Available= " + str(self.quantity_on_hand) + " Price: $" + str(self.offer_price()) +" " + offer + " Expiry Date: " + self.expiry_date.strftime('%d %b %Y') + " Perishable Item"
return string
class NonPerishable(Item):
def __init__(self, code, name, quantity, cost, offer):
Item.__init__(self, code, name, quantity, cost, offer)
def profit_margin(self):
return self.cost_price * 0.3
def selling_price(self):
return self.cost_price + self.profit_margin()
def discount_rate(self):
return self.selling_price() * 0.1
def offer_price(self):
if(self.on_offer == "No"):
return self.selling_price()
return self.selling_price() - self.discount_rate()
def to_string(self):
if(self.on_offer == "Yes"):
offer = "**Offer**"
else:
offer = "(No Offer)"
string = self.item_code + " " + self.item_name + " Available= " + str(self.quantity_on_hand) + " Price: $" + str(self.offer_price()) +" " + offer + " Non Perishable Item"
return string
class Grocer:
def __init__(self):
self.items_list = []
def print_items(self):
for item in self.items_list:
print(item.to_string())
def add_to_list(self, item_to_be_added):
self.items_list.append(item_to_be_added)
return
def update_quantity_on_hand(self, item_code, new_quantity):
if(new_quantity < 0):
print("Quantity cannot be zero. Failed to update.")
return False
for item in self.items_list:
if(item.item_code == item_code):
item.quantity_on_hand = new_quantity
return True
perishable = Perishable("P101", "Real Raisins", 10, 2, "Yes", date(2018,12, 10))
non_perishable = NonPerishable("NP210", "Tan Baking Paper", 25, 2, "No")
perishable2 = Perishable("P105", "Eggy Soup Tofu", 14, 1.85, "Yes", date(2018,11, 26))
grocer = Grocer()
grocer.add_to_list(perishable)
grocer.add_to_list(non_perishable)
grocer.add_to_list(perishable2)
grocer.print_items()
grocer.update_quantity_on_hand("P105", 10)
print()
grocer.print_items()
####################################################################
#DISCUSSION
"""
Single Responsibility Principle:
1) In the Perishable class.
2) In the NonPerishable class.
Open Closed Principle
1) Abstract class Item is open to be extended
2) Abstract class Item is closed for modification
Interface Segregation Principle
1) For using Perishable items, users don't have to know anything about Non-perishable items.
2) For using Non-perishable items, users don't have to know the details of Perishable items.
Hence users are not forced to use methods they don't require.
"""
#################################################################### | 31.988304 | 233 | 0.609506 | 696 | 5,470 | 4.58046 | 0.198276 | 0.0367 | 0.052698 | 0.033877 | 0.461731 | 0.361669 | 0.347867 | 0.347867 | 0.292033 | 0.242472 | 0 | 0.016995 | 0.25777 | 5,470 | 171 | 234 | 31.988304 | 0.768227 | 0.026143 | 0 | 0.436508 | 0 | 0 | 0.064949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.039683 | 0.015873 | 0.055556 | 0.436508 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f30949586393ae32e93e9cb38a2df996aa7486fd | 1,116 | py | Python | compose/production/mongodb_backup/scripts/list_dbs.py | IMTEK-Simulation/mongodb-backup-container-image | b0e04c03cab9321d6b4277ee88412938fec95726 | [
"MIT"
] | null | null | null | compose/production/mongodb_backup/scripts/list_dbs.py | IMTEK-Simulation/mongodb-backup-container-image | b0e04c03cab9321d6b4277ee88412938fec95726 | [
"MIT"
] | null | null | null | compose/production/mongodb_backup/scripts/list_dbs.py | IMTEK-Simulation/mongodb-backup-container-image | b0e04c03cab9321d6b4277ee88412938fec95726 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
host = 'mongodb'
port = 27017
ssl_ca_cert='/run/secrets/rootCA.pem'
ssl_certfile='/run/secrets/tls_cert.pem'
ssl_keyfile='/run/secrets/tls_key.pem'
# don't turn these signal into exceptions, just die.
# necessary for integrating into bash script pipelines seamlessly.
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
# get administrator credentials
with open('/run/secrets/username','r') as f:
username = f.read()
with open('/run/secrets/password','r') as f:
password = f.read()
from pymongo import MongoClient
client = MongoClient(host, port,
ssl=True,
username=username,
password=password,
authSource=username, # assume admin database and admin user share name
ssl_ca_certs=ssl_ca_cert,
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
tlsAllowInvalidHostnames=True)
# Within the container environment, mongod runs on host 'mongodb'.
# That hostname, however, is not mentioned within the host certificate.
dbs = client.list_database_names()
for db in dbs:
print(db)
client.close()
| 27.9 | 74 | 0.750896 | 159 | 1,116 | 5.157233 | 0.54717 | 0.060976 | 0.065854 | 0.043902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00627 | 0.142473 | 1,116 | 39 | 75 | 28.615385 | 0.850575 | 0.31362 | 0 | 0 | 0 | 0 | 0.162055 | 0.150198 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.115385 | 0.076923 | 0 | 0.076923 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f3133d707d13f1d41040304efdb1e48fd46e0e3f | 4,270 | py | Python | src/piminder_service/resources/db_autoinit.py | ZAdamMac/pyminder | 059f57cb7cea4f517f77b1bbf391ce99f25d83bb | [
"MIT"
] | null | null | null | src/piminder_service/resources/db_autoinit.py | ZAdamMac/pyminder | 059f57cb7cea4f517f77b1bbf391ce99f25d83bb | [
"MIT"
] | 3 | 2021-05-05T21:08:24.000Z | 2021-06-23T10:47:40.000Z | src/piminder_service/resources/db_autoinit.py | ZAdamMac/pyminder | 059f57cb7cea4f517f77b1bbf391ce99f25d83bb | [
"MIT"
] | null | null | null | """
This script is a component of Piminder's back-end controller.
Specifically, it is a helper utility to be used to intialize a database for the user and message tables.
Author: Zac Adam-MacEwen (zadammac@kenshosec.com)
An Arcana Labs utility.
Produced under license.
Full license and documentation to be found at:
https://github.com/ZAdamMac/Piminder
"""
import bcrypt
import getpass
import os
import pymysql
__version__ = "1.0.0" # This is the version of service that we can init, NOT the version of the script itself.
spec_tables = [
"""CREATE TABLE `messages` (
`id` CHAR(36) NOT NULL,
`name` VARCHAR(255) NOT NULL,
`message` TEXT DEFAULT NULL,
`errorlevel` CHAR(5) DEFAULT NULL,
`time_raised` TIMESTAMP,
`read_flag` BIT DEFAULT 0,
PRIMARY KEY (`id`)
)""",
"""CREATE TABLE `users` (
`username` CHAR(36) NOT NULL,
`password` VARCHAR(255) NOT NULL,
`permlevel` INT(1) DEFAULT 1,
`memo` TEXT DEFAULT NULL,
PRIMARY KEY (`username`)
)"""
]
def connect_to_db():
"""Detects if it is necessary to prompt for the root password, and either way,
establishes the db connection, returning it.
:return:
"""
print("We must now connect to the database.")
try:
db_user = os.environ['PIMINDER_DB_USER']
except KeyError:
print("Missing envvar: Piminder_DB_USER")
exit(1)
root_password = None
try:
root_password = os.environ['PIMINDER_DB_PASSWORD']
except KeyError:
print("Missing envvar: Piminder_DB_PASSWORD")
exit(1)
try:
db_host = os.environ['PIMINDER_DB_HOST']
except KeyError:
print("Missing envvar: Piminder_DB_HOST")
exit(1)
finally:
conn = pymysql.connect(host=db_host, user=db_user,
password=root_password, db='Piminder',
charset='utf8mb4', cursorclass=pymysql.cursors.DictCursor)
return conn
def create_tables(list_tables, connection):
"""Accepts a list of create statements for tables and pushes them to the DB.
:param list_tables: A list of CREATE statements in string form.
:param connection: a pymysql.connect() object, such as returned by connect_to_db
:return:
"""
cursor = connection.cursor()
connection.begin()
for table in list_tables:
try:
cursor.execute(table)
except pymysql.err.ProgrammingError:
print("Error in the following statement; table was skipped.")
print(table)
except pymysql.err.OperationalError as error:
            if error.args[0] == 1050:  # MySQL error 1050: this table already exists
print("%s, skipping" % error.args[1])
else:
print(error)
connection.commit()
def create_administrative_user(connection):
"""Creates an administrative user if it does not already exist.
:param connection:
:return:
"""
print("Validating an admin user exists:")
try:
admin_name = os.environ['PIMINDER_ADMIN_USER']
except KeyError:
print("Missing envvar: Piminder_ADMIN_USER")
exit(1)
cur = connection.cursor()
command = "SELECT count(username) AS howmany FROM users WHERE permlevel like 3;"
# Wait, how many admins are there?
cur.execute(command)
count = cur.fetchone()["howmany"]
if count < 1: # Only do this if no more than 0 exists.
command = "INSERT INTO users (username, password, memo, permlevel) VALUES (%s, %s, 'Default User', 3);"
try:
root_password = os.environ['PIMINDER_ADMIN_PASSWORD']
except KeyError:
print("Missing envvar: Piminder_ADMIN_PASSWORD")
exit(1)
hashed_rootpw = bcrypt.hashpw(root_password.encode('utf8'), bcrypt.gensalt())
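        # The matching login check elsewhere (illustrative, not in this script)
        # would be: bcrypt.checkpw(candidate.encode('utf8'), hashed_rootpw)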
cur.execute(command, (admin_name, hashed_rootpw))
print("Created administrative user: %s" % admin_name)
else:
print("Administrative user already exists, skipping.")
connection.commit()
def runtime():
print("Now Creating Tables")
mariadb = connect_to_db()
create_tables(spec_tables, mariadb)
create_administrative_user(mariadb)
mariadb.commit()
mariadb.close()
print("Done.")
| 31.865672 | 111 | 0.646136 | 532 | 4,270 | 5.078947 | 0.370301 | 0.026647 | 0.031458 | 0.048113 | 0.129534 | 0.112509 | 0.088823 | 0 | 0 | 0 | 0 | 0.010986 | 0.253864 | 4,270 | 133 | 112 | 32.105263 | 0.8371 | 0.230211 | 0 | 0.25641 | 0 | 0.012821 | 0.251733 | 0.016782 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0.115385 | 0.051282 | 0 | 0.115385 | 0.192308 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f3159c44193bd89a772b6f2bca9dbffb2ffaa8bc | 5,933 | py | Python | test/search/capacity.py | sbutler/spotseeker_server | 02bd2d646eab9f26ddbe8536b30e391359796c9c | [
"Apache-2.0"
] | null | null | null | test/search/capacity.py | sbutler/spotseeker_server | 02bd2d646eab9f26ddbe8536b30e391359796c9c | [
"Apache-2.0"
] | null | null | null | test/search/capacity.py | sbutler/spotseeker_server | 02bd2d646eab9f26ddbe8536b30e391359796c9c | [
"Apache-2.0"
] | null | null | null | """ Copyright 2012, 2013 UW Information Technology, University of Washington
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from django.test import TestCase
from django.conf import settings
from django.test.client import Client
from spotseeker_server.models import Spot, SpotExtendedInfo, SpotType
import simplejson as json
from django.test.utils import override_settings
from mock import patch
from django.core import cache
from spotseeker_server import models
@override_settings(SPOTSEEKER_AUTH_MODULE='spotseeker_server.auth.all_ok')
class SpotSearchCapacityTest(TestCase):
def test_capacity(self):
dummy_cache = cache.get_cache('django.core.cache.backends.dummy.DummyCache')
with patch.object(models, 'cache', dummy_cache):
spot1 = Spot.objects.create(name="capacity: 1", capacity=1)
spot1.save()
spot2 = Spot.objects.create(name="capacity: 2", capacity=2)
spot2.save()
spot3 = Spot.objects.create(name="capacity: 3", capacity=3)
spot3.save()
spot4 = Spot.objects.create(name="capacity: 4", capacity=4)
spot4.save()
spot5 = Spot.objects.create(name="capacity: 50", capacity=50)
spot5.save()
c = Client()
response = c.get("/api/v1/spot", {'capacity': '', 'name': 'capacity'})
self.assertEquals(response["Content-Type"], "application/json", "Has the json header")
spots = json.loads(response.content)
has_1 = False
has_2 = False
has_3 = False
has_4 = False
has_5 = False
for spot in spots:
if spot['id'] == spot1.pk:
has_1 = True
if spot['id'] == spot2.pk:
has_2 = True
if spot['id'] == spot3.pk:
has_3 = True
if spot['id'] == spot4.pk:
has_4 = True
if spot['id'] == spot5.pk:
has_5 = True
self.assertEquals(has_1, True)
self.assertEquals(has_2, True)
self.assertEquals(has_3, True)
self.assertEquals(has_4, True)
self.assertEquals(has_5, True)
response = c.get("/api/v1/spot", {'capacity': '1'})
self.assertEquals(response["Content-Type"], "application/json", "Has the json header")
spots = json.loads(response.content)
has_1 = False
has_2 = False
has_3 = False
has_4 = False
has_5 = False
for spot in spots:
if spot['id'] == spot1.pk:
has_1 = True
if spot['id'] == spot2.pk:
has_2 = True
if spot['id'] == spot3.pk:
has_3 = True
if spot['id'] == spot4.pk:
has_4 = True
if spot['id'] == spot5.pk:
has_5 = True
self.assertEquals(has_1, True)
self.assertEquals(has_2, True)
self.assertEquals(has_3, True)
self.assertEquals(has_4, True)
self.assertEquals(has_5, True)
response = c.get("/api/v1/spot", {'capacity': '49'})
self.assertEquals(response["Content-Type"], "application/json", "Has the json header")
spots = json.loads(response.content)
has_1 = False
has_2 = False
has_3 = False
has_4 = False
has_5 = False
for spot in spots:
if spot['id'] == spot1.pk:
has_1 = True
if spot['id'] == spot2.pk:
has_2 = True
if spot['id'] == spot3.pk:
has_3 = True
if spot['id'] == spot4.pk:
has_4 = True
if spot['id'] == spot5.pk:
has_5 = True
self.assertEquals(has_1, False)
self.assertEquals(has_2, False)
self.assertEquals(has_3, False)
self.assertEquals(has_4, False)
self.assertEquals(has_5, True)
response = c.get("/api/v1/spot", {'capacity': '501'})
self.assertEquals(response["Content-Type"], "application/json", "Has the json header")
spots = json.loads(response.content)
has_1 = False
has_2 = False
has_3 = False
has_4 = False
has_5 = False
for spot in spots:
if spot['id'] == spot1.pk:
has_1 = True
if spot['id'] == spot2.pk:
has_2 = True
if spot['id'] == spot3.pk:
has_3 = True
if spot['id'] == spot4.pk:
has_4 = True
if spot['id'] == spot5.pk:
has_5 = True
self.assertEquals(has_1, False)
self.assertEquals(has_2, False)
self.assertEquals(has_3, False)
self.assertEquals(has_4, False)
self.assertEquals(has_5, False)
response = c.get("/api/v1/spot", {'capacity': '1', 'distance': '100', 'limit': '4'})
            # Testing sorting by distance, which is impossible with no center point given
self.assertEquals(response.status_code, 400)
| 36.398773 | 98 | 0.532783 | 697 | 5,933 | 4.430416 | 0.213773 | 0.129534 | 0.051813 | 0.062176 | 0.599741 | 0.552785 | 0.552785 | 0.543394 | 0.533355 | 0.533355 | 0 | 0.035157 | 0.36238 | 5,933 | 162 | 99 | 36.623457 | 0.781126 | 0.110905 | 0 | 0.739837 | 0 | 0 | 0.095011 | 0.013764 | 0 | 0 | 0 | 0 | 0.203252 | 1 | 0.00813 | false | 0 | 0.073171 | 0 | 0.089431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f31ce1a1719984d1cf324a95ea4f226d430436e1 | 361 | py | Python | DEQModel/utils/debug.py | JunLi-Galios/deq | 80eb6b598357e8e01ad419126465fa3ed53b12c7 | [
"MIT"
] | 548 | 2019-09-05T04:25:21.000Z | 2022-03-22T01:49:35.000Z | DEQModel/utils/debug.py | JunLi-Galios/deq | 80eb6b598357e8e01ad419126465fa3ed53b12c7 | [
"MIT"
] | 21 | 2019-10-04T16:36:05.000Z | 2022-03-24T02:20:28.000Z | DEQModel/utils/debug.py | JunLi-Galios/deq | 80eb6b598357e8e01ad419126465fa3ed53b12c7 | [
"MIT"
] | 75 | 2019-09-05T22:40:32.000Z | 2022-03-31T09:40:44.000Z | import torch
from torch.autograd import Function
class Identity(Function):
@staticmethod
def forward(ctx, x, name):
ctx.name = name
return x.clone()
    @staticmethod
    def backward(ctx, grad):
import pydevd
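        # autograd executes backward() on its own worker thread, which pydevd
        # does not trace by default; settrace() re-attaches the debugger so
        # breakpoints inside this backward pass actually fire.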
pydevd.settrace(suspend=False, trace_only_current_thread=True)
grad_temp = grad.clone()
return grad_temp, None | 24.066667 | 70 | 0.65928 | 45 | 361 | 5.177778 | 0.622222 | 0.06867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257618 | 361 | 15 | 71 | 24.066667 | 0.869403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f327633efe0ce2c9e557f60f7f82ada184c4948d | 576 | py | Python | bottomline/blweb/migrations/0012_vehicleconfig_color.py | mcm219/BottomLine | db82eef403c79bffa3864c4db6bc336632abaca5 | [
"MIT"
] | null | null | null | bottomline/blweb/migrations/0012_vehicleconfig_color.py | mcm219/BottomLine | db82eef403c79bffa3864c4db6bc336632abaca5 | [
"MIT"
] | 1 | 2021-06-14T02:20:40.000Z | 2021-06-14T02:20:40.000Z | bottomline/blweb/migrations/0012_vehicleconfig_color.py | mcm219/BottomLine | db82eef403c79bffa3864c4db6bc336632abaca5 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.2 on 2021-07-10 03:16
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('blweb', '0011_vehiclecolor'),
]
operations = [
migrations.AddField(
model_name='vehicleconfig',
name='color',
field=models.ForeignKey(blank=True, default=None, help_text='The chosen color for this config', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='color', to='blweb.vehiclecolor'),
),
]
| 28.8 | 211 | 0.663194 | 69 | 576 | 5.463768 | 0.681159 | 0.06366 | 0.074271 | 0.116711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042035 | 0.215278 | 576 | 19 | 212 | 30.315789 | 0.792035 | 0.078125 | 0 | 0 | 1 | 0 | 0.179584 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b823df535990bd76d900f1381be1d7cc948408cf | 11,634 | py | Python | src/acs_3dpsf.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 2 | 2019-11-18T12:51:09.000Z | 2019-12-11T03:13:51.000Z | src/acs_3dpsf.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 5 | 2017-06-09T10:06:27.000Z | 2019-07-19T11:28:18.000Z | src/acs_3dpsf.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 2 | 2017-07-19T15:48:33.000Z | 2017-08-09T16:07:20.000Z | import numpy as np
from . import acs_map_xy as acs_map
def acs_3dpsf_basisfunctions( degree, x, y, focus ):
# Generate relevant basis functions
n_stars=np.max( np.array([len(x),len(y),len(focus)]))
basis_function_order=np.zeros((1,3)) # All zeros
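    # Each row (i, j, k) of basis_function_order selects one monomial
    # x**i * y**j * focus**k; the all-zero first row keeps the constant term.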
for k in range(degree[2]+1):
for j in range(degree[1]+1):
for i in range(degree[0]+1):
if (i+j+k > 0) & ((i+j) <= np.max(degree[0:2])):
basis_function_order=np.vstack((basis_function_order, [i,j,k]))
n_basis_functions= basis_function_order.shape[0]
basis_function_value = np.zeros( (n_basis_functions, n_stars))
for i in range(n_basis_functions):
basis_function_value[i,:] = x**basis_function_order[i,0]*\
y**basis_function_order[i,1] * \
focus**basis_function_order[i,2]
return basis_function_value
# **********************************************************************
# **********************************************************************
# **********************************************************************
def acs_3dpsf_fit( scat, degree=np.array([3,2,2]),
mag_cut=np.array([20.5,22]),
e_cut=1, size_cut=np.array([-np.inf,3]), verbose=False
):
# Fit the PSF from data in a SCAT catalogue
# F814 I magnitude catalogue cut
degree = np.array(degree)
    if len(degree) < 3:
        raise ValueError("DEGREE must be 3D")
degree[ degree > 0 ] = np.min(degree[ degree > 0 ])
# Find the line dividing CCDs 1 and 2
ccd_boundary = acs_map.acs_map_xy( np.array([0, 4095, 0, 4095]),
np.array([2047, 2047, 2048, 2048]),
pixel_scale=scat.pixscale)
ccd_boundary_x1=np.mean([ccd_boundary.x[0],ccd_boundary.x[2]])
ccd_boundary_x2=np.mean([ccd_boundary.x[1],ccd_boundary.x[3]])
ccd_boundary_y1=np.mean([ccd_boundary.y[0],ccd_boundary.y[2]])
ccd_boundary_y2=np.mean([ccd_boundary.y[1],ccd_boundary.y[3]])
ccd_boundary_m=(ccd_boundary_y2-ccd_boundary_y1)/(ccd_boundary_x2-ccd_boundary_x1)
ccd_boundary_c=ccd_boundary_y1-ccd_boundary_m*ccd_boundary_x1
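    # ccd_boundary_m/ccd_boundary_c are the slope and intercept of the straight
    # line y = m*x + c separating CCD 1 from CCD 2 in detector coordinates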
# Find the centre of each CCD
ccd_centre = acs_map.acs_map_xy( np.array([2048,2048]),
np.array([3072,1024]), pixel_scale=scat.pixscale)
# Select only the well-behaved stars
good= np.isfinite(scat.field_focus[0][scat.field_id[0]]) & \
np.isfinite(scat.e1_uncor_unrot[0]) & \
np.isfinite(scat.e2_uncor_unrot[0]) & \
np.isfinite(scat.xx_uncor[0]) & \
np.isfinite(scat.xy_uncor[0]) & \
np.isfinite(scat.yy_uncor[0]) & \
np.isfinite(scat.xxxx_uncor[0]) & \
np.isfinite(scat.xxxy_uncor[0]) & \
np.isfinite(scat.xxyy_uncor[0]) & \
np.isfinite(scat.xyyy_uncor[0]) & \
np.isfinite(scat.yyyy_uncor[0])
n_good = len(np.arange( len( good ))[good])
if verbose:
print("Found a total of "+str(len(scat.x[0]))+" real stars, of which "+str(n_good)+" look well-behaved")
# Store quantities to be fitted in local variables
x=scat.x[0][good]
y=scat.y[0][good]
focus=scat.field_focus[0][scat.field_id[0]][good]
ixx=scat.xx_uncor[0][good]
ixy=scat.xy_uncor[0][good]
iyy=scat.yy_uncor[0][good]
ixxxx=scat.xxxx_uncor[0][good]
ixxxy=scat.xxxy_uncor[0][good]
ixxyy=scat.xxyy_uncor[0][good]
ixyyy=scat.xyyy_uncor[0][good]
iyyyy=scat.yyyy_uncor[0][good]
e1=scat.e1_uncor_unrot[0][good]
e2=scat.e2_uncor_unrot[0][good]
# Work on each CCD separately
init_coeffs_flag = True
for ccd in range(2):
# Report which CCD is being considered
if ccd +1 == 1:
in_ccd = np.arange(len(y))[ y >= ccd_boundary_m*x+ccd_boundary_c]
n_in_CCD = len(in_ccd)
if ccd + 1 == 2:
in_ccd = np.arange(len( y))[ y < ccd_boundary_m*x+ccd_boundary_c]
n_in_CCD = len(in_ccd)
if n_in_CCD > 0:
#Compute matrix necessary for matrix inversion
if verbose:
print("Fitting moments of "+str(n_in_CCD)+" real stars in CCD#"+str(ccd+1))
basis_function_value=acs_3dpsf_basisfunctions(degree,
x[in_ccd]-ccd_centre.x[ccd],
y[in_ccd]-ccd_centre.y[ccd],
focus[in_ccd])
ls_matrix = np.dot( np.linalg.inv(np.dot(basis_function_value, basis_function_value.T)), basis_function_value)
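            # Normal-equations least squares: with B the (n_basis x n_stars) design
            # matrix, coeffs = inv(B B^T) B y minimises ||B^T coeffs - y||^2,
            # which is what each np.dot(ls_matrix, ...) below computes.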
# Create global arrays to contain the answers
n_basis_functions=np.shape(np.array(ls_matrix))[0]
if init_coeffs_flag:
acs_3dpsf_coeffs=basis_coeffs( ccd_centre,
ccd_boundary_m, ccd_boundary_c,
n_basis_functions, degree )
init_coeffs_flag = False
# Fit data to basis functions using least-squares inversion
#these are all matrices
acs_3dpsf_coeffs.ixx_fit[ccd, :] = np.dot(ls_matrix ,ixx[in_ccd])
acs_3dpsf_coeffs.ixy_fit[ccd, :] = np.dot(ls_matrix , ixy[in_ccd])
acs_3dpsf_coeffs.iyy_fit[ccd, :] = np.dot(ls_matrix , iyy[in_ccd])
acs_3dpsf_coeffs.ixxxx_fit[ccd, :] = np.dot(ls_matrix , ixxxx[in_ccd])
acs_3dpsf_coeffs.ixxxy_fit[ccd, :] = np.dot(ls_matrix , ixxxy[in_ccd])
acs_3dpsf_coeffs.ixxyy_fit[ccd, :] = np.dot(ls_matrix , ixxyy[in_ccd])
acs_3dpsf_coeffs.ixyyy_fit[ccd, :] = np.dot(ls_matrix , ixyyy[in_ccd])
acs_3dpsf_coeffs.iyyyy_fit[ccd, :] = np.dot(ls_matrix , iyyyy[in_ccd])
acs_3dpsf_coeffs.e1_fit[ccd, :] = np.dot(ls_matrix , e1[in_ccd])
acs_3dpsf_coeffs.e2_fit[ccd, :] = np.dot(ls_matrix , e2[in_ccd])
return acs_3dpsf_coeffs
# **********************************************************************
# **********************************************************************
# **********************************************************************
def acs_3dpsf_reconstruct( acs_3dpsf_coeffs, x, y, focus, radius=None, verbose=False):
# Create arrays to contain the final answer
n_galaxies=np.max( np.array([len(x), len(y), len(focus)]) )
if len(focus) == 1:
        focus_local = np.zeros(n_galaxies) + focus
else:
focus_local=focus
if verbose:
print("Found a total of "+str(n_galaxies)+" galaxies")
if radius is None:
        radius = np.zeros(n_galaxies) + 6
moms=moments( x, y, radius[:n_galaxies],
acs_3dpsf_coeffs.degree )
for ccd in range(2):
#Report which CCD is being considered
if ccd +1 == 1:
in_ccd = np.arange(len( y))[ y >= acs_3dpsf_coeffs.ccd_boundary_m*x+acs_3dpsf_coeffs.ccd_boundary_c]
n_in_CCD = len(in_ccd)
if ccd + 1 == 2:
in_ccd = np.arange(len( y))[ y < acs_3dpsf_coeffs.ccd_boundary_m*x+acs_3dpsf_coeffs.ccd_boundary_c]
n_in_CCD = len(in_ccd)
if n_in_CCD > 0:
if verbose:
print("Interpolating model PSF moments to the position of "+str(n_in_CCD)+" galaxies in CCD#"+str(ccd+1))
#Fit the PSF
            basis_function_value = acs_3dpsf_basisfunctions(acs_3dpsf_coeffs.degree, \
x[in_ccd]-acs_3dpsf_coeffs.ccd_centre.x[ccd], \
y[in_ccd]-acs_3dpsf_coeffs.ccd_centre.y[ccd], \
focus_local[in_ccd] )
moms.xx[in_ccd] = np.dot(acs_3dpsf_coeffs.ixx_fit[ccd, :], basis_function_value)
moms.xy[in_ccd] = np.dot(acs_3dpsf_coeffs.ixy_fit[ccd, :], basis_function_value)
moms.yy[in_ccd] = np.dot(acs_3dpsf_coeffs.iyy_fit[ccd, :], basis_function_value)
moms.xxxx[in_ccd] = np.dot(acs_3dpsf_coeffs.ixxxx_fit[ccd, :], basis_function_value)
moms.xxxy[in_ccd] = np.dot(acs_3dpsf_coeffs.ixxxy_fit[ccd, :], basis_function_value)
moms.xxyy[in_ccd] = np.dot(acs_3dpsf_coeffs.ixxyy_fit[ccd, :], basis_function_value)
moms.xyyy[in_ccd] = np.dot(acs_3dpsf_coeffs.ixyyy_fit[ccd, :], basis_function_value)
moms.yyyy[in_ccd] = np.dot(acs_3dpsf_coeffs.iyyyy_fit[ccd, :], basis_function_value)
moms.e1[in_ccd] = np.dot(acs_3dpsf_coeffs.e1_fit[ccd, :], basis_function_value)
moms.e2[in_ccd] = np.dot(acs_3dpsf_coeffs.e2_fit[ccd, :], basis_function_value)
else:
print("No galaxies in CCD#"+str(ccd))
# Work out PSF ellipticities at positions of galaxies properly. Tsk!
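    # Standard second-moment ellipticity components:
    #   e1 = (Ixx - Iyy) / (Ixx + Iyy),  e2 = 2 Ixy / (Ixx + Iyy)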
moms.e1 = (moms.xx-moms.yy)/(moms.xx+moms.yy)
moms.e2 = 2*moms.xy/(moms.xx+moms.yy)
return moms
# **********************************************************************
# **********************************************************************
# **********************************************************************
def acs_3dpsf( x, y, focus, radius, scat,
acs_3dpsf_coeffs=None,
degree=np.array([3,2,2])):
# Fit the PSF
if acs_3dpsf_coeffs is None:
acs_3dpsf_coeffs=acs_3dpsf_fit(scat, degree=degree)
#Reconstruct the PSF
acs_moms=acs_3dpsf_reconstruct(acs_3dpsf_coeffs, x, y, focus, radius)
return acs_moms
class basis_coeffs:
def __init__( self, ccd_centre, ccd_boundary_m, \
ccd_boundary_c, n_basis_functions, degree ):
        self.degree = degree
self.ccd_centre = ccd_centre
self.ccd_boundary_m = ccd_boundary_m
self.ccd_boundary_c = ccd_boundary_c
self.ixx_fit = np.zeros((2,n_basis_functions))
self.ixy_fit = np.zeros((2,n_basis_functions))
self.iyy_fit = np.zeros((2,n_basis_functions))
self.ixxxx_fit = np.zeros((2,n_basis_functions))
self.ixxxy_fit = np.zeros((2,n_basis_functions))
self.ixxyy_fit = np.zeros((2,n_basis_functions))
self.ixyyy_fit = np.zeros((2,n_basis_functions))
self.iyyyy_fit = np.zeros((2,n_basis_functions))
self.e1_fit = np.zeros((2,n_basis_functions))
self.e2_fit = np.zeros((2,n_basis_functions))
class moments( dict ):
def __init__(self, x, y, radius, degree ):
n_objects = len(x)
self.__dict__['x'] = x
self.__dict__['y'] = y
self.__dict__['e1']=np.zeros(n_objects)
self.__dict__['e2']=np.zeros(n_objects)
self.__dict__['xx']=np.zeros(n_objects)
self.__dict__['xy']=np.zeros(n_objects)
self.__dict__['yy']=np.zeros(n_objects)
self.__dict__['xxxx']=np.zeros(n_objects)
self.__dict__['xxxy']=np.zeros(n_objects)
self.__dict__['xxyy']=np.zeros(n_objects)
self.__dict__['xyyy']=np.zeros(n_objects)
self.__dict__['yyyy']=np.zeros(n_objects)
self.__dict__['radius'] = radius
self.__dict__['degree'] = degree
def keys(self):
return list(self.__dict__.keys())
def __getitem__(self, key):
return self.__dict__[key]
| 39.979381 | 122 | 0.565068 | 1,617 | 11,634 | 3.760049 | 0.118738 | 0.083224 | 0.080592 | 0.02352 | 0.57023 | 0.460197 | 0.286513 | 0.222697 | 0.159539 | 0.159539 | 0 | 0.026493 | 0.260272 | 11,634 | 290 | 123 | 40.117241 | 0.679991 | 0.11389 | 0 | 0.096774 | 1 | 0 | 0.026172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043011 | false | 0 | 0.010753 | 0.010753 | 0.096774 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b826697289acc6bb7f13171d32f3b15f39b8d6bc | 411 | py | Python | mundo-1/ex-014.py | guilhermesm28/python-curso-em-video | 50ab4e76b1903e62d4daa579699c5908329b26c8 | [
"MIT"
] | null | null | null | mundo-1/ex-014.py | guilhermesm28/python-curso-em-video | 50ab4e76b1903e62d4daa579699c5908329b26c8 | [
"MIT"
] | null | null | null | mundo-1/ex-014.py | guilhermesm28/python-curso-em-video | 50ab4e76b1903e62d4daa579699c5908329b26c8 | [
"MIT"
] | null | null | null | # Write a program that reads a temperature in degrees Celsius and converts it to degrees Fahrenheit.
print('-' * 100)
print('{: ^100}'.format('EXERCÍCIO 014 - CONVERSOR DE TEMPERATURAS'))
print('-' * 100)
c = float(input('Informe a temperatura em ºC: '))
f = ((9 * c) / 5) + 32
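# Quick check of the formula: 100 °C -> (9 * 100) / 5 + 32 = 212 °F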
print(f'A temperatura de {c:.2f}ºC corresponde a {f:.2f}ºF.')
print('-' * 100)
input('Pressione ENTER para sair...')
| 27.4 | 111 | 0.6691 | 61 | 411 | 4.508197 | 0.622951 | 0.116364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06087 | 0.160584 | 411 | 14 | 112 | 29.357143 | 0.736232 | 0.265207 | 0 | 0.375 | 0 | 0 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b82ba735b06701323afbbc1adb2108b231b98638 | 1,647 | py | Python | CxMetrics/calcMetrics.py | Danielhiversen/pyCustusx | 5a7fca51d885ad30f4db46ab725485d86fb2d17a | [
"MIT"
] | null | null | null | CxMetrics/calcMetrics.py | Danielhiversen/pyCustusx | 5a7fca51d885ad30f4db46ab725485d86fb2d17a | [
"MIT"
] | null | null | null | CxMetrics/calcMetrics.py | Danielhiversen/pyCustusx | 5a7fca51d885ad30f4db46ab725485d86fb2d17a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Oct 24 11:39:42 2015
@author: dahoiv
"""
import numpy as np
def loadMetrics(filepath):
    mr_points = dict()
    us_points = dict()
    with open(filepath, 'r') as file:
        for line in file.readlines():
            line = line.translate(None, '"')
            data = line.split()
            if not "pointMetric" in data[0]:
                continue
            key = data[1][-2:]
            point = data[6:9]
            if "_mr_" in data[1] and not "us" in data[2].lower():
                mr_points[key] = [float(point[0]), float(point[1]), float(point[2])]
            if "_us_" in data[1] and "us" in data[2].lower():
                us_points[key] = [float(point[0]), float(point[1]), float(point[2])]
    return (mr_points, us_points)


def calcDist(mr_points, us_points):
    k = 0
    dist = []
    for key in mr_points.keys():
        if not key in us_points.keys():
            print key, " missing in us"
            continue
        diff = np.array(mr_points[key]) - np.array(us_points[key])
        dist.append((diff[0]**2 + diff[1]**2 + diff[2]**2)**0.5)
        print key, dist[-1]
        k = k + 1
    print "mean: ", np.mean(dist)
    print "var: ", np.var(dist)


if __name__ == '__main__':
    filePath1 = "/home/dahoiv/disk/data/brainshift/079_Tumor.cx3/Logs/metrics_a.txt"
    (mr_points_1, us_points_1) = loadMetrics(filePath1)
    calcDist(mr_points_1, us_points_1)

    filePath2 = "/home/dahoiv/disk/data/brainshift/079_Tumor.cx3/Logs/metrics_b.txt"
    (mr_points_2, us_points_2) = loadMetrics(filePath2)
    calcDist(mr_points_2, us_points_2)
| 32.294118 | 82 | 0.571342 | 240 | 1,647 | 3.7375 | 0.329167 | 0.089186 | 0.026756 | 0.022297 | 0.316611 | 0.285396 | 0.205128 | 0.205128 | 0.205128 | 0.205128 | 0 | 0.049414 | 0.275046 | 1,647 | 51 | 83 | 32.294118 | 0.701843 | 0.01275 | 0 | 0.054054 | 0 | 0 | 0.122045 | 0.084345 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027027 | null | null | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b842ca4df0f85a27ac428ca98c508bc0fd8473bb | 379 | py | Python | pages/page1.py | kalimuthu123/dash-app | 90bf4c570abb1770ea0f082989e8f97d62b98346 | [
"MIT"
] | null | null | null | pages/page1.py | kalimuthu123/dash-app | 90bf4c570abb1770ea0f082989e8f97d62b98346 | [
"MIT"
] | null | null | null | pages/page1.py | kalimuthu123/dash-app | 90bf4c570abb1770ea0f082989e8f97d62b98346 | [
"MIT"
] | null | null | null | import dash_html_components as html
from utils import Header
def create_layout(app):
    # Page layouts
    return html.Div(
        [
            html.Div([Header(app)]),
            # page 1
            # add your UI here, and callbacks go at the bottom of app.py
            # assets and .js go in assets folder
            # csv or images go in data folder
        ],
    )
| 25.266667 | 72 | 0.564644 | 52 | 379 | 4.057692 | 0.711538 | 0.066351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004184 | 0.369393 | 379 | 15 | 73 | 25.266667 | 0.878661 | 0.382586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0.125 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
b845201c7741d5e90f7173c09fe9315087e66057 | 2,046 | py | Python | svca_limix/limix/core/covar/test/test_categorical.py | DenisSch/svca | bd029c120ca8310f43311253e4d7ce19bc08350c | [
"Apache-2.0"
] | 65 | 2015-01-20T20:46:26.000Z | 2021-06-27T14:40:35.000Z | svca_limix/limix/core/covar/test/test_categorical.py | DenisSch/svca | bd029c120ca8310f43311253e4d7ce19bc08350c | [
"Apache-2.0"
] | 29 | 2015-02-01T22:35:17.000Z | 2017-08-07T08:18:23.000Z | svca_limix/limix/core/covar/test/test_categorical.py | DenisSch/svca | bd029c120ca8310f43311253e4d7ce19bc08350c | [
"Apache-2.0"
] | 35 | 2015-02-01T17:26:50.000Z | 2019-09-13T07:06:16.000Z | """LMM testing code"""
import unittest
import scipy as sp
import numpy as np
from limix.core.covar import CategoricalCov
from limix.utils.check_grad import mcheck_grad
class TestCategoricalLowRank(unittest.TestCase):
    """test class for CategoricalCov cov"""

    def setUp(self):
        sp.random.seed(1)
        self.n = 30
        categories = sp.random.choice(['a', 'b', 'c'], self.n)
        self.rank = 2
        self.C = CategoricalCov(categories, self.rank)
        self.name = 'categorical'
        self.C.setRandomParams()

    def test_grad(self):
        def func(x, i):
            self.C.setParams(x)
            return self.C.K()

        def grad(x, i):
            self.C.setParams(x)
            return self.C.K_grad_i(i)

        x0 = self.C.getParams()
        err = mcheck_grad(func, grad, x0)
        np.testing.assert_almost_equal(err, 0., decimal = 6)

    # def test_param_activation(self):
    #     self.assertEqual(len(self.C.getParams()), 8)
    #     self.C.act_X = False
    #     self.assertEqual(len(self.C.getParams()), 0)
    #
    #     self.C.setParams(np.array([]))
    #     with self.assertRaises(ValueError):
    #         self.C.setParams(np.array([0]))
    #
    #     with self.assertRaises(ValueError):
    #         self.C.K_grad_i(0)


class TestCategoricalFreeForm(unittest.TestCase):
    """test class for Categorical cov"""

    def setUp(self):
        sp.random.seed(1)
        self.n = 30
        categories = sp.random.choice(['a', 'b', 'c'], self.n)
        self.rank = None
        self.C = CategoricalCov(categories, self.rank)
        self.name = 'categorical'
        self.C.setRandomParams()

    def test_grad(self):
        def func(x, i):
            self.C.setParams(x)
            return self.C.K()

        def grad(x, i):
            self.C.setParams(x)
            return self.C.K_grad_i(i)

        x0 = self.C.getParams()
        err = mcheck_grad(func, grad, x0)
        np.testing.assert_almost_equal(err, 0., decimal = 6)


if __name__ == '__main__':
    unittest.main()
| 26.921053 | 62 | 0.580645 | 267 | 2,046 | 4.348315 | 0.273408 | 0.086133 | 0.072351 | 0.024117 | 0.753661 | 0.668389 | 0.552972 | 0.552972 | 0.552972 | 0.552972 | 0 | 0.012952 | 0.282991 | 2,046 | 75 | 63 | 27.28 | 0.778459 | 0.205279 | 0 | 0.755556 | 0 | 0 | 0.0225 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 1 | 0.177778 | false | 0 | 0.111111 | 0 | 0.422222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8493d2511af44620ab30010ea879f211db8a17b | 11,878 | py | Python | modules/administrator.py | Gaeta/Delta | c76e149d0c17e025fe2648964e2512440fc0b4c7 | [
"MIT"
] | 1 | 2021-07-04T10:34:11.000Z | 2021-07-04T10:34:11.000Z | modules/administrator.py | Gaeta/Delta | c76e149d0c17e025fe2648964e2512440fc0b4c7 | [
"MIT"
] | null | null | null | modules/administrator.py | Gaeta/Delta | c76e149d0c17e025fe2648964e2512440fc0b4c7 | [
"MIT"
] | null | null | null | import discord, sqlite3, asyncio, utils, re
from discord.ext import commands
from datetime import datetime
TIME_REGEX = re.compile("(?:(\d{1,5})\s?(h|hours|hrs|hour|hr|s|seconds|secs|sec|second|m|mins|minutes|minute|min|d|days|day))+?")
TIME_DICT = {"h": 3600, "s": 1, "m": 60, "d": 86400}
class TimeConverter(commands.Converter):
    async def convert(self, argument):
        if argument is None:
            return 0
        args = argument.lower()
        matches = re.findall(TIME_REGEX, args)
        time = 0
        for v, k in matches:
            try:
                for key in ("h", "s", "m", "d"):
                    if k.startswith(key):
                        k = key
                        break
                time += TIME_DICT[k]*float(v)
            except KeyError:
                raise commands.BadArgument("{} is an invalid time-key! h/m/s/d are valid!".format(k))
            except ValueError:
                raise commands.BadArgument("{} is not a number!".format(v))
        return time
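    # A quick illustration of what the parser accepts (inputs are hypothetical;
    # convert() is a coroutine, so it must be awaited from async code):
    #   await TimeConverter().convert("2h")     -> 7200.0
    #   await TimeConverter().convert("1h30m")  -> 5400.0
    #   await TimeConverter().convert("45s")    -> 45.0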
class AdministratorCommands(commands.Cog):
    def __init__(self, bot):
        self.bot = bot
@commands.command(usage="poll <ping> <question> | <answer 1> | <answer2...>")
@utils.guild_only()
@utils.is_admin()
@commands.bot_has_permissions(manage_roles=True)
@commands.cooldown(1, 60, commands.BucketType.guild)
async def poll(self, ctx, ping_member, *, args):
"""Creates a poll with up to 5 answers."""
ping = ping_member.lower()
if ping not in ("yes", "no", "true", "false", "y", "n", "t", "f"):
return await utils.embed(ctx, discord.Embed(title="Poll Failed", description=f"Sorry, the `ping_member` argument should be \"Yes\" or \"No\". Please use `{self.bot.config.prefix}help poll` for more information."), error=True)
if ping in ("yes", "y", "true", "t"):
ping = True
if ping in ("no", "n", "no", "n"):
ping = False
ques_ans = args.split(" | ")
if len(ques_ans) <= 2:
return await utils.embed(ctx, discord.Embed(title="Poll Failed", description=f"Sorry, the `args` argument should be follow this syntax: `question | answer 1 | answer 2...`."), error=True)
question = ques_ans[0]
answers = ques_ans[1:6]
channel_id = self.bot.config.channels.announcements
channel = self.bot.get_channel(channel_id)
if channel is None:
return await utils.embed(ctx, discord.Embed(title="Poll Failed", description=f"Sorry, the `announcements` channel hasn't been configured."), error=True)
reactions = []
text = ""
i = 1
for answer in answers:
react = {1: "1\u20e3", 2: "2\u20e3", 3: "3\u20e3", 4: "4\u20e3", 5: "5\u20e3"}[i]
reactions.append(react)
text += f"{react} {answers[i-1]}\n\n"
i += 1
embed = await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Server Poll", description=f"**{question}**\n\n{text}").set_footer(text=f"Poll by {ctx.author}"), send=False)
if ping:
ping_role = utils.get_ping_role(ctx)
if ping_role != ctx.guild.default_role:
if not ping_role.mentionable:
edited = False
try:
await ping_role.edit(mentionable=True)
edited = True
except discord.Forbidden:
return await utils.embed(ctx, discord.Embed(title="Poll Failed", description=f"I do not have permission to **edit** {ping_role.mention}."), error=True)
try:
message = await channel.send(ping_role.mention, embed=embed)
await utils.embed(ctx, discord.Embed(title="Poll Created", description=f"Your poll was successfully posted in {channel.mention}."), error=True)
for r in reactions:
await message.add_reaction(r)
except:
if channel.permissions_for(ctx.guild.me).add_reactions is False:
issue = f"I do not have permission to **add reactions** in <#{channel.mention}>."
if channel.permissions_for(ctx.guild.me).send_messages is False:
issue = f"I do not have permission to **send messages** in <#{channel.mention}>."
return await utils.embed(ctx, discord.Embed(title="Poll Failed", description=issue), error=True)
if edited:
await ping_role.edit(mentionable=False)
return
try:
message = await channel.send(content="@everyone" if ping else None, embed=embed)
await utils.embed(ctx, discord.Embed(title="Poll Created", description=f"Your poll was successfully posted in {channel.mention}."), error=True)
for r in reactions:
await message.add_reaction(r)
except:
if channel.permissions_for(ctx.guild.me).add_reactions is False:
issue = f"I do not have permission to **add reactions** in <#{channel.mention}>."
if channel.permissions_for(ctx.guild.me).send_messages is False:
issue = f"I do not have permission to **send messages** in <#{channel.mention}>."
await utils.embed(ctx, discord.Embed(title="Poll Failed", description=issue), error=True)
@commands.command(usage="announce <ping> <announcement>")
@utils.guild_only()
@utils.is_admin()
async def announce(self, ctx, ping_member, *, announcement):
"""Creates an announcement."""
ping = ping_member.lower()
if ping not in ("yes", "no", "true", "false", "y", "n", "t", "f"):
return await utils.embed(ctx, discord.Embed(title="Announcement Failed", description=f"Sorry, the `ping_member` argument should be \"Yes\" or \"No\". Please use `{self.bot.config.prefix}help announce` for more information."), error=True)
if ping in ("yes", "y", "true", "t"):
ping = True
if ping in ("no", "n", "no", "n"):
ping = False
channel_id = self.bot.config.channels.announcements
channel = self.bot.get_channel(channel_id)
if channel is None:
return await utils.embed(ctx, discord.Embed(title="Announcement Failed", description=f"Sorry, the `announcements` channel hasn't been configured."), error=True)
if ping:
ping_role = utils.get_ping_role(ctx)
if ping_role != ctx.guild.default_role:
if not ping_role.mentionable:
edited = False
try:
await ping_role.edit(mentionable=True)
edited = True
except discord.Forbidden:
return await utils.embed(ctx, discord.Embed(title="Announcement Failed", description=f"I do not have permission to **edit** {ping_role.mention}."), error=True)
try:
await channel.send(f"{ping_role.mention}\n{announcement}")
await utils.embed(ctx, discord.Embed(title="Announcement Sent", description=f"Your announcement was successfully posted in {channel.mention}."), error=True)
except:
if channel.permissions_for(ctx.guild.me).send_messages is False:
issue = f"I do not have permission to **send messages** in <#{channel.mention}>."
return await utils.embed(ctx, discord.Embed(title="Announcement Failed", description=issue), error=True)
if edited:
await ping_role.edit(mentionable=False)
return
try:
await channel.send("@everyone\n" if ping else "" + announcement)
await utils.embed(ctx, discord.Embed(title="Announcement Sent", description=f"Your announcement was successfully posted in {channel.mention}."), error=True)
except:
if channel.permissions_for(ctx.guild.me).send_messages is False:
issue = f"I do not have permission to **send messages** in <#{channel.mention}>."
await utils.embed(ctx, discord.Embed(title="Poll Failed", description=issue), error=True)
@commands.command(aliases=["resetcase"], usage="resetid")
@utils.guild_only()
@utils.is_admin()
async def resetid(self, ctx):
"""Resets the case ID."""
with sqlite3.connect(self.bot.config.database) as db:
db.cursor().execute("UPDATE Settings SET Case_ID='0'")
db.cursor().execute("DELETE FROM Cases")
db.commit()
await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Data Wiped", description="All case data has been successfully cleared."))
@commands.command(aliases=["reloadconfig"], usage="reload")
@utils.guild_only()
@utils.is_admin()
async def reload(self, ctx):
"""Reloads the config file."""
del self.bot.config
self.bot.config = utils.Config()
await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Config Reloaded", description="All config data has been successfully reloaded."))
@commands.command(usage="lockdown [time]")
@utils.guild_only()
@commands.bot_has_permissions(manage_channels=True)
@utils.is_admin()
async def lockdown(self, ctx, *, time=None):
"""Locks or unlocks a channel for a specified amount of time."""
member_role = utils.get_member_role(ctx)
ows = ctx.channel.overwrites_for(member_role)
if ows.read_messages is False:
return await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Lockdown Failed", description=f"Sorry, I can only lock channels that can be seen by {member_role.mention if member_role != ctx.guild.default_role else member_role}."), error=True)
if ows.send_messages is False:
await ctx.channel.set_permissions(member_role, send_messages=None)
await ctx.channel.set_permissions(ctx.guild.me, send_messages=None)
return await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Lockdown Deactivated", description=f"Lockdown has been lifted by **{ctx.author}**."))
if ows.send_messages in (True, None):
seconds = await TimeConverter().convert(time)
await ctx.channel.set_permissions(member_role, send_messages=False)
await ctx.channel.set_permissions(ctx.guild.me, send_messages=True)
if seconds < 1:
return await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Lockdown Activated", description=f"Lockdown has been activated by **{ctx.author}**."))
await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Lockdown Activated", description=f"Lockdown has been activated by **{ctx.author}** for {utils.display_time(round(seconds), 4)}."))
await asyncio.sleep(seconds)
ows = ctx.channel.overwrites_for(member_role)
if ows.send_messages is False:
await ctx.channel.set_permissions(member_role, send_messages=None)
await ctx.channel.set_permissions(ctx.guild.me, send_messages=None)
return await utils.embed(ctx, discord.Embed(timestamp=datetime.utcnow(), title="Lockdown Deactivated", description=f"Lockdown has been lifted."))
def setup(bot):
    bot.add_cog(AdministratorCommands(bot))
| 46.217899 | 272 | 0.594124 | 1,441 | 11,878 | 4.822346 | 0.163775 | 0.033098 | 0.049647 | 0.059577 | 0.684271 | 0.662541 | 0.65808 | 0.657361 | 0.642682 | 0.624262 | 0 | 0.007165 | 0.283213 | 11,878 | 257 | 273 | 46.217899 | 0.80902 | 0 | 0 | 0.50838 | 0 | 0.022346 | 0.225157 | 0.035164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011173 | false | 0 | 0.01676 | 0 | 0.134078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b84bfe3e24cf3fa88c7b90891f02c84318e2faae | 7,473 | py | Python | nextai_lib/inference.py | jav0927/nextai | 9de0c338a41a3ce0297b95f625290fa814a83344 | [
"Apache-2.0"
] | null | null | null | nextai_lib/inference.py | jav0927/nextai | 9de0c338a41a3ce0297b95f625290fa814a83344 | [
"Apache-2.0"
] | 1 | 2021-09-28T05:33:17.000Z | 2021-09-28T05:33:17.000Z | nextai_lib/inference.py | jav0927/nextai | 9de0c338a41a3ce0297b95f625290fa814a83344 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: 02_inference.ipynb (unless otherwise specified).
__all__ = ['device', 'pad_output', 'get_activ_offsets_mns']
# Cell
from fastai.vision.all import *  # provides nn, TensorBBox and TensorMultiCategory used below
from fastai import *
from typing import *
from torch import tensor, Tensor
import torch
import torchvision # Needed to invoke torchvision.ops.nms function
# Cell
# Automatically sets for GPU or CPU environments
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Cell
# Pad tensors so that they have uniform dimentions: (batch size, no of items in a batch, 4) and (batch size, no of items in a batch, 21)
def pad_output(l_bb: List, l_scr: List, l_idx: List, no_classes: int):
    '''Pad tensors so that they have uniform dimensions: (batch size, no of items in a batch, 4) and (batch size, no of items in a batch, 21)
       Inputs:  l_bb - list of tensors containing individual non-uniform sized bounding boxes
                l_scr - list of tensors containing class scores
                l_idx - list of tensors containing class index values (i.e. 1 - airplane)
                no_classes - Number of classes, Integer
       Outputs: Uniform-sized tensors: bounding box tensor and score tensor with dims: (batch size, no of items in a batch, 4) and (batch size, no of items in a batch, 21)'''
    if len([len(img_bb) for img_bb in l_bb]) == 0:
        print(F'Image did not pass the scoring threshold')
        return

    mx_len = max([len(img_bb) for img_bb in l_bb])        # Calculate maximum length of the boxes in the batch
    l_b, l_c, l_x, l_cat = [], [], [], []                 # zeroed tensor accumulators
    for i, ntr in enumerate(zip(l_bb, l_scr, l_idx)):
        bbox, cls, idx = ntr[0], ntr[1], ntr[2]           # Unpack variables
        tsr_len = mx_len - bbox.shape[0]                  # Calculate the number of zero-based rows to add
        m = nn.ConstantPad2d((0, 0, 0, tsr_len), 0.)      # Prepare to pad the box tensor with zero entries

        # Create Bounding Box tensors
        l_b.append(m(bbox))                               # Add appropriate zero-based box rows and add to list

        # Create Category tensors
        cat_base = torch.zeros(mx_len - bbox.shape[0], dtype=torch.int32)
        img_cat = torch.cat((idx, cat_base), dim=0)
        l_cat.append(img_cat)

        # Create Score tensors
        img_cls = []                                      # List to construct class vectors
        for ix in range(idx.shape[0]):                    # Construct class vectors of dim(no of classes)
            cls_base = torch.zeros(no_classes).to(device) # Base zero-based class vector
            cls_base[idx[ix]] = cls[ix]                   # Add the score in the nth position
            img_cls.append(cls_base)
        img_stack = torch.stack(img_cls)                  # Create single tensor per image
        img_stack_out = m(img_stack)
        l_c.append(img_stack_out)                         # Add appropriate zero-based class rows and add to list

    return (TensorBBox(torch.stack(l_b, 0)), TensorMultiCategory(torch.stack(l_c, 0)), TensorMultiCategory(torch.stack(l_cat, 0)))
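# A small shape sketch (hypothetical inputs): with l_bb holding per-image box
# tensors of shapes (3, 4) and (1, 4) and no_classes = 21, mx_len is 3, so
# pad_output returns tensors of shapes (2, 3, 4), (2, 3, 21) and (2, 3).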
# Cell
def get_activ_offsets_mns(anchrs: Tensor, activs: Tensor, no_classes: int, threshold: float = 0.5):
    ''' Takes in activations and calculates corresponding anchor box offsets.
        It then filters the resulting boxes through NMS
        Inputs:
            anchrs - Anchors as Tensor
            activs - Activations as Tensor
            no_classes - Number of classes (categories)
            threshold - Coarse filtering. Default = 0.5
        Output:
            one_batch_boxes, one_batch_scores as Tuple'''
    p_bboxes, p_classes = activs                      # Read p_bboxes: [32, 189, 4] Torch.Tensor and p_classes: [32, 189, 21] Torch.Tensor from self.learn.pred
    #scores = torch.sigmoid(p_classes)                # Calculate the confidence levels, scores, for class predictions [0, 1]
    scores = torch.softmax(p_classes, -1)             # Calculate the confidence levels, scores, for class predictions [0, 1] - Probabilistic
    offset_boxes = activ_decode(p_bboxes, anchrs)     # Return anchors + anchor offsets with format (batch, No Items in Batch, 4)

    # For each item in batch, and for each class in the item, filter the image by passing it through NMS. Keep preds with IOU > threshold
    one_batch_boxes = []; one_batch_scores = []; one_batch_cls_pred = []  # Aggregators at the batch level
    for i in range(p_classes.shape[0]):               # For each image in batch ...
        batch_p_boxes = offset_boxes[i]               # box preds for the current batch
        batch_scores = scores[i]                      # Keep scores for the current batch
        max_scores, cls_idx = torch.max(batch_scores, 1)  # Keep batch class indexes
        bch_th_mask = max_scores > threshold          # Threshold mask for batch
        bch_keep_boxes = batch_p_boxes[bch_th_mask]   # "
        bch_keep_scores = batch_scores[bch_th_mask]   # "
        bch_keep_cls_idx = cls_idx[bch_th_mask]

        # Aggregators per image in a batch
        img_boxes = []                                # Bounding boxes per image
        img_scores = []                               # Scores per image
        img_cls_pred = []                             # Class predictions per image
        for c in range(1, no_classes):                # Loop through each class
            cls_mask = bch_keep_cls_idx == c          # Keep masks for the current class
            if cls_mask.sum() == 0: continue          # Weed out images with no positive class masks
            cls_boxes = bch_keep_boxes[cls_mask]      # Keep boxes per image
            cls_scores = bch_keep_scores[cls_mask].max(dim=1)[0]  # Keep class scores for the current image
            nms_keep_idx = torchvision.ops.nms(cls_boxes, cls_scores, iou_threshold=0.5)  # Filter images by passing them through NMS
            img_boxes += [*cls_boxes[nms_keep_idx]]   # Aggregate cls_boxes into tensors for all classes
            box_stack = torch.stack(img_boxes, 0)     # Transform individual tensors into a single box tensor
            img_scores += [*cls_scores[nms_keep_idx]] # Aggregate cls_scores into tensors for all classes
            score_stack = torch.stack(img_scores, 0)  # Transform individual tensors into a single score tensor
            img_cls_pred += [*tensor([c]*len(nms_keep_idx))]
            cls_pred_stack = torch.stack(img_cls_pred, 0)
            batch_mask = score_stack > threshold      # filter final lists to be greater than threshold
            box_stack = box_stack[batch_mask]         # "
            score_stack = score_stack[batch_mask]     # "
            cls_pred_stack = cls_pred_stack[batch_mask]  # "
        if 'box_stack' not in locals(): continue      # Failed to find any valid classes
        one_batch_boxes.append(box_stack)             # Aggregate bounding boxes for the batch
        one_batch_scores.append(score_stack)          # Aggregate scores for the batch
        one_batch_cls_pred.append(cls_pred_stack)

    # Pad individual box and score tensors into uniform-sized box and score tensors of shapes: (batch, no of items in batch, 4) and (batch, no of items in batch, 21)
    one_batch_boxes, one_batch_scores, one_batch_cats = pad_output(one_batch_boxes, one_batch_scores, one_batch_cls_pred, no_classes)
    return (one_batch_boxes, one_batch_cats)
| 59.784 | 174 | 0.640707 | 1,072 | 7,473 | 4.277985 | 0.224813 | 0.027911 | 0.012211 | 0.017008 | 0.262102 | 0.170955 | 0.161797 | 0.13672 | 0.129089 | 0.11993 | 0 | 0.013033 | 0.281279 | 7,473 | 125 | 175 | 59.784 | 0.840812 | 0.459387 | 0 | 0 | 1 | 0 | 0.023895 | 0.005396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0.014706 | 0.073529 | 0 | 0.147059 | 0.014706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b84c1c6e378f4059bee57b13f1d84bcf63b4ae74 | 2,141 | py | Python | code.py | ashweta81/data-wrangling-pandas-code-along-practice | af49250a45c616f46d763990f2321f470d439916 | [
"MIT"
] | null | null | null | code.py | ashweta81/data-wrangling-pandas-code-along-practice | af49250a45c616f46d763990f2321f470d439916 | [
"MIT"
] | null | null | null | code.py | ashweta81/data-wrangling-pandas-code-along-practice | af49250a45c616f46d763990f2321f470d439916 | [
"MIT"
] | null | null | null | # --------------
import pandas as pd
import numpy as np
# Read the data using pandas module.
data=pd.read_csv(path)
# Find the list of unique cities where matches were played
print("The unique cities where matches were played are ", data.city.unique())
print('*'*80)
# Find the columns which contains null values if any ?
print("The columns which contain null values are ", data.columns[data.isnull().any()])
print('*'*80)
# List down top 5 most played venues
print("The top 5 most played venues are", data.venue.value_counts().head(5))
print('*'*80)
# Make a runs count frequency table
print("The frequency table for runs is", data.runs.value_counts())
print('*'*80)
# How many seasons were played and in which year they were played
data['year']=data.date.apply(lambda x : x[:4])
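# (assuming ISO-style dates, e.g. '2008-04-18'[:4] -> '2008')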
seasons=data.year.unique()
print('The total seasons and years are', seasons)
print('*'*80)
# No. of matches played per season
ss1=data.groupby(['year'])['match_code'].nunique()
print('The total matches played per season are', ss1)
print("*"*80)
# Total runs across the seasons
ss2=data.groupby(['year']).agg({'total':'sum'})
print("Total runs are",ss2)
print("*"*80)
# Teams who have scored more than 200+ runs. Show the top 10 results
w1=data.groupby(['match_code','batting_team']).agg({'total':'sum'}).sort_values(by='total', ascending=False)
w1[w1.total>200].reset_index().head(10)
print("The top 10 results are",w1[w1.total>200].reset_index().head(10))
print("*"*80)
# What are the chances of chasing 200+ target
dt1=data.groupby(['match_code','batting_team','inning'])['total'].sum().reset_index()
dt1.head()
dt1.loc[((dt1.total>200) & (dt1.inning==2)),:].reset_index()
data.match_code.unique().shape[0]
probability=(dt1.loc[((dt1.total>200) & (dt1.inning==2)),:].shape[0])/(data.match_code.unique().shape[0])*100
print("Chances are", probability)
print("*"*80)
# Which team has the highest win count in their respective seasons ?
dt2=data.groupby(['year','winner'])['match_code'].nunique()
dt3=dt2.groupby(level=0,group_keys=False)
dt4=dt3.apply(lambda x: x.sort_values(ascending=False).head(1))
print("The team with the highes win count is", dt4)
| 40.396226 | 109 | 0.712751 | 351 | 2,141 | 4.296296 | 0.353276 | 0.041777 | 0.029841 | 0.03183 | 0.225464 | 0.198939 | 0.079576 | 0.079576 | 0.043767 | 0 | 0 | 0.042276 | 0.105091 | 2,141 | 52 | 110 | 41.173077 | 0.744781 | 0.249416 | 0 | 0.25 | 0 | 0 | 0.272613 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0.527778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b85115da00994686b76087d8e81c839619f86fa0 | 338 | py | Python | scss/setup.py | Jawbone/pyScss | b1f483c253ec4aaceb3b8d4d630ca5528590e9b8 | [
"MIT"
] | null | null | null | scss/setup.py | Jawbone/pyScss | b1f483c253ec4aaceb3b8d4d630ca5528590e9b8 | [
"MIT"
] | null | null | null | scss/setup.py | Jawbone/pyScss | b1f483c253ec4aaceb3b8d4d630ca5528590e9b8 | [
"MIT"
] | null | null | null | from distutils.core import setup, Extension
setup(name='jawbonePyScss',
      version='1.1.8',
      description='jawbonePyScss',
      ext_modules=[
          Extension(
              '_scss',
              sources=['src/_scss.c', 'src/block_locator.c', 'src/scanner.c'],
              libraries=['pcre'],
              optional=True
          )
      ]
      )
| 22.533333 | 76 | 0.553254 | 34 | 338 | 5.382353 | 0.735294 | 0.043716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012605 | 0.295858 | 338 | 14 | 77 | 24.142857 | 0.756303 | 0 | 0 | 0 | 0 | 0 | 0.245562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b85283b049e0e58e8a7c62f87369d905b8440e5f | 3,101 | py | Python | src/flagon/backends/redis_backend.py | ashcrow/flagon | 50e6aa96854468a89399ef08573e4f814a002d26 | [
"MIT"
] | 18 | 2015-08-27T03:49:42.000Z | 2021-05-12T21:48:17.000Z | src/flagon/backends/redis_backend.py | ashcrow/flagon | 50e6aa96854468a89399ef08573e4f814a002d26 | [
"MIT"
] | 2 | 2016-07-18T13:48:46.000Z | 2017-05-20T15:56:03.000Z | src/flagon/backends/redis_backend.py | ashcrow/flagon | 50e6aa96854468a89399ef08573e4f814a002d26 | [
"MIT"
] | 5 | 2015-09-20T08:46:01.000Z | 2021-06-10T03:41:04.000Z | # The MIT License (MIT)
#
# Copyright (c) 2014 Steve Milner
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Redis backend.
"""
import redis
from flagon import errors
from flagon.backends import Backend
class RedisBackend(Backend):

    def __init__(self, host, port, db):
        """
        Creates an instance of the RedisBackend.

        :rtype: RedisBackend
        """
        # https://pypi.python.org/pypi/redis/2.10.1
        pool = redis.ConnectionPool(host=host, port=port, db=db)
        self._server = redis.Redis(
            connection_pool=pool,
            charset='utf-8',
            errors='strict',
            decode_responses=False)

    def set(self, name, key, value):
        """
        Sets a value for a feature. This is a proposed name only!!!

        :param name: name of the feature.
        :rtype: bool
        """
        self._server.hset(name, key, value)

    def exists(self, name, key):
        """
        Checks if a feature exists.

        :param name: name of the feature.
        :rtype: bool
        """
        return self._server.hexists(name, key)

    def is_active(self, name, key):
        """
        Checks if a feature is on.

        :param name: name of the feature.
        :rtype: bool
        :raises: UnknownFeatureError
        """
        if not self._server.hexists(name, key):
            raise errors.UnknownFeatureError('Unknown feature: %s' % name)
        if self._server.hget(name, key) == 'True':
            return True
        return False

    def _turn(self, name, key, value):
        """
        Turns a feature on or off, depending on value.

        :param name: name of the feature.
        :param value: Value to turn name to.
        :raises: UnknownFeatureError
        """
        # TODO: Copy paste --- :-(
        if not self._server.hexists(name, key):
            raise errors.UnknownFeatureError('Unknown feature: %s %s' % (
                name, key))
        self._server.hset(name, key, value)

    turn_on = lambda s, name: s._turn(name, 'active', True)
    turn_off = lambda s, name: s._turn(name, 'active', False)
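# A minimal usage sketch (connection values are hypothetical; redis stores the
# booleans as the strings 'True'/'False', which is what is_active compares):
#
#   backend = RedisBackend(host='localhost', port=6379, db=0)
#   backend.set('example_feature', 'active', True)
#   backend.is_active('example_feature', 'active')   # -> True
#   backend.turn_off('example_feature')
#   backend.is_active('example_feature', 'active')   # -> False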
| 32.642105 | 78 | 0.639471 | 408 | 3,101 | 4.813725 | 0.397059 | 0.039206 | 0.022403 | 0.03055 | 0.232688 | 0.220468 | 0.181263 | 0.127291 | 0.075356 | 0.075356 | 0 | 0.003967 | 0.268301 | 3,101 | 94 | 79 | 32.989362 | 0.861613 | 0.525637 | 0 | 0.142857 | 0 | 0 | 0.054927 | 0 | 0 | 0 | 0 | 0.010638 | 0 | 1 | 0.178571 | false | 0 | 0.107143 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b85adde254fd21cc8c4987b399dbf5487b008f43 | 445 | py | Python | tests/test_example.py | jlane9/mockerena | a3fd1bd39af6269dc96846967b4bba47759bab41 | [
"MIT"
] | 1 | 2019-09-10T05:12:38.000Z | 2019-09-10T05:12:38.000Z | tests/test_example.py | jlane9/mockerena | a3fd1bd39af6269dc96846967b4bba47759bab41 | [
"MIT"
] | 10 | 2019-09-10T16:14:35.000Z | 2019-12-19T17:13:51.000Z | tests/test_example.py | jlane9/mockerena | a3fd1bd39af6269dc96846967b4bba47759bab41 | [
"MIT"
] | 2 | 2019-09-10T05:11:58.000Z | 2020-04-29T17:59:47.000Z | """test_example
.. codeauthor:: John Lane <john.lane93@gmail.com>
"""
from flask import url_for
from eve import Eve
import pytest
@pytest.mark.example
def test_example(client: Eve):
"""Example test for reference
:param Eve client: Mockerena app instance
:raises: AssertionError
"""
res = client.get(url_for('generate', schema_id='mock_example'))
assert res.status_code == 200
assert res.mimetype == 'text/csv'
| 19.347826 | 67 | 0.698876 | 60 | 445 | 5.066667 | 0.633333 | 0.072368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013736 | 0.182022 | 445 | 22 | 68 | 20.227273 | 0.821429 | 0.352809 | 0 | 0 | 0 | 0 | 0.10687 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b85e1207d6e09dc9d3b5821470f14d0eed8e2190 | 394 | py | Python | subcontent/backup/python3_closure_nonlocal.py | fingerkc/fingerkc.github.io | 0bfe5163ea28be3747756c8b6be64ad4f09b2fbf | [
"MIT"
] | 2 | 2019-06-13T07:22:22.000Z | 2019-11-23T03:55:21.000Z | subcontent/backup/python3_closure_nonlocal.py | fingerkc/fingerkc.github.io | 0bfe5163ea28be3747756c8b6be64ad4f09b2fbf | [
"MIT"
] | 1 | 2019-12-15T04:10:59.000Z | 2019-12-15T04:10:59.000Z | subcontent/backup/python3_closure_nonlocal.py | fingerkc/fingerkc.github.io | 0bfe5163ea28be3747756c8b6be64ad4f09b2fbf | [
"MIT"
] | 1 | 2019-06-24T08:17:13.000Z | 2019-06-24T08:17:13.000Z | #!/usr/bin/python3
## Python 3 closures and the nonlocal keyword
# If an inner function references a variable from an enclosing scope
# (but not from the global scope), that inner function is considered a
# closure.
def A_():
    var = 0
    def clo_B():
        var_b = 1   # local variable of the closure
        var = 100
        print(var)  # var is rebound locally here; the outer var is not changed
    return clo_B
# clo_B is a closure

# the nonlocal keyword
def A_():
    var = 0
    def clo_B():
        nonlocal var      # nonlocal declares that var is not a local variable of the closure
        var = var + 1     # without the nonlocal statement, this line would raise an error
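
# A short runnable sketch of a nonlocal-based counter (illustrative only):
def make_counter():
    count = 0
    def tick():
        nonlocal count    # rebind count in the enclosing scope
        count += 1
        return count
    return tick
# tick = make_counter(); tick() -> 1; tick() -> 2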
| 15.153846 | 50 | 0.670051 | 57 | 394 | 4.508772 | 0.596491 | 0.046693 | 0.054475 | 0.062257 | 0.116732 | 0.116732 | 0.116732 | 0 | 0 | 0 | 0 | 0.029316 | 0.220812 | 394 | 25 | 51 | 15.76 | 0.807818 | 0.532995 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.416667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b86adccb9d42d87933b32bb27aaf25b01696f8a9 | 818 | py | Python | django_for_startups/django_customizations/drf_customizations.py | Alex3917/django_for_startups | 9dda54f5777247f7367a963d668f25e797c9adf1 | [
"MIT"
] | 102 | 2021-02-28T00:58:36.000Z | 2022-03-30T09:29:34.000Z | django_for_startups/django_customizations/drf_customizations.py | Alex3917/django_for_startups | 9dda54f5777247f7367a963d668f25e797c9adf1 | [
"MIT"
] | 1 | 2021-07-11T18:45:29.000Z | 2021-07-11T18:45:29.000Z | django_for_startups/django_customizations/drf_customizations.py | Alex3917/django_for_startups | 9dda54f5777247f7367a963d668f25e797c9adf1 | [
"MIT"
] | 16 | 2021-06-23T18:34:46.000Z | 2022-03-30T09:27:34.000Z | # Standard Library imports
# Core Django imports
# Third-party imports
from rest_framework import permissions
from rest_framework.throttling import UserRateThrottle, AnonRateThrottle
# App imports
class BurstRateThrottle(UserRateThrottle):
    scope = 'burst'


class SustainedRateThrottle(UserRateThrottle):
    scope = 'sustained'


class HighAnonThrottle(AnonRateThrottle):
    rate = '5000000/day'


class AccountCreation(permissions.BasePermission):
    """ A user should be able to create an account without being authenticated, but only the
    owner of an account should be able to access that account's data in a GET method.
    """

    def has_permission(self, request, view):
        if (request.method == "POST") or request.user.is_authenticated:
            return True

        return False
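# The 'burst' and 'sustained' scopes declared above are resolved against the
# throttle rates in the Django settings; a sketch (rate values are illustrative):
#
#   REST_FRAMEWORK = {
#       'DEFAULT_THROTTLE_RATES': {
#           'burst': '60/min',
#           'sustained': '1000/day',
#       },
#   }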
| 24.787879 | 94 | 0.734719 | 94 | 818 | 6.351064 | 0.691489 | 0.026801 | 0.056951 | 0.046901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010687 | 0.199267 | 818 | 32 | 95 | 25.5625 | 0.900763 | 0.298289 | 0 | 0 | 0 | 0 | 0.053016 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.923077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b86ccfc144647099cbf5ac1e80b91ec536893766 | 171,517 | py | Python | python/mapCells.py | claraya/meTRN | a4e4911b26a295e22d7309d5feda026db3325885 | [
"MIT"
] | 2 | 2019-11-18T22:54:13.000Z | 2019-11-18T22:55:18.000Z | python/mapCells.py | claraya/meTRN | a4e4911b26a295e22d7309d5feda026db3325885 | [
"MIT"
] | null | null | null | python/mapCells.py | claraya/meTRN | a4e4911b26a295e22d7309d5feda026db3325885 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# perform cellular-resolution expression analyses!
import sys
import time
import optparse
import general
import hyper
import numpy
import math
import pickle
import pdb
import metrn
import modencode
import itertools
import os
import re
import datetime
import calendar
#import simplejson as json
from scipy.stats.stats import pearsonr
from runner import *
from scipy import stats
from network import Network
from network import export
print "Command:", " ".join(sys.argv)
print "Timestamp:", time.asctime(time.localtime())
""" define functions of internal use """
""" define a function to recover cells in a time range """
def getTargetCells(inobject="", inpath="", mode="collection", timeRange=list()):

    # grab cells from collection:
    if mode == "collection":

        # load collection cells:
        cells = list()
        for gene in os.listdir(inpath):
            cells.extend(open(inpath + gene).read().split("\n"))
        cells = general.clean(sorted(list(set(cells))))
        print "Loading collection cells:", len(cells)

    # grab cells from time-points:
    elif mode == "time":

        # load time-point cells:
        cells = list()
        for timePoint in os.listdir(inpath):
            if int(timePoint) in timeRange:
                cells += general.clean(open(inpath + timePoint).read().split("\n"))
        cells = sorted(list(set(cells)))
        print "Loading time-point/range cells:", len(cells)

    # return collected cells:
    return cells
""" define a function to construct a cell-parent relationships, and pedigree cell list """
def expressionBuilder(expressionfile, path, cutoff, minimum, metric="fraction.expression"):

    # build header dict:
    hd = general.build_header_dict(path + expressionfile)

    # process input expression data:
    quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = dict(), dict(), dict(), list()
    inlines = open(path + expressionfile).readlines()
    inlines.pop(0)
    for inline in inlines:
        initems = inline.strip().split("\t")
        cell, gene, rawSignal, metricSignal = initems[hd["cell.name"]], initems[hd["gene"]], initems[hd["cell.expression"]], initems[hd[metric]]
        trackedCells.append(cell)

        # store expression value
        if not gene in quantitation_matrix:
            quantitation_matrix[gene] = dict()
        quantitation_matrix[gene][cell] = float(metricSignal)

        # store tracked and expressing cells:
        if not gene in tracking_matrix:
            expression_matrix[gene] = list()
            tracking_matrix[gene] = list()
        tracking_matrix[gene].append(cell)
        if float(metricSignal) >= float(cutoff) and float(rawSignal) >= minimum:
            expression_matrix[gene].append(cell)
    trackedCells = list(set(trackedCells))
    return quantitation_matrix, expression_matrix, tracking_matrix, trackedCells
""" define a function to construct a cell-parent relationships, and pedigree cell list """
def relationshipBuilder(pedigreefile, path, trackedCells=list(), lineages="complete", mechanism="simple"):
    cell_dict, parent_dict = dict(), dict()
    inlines = open(path + pedigreefile).readlines()
    header = inlines.pop(0)
    for inline in inlines:
        cell, binCell, parent, binParent = inline.strip().split(",")[:4]
        tissues = inline.strip().split(",")[5]
        if not parent == "" and not cell == "":
            if mechanism == "simple" or lineages == "complete" or (lineages == "tracked" and parent in trackedCells and cell in trackedCells):
                if not parent in parent_dict:
                    parent_dict[parent] = list()
                parent_dict[parent].append(cell)
                cell_dict[cell] = parent
    pedigreeCells = sorted(list(set(cell_dict.keys()).union(set(parent_dict.keys()))))
    return cell_dict, parent_dict, pedigreeCells
""" define a function to generate the underlying tree of a given parent """
def treeBuilder(parent_dict, cell_dict, highlights=list(), nodeColor="#FFFFFF", lineColor="#336699", textColor="#000000", highlightColor="#CC0000"):

    # set color rules:
    groups = { "unknown" : textColor, "highlight" : highlightColor }
    nodeColors = { "unknown" : nodeColor, "highlight" : highlightColor }
    lineColors = { "unknown" : lineColor, "highlight" : highlightColor }
    textColors = { "unknown" : textColor, "highlight" : highlightColor }

    # initialize tree:
    tree = {}
    for child in cell_dict:
        parent = cell_dict[child]

        # determine whether to highlight the parent and/or the child:
        pkey, ckey = "unknown", "unknown"
        if parent in highlights:
            pkey = "highlight"
        if child in highlights:
            ckey = "highlight"

        # make an instance of a class for the parent if necessary:
        if not tree.has_key(parent):
            tree[parent] = {'name':parent,'group':groups[pkey],'nodeColor':nodeColors[pkey],'lineColor':lineColors[pkey],'textColor':textColors[pkey],'children':[]}

        # make an instance of a class for the child if necessary:
        if not tree.has_key(child):
            tree[child] = {'name':child,'group':groups[ckey],'nodeColor':nodeColors[ckey],'lineColor':lineColors[ckey],'textColor':textColors[ckey],'children':[]}

        # add child object to parent if necessary:
        if not tree[child] in tree[parent]['children']:
            tree[parent]['children'].append(tree[child])
    return tree
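# A tiny illustration (hypothetical two-cell pedigree): with
# cell_dict = {"ABa": "AB", "ABp": "AB"}, treeBuilder returns a dict of node
# records keyed by cell name, where tree["AB"]['children'] holds the "ABa"
# and "ABp" node records, ready for JSON export.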
""" define a function to generate the list of cells that are parents to a given cell """
def ascendantsCollector(cell, parent_dict, cell_dict, ascendants=None, sort=True):
    # a None default avoids sharing one mutable list between separate calls
    if ascendants is None:
        ascendants = list()
    if not cell in ascendants:
        ascendants.append(cell)
    if cell in cell_dict:
        parent = cell_dict[cell]
        ascendants.append(parent)
        ascendants = ascendantsCollector(parent, parent_dict, cell_dict, ascendants, sort=sort)
    if sort:
        return sorted(list(set(ascendants)))
    else:
        return ascendants
""" define a function to generate the list of cells that are progeny to a given parent """
def descendantsCollector(parent, parent_dict, cell_dict, descendants=None, sort=True):
    # a None default avoids sharing one mutable list between separate calls
    if descendants is None:
        descendants = list()
    if not parent in descendants:
        descendants.append(parent)
    if parent in parent_dict:
        for cell in parent_dict[parent]:
            descendants.append(cell)
            descendants = descendantsCollector(cell, parent_dict, cell_dict, descendants, sort=sort)
    if sort:
        return sorted(list(set(descendants)))
    else:
        return descendants
""" define a function to generate the list of cells that are progeny to a given parent (using combinations function) """
def lineageGenerator(parent, parent_dict, cell_dict):
    descendants = descendantsCollector(parent, parent_dict, cell_dict, descendants=list())
    gList = list()
    for r in range(1, len(descendants)+1):
        for gCells in itertools.combinations(descendants, r):
            process = True
            gCells = list(gCells)
            for gCell in gCells:
                if gCell != parent:
                    if not cell_dict[gCell] in gCells:
                        process = False
            if process:
                gList.append(",".join(sorted(gCells)))
    return gList
""" define a function to generate the list of cells that are progeny to a given parent (using lineage growth) """
def lineageBuilder(parent, parent_dict, cell_dict, limit="OFF", descendants="ON"):
    mList = [parent]
    for mCells in mList:
        aCells, bCells, xCells, exit = str(mCells), str(mCells), str(mCells), False
        for mCell in mCells.split(","):
            if mCell in parent_dict and len(parent_dict[mCell]) == 2:
                aCell, bCell = parent_dict[mCell]
                if not aCell in aCells.split(","):
                    aCells = ",".join(sorted(mCells.split(",") + [aCell]))
                if not bCell in bCells.split(","):
                    bCells = ",".join(sorted(mCells.split(",") + [bCell]))
                if not aCell in xCells.split(",") and not bCell in xCells.split(","):
                    xCells = ",".join(sorted(mCells.split(",") + [aCell, bCell]))
                if not aCells in mList:
                    mList.append(aCells)
                if not bCells in mList:
                    mList.append(bCells)
                if not xCells in mList:
                    mList.append(xCells)
                if limit != "OFF" and len(mList) >= limit:
                    if descendants == "ON":
                        aCellx = sorted(list(set(mCells.split(",") + descendantsCollector(aCell, parent_dict, cell_dict, descendants=list()))))
                        bCellx = sorted(list(set(mCells.split(",") + descendantsCollector(bCell, parent_dict, cell_dict, descendants=list()))))
                        xCellx = sorted(list(set(aCellx).union(set(bCellx))))
                        aCellx = ",".join(aCellx)
                        bCellx = ",".join(bCellx)
                        xCellx = ",".join(xCellx)
                        if not aCellx in mList:
                            mList.append(aCellx)
                        if not bCellx in mList:
                            mList.append(bCellx)
                        if not xCellx in mList:
                            mList.append(xCellx)
                    exit = True
        if exit:
            break
    return sorted(mList)
""" define a function to generate lists of related-cells from a given set of of cells """
def lineageCollector(cells, parent_dict, cell_dict, siblings="ON"):
    collections, parent_tree, cell_tree = list(), dict(), dict()
    #ascendants = ascendantsCollector(descendant, parent_tree, cell_tree, ascendants=list())
    #descendants = descendantsCollector(parent, parent_dict, cell_dict, descendants=list())
    print len(cells), cells
    for cell in sorted(cells):
        found, relatives = False, [cell]
        if cell in cell_dict:
            relatives.append(cell_dict[cell])
        if cell in parent_dict:
            relatives.extend(parent_dict[cell])
        if siblings == "ON" and cell in cell_dict:
            relatives.extend(parent_dict[cell_dict[cell]])
        r, relatives = 0, list(set(relatives).intersection(set(cells)))
        print cell, relatives, "<-- relatives"
        updated = list()
        for collection in collections:
            if set(relatives).intersection(set(collection)):
                print collection, "<-- collection"
                collection.extend(relatives)
                collection = list(set(collection))
                print collection, "<-- updated"
                r += 1
                pdb.set_trace()
            updated.append(collection)
        if r == 0:
            updated.append(relatives)
        collections = updated
    return collections
""" define a function to calculate the number of possible subsets """
def combinationCalculator(n, R):
    combinations = 0
    for r in range(1,R):
        combinations += math.factorial(n)/(math.factorial(r)*math.factorial(n-r))
    return combinations
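# e.g. combinationCalculator(4, 3) = C(4,1) + C(4,2) = 4 + 6 = 10
# (note that range(1, R) excludes r = R itself)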
""" define a function to calculate the number of divisions between two cells """
def divisionCalculator(aCell, aParent, parent_dict, cell_dict):
    divisions = 0
    while aCell in cell_dict and aCell != aParent:
        if cell_dict[aCell] == aParent:
            divisions += 1
            break
        else:
            aCell = cell_dict[aCell]
            divisions += 1
    return divisions
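# e.g. with cell_dict = {"ABa": "AB", "AB": "P0"} (a hypothetical mini-pedigree),
# divisionCalculator("ABa", "P0", dict(), cell_dict) -> 2 divisions
# (parent_dict is unused here, so an empty dict suffices for illustration)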
""" define a function that calculates the lineage distance between two cells """
def lineageDistance(aCell, bCell, parent_dict, cell_dict):
    aParents = ascendantsCollector(aCell, parent_dict, cell_dict)
    bParents = ascendantsCollector(bCell, parent_dict, cell_dict)
    xParents = set(aParents).intersection(set(bParents))
    xDistances = dict()
    #print len(xParents), aCell, bCell, ":", ", ".join(xParents)
    for xParent in xParents:
        aDistance = divisionCalculator(aCell, xParent, parent_dict, cell_dict)
        bDistance = divisionCalculator(bCell, xParent, parent_dict, cell_dict)
        xDistances[xParent] = aDistance + bDistance
    xParents = general.valuesort(xDistances)
    distance, ancestor = xDistances[xParents[0]], xParents[0]
    return distance, ancestor
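# e.g. with cell_dict = {"ABa": "AB", "ABp": "AB", "AB": "P0"} (hypothetical),
# lineageDistance("ABa", "ABp", dict(), cell_dict) -> (2, "AB"), assuming
# general.valuesort orders keys by ascending value (nearest common ancestor first).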
def main():
    parser = optparse.OptionParser()
    parser.add_option("--path", action="store", type="string", dest="path", help="Path from script to files")
    parser.add_option("--organism", action = "store", type = "string", dest = "organism", help = "Target organism for operations...", default="OFF")
    parser.add_option("--mode", action="store", type="string", dest="mode", help="Operation modes: import, map, or other...")
    parser.add_option("--peaks", action="store", type="string", dest="peaks", help="Peaks set to be used.", default="OFF")
    parser.add_option("--infile", action="store", type="string", dest="infile", help="Input file for abundance representation")
    parser.add_option("--nuclear", action = "store", type = "string", dest = "nuclear", help = "Peaks are only nuclear?", default="ON")
    parser.add_option("--expression", action="store", type="string", dest="expression", help="Input expression file for abundance representation", default="OFF")
    parser.add_option("--pedigree", action="store", type="string", dest="pedigree", help="Input pedigree file", default="OFF")
    parser.add_option("--mapping", action="store", type="string", dest="mapping", help="Input mapping file; associates tissue labels to more generic terms!", default="OFF")
    parser.add_option("--tissues", action="store", type="string", dest="tissues", help="Input tissues file", default="OFF")
    parser.add_option("--times", action="store", type="string", dest="times", help="Input cell times file", default="OFF")
    parser.add_option("--name", action="store", type="string", dest="name", help="Output file name", default="")
    parser.add_option("--nametag", action="store", type="string", dest="nametag", help="Output file name addition tag", default="")
    parser.add_option("--collection", action="store", type="string", dest="collection", help="Cell collection subset name", default="OFF")
    parser.add_option("--technique", action = "store", type = "string", dest = "technique", help = "What kind of matrix should I build? binary, fraction, or normal", default="binary")
    parser.add_option("--neurons", action="store", type="string", dest="neurons", help="Neurons to be used for 'collection' analysis...", default="OFF")
    parser.add_option("--factors", action="store", type="string", dest="factors", help="Infer factors (OFF) or load from file?", default="OFF")
    parser.add_option("--measure", action="store", type="string", dest="measure", help="Maximum (cells) or mean", default="avg.expression")
    parser.add_option("--fraction", action="store", type="float", dest="fraction", help="Fractional expression cutoff", default=0.1)
    parser.add_option("--minimum", action="store", type="float", dest="minimum", help="Minimum raw expression cutoff", default=2000)
    parser.add_option("--inherit", action="store", type="string", dest="inherit", help="Signal inheritance policy: 'max' or 'last' of ancestor expression signals...", default="last")
    parser.add_option("--overlap", action="store", type="float", dest="overlap", help="Cellular overlap cutoff", default=0.75)
    parser.add_option("--pvalue", action="store", type="float", dest="pvalue", help="Significance cutoff", default=0.01)
    parser.add_option("--header", action="store", type="string", dest="header", help="Is there a header?", default="OFF")
    parser.add_option("--format", action="store", type="string", dest="format", help="How should formatting be done?", default="bed")
    parser.add_option("--reference", action="store", type="string", dest="reference", help="Gene-coordinate reference file", default="in2shape_ce_wormbased_COM_gx.bed")
    parser.add_option("--up", action = "store", type = "int", dest = "up", help = "Upstream space", default=0)
    parser.add_option("--dn", action = "store", type = "int", dest = "dn", help = "Downstream space", default=0)
    parser.add_option("--method", action="store", type="string", dest="method", help="Should descendant cells or descendant lineages be examined?", default="lineages")
    parser.add_option("--cells", action="store", type="string", dest="cells", help="Reduce lineage cells to tracked cells (tracked) or use complete lineage cells (complete)?", default="tracked")
    parser.add_option("--lineages", action="store", type="string", dest="lineages", help="Reduce lineage tree to tracked cells (tracked) or use complete lineage tree (complete)?", default="tracked")
    parser.add_option("--descendants", action="store", type="string", dest="descendants", help="Apply descendants cutoff?", default="OFF")
    parser.add_option("--ascendants", action="store", type="string", dest="ascendants", help="Apply ascendants cutoff?", default="OFF")
    parser.add_option("--extend", action="store", type="string", dest="extend", help="Extend to include 0 signal expression values for cells not measured?", default="OFF")
    parser.add_option("--overwrite", action="store", type="string", dest="overwrite", help="Overwrite outputs?", default="OFF")
    parser.add_option("--parameters", action="store", type="string", dest="parameters", help="Optional parameters...", default="OFF")
    parser.add_option("--limit", action="store", type="string", dest="limit", help="Limit on lineage expansion? Numeric integer.", default="OFF")
    parser.add_option("--query", action="store", type="string", dest="query", help="Query collections of cells whose enrichment will be searched in target cells", default="OFF")
    parser.add_option("--source", action="store", type="string", dest="source", help="File source for inputs...", default="OFF")
    parser.add_option("--target", action="store", type="string", dest="target", help="Target collections of cells in which enrichment is searched for", default="OFF")
    parser.add_option("--domain", action="store", type="string", dest="domain", help="Domain of co-associations for hybrid-type analyses", default="OFF")
    parser.add_option("--A", action = "store", type = "string", dest = "a", help = "Paths to files of interest", default="OFF")
    parser.add_option("--B", action = "store", type = "string", dest = "b", help = "Files to be hybridized", default="OFF")
    parser.add_option("--indexes", action = "store", type = "string", dest = "indexes", help = "Indexes for matrix construction...", default="OFF")
    parser.add_option("--values", action = "store", type = "string", dest = "values", help = "Values for matrix construction...", default="OFF")
    parser.add_option("--contexts", action = "store", type = "string", dest = "contexts", help = "What contexts of development should I track?", default="OFF")
    parser.add_option("--exclude", action="store", type="string", dest="exclude", help="Are there items that should be excluded?", default="")
    parser.add_option("--start", action = "store", type = "int", dest = "start", help = "Start development time for cell search", default=1)
    parser.add_option("--stop", action = "store", type = "int", dest = "stop", help = "End development time for cell search", default=250)
    parser.add_option("--step", action = "store", type = "int", dest = "step", help = "Step size", default=1)
    parser.add_option("--total", action = "store", type = "int", dest = "total", help = "Total simulations (indexes) for 'master' operations ", default=1000)
    parser.add_option("--threads", action = "store", type = "int", dest = "threads", help = "Parallel processing threads", default=1)
    parser.add_option("--chunks", action = "store", type = "int", dest = "chunks", help = "", default=100)
    parser.add_option("--module", action = "store", type = "string", dest = "module", help = "", default="md1")
    parser.add_option("--qsub", action = "store", type = "string", dest = "qsub", help = "Qsub configuration header", default="OFF")
    parser.add_option("--server", action = "store", type = "string", dest = "server", help = "Are we on the server?", default="OFF")
    parser.add_option("--job", action = "store", type = "string", dest = "job", help = "Job name for cluster", default="OFF")
    parser.add_option("--copy", action = "store", type = "string", dest = "copy", help = "Copy simulated peaks to analysis folder?", default="OFF")
    parser.add_option("--tag", action = "store", type = "string", dest = "tag", help = "Add tag to TFBS?", default="")
    (option, args) = parser.parse_args()
    # import paths:
    if option.server == "OFF":
        path_dict = modencode.configBuild(option.path + "/input/" + "configure_path.txt")
    elif option.server == "ON":
        path_dict = modencode.configBuild(option.path + "/input/" + "configure_server.txt")

    # specify input and output paths:
    inpath = path_dict["input"]
    extraspath = path_dict["extras"]
    pythonpath = path_dict["python"]
    scriptspath = path_dict["scripts"]
    downloadpath = path_dict["download"]
    fastqpath = path_dict["fastq"]
    bowtiepath = path_dict["bowtie"]
    bwapath = path_dict["bwa"]
    macspath = path_dict["macs"]
    memepath = path_dict["meme"]
    idrpath = path_dict["idr"]
    igvpath = path_dict["igv"]
    testpath = path_dict["test"]
    processingpath = path_dict["processing"]
    annotationspath = path_dict["annotations"]
    peakspath = path_dict["peaks"]
    gopath = path_dict["go"]
    hotpath = path_dict["hot"]
    qsubpath = path_dict["qsub"]
    coassociationspath = path_dict["coassociations"]
    bindingpath = path_dict["binding"]
    neuronspath = path_dict["neurons"]
    cellspath = path_dict["cells"]

    # standardize paths for analysis:
    alignerpath = bwapath
    indexpath = alignerpath + "index/"
    alignmentpath = alignerpath + "alignment/"
    qcfilterpath = alignerpath + "qcfilter/"
    qcmergepath = alignerpath + "qcmerge/"

    # import configuration dictionaries:
    source_dict = modencode.configBuild(inpath + "configure_source.txt")
    method_dict = modencode.configBuild(inpath + "configure_method.txt")
    context_dict = modencode.configBuild(inpath + "configure_context.txt")

    # define organism parameters:
    if option.organism == "hs" or option.organism == "h.sapiens":
        organismTag = "hs"
        #organismIGV = "ce6"
    elif option.organism == "mm" or option.organism == "m.musculus":
        organismTag = "mm"
        #organismIGV = "ce6"
    elif option.organism == "ce" or option.organism == "c.elegans":
        organismTag = "ce"
        #organismIGV = "ce6"
    elif option.organism == "dm" or option.organism == "d.melanogaster":
        organismTag = "dm"
        #organismIGV = "dm5"

    # specify genome size file:
    if option.nuclear == "ON":
        chromosomes = metrn.chromosomes[organismTag]["nuclear"]
        genome_size_file = option.path + "/input/" + metrn.reference[organismTag]["nuclear_sizes"]
        genome_size_dict = general.build_config(genome_size_file, mode="single", separator="\t", spaceReplace=True)
    else:
        chromosomes = metrn.chromosomes[organismTag]["complete"]
        genome_size_file = option.path + "/input/" + metrn.reference[organismTag]["complete_sizes"]
        genome_size_dict = general.build_config(genome_size_file, mode="single", separator="\t", spaceReplace=True)
# load gene ID dictionaries:
id2name_dict, name2id_dict = modencode.idBuild(inpath + metrn.reference[organismTag]["gene_ids"], "Sequence Name (Gene)", "Gene Public Name", mode="label", header=True, idUpper=True, nameUpper=True)
# update peaks path:
peakspath = peakspath + option.peaks + "/"
# define input/output folders:
expressionpath = cellspath + "expression/"
correctionpath = cellspath + "correction/"
lineagepath = cellspath + "lineage/"
bindingpath = cellspath + "peaks/"
overlappath = cellspath + "overlap/"
cellsetpath = cellspath + "cellset/"
genesetpath = cellspath + "geneset/"
reportspath = cellspath + "reports/"
comparepath = cellspath + "compare/"
matrixpath = cellspath + "matrix/"
tissuespath = cellspath + "tissues/"
distancepath = cellspath + "distance/"
hybridpath = cellspath + "hybrid/"
dynamicspath = cellspath + "dynamics/"
cubismpath = cellspath + "cubism/"
timepath = cellspath + "time/"
cellnotationspath = cellspath + "annotations/"
general.pathGenerator(expressionpath)
general.pathGenerator(correctionpath)
general.pathGenerator(lineagepath)
general.pathGenerator(bindingpath)
general.pathGenerator(overlappath)
general.pathGenerator(cellsetpath)
general.pathGenerator(genesetpath)
general.pathGenerator(reportspath)
general.pathGenerator(comparepath)
general.pathGenerator(matrixpath)
general.pathGenerator(tissuespath)
general.pathGenerator(distancepath)
general.pathGenerator(timepath)
general.pathGenerator(hybridpath)
general.pathGenerator(dynamicspath)
general.pathGenerator(cubismpath)
general.pathGenerator(cellnotationspath)
# generate expression flag:
if option.measure == "max.expression":
expression_flag = "maxCel_"
elif option.measure == "avg.expression":
expression_flag = "avgExp_"
# check that the index range is coherent:
if option.stop > option.total:
print
print "Error: Range exceeded! Stop index is larger than total."
print
return
# master mode:
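# Master modes are specified as "master:<mode>". The index range [start, stop]
# is expanded into one slave command per index; commands are grouped into
# modules of --chunks commands, and each module is run locally (--threads)
# or submitted to the cluster when --qsub is set.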
if "master" in option.mode:
# capture master mode:
master, mode = option.mode.split(":")
# prepare for qsub:
bash_path = str(option.path + "/data/cells/runs/").replace("//","/")
bash_base = "_".join([mode, option.peaks, option.name]) + "-M"
qsub_base = "_".join([mode, option.peaks, option.name])
general.pathGenerator(bash_path)
if option.qsub != "OFF":
qsub_header = open(qsubpath + option.qsub).read()
qsub = True
else:
qsub_header = ""
qsub = False
if option.job == "QSUB":
qsub_header = qsub_header.replace("qsubRunner", "qsub-" + qsub_base)
elif option.job != "OFF":
qsub_header = qsub_header.replace("qsubRunner", "qsub-" + option.job)
bash_base = option.job + "-M"
# update server path:
if option.qsub != "OFF":
option.path = serverPath(option.path)
# prepare slave modules:
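# m: module counter; steps: indexes processed; modules: list of command
# chunks (each chunk runs as one job); start: first index of the current
# chunk; complete: flags whether the current chunk has been flushed.
# For example, start=1, stop=250, step=1, chunks=100 yields modules of
# 100, 100, and 50 commands.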
m, steps, modules, commands, sequences, chunks, start, complete = 1, 0, list(), list(), list(), option.chunks, option.start, False
for index in range(option.start, option.stop+1, option.step):
run = "rn" + general.indexTag(index, option.total)
steps += 1
# cellular peak generation mode:
if mode == "cell.peaks":
command = "python <<CODEPATH>>mapCells.py --path <<PATH>> --organism <<ORGANISM>> --mode <<MODE>> --peaks <<PEAKS>> --start <<START>> --stop <<STOP>> --total <<TOTAL>> --expression <<EXPRESSION>> --collection <<COLLECTION>> --times <<TIMES>> --fraction <<FRACTION>> --minimum <<MINIMUM>> --name <<NAME>> --qsub <<QSUB>> --server <<SERVER>> --module <<MODULE>>"
command = command.replace("<<CODEPATH>>", option.path + "/python/")
command = command.replace("<<PATH>>", option.path)
command = command.replace("<<ORGANISM>>", option.organism)
command = command.replace("<<MODE>>", mode)
command = command.replace("<<PEAKS>>", option.peaks)
command = command.replace("<<START>>", str(index))
command = command.replace("<<STOP>>", str(index))
command = command.replace("<<TOTAL>>", str(option.total))
command = command.replace("<<EXPRESSION>>", option.expression)
command = command.replace("<<COLLECTION>>", option.collection)
command = command.replace("<<TIMES>>", option.times)
command = command.replace("<<FRACTION>>", str(option.fraction))
command = command.replace("<<MINIMUM>>", str(option.minimum))
command = command.replace("<<NAME>>", option.name + general.indexTag(index, option.total))
command = command.replace("<<QSUB>>", option.qsub)
command = command.replace("<<SERVER>>", option.server)
command = command.replace("<<MODULE>>", "md" + str(m))
# cellular annotation mode:
if mode == "cell.annotation":
command = "python <<CODEPATH>>mapCells.py --path <<PATH>> --organism <<ORGANISM>> --mode <<MODE>> --peaks <<PEAKS>> --start <<START>> --stop <<STOP>> --total <<TOTAL>> --infile <<INFILE>> --collection <<COLLECTION>> --times <<TIMES>> --name <<NAME>> --qsub <<QSUB>> --server <<SERVER>> --module <<MODULE>>"
command = command.replace("<<CODEPATH>>", option.path + "/python/")
command = command.replace("<<PATH>>", option.path)
command = command.replace("<<ORGANISM>>", option.organism)
command = command.replace("<<MODE>>", mode)
command = command.replace("<<PEAKS>>", option.peaks)
command = command.replace("<<START>>", str(index))
command = command.replace("<<STOP>>", str(index))
command = command.replace("<<TOTAL>>", str(option.total))
command = command.replace("<<INFILE>>", option.infile)
command = command.replace("<<COLLECTION>>", option.collection)
command = command.replace("<<TIMES>>", option.times)
command = command.replace("<<NAME>>", option.name + general.indexTag(index, option.total) + option.nametag)
command = command.replace("<<QSUB>>", option.qsub)
command = command.replace("<<SERVER>>", option.server)
command = command.replace("<<MODULE>>", "md" + str(m))
# cellular overlap mode:
if mode == "cell.overlap":
command = "python <<CODEPATH>>mapCells.py --path <<PATH>> --organism <<ORGANISM>> --mode <<MODE>> --peaks <<PEAKS>> --start <<START>> --stop <<STOP>> --total <<TOTAL>> --expression <<EXPRESSION>> --collection <<COLLECTION>> --times <<TIMES>> --fraction <<FRACTION>> --minimum <<MINIMUM>> --extend <<EXTEND>> --name <<NAME>> --qsub <<QSUB>> --server <<SERVER>> --module <<MODULE>>"
command = command.replace("<<CODEPATH>>", option.path + "/python/")
command = command.replace("<<PATH>>", option.path)
command = command.replace("<<ORGANISM>>", option.organism)
command = command.replace("<<MODE>>", mode)
command = command.replace("<<PEAKS>>", option.peaks)
command = command.replace("<<START>>", str(index))
command = command.replace("<<STOP>>", str(index))
command = command.replace("<<TOTAL>>", str(option.total))
command = command.replace("<<EXPRESSION>>", option.expression)
command = command.replace("<<COLLECTION>>", option.collection + general.indexTag(index, option.total) + option.nametag)
command = command.replace("<<TIMES>>", option.times)
command = command.replace("<<NAME>>", option.name)
command = command.replace("<<FRACTION>>", str(option.fraction))
command = command.replace("<<MINIMUM>>", str(option.minimum))
command = command.replace("<<EXTEND>>", str(option.extend))
command = command.replace("<<QSUB>>", option.qsub)
command = command.replace("<<SERVER>>", option.server)
command = command.replace("<<MODULE>>", "md" + str(m))
# coassociations hybrid mode:
if mode == "cell.hybrid":
collection = option.collection + general.indexTag(index, option.total) + option.nametag
command = "python <<CODEPATH>>mapCells.py --path <<PATH>> --organism <<ORGANISM>> --mode <<MODE>> --A <<A>> --B <<B>> --indexes <<INDEXES>> --values <<VALUES>> --contexts <<CONTEXTS>>"
command = command.replace("<<CODEPATH>>", option.path + "/python/")
command = command.replace("<<PATH>>", option.path)
command = command.replace("<<ORGANISM>>", option.organism)
command = command.replace("<<MODE>>", mode)
command = command.replace("<<A>>", option.a)
command = command.replace("<<B>>", collection + "/mapcells_" + collection + "_matrix_overlap")
command = command.replace("<<INDEXES>>", option.indexes)
command = command.replace("<<VALUES>>", option.values)
command = command.replace("<<CONTEXTS>>", option.contexts)
# is it time to export a chunk? (a module is flushed once 'chunks' commands have accumulated since 'start')
if index-start+option.step == chunks:
# update start, modules, commands, and module count (m):
start = index + option.step
commands.append(command)
modules.append(commands)
commands = list()
complete = True
m += 1
# otherwise, note that the most recent command has not yet been stored:
else:
complete = False
# update if there are additional commands:
if not complete:
commands.append(command)
modules.append(commands)
m += 1
# launch commands:
print
print "Launching comparisons:", len(modules)
#for module in modules:
# for command in module:
# print command
runCommands(modules, threads=option.threads, mode="module.run", run_mode="verbose", run_path=bash_path, run_base=bash_base, record=True, qsub_header=qsub_header, qsub=qsub)
print "Analyses performed:", len(modules)
print
# filter cells:
elif option.mode == "filter":
# load cells to filter:
filterCells = open(path_dict[option.source] + option.target).read().strip().split("\n")
# generate output file:
f_output = open(path_dict[option.source] + option.name, "w")
# process input lines:
f, k = 0, 0
inlines = open(path_dict[option.source] + option.infile).readlines()
for inline in inlines:
process = True
items = inline.strip().split(",")
for item in items:
if item in filterCells:
process = False
f += 1
if process:
print >>f_output, inline.strip()
k += 1
print
print "Input lines:", len(inlines)
print "Output lines:", k, "(" + str(f) + " filtered)"
print
# close output:
f_output.close()
# simplify cell annotations:
elif option.mode == "simply":
# generate output file:
f_output = open(path_dict[option.source] + option.name, "w")
# process input lines:
f, k = 0, 0
inlines = open(path_dict[option.source] + option.infile).read().strip().replace("\r","\n").split("\n")
for inline in inlines:
if "cell_mapping" in option.infile:
regExp, original, updated = inline.strip().split(",")
if updated == "":
annotation = str(original)
else:
annotation = str(updated)
f += 1
print >>f_output, ",".join([regExp, annotation])
k += 1
print
print "Input lines:", len(inlines)
print "Output lines:", k, "(" + str(f) + " simplified)"
print
# close output:
f_output.close()
# robustness analysis mode:
elif option.mode == "robust":
import itertools
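# Robustness analysis: for every gene measured in more than one imaging
# series, compute the Pearson correlation between each pair of replicate
# series over the cells measured in both, and summarize how many genes
# have each replicate count.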
print
print "Loading input series data..."
signalDict, replicateDict = dict(), dict()
inlines = open(extraspath + option.infile).read().replace("\r","\n").split("\n")
columnDict = dict()
inline, index = inlines.pop(0), 0
for column in inline.strip().split(","):
columnDict[column] = index
index += 1
for inline in inlines:
valueDict, initems = dict(), inline.strip().split(",")
if initems != [""]:
for column in columnDict:
valueDict[column] = initems[columnDict[column]]
gene, series, cell, value = valueDict["Gene"], valueDict["Series"], valueDict["Cell"], valueDict["Express"]
if not gene in signalDict:
signalDict[gene] = dict()
if not cell in signalDict[gene]:
signalDict[gene][cell] = dict()
signalDict[gene][cell][series] = value
if not gene in replicateDict:
replicateDict[gene] = list()
replicateDict[gene].append(series)
replicateDict[gene] = sorted(list(set(replicateDict[gene])))
# define output file:
f_output = open(expressionpath + "mapcells_" + option.mode + "_" + option.infile.replace(".csv",".txt"), "w")
s_output = open(expressionpath + "mapcells_" + option.mode + "_" + option.infile.replace(".csv",".sum"), "w")
print >>f_output, "\t".join(["gene","series.count","i","j","cells","pearson.correlation","pearson.pvalue"])
print >>s_output, "\t".join(["series.count", "gene.count"])
print "Scoring replicate correlations .."
countDict = dict()
for gene in signalDict:
if not len(replicateDict[gene]) in countDict:
countDict[len(replicateDict[gene])] = list()
countDict[len(replicateDict[gene])].append(gene)
if len(replicateDict[gene]) > 1:
#print gene, len(replicateDict[gene])
for (i, j) in itertools.combinations(replicateDict[gene], 2):
iValues, jValues = list(), list()
for cell in signalDict[gene]:
if i in signalDict[gene][cell] and j in signalDict[gene][cell]:
iValues.append(float(signalDict[gene][cell][i]))
jValues.append(float(signalDict[gene][cell][j]))
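# correlate the two series over their shared cells; pearsonr returns the
# correlation coefficient and its two-tailed p-value: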
correlation, corPvalue = pearsonr(iValues, jValues)
output = [gene, len(replicateDict[gene]), i, j, len(iValues), correlation, corPvalue]
print >>f_output, "\t".join(map(str, output))
#pdb.set_trace()
for count in sorted(countDict.keys()):
print >>s_output, "\t".join(map(str, [count, len(countDict[count])]))
# close output file:
f_output.close()
s_output.close()
print
# fillin mode:
elif option.mode == "fillin":
print
print "Loading annotation information..."
annotationDict = general.build2(extraspath + option.infile, id_column="lineage", split=",")
print "Checking parental annotation..."
missingCells = list()
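# In the lineage nomenclature used here, a cell's parent name is the cell
# name minus its final character (e.g. "ABal" is the parent of "ABala"):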
for cell in annotationDict:
parent = cell[:len(cell)-1]
if not parent in annotationDict:
if not parent in missingCells:
missingCells.append(parent)
print parent, cell
print
# import mode:
elif option.mode == "import":
# Cell annotations are cell-type and tissue-type (in the new Murray version):
# specificDict: cell > cell-type
# generalDict: cell > tissue-type
# construct tissue dictionary (if necessary):
if option.tissues != "OFF":
print
print "Loading general and specific tissue information..."
specificDict = general.build2(extraspath + option.tissues, i="lineage", x="cell", mode="values", split=",")
specificTotal = specificDict.values()
generalDict = general.build2(extraspath + option.tissues, i="lineage", x="tissue", mode="values", split=",")
generalTotal = generalDict.values()
print "Generating tissue classes..."
classification = {
"rectal" : "excretory",
"na" : "other"
}
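# Substring-based mapping from general tissue labels to broader classes;
# the "g" label is treated separately as neuron/glial, and labels that
# match no key keep their general tissue name (counted in classMissing).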
classDict, classTotal, classMissing = dict(), list(), 0
for cell in generalDict:
generalTissue = generalDict[cell]
generalHits, classHits = list(), list()
if generalTissue == "g":
classTissue = "neuron/glial"
generalHits.append(generalTissue)
classHits.append(classTissue)
else:
for classTag in classification:
if classTag in generalTissue:
classTissue = classification[classTag]
generalHits.append(generalTissue)
classHits.append(classTissue)
generalHits, classHits = list(set(generalHits)), list(set(classHits))
#print generalTissue, ":", ", ".join(classHits)
if len(classHits) > 1:
classTissue = "mixed"
elif len(classHits) == 1:
classTissue = classHits[0]
elif len(classHits) == 0:
classTissue = generalTissue
classMissing += 1
classDict[cell] = classTissue
classTotal.append(classTissue)
classTotal = sorted(list(set(classTotal)))
print
print "Specific tissue terms:", len(set(specificDict.values()))
print "General tissue terms:", len(set(generalDict.values()))
generalCounts = dict()
for cell in generalDict:
generalTissue = generalDict[cell]
if not generalTissue in generalCounts:
generalCounts[generalTissue] = 0
generalCounts[generalTissue] += 1
generalTissues = general.valuesort(generalCounts)
generalTissues.reverse()
for generalTissue in generalTissues:
print "\t" + generalTissue, ":", generalCounts[generalTissue]
print
print "Class tissue terms:", len(set(classDict.values()))
classCounts = dict()
for cell in classDict:
classTissue = classDict[cell]
if not classTissue in classCounts:
classCounts[classTissue] = 0
classCounts[classTissue] += 1
classTissues = general.valuesort(classCounts)
classTissues.reverse()
for classTissue in classTissues:
print "\t" + classTissue, ":", classCounts[classTissue]
#pdb.set_trace()
# prepare expression matrixes:
series2cell_dict, gene2cell_dict, cell2gene_dict, gene2cell_list, allCells = dict(), dict(), dict(), dict(), list()
# load expression data per series:
print
print "Loading cellular-expression data..."
inlines = open(extraspath + option.infile).read().replace("\r","\n").split("\n")
inheader = inlines.pop(0)
for inline in inlines:
if not inline == "":
series, cell, gene, expression = inline.strip().split(",")
gene = gene.upper()
if not gene in option.exclude.split(","):
if not cell in cell2gene_dict:
cell2gene_dict[cell] = dict()
if not gene in cell2gene_dict[cell]:
cell2gene_dict[cell][gene] = dict()
if not gene in gene2cell_dict:
gene2cell_dict[gene] = dict()
gene2cell_list[gene] = list()
if not cell in gene2cell_dict[gene]:
gene2cell_dict[gene][cell] = dict()
if not series in series2cell_dict:
series2cell_dict[series] = dict()
gene2cell_dict[gene][cell][series] = float(expression)
cell2gene_dict[cell][gene][series] = float(expression)
series2cell_dict[series][cell] = float(expression)
if not cell in gene2cell_list[gene]:
gene2cell_list[gene].append(cell)
if not cell in allCells:
allCells.append(cell)
# store cell-parent relationships:
print "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, mechanism="simple")
# construct tissue dictionary (if necessary):
if option.tissues != "OFF":
print
print "Expanding cell tissue information..."
matchDict = { "specific":dict(), "general":dict(), "class":dict() }
matchExpansion, matchTotal, matchMissing = list(), 0, 0
for cell in pedigreeCells:
if cell in generalDict and generalDict[cell] != "na":
matchDict["specific"][cell] = specificDict[cell]
matchDict["general"][cell] = generalDict[cell]
matchDict["class"][cell] = classDict[cell]
else:
# find most closely-related, annotated cell (and use its associated tissue annotation):
distanceDict = dict()
queryDict, matchTissues = dict(), list()
ancestorCells, descendantCells, matchCells, queryCells = list(), list(), list(), list()
for queryCell in generalDict:
relative = False
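# Ancestors and descendants are detected by lineage-name prefix matching;
# the lineage distance is the difference in name lengths, i.e. the number
# of cell divisions separating the two cells: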
if cell == queryCell[:len(cell)]:
descendantCells.append(queryCell)
relative = True
if queryCell == cell[:len(queryCell)]:
ancestorCells.append(queryCell)
relative = True
if relative:
distance = abs(len(cell)-len(queryCell))
if not distance in distanceDict:
distanceDict[distance] = list()
distanceDict[distance].append(queryCell)
# determine which cells to obtain the annotations from:
if descendantCells != list():
queryCells = descendantCells
else:
queryCells = descendantCells + ancestorCells
# find and weigh the most closely related tissues:
specificMatch, generalMatch, classMatch = dict(), dict(), dict()
for distance in sorted(distanceDict.keys()):
if distance != 0:
for distanceCell in distanceDict[distance]:
if distanceCell in queryCells:
specificTissue = specificDict[distanceCell]
generalTissue = generalDict[distanceCell]
classTissue = classDict[distanceCell]
if not specificTissue in specificMatch:
specificMatch[specificTissue] = 0
if not generalTissue in generalMatch:
generalMatch[generalTissue] = 0
if not classTissue in classMatch:
classMatch[classTissue] = 0
specificMatch[specificTissue] += float(1)/distance
generalMatch[generalTissue] += float(1)/distance
classMatch[classTissue] += float(1)/distance
# Note: This section controls whether tissue annotations are obtained from
# all related cells (parents and ancestors) or just subsets of these...
""" define a function that returns the highest-likelihood tissue """
def matchFunction(cell, matchDict, queryCells, verbose="OFF"):
matchTissues = general.valuesort(matchDict)
matchTissues.reverse()
printFlag = False
if len(matchTissues) > 1 and verbose == "ON":
printFlag = True
print cell, len(matchTissues), matchTissues, queryCells
for matchTissue in matchTissues:
print matchTissue, ":", matchDict[matchTissue]
# Prefer informative labels: drop "death" and "other" when alternative tissues are available:
if len(matchTissues) > 1:
matchTissues = general.clean(matchTissues, "death")
if len(matchTissues) > 1:
matchTissues = general.clean(matchTissues, "other")
# Generate and store specific tissue label for cell:
if len(matchTissues) == 0:
matchTissue = "other"
else:
matchTissue = matchTissues[0]
if printFlag and verbose == "ON":
print ">", matchTissue
print
# return highest likelihood tissue match and ranked tissues:
return matchTissue, matchTissues
# assign highest-scoring tissue types:
#specificDict[cell], specificTissues = matchFunction(cell, specificMatch, queryCells, verbose="OFF")
#generalDict[cell], generalTissues = matchFunction(cell, generalMatch, queryCells, verbose="OFF")
#classDict[cell], classTissues = matchFunction(cell, classMatch, queryCells, verbose="OFF")
matchDict["specific"][cell], specificMatches = matchFunction(cell, specificMatch, queryCells, verbose="OFF")
matchDict["general"][cell], generalMatches = matchFunction(cell, generalMatch, queryCells, verbose="OFF")
matchDict["class"][cell], classMatches = matchFunction(cell, classMatch, queryCells, verbose="OFF")
# update tissue counts:
matchTotal += 1
if matchDict["class"][cell] == "na":
matchMissing += 1
# Update/expand cell-tissue dictionary:
matchTissue = matchDict["specific"][cell]
if not matchTissue in matchExpansion:
matchExpansion.append(matchTissue)
# record counts for each type of tissue:
specificCounts, generalCounts, classCounts = dict(), dict(), dict()
for cell in specificDict:
specificTissue = specificDict[cell]
generalTissue = generalDict[cell]
classTissue = classDict[cell]
if not specificTissue in specificCounts:
specificCounts[specificTissue] = 0
specificCounts[specificTissue] += 1
if not generalTissue in generalCounts:
generalCounts[generalTissue] = 0
generalCounts[generalTissue] += 1
if not classTissue in classCounts:
classCounts[classTissue] = 0
classCounts[classTissue] += 1
#print
#print "Specific tissue terms:", len(set(specificDict.values()))
#specificTissues = general.valuesort(specificCounts)
#specificTissues.reverse()
#for specificTissue in specificTissues:
# print "\t" + specificTissue, ":", specificCounts[specificTissue]
print
print "General tissue terms:", len(set(generalDict.values()))
generalTissues = general.valuesort(generalCounts)
generalTissues.reverse()
for generalTissue in generalTissues:
print "\t" + generalTissue, ":", generalCounts[generalTissue]
print
print "Class tissue terms:", len(set(classDict.values()))
classTissues = general.valuesort(classCounts)
classTissues.reverse()
for classTissue in classTissues:
print "\t" + classTissue, ":", classCounts[classTissue]
print
print "Tissue information expanded by:", len(matchExpansion)
print "Tissue information expansion terms:", ", ".join(list(sorted(matchExpansion)))
#pdb.set_trace()
# calculate unique expression values for each gene/cell combination:
print
print "Generating per gene/cell expression values..."
matrix, expression, expressing = dict(), dict(), dict()
for gene in gene2cell_dict:
for cell in gene2cell_list[gene]:
values, maxSeries, maxValue = list(), "NA", 0
for series in gene2cell_dict[gene][cell]:
values.append(gene2cell_dict[gene][cell][series])
if gene2cell_dict[gene][cell][series] >= maxValue:
maxSeries, maxValue = series, gene2cell_dict[gene][cell][series]
if not gene in matrix:
matrix[gene] = dict()
expression[gene] = dict()
matrix[gene][cell] = [max(values), numpy.mean(values), numpy.median(values), numpy.std(values), len(gene2cell_dict[gene][cell]), ",".join(sorted(gene2cell_dict[gene][cell].keys())), maxSeries]
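# matrix row: [max, mean, median, std, series count, series IDs, series with maximal expression]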
if option.measure == "max.expression":
expression[gene][cell] = max(values)
elif option.measure == "avg.expression":
expression[gene][cell] = numpy.mean(values)
# calculate expression peaks...
print "Generating per gene/cell expression statistics..."
for gene in matrix:
# find peak expression:
peakCell, peakValue = "", 0
for cell in matrix[gene]:
maxValue, meanValue, medianValue, stdValue, seriesCount, seriesIDs, maxSeries = matrix[gene][cell]
cellValue = expression[gene][cell]
if cellValue > peakValue:
peakCell, peakValue = cell, cellValue
# calculate fractional expression, cell ranks, and add cells expressing the protein (above cutoff):
cellRanks = general.valuesort(expression[gene])
cellRanks.reverse()
for cell in matrix[gene]:
maxValue, meanValue, medianValue, stdValue, seriesCount, seriesIDs, maxSeries = matrix[gene][cell]
cellValue = expression[gene][cell]
fracValue = float(cellValue)/peakValue
cellRank = cellRanks.index(cell) + 1
if not gene in expressing:
expressing[gene] = list()
if fracValue >= option.fraction and cellValue >= option.minimum:
expressing[gene].append(cell)
matrix[gene][cell] = [cellValue, peakValue, fracValue, cellRank, maxValue, meanValue, medianValue, stdValue, seriesCount, seriesIDs, maxSeries]
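# matrix row is now: [cell value, peak value, fractional value, rank, max, mean, median, std, series count, series IDs, max series]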
# define the ascendants cutoff:
print
print "Defining minimum ascendants across experiments..."
cutAscendants = 0
for gene in matrix:
minAscendants, maxAscendants = 1000, 0
for cell in matrix[gene]:
ascendants = ascendantsCollector(cell, parent_dict, cell_dict, ascendants=list())
if len(ascendants) < minAscendants:
minAscendants = len(ascendants)
minCell = cell
if len(ascendants) > maxAscendants:
maxAscendants = len(ascendants)
maxCell = cell
if minAscendants > cutAscendants:
cutAscendants = minAscendants
# define the set of cells tracked in target experiments:
print "Defining cells focused: strict list of cells assayed in target experiments..."
focusedCells = list()
for gene in option.target.split(","):
if focusedCells == list():
focusedCells = gene2cell_list[gene]
else:
focusedCells = set(focusedCells).intersection(set(gene2cell_list[gene]))
# define the set of cells tracked in all experiments:
print "Defining cells tracked: strict list of cells assayed in all experiments..."
trackedCells = list()
for gene in gene2cell_dict:
if trackedCells == list():
trackedCells = gene2cell_list[gene]
else:
trackedCells = set(trackedCells).intersection(set(gene2cell_list[gene]))
# define the set of ancestor or tracked cells:
print "Defining cells started: parent-inclusive list of cells tracked in all experiments..."
startedCells = list()
for cell in pedigreeCells:
ascendants = ascendantsCollector(cell, parent_dict, cell_dict, ascendants=list())
if cell in trackedCells or len(ascendants) < int(option.ascendants):
startedCells.append(cell)
#if cell == "ABalaaaal":
# print cell, specificDict[cell], generalDict[cell], classDict[cell]
# pdb.set_trace()
print "Ascendants cutoff:", cutAscendants
# define output files:
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
focusedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_focused"
summaryfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_summary"
tissuesfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tissues"
# define cellular expression header:
expressionHeader = ["cell", "cell.name", "gene", "cell.expression", "peak.expression", "fraction.expression", "normal.expression", "rank", "max.expression", "avg.expression", "med.expression", "std.expression", "cells.expressing", "cells.count", "series.count", "time.series", "max.series", "specific.tissue", "general.tissue", "class.tissue", "match.tissue"]
reportHeader = ["gene", "cells.expressing", "cells.assayed", "cells.tracked", "cell.expression", "peak.expression", "fraction.expression", "series.count", "time.series", "max.series"]
tissueHeader = ["cell", "specific.tissue", "general.tissue", "class.tissue", "match.tissue"]
# create output files:
a_output = open(assayedfile, "w")
s_output = open(startedfile, "w")
t_output = open(trackedfile, "w")
f_output = open(focusedfile, "w")
r_output = open(summaryfile, "w")
x_output = open(tissuesfile, "w")
print >>a_output, "\t".join(expressionHeader)
print >>s_output, "\t".join(expressionHeader)
print >>t_output, "\t".join(expressionHeader)
print >>f_output, "\t".join(expressionHeader)
print >>r_output, "\t".join(reportHeader)
print >>x_output, "\t".join(tissueHeader)
# generate set-normalization values:
maxAssayed, maxStarted, maxTracked, maxFocused = dict(), dict(), dict(), dict()
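# For each gene, find its peak expression within each cell population
# (assayed/started/tracked/focused); these peaks are the denominators of
# the "normal.expression" column exported below.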
for gene in sorted(matrix.keys()):
cellsStarted = startedCells
cellsAssayed = matrix[gene].keys()
cellsTracked = trackedCells
cellsFocused = focusedCells
peakAssayed, peakStarted, peakTracked, peakFocused = 0, 0, 0, 0
for cell in sorted(matrix[gene].keys()):
cellValue, peakValue, fracValue, cellRank, maxValue, meanValue, medianValue, stdValue, seriesCount, seriesIDs, maxSeries = matrix[gene][cell]
if cell in cellsAssayed and cellValue > peakAssayed:
peakAssayed = cellValue
if cell in cellsStarted and cellValue > peakStarted:
peakStarted = cellValue
if cell in cellsTracked and cellValue > peakTracked:
peakTracked = cellValue
if cell in cellsFocused and cellValue > peakFocused:
peakFocused = cellValue
maxAssayed[gene] = peakAssayed
maxStarted[gene] = peakStarted
maxTracked[gene] = peakTracked
maxFocused[gene] = peakFocused
# export expression data:
print "Exporting expression data..."
for gene in sorted(matrix.keys()):
cellsStarted = len(startedCells)
cellsAssayed = len(matrix[gene].keys())
cellsTracked = len(trackedCells)
cellsFocused = len(focusedCells)
cellsExpressingAssayed = len(set(expressing[gene]).intersection(set(matrix[gene].keys())))
cellsExpressingTracked = len(set(expressing[gene]).intersection(set(trackedCells)))
cellsExpressingFocused = len(set(expressing[gene]).intersection(set(focusedCells)))
cellValues, fracValues = list(), list()
for cell in sorted(matrix[gene].keys()):
if option.tissues == "OFF" or not cell in specificDict:
specificTissue = "*"
generalTissue = "*"
classTissue = "*"
matchTissue = "*"
else:
specificTissue = specificDict[cell]
generalTissue = generalDict[cell]
classTissue = classDict[cell]
matchTissue = matchDict["class"][cell]
cellValue, peakValue, fracValue, cellRank, maxValue, meanValue, medianValue, stdValue, seriesCount, seriesIDs, maxSeries = matrix[gene][cell]
print >>a_output, "\t".join(map(str, [cell, cell, gene, cellValue, peakValue, fracValue, float(cellValue)/maxAssayed[gene], cellRank, maxValue, meanValue, medianValue, stdValue, cellsExpressingAssayed, cellsAssayed, seriesCount, seriesIDs, maxSeries, specificTissue, generalTissue, classTissue, matchTissue]))
if cell in startedCells:
print >>s_output, "\t".join(map(str, [cell, cell, gene, cellValue, peakValue, fracValue, float(cellValue)/maxStarted[gene], cellRank, maxValue, meanValue, medianValue, stdValue, cellsExpressingTracked, cellsStarted, seriesCount, seriesIDs, maxSeries, specificTissue, generalTissue, classTissue, matchTissue]))
if cell in trackedCells:
print >>t_output, "\t".join(map(str, [cell, cell, gene, cellValue, peakValue, fracValue, float(cellValue)/maxTracked[gene], cellRank, maxValue, meanValue, medianValue, stdValue, cellsExpressingTracked, cellsTracked, seriesCount, seriesIDs, maxSeries, specificTissue, generalTissue, classTissue, matchTissue]))
if cell in focusedCells:
print >>f_output, "\t".join(map(str, [cell, cell, gene, cellValue, peakValue, fracValue, float(cellValue)/maxFocused[gene], cellRank, maxValue, meanValue, medianValue, stdValue, cellsExpressingFocused, cellsFocused, seriesCount, seriesIDs, maxSeries, specificTissue, generalTissue, classTissue, matchTissue]))
if fracValue >= option.fraction and cellValue >= option.minimum:
cellValues.append(cellValue)
fracValues.append(fracValue)
print >>r_output, "\t".join(map(str, [gene, cellsExpressingTracked, cellsAssayed, cellsTracked, numpy.mean(cellValues), peakValue, numpy.mean(fracValues), seriesCount, seriesIDs, maxSeries]))
# export tissue annotations:
print "Exporting tissue annotation data..."
print "Annotated cells:", len(specificDict)
for cell in sorted(specificDict.keys()):
specificTissue = specificDict[cell]
generalTissue = generalDict[cell]
classTissue = classDict[cell]
if cell in matchDict["class"]:
matchTissue = matchDict["class"][cell]
else:
matchTissue = str(classTissue)
print >>x_output, "\t".join([cell, specificTissue, generalTissue, classTissue, matchTissue])
# close output:
a_output.close()
s_output.close()
t_output.close()
f_output.close()
r_output.close()
x_output.close()
print
print "Focused cells:", len(focusedCells)
print "Tracked cells:", len(trackedCells)
print "Started cells:", len(startedCells)
print
# inherit expression mode:
elif option.mode == "inherit":
# define input files:
assayedinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
startedinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
focusedinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_focused"
summaryinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_summary"
tissuesinput = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tissues"
# define output files:
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_inassay"
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_instart"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_intrack"
focusedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_infocus"
inheritfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_inherit"
inleafsfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_inleafs"
maximalfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_maximal"
mxleafsfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_mxleafs"
# define cellular expression header:
expressionHeader = ["cell", "cell.name", "gene", "cell.expression", "peak.expression", "fraction.expression", "normal.expression", "rank", "max.expression", "avg.expression", "med.expression", "std.expression", "cells.expressing", "cells.count", "series.count", "time.series", "max.series", "specific.tissue", "general.tissue", "class.tissue", "match.tissue"]
reportHeader = ["gene", "cells.expressing", "cells.assayed", "cells.tracked", "cell.expression", "peak.expression", "fraction.expression", "series.count", "time.series", "max.series"]
tissueHeader = ["cell", "specific.tissue", "general.tissue", "class.tissue", "match.tissue"]
# create output files:
a_output = open(assayedfile, "w")
s_output = open(startedfile, "w")
t_output = open(trackedfile, "w")
f_output = open(focusedfile, "w")
i_output = open(inheritfile, "w")
l_output = open(inleafsfile, "w")
m_output = open(maximalfile, "w")
p_output = open(mxleafsfile, "w")
print >>a_output, "\t".join(expressionHeader + ["inherited"])
print >>s_output, "\t".join(expressionHeader + ["inherited"])
print >>t_output, "\t".join(expressionHeader + ["inherited"])
print >>f_output, "\t".join(expressionHeader + ["inherited"])
print >>i_output, "\t".join(expressionHeader + ["inherited"])
print >>l_output, "\t".join(expressionHeader + ["inherited"])
print >>m_output, "\t".join(expressionHeader + ["inherited"])
print >>p_output, "\t".join(expressionHeader + ["inherited"])
# load terminal leaf cells:
print
print "Loading terminal cells..."
inleafsCells = general.build2(extraspath + option.mapping, i="cell", x="cell.name", mode="values", skip=True)
# store cell-parent relationships:
print "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, mechanism="simple")
# loading tissue annotation data...
print "Loading tissue annotation data..."
tissuesAnnotation = general.build2(tissuesinput, id_column="cell", mode="table")
# load expression data:
print "Loading expression data..."
assayedExpression = general.build2(assayedinput, id_complex=["gene","cell"], mode="table", separator=":")
assayedMatrix = general.build2(assayedinput, i="gene", j="cell", x="cell.expression", mode="matrix")
assayedCells = general.build2(assayedinput, i="cell", x="cell.name", mode="values", skip=True)
startedCells = general.build2(startedinput, i="cell", x="cell.name", mode="values", skip=True)
trackedCells = general.build2(trackedinput, i="cell", x="cell.name", mode="values", skip=True)
focusedCells = general.build2(focusedinput, i="cell", x="cell.name", mode="values", skip=True)
# define cellular space:
print "Defining inheritance cells..."
inheritCells = list()
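# The inheritance space is the union of the ancestries of all terminal
# (leaf) cells: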
for inleafsCell in inleafsCells:
inheritCells += ascendantsCollector(inleafsCell, parent_dict, cell_dict, ascendants=list())
inheritCells = sorted(list(set(inheritCells)))
# load header dictionary:
hd = general.build_header_dict(assayedinput)
header = general.valuesort(hd)
# inherit peak expression from ancestors:
print "Inheriting expression from ancestors..."
inheritExpression, maximalExpression = dict(), dict()
for gene in sorted(assayedMatrix.keys()):
inheritExpression[gene] = dict()
maximalExpression[gene] = dict()
for inheritCell in inheritCells:
ascendantCells, ascendantExpression = list(), dict()
ascendants = ascendantsCollector(inheritCell, parent_dict, cell_dict, ascendants=list(), sort=False)
#print inheritCell, ascendants
if len(set(ascendants)) != len(ascendants):
print "oh, oh: not a set!"
pdb.set_trace()
for ascendantCell in ascendants + [inheritCell]:
if ascendantCell in assayedMatrix[gene]:
ascendantExpression[ascendantCell] = float(assayedMatrix[gene][ascendantCell])
ascendantCells.append(ascendantCell)
if ascendantExpression != dict():
# get inheritance cells for maximal expression and for last ancestor expression:
maximalCells = general.valuesort(ascendantExpression)
maximalCells.reverse()
maximalCell = maximalCells[0]
ascendantCell = ascendantCells[0]
# store values for last ancestor expression:
inheritExpression[gene][inheritCell] = dict(assayedExpression[gene + ":" + ascendantCell])
inheritExpression[gene][inheritCell]["cell"] = str(inheritCell)
inheritExpression[gene][inheritCell]["cell.name"] = str(inheritCell)
inheritExpression[gene][inheritCell]["specific.tissue"] = tissuesAnnotation[inheritCell]["specific.tissue"]
inheritExpression[gene][inheritCell]["general.tissue"] = tissuesAnnotation[inheritCell]["general.tissue"]
inheritExpression[gene][inheritCell]["class.tissue"] = tissuesAnnotation[inheritCell]["class.tissue"]
inheritExpression[gene][inheritCell]["match.tissue"] = tissuesAnnotation[inheritCell]["match.tissue"]
inheritExpression[gene][inheritCell]["inherited"] = ascendantCell
#if inheritCell != inheritExpression[gene][inheritCell]["cell"]:
# print cell, inheritExpression[gene][inheritCell]["cell"], 1
# pdb.set_trace()
# store values for maximal ancestor expression:
maximalExpression[gene][inheritCell] = dict(assayedExpression[gene + ":" + maximalCell])
maximalExpression[gene][inheritCell]["cell"] = str(inheritCell)
maximalExpression[gene][inheritCell]["cell.name"] = str(inheritCell)
maximalExpression[gene][inheritCell]["specific.tissue"] = tissuesAnnotation[inheritCell]["specific.tissue"]
maximalExpression[gene][inheritCell]["general.tissue"] = tissuesAnnotation[inheritCell]["general.tissue"]
maximalExpression[gene][inheritCell]["class.tissue"] = tissuesAnnotation[inheritCell]["class.tissue"]
maximalExpression[gene][inheritCell]["match.tissue"] = tissuesAnnotation[inheritCell]["match.tissue"]
maximalExpression[gene][inheritCell]["inherited"] = ascendantCell
# export inherited signals:
print "Exporting inherited expression values..."
for gene in sorted(inheritExpression):
for cell in sorted(inheritExpression[gene].keys()):
#if cell != inheritExpression[gene][cell]["cell"]:
# print cell, inheritExpression[gene][cell]["cell"], 2
# pdb.set_trace()
output = list()
for column in header + ["inherited"]:
output.append(inheritExpression[gene][cell][column])
if cell in assayedCells:
print >>a_output, "\t".join(map(str, output))
if cell in startedCells:
print >>s_output, "\t".join(map(str, output))
if cell in trackedCells:
print >>t_output, "\t".join(map(str, output))
if cell in focusedCells:
print >>f_output, "\t".join(map(str, output))
if cell in inheritCells:
print >>i_output, "\t".join(map(str, output))
if cell in inleafsCells:
print >>l_output, "\t".join(map(str, output))
#print "\t".join(map(str, output))
#pdb.set_trace()
# export inherited signals:
print "Exporting maximal expression values..."
for gene in sorted(maximalExpression):
for cell in sorted(maximalExpression[gene].keys()):
output = list()
for column in header + ["inherited"]:
output.append(maximalExpression[gene][cell][column])
if cell in inheritCells:
print >>m_output, "\t".join(map(str, output))
if cell in inleafsCells:
print >>p_output, "\t".join(map(str, output))
#print "\t".join(map(str, output))
#pdb.set_trace()
print
print "Total inherited cells:", len(inheritCells)
print "Terminal (leaf) cells:", len(inleafsCells)
# close output files:
a_output.close()
s_output.close()
t_output.close()
f_output.close()
i_output.close()
l_output.close()
m_output.close()
p_output.close()
print
#k = inheritExpression.keys()[0]
#print k
#print inheritExpression[k][inleafsCell]
#pdb.set_trace()
# correct expression mode (detect outliers):
elif option.mode == "correct":
# load quantile functions
from quantile import Quantile
# define input files:
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
summaryfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_summary"
# load assayed expression data:
print
print "Loading expression data..."
expressionDict = general.build2(assayedfile, i="gene", j="cell", x="cell.expression", mode="matrix")
# prepare to sort genes by quantile expression:
print "Sorting genes by expression..."
medianDict, quantDict = dict(), dict()
for gene in expressionDict:
values = map(float, expressionDict[gene].values())
medianDict[gene] = numpy.median(values)
quantDict[gene] = Quantile(values, 0.99)
quantRanks = general.valuesort(quantDict)
quantRanks.reverse()
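# quantRanks: genes ranked by Quantile(values, 0.99), presumably the 99th
# percentile of the per-cell signals, in descending order.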
# store median rankings:
rankDict = dict()
medianRanks = general.valuesort(medianDict)
medianRanks.reverse()
k = 1
for gene in medianRanks:
rankDict[gene] = k
k += 1
# generate testing path:
testingpath = correctionpath + "testing/"
general.pathGenerator(testingpath)
# Perform Gaussian Mixture Modeling (GMM):
print "Performing GMM modeling..."
gmmDict = dict()
k = 1
for gene in expressionDict:
signals = map(int, map(float, expressionDict[gene].values()))
signals = [1 if (x == 0) else x for x in signals]
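# floor zero signals at 1 so the log10-based scores below stay defined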
testingfile = testingpath + "mapCells-gmm_" + expression_flag + option.name + "_" + "temp"
resultsfile = testingpath + "mapCells-gmm_" + expression_flag + option.name + "_" + gene
#f_output = open(testingfile, "w")
#print >>f_output, "\n".join(["signal"] + map(str, signals))
#f_output.close()
#command = " ".join(["Rscript", "~/meTRN/scripts/mapCells-gmm.r", testingfile, resultsfile, option.limit, option.parameters])
#os.system(command)
#Rscript ~/meTRN/scripts/mapCells-pilot.r ~/Desktop/data.test ~/Desktop/data.output 1000
if "mapCells-gmm_" + expression_flag + option.name + "_" + gene in os.listdir(testingpath):
gmmDict[gene] = open(resultsfile).readlines()[1].strip().split(" ")[2]
#os.system("rm -rf " + testingfile)
# export expression signals:
rankingfile = correctionpath + "mapcells_" + expression_flag + option.name + "_correction_ranking" # rank information file
percentfile = correctionpath + "mapcells_" + expression_flag + option.name + "_correction_percent" # gene-cell data, percentile-ranked genes
mediansfile = correctionpath + "mapcells_" + expression_flag + option.name + "_correction_medians" # gene-cell data, median-ranked genes
# define output headers:
correctHeader = "\t".join(["index", "gene", "cell", "signal", "zscore", "nscore", "lscore", "rank", "median", "mean", "stdev", "alpha", "delta", "sigma", "gamma"])
rankingHeader = "\t".join(["gene", "quantile.rank", "median.rank", "median", "mean", "stdev", "alpha", "delta", "sigma", "gamma"])
# gather outputs:
print "Generating expression thresholds..."
r_output = open(rankingfile, "w")
print >>r_output, rankingHeader
outputDict = dict()
k = 1
for gene in quantRanks:
signals = map(float, expressionDict[gene].values())
maximal = max(signals)
# calculate expression cutoffs:
alpha = float(maximal)/10
delta = float(quantDict[gene])/10
sigma = float(quantDict[gene])/10
# detect GMM expression cutoff:
if gene in gmmDict:
gamma = int(gmmDict[gene])
else:
gamma = int(option.limit)
# threshold expression cutoffs:
if alpha < int(option.limit):
alpha = int(option.limit)
if delta < int(option.limit):
delta = int(option.limit)
if gamma < int(option.limit):
gamma = int(option.limit)
# calculate general stats:
median = numpy.median(signals)
mean = numpy.mean(signals)
stdev = numpy.std(signals)
logMean = numpy.log10(mean)
logStDev = numpy.log10(stdev)
# store/export data:
print >>r_output, "\t".join(map(str, [gene, k, rankDict[gene], median, mean, stdev, alpha, delta, sigma, gamma]))
if not gene in outputDict:
outputDict[gene] = dict()
for cell in sorted(expressionDict[gene].keys()):
signal = float(expressionDict[gene][cell])
if signal < 1:
signal = 1
zscore = float(signal-mean)/stdev
nscore = float(signal)/maximal
lscore = float(numpy.log10(signal) - logMean)/logStDev
outputDict[gene][cell] = "\t".join(map(str, [k, gene, cell, signal, zscore, nscore, lscore, rankDict[gene], median, mean, stdev, alpha, delta, sigma, gamma]))
k += 1
r_output.close()
# export expression signals, percentile-ranked genes:
print "Exporting percentile-ranked expression signals..."
f_output = open(percentfile, "w")
print >>f_output, correctHeader
for gene in quantRanks:
for cell in sorted(outputDict[gene]):
print >>f_output, outputDict[gene][cell]
f_output.close()
# export expression signals, median-ranked genes:
print "Exporting median-ranked expression signals..."
f_output = open(mediansfile, "w")
print >>f_output, correctHeader
for gene in medianRanks:
for cell in sorted(outputDict[gene]):
print >>f_output, outputDict[gene][cell]
f_output.close()
print
# check status mode:
elif option.mode == "check.status":
# define input files:
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
summaryfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_summary"
# scan peak files:
print
print "Scanning peak files:"
hd = general.build_header_dict(summaryfile)
k, peaks, peak_files = 0, 0, os.listdir(peakspath)
for inline in open(summaryfile).readlines()[1:]:
gene, found = inline.strip().split("\t")[hd["gene"]], list()
for peak_file in peak_files:
dataset = peak_file.split("_peaks.bed")[0].replace("POL2", "AMA-1")
if gene + "_" in dataset:
found.append(dataset)
peaks += general.countLines(peakspath + peak_file, header="OFF")
if found != list():
print gene, ":", ", ".join(sorted(found))
k += 1
print
print "Found factors:", k
print "Peaks called:", peaks
print
# scan expression files:
print
print "Scanning expression data:"
caught = list()
hd = general.build_header_dict(assayedfile)
for inline in open(assayedfile).readlines()[1:]:
initems = inline.strip().split("\t")
gene, timeSeries = initems[hd["gene"]], initems[hd["time.series"]]
for timeSerie in timeSeries.split(","):
if not gene.lower() in timeSerie:
if not gene in caught:
print gene, timeSeries
caught.append(gene)
print
print "Mismatched genes:", len(caught)
print
# lineage distance mode:
elif option.mode == "cell.distance":
# build cell-expression matrix:
print
print "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=0, minimum=0, metric="fraction.expression")
# store cell-parent relationships:
print "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, mechanism="simple")
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
print
# define output files:
signalsmatrixfile = str(option.expression + "_distance_signals")
lineagematrixfile = str(option.expression + "_distance_lineage")
combinematrixfile = str(option.expression + "_distance_combine")
# build cell-cell expression correlation matrix
if not signalsmatrixfile in os.listdir(distancepath) or option.overwrite == "ON":
print "Calculating expression correlation matrix..."
correlation_matrix, index = dict(), 1
f_output = open(distancepath + signalsmatrixfile, "w")
print >>f_output, "\t".join(["i", "j", "correlation", "correlation.pvalue", "correlation.adjusted.pvalue"])
for aCell in sorted(trackedCells):
print index, aCell
for bCell in sorted(trackedCells):
aValues, bValues = list(), list()
for gene in sorted(quantitation_matrix.keys()):
aValues.append(quantitation_matrix[gene][aCell])
bValues.append(quantitation_matrix[gene][bCell])
correlation, corPvalue = pearsonr(aValues, bValues)
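# Bonferroni-style correction over all cell-by-cell comparisons (capped at 1):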
adjCorPvalue = corPvalue*len(trackedCells)*len(trackedCells)
if adjCorPvalue > 1:
adjCorPvalue = 1
if not aCell in correlation_matrix:
correlation_matrix[aCell] = dict()
correlation_matrix[aCell][bCell] = [correlation, corPvalue, adjCorPvalue]
print >>f_output, "\t".join(map(str, [aCell, bCell] + correlation_matrix[aCell][bCell]))
index += 1
f_output.close()
print
else:
print "Loading expression correlation matrix..."
correlation_matrix = general.build2(distancepath + signalsmatrixfile, i="i", j="j", x=["correlation","correlation.pvalue","correlation.adjusted.pvalue"], datatype="float", mode="matrix", header_dict="auto")
# build lineage distance matrix:
if not lineagematrixfile in os.listdir(distancepath) or option.overwrite == "ON":
print "Calculating lineage distance matrix..."
lineage_matrix, index = dict(), 1
f_output = open(distancepath + lineagematrixfile, "w")
print >>f_output, "\t".join(["i", "j", "distance", "parent"])
for aCell in sorted(trackedCells):
print index, aCell
for bCell in sorted(trackedCells):
distance, ancestor = lineageDistance(aCell, bCell, parent_dict, cell_dict)
if not aCell in lineage_matrix:
lineage_matrix[aCell] = dict()
lineage_matrix[aCell][bCell] = [distance, ancestor]
print >>f_output, "\t".join(map(str, [aCell, bCell] + lineage_matrix[aCell][bCell]))
index += 1
f_output.close()
print
else:
print "Loading lineage distance matrix..."
lineage_matrix = general.build2(distancepath + lineagematrixfile, i="i", j="j", x=["distance","parent"], datatype="list", mode="matrix", header_dict="auto", listtypes=["int", "str"])
#print correlation_matrix["ABal"]["ABal"]
#print lineage_matrix["ABal"]["ABal"]
#pdb.set_trace()
# build expression distance matrix (as a function of fraction expression):
print "Generating combined distance matrix (at fraction range):"
f_output = open(distancepath + combinematrixfile, "w")
print >>f_output, "\t".join(["i", "j", "minimal", "fraction", "distance", "parent", "expression.correlation", "expression.correlation.pvalue", "expression.correlation.adjusted.pvalue", "i.genes", "j.genes", "overlap", "total", "overlap.max", "overlap.sum", "pvalue", "adjusted.pvalue", "flag"])
fraction_matrix, genes = dict(), sorted(tracking_matrix.keys())
for minimal in [1500, 1750, 2000]:
for fraction in general.drange(0.10, 0.50, 0.10):
print "...", minimal, fraction
fraction_matrix[fraction] = dict()
# find genes expressed per cell (using fraction cutoff):
cellular_matrix = dict()
fraction_quantitation_matrix, fraction_expression_matrix, fraction_tracking_matrix, fraction_trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=fraction, minimum=minimal, metric="fraction.expression")
for gene in fraction_expression_matrix:
for cell in fraction_expression_matrix[gene]:
if not cell in cellular_matrix:
cellular_matrix[cell] = list()
cellular_matrix[cell].append(gene)
# find multiple hypothesis adjustment factor:
adjust = 0
for aCell in sorted(fraction_trackedCells):
for bCell in sorted(fraction_trackedCells):
if aCell in cellular_matrix and bCell in cellular_matrix:
adjust += 1
# find gene expression overlap between cells:
overlap_matrix = dict()
universe = len(quantitation_matrix.keys())
for aCell in sorted(trackedCells):
for bCell in sorted(trackedCells):
if aCell in cellular_matrix and bCell in cellular_matrix:
aGenes = cellular_matrix[aCell]
bGenes = cellular_matrix[bCell]
union = set(aGenes).union(set(bGenes))
overlap = set(aGenes).intersection(set(bGenes))
maxOverlap = float(len(overlap))/min(len(aGenes), len(bGenes))
sumOverlap = float(len(overlap))/len(union)
# Hypergeometric parameters:
m = len(aGenes) # number of white balls in urn
n = universe - len(aGenes) # number of black balls in urn
N = len(bGenes) # number of balls drawn from urn
x = len(overlap) # number of white balls drawn
# If I pull out all balls with elephant tattoos (N), is the draw enriched in white balls?:
pvalue = hyper.fishers(x, m+n, m, N, method="right")
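# Assuming hyper.fishers(..., method="right") computes the right-tail
# hypergeometric probability, this equals scipy.stats.hypergeom.sf(x-1, m+n, m, N).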
adjPvalue = hyper.limit(pvalue*adjust)
# Store overlap and significance:
if not aCell in overlap_matrix:
overlap_matrix[aCell] = dict()
overlap_matrix[aCell][bCell] = [len(aGenes), len(bGenes), len(overlap), universe, maxOverlap, sumOverlap, pvalue, adjPvalue]
# generate combined distance output line:
for aCell in sorted(trackedCells):
for bCell in sorted(trackedCells):
# load lineage distances:
distance, ancestor = lineage_matrix[aCell][bCell]
# load correlation distances:
correlation, corPvalue, adjCorPvalue = correlation_matrix[aCell][bCell]
# load expression distances:
if aCell in cellular_matrix and bCell in cellular_matrix:
aGenes, bGenes, overlap, universe, maxOverlap, sumOverlap, pvalue, adjPvalue = overlap_matrix[aCell][bCell]
madeFlag = "both.observed"
elif aCell in cellular_matrix:
aGenes, bGenes, overlap, universe, maxOverlap, sumOverlap, pvalue, adjPvalue = len(cellular_matrix[aCell]), 0, 0, len(trackedCells), 0, 0, 1, 1
madeFlag = "only.observed"
elif bCell in cellular_matrix:
aGenes, bGenes, overlap, universe, maxOverlap, sumOverlap, pvalue, adjPvalue = 0, len(cellular_matrix[bCell]), 0, len(trackedCells), 0, 0, 1, 1
madeFlag = "only.observed"
else:
aGenes, bGenes, overlap, universe, maxOverlap, sumOverlap, pvalue, adjPvalue = 0, 0, 0, len(trackedCells), 0, 0, 1, 1
madeFlag = "none.observed"
# export data:
print >>f_output, "\t".join(map(str, [aCell, bCell, minimal, fraction, distance, ancestor, correlation, corPvalue, adjCorPvalue, aGenes, bGenes, overlap, universe, maxOverlap, sumOverlap, pvalue, adjPvalue, madeFlag]))
# close output file:
f_output.close()
print
# cell time mode:
elif option.mode == "cell.times":
# define input expression files:
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
focusedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_focused"
inheritfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_inherit"
maximalfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_maximal"
# load cell times:
print
print "Loading cellular times..."
time_matrix = dict()
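# time_matrix: time point -> cells alive at that time point; each input
# row gives a cell with its first and last time point (inclusive):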
inlines = open(extraspath + option.times).readlines()
for inline in inlines:
cell, start, stop = inline.strip().split(",")
for time in range(int(start), int(stop)+1):
if not time in time_matrix:
time_matrix[time] = list()
time_matrix[time].append(cell)
# export cell times:
populationDict = {
"assayed" : assayedfile,
"started" : startedfile,
"tracked" : trackedfile,
"focused" : focusedfile,
"inherit" : inheritfile,
"maximal" : maximalfile
}
print "Exporting cells per time point..."
for population in populationDict:
populationCells = general.build2(populationDict[population], id_column="cell", skip=True, mute=True).keys()
for time in sorted(time_matrix.keys()):
general.pathGenerator(timepath + population + "/cells/")
f_output = open(timepath + population + "/cells/" + str(time), "w")
timedCells = sorted(set(time_matrix[time]).intersection(set(populationCells)))
if len(timedCells) > 0:
print >>f_output, "\n".join(timedCells)
f_output.close()
# generate reports:
print "Generating reports..."
for population in populationDict:
general.pathGenerator(timepath + population + "/report/")
f_output = open(timepath + population + "/report/mapcells_" + population + "_time_report.txt", "w")
print >>f_output, "\t".join(["time", "cell.count", "cell.percent", "cell.ids"])
for time in sorted(time_matrix.keys()):
general.pathGenerator(timepath + population + "/report/")
timedCount = general.countLines(timepath + population + "/cells/" + str(time))
timedPercent = round(100*float(timedCount)/len(time_matrix[time]), 2)
timedCells = open(timepath + population + "/cells/" + str(time)).read().split("\n")
print >>f_output, "\t".join([str(time), str(timedCount), str(timedPercent), ",".join(timedCells).rstrip(",")])
f_output.close()
print
# cubism graph mode:
elif option.mode == "cell.cubism":
# define input expression files:
assayedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_assayed"
startedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_started"
trackedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_tracked"
focusedfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_focused"
inheritfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_inherit"
maximalfile = expressionpath + "mapcells_" + expression_flag + option.name + "_expression_maximal"
# define cell populations:
populationDict = {
"assayed" : assayedfile,
"started" : startedfile,
"tracked" : trackedfile,
"focused" : focusedfile,
"inherit" : inheritfile,
"maximal" : maximalfile
}
# parse reports:
print
print "Exporting per gene, per timepoint expression cells:"
for population in populationDict:
print "Processing:", population
# define output paths:
factorpath = cubismpath + population + "/factor/"
matrixpath = cubismpath + population + "/matrix/"
general.pathGenerator(factorpath)
general.pathGenerator(matrixpath)
# build cell-expression matrix:
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=populationDict[population], path="", cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# load timepoint data:
timeDict = general.build2(timepath + population + "/report/mapcells_" + population + "_time_report.txt", id_column="time")
# load calendar months:
monthDict = dict((k,v) for k,v in enumerate(calendar.month_abbr))
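# calendar.month_abbr is 1-indexed (index 0 is the empty string), so this
# yields {0: '', 1: 'Jan', ..., 12: 'Dec'}.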
# process genes x timepoints:
m_output = open(matrixpath + "mapcells_cubism_matrix.txt", "w")
print >>m_output, "\t".join(["gene", "time", "cells", "gene.cells", "time.cells"])
for gene in sorted(expression_matrix.keys()):
timeStamp = 1001856000000
timeAdded = 100000000
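# The fake epoch starts at 1001856000000 ms (roughly 30 Sep 2001 UTC) and
# advances by 100000000 ms (~1.16 days) per developmental timepoint, so embryo
# times can be rendered as calendar dates by date-based charting tools.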
factorLines = list()
for time in sorted(map(int, timeDict.keys())):
geneCells = expression_matrix[gene]
timeCells = timeDict[str(time)]["cell.ids"].split(",")
dateCells = len(set(geneCells).intersection(set(timeCells)))
date = datetime.datetime.fromtimestamp(timeStamp / 1e3)
day, month, year = date.day, date.month, str(date.year)[2:]
date = "-".join(map(str, [day, monthDict[month], year]))
factorLines.append(",".join(map(str, [date, dateCells, dateCells, dateCells, len(trackedCells)])))
print >>m_output, "\t".join(map(str, [gene, time, dateCells, len(geneCells), len(timeCells)]))
timeStamp += timeAdded
factorLines.reverse()
f_output = open(factorpath + gene + ".csv", "w")
print >>f_output, "Date,Open,High,Low,Close,Volume"
for factorLine in factorLines:
print >>f_output, factorLine.strip()
f_output.close()
# close output matrix:
m_output.close()
print
# cell annotation mode:
elif option.mode == "cell.annotation":
# load target cells from time-points:
print
if option.times != "OFF":
# define output file:
f_output = open(cellnotationspath + "mapcells_" + option.name + "_" + option.infile, "w")
# load time-point cells:
cells = getTargetCells(inpath=timepath + option.times + "/cells/", mode="time", timeRange=range(option.start, option.stop + 1, option.step))
# load target cells from collection:
elif option.collection != "OFF":
# define output file:
f_output = open(cellnotationspath + "mapcells_" + option.collection + "_" + option.infile, "w")
# load collection cells:
cells = getTargetCells(inpath=cellsetpath + option.collection + "/", mode="collection")
# export features per cell:
print "Exporting features per cell..."
k = 0
inlines = open(annotationspath + option.infile).readlines()
if option.header == "ON":
inlines.pop(0)
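# replicate every annotation line once per target cell, prefixing the cell
# name so that cell:feature pairs are presumably distinct records downstream: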
for cell in cells:
for inline in inlines:
if option.format == "bed":
print >>f_output, cell + ":" + inline.strip()
k += 1
f_output.close()
print "Features scaled from", len(inlines), "to", k, ": " + str(round(float(k)/len(inlines), 0)) + "x"
print
# build matrix mode:
elif option.mode == "cell.matrix":
# update overlappath:
matrixpath = matrixpath + option.collection + "/"
general.pathGenerator(matrixpath)
# define input files:
infile = expressionpath + option.expression
# define output files:
matrixfile = matrixpath + str(option.expression + "_" + option.name + "_matrix")
# load header dictionary:
hd = general.build_header_dict(infile)
# build cellular expression matrix:
matrix, cells, genes, tissueDict = dict(), list(), list(), dict()
inlines = open(infile).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip().split("\t")
cell, gene, cellExpression, fractionExpression, normalExpression, specificTissue, generalTissue, classTissue = initems[hd["cell"]], initems[hd["gene"]], initems[hd["cell.expression"]], initems[hd["fraction.expression"]], initems[hd["normal.expression"]], initems[hd["specific.tissue"]], initems[hd["general.tissue"]], initems[hd["class.tissue"]]
# extract expression value (using specified technique):
if option.technique == "binary":
if float(fractionExpression) >= option.fraction and float(cellExpression) >= option.minimum:
value = 1
else:
value = 0
elif option.technique == "signal":
value = float(cellExpression)
elif option.technique == "fraction":
value = float(fractionExpression)
elif option.technique == "normal":
value = float(normalExpression)
# store cells, genes, and values:
if not cell in cells:
cells.append(cell)
if not gene in genes:
genes.append(gene)
if not cell in tissueDict:
tissueDict[cell] = [classTissue, generalTissue, specificTissue]
if not cell in matrix:
matrix[cell] = dict()
matrix[cell][gene] = value
# export the cellular expression matrix!
f_output = open(matrixfile, "w")
cells, genes = sorted(cells), sorted(genes)
print >>f_output, "\t".join([""] + genes)
for cell in cells:
values = list()
for gene in genes:
if gene in matrix[cell]:
values.append(matrix[cell][gene])
else:
values.append(0)
valueCount = len(values) - values.count(0)
classTissue, generalTissue, specificTissue = tissueDict[cell]
specificTissue = specificTissue.replace(" ", "_")
label = ":".join([classTissue, generalTissue, specificTissue, cell])
print >>f_output, "\t".join([label] + map(str, values))
f_output.close()
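# The matrix exported above is tab-separated with genes as columns and one row
# per cell, labeled class:general:specific:cell, e.g. (hypothetical values):
#   <tab>geneA<tab>geneB
#   Neuronal:neuron:amphid_neuron:ASEL<tab>1<tab>0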
# build in silico binding peaks mode:
elif option.mode == "cell.peaks":
# define the target contexts:
if option.contexts != "OFF":
shandle, target_context_dict = metrn.options_dict["contexts.condensed"][option.contexts]
target_contexts = list()
for target in target_context_dict:
target_contexts.append(target_context_dict[target])
target_contexts = sorted(list(set(target_contexts)))
# generate output paths:
insilicopath = bindingpath + option.name + "/"
general.pathGenerator(insilicopath)
# load header dictionary:
hd = general.build_header_dict(expressionpath + option.expression)
# load expression matrix:
print
print "Loading expression matrix..."
matrix = dict()
inlines = open(expressionpath + option.expression).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip().split("\t")
gene, cell, cellExpression, fractionExpression = initems[hd["gene"]], initems[hd["cell"]], float(initems[hd["cell.expression"]]), float(initems[hd["fraction.expression"]])
if not gene in matrix:
matrix[gene] = dict()
if fractionExpression >= option.fraction and float(cellExpression) >= option.minimum:
matrix[gene][cell] = fractionExpression
# load target cells:
if option.times != "OFF":
# load time-point cells:
timedCells = getTargetCells(inpath=timepath + option.times + "/cells/", mode="time", timeRange=range(option.start, option.stop + 1, option.step))
# scan peak files:
print
print "Generating cell-resolution peaks..."
k, peak_files, insilico_files = 0, os.listdir(peakspath), list()
for peak_file in peak_files:
dataset = peak_file.split("_peaks.bed")[0].replace("POL2", "AMA-1")
organism, strain, factor, context, institute, method = metrn.labelComponents(dataset, target="components")
if factor in matrix:
if option.contexts == "OFF" or context in target_contexts:
print "Processing:", dataset
insilico_file = peak_file.replace("POL2", "AMA-1")
f_output = open(insilicopath + insilico_file, "w")
for cell in sorted(matrix[factor].keys()):
if option.times == "OFF" or cell in timedCells:
for inline in open(peakspath + peak_file).readlines():
print >>f_output, cell + ":" + inline.strip()
f_output.close()
insilico_files.append(insilico_file)
# define output peak files:
unsortedfile = bindingpath + "mapcells_silico_" + option.name + "_unsorted.bed"
completefile = bindingpath + "mapcells_silico_" + option.name + "_complete.bed"
compiledfile = bindingpath + "mapcells_silico_" + option.name + "_compiled.bed"
# generate compilation files:
if not "mapcells_silico_" + option.peaks + "_complete.bed" in os.listdir(bindingpath) or option.overwrite == "ON":
# gather peak files and compile them into a single file:
print
print "Gathering peaks into single file..."
joint = " " + insilicopath
command = "cat " + insilicopath + joint.join(insilico_files) + " > " + unsortedfile
os.system(command)
print "Sorting peaks in single file..."
command = "sortBed -i " + unsortedfile + " > " + completefile
os.system(command)
# merge peaks into single file:
print "Collapsing peaks in sorted file..."
command = "mergeBed -nms -i " + completefile + " > " + compiledfile
os.system(command)
# remove unsorted file:
command = "rm -rf " + unsortedfile
os.system(command)
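# The pipeline above is equivalent to this shell sketch (bedtools' legacy
# mergeBed, whose -nms flag joins the name fields of merged intervals with
# semicolons):
#   cat <insilicopath>/*.bed > unsorted.bed
#   sortBed -i unsorted.bed > complete.bed
#   mergeBed -nms -i complete.bed > compiled.bed
#   rm unsorted.bed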
print
# gene and cell collection reporting mode (cells per gene and genes per cell):
elif option.mode == "reports":
taskDict = {
"gene" : [cellsetpath, "cells"],
"cell" : [genesetpath, "genes"]
}
print
for task in taskDict:
inputpath, column = taskDict[task]
for collection in os.listdir(inputpath):
if collection in option.collection.split(",") or option.collection == "OFF":
print "Processing:", task, collection
f_output = open(reportspath + "mapcell_report_" + task + "_" + collection, "w")
print >>f_output, "\t".join([task, "count", column])
for item in sorted(os.listdir(inputpath + collection)):
contents = open(inputpath + collection + "/" + item).read().strip().split("\n")
contents = general.clean(contents)
print >>f_output, "\t".join(map(str, [item, len(contents), ",".join(sorted(contents))]))
f_output.close()
print
print
# cell collection mode (cells expressing each gene):
elif option.mode == "cell.collection":
# establish descendants cutoff:
if option.descendants == "OFF":
descendants_cutoff = 1000000
descendants_handle = "XX"
else:
descendants_cutoff = int(option.descendants)
descendants_handle = option.descendants
# establish ascendants cutoff:
if option.ascendants == "OFF":
ascendants_cutoff = 0
ascendants_handle = "XX"
else:
ascendants_cutoff = int(option.ascendants)
ascendants_handle = option.ascendants
# establish limit cutoff:
if option.limit == "OFF":
limit_cutoff = "OFF"
limit_handle = "XX"
else:
limit_cutoff = int(option.limit)
limit_handle = option.limit
# define output folder:
cellsetpath = cellsetpath + option.collection + "/"
general.pathGenerator(cellsetpath)
# export expressing-cells for each gene:
if option.expression != "OFF":
# build cell-expression matrix:
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# export cells per gene:
for gene in sorted(expression_matrix.keys()):
f_output = open(cellsetpath + gene, "w")
for cell in sorted(expression_matrix[gene]):
print >>f_output, cell
f_output.close()
# export cells for SOM neurons:
if option.neurons != "OFF":
# update path to neurons:
neuronspath = neuronspath + option.peaks + "/"
# define input path:
sumpath = neuronspath + option.technique + "/results/" + option.neurons + "/summary/"
sumfile = "mapneurons_summary.txt"
# build header dict:
hd = general.build_header_dict(sumpath + sumfile)
# build SOM-cell matrix:
collection_matrix, trackedCells = dict(), list()
inlines = open(sumpath + sumfile).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.rstrip("\n").split("\t")
neuron, cells = initems[hd["neuron"]], initems[hd["class.ids"]]
collection_matrix[neuron] = general.clean(cells.split(","), "")
trackedCells.extend(cells.split(","))
trackedCells = general.clean(list(set(trackedCells)), "")
# export cells per gene:
for neuron in sorted(collection_matrix.keys()):
f_output = open(cellsetpath + neuron, "w")
for cell in sorted(collection_matrix[neuron]):
print >>f_output, cell
f_output.close()
# gene collection mode (genes expressed per cell):
elif option.mode == "gene.collection":
# establish descendants cutoff:
if option.descendants == "OFF":
descendants_cutoff = 1000000
descendants_handle = "XX"
else:
descendants_cutoff = int(option.descendants)
descendants_handle = option.descendants
# establish ascendants cutoff:
if option.ascendants == "OFF":
ascendants_cutoff = 0
ascendants_handle = "XX"
else:
ascendants_cutoff = int(option.ascendants)
ascendants_handle = option.ascendants
# establish limit cutoff:
if option.limit == "OFF":
limit_cutoff = "OFF"
limit_handle = "XX"
else:
limit_cutoff = int(option.limit)
limit_handle = option.limit
# define output folder:
genesetpath = genesetpath + option.collection + "/"
general.pathGenerator(genesetpath)
# export expressing-cells for each gene:
if option.expression != "OFF":
# build cell-expression matrix:
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# gather cells:
cells = list()
for gene in sorted(quantitation_matrix.keys()):
for cell in sorted(quantitation_matrix[gene]):
cells.append(cell)
cells = sorted(list(set(cells)))
# invert matrix:
inverted_matrix = dict()
for gene in sorted(expression_matrix.keys()):
for cell in sorted(expression_matrix[gene]):
if not cell in inverted_matrix:
inverted_matrix[cell] = list()
inverted_matrix[cell].append(gene)
# export cells per gene:
for cell in cells:
f_output = open(genesetpath + cell, "w")
if cell in inverted_matrix:
for gene in sorted(inverted_matrix[cell]):
print >>f_output, gene
f_output.close()
# cell transfer mode:
elif option.mode == "cell.transfer":
# define time-range to examine:
timeRange = range(option.start, option.stop + 1, option.step)
# generate new collections:
for timePoint in timeRange:
# define output path:
outpath = cellsetpath + option.name + general.indexTag(timePoint, option.total) + option.nametag + "/"
general.pathGenerator(outpath)
# load timePoint cells:
timedCells = getTargetCells(inpath=timepath + option.times + "/cells/", mode="time", timeRange=[timePoint])
# parse per gene signatures in collection:
for gene in os.listdir(cellsetpath + option.collection):
# load expression cells:
expressionCells = open(cellsetpath + option.collection + "/" + gene).read().split("\n")
# export timed, expressionCells:
f_output = open(outpath + gene, "w")
print >>f_output, "\n".join(sorted(list(set(timedCells).intersection(set(expressionCells)))))
f_output.close()
# mapping overlap mode:
elif option.mode == "cell.overlap":
# update overlappath:
overlappath = overlappath + option.collection + "/"
general.pathGenerator(overlappath)
# build cell-expression matrix:
print
print "Loading cellular expression..."
signal_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="cell.expression")
fraction_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
normal_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
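# NOTE: normal_matrix is built with the same fraction.expression metric as
# fraction_matrix above; a distinct normal-expression metric may have been
# intended here.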
rank_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="rank")
# load collection cells:
#print "Loading target cells..."
targetCells = getTargetCells(inpath=cellsetpath + option.collection + "/", mode="collection")
# create output file:
o_output = open(overlappath + "mapcells_" + option.collection + "_matrix_overlap", "w")
o_header = ["i", "j", "i.cells", "j.cells", "overlap.cells", "total.cells", "overlap.max", "overlap.sum", "overlap.avg", "expression.cor", "fraction.cor", "normal.cor", "rank.cor", "pvalue", "pvalue.adj", "score", "i.only.ids", "j.only.ids", "overlap.ids"]
print >>o_output, "\t".join(o_header)
# find maximum rank, if necessary:
if option.extend == "ON":
maxRank = 0
for gene in rank_matrix:
for targetCell in rank_matrix[gene]:
if int(rank_matrix[gene][targetCell]) > maxRank:
maxRank = int(rank_matrix[gene][targetCell])
# load gene-expressing cells data:
print
print "Build expression matrix per gene..."
genes = os.listdir(cellsetpath + option.collection)
matrix = dict()
for gene in genes:
matrix[gene] = dict()
matrix[gene]["cells"], matrix[gene]["signals"], matrix[gene]["fractions"], matrix[gene]["normals"], matrix[gene]["ranks"] = list(), list(), list(), list(), list()
expressionCells = open(cellsetpath + option.collection + "/" + gene).read().split("\n")
for targetCell in targetCells:
if targetCell in expressionCells:
matrix[gene]["cells"].append(targetCell)
if targetCell in signal_matrix[gene]:
matrix[gene]["signals"].append(signal_matrix[gene][targetCell])
matrix[gene]["fractions"].append(fraction_matrix[gene][targetCell])
matrix[gene]["normals"].append(normal_matrix[gene][targetCell])
matrix[gene]["ranks"].append(rank_matrix[gene][targetCell])
elif option.extend == "ON":
matrix[gene]["signals"].append(0)
matrix[gene]["fractions"].append(0)
matrix[gene]["normals"].append(0)
matrix[gene]["ranks"].append(maxRank)
# print a matrix of cell expression overlap between genes:
print "Exporting cellular overlap matrix..."
adjust = len(matrix)*len(matrix)
universe = len(targetCells)
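# Bonferroni factor: every gene is compared against every gene, including
# self-comparisons.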
for geneX in sorted(matrix.keys()):
cellsX = matrix[geneX]["cells"]
signalsX, fractionsX, normalsX, ranksX = numpy.array(matrix[geneX]["signals"]), numpy.array(matrix[geneX]["fractions"]), numpy.array(matrix[geneX]["normals"]), numpy.array(matrix[geneX]["ranks"])
for geneY in sorted(matrix.keys()):
cellsY = matrix[geneY]["cells"]
signalsY, fractionsY, normalsY, ranksY = numpy.array(matrix[geneY]["signals"]), numpy.array(matrix[geneY]["fractions"]), numpy.array(matrix[geneY]["normals"]), numpy.array(matrix[geneY]["ranks"])
signalCor = numpy.corrcoef(signalsX, signalsY)[0][1]
fractionCor = numpy.corrcoef(fractionsX, fractionsY)[0][1]
normalCor = numpy.corrcoef(normalsX, normalsY)[0][1]
rankCor = numpy.corrcoef(ranksX, ranksY)[0][1]
cellsXo = sorted(set(cellsX).difference(set(cellsY))) # X-only cells
cellsYo = sorted(set(cellsY).difference(set(cellsX))) # Y-only cells
cellsO = sorted(set(cellsX).intersection(set(cellsY))) # overlap
cellsU = sorted(set(cellsX).union(set(cellsY))) # union
cellsT = targetCells
# Hypergeometric parameters:
m = len(cellsX) # number of white balls in urn
n = universe - len(cellsX) # number of black balls in urn
N = len(cellsY) # number of balls drawn from urn
x = len(cellsO) # number of white balls in the draw
# If I pull out all balls with elephant tattoos (N), is the draw enriched in white balls?:
pvalue = hyper.fishers(x, m+n, m, N, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
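# calculate enrichment/depletion score: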
score = hyper.directional(x, m+n, m, N, adjust=adjust)
output = [geneX, geneY]
output.append(len(cellsX))
output.append(len(cellsY))
output.append(len(cellsO))
output.append(len(cellsT))
if len(cellsO) > 0:
output.append(float(len(cellsO))/min(len(cellsX), len(cellsY)))
output.append(float(len(cellsO))/len(cellsU))
output.append(float(len(cellsO))/numpy.mean([len(cellsX), len(cellsY)]))
else:
output.append(0)
output.append(0)
output.append(0)
output.append(signalCor)
output.append(fractionCor)
output.append(normalCor)
output.append(rankCor)
output.append(pvalue)
output.append(adjPvalue)
output.append(score)
output.append(";".join(cellsXo))
output.append(";".join(cellsYo))
output.append(";".join(cellsO))
if len(output) != len(o_header):
print len(o_header), len(output)
print output
print
pdb.set_trace()
if " " in "\t".join(map(str, output)):
print output
pdb.set_trace()
print >>o_output, "\t".join(map(str, output))
# close output:
o_output.close()
print
# hybrid (datatypes) matrix mode:
elif option.mode == "cell.hybrid":
# get comparison properties:
peaks, domain = option.a.split("/")[:2]
collection = option.b.split("/")[0]
# load target contexts:
codeContexts, targetContexts = metrn.options_dict["contexts.extended"][option.contexts]
# make comparison output folders:
hybridpath = hybridpath + collection + "/" + peaks + "/" + domain + "/" + codeContexts + "/"
general.pathGenerator(hybridpath)
# define input files:
ainfile = str(coassociationspath + option.a).replace("//","/")
binfile = str(cellspath + "overlap/" + option.b).replace("//","/")
# load input headers:
aheader = general.build_header_dict(ainfile)
bheader = general.build_header_dict(binfile)
# load co-association results:
print
print "Loading co-associations..."
cobindingFrames = general.build2(ainfile, id_complex=["i", "j"], separator=":")
# load cellular expression overlap:
print "Loading co-expression..."
coexpressionFrames = general.build2(binfile, id_complex=["i", "j"], separator=":")
coexpressionMatrix = general.build2(binfile, i="i", j="j", x="overlap.sum", mode="matrix")
# characterize input file basenames:
abasename = option.a.split("/")[len(option.a.split("/"))-1].replace(".txt","").replace(".bed","")
bbasename = option.b.split("/")[len(option.b.split("/"))-1].replace(".txt","").replace(".bed","")
# define output file:
f_outfile = hybridpath + "mapcells_hybrid_" + collection + "-" + peaks + "-" + domain + "_combined.txt"
f_output = open(f_outfile, "w")
# generate output header:
header = ["i", "j", "label"]
acolumns = list()
for acolumn in general.valuesort(aheader):
if not acolumn in header:
acolumns.append(acolumn)
bcolumns = list()
for bcolumn in general.valuesort(bheader):
if not bcolumn in header and not bcolumn in ["i.only.ids", "j.only.ids", "overlap.ids"]:
bcolumns.append(bcolumn)
print >>f_output, "\t".join(header + acolumns + bcolumns)
# filter-match results:
print "Merging co-binding and co-expression..."
ifactor, jfactor = option.indexes.split(",")
icontext, jcontext = option.values.split(",")
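# option.indexes and option.values name the co-binding columns that hold the
# factor identities and their contexts (hypothetical example: "i.factor,j.factor"
# and "i.context,j.context"):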
comparisons = list()
for cobindingComparison in sorted(cobindingFrames.keys()):
iFactor, jFactor = cobindingFrames[cobindingComparison][ifactor], cobindingFrames[cobindingComparison][jfactor]
iContext, jContext = cobindingFrames[cobindingComparison][icontext], cobindingFrames[cobindingComparison][jcontext]
if iContext in targetContexts and jContext in targetContexts:
if iFactor in coexpressionMatrix and jFactor in coexpressionMatrix:
coexpressionComparison = iFactor + ":" + jFactor
label = ":".join(sorted([iFactor, jFactor]))
if not coexpressionComparison in comparisons:
output = [iFactor, jFactor, label]
for acolumn in acolumns:
output.append(cobindingFrames[cobindingComparison][acolumn])
for bcolumn in bcolumns:
output.append(coexpressionFrames[coexpressionComparison][bcolumn])
print >>f_output, "\t".join(map(str, output))
comparisons.append(coexpressionComparison)
# NOTE: this filtering for unique comparisons ensures that only one
# of the RNA PolII datasets gets used.
# close output file:
f_output.close()
print "Merged comparisons:", len(comparisons)
print
# dynamics (hybrid) matrix mode:
elif option.mode == "cell.dynamics":
# are we working with hybrid co-binding and co-expression data?
if option.peaks != "OFF" and option.domain != "OFF":
hybridMode = "ON"
else:
hybridMode = "OFF"
# make comparison output folders:
if hybridMode == "ON":
dynamicspath = dynamicspath + option.name + "/" + option.peaks + "/" + option.domain + "/"
general.pathGenerator(dynamicspath)
f_outfile = dynamicspath + "mapcells_hybrid_" + option.name + "-" + option.peaks + "-" + option.domain + "_dynamics.txt"
f_output = open(f_outfile, "w")
u_outfile = dynamicspath + "mapcells_hybrid_" + option.name + "-" + option.peaks + "-" + option.domain + "_uniqueID.txt"
u_output = open(u_outfile, "w")
else:
dynamicspath = dynamicspath + option.name + "/overlap/"
general.pathGenerator(dynamicspath)
f_outfile = dynamicspath + "mapcells_direct_" + option.name + "_dynamics.txt"
f_output = open(f_outfile, "w")
u_outfile = dynamicspath + "mapcells_direct_" + option.name + "_uniqueID.txt"
u_output = open(u_outfile, "w")
# track whether the combined header has been written:
k = 0
# load target contexts:
codeContexts, targetContexts = metrn.options_dict["contexts.extended"][option.contexts]
# load overlap data from collections:
print
print "Transfer dynamic co-binding and co-expression analysis..."
for index in range(option.start, option.stop+1, option.step):
collection = option.collection + general.indexTag(index, option.total) + option.nametag
labels = list()
# process hybrid co-binding and co-expression data:
if hybridMode == "ON" and collection in os.listdir(hybridpath):
if option.peaks in os.listdir(hybridpath + collection):
if option.domain in os.listdir(hybridpath + collection + "/" + option.peaks):
infile = hybridpath + collection + "/" + option.peaks + "/" + option.domain + "/" + codeContexts + "/mapcells_hybrid_" + collection + "-" + option.peaks + "-" + option.domain + "_combined.txt"
inlines = open(infile).readlines()
header = inlines.pop(0)
if k == 0:
print >>f_output, "timepoint" + "\t" + header.strip()
print >>u_output, "timepoint" + "\t" + header.strip()
k += 1
for inline in inlines:
label = inline.strip().split("\t")[2]
print >>f_output, str(index) + "\t" + inline.strip()
if not label in labels:
print >>u_output, str(index) + "\t" + inline.strip()
labels.append(label)
# process direct co-expression data:
if hybridMode == "OFF" and collection in os.listdir(overlappath):
infile = overlappath + collection + "/mapcells_" + collection + "_matrix_overlap"
inlines = open(infile).readlines()
header = inlines.pop(0)
if k == 0:
headerItems = header.strip().split("\t")[:15]
print >>f_output, "\t".join(["timepoint", "label"] + headerItems)
print >>u_output, "\t".join(["timepoint", "label"] + headerItems)
k += 1
for inline in inlines:
initems = inline.strip().split("\t")[:15]
label = ":".join(sorted([initems[0], initems[1]]))
print >>f_output, "\t".join([str(index), label] + initems)
if not label in labels:
print >>u_output, "\t".join([str(index), label] + initems)
labels.append(label)
f_output.close()
u_output.close()
print
# hypergeometric tissue-testing mode:
elif option.mode == "test.tissues":
# load tissue annotation matrixes:
print
print "Loading tissue annotations..."
specificTissues = general.build2(expressionpath + option.infile, i="specific.tissue", j="cell", mode="matrix", counter=True)
generalTissues = general.build2(expressionpath + option.infile, i="general.tissue", j="cell", mode="matrix", counter=True)
classTissues = general.build2(expressionpath + option.infile, i="class.tissue", j="cell", mode="matrix", counter=True)
totalCells = general.build2(expressionpath + option.expression, i="cell", x="specific.tissue", mode="values", skip=True)
totalCells = sorted(totalCells.keys())
# define a function for testing:
def tissueTesting(queryCells, tissueMatrix, totalCells, adjust=1, match=True):
if match:
queryCells = set(queryCells).intersection(set(totalCells))
tissueOverlap = dict()
for tissue in sorted(tissueMatrix.keys()):
tissueCells = sorted(tissueMatrix[tissue].keys())
if match:
tissueCells = set(tissueCells).intersection(set(totalCells))
overlapCells = set(queryCells).intersection(set(tissueCells))
m = len(queryCells)
n = len(totalCells) - len(queryCells)
U = len(totalCells)
N = len(tissueCells)
x = len(overlapCells)
unionized = len(set(queryCells).union(set(tissueCells)))
maximized = min(len(queryCells), len(tissueCells))
# determine overlap fractions:
if maximized > 0:
maxOverlap = float(x)/maximized
else:
maxOverlap = 0
if unionized > 0:
sumOverlap = float(x)/unionized
else:
sumOverlap = 0
# calculate right-tailed hypergeometric p-value (one-sided Fisher's exact test):
pvalue = hyper.fishers(x, U, m, N, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
# calculate enrichment/depletion score:
score = hyper.directional(x, U, m, N, adjust=adjust)
# store overlap scores:
tissueOverlap[tissue] = [len(queryCells), len(tissueCells), len(overlapCells), len(totalCells), maxOverlap, sumOverlap, pvalue, adjPvalue, score]
# return overlap scores:
return tissueOverlap
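# Usage sketch (hypothetical cell names): for one gene's expressing cells,
# tissueTesting returns one row of overlap statistics per tissue, e.g.
#   rows = tissueTesting(["ABala", "ABalp"], specificTissues, totalCells, adjust=100)
#   rows["neuron"] -> [query.cells, tissue.cells, overlap.cells, total.cells,
#                      overlap.max, overlap.sum, pvalue, pvalue.adj, score]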
# load genes:
genes = sorted(os.listdir(cellsetpath + option.collection))
# determine Bonferroni correction factors:
adjustSpecific = len(genes)*len(specificTissues)
adjustGeneral = len(genes)*len(generalTissues)
adjustClass = len(genes)*len(classTissues)
#print adjustSpecific
#print adjustGeneral
#print adjustClass
#pdb.set_trace()
# load cellular expression patterns per gene:
print "Loading per gene expression matrix..."
specificMatrix, generalMatrix, classMatrix = dict(), dict(), dict()
for gene in genes:
cells = open(cellsetpath + option.collection + "/" + gene).read().split("\n")
specificMatrix[gene] = tissueTesting(cells, specificTissues, totalCells, adjust=adjustSpecific)
generalMatrix[gene] = tissueTesting(cells, generalTissues, totalCells, adjust=adjustGeneral)
classMatrix[gene] = tissueTesting(cells, classTissues, totalCells, adjust=adjustClass)
# load cellular expression patterns per gene:
print "Exporting overlap scores..."
s_output = open(tissuespath + "mapcells_" + option.collection + "_matrix_specific.txt", "w")
g_output = open(tissuespath + "mapcells_" + option.collection + "_matrix_general.txt" , "w")
c_output = open(tissuespath + "mapcells_" + option.collection + "_matrix_class.txt" , "w")
print >>s_output, "\t".join(["gene", "tissue", "gene.cells", "tissue.cells", "overlap.cells", "total.cells", "overlap.max", "overlap.sum", "pvalue", "pvalue.adj", "score"])
print >>g_output, "\t".join(["gene", "tissue", "gene.cells", "tissue.cells", "overlap.cells", "total.cells", "overlap.max", "overlap.sum", "pvalue", "pvalue.adj", "score"])
print >>c_output, "\t".join(["gene", "tissue", "gene.cells", "tissue.cells", "overlap.cells", "total.cells", "overlap.max", "overlap.sum", "pvalue", "pvalue.adj", "score"])
for gene in sorted(specificMatrix.keys()):
for tissue in sorted(specificMatrix[gene].keys()):
print >>s_output, "\t".join(map(str, [gene, tissue] + specificMatrix[gene][tissue]))
for tissue in sorted(generalMatrix[gene].keys()):
print >>g_output, "\t".join(map(str, [gene, tissue] + generalMatrix[gene][tissue]))
for tissue in sorted(classMatrix[gene].keys()):
print >>c_output, "\t".join(map(str, [gene, tissue] + classMatrix[gene][tissue]))
# close outputs:
s_output.close()
g_output.close()
c_output.close()
print
# lineage construction/generation mode:
elif option.mode == "build.lineages":
import time
# establish descendants cutoff:
if option.descendants == "OFF":
descendants_cutoff = 1000000
descendants_handle = "XX"
else:
descendants_cutoff = int(option.descendants)
descendants_handle = option.descendants
# establish ascendants cutoff:
if option.ascendants == "OFF":
ascendants_cutoff = 0
ascendants_handle = "XX"
else:
ascendants_cutoff = int(option.ascendants)
ascendants_handle = option.ascendants
# establish limit cutoff:
if option.limit == "OFF":
limit_cutoff = "OFF"
limit_handle = "XX"
else:
limit_cutoff = int(option.limit)
limit_handle = option.limit
# define output paths:
logpath = lineagepath + option.name + "/" + option.method + "/lineage." + option.lineages + "/ascendants." + ascendants_handle + "/descendants." + descendants_handle + "/limit." + limit_handle + "/log/"
buildpath = lineagepath + option.name + "/" + option.method + "/lineage." + option.lineages + "/ascendants." + ascendants_handle + "/descendants." + descendants_handle + "/limit." + limit_handle + "/build/"
general.pathGenerator(logpath)
general.pathGenerator(buildpath)
# prepare log file:
l_output = open(logpath + "mapcells_build_" + option.cells + ".log", "w")
# clear output folder contents:
command = "rm -rf " + buildpath + "*"
os.system(command)
# build cell-expression matrix:
print
print "Loading cellular expression..."
print >>l_output, "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# store cell-parent relationships:
print "Loading cell-parent relationships..."
print >>l_output, "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, trackedCells=trackedCells, lineages=option.lineages)
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
print >>l_output, "Pedigree cells:", len(pedigreeCells)
print >>l_output, "Tracked cells:", len(trackedCells)
# generate lineages for enrichment:
print
print "Generating lineages..."
print >>l_output, ""
print >>l_output, "Generating lineages..."
i, j, maxDN, minUP = 0, 0, 0, 10000
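# i: pedigree nodes visited; j: nodes passing both cutoffs; maxDN/minUP track
# the extreme descendant/ascendant counts among the nodes kept.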
for parent in pedigreeCells:
i += 1
# define descendant cells:
descendants = descendantsCollector(parent, parent_dict, cell_dict, descendants=list())
# define ascendants cells:
ascendants = ascendantsCollector(parent, parent_dict, cell_dict, ascendants=list())
# calculate combinations possible:
combinations = combinationCalculator(len(descendants), len(descendants))
# apply descendants cutoff:
if len(descendants) <= descendants_cutoff and len(ascendants) >= ascendants_cutoff:
j += 1
print parent, len(ascendants), len(descendants), time.asctime(time.localtime())
print >>l_output, parent, len(ascendants), len(descendants), time.asctime(time.localtime())
# record max and min cutoffs:
if len(ascendants) < minUP:
minUP = len(ascendants)
if len(descendants) > maxDN:
maxDN = len(descendants)
# define lineage cells:
if option.method == "descender":
subtrees = [",".join(descendants)]
elif option.method == "generator":
subtrees = lineageGenerator(parent, parent_dict, cell_dict)
elif option.method == "builder":
subtrees = lineageBuilder(parent, parent_dict, cell_dict, limit=limit_cutoff)
elif option.method == "collector":
subtrees = lineageCollector(expression_matrix[gene], parent_dict, cell_dict)
print subtrees
pdb.set_trace() # not implemented yet ("gene" is not bound in this loop)
# export lineage cells:
f_output = open(buildpath + parent, "w")
index = 1
for subtree in subtrees:
print >>f_output, "\t".join([parent, parent + "." + str(index), subtree])
index += 1
f_output.close()
print
print "Pedigree nodes lineaged:", i
print "Pedigree nodes examined:", j, "(" + str(round(100*float(j)/i, 2)) + "%)"
print "Maximum descendants:", maxDN
print "Minimum ascendants:", minUP
print
print >>l_output, ""
print >>l_output, "Pedigree nodes lineaged:", i
print >>l_output, "Pedigree nodes examined:", j, "(" + str(round(100*float(j)/i, 2)) + "%)"
print >>l_output, "Maximum descendants:", maxDN
print >>l_output, "Minimum ascendants:", minUP
# close output files:
l_output.close()
#pdb.set_trace()
# hypergeometric lineage-testing mode:
elif option.mode == "test.lineages":
# establish descendants cutoff:
if option.descendants == "OFF":
descendants_cutoff = 1000000
descendants_handle = "XX"
else:
descendants_cutoff = int(option.descendants)
descendants_handle = option.descendants
# establish ascendants cutoff:
if option.ascendants == "OFF":
ascendants_cutoff = 0
ascendants_handle = "XX"
else:
ascendants_cutoff = int(option.ascendants)
ascendants_handle = option.ascendants
# establish limit cutoff:
if option.limit == "OFF":
limit_cutoff = "OFF"
limit_handle = "XX"
else:
limit_cutoff = int(option.limit)
limit_handle = option.limit
# define output paths:
logpath = lineagepath + option.name + "/" + option.method + "/lineage." + option.lineages + "/ascendants." + ascendants_handle + "/descendants." + descendants_handle + "/limit." + limit_handle + "/log/"
buildpath = lineagepath + option.name + "/" + option.method + "/lineage." + option.lineages + "/ascendants." + ascendants_handle + "/descendants." + descendants_handle + "/limit." + limit_handle + "/build/"
hyperpath = lineagepath + option.name + "/" + option.method + "/lineage." + option.lineages + "/ascendants." + ascendants_handle + "/descendants." + descendants_handle + "/limit." + limit_handle + "/hyper/"
cellsetpath = cellsetpath + option.collection + "/"
general.pathGenerator(logpath)
general.pathGenerator(buildpath)
general.pathGenerator(hyperpath)
#general.pathGenerator(cellsetpath)
# prepare log file:
l_output = open(logpath + "mapcells_hyper_" + option.collection + "_" + option.cells + ".log", "w")
# build cell-expression matrix:
print
print "Loading cellular expression..."
print >>l_output, "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# load cell-parent relationships:
print "Loading cell-parent relationships..."
print >>l_output, "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, trackedCells=trackedCells, lineages=option.lineages)
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
print >>l_output, "Pedigree cells:", len(pedigreeCells)
print >>l_output, "Tracked cells:", len(trackedCells)
# prepare for scanning...
i, j, k = 0, 0, 0
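# i: lineages examined; j: lineages passing the overlap cutoff; k: lineages
# significant after multiple-hypothesis correction.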
nodes = general.clean(os.listdir(buildpath), '.DS_Store')
overlap_dict, pvalue_dict, score_dict = dict(), dict(), dict()
# prepare output file:
f_output = open(hyperpath + "mapcells_hyper_" + option.collection + "_" + option.cells + ".txt", "w")
header = ["gene", "node", "lineage", "experiment.cells", "lineage.cells", "overlap.sum", "overlap.max", "overlap.count", "total.count", "lineage.count", "expressed.count", "pvalue", "pvalue.adj", "score", "cells"]
print >>f_output, "\t".join(map(str, header))
# load target cells:
print
print "Loading target cells..."
print >>l_output, ""
print >>l_output, "Loading target cells..."
collection_matrix = dict()
for collection in os.listdir(cellsetpath):
collectionCells = general.clean(open(cellsetpath + collection).read().split("\n"), "")
collection_matrix[collection] = collectionCells
#print collection, collectionCells
# define multiple-hypothesis correction factor:
lineageTotal = 0
for node in nodes:
lineageTotal += general.countLines(buildpath + node)
adjust = len(collection_matrix)*lineageTotal
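# Bonferroni factor: every collection (gene) is tested against every lineage
# across all pedigree nodes.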
# check background cell population:
if option.cells == "tracked":
pedigreeCells = list(trackedCells)
# scan cells for enrichment:
print "Scanning cells for lineage enrichments..."
print >>l_output, ""
print >>l_output, "Scanning cells for lineage enrichments..."
collectionsEnriched = list()
for collection in collection_matrix:
collectionCells = collection_matrix[collection]
# filter (complete) pedigree cells to reduce to tracked cells?
if option.cells == "tracked" and collection in tracking_matrix:
completeCells = set(tracking_matrix[collection]).intersection(set(pedigreeCells))
else:
completeCells = pedigreeCells
# Note: These are cells for which we have expression measurements for gene ('collection')...
# Note: It is not necessary to filter expression cells because these are by definition a subset of the tracked cells.
# scan lineages for enrichment:
nodesEnriched, linesEnriched = list(), list()
for node in nodes:
# load node-specific lineages:
lineageLines = open(buildpath + node).readlines()
for lineageLine in lineageLines:
lineageNode, lineageName, lineageCells = lineageLine.strip().split("\t")
lineageCells = lineageCells.split(",")
lineageCount = len(lineageCells)
# filter lineage cells to reduce to tracked cells?
if option.cells == "tracked" and collection in tracking_matrix:
lineageCells = set(tracking_matrix[collection]).intersection(set(lineageCells))
#print collection, node, len(tracking_matrix[collection]), len(lineageCells), ",".join(lineageCells)
#pdb.set_trace()
# test enrichment in lineage:
i += 1
completed = len(completeCells)
descended = len(lineageCells)
collected = len(collectionCells)
overlaped = len(set(collectionCells).intersection(set(lineageCells)))
unionized = len(set(collectionCells).union(set(lineageCells)))
maximized = min(descended, collected)
# determine overlaps:
if maximized > 0:
maxOverlap = float(overlaped)/maximized
else:
maxOverlap = 0
if unionized > 0:
sumOverlap = float(overlaped)/unionized
else:
sumOverlap = 0
# check overlap:
if maxOverlap >= float(option.overlap):
j += 1
# calculate right-tailed hypergeometric p-value (one-sided Fisher's exact test):
pvalue = hyper.fishers(overlaped, completed, descended, collected, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
# calculate enrichment/depletion score:
score = hyper.directional(overlaped, completed, descended, collected, adjust=adjust)
# should we store this result?
if adjPvalue < float(option.pvalue):
k += 1
if not collection in overlap_dict:
overlap_dict[collection], pvalue_dict[collection], score_dict[collection] = dict(), dict(), dict()
overlap_dict[collection][node] = maxOverlap
pvalue_dict[collection][node] = adjPvalue
score_dict[collection][node] = score
output = [collection, node, lineageName, len(collectionCells), lineageCount, sumOverlap, maxOverlap, overlaped, completed, descended, collected, pvalue, adjPvalue, score, ','.join(lineageCells)]
print >>f_output, "\t".join(map(str, output))
if not collection in collectionsEnriched:
collectionsEnriched.append(collection)
if not node in nodesEnriched:
nodesEnriched.append(node)
linesEnriched.append(lineageName)
print collection, i, j, k, len(collectionsEnriched), len(nodesEnriched), len(linesEnriched)
print >>l_output, collection, i, j, k, len(collectionsEnriched), len(nodesEnriched), len(linesEnriched)
print
print "Lineages examined:", i
print "Lineages overlapped:", j
print "Lineages significant:", k, "(" + str(round(100*float(k)/i, 2)) + "%)"
print
print >>l_output, ""
print >>l_output, "Lineages examined:", i
print >>l_output, "Lineages overlapped:", j
print >>l_output, "Lineages significant:", k, "(" + str(round(100*float(k)/i, 2)) + "%)"
# close output file
f_output.close()
l_output.close()
#pdb.set_trace()
# hypergeometric testing between sets of cells mode:
elif option.mode == "test.comparison":
# define output paths:
querypath = cellsetpath + option.query + "/"
targetpath = cellsetpath + option.target + "/"
hyperpath = comparepath + option.name + "/hyper/"
logpath = comparepath + option.name + "/log/"
general.pathGenerator(hyperpath)
general.pathGenerator(logpath)
# prepare log file:
l_output = open(logpath + "mapcells_comparison_" + option.name + "_" + option.cells + ".log", "w")
# build cell-expression matrix:
print
print "Loading cellular expression..."
print >>l_output, "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# store cell-parent relationships:
print "Loading cell-parent relationships..."
print >>l_output, "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, trackedCells=trackedCells, lineages=option.cells)
# Note that here the lineage-filtering uses the indicated cells option!
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
print >>l_output, "Pedigree cells:", len(pedigreeCells)
print >>l_output, "Tracked cells:", len(trackedCells)
# prepare for scanning...
overlap_dict, pvalue_dict = dict(), dict()
i, j, k = 0, 0, 0
# prepare output file:
f_output = open(hyperpath + "mapcells_test_" + option.name + "_" + option.cells + "_comparison.txt", "w")
header = ["query", "target", "lineage", "query.cells", "target.cells", "overlap.sum", "overlap.max", "overlap.count", "total.count", "query.count", "target.count", "pvalue", "pvalue.adj", "score", "cells"]
print >>f_output, "\t".join(map(str, header))
# load query cells:
print
print "Loading query cells..."
print >>l_output, ""
print >>l_output, "Loading query cells..."
query_matrix = dict()
for query in os.listdir(querypath):
queryCells = general.clean(open(querypath + query).read().split("\n"), "")
query_matrix[query] = queryCells
#print query, queryCells
# load target cells:
print
print "Loading target cells..."
print >>l_output, ""
print >>l_output, "Loading target cells..."
target_matrix = dict()
for target in os.listdir(targetpath):
targetCells = general.clean(open(targetpath + target).read().split("\n"), "")
target_matrix[target] = targetCells
#print target, targetCells
# define multiple-hypothesis correction factor:
adjust = len(query_matrix)*len(target_matrix)
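# Bonferroni factor: every query set is tested against every target set.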
# check background cell population:
if option.cells == "tracked":
pedigreeCells = list(trackedCells)
# scan query cells for enrichment:
print "Scanning target cells for query cells enrichment..."
print >>l_output, ""
print >>l_output, "Scanning target cells for query cells enrichment..."
queriesEnriched = list()
for query in sorted(query_matrix.keys()):
queryCells = list(set(query_matrix[query]))
# filter query cells to reduce to tracked cells?
if option.cells == "tracked":
queryCells = set(queryCells).intersection(set(pedigreeCells))
# scan target cells for enrichment:
targetsEnriched, linesEnriched = list(), list()
for target in sorted(target_matrix.keys()):
targetCells = list(set(target_matrix[target]))
# filter target cells to reduce to tracked cells?
if option.cells == "tracked":
targetCells = set(targetCells).intersection(set(pedigreeCells))
#print query, target, len(queryCells), len(targetCells), ",".join(targetCells)
#pdb.set_trace()
# test enrichment in lineage:
i += 1
completed = len(pedigreeCells)
descended = len(targetCells)
collected = len(queryCells)
overlaped = len(set(queryCells).intersection(set(targetCells)))
unionized = len(set(queryCells).union(set(targetCells)))
maximized = min(descended, collected)
# determine overlaps:
if maximized > 0:
maxOverlap = float(overlaped)/maximized
else:
maxOverlap = 0
if unionized > 0:
sumOverlap = float(overlaped)/unionized
else:
sumOverlap = 0
# check overlap:
if maxOverlap >= float(option.overlap):
j += 1
# calculate right-tailed hypergeometric p-value (one-sided Fisher's exact test):
pvalue = hyper.fishers(overlaped, completed, descended, collected, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
# calculate enrichment/depletion score:
score = hyper.directional(overlaped, completed, descended, collected, adjust=adjust)
if adjPvalue < float(option.pvalue):
k += 1
if not query in overlap_dict:
overlap_dict[query], pvalue_dict[query] = dict(), dict()
overlap_dict[query][target] = maxOverlap
pvalue_dict[query][target] = pvalue
output = [query, target, option.target, len(queryCells), len(targetCells), sumOverlap, maxOverlap, overlaped, completed, descended, collected, pvalue, adjPvalue, score, ','.join(targetCells)]
print >>f_output, "\t".join(map(str, output))
if not query in queriesEnriched:
queriesEnriched.append(query)
if not target in targetsEnriched:
targetsEnriched.append(target)
print query, i, j, k, len(queriesEnriched), len(targetsEnriched), len(linesEnriched)
print >>l_output, query, i, j, k, len(queriesEnriched), len(targetsEnriched), len(linesEnriched)
print
print "Lineages examined:", i
print "Lineages overlapped:", j
print "Lineages significant:", k, "(" + str(round(100*float(k)/i, 2)) + "%)"
print
print >>l_output, ""
print >>l_output, "Lineages examined:", i
print >>l_output, "Lineages overlapped:", j
print >>l_output, "Lineages significant:", k, "(" + str(round(100*float(k)/i, 2)) + "%)"
# close output file
f_output.close()
l_output.close()
# region-filtering mode: keep testing results only where the query promoter window contains a region of the matched neuron:
elif option.mode == "test.regions":
# update path to neurons:
neuronspath = neuronspath + option.peaks + "/"
# define input/output paths:
bedpath = neuronspath + option.technique + "/results/" + option.neurons + "/regions/bed/"
querypath = cellsetpath + option.query + "/"
targetpath = cellsetpath + option.target + "/"
hyperpath = comparepath + option.name + "/hyper/"
logpath = comparepath + option.name + "/log/"
general.pathGenerator(hyperpath)
general.pathGenerator(logpath)
# load region coordinates per neuron:
print
print "Loading regions per neuron matrix..."
neuron_matrix = dict()
for bedfile in general.clean(os.listdir(bedpath), ".DS_Store"):
neuron = bedfile.replace(".bed", "")
neuron_matrix[neuron] = dict()
for bedline in open(bedpath + bedfile).readlines():
chrm, start, stop, region = bedline.strip("\n").split("\t")[:4]
neuron_matrix[neuron][region] = [chrm, int(start), int(stop)]
# load gene coordinates:
print "Loading gene/feature coordinates..."
coord_dict = dict()
ad = general.build_header_dict(annotationspath + option.reference)
inlines = open(annotationspath + option.reference).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip("\n").split("\t")
chrm, start, stop, feature, strand, name = initems[ad["chrm"]], initems[ad["start"]], initems[ad["end"]], initems[ad["feature"]], initems[ad["strand"]], initems[ad["name"]]
if strand == "+":
coord_dict[name] = [chrm, int(start)-option.up, int(start)+option.dn]
coord_dict[feature] = [chrm, int(start)-option.up, int(start)+option.dn]
elif strand == "-":
coord_dict[name] = [chrm, int(stop)-option.dn, int(stop)+option.up]
coord_dict[feature] = [chrm, int(stop)-option.dn, int(stop)+option.up]
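# The windows above are strand-aware promoter regions around the transcription
# start site: [start - up, start + dn] for "+" genes and the mirror image
# around stop for "-" genes. With hypothetical values up=1000 and dn=200, a
# "+" gene starting at 5000 gets the window [4000, 5200].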
# prepare output file:
f_output = open(hyperpath + "mapcells_test_" + option.name + "_" + option.cells + "_regions.txt", "w")
# define hypergeometric results file:
hyperfile = hyperpath + "mapcells_test_" + option.name + "_" + option.cells + "_comparison.txt"
# build header dict:
hd = general.build_header_dict(hyperfile)
# scan hypergeometric results for queries whose promoter window fully contains a region of the enriched neuron:
print "Scanning hypergeometric results..."
i, j, k = 0, 0, 0
inlines = open(hyperfile).readlines()
print >>f_output, inlines.pop(0).strip("\n")
queriesMissed, queriesFound, targetsFound = list(), list(), list()
for inline in inlines:
initems = inline.strip("\n").split("\t")
query, target, pvalue = initems[hd["query"]], initems[hd["target"]], initems[hd["pvalue.adj"]]
if query in coord_dict:
i += 1
qchrm, qstart, qstop = coord_dict[query]
hits = False
for region in neuron_matrix[target]:
j += 1
rchrm, rstart, rstop = neuron_matrix[target][region]
if qchrm == rchrm:
if qstart <= rstart and qstop >= rstop:
k += 1
hits = True
if hits:
print >>f_output, inline.strip("\n")
queriesFound.append(query)
targetsFound.append(target)
else:
queriesMissed.append(query)
queriesMissed = sorted(list(set(queriesMissed)))
# close output file
f_output.close()
queriesFound = sorted(list(set(queriesFound)))
targetsFound = sorted(list(set(targetsFound)))
#pdb.set_trace()
print
print "Queries found in neurons:", len(queriesFound)
print "Neurons found in queries:", len(targetsFound)
print "Searches performed:", i
print "Searches performed (x Regions):", j
print "Searches with hits (x Regions):", k
print "Queries with coordinates and found:", ", ".join(queriesFound)
print "Queries missed (no coordinates):", len(queriesMissed)
print "\n".join(queriesMissed)
print
# false discovery rate mode:
elif option.mode == "test.fdr":
# update path to neurons:
neuronspath = neuronspath + option.peaks + "/"
# define input/output paths:
bedpath = neuronspath + option.technique + "/results/" + option.neurons + "/regions/bed/"
querypath = cellsetpath + option.query + "/"
targetpath = cellsetpath + option.target + "/"
hyperpath = comparepath + option.name + "/hyper/"
logpath = comparepath + option.name + "/log/"
general.pathGenerator(hyperpath)
general.pathGenerator(logpath)
# load region coordinates per neuron:
print
print "Loading regions per neuron matrix..."
neuron_matrix = dict()
for bedfile in general.clean(os.listdir(bedpath), ".DS_Store"):
neuron = bedfile.replace(".bed", "")
neuron_matrix[neuron] = dict()
for bedline in open(bedpath + bedfile).readlines():
chrm, start, stop, region = bedline.strip("\n").split("\t")[:4]
neuron_matrix[neuron][region] = [chrm, int(start), int(stop)]
# load gene coordinates:
print "Loading gene/feature coordinates..."
coord_dict = dict()
ad = general.build_header_dict(annotationspath + option.reference)
inlines = open(annotationspath + option.reference).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip("\n").split("\t")
chrm, start, stop, feature, strand, name = initems[ad["chrm"]], initems[ad["start"]], initems[ad["end"]], initems[ad["feature"]], initems[ad["strand"]], initems[ad["name"]]
if strand == "+":
coord_dict[name] = [chrm, int(start)-option.up, int(start)+option.dn]
coord_dict[feature] = [chrm, int(start)-option.up, int(start)+option.dn]
elif strand == "-":
coord_dict[name] = [chrm, int(stop)-option.dn, int(stop)+option.up]
coord_dict[feature] = [chrm, int(stop)-option.dn, int(stop)+option.up]
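# (same strand-aware promoter windows as in the test.regions mode above)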
# prepare output file:
f_output = open(hyperpath + "mapcells_test_" + option.name + "_" + option.cells + "_fdr.txt", "w")
# define hypergeometric results file:
hyperfile = hyperpath + "mapcells_test_" + option.name + "_" + option.cells + "_comparison.txt"
# build header dict:
hd = general.build_header_dict(hyperfile)
# load positive hypergeometric results:
print "Loading hypergeometric results (hits)..."
inlines = open(hyperfile).readlines()
inlines.pop(0)
hyper_matrix, hyperTargets = dict(), list()
for inline in inlines:
initems = inline.strip("\n").split("\t")
query, target, pvalue = initems[hd["query"]], initems[hd["target"]], initems[hd["pvalue.adj"]]
if not query in hyper_matrix:
hyper_matrix[query] = dict()
hyper_matrix[query][target] = float(pvalue)
if not target in hyperTargets:
hyperTargets.append(target)
# select the best matching neuron for each query:
match_matrix = dict()
print "Scanning hypergeometric results per query..."
i, j, k = 0, 0, 0
positiveRate, negativeRate, matchTargets = list(), list(), list()
for query in hyper_matrix:
if query in coord_dict:
i += 1
qchrm, qstart, qstop = coord_dict[query]
for target in neuron_matrix:
j += 1
hits = 0
for region in neuron_matrix[target]:
rchrm, rstart, rstop = neuron_matrix[target][region]
if qchrm == rchrm:
if qstart <= rstart and qstop >= rstop:
hits += 1
if hits != 0:
if not query in match_matrix:
match_matrix[query] = dict()
match_matrix[query][target] = float(hits)/len(neuron_matrix[target])
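# (the score stored above is the fraction of the neuron's regions that fall
# entirely within the query's promoter window)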
if not target in matchTargets:
matchTargets.append(target)
#print hyper_matrix.keys()
#print match_matrix.keys()
#print query
#print target
#print match_matrix[query][target]
#pdb.set_trace()
# Test A (disabled): check whether each query's best promoter-matching neuron is also expression-enriched.
"""
print
print "Testing positive and negative hits..."
positiveRate, negativeRate, unknownRate = list(), list(), list()
for query in match_matrix:
hits = general.valuesort(match_matrix[query])
hits.reverse()
target = hits[0]
if query in hyper_matrix and target in hyper_matrix[query]:
positiveRate.append(query + ":" + target)
print "+", query, target, match_matrix[query][target]
else:
print "-", query, target, match_matrix[query][target]
negativeRate.append(query + ":" + target)
if query in hyper_matrix:
unknownRate.append(query + ":" + target)
print "True Positive Rate:", len(positiveRate), 100*float(len(positiveRate))/(len(positiveRate)+len(negativeRate))
print "False Positive Rate:", len(negativeRate), 100*float(len(negativeRate))/(len(positiveRate)+len(negativeRate))
print "False Unknown Rate:", len(unknownRate), 100*float(len(unknownRate))/(len(positiveRate)+len(unknownRate))
print
"""
# Test B
"""
print
print "Testing positive and negative hits..."
positiveRate, negativeRate, unknownRate = list(), list(), list()
for query in hyper_matrix:
hits = 0
for target in general.valuesort(hyper_matrix[query]):
if query in match_matrix and target in match_matrix[query]:
hits += 1
if hits != 0:
positiveRate.append(query + ":" + target)
else:
negativeRate.append(query + ":" + target)
if query in match_matrix:
unknownRate.append(query + ":" + target)
print "True Positive Rate:", len(positiveRate), 100*float(len(positiveRate))/(len(positiveRate)+len(negativeRate))
print "False Positive Rate:", len(negativeRate), 100*float(len(negativeRate))/(len(positiveRate)+len(negativeRate))
print "False Unknown Rate:", len(unknownRate), 100*float(len(unknownRate))/(len(positiveRate)+len(unknownRate))
print
"""
# Test C
print
print "Testing positive and negative hits..."
positiveRate, negativeRate, unknownRate = list(), list(), list()
for query in match_matrix:
hits = 0
for target in general.valuesort(match_matrix[query]):
if query in hyper_matrix and target in hyper_matrix[query]:
hits += 1
if hits != 0:
positiveRate.append(query + ":" + target)
else:
negativeRate.append(query + ":" + target)
if query in hyper_matrix:
unknownRate.append(query + ":" + target)
print "Genes enriched in SOM neurons:", len(hyper_matrix)
print "Genes with promoter in SOM neurons:", len(match_matrix)
print "Neurons enriched in gene expression:", len(hyperTargets)
print "Neurons with gene promoter matches:", len(matchTargets)
print "True Positive Rate:", len(positiveRate), 100*float(len(positiveRate))/(len(positiveRate)+len(negativeRate))
print "False Positive Rate:", len(negativeRate), 100*float(len(negativeRate))/(len(positiveRate)+len(negativeRate))
print "False Unknown Rate (not enriched in any neuron):", len(unknownRate), 100*float(len(unknownRate))/(len(positiveRate)+len(unknownRate))
print
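# Note: in Test C a "positive" is a query whose promoter-matched neurons
# include at least one neuron where the query is also expression-enriched;
# the rates above are percentages over positives plus negatives (or unknowns).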
# scan each gene for cellular overlap in neurons where the promoter is found:
"""
print "Scanning positive and negative hits..."
i, j, k = 0, 0, 0
positiveRate, negativeRate = list(), list()
for query in hyper_matrix:
if query in coord_dict:
i += 1
qchrm, qstart, qstop = coord_dict[query]
for target in neuron_matrix:
j += 1
hits = 0
for region in neuron_matrix[target]:
rchrm, rstart, rstop = neuron_matrix[target][region]
if qchrm == rchrm:
if qstart <= rstart and qstop >= rstop:
hits += 1
if hits != 0:
if target in hyper_matrix[query]:
positiveRate.append(query + ":" + target)
else:
negativeRate.append(query + ":" + target)
"""
#print >>f_output, inlines.pop(0).strip("\n")
# close output file
f_output.close()
#print
#print "Queries found in neurons:", len(queriesFound)
#print "Neurons found in queries:", len(targetsFound)
#print "Searches performed:", i
#print "Searches performed (x Regions):", j
#print "Searches with hits (x Regions):", k
#print "Queries with coordinates and found:", ", ".join(queriesFound)
#print "Queries missed (no coordinates):", len(queriesMissed)
#print
# annotate tissue composition in neurons:
elif option.mode == "test.composition":
# update path to neurons:
neuronspath = neuronspath + option.peaks + "/"
# define input/output paths:
bedpath = neuronspath + option.technique + "/results/" + option.neurons + "/regions/bed/"
codespath = neuronspath + option.technique + "/results/" + option.neurons + "/codes/"
summarypath = neuronspath + option.technique + "/results/" + option.neurons + "/summary/"
querypath = cellsetpath + option.query + "/"
targetpath = cellsetpath + option.target + "/"
compositionpath = comparepath + option.name + "/composition/"
hyperpath = comparepath + option.name + "/hyper/"
logpath = comparepath + option.name + "/log/"
general.pathGenerator(compositionpath)
general.pathGenerator(hyperpath)
general.pathGenerator(logpath)
# load codes:
inlines = open(codespath + option.neurons + ".codes").readlines()
codes = inlines.pop(0).strip().split("\t")
codeDict = dict()
for inline in inlines:
initems = inline.strip().split("\t")
neuron = initems.pop(0)
codeDict["neuron" + neuron] = initems
# load cellular expression data:
print
print "Loading cellular annotation..."
annotationDict = general.build2(expressionpath + option.expression, id_column="cell", value_columns=["specific.tissue", "general.tissue", "class.tissue", "match.tissue"], skip=True, verbose=False)
# load tissue annotation matrixes:
print "Loading tissue annotations..."
#specificCounts = general.build2(expressionpath + option.infile, i="specific.tissue" , mode="values", skip=True, counter=True)
#generalCounts = general.build2(expressionpath + option.infile, i="general.tissue", mode="values", skip=True, counter=True)
#classCounts = general.build2(expressionpath + option.infile, i="class.tissue", mode="values", skip=True, counter=True)
totalCells = general.build2(expressionpath + option.expression, i="cell", x="specific.tissue", mode="values", skip=True)
totalCells = sorted(totalCells.keys())
# gather tissue labels
specificTissues, generalTissues, classTissues, matchTissues = list(), list(), list(), list()
for cell in annotationDict:
if not annotationDict[cell]["specific.tissue"] in specificTissues:
specificTissues.append(annotationDict[cell]["specific.tissue"])
if not annotationDict[cell]["general.tissue"] in generalTissues:
generalTissues.append(annotationDict[cell]["general.tissue"])
if not annotationDict[cell]["class.tissue"] in classTissues:
classTissues.append(annotationDict[cell]["class.tissue"])
if not annotationDict[cell]["match.tissue"] in matchTissues:
matchTissues.append(annotationDict[cell]["match.tissue"])
# load cells identified in each neuron:
print
print "Loading cell identities per neuron..."
neuronDict = general.build2(summarypath + "mapneurons_summary.txt", id_column="neuron", value_columns=["class.ids"])
# load cells counted in each neuron:
print "Loading cell counts per neuron..."
countMatrix, binaryMatrix = dict(), dict()
for neuron in os.listdir(bedpath):
inlines = open(bedpath + neuron).readlines()
neuron = neuron.replace(".bed", "")
countMatrix[neuron] = dict()
binaryMatrix[neuron] = dict()
for inline in inlines:
chrm, start, end, feature, score, strand, cell, regions = inline.strip().split("\t")
if not cell in countMatrix[neuron]:
countMatrix[neuron][cell] = 0
binaryMatrix[neuron][cell] = 1
countMatrix[neuron][cell] += 1
# generate tissue class scores:
cellList = list()
cellMatrix, specificMatrix, generalMatrix, classMatrix, matchMatrix = dict(), dict(), dict(), dict(), dict()
for neuron in neuronDict:
if not neuron in cellMatrix:
cellMatrix[neuron] = dict()
specificMatrix[neuron] = dict()
generalMatrix[neuron] = dict()
classMatrix[neuron] = dict()
matchMatrix[neuron] = dict()
for cell in neuronDict[neuron]["class.ids"].split(","):
specificTissue, generalTissue, classTissue, matchTissue = annotationDict[cell]["specific.tissue"], annotationDict[cell]["general.tissue"], annotationDict[cell]["class.tissue"], annotationDict[cell]["match.tissue"]
if not cell in cellMatrix[neuron]:
cellMatrix[neuron][cell] = 0
if not specificTissue in specificMatrix[neuron]:
specificMatrix[neuron][specificTissue] = 0
if not generalTissue in generalMatrix[neuron]:
generalMatrix[neuron][generalTissue] = 0
if not classTissue in classMatrix[neuron]:
classMatrix[neuron][classTissue] = 0
if not matchTissue in matchMatrix[neuron]:
matchMatrix[neuron][matchTissue] = 0
cellList.append(cell)
cellMatrix[neuron][cell] += binaryMatrix[neuron][cell]
specificMatrix[neuron][specificTissue] += binaryMatrix[neuron][cell]
generalMatrix[neuron][generalTissue] += binaryMatrix[neuron][cell]
classMatrix[neuron][classTissue] += binaryMatrix[neuron][cell]
matchMatrix[neuron][matchTissue] += binaryMatrix[neuron][cell]
cellList = sorted(list(set(cellList)))
# Note: The above dictionaries record how many of the cell (ids)
# in a given neuron correspond to a given tissue.
# prepare class tallies for normalization:
specificTallies, generalTallies, classTallies, matchTallies = dict(), dict(), dict(), dict()
for cell in cellList:
if not annotationDict[cell]["specific.tissue"] in specificTallies:
specificTallies[annotationDict[cell]["specific.tissue"]] = 0
if not annotationDict[cell]["general.tissue"] in generalTallies:
generalTallies[annotationDict[cell]["general.tissue"]] = 0
if not annotationDict[cell]["class.tissue"] in classTallies:
classTallies[annotationDict[cell]["class.tissue"]] = 0
if not annotationDict[cell]["match.tissue"] in matchTallies:
matchTallies[annotationDict[cell]["match.tissue"]] = 0
specificTallies[annotationDict[cell]["specific.tissue"]] += 1
generalTallies[annotationDict[cell]["general.tissue"]] += 1
classTallies[annotationDict[cell]["class.tissue"]] += 1
matchTallies[annotationDict[cell]["match.tissue"]] += 1
# Note: The above tallies record the number of cells (observed
# in neurons) that correspond to each tissue.
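# Illustrative example: if 12 of the cells observed across all neurons are
# annotated "intestine", then specificTallies["intestine"] == 12; these
# per-tissue totals (and their sum) parameterize the enrichment tests below.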
# prepare output files:
f_output = open(compositionpath + "mapcells_composition_codes.txt", "w")
c_output = open(compositionpath + "mapcells_composition_cellular.txt", "w")
s_output = open(compositionpath + "mapcells_composition_specific.txt", "w")
g_output = open(compositionpath + "mapcells_composition_general.txt", "w")
l_output = open(compositionpath + "mapcells_composition_class.txt", "w")
m_output = open(compositionpath + "mapcells_composition_match.txt", "w")
# print out headers:
print >>f_output, "\t".join(["neuron", "id", "fraction.ids"])
print >>c_output, "\t".join(["neuron", "id", "id.found", "id.cells", "fraction.ids", "fraction.sum", "fraction.max", "fraction.nrm", "pvalue", "pvalue.adj", "score"])
print >>s_output, "\t".join(["neuron", "id", "id.found", "id.cells", "fraction.ids", "fraction.sum", "fraction.max", "fraction.nrm", "pvalue", "pvalue.adj", "score"])
print >>g_output, "\t".join(["neuron", "id", "id.found", "id.cells", "fraction.ids", "fraction.sum", "fraction.max", "fraction.nrm", "pvalue", "pvalue.adj", "score"])
print >>l_output, "\t".join(["neuron", "id", "id.found", "id.cells", "fraction.ids", "fraction.sum", "fraction.max", "fraction.nrm", "pvalue", "pvalue.adj", "score"])
print >>m_output, "\t".join(["neuron", "id", "id.found", "id.cells", "fraction.ids", "fraction.sum", "fraction.max", "fraction.nrm", "pvalue", "pvalue.adj", "score"])
# Note: We will now output the following information:
# id.found : is ID found in neuron?
# id.cells : number of cells (diversity) that match ID.
# fraction.ids: fraction of ID diversity in neuron.
# fraction.sum: fraction of cellular diversity in neuron that matches ID.
# fraction.rat: fraction of cellular diversity in neuron that matches ID, normalized by the representation of the ID.
# fraction.max: fraction of cellular diversity in neuron as normalized by the ID with the highest cellular diversity in neuron.
# fraction.nrm: fraction of cellular diversity in neuron as normalized by the total number of cells with said ID.
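# Illustrative example: in a neuron containing 20 cells drawn from 4 tissues,
# a tissue matching 5 of those cells gets fraction.sum = 5/20 = 0.25 and,
# if it is also the best-represented tissue there, fraction.max = 5/5 = 1.0.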
# determine missed tissues:
print
specificMissed, generalMissed, classMissed, matchMissed = set(specificTissues).difference(set(specificTallies.keys())), set(generalTissues).difference(set(generalTallies.keys())), set(classTissues).difference(set(classTallies.keys())), set(matchTissues).difference(set(matchTallies.keys()))
print "Specific tissues not found:", str(len(specificMissed)) + " (" + str(len(specificTissues)) + ") ; " + ",".join(sorted(specificMissed))
print "General tissues not found:", str(len(generalMissed)) + " (" + str(len(generalTissues)) + ") ; " + ",".join(sorted(generalMissed))
print "Class tissues not found:", str(len(classMissed)) + " (" + str(len(classTissues)) + ") ; " + ",".join(sorted(classMissed))
print "Match tissues not found:", str(len(matchMissed)) + " (" + str(len(matchTissues)) + ") ; " + ",".join(sorted(matchMissed))
print
# export the fractions:
print "Exporting representation per neuron..."
for neuron in sorted(neuronDict.keys()):
if neuron in codeDict:
# export factor signals:
index = 0
for code in codes:
print >>f_output, "\t".join(map(str, [neuron, code, codeDict[neuron][index]]))
index += 1
# export cell counts:
for cell in cellList:
adjust = len(neuronDict.keys())*len(cellList)
types = len(cellMatrix[neuron].keys())
total = sum(cellMatrix[neuron].values())
maxxx = max(cellMatrix[neuron].values())
if cell in cellMatrix[neuron]:
count = float(cellMatrix[neuron][cell])
index = 1
else:
count = 0
index = 0
print >>c_output, "\t".join(map(str, [neuron, cell, index, count, float(index)/types, float(count)/total, float(count)/maxxx, 1, 1, 1, 0]))
# export specific tissue enrichment:
for specificTissue in sorted(specificTallies.keys()):
types = len(specificMatrix[neuron].keys())
total = sum(specificMatrix[neuron].values())
maxxx = max(specificMatrix[neuron].values())
tally = specificTallies[specificTissue]
if specificTissue in specificMatrix[neuron]:
count = float(specificMatrix[neuron][specificTissue])
index = 1
else:
count = 0
index = 0
adjust = len(neuronDict.keys())*len(specificTallies.keys())
universe = sum(specificTallies.values())
pvalue = hyper.fishers(count, universe, total, tally, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
score = hyper.directional(count, universe, total, tally, adjust=adjust)
print >>s_output, "\t".join(map(str, [neuron, specificTissue, index, count, float(index)/types, float(count)/total, float(count)/maxxx, float(count)/tally, pvalue, adjPvalue, score]))
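# Note (illustrative): e.g. 5 intestinal cells among this neuron's 20, versus
# 80 intestinal cells in a 600-cell universe, tested one-sided (right tail);
# pvalue.adj applies a Bonferroni-style factor of neurons x tissue classes.
# The same scheme repeats for the general, class and match tissues below.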
# export general tissue enrichment:
for generalTissue in sorted(generalTallies.keys()):
types = len(generalMatrix[neuron].keys())
total = sum(generalMatrix[neuron].values())
maxxx = max(generalMatrix[neuron].values())
tally = generalTallies[generalTissue]
if generalTissue in generalMatrix[neuron]:
count = float(generalMatrix[neuron][generalTissue])
index = 1
else:
count = 0
index = 0
adjust = len(neuronDict.keys())*len(generalTallies.keys())
universe = sum(generalTallies.values())
pvalue = hyper.fishers(count, universe, total, tally, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
score = hyper.directional(count, universe, total, tally, adjust=adjust)
print >>g_output, "\t".join(map(str, [neuron, generalTissue, index, count, float(index)/types, float(count)/total, float(count)/maxxx, float(count)/tally, pvalue, adjPvalue, score]))
# export class tissue enrichment:
for classTissue in sorted(classTallies.keys()):
types = len(classMatrix[neuron].keys())
total = sum(classMatrix[neuron].values())
maxxx = max(classMatrix[neuron].values())
tally = classTallies[classTissue]
if classTissue in classMatrix[neuron]:
count = float(classMatrix[neuron][classTissue])
index = 1
else:
count = 0
index = 0
adjust = len(neuronDict.keys())*len(classTallies.keys())
universe = sum(classTallies.values())
pvalue = hyper.fishers(count, universe, total, tally, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
score = hyper.directional(count, universe, total, tally, adjust=adjust)
print >>l_output, "\t".join(map(str, [neuron, classTissue, index, count, float(index)/types, float(count)/total, float(count)/maxxx, float(count)/tally, pvalue, adjPvalue, score]))
# export match tissue enrichment:
for matchTissue in sorted(matchTallies.keys()):
types = len(matchMatrix[neuron].keys())
total = sum(matchMatrix[neuron].values())
maxxx = max(matchMatrix[neuron].values())
tally = matchTallies[matchTissue]
if matchTissue in matchMatrix[neuron]:
count = float(matchMatrix[neuron][matchTissue])
index = 1
else:
count = 0
index = 0
adjust = len(neuronDict.keys())*len(matchTallies.keys())
universe = sum(matchTallies.values())
pvalue = hyper.fishers(count, universe, total, tally, adjust=1, method="right")
adjPvalue = hyper.limit(pvalue*adjust)
score = hyper.directional(count, universe, total, tally, adjust=adjust)
print >>m_output, "\t".join(map(str, [neuron, matchTissue, index, count, float(index)/types, float(count)/total, float(count)/maxxx, float(count)/tally, pvalue, adjPvalue, score]))
# close outputs:
f_output.close()
c_output.close()
s_output.close()
g_output.close()
l_output.close()
m_output.close()
print
print "Combining cell and factor (mix) information.."
# load input factor information:
factorDict = general.build2(compositionpath + "mapcells_composition_codes.txt", i="neuron", j="id", x="fraction.ids", mode="matrix")
# define input cell/tissue files:
infiles = ["mapcells_composition_cellular.txt", "mapcells_composition_specific.txt", "mapcells_composition_general.txt", "mapcells_composition_class.txt", "mapcells_composition_match.txt"]
for infile in infiles:
print "Processing:", infile
# initiate neuron data extraction:
f_output = open(compositionpath + infile.replace(".txt", ".mix"), "w")
inheader = open(compositionpath + infile).readline().strip().split("\t")
inlines = open(compositionpath + infile).readlines()
print >>f_output, inlines.pop(0).strip("\n")
# append factor information to neuron data:
processed = list()
for inline in inlines:
neuron, label = inline.strip().split("\t")[:2]
if not neuron in processed:
processed.append(neuron)
for factor in factorDict[neuron]:
output = list()
for column in inheader:
if column == "neuron":
output.append(neuron)
elif column == "id":
output.append(factor)
elif column in ["pvalue", "pvalue.adj"]:
output.append("1")
else:
output.append(factorDict[neuron][factor])
print >>f_output, "\t".join(output)
print >>f_output, inline.strip()
# close outputs:
f_output.close()
print
# examine co-association correspondence between genes:
elif option.mode == "test.similarity":
# update path to neurons:
neuronspath = neuronspath + option.peaks + "/"
# define input/output paths:
bedpath = neuronspath + option.technique + "/results/" + option.neurons + "/regions/bed/"
querypath = cellsetpath + option.query + "/"
targetpath = cellsetpath + option.target + "/"
hyperpath = comparepath + option.name + "/hyper/"
logpath = comparepath + option.name + "/log/"
general.pathGenerator(hyperpath)
general.pathGenerator(logpath)
# load query cells:
print
print "Loading query cells..."
query_matrix = dict()
for query in os.listdir(querypath):
queryCells = general.clean(open(querypath + query).read().split("\n"), "")
query_matrix[query] = queryCells
#print query, queryCells
print "Generating merged region file..."
queryfile = hyperpath + "query.bed"
regionsfile = hyperpath + "regions.bed"
overlapfile = hyperpath + "overlap.bed"
joint = " " + bedpath
command = "cat " + bedpath + joint.join(os.listdir(bedpath)) + " > " + regionsfile
os.system(command)
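# Note (illustrative): with neuron files a.bed and b.bed in bedpath, the
# command expands to "cat <bedpath>a.bed <bedpath>b.bed > regions.bed".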
# load gene coordinates:
print "Loading gene/feature coordinates..."
coord_dict = dict()
ad = general.build_header_dict(annotationspath + option.reference)
inlines = open(annotationspath + option.reference).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip("\n").split("\t")
chrm, start, stop, feature, strand, name = initems[ad["#chrm"]], initems[ad["start"]], initems[ad["end"]], initems[ad["feature"]], initems[ad["strand"]], initems[ad["name"]]
if strand == "+":
start, end = int(start)-option.up, int(start)+option.dn
elif strand == "-":
start, end = int(stop)-option.dn, int(stop)+option.up
for query in query_matrix:
if query == feature or query == name:
f_output = open(queryfile, "w")
print >>f_output, "\t".join(map(str, [chrm, start, end, feature, 0, strand]))
f_output.close()
overlaps = list()
command = "intersectBed -u -a " + regionsfile + " -b " + queryfile + " > " + overlapfile
os.system(command)
for inline in open(overlapfile).readlines():
overlaps.append(inline.strip())
print query, len(overlaps)
if len(overlaps) > 0:
pdb.set_trace()
break
# tree building mode:
elif option.mode == "tree.build":
# build cell-expression matrix:
print
print "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# store cell-parent relationships:
print "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, mechanism="simple")
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
# trim tree:
cell_tree, parent_tree = dict(), dict()
for parent in parent_dict:
for cell in parent_dict[parent]:
ascendants = ascendantsCollector(cell, parent_dict, cell_dict, ascendants=list())
process = False
if option.lineages == "complete":
process = True
elif parent in trackedCells and cell in trackedCells:
process = True
elif option.ascendants != "OFF" and len(ascendants) < int(option.ascendants):
process = True
if process:
if not parent in parent_tree:
parent_tree[parent] = list()
parent_tree[parent].append(cell)
cell_tree[cell] = parent
tree = treeBuilder(parent_tree, cell_tree)
#print sorted(tree.keys())
#print tree["P0"]
#pdb.set_trace()
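# Note (assumption): treeBuilder returns nested node dicts keyed by cell
# name, so dumping tree["P0"] serializes the whole lineage rooted at P0.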
f_output = open(cellspath + "mapcells_tree_" + option.name + ".json", "w")
json.dump(tree["P0"], f_output)
f_output.close()
# tree coloring mode:
elif option.mode == "tree.color":
# build cell-expression matrix:
print
print "Loading cellular expression..."
quantitation_matrix, expression_matrix, tracking_matrix, trackedCells = expressionBuilder(expressionfile=option.expression, path=expressionpath, cutoff=option.fraction, minimum=option.minimum, metric="fraction.expression")
# store cell-parent relationships:
print "Loading cell-parent relationships..."
cell_dict, parent_dict, pedigreeCells = relationshipBuilder(pedigreefile=option.pedigree, path=extraspath, mechanism="simple")
print "Pedigree cells:", len(pedigreeCells)
print "Tracked cells:", len(trackedCells)
# trim tree:
cell_tree, parent_tree = dict(), dict()
for parent in parent_dict:
for cell in parent_dict[parent]:
ascendants = ascendantsCollector(cell, parent_dict, cell_dict, ascendants=list())
process = False
if option.lineages == "complete":
process = True
elif parent in trackedCells and cell in trackedCells:
process = True
elif option.ascendants != "OFF" and len(ascendants) < int(option.ascendants):
process = True
if process:
if not parent in parent_tree:
parent_tree[parent] = list()
parent_tree[parent].append(cell)
cell_tree[cell] = parent
# build header dict:
hd = general.build_header_dict(option.infile)
# load input lines:
pvalue_matrix, cells_matrix = dict(), dict()
inlines = open(option.infile).readlines()
inlines.pop(0)
for inline in inlines:
initems = inline.strip("\n").split("\t")
query, target, pvalue, cells = initems[hd["query"]], initems[hd["target"]], initems[hd["pvalue"]], initems[hd["cells"]]
if not query in pvalue_matrix:
pvalue_matrix[query] = dict()
cells_matrix[query] = dict()
pvalue_matrix[query][target] = float(pvalue)
cells_matrix[query][target] = cells.split(",")
# scan inputs, selecting the targets of highest enrichment and generating color tree for each:
k = 0
print
print "Scanning queries..."
for query in cells_matrix:
target = general.valuesort(pvalue_matrix[query])[0]
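# (assumption: general.valuesort returns keys sorted by ascending value,
# so index 0 is the smallest p-value, i.e. the strongest target)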
cells = cells_matrix[query][target]
print query, target, pvalue_matrix[query][target], len(cells)
tree = treeBuilder(parent_tree, cell_tree, highlights=cells)
#print sorted(tree.keys())
#print tree["P0"]
#pdb.set_trace()
f_output = open(cellspath + "mapcells_tree_" + option.name + "_" + query + "-" + target + ".json", "w")
json.dump(tree["P0"], f_output)
f_output.close()
k += 1
print
print "Queries processed:", k
print
if __name__ == "__main__":
main()
print "Completed:", time.asctime(time.localtime())
#python mapCells.py --path ~/meTRN --mode import --infile murray_2012_supplemental_dataset_1_per_gene.txt --name murray # Retired!
#python mapCells.py --path ~/meTRN --mode import --infile waterston_avgExpression.csv --name waterston --measure max.expression
#python mapCells.py --path ~/meTRN --mode import --infile waterston_avgExpression.csv --name waterston --measure avg.expression
#python mapCells.py --path ~/meTRN --mode check.status --peaks optimal_standard_factor_sx_rawraw --name waterston --measure avg.expression
#python mapCells.py --path ~/meTRN --mode check.status --peaks optimal_standard_factor_ex_rawraw --name waterston --measure avg.expression
#python mapCells.py --path ~/meTRN/ --mode build.lineages --pedigree waterston_cell_pedigree.csv --expression mapcells_avgExp_waterston_expression_tracked --name waterston.tracked --method builder --lineages tracked --descendants OFF --ascendants OFF --limit 10000
#python mapCells.py --path ~/meTRN/ --mode build.lineages --pedigree waterston_cell_pedigree.csv --expression mapcells_avgExp_waterston_expression_tracked --name waterston.tracked --method builder --lineages complete --descendants OFF --ascendants OFF --limit 10000
#python mapCells.py --path ~/meTRN/ --mode test.lineages --pedigree waterston_cell_pedigree.csv --expression mapcells_avgExp_waterston_expression_tracked --name waterston.tracked --method builder --lineages tracked --descendants OFF --ascendants OFF --limit 10000
#python mapCells.py --path ~/meTRN/ --mode test.lineages --pedigree waterston_cell_pedigree.csv --expression mapcells_avgExp_waterston_expression_assayed --name waterston.assayed --method builder --lineages tracked --descendants OFF --ascendants OFF --limit 10000
#python mapCells.py --path ~/meTRN --organism ce --mode robust --infile waterston_avgExpression.csv | 42.815027 | 384 | 0.692007 | 19,863 | 171,517 | 5.904848 | 0.066808 | 0.00752 | 0.007503 | 0.008415 | 0.522492 | 0.45147 | 0.407339 | 0.37535 | 0.332413 | 0.30883 | 0 | 0.004334 | 0.168654 | 171,517 | 4,006 | 385 | 42.815027 | 0.81822 | 0.113015 | 0 | 0.377008 | 0 | 0.001428 | 0.149897 | 0.00517 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.009282 | null | null | 0.157087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b86fc82da8dc94ff37ad24a384c231a1a48f881c | 7,780 | py | Python | IR_Extraction.py | Kazuhito00/yolo2_onnx | 95c5e2063071d610ec8e98963f3639e0b25efb59 | [
"MIT"
] | 15 | 2018-07-02T19:11:09.000Z | 2022-03-31T07:12:53.000Z | IR_Extraction.py | Kazuhito00/yolo2_onnx | 95c5e2063071d610ec8e98963f3639e0b25efb59 | [
"MIT"
] | null | null | null | IR_Extraction.py | Kazuhito00/yolo2_onnx | 95c5e2063071d610ec8e98963f3639e0b25efb59 | [
"MIT"
] | 9 | 2018-05-08T01:58:53.000Z | 2022-01-28T06:36:02.000Z | from Onnx import make_dir, OnnxImportExport
import subprocess
import pickle
import os
import numpy as np
import time
def generate_svg(modelName, marked_nodes=[]):
"""
generate an SVG figure from an existing ONNX file
"""
if marked_nodes == []:
addfilenamestr = ""
add_command_str = ""
else:
addfilenamestr = "_marked"
marked_str = '_'.join([str(e) for e in marked_nodes])
add_command_str = " --marked 1 --marked_list {}".format(marked_str)
onnxfilepath = "onnx/{}.onnx".format(modelName)
dotfilepath = "dot/{}{}.dot".format(modelName,addfilenamestr)
svgfilepath = "svg/{}{}.svg".format(modelName,addfilenamestr)
# check if onnx file exist
if not os.path.isfile(os.getcwd()+"/"+onnxfilepath):
print('generate_svg Error! ONNX file does not exist!')
return
else:
make_dir("dot")
make_dir("svg")
subprocess.call("python net_drawer.py --input {} --output {} --embed_docstring {}".format(onnxfilepath,dotfilepath,add_command_str), shell=True) # onnx -> dot
subprocess.call("dot -Tsvg {} -o {}".format(dotfilepath,svgfilepath), shell=True)# dot -> svg
print('generate_svg ..end')
return svgfilepath
def get_init_shape_dict(rep):
"""
Extract Shape of Initial Input Object
e.g.
if
%2[FLOAT, 64x3x3x3]
%3[FLOAT, 64]
then
return {u'2':(64,3,3,3),u'3':(64,)}
"""
d = {}
if hasattr(rep, 'input_dict'):
for key in rep.input_dict:
tensor = rep.input_dict[key]
shape = np.array(tensor.shape, dtype=int)
d.update({key:shape})
return d
elif hasattr(rep, 'predict_net'):
for k in rep.predict_net.tensor_dict.keys():
tensor = rep.predict_net.tensor_dict[k]
shape = np.array(tensor.shape.as_list(),dtype=float).astype(int)
d.update({k: shape})
return d
else:
print ("rep Error! check your onnx version, it might not support IR_Extraction operation!")
return d
def get_output_shape_of_node(node, shape_dict, backend, device = "CPU"):# or "CUDA:0"
"""
generate output_shape of a NODE
"""
out_idx = node.output[0]
input_list = node.input # e.g. ['1', '2']
inps = []
for inp_idx in input_list:
inp_shape = shape_dict[inp_idx]
rand_inp = np.random.random(size=inp_shape).astype('float16')
inps.append(rand_inp)
try:
out = backend.run_node(node=node, inputs=inps, device=device)
out_shape = out[0].shape
except:
out_shape = shape_dict[input_list[0]]
print("Op: [{}] run_node error! return inp_shape as out_shape".format(node.op_type))
return out_shape, out_idx
def get_overall_shape_dict(model, init_shape_dict, backend):
"""
generate output_shape of a MODEL GRAPH
"""
shape_dict = init_shape_dict.copy()
for i, node in enumerate(model.graph.node):
st=time.time()
out_shape, out_idx = get_output_shape_of_node(node, shape_dict, backend)
shape_dict.update({out_idx:out_shape})
print("out_shape: {} for Obj[{}], node [{}][{}]...{:.2f} sec".format(out_shape, out_idx, i, node.op_type,time.time()-st))
return shape_dict
def get_graph_order(model):
"""
Find Edges (each link) in MODEL GRAPH
"""
Node2nextEntity = {}
Entity2nextNode = {}
for Node_idx, node in enumerate(model.graph.node):
# node input
for Entity_idx in node.input:
if not Entity_idx in Entity2nextNode.keys():
Entity2nextNode.update({Entity_idx:Node_idx})
# node output
for Entity_idx in node.output:
if not Node_idx in Node2nextEntity.keys():
Node2nextEntity.update({Node_idx:Entity_idx})
return Node2nextEntity, Entity2nextNode
def get_kernel_shape_dict(model, overall_shape_dict):
"""
Get Input/Output/Kernel Shape for Conv in MODEL GRAPH
"""
conv_d = {}
for i, node in enumerate(model.graph.node):
if node.op_type == 'Conv':
for attr in node.attribute:
if attr.name == "kernel_shape":
kernel_shape = np.array(attr.ints, dtype=int)
break
inp_idx = node.input[0]
out_idx = node.output[0]
inp_shape = overall_shape_dict[inp_idx]
out_shape = overall_shape_dict[out_idx]
conv_d.update({i:(inp_idx, out_idx, inp_shape, out_shape, kernel_shape)})
print("for node [{}][{}]:\ninp_shape: {} from obj[{}], \nout_shape: {} from obj[{}], \nkernel_shape: {} \n"
.format(i, node.op_type, inp_shape, inp_idx, out_shape, out_idx, kernel_shape ))
return conv_d
def calculate_num_param_n_num_flops(conv_d):
"""
calculate num_param and num_flops from conv_d
"""
n_param = 0
n_flops = 0
for k in conv_d:
#i:(inp_idx, out_idx, inp_shape, out_shape, kernel_shape)
inp_shape, out_shape, kernel_shape = conv_d[k][2],conv_d[k][3],conv_d[k][4]
h,w,c,n,H,W = kernel_shape[0], kernel_shape[1], inp_shape[1], out_shape[1], out_shape[2], out_shape[3]
n_param += n*(h*w*c+1)
n_flops += H*W*n*(h*w*c+1)
return n_param, n_flops
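# Worked example (illustrative): a Conv with kernel_shape (3, 3), a 3-channel
# input and a 1x64x224x224 output contributes
# n_param = 64*(3*3*3+1) = 1792 and
# n_flops = 224*224*64*(3*3*3+1) = 89,915,392 multiply-adds (bias included).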
def find_sequencial_nodes(model, Node2nextEntity, Entity2nextNode, search_target=['Conv', 'Add', 'Relu', 'MaxPool'], if_print = False):
"""
Search for where a matching subgroup of sequential ops occurs
"""
found_nodes = []
for i, node in enumerate(model.graph.node):
if if_print: print("\nnode[{}] ...".format(i))
n_idx = i #init
is_fit = True
for tar in search_target:
try:
assert model.graph.node[n_idx].op_type == tar #check this node
if if_print: print("node[{}] fit op_type [{}]".format(n_idx, tar))
e_idx = Node2nextEntity[n_idx] #find next Entity
n_idx = Entity2nextNode[e_idx] #find next Node
#if if_print: print(e_idx,n_idx)
except:
is_fit = False
if if_print: print("node[{}] doesn't fit op_type [{}]".format(n_idx, tar))
break
if is_fit:
if if_print: print("node[{}] ...fit!".format(i))
found_nodes.append(i)
else:
if if_print: print("node[{}] ...NOT fit!".format(i))
if if_print: print("\nNode{} fit the matching pattern".format(found_nodes))
return found_nodes
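# Note (illustrative): with the default search_target, node i is reported
# only when it is a Conv whose first output feeds an Add, then a Relu, then
# a MaxPool along the first-output/first-consumer chain recorded above.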
def get_permutations(a):
"""
get all arrangements (permutations of every sub-combination) of list a
"""
import itertools
p = []
for r in range(len(a)+1):
c = list(itertools.combinations(a,r))
for cc in c:
p += list(itertools.permutations(cc))
return p
def get_list_of_sequencial_nodes(search_head = ['Conv'], followings = ['Add', 'Relu', 'MaxPool']):
"""
if
search_head = ['Conv']
followings = ['Add', 'Relu', 'MaxPool']
return
[['Conv'],
['Conv', 'Add'],
['Conv', 'Relu'],
['Conv', 'MaxPool'],
['Conv', 'Add', 'Relu'],
['Conv', 'Relu', 'Add'],
['Conv', 'Add', 'MaxPool'],
['Conv', 'MaxPool', 'Add'],
['Conv', 'Relu', 'MaxPool'],
['Conv', 'MaxPool', 'Relu'],
['Conv', 'Add', 'Relu', 'MaxPool'],
['Conv', 'Add', 'MaxPool', 'Relu'],
['Conv', 'Relu', 'Add', 'MaxPool'],
['Conv', 'Relu', 'MaxPool', 'Add'],
['Conv', 'MaxPool', 'Add', 'Relu'],
['Conv', 'MaxPool', 'Relu', 'Add']]
"""
search_targets = [ search_head+list(foll) for foll in get_permutations(followings)]
return search_targets
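# Usage sketch (illustrative; assumes an ONNX model object and a prepared
# backend rep are available in scope):
# Node2nextEntity, Entity2nextNode = get_graph_order(model)
# shapes = get_overall_shape_dict(model, get_init_shape_dict(rep), backend)
# conv_d = get_kernel_shape_dict(model, shapes)
# n_param, n_flops = calculate_num_param_n_num_flops(conv_d)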
| 35.525114 | 166 | 0.579434 | 1,018 | 7,780 | 4.223969 | 0.191552 | 0.029767 | 0.014651 | 0.022791 | 0.183256 | 0.111163 | 0.08907 | 0.061163 | 0.053488 | 0.018605 | 0 | 0.009959 | 0.277249 | 7,780 | 219 | 167 | 35.525114 | 0.754757 | 0.164396 | 0 | 0.131387 | 1 | 0.007299 | 0.117006 | 0.003366 | 0 | 0 | 0 | 0 | 0.007299 | 1 | 0.072993 | false | 0 | 0.051095 | 0 | 0.218978 | 0.094891 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b870e2ce26d78dfa9746e5e88adb9ed1463fb9fc | 944 | py | Python | communications/migrations/0002_auto_20190902_1759.py | shriekdj/django-social-network | 3654051e334996ee1b0b60f83c4f809a162ddf4a | [
"MIT"
] | 368 | 2019-10-10T18:02:09.000Z | 2022-03-31T14:31:39.000Z | communications/migrations/0002_auto_20190902_1759.py | shriekdj/django-social-network | 3654051e334996ee1b0b60f83c4f809a162ddf4a | [
"MIT"
] | 19 | 2020-05-09T19:10:29.000Z | 2022-03-04T18:22:51.000Z | communications/migrations/0002_auto_20190902_1759.py | shriekdj/django-social-network | 3654051e334996ee1b0b60f83c4f809a162ddf4a | [
"MIT"
] | 140 | 2019-10-10T18:01:59.000Z | 2022-03-14T09:37:39.000Z | # Generated by Django 2.2.4 on 2019-09-02 11:59
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('communications', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='message',
name='author',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, related_name='author_messages', to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
migrations.AddField(
model_name='message',
name='friend',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, related_name='friend_messages', to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
]
| 32.551724 | 153 | 0.665254 | 106 | 944 | 5.754717 | 0.443396 | 0.052459 | 0.068852 | 0.108197 | 0.544262 | 0.544262 | 0.419672 | 0.419672 | 0.419672 | 0.252459 | 0 | 0.028807 | 0.227754 | 944 | 28 | 154 | 33.714286 | 0.807956 | 0.047669 | 0 | 0.363636 | 1 | 0 | 0.091416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b871aaee0feb9ef1cdc6b28c76ed73a977fed9b3 | 1,126 | py | Python | examples/sht2x.py | kungpfui/python-i2cmod | 57d9cc8de372aa38526c3503ceec0d8924665c04 | [
"MIT"
] | null | null | null | examples/sht2x.py | kungpfui/python-i2cmod | 57d9cc8de372aa38526c3503ceec0d8924665c04 | [
"MIT"
] | null | null | null | examples/sht2x.py | kungpfui/python-i2cmod | 57d9cc8de372aa38526c3503ceec0d8924665c04 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Sensirion SHT2x humidity sensor.
Drives SHT20, SHT21 and SHT25 humidity and temperature sensors.
Sensirion `SHT2x Datasheets <https://www.sensirion.com/en/environmental-sensors/humidity-sensors/humidity-temperature-sensor-sht2x-digital-i2c-accurate/>`
"""
from i2cmod import SHT2X
def example():
with SHT2X() as sensor:
print("Identification: 0x{:016X}".format(sensor.serial_number))
for adc_res, reg_value in (
('12/14', 0x02),
(' 8/12', 0x03),
('10/13', 0x82),
('11/11', 0x83)):
sensor.user_register = reg_value
print("-" * 79)
print("Resolution: {}-bit (rh/T)".format(adc_res))
print("Temperature: {:.2f} °C".format(sensor.centigrade))
print("Temperature: {:.2f} °F".format(sensor.fahrenheit))
print("Relative Humidity: {:.2f} % ".format(sensor.humidity))
print("User Register: 0x{:02X}".format(sensor.user_register))
if __name__ == '__main__':
example()
| 34.121212 | 154 | 0.579041 | 126 | 1,126 | 5.071429 | 0.587302 | 0.093897 | 0.056338 | 0.059468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065139 | 0.263766 | 1,126 | 32 | 155 | 35.1875 | 0.703257 | 0.262877 | 0 | 0 | 0 | 0 | 0.258222 | 0 | 0 | 0 | 0.019488 | 0 | 0 | 1 | 0.055556 | true | 0 | 0.055556 | 0 | 0.111111 | 0.388889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b87f562e23be6f95cf850092c0a407380227775e | 975 | py | Python | setup.py | remiolsen/anglerfish | 5caabebf5864180e5552b3e40de3650fc5fcabd6 | [
"MIT"
] | null | null | null | setup.py | remiolsen/anglerfish | 5caabebf5864180e5552b3e40de3650fc5fcabd6 | [
"MIT"
] | 19 | 2019-10-07T11:14:54.000Z | 2022-03-28T12:36:47.000Z | setup.py | remiolsen/anglerfish | 5caabebf5864180e5552b3e40de3650fc5fcabd6 | [
"MIT"
] | 2 | 2019-05-28T14:15:26.000Z | 2022-03-28T09:28:44.000Z | #!/usr/bin/env python
from setuptools import setup, find_packages
import sys, os
setup(
name='anglerfish',
version='0.4.1',
description='Anglerfish, a tool to demultiplex Illumina libraries from ONT data',
author='Remi-Andre Olsen',
author_email='remi-andre.olsen@scilifelab.se',
url='https://github.com/remiolsen/anglerfish',
license='MIT',
packages = find_packages(),
install_requires=[
'python-levenshtein',
'biopython',
'numpy'
],
scripts=['./anglerfish.py'],
zip_safe=False,
classifiers=[
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Topic :: Scientific/Engineering :: Medical Science Apps."
]
)
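# Usage sketch (illustrative): install in editable mode for development with
# pip install -e .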
| 29.545455 | 85 | 0.645128 | 101 | 975 | 6.178218 | 0.772277 | 0.076923 | 0.044872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005229 | 0.215385 | 975 | 32 | 86 | 30.46875 | 0.810458 | 0.020513 | 0 | 0 | 0 | 0 | 0.560797 | 0.054507 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8833e3d9f3a2008bcf62eb119ccbf510334b106 | 796 | py | Python | 670/main.py | pauvrepetit/leetcode | 6ad093cf543addc4dfa52d72a8e3c0d05a23b771 | [
"MIT"
] | null | null | null | 670/main.py | pauvrepetit/leetcode | 6ad093cf543addc4dfa52d72a8e3c0d05a23b771 | [
"MIT"
] | null | null | null | 670/main.py | pauvrepetit/leetcode | 6ad093cf543addc4dfa52d72a8e3c0d05a23b771 | [
"MIT"
] | null | null | null | # 670. 最大交换
#
# 20200905
# huao
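# Greedy idea: scan the digits from the right, remembering the rightmost
# occurrence of the largest digit; if the leading digit already equals that
# maximum, recurse on the remaining suffix, otherwise swap it to the front.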
class Solution:
def maximumSwap(self, num: int) -> int:
return int(self.maximumSwapStr(str(num)))
def maximumSwapStr(self, num: str) -> str:
s = list(num)
if len(s) == 1:
return num
maxNum = '0'
maxLoc = 0
for i in range(len(s))[::-1]:
c = s[i]
if maxNum < c:
maxNum = c
maxLoc = i
if s[0] == maxNum:
return maxNum + self.maximumSwapStr(num[1:])
s[maxLoc] = s[0]
s[0] = str(maxNum)
ss = ""
for i in s:
ss += i
return ss
print(Solution().maximumSwap(100))
print(Solution().maximumSwap(2736))
print(Solution().maximumSwap(9973))
print(Solution().maximumSwap(98638))
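# Expected output (worked by hand from the cases above): 100, 7236, 9973, 98836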
| 22.742857 | 56 | 0.5 | 97 | 796 | 4.103093 | 0.340206 | 0.130653 | 0.241206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068627 | 0.359296 | 796 | 34 | 57 | 23.411765 | 0.711765 | 0.028894 | 0 | 0 | 0 | 0 | 0.001302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0.038462 | 0.269231 | 0.153846 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8860c8f4169552c8561caf03f121aafce628fa6 | 333 | py | Python | tests/resources/test_codegen/template.py | come2ry/atcoder-tools | d7ecf5c19427848e6c8f0aaa3c1a8af04c467f1b | [
"MIT"
] | 313 | 2016-12-04T13:25:21.000Z | 2022-03-31T09:46:15.000Z | tests/resources/test_codegen/template.py | come2ry/atcoder-tools | d7ecf5c19427848e6c8f0aaa3c1a8af04c467f1b | [
"MIT"
] | 232 | 2016-12-02T22:55:20.000Z | 2022-03-27T06:48:02.000Z | tests/resources/test_codegen/template.py | come2ry/atcoder-tools | d7ecf5c19427848e6c8f0aaa3c1a8af04c467f1b | [
"MIT"
] | 90 | 2017-09-23T15:09:48.000Z | 2022-03-17T03:13:40.000Z | #!/usr/bin/env python3
import sys
def solve(${formal_arguments}):
return
def main():
def iterate_tokens():
for line in sys.stdin:
for word in line.split():
yield word
tokens = iterate_tokens()
${input_part}
solve(${actual_arguments})
if __name__ == '__main__':
main()
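# Note (assumption): ${formal_arguments}, ${input_part} and ${actual_arguments}
# are substitution placeholders that atcoder-tools fills in from the parsed
# problem statement when generating a solution skeleton from this template.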
| 16.65 | 37 | 0.588589 | 40 | 333 | 4.575 | 0.65 | 0.142077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004219 | 0.288288 | 333 | 19 | 38 | 17.526316 | 0.767932 | 0.063063 | 0 | 0 | 0 | 0 | 0.025723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b88ad3cd16814edcf01716b7796117d85426c826 | 691 | py | Python | salamander/mktcalendar.py | cclauss/statarb | a59366f70122c355fc93a2391362a3e8818a290e | [
"Apache-2.0"
] | 51 | 2019-02-01T19:43:37.000Z | 2022-03-16T09:07:03.000Z | salamander/mktcalendar.py | cclauss/statarb | a59366f70122c355fc93a2391362a3e8818a290e | [
"Apache-2.0"
] | 2 | 2019-02-23T18:54:22.000Z | 2019-11-09T01:30:32.000Z | salamander/mktcalendar.py | cclauss/statarb | a59366f70122c355fc93a2391362a3e8818a290e | [
"Apache-2.0"
] | 35 | 2019-02-08T02:00:31.000Z | 2022-03-01T23:17:00.000Z | from pandas.tseries.holiday import AbstractHolidayCalendar, Holiday, nearest_workday, \
USMartinLutherKingJr, USPresidentsDay, GoodFriday, USMemorialDay, \
USLaborDay, USThanksgivingDay
from pandas.tseries.offsets import CustomBusinessDay
class USTradingCalendar(AbstractHolidayCalendar):
rules = [
Holiday('NewYearsDay', month=1, day=1),
USMartinLutherKingJr,
USPresidentsDay,
GoodFriday,
USMemorialDay,
Holiday('USIndependenceDay', month=7, day=4),
USLaborDay,
USThanksgivingDay,
Holiday('Christmas', month=12, day=25)
]
TDay = CustomBusinessDay(calendar=USTradingCalendar())
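if __name__ == "__main__":
    # Quick check (illustrative; assumes pandas is installed): list the
    # trading days between Christmas and New Year 2018/19 under this calendar.
    # Note that nearest_workday is imported above but never applied; a stricter
    # NYSE calendar would pass observance=nearest_workday to the fixed-date
    # holidays.
    import pandas as pd
    print(pd.date_range("2018-12-24", "2019-01-02", freq=TDay))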
| 31.409091 | 88 | 0.691751 | 53 | 691 | 9 | 0.584906 | 0.041929 | 0.071279 | 0.243187 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.224313 | 691 | 21 | 89 | 32.904762 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.055224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |