markdown | code | path | repo_name | license
|---|---|---|---|---|
Here's what one bed (interval in striplog's vocabulary) looks like:
|
s[0]
|
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
|
agile-geoscience/striplog
|
apache-2.0
|
It's easy to get the 'lithology' attribute from all the beds:
|
seq = [i.primary.lithology for i in s]
|
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
|
agile-geoscience/striplog
|
apache-2.0
|
This is what we need!
|
m = Markov_chain.from_sequence(seq, strings_are_states=True, include_self=False)
m.normalized_difference
m.plot_norm_diff()
m.plot_graph()
|
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
|
agile-geoscience/striplog
|
apache-2.0
|
The site http://freegeoip.net returns the location corresponding to your IP address. If you pass an IP as a parameter, it returns the location for that IP, as shown below:
|
print(getCountry("50.78.253.58"))
print(getCountry(""))
|
Python_Spider_Tutorial_07.ipynb
|
yttty/python3-scraper-tutorial
|
gpl-3.0
|
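The getCountry helper used above is not defined in this excerpt. A minimal offline sketch of the parsing step it presumably performs, assuming a freegeoip-style JSON response (the field names and sample values here are hypothetical; the freegeoip.net service itself has since shut down):

```python
import json

# A sample response in the freegeoip JSON shape (values are hypothetical).
sample = '{"ip": "50.78.253.58", "country_code": "US", "country_name": "United States"}'

def parse_country(response_text):
    """Extract the country code from a freegeoip-style JSON response."""
    return json.loads(response_text).get("country_code")

print(parse_country(sample))  # → US
```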
Parsing JSON with Python
|
import json
jsonString = '{"arrayOfNums":[{"number":0},{"number":1},{"number":2}],"arrayOfFruits":[{"fruit":"apple"},{"fruit":"banana"},{"fruit":"pear"}]}'
jsonObj = json.loads(jsonString)
|
Python_Spider_Tutorial_07.ipynb
|
yttty/python3-scraper-tutorial
|
gpl-3.0
|
JSON (JavaScript Object Notation) is a lightweight data-interchange format based on a subset of ECMAScript. It uses a completely language-independent text format while following conventions familiar from the C family of languages (including C, C++, C#, Java, JavaScript, Perl, and Python). These properties make JSON an ideal data-interchange language: it is easy for humans to read and write, and easy for machines to parse and generate (which generally helps network transfer speed). (From Baidu Baike.)
Aligning the jsonString above shows that it actually looks like this:
{
"arrayOfNums": [
{
"number": 0
},
{
"number": 1
},
{
"number": 2
}
],
"arrayOfFruits": [
{
"fruit": "apple"
},
{
"fruit": "banana"
},
{
"fruit": "pear"
}
]
}
Here, [] encloses arrays and {} encloses objects. Below we try a few operations on this jsonObj.
|
print(jsonObj.get("arrayOfNums"))
print(jsonObj.get("arrayOfNums")[1])
print(jsonObj.get("arrayOfNums")[1].get("number")+jsonObj.get("arrayOfNums")[2].get("number"))
print(jsonObj.get("arrayOfFruits")[2].get("fruit"))
|
Python_Spider_Tutorial_07.ipynb
|
yttty/python3-scraper-tutorial
|
gpl-3.0
|
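The aligned form shown above can be reproduced directly from the parsed object; a small sketch using json.dumps with the indent parameter:

```python
import json

jsonString = '{"arrayOfNums":[{"number":0},{"number":1},{"number":2}],"arrayOfFruits":[{"fruit":"apple"},{"fruit":"banana"},{"fruit":"pear"}]}'
jsonObj = json.loads(jsonString)

# indent=2 pretty-prints the nested arrays and objects,
# matching the aligned layout shown above
pretty = json.dumps(jsonObj, indent=2)
print(pretty)
```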
Get quantiles from the input raster data
It is necessary to load the original raster in order to calculate its quantiles.
|
def raster2array(rasterfn):
    raster = gdal.Open(rasterfn)
    band = raster.GetRasterBand(1)
    return band.ReadAsArray()
g_array = raster2array('global_cumul_impact_2013_all_layers.tif')
g_array_f = g_array.flatten()
print('The total number of non-zero values in the raw raster dataset:', g_array_f.size - (g_array_f==0).sum())
## Strictly, equality of float dtypes should be tested with a tolerance, e.g.:
## (np.isclose(g_array_f, 0.0)).sum()
## The result is the same here, so the simpler exact comparison is used.
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
The number of non-zero values differs notably from esri's figure of 414,347,791, which is roughly 300,000 less than the count calculated here. This suggests esri may be using a larger tolerance, i.e. a looser definition of what is small enough to be regarded as zero.
Now, get the quantiles. This threshold is subject to change; for the time being, arbitrary values of 1%, 3%, 5% and 10% are used.
|
## the percentile function applied to the sliced array, i.e., those with values greater than 0
quantiles = [np.percentile(g_array_f[~(g_array_f == 0)], quantile) for quantile in [1,3,5,10]]
quantiles
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
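The tolerance question raised above (what counts as zero in a float raster) can be made concrete with np.isclose; a small sketch on a toy array, independent of the actual raster:

```python
import numpy as np

# Tiny values below a tolerance are treated as zero by np.isclose,
# but not by exact comparison; this is one way a larger tolerance
# can reduce the count of "non-zero" cells.
a = np.array([0.0, 1e-12, 0.05, 1.3])

exact_nonzero = (a != 0).sum()                        # counts 1e-12 as non-zero
tol_nonzero = (~np.isclose(a, 0.0, atol=1e-8)).sum()  # treats 1e-12 as zero
print(exact_nonzero, tol_nonzero)  # → 3 2
```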
Analyse intersection result
The hypothetical biogeographical classification of the marine environment within the EEZ is described as a combination of MEOW (Marine Ecoregions of the World), its visual representation up to 200 nautical miles (hereafter called MEOW visual), and the world's pelagic provinces. The spatial data was prepared such that, moving outwards from the coastline, disjoint polygons represent: MEOW (up to 200 m depth, inner/red), MEOW visual overlapping with pelagic provinces (middle/green), and pelagic provinces that do not overlap with MEOW visual (outer/blue). This is purely a spatial aggregation based on the above data and the World Vector Shoreline EEZ. See below for an example.
Load the input_data table, which describes the intersection between the marine pressure layer and the marine ecoregion/pelagic provinces classification. The input_attr table contains information on the relationship between OBJECTID and each raster pixel value.
- OBJECTID (one) - pixel value (many)
- OBJECTID (many) - attr: Province, Ecoregion, and Realm, categories (one)
Each pixel is 934.478 meters in height and width, giving each pixel an area of about 0.873 $km^2$
|
# calculate cell-size in sqkm2
cell_size = 934.478*934.478/1000000
print(cell_size)
# the OBJECTID - ras_val table
input_data = pd.read_csv('result.csv')
# the attribute table containing information about province etc
input_attr = pd.read_csv('attr.csv')
print('\n'.join(['Threshold cut-off value: '+ str(threshold) for threshold in quantiles]))
# total count of pixels per OBJECTID
result_count = input_data.groupby('OBJECTID').count().reset_index()
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
The next step is to apply the categorisations in the input_attr table. Replace the result_10 table if another threshold is used.
|
# join base to the attribute
attr_merge = pd.merge(input_attr, result_count, on = 'OBJECTID')
# join result to the above table
attr_merge_10 = pd.merge(attr_merge, result_10, how = 'left', on ='OBJECTID', suffixes = ('_base', '_result'))
# fill ras_val_result's NaN with 0, province and realms with None. This should happen earlier
attr_merge_10['ras_val_result'].fillna(0, inplace=True)
attr_merge_10['PROVINCE'].fillna('None', inplace=True)
attr_merge_10['PROVINCE_P'].fillna('None', inplace=True)
# apply an aggregate function to each sub dataframe
def apply_func(group):
    overlap = group['ras_val_result'].sum()*cell_size  # in sqkm
    base = group['ras_val_base'].sum()*cell_size
    per = overlap/base
    # can have multiple columns as a result, if returned as pd.Series
    return pd.Series([overlap, per, base], index=['less_than_threshold', 'per_ltt', 'base'])
# final dataframe
result_agg_10 = attr_merge_10.groupby(['PROVINCE', 'PROVINCE_P', 'category']).apply(apply_func).reset_index()
print(result_agg_10.head(20))
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
Further aggregation could be applied here.
Visualisation and exploring results
|
import seaborn as sns
g = sns.FacetGrid(result_agg_10, col="category")
g.map(plt.hist, 'per_ltt', bins=50, log=True)
# MEOW province (200m and 200 nautical combined)
result_agg_10_province = attr_merge_10.groupby(['PROVINCE']).apply(apply_func).reset_index()
sns.distplot(result_agg_10_province.per_ltt)
result_agg_10_province.sort_values('per_ltt', ascending=False).head(20)
# pelagic province
result_agg_10_pelagic = attr_merge_10.groupby(['PROVINCE_P']).apply(apply_func).reset_index()
sns.distplot(result_agg_10_pelagic.per_ltt)
result_agg_10_pelagic.sort_values('per_ltt', ascending=False).head(20)
# MEOW province (200m only)
result_agg_10_m200 = attr_merge_10[attr_merge_10.category == 'meow200m'].groupby(['PROVINCE']).apply(apply_func).reset_index()
sns.distplot(result_agg_10_m200.per_ltt)
result_agg_10_m200.sort_values('per_ltt', ascending = False).head(20)
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
New threshold for within EEZ
|
input_data.OBJECTID.unique().size
# no zeros in the result data
input_data.ras_val.size
input_data.ras_val.min()
# percentage of EEZ water in relation to the entire ocean
input_data.ras_val.size/g_array_f[~(g_array_f==0)].size
# all input_data are non-zero (zero indicates land and no_data)
input_data[~(input_data.ras_val == 0)].ras_val.count() == input_data.ras_val.count()
# get 10% threshold value
np.percentile(input_data.ras_val, 10)
sns.distplot(g_array_f[~(g_array_f==0)])
sns.distplot(input_data.ras_val)
|
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
|
Yichuans/wilderness-wh
|
gpl-3.0
|
Introduction to Variables
A TensorFlow variable is the recommended way to represent shared, persistent state your program manipulates. This guide covers how to create, update, and manage instances of tf.Variable in TensorFlow.
Variables are created and tracked via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it. Specific ops allow you to read and modify the values of this tensor. Higher-level libraries like tf.keras use tf.Variable to store model parameters.
Setup
This notebook discusses variable placement. If you want to see on what device your variables are placed, uncomment this line.
|
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Creating a variable
To create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.
|
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
A variable looks and acts like a tensor, and in fact is a data structure backed by a tf.Tensor. Like a tensor, it has a dtype and a shape, and it can be exported to NumPy.
|
print("Shape: ", my_variable.shape)
print("DType: ", my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Most tensor operations work on variables as expected, although variables cannot be reshaped.
|
print("A variable:", my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, [1,4]))
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
|
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
    a.assign([1.0, 2.0, 3.0])
except Exception as e:
    print(f"{type(e).__name__}: {e}")
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Creating new variables from existing variables duplicates the backing tensors; two variables will never share the same memory.
|
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Lifecycles, naming, and watching
In Python-based TensorFlow, tf.Variable instances have the same lifecycle as other Python objects: when there are no references to a variable, it is automatically deallocated.
Variables can also be named, which helps you track and debug them. Two variables can be given the same name.
|
# Create a and b; they will have the same name but will be backed by
# different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Variable names are preserved when saving and loading models. By default, variables in models acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
Although variables are important for differentiation, some variables do not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that does not need gradients is a training step counter.
|
step_counter = tf.Variable(1, trainable=False)
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Placing variables and tensors
For better performance, TensorFlow attempts to place tensors and variables on the fastest device compatible with their dtype. This means most variables are placed on a GPU if one is available.
However, this behavior can be overridden. In this snippet, a float tensor and a variable are placed on the CPU even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variables are placed.
Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
If you run this notebook on different backends with and without a GPU, you will see different logging. Note that device placement logging must be turned on at the start of the session.
|
with tf.device('CPU:0'):
    # Create some tensors
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)
print(c)
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
It is possible to set the location of a variable or tensor on one device and do the computation on another device. This introduces delay, as data needs to be copied between the devices.
You might do this, however, if you have multiple GPU workers but only want one copy of the variables.
|
with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

print(k)
|
site/ja/guide/variable.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Latitude, Longitude of Map
|
#center = [48.355, -124.642] # Neah Bay
center = [41.75, -124.19] # Crescent City
zoom = 11
m = Map(center=center, zoom=zoom)
m
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
Map Widget and drawing tool Definitions
|
zoom = 13
c = ipywidgets.Box()
topo_background = True # Use topo as background rather than map?
if topo_background:
    m = Map(width='1000px', height='600px', center=center, zoom=zoom,
            default_tiles=TileLayer(url=u'http://otile1.mqcdn.com/tiles/1.0.0/sat/{z}/{x}/{y}.jpg'))
else:
    m = Map(width='1000px', height='600px', center=center, zoom=zoom)
c.children = [m]
# keep track of rectangles and polygons drawn on map:
def clear_m():
    global rects, polys
    rects = set()
    polys = set()
clear_m()
rect_color = '#a52a2a'
poly_color = '#00F'
myDrawControl = DrawControl(
    rectangle={'shapeOptions': {'color': rect_color}},
    polygon={'shapeOptions': {'color': poly_color}})  # ,polyline=None)

def handle_draw(self, action, geo_json):
    global rects, polys
    polygon = []
    for coords in geo_json['geometry']['coordinates'][0][:-1][:]:
        polygon.append(tuple(coords))
    polygon = tuple(polygon)
    if geo_json['properties']['style']['color'] == '#00F':  # poly
        if action == 'created':
            polys.add(polygon)
        elif action == 'deleted':
            polys.discard(polygon)
    if geo_json['properties']['style']['color'] == '#a52a2a':  # rect
        if action == 'created':
            rects.add(polygon)
        elif action == 'deleted':
            rects.discard(polygon)

myDrawControl.on_draw(handle_draw)
m.add_control(myDrawControl)
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
On the map below, select the polygon tool or the rectangle tool and start clicking points.
You can add several of each if you want.
Note:
- If you delete one, you must then click 'save'.
- The edit function doesn't save the edited version (needs fixing).
|
clear_m()
display(m)
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
Now you can print the coordinates of the vertices:
Note that 5 digits gives about 1 meter precision.
|
for r in polys:
    print("\nPolygon vertices:")
    for c in r: print('%10.5f, %10.5f' % c)
for r in rects:
    print("\nRectangle vertices:")
    for c in r: print('%10.5f, %10.5f' % c)
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
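The "5 digits gives about 1 meter" rule of thumb above follows from the length of a degree of latitude; a quick back-of-the-envelope check, using the common rough figure of about 111 km per degree:

```python
# One degree of latitude is roughly 111 km, so the fifth decimal
# place of a coordinate corresponds to about one meter.
METERS_PER_DEGREE = 111_000  # rough figure; varies slightly with latitude

precision_m = METERS_PER_DEGREE / 100_000  # value of the 5th decimal place, in meters
print(precision_m)  # → 1.11
```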
Re-execute the cell above the map to clear and specify a new set of polygons.
Other ways to print the vertices
Depending on what you want to do with the output, you might want to print in a different format. Here are some examples...
Rectangle as a tuple x1,y1,x2,y2 of corners:
This format is used in specifying a GeoClaw AMR "region".
|
for r in rects:
    print("\nCoordinates of lower left and upper right corner of rectangle:")
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    print("x1, y1, x2, y2 = %10.5f, %10.5f, %10.5f, %10.5f" % (x1, y1, x2, y2))
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
As tuples x and y:
This format is used in specifying a GeoClaw fgmax rectangle or quadrilateral.
|
for r in rects:
    print("\nCoordinates of lower left and upper right corner of rectangle:")
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    print("x = %10.5f, %10.5f" % (x1, x2))
    print("y = %10.5f, %10.5f" % (y1, y2))
for r in polys:
    print("\nCoordinates of distinct vertices of polygon:")
    sx = 'x = '
    sy = 'y = '
    for j in range(len(r)-1):
        sx = sx + ' %10.5f,' % r[j][0]
        sy = sy + ' %10.5f,' % r[j][1]
    print(sx)
    print(sy)
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
To create kml files:
You can create a set of kml files for each rectangle or polygon with the code below.
poly2kml was recently added to geoclaw.kmltools.
|
from importlib import reload  # reload moved to importlib in Python 3
from clawpack.geoclaw import kmltools
reload(kmltools)
for i, r in enumerate(rects):
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    name = "rect%i" % i
    kmltools.box2kml((x1, x2, y1, y2), name=name, verbose=True)
for i, r in enumerate(polys):
    x = [xy[0] for xy in r]
    y = [xy[1] for xy in r]
    kmltools.poly2kml((x, y), name="poly%i" % i, verbose=True)
|
misc/ipyleaflet_polygon_selector.ipynb
|
rjleveque/binder_experiments
|
bsd-2-clause
|
robobrowser
|
from robobrowser import RoboBrowser
def post_to_yaml_loader(url, unglue_url="https://unglue.it/api/loader/yaml"):
    browser = RoboBrowser(history=True)
    browser.open(unglue_url)
    form = browser.get_forms()[0]
    form['repo_url'] = url
    # weird I have to manually set referer
    browser.session.headers['referer'] = unglue_url
    browser.submit_form(form)
    return browser
b = post_to_yaml_loader('https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml')
(b.url, b.response)
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
opds
|
from lxml import etree
import requests
opds_url = "https://unglue.it/api/opds/"
doc = etree.fromstring(requests.get(opds_url).content)
doc
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Failed attempt with requests to submit to yaml loader
|
import requests
from lxml import etree
from lxml.cssselect import CSSSelector
unglue_url = "https://unglue.it/api/loader/yaml"
r = requests.get(unglue_url)
doc = etree.HTML(r.content)
sel = CSSSelector('input[name="csrfmiddlewaretoken"]')
csrftoken = sel(doc)[0].attrib.get('value')
csrftoken
r = requests.post(unglue_url,
                  data={'repo_url':
                        'https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml',
                        'csrfmiddlewaretoken': csrftoken
                        },
                  headers={'referer': unglue_url})
(r.status_code, r.content)
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
travis webhooks
For https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115 -- 2 webhooks were sent to http://requestb.in/wrr6l3wr?inspect:
Travis webhook #1 for https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115
second webhook for https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115
|
import requests
raw_url_1 = (
"https://gist.githubusercontent.com/rdhyee/7f33050732a09dfa93f3/raw/8abf5661911e7aedf434d464dd1a28b3d24d6f83/travis_webhook_1.json"
)
raw_url_2 = (
"https://gist.githubusercontent.com/rdhyee/8dc04b8fe52a9fefe3c2/raw/8f9968f481df3f4d4ecd44624c2dc1b0a8e02a17/travis_webhook_2.json"
)
r1 = requests.get(raw_url_1).json()
r2 = requests.get(raw_url_2).json()
# url of metadata.yaml to load:
# https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml
r1.get('commit'), r1.get('repository', {}).get('name')
r1
r1.get('type'), r1['state'], r1['result'], r1.get('status_message')
r2.get('type'), r2['state'], r2['result'], r2.get('status_message')
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
travis webhook authentication
I think the documentation is incorrect. Instead of 'username/repository', just use the Travis-Repo-Slug header, which, I think, is just the full name of the repo, e.g., GITenberg/Adventures-of-Huckleberry-Finn_76
When Travis CI makes the POST request, a header named Authorization is included. Its value is the SHA2 hash of the GitHub username (see below), the name of the repository, and your Travis CI token.
For instance, in Python, use this snippet:
Python
from hashlib import sha256
sha256('username/repository' + TRAVIS_TOKEN).hexdigest()
Use this to ensure Travis CI is the one making requests to your webhook.
How do you find TRAVIS_TOKEN? You have to go to your profile. (I thought you could use the travis CLI, travis token, but that's for the "access token"; there are three different types of tokens in play for Travis: see The Travis CI Blog: Token, Token, Token.)
So I'm waiting for https://travis-ci.org/profile/rdhyee-GITenberg to load up; very slow on Chrome but fast on Firefox?
|
sent_token = "6fba7d2102f66b16139a54e1b434471f6fb64d20c0787ec773e92a5155fad4a9"
from github_settings import TRAVIS_TOKEN, username
from hashlib import sha256
# Note: in Python 3, sha256 needs bytes, so encode the string first
sha256(('GITenberg/Adventures-of-Huckleberry-Finn_76' + TRAVIS_TOKEN).encode('utf-8')).hexdigest()
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
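To actually verify an incoming webhook, compare the digest you compute against the Authorization header, ideally with a constant-time comparison. A sketch in Python 3 (where sha256 requires bytes); the token and slug values below are hypothetical, for illustration only:

```python
from hashlib import sha256
from hmac import compare_digest

def is_valid_travis_request(auth_header, repo_slug, travis_token):
    """Check the Authorization header against sha256(slug + token)."""
    expected = sha256((repo_slug + travis_token).encode("utf-8")).hexdigest()
    # compare_digest avoids timing side channels when comparing secrets
    return compare_digest(auth_header, expected)

# Hypothetical values for illustration:
token = "not-a-real-token"
slug = "GITenberg/Adventures-of-Huckleberry-Finn_76"
good = sha256((slug + token).encode("utf-8")).hexdigest()
print(is_valid_travis_request(good, slug, token))   # → True
print(is_valid_travis_request("bad", slug, token))  # → False
```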
testing my webhook implementation
|
import requests
url = "http://127.0.0.1:8000/api/travisci/webhook"
test_headers_url = \
"https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a17964e0d43f6a35bbf03efe3f8a7d752/webhook_headers.txt"
test_body_url = \
"https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a17964e0d43f6a35bbf03efe3f8a7d752/webook_body.json"
payload = requests.get(test_body_url).content
# split on the first ':' only, since header values (e.g. URLs) may contain colons;
# use .text rather than .content so we split a str, not bytes, under Python 3
headers = dict([(k, v.strip()) for (k, v) in [line.split(":", 1) for line in requests.get(test_headers_url).text.split('\n')]])
r = requests.post(url, data={'payload':payload}, headers=headers, allow_redirects=True)
(r.status_code, r.content)
# example of a request to exercise exception
import json
payload = json.dumps({
    "repository": {
        "id": 4651401,
        "name": "Adventures-of-Huckleberry-Finn_76",
        "owner_name": "GITenberg",
        "url": "http://GITenberg.github.com/"
    },
    "status_message": "Passed",
    "type": "push"
})
r = requests.post(url, data={'payload':payload}, headers={}, allow_redirects=True)
(r.status_code, r.content)
r = requests.get(url, allow_redirects=True)
(r.status_code, r.content)
|
webhooks_for_GITenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Bias-Variance using bootstrap
|
#initiate stuff
np.random.seed(2018)
err = []
bi=[]
vari=[]
n = 1000
n_boostraps = 1000
noise=0.1
x = np.sort(np.random.uniform(0,1,n)).reshape(-1,1)
y = true_fun(x).reshape(-1,1) + np.random.randn(len(x)).reshape(-1,1) * noise
y_no_noise= true_fun(x)
degrees = np.arange(1,16)
for degree in degrees:
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False))
    y_pred = np.empty((y_test.shape[0], n_boostraps))
    for i in range(n_boostraps):
        x_, y_ = resample(x_train, y_train)
        # Evaluate the new model on the same test data each time.
        y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel()
    error = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) )
    bias = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 )
    variance = np.mean( np.var(y_pred, axis=1, keepdims=True) )
    err.append(error)
    bi.append(bias)
    vari.append(variance)
plt.figure()
plt.plot(y)
plt.plot(y_no_noise)
plt.xlabel('x')
plt.ylabel('Franke-value')
plt.show()
max_pd = 12 #max polynomial degree to plot to
plt.figure()
plt.plot(degrees[:max_pd],err[:max_pd],'k',label='MSE')
plt.plot(degrees[:max_pd],bi[:max_pd],'b',label='Bias^2')
plt.plot(degrees[:max_pd],vari[:max_pd],'y',label='Var')
summ=np.zeros(len(vari))
for i in range(len(err)):
    summ[i] = vari[i] + bi[i]
plt.plot(degrees[:max_pd],summ[:max_pd],'ro',label='sum')
plt.xlabel('Polynomial degree')
plt.ylabel('MSE')
plt.legend()
plt.show()
|
doc/Programs/VariousCodes/Reduced_dimensionality.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
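The error, bias and variance estimates computed above follow the usual decomposition MSE = bias² + variance (plus irreducible noise). A minimal numpy-only check of the identity at a single fixed test point, using synthetic stand-in predictions rather than the Franke data:

```python
import numpy as np

rng = np.random.default_rng(0)
y_test = 1.5                                      # one fixed "true" value
preds = 1.2 + 0.3 * rng.standard_normal(100_000)  # stand-in bootstrap predictions

mse = np.mean((y_test - preds) ** 2)
bias_sq = (y_test - preds.mean()) ** 2
var = preds.var()
# With y_test held fixed, the identity holds exactly (no noise term)
print(np.isclose(mse, bias_sq + var))  # → True
```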
Bias-Variance using kFold CV
|
#initiate stuff again in case data was changed earlier
np.random.seed(2018)
noise=0.1
N=1000
k=5
x = np.sort(np.random.uniform(0,1,N)).reshape(-1,1)
y = true_fun(x).reshape(-1,1) + np.random.randn(len(x)).reshape(-1,1) * noise
y_no_noise= true_fun(x)
degrees = np.arange(1,16)
kfold = KFold(n_splits = k,shuffle=True,random_state=5)
#Two clumsy lines to get the size of y_pred array right
X_trainz, X_testz, y_trainz, y_testz = train_test_split(x,y,test_size=1./k)
array_size_thingy=len(y_testz)
err = []
bi=[]
vari=[]
for deg in degrees:
    y_pred = np.empty((array_size_thingy, k))
    j = 0
    model = make_pipeline(PolynomialFeatures(degree=deg), LinearRegression(fit_intercept=False))
    for train_inds, test_inds in kfold.split(x):
        xtrain = x[train_inds]
        ytrain = y[train_inds]
        xtest = x[test_inds]
        ytest = y[test_inds]
        y_pred[:, j] = model.fit(xtrain, ytrain).predict(xtest).ravel()
        j += 1
    error = np.mean( np.mean((ytest - y_pred)**2, axis=1, keepdims=True) )
    bias = np.mean( (ytest - np.mean(y_pred, axis=1, keepdims=True))**2 )
    variance = np.mean( np.var(y_pred, axis=1, keepdims=True) )
    err.append(error)
    bi.append(bias)
    vari.append(variance)
max_pd = 12 #max polynomial degree to plot to
plt.figure()
plt.plot(degrees[:max_pd],err[:max_pd],'k',label='MSE')
plt.plot(degrees[:max_pd],bi[:max_pd],'b',label='Bias^2')
plt.plot(degrees[:max_pd],vari[:max_pd],'y',label='Var')
summ=np.zeros(len(vari))
for i in range(len(err)):
    summ[i] = vari[i] + bi[i]
plt.plot(degrees[:max_pd],summ[:max_pd],'ro',label='sum')
plt.xlabel('Polynomial degree')
plt.ylabel('MSE')
plt.legend()
plt.show()
|
doc/Programs/VariousCodes/Reduced_dimensionality.ipynb
|
CompPhysics/MachineLearning
|
cc0-1.0
|
Creating datasets
Now we create our own dataset (DataFrame). For this example we take several people with their first and last names, and put them into a table together with their age and the score from a test.
|
raw_data = {'first_name': ['Jason','Molly','Tina'],
'last_name': ['Miller','Jacobsen','.'],
'age':['42','23','79'],
'score':['104','108','120']}
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
Now we need to label the different columns of the table. We do this with the following command, in which we create a list of strings given in the order of the columns.
|
df = pd.DataFrame(raw_data, columns = ['first_name','last_name','age','score'])
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
We now need to convert this DataFrame into a CSV document and save it. For this we use the command df.to_csv.
This writes the table to a file and exports it to the given file path.
|
df.to_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv')
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
Importing CSV files
To make this CSV file usable again, we have to import it with the pd.read_csv command. Note that this adds another column, which contains the index numbers of the old DataFrame.
|
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv')
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
If we want to drop the header, we can easily do so by adding the argument header=None. The top row is then replaced with numeric identifiers for the columns, which allow each cell to be identified unambiguously.
|
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv', header=None)
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
If we want to use our own column labels, we have to modify the command further. For this we use the argument names=['stringa','stringb'] to pass the various strings as names. The argument for dropping the header can be omitted, since we are using a header but replacing it with our own values.
|
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv' ,names=['UID','First Name','Last Name','Age','Score'])
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
We may now want to replace the numbers inserted into the first column of the table and use a different index. This is possible using the index column; the corresponding argument is index_col='index'
|
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv',index_col='UID' ,names=['UID','First Name','Last Name','Age','Score'])
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
We can now see that the original index column has been replaced by the UID.
Several indexes can also be used. For this we pass a list of strings instead of a single string. It is important to use the names we assigned ourselves here, not the original column names used in the CSV file.
|
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv',index_col=['First Name','Last Name'] ,names=['UID','First Name','Last Name','Age','Score'])
df
|
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
|
Ironlors/SmartIntersection-Ger
|
apache-2.0
|
Clearly there's a linear relationship between the amount of tip and the total bill. Let's try to find the trend line here using linear regression in TensorFlow.
In other words, let's try to build a predictor for tip, given total_bill. Put differently, let's try to find the slope (= gradient in >2D) of the trend line.
|
# When we're solving a linear regression problem with ML, we're basically trying to find the slope/gradient
# of the equation `y = W*X+b`.
# Or put differently: given a bunch of correlated datapoints for `y` and `X`,
# try to find the right values for `W` and `b` that minimize a given loss function (e.g. mean squared error).
# We know the values of our inputs X (=total_bill) and what we want to predict Y (=tip).
# Since we know the values from our dataset, we should use tf.placeholder
# We will feed the actual data in when we do training (in session.run)
X = tf.placeholder(dtype=np.float64)
Y = tf.placeholder(dtype=np.float64)
# W and b are the things we're trying to find, so we need to define them as tf.Variable
# We assign these random numbers as default values (=general best practice)
W = tf.Variable(np.random.randn(), dtype=np.float64)
b = tf.Variable(np.random.randn(), dtype=np.float64)
# The trend line we are trying to find.
# We could've also used tf shorthand notation: `pred= W * X + b`
pred = tf.add(tf.multiply(W, X), b)
# The loss function gives us an idea of how close our predicted model is to the actual data
# This is the metric that we're going to try and minimize
# Joris: I took this loss function from https://github.com/aymericdamien/TensorFlow-Examples/blob/84c99e3de1114c3b67c00b897eb9bbc1f7c618fc/examples/2_BasicModels/linear_regression.py#L39
# I'm a bit confused here about 2 things:
# 1) Why does `loss = tf.reduce_mean(tf.pow(pred-Y, 2))` not work (it gives NaN results).
# I think it's mathematically identical? tf.mean was used in a TF course I followed, but doesn't seem to work here
# Note: When changing the optimizer below to the AdamOptimizer, this problem resolves itself,
# so this must have something to do with how the optimizer does its calculations.
# 2) Why the denominator is 2N and not just N (which I believe is the definition of the mean).
#    (The extra factor of 2 is a common convention: it cancels the 2 that appears when differentiating
#    the square, and scaling the loss by a constant doesn't change where the minimum is.)
n_samples = len(tips['total_bill'])
loss = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples) # MSQE
# loss = tf.reduce_mean(tf.pow(pred-Y, 2)) # doesn't work with SGD optimizer
# Define optimizer (=Stochastic Gradient Descent = SGD) and training function
optimizer = tf.train.GradientDescentOptimizer(0.05) # the learning rate (0.05) is a hyper-parameter
# Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
train = optimizer.minimize(loss)
# Start TF session, init variables, print current values of W and b
sess = tf.Session()
sess.run(tf.global_variables_initializer()) # Assign variables with their default values
EPOCHS=100
for i in range(EPOCHS):
    # Feed every observation into the trainer, one by one
    for (x, y) in zip(tips['total_bill'], tips['tip']):
        sess.run(train, {X: x, Y: y})
    if i % (EPOCHS/10) == 0:
        l = sess.run(loss, {X: tips['total_bill'], Y: tips['tip']})
        w_val = sess.run(W)
        b_val = sess.run(b)
        print("{}% LOSS={} W={}, b={}".format(i, l, w_val, b_val))
# Recompute the loss and parameters after the final epoch before reporting them
l = sess.run(loss, {X: tips['total_bill'], Y: tips['tip']})
w_val = sess.run(W)
b_val = sess.run(b)
print("100% LOSS={} W={}, b={}".format(l, w_val, b_val))
|
machine-learning/tensorflow/tensorflow_linear_regression.ipynb
|
jorisroovers/machinelearning-playground
|
apache-2.0
|
Let's plot the line defined by W and b:
|
fit = pd.DataFrame()
fit['total_bill'] = tips['total_bill']
fit['res'] = w_val * tips['total_bill'] + b_val
sns.relplot(x="total_bill", y="tip", data=tips);
sns.lineplot(x='total_bill', y="res", data=fit, color="darkred");
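As a sanity check on the SGD result, the least-squares slope and intercept can also be computed in closed form with `np.polyfit`. The data below is a hypothetical stand-in; in the notebook you would pass `tips['total_bill']` and `tips['tip']`:

```python
import numpy as np

# Hypothetical stand-in data; substitute tips['total_bill'] / tips['tip'].
x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([1.5, 2.2, 3.1, 3.8, 4.4])

# Closed-form least-squares fit of y = W*x + b
W_fit, b_fit = np.polyfit(x, y, deg=1)
print(W_fit, b_fit)  # the SGD loop should converge close to these values
```

The trained `W` and `b` from the loop above should land close to these closed-form values.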
|
machine-learning/tensorflow/tensorflow_linear_regression.ipynb
|
jorisroovers/machinelearning-playground
|
apache-2.0
|
Introduction to the Keras Tuner
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/keras/keras_tuner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning.
Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:
Model hyperparameters, which influence model selection, such as the number and width of hidden layers
Algorithm hyperparameters, which influence the speed and quality of the learning algorithm, such as the learning rate for Stochastic Gradient Descent (SGD) and the number of nearest neighbors for a k Nearest Neighbors (KNN) classifier
In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.
Setup
|
import tensorflow as tf
from tensorflow import keras
import IPython
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Install and import the Keras Tuner.
|
!pip install -U keras-tuner
import kerastuner as kt
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Download and prepare the dataset
In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.
Load the data.
|
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Define the model
When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel.
You can define a hypermodel through two approaches:
By using a model builder function
By subclassing the HyperModel class of the Keras Tuner API
You can also use two pre-defined HyperModel classes - HyperXception and HyperResNet - for computer vision applications.
In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses hyperparameters you define inline to hypertune the model.
|
def model_builder(hp):
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
# Tune the number of units in the first Dense layer
# Choose an optimal value between 32-512
hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32)
model.add(keras.layers.Dense(units = hp_units, activation = 'relu'))
model.add(keras.layers.Dense(10))
# Tune the learning rate for the optimizer
# Choose an optimal value from 0.01, 0.001, or 0.0001
hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4])
model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate),
loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True),
metrics = ['accuracy'])
return model
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Instantiate the tuner and perform hypertuning
Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available - RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you use the Hyperband tuner.
To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).
|
tuner = kt.Hyperband(model_builder,
objective = 'val_accuracy',
max_epochs = 10,
factor = 3,
directory = 'my_dir',
project_name = 'intro_to_kt')
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket: the algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of the models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log<sub><code>factor</code></sub>(max_epochs) and rounding it to the nearest integer.
Before running the hyperparameter search, define a callback to clear the training outputs at the end of every training step.
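The bracket-size formula above can be checked directly; with the `max_epochs = 10` and `factor = 3` used in this tutorial it gives 3 models per bracket:

```python
import math

def models_in_bracket(max_epochs, factor):
    # 1 + log_factor(max_epochs), rounded to the nearest integer
    return round(1 + math.log(max_epochs) / math.log(factor))

print(models_in_bracket(10, 3))  # 3
```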
|
class ClearTrainingOutput(tf.keras.callbacks.Callback):
def on_train_end(*args, **kwargs):
IPython.display.clear_output(wait = True)
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.model.fit, in addition to the callback above.
|
tuner.search(img_train, label_train, epochs = 10, validation_data = (img_test, label_test), callbacks = [ClearTrainingOutput()])
# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]
print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
To finish this tutorial, retrain the model with the optimal hyperparameters from the search.
|
# Build the model with the optimal hyperparameters and train it on the data
model = tuner.hypermodel.build(best_hps)
model.fit(img_train, label_train, epochs = 10, validation_data = (img_test, label_test))
|
site/ko/tutorials/keras/keras_tuner.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
How to plot topomaps the way EEGLAB does
If you have previous EEGLAB experience you may have noticed that topomaps
(topoplots) generated using MNE-Python look a little different from those
created in EEGLAB. If you prefer the EEGLAB style this example will show you
how to calculate head sphere origin and radius to obtain EEGLAB-like channel
layout in MNE.
|
# Authors: Mikołaj Magnuski <mmagnuski@swps.edu.pl>
#
# License: BSD-3-Clause
import numpy as np
from matplotlib import pyplot as plt
import mne
print(__doc__)
|
stable/_downloads/b11aa17d34c637b8e89ed78c8bdf13e4/eeglab_head_sphere.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Calculate sphere origin and radius
EEGLAB plots head outline at the level where the head circumference is
measured
in the 10-20 system (a line going through Fpz, T8/T4, Oz and T7/T3 channels).
MNE-Python places the head outline lower on the z dimension, at the level of
the anatomical landmarks :term:LPA, RPA, and NAS <fiducial>.
Therefore to use the EEGLAB layout we
have to move the origin of the reference sphere (a sphere that is used as a
reference when projecting channel locations to a 2d plane) a few centimeters
up.
Instead of approximating this position by eye, as we did in the sensor
locations tutorial <tut-sensor-locations>, here we will calculate it using
the position of Fpz, T8, Oz and T7 channels available in our montage.
|
# first we obtain the 3d positions of selected channels
chs = ['Oz', 'Fpz', 'T7', 'T8']
# when the montage is set, it is transformed to the "head" coordinate frame
# that MNE uses internally, therefore we need to use
# ``fake_evoked.get_montage()`` to get these properly transformed coordinates
montage_head = fake_evoked.get_montage()
ch_pos = montage_head.get_positions()['ch_pos']
pos = np.stack([ch_pos[ch] for ch in chs])
# now we calculate the radius from T7 and T8 x position
# (we could use Oz and Fpz y positions as well)
radius = np.abs(pos[[2, 3], 0]).mean()
# then we obtain the x, y, z sphere center this way:
# x: x position of the Oz channel (should be very close to 0)
# y: y position of the T8 channel (should be very close to 0 too)
# z: average z position of Oz, Fpz, T7 and T8 (their z positions should be
#    the same, so we could also use just one of these channels); it should be
#    positive and somewhere around `0.03` (3 cm)
x = pos[0, 0]
y = pos[-1, 1]
z = pos[:, -1].mean()
# let's print the values we got:
print([f'{v:0.5f}' for v in [x, y, z, radius]])
|
stable/_downloads/b11aa17d34c637b8e89ed78c8bdf13e4/eeglab_head_sphere.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
|
live = preg[preg.outcome == 1]
live_cdf = thinkstats2.Cdf(live.totalwgt_lb, label='live')
first_cdf = thinkstats2.Cdf(live[live.birthord == 1].totalwgt_lb, label = 'first')
other_cdf = thinkstats2.Cdf(live[live.birthord > 1].totalwgt_lb, label = 'other')
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Display the CDF.
|
thinkplot.PrePlot(2)
thinkplot.Cdfs([first_cdf, other_cdf])
thinkplot.Show(xlabel='weight (pounds)', ylabel='CDF')
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Find out how much you weighed at birth, if you can, and compute CDF(x).
|
first_cdf.Prob(7.5)  # CDF(x) for an example birth weight of 7.5 lbs
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.
Compute the percentile rank of your birthweight
|
first_cdf.PercentileRank(7.5)  # percentile rank of an example birth weight of 7.5 lbs
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Compute the median birth weight by looking up the value associated with p=0.5.
|
first_cdf.Value(0.5)  # the median is the value at p=0.5
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
Make a random selection from <tt>cdf</tt>.
|
first_cdf.Percentile(75) - first_cdf.Percentile(25)  # IQR
first_cdf.Random()
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Draw a random sample from <tt>cdf</tt>.
|
first_cdf.Sample(10)
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
|
%time thinkplot.Pmf(thinkstats2.Pmf([random.random() for i in range(1000)]))
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Assuming that the PMF doesn't work very well, try plotting the CDF instead.
|
thinkplot.Cdf(thinkstats2.Cdf([random.random() for i in range(10000)]))
|
resolved/chap04ex.ipynb
|
Nathx/think_stats
|
gpl-3.0
|
Drivers
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/agents/tutorials/4_drivers_tutorial"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
A common pattern in reinforcement learning is to execute a policy in an environment for a specified number of steps or episodes. This happens, for example, during data collection, evaluation, and when generating a video of the agent.
While this is relatively straightforward to write in Python, it is much more complex to write and debug in TensorFlow because it involves tf.while loops, tf.cond and tf.control_dependencies. Therefore we abstract this notion of a run loop into a class called a driver, and provide well-tested implementations in both Python and TensorFlow.
Additionally, the data encountered by the driver at each step is saved in a named tuple called Trajectory and broadcast to a set of observers such as replay buffers and metrics. This data includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, and so on.
Setup
If you haven't installed TF-Agents or Gym yet, run:
|
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.policies import random_py_policy
from tf_agents.policies import random_tf_policy
from tf_agents.metrics import py_metrics
from tf_agents.metrics import tf_metrics
from tf_agents.drivers import py_driver
from tf_agents.drivers import dynamic_episode_driver
|
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Python Drivers
The PyDriver class takes a Python environment, a Python policy and a list of observers to update at each step. The main method is run(), which steps the environment using actions from the policy until at least one of the following termination criteria is met: the number of steps reaches max_steps or the number of episodes reaches max_episodes.
The implementation is roughly as follows:
```python
class PyDriver(object):
def init(self, env, policy, observers, max_steps=1, max_episodes=1):
self._env = env
self._policy = policy
self._observers = observers or []
self._max_steps = max_steps or np.inf
self._max_episodes = max_episodes or np.inf
def run(self, time_step, policy_state=()):
num_steps = 0
num_episodes = 0
while num_steps < self._max_steps and num_episodes < self._max_episodes:
# Compute an action using the policy for the given time_step
action_step = self._policy.action(time_step, policy_state)
# Apply the action to the environment and get the next step
next_time_step = self._env.step(action_step.action)
# Package information into a trajectory
traj = trajectory.Trajectory(
time_step.step_type,
time_step.observation,
action_step.action,
action_step.info,
next_time_step.step_type,
next_time_step.reward,
next_time_step.discount)
for observer in self._observers:
observer(traj)
# Update statistics to check termination
num_episodes += np.sum(traj.is_last())
num_steps += np.sum(~traj.is_boundary())
time_step = next_time_step
policy_state = action_step.state
return time_step, policy_state
```
The example below runs a random policy in the CartPole environment, saving the results to a replay buffer and computing some metrics.
|
env = suite_gym.load('CartPole-v0')
policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(),
action_spec=env.action_spec())
replay_buffer = []
metric = py_metrics.AverageReturnMetric()
observers = [replay_buffer.append, metric]
driver = py_driver.PyDriver(
env, policy, observers, max_steps=20, max_episodes=1)
initial_time_step = env.reset()
final_time_step, _ = driver.run(initial_time_step)
print('Replay Buffer:')
for traj in replay_buffer:
print(traj)
print('Average Return: ', metric.result())
|
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
TensorFlow Drivers
We also have drivers in TensorFlow, which are functionally similar to Python drivers but use TF environments, TF policies, TF observers, etc. We currently have two TensorFlow drivers: DynamicStepDriver, which terminates after a given number of (valid) environment steps, and DynamicEpisodeDriver, which terminates after a given number of episodes. Let us look at an example of the DynamicEpisodeDriver in action.
|
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
tf_policy = random_tf_policy.RandomTFPolicy(action_spec=tf_env.action_spec(),
time_step_spec=tf_env.time_step_spec())
num_episodes = tf_metrics.NumberOfEpisodes()
env_steps = tf_metrics.EnvironmentSteps()
observers = [num_episodes, env_steps]
driver = dynamic_episode_driver.DynamicEpisodeDriver(
tf_env, tf_policy, observers, num_episodes=2)
# Initial driver.run will reset the environment and initialize the policy.
final_time_step, policy_state = driver.run()
print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
# Continue running from previous state
final_time_step, _ = driver.run(final_time_step, policy_state)
print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
|
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
If you enter the source code above into a .py file or Jupyter notebook and run it with Python, a "gaussian_elimination.ipynb" file will be created. You can run it with Jupyter Notebook, or open a console window (cmd), move to the folder containing the file, and enter the following:
bash
jupyter notebook gaussian_elimination.ipynb
Structure of the gaussian_elimination.py code
Unlike the previous Lab, this Lab consists of three major parts, each with its own sub-problems. The purpose of each part is as follows.
Part | Description
--------------------|--------------------------
vector operation | Contains the most basic vector-operation functions used in Gauss elimination
gauss elimination module | Contains the key building-block functions that must run during the Gauss elimination process
gauss elimination solution | A function that runs the actual Gauss elimination process using the vector operation and gauss elimination module parts
The goal of this Lab is to implement all three parts above and produce working, usable Gauss elimination code.
Part #1 - vector operation (point 1)
Function | Description
--------------------|--------------------------
vector_scalar_multiplication | $$\alpha \times \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} \alpha \times x & \alpha \times y & \alpha \times z \end{array}\right]$$
vector_add_operation | $$\left[\begin{array}{r} a & b & c \end{array}\right] + \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} a+x & b+y & c+z \end{array}\right]$$
vector_subtract_operation | $$\left[\begin{array}{r} a & b & c \end{array}\right] - \left[\begin{array}{r} x & y & z \end{array}\right] = \left[\begin{array}{r} a-x & b-y & c-z \end{array}\right]$$
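For self-checking only (the assignment asks you to implement these yourself), one possible set of reference implementations is:

```python
def vector_scalar_multiplication(scalar_value, row_vector):
    # Multiply every element of the row vector by the scalar
    return [scalar_value * x for x in row_vector]

def vector_add_operation(vector_1, vector_2):
    # Element-wise addition of two equal-length vectors
    return [a + b for a, b in zip(vector_1, vector_2)]

def vector_subtract_operation(vector_1, vector_2):
    # Element-wise subtraction of two equal-length vectors
    return [a - b for a, b in zip(vector_1, vector_2)]

print(vector_add_operation([3, 2, 3], [1, 2, 3]))   # [4, 4, 6]
print(vector_scalar_multiplication(5, [1, 2, 3]))   # [5, 10, 15]
```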
|
def vector_scalar_multiplication(scalar_value, row_vector):
return None
def vector_add_operation(vector_1, vector_2):
return None
def vector_subtract_operation(vector_1, vector_2):
return None
vector_a = [3, 2, 3]
vector_b = [5, 5, 4]
vector_c = [1, 2, 3]
print (vector_add_operation(vector_a, vector_c)) # Expected value: [4, 4, 6]
print (vector_subtract_operation(vector_b, vector_c)) # Expected value: [4, 3, 1]
print (vector_subtract_operation(vector_a, vector_c)) # Expected value: [2, 0, 0]
print (vector_scalar_multiplication(5, vector_c)) # Expected value: [5, 10, 15]
|
assignment/ps2/gaussian_elimination.ipynb
|
TeamLab/Gachon_CS50_OR_KMOOC
|
mit
|
Part #2 - gauss elimination module (point 3)
Function | Description
--------------------|--------------------------
get_max_row_position | Returns the row index whose element in the given column of the matrix has the largest value; the returned row index must not be smaller than the row currently being processed
divided_by_pivot_value | Returns row_vector after a scalar-vector product with (1/pivot_value); the element turned into 1 by pivot_value must not be negative (the pivot element must be exactly +1)
swap_row | Given a matrix and two row numbers to exchange, returns a new matrix with those two rows swapped. A completely new matrix is created, the values of the given matrix are copied into it, the two rows are exchanged, and the modified matrix is returned (the matrix passed in as an argument is not returned)
gauss_jordan_elimination_process | Uses the value at pivot_row_position of the given matrix to make the column_position element of every other row vector 0; whether vector addition or subtraction is applied depends on the pivot value and the value in the target row vector
|
def get_max_row_position(target_matrix, current_row_position, current_column_position):
return None
matrix_X = [[12,7,3], [4 ,5,6], [7 ,8,9]]
matrix_Y = [[5,8,1,2], [6,7,3,0], [4,5,9,1]]
print (get_max_row_position(matrix_X, 0, 0)) # Expected value : 0 # 12 > 4 > 7 matrix_X[0].index(12)
print (get_max_row_position(matrix_Y, 0, 2)) # Expected value : 2 # 1 < 3 < 9 matrix_Y[2].index(9)
def divided_by_pivot_value(row_vector, pivot_value):
return None
vector_c = [0, -2, 4]
pivot_value = -2
divided_by_pivot_value(vector_c, pivot_value)
def swap_row(matrix, source_row_number, target_row_number):
return None
matrix_X = [[12,7,3], [4 ,5,6], [7 ,8,9]]
print(swap_row(matrix_X, 1, 2)) # expected value : [[12, 7, 3], [7, 8, 9], [4, 5, 6]]
print(swap_row(matrix_X, 2, 1)) # expected value : [[12, 7, 3], [7, 8, 9], [4, 5, 6]]
print(swap_row(matrix_X, 1, 0)) # expected value : [[4, 5, 6], [12, 7, 3], [7, 8, 9]]
def gauss_jordan_elimination_process(target_matrix, pivot_row_position, column_position ):
return None
matrix_X = [[1,7,3], [4 ,5,6], [7 ,8,9]]
print(gauss_jordan_elimination_process(matrix_X, 0, 0)) # expected value : [[1, 7, 3], [0.0, 23.0, 6.0], [0.0, 41.0, 12.0]]
matrix_X = [[1,7,3], [0,1,6], [0 ,8,9]]
print(gauss_jordan_elimination_process(matrix_X, 1, 1)) # expected value : [[-1.0, 0.0, 39.0], [0, 1, 6], [0.0, 0.0, 39.0]]
matrix_X = [[1,0,3], [0,1,-6], [0 ,0,1]]
print(gauss_jordan_elimination_process(matrix_X, 2, 2)) # expected value : [[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0, 0, 1]]
|
assignment/ps2/gaussian_elimination.ipynb
|
TeamLab/Gachon_CS50_OR_KMOOC
|
mit
|
Part #3 - gauss elimination solution (point 3)
The final step is to combine the modules written above into a single Gauss elimination solution, which follows these steps:
1. Count the total number of rows of the input matrix.
2. Advance the pivot row one row at a time, starting from row 0 of the matrix.
3. If the largest value in the current pivot column is located in another row, swap the two rows.
4. Once the pivot row and pivot column are determined, divide the pivot row vector by the pivot value.
5. Use the pivot row vector produced in step 4 to make the pivot column of the remaining row vectors 0.
6. Carry out all of these operations through the last row of the matrix.
7. If a pivot value in the final result is -1.0, change it to 1.0.
8. Return the final reduced row echelon matrix.
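The steps above can be sketched compactly, independent of the assignment's helper functions (an illustrative reference, not the required solution):

```python
def gauss_jordan(matrix):
    m = [row[:] for row in matrix]      # work on a copy of the input
    n = len(m)                          # step 1: number of rows
    for col in range(n):                # step 2: advance the pivot row by row
        # step 3: partial pivoting - bring the largest pivot candidate up
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        # step 4: divide the pivot row by the pivot value
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # step 5: zero out the pivot column in every other row
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return m                            # reduced row echelon form

solution = gauss_jordan([[2.0, 1.0, -1.0, 8.0],
                         [-3.0, -1.0, 2.0, -11.0],
                         [-2.0, 1.0, 2.0, -3.0]])
print([row[-1] for row in solution])    # approximately [2.0, 3.0, -1.0]
```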
|
def gauss_elimination_solution(target_matrix):
return None
target_matrix = [[1.0,1.0,-1.0,8.0], [-3.0,-1.0,2.0,-11.0], [-2.0,1.0,2.0,-3.0]]
print (gauss_elimination_solution(target_matrix))
# Expected value : [[1.0, 0.0, 0.0, -0.666666666666667], [0.0, 1.0, 0.0, 4.333333333333333], [0.0, 0.0, 1.0, -4.333333333333334]]
target_matrix = [[-3.0,2.0,-6.0,6.0], [5.0,7.0,-5.0,3.0], [1.0,4.0,-2.0,1.0]]
print (gauss_elimination_solution(target_matrix))
# Expected value : [[1.0, -0.0, -0.0, -0.09302325581395321], [0.0, 1.0, 0.0, -0.24418604651162812], [-0.0, -0.0, 1.0, -1.0348837209302326]]
|
assignment/ps2/gaussian_elimination.ipynb
|
TeamLab/Gachon_CS50_OR_KMOOC
|
mit
|
Submitting your results
If you submit the homework without problems, all of the results below will be shown as PASS.
|
import gachon_autograder_client as g_autograder
THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "gaussian_elimination.ipynb"
g_autograder.submit_assignment(THE_TEMLABIO_ID, PASSWORD, ASSIGNMENT_FILE_NAME)
|
assignment/ps2/gaussian_elimination.ipynb
|
TeamLab/Gachon_CS50_OR_KMOOC
|
mit
|
Loading the dataset 0750-0805
Description of the dataset is at:
D:/zzzLola/PhD/DataSet/US101/US101_time_series/US-101-Main-Data/vehicle-trajectory-data/trajectory-data-dictionary.htm
|
c_dataset = ['vID','fID', 'tF', 'Time', 'lX', 'lY', 'gX', 'gY', 'vLen', 'vWid', 'vType','vVel', 'vAcc', 'vLane', 'vPrec', 'vFoll', 'spac','headway' ]
dataset = pd.read_table('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\trajectories-0750am-0805am.txt', sep=r"\s+",
header=None, names=c_dataset)
dataset[:10]
|
vehicles/VehiclesTimeCycles.ipynb
|
lalonica/PhD
|
gpl-3.0
|
15 min = 900 s = 9000 frames (the data is sampled at 10 frames per second) //
9529 frames = 952.9 s = 15 min 52.9 s
The actual temporal length of this dataset is 15 min 52.9 s, so the vehicle timestamps match. This makes sense given how the data was collected: there is no GPS on the vehicles; the trajectories come from synchronized cameras located on different buildings.
|
#Converting to meters
dataset['lX'] = dataset.lX * 0.3048
dataset['lY'] = dataset.lY * 0.3048
dataset['gX'] = dataset.gX * 0.3048
dataset['gY'] = dataset.gY * 0.3048
dataset['vLen'] = dataset.vLen * 0.3048
dataset['vWid'] = dataset.vWid * 0.3048
dataset['spac'] = dataset.spac * 0.3048
dataset['vVel'] = dataset.vVel * 0.3048
dataset['vAcc'] = dataset.vAcc * 0.3048
dataset[:10]
|
vehicles/VehiclesTimeCycles.ipynb
|
lalonica/PhD
|
gpl-3.0
|
For every time stamp, check how many vehicles are accelerating when the one behind is also accelerating (or not):
- vehicle_acceleration vs preceding_vehicle_acceleration
- vehicle_acceleration vs follower_vehicle_acceleration
When is a vehicle changing lanes?
|
dataset['tF'].describe()
des_all = dataset.describe()
des_all
des_all.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\description_allDataset_160502.csv', sep='\t', encoding='utf-8')
dataset.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters_160502.txt', sep='\t', encoding='utf-8',index=False)
#table.groupby('YEARMONTH').CLIENTCODE.nunique()
v_num_lanes = dataset.groupby('vID').vLane.nunique()
v_num_lanes[v_num_lanes > 1].count()
v_num_lanes[v_num_lanes == 1].count()
#Drop some field are not necessary for the time being.
dataset = dataset.drop(['fID','tF','lX','lY','vLen','vWid', 'vType','vVel', 'vAcc',
'vLane', 'vPrec', 'vFoll','spac','headway'], axis=1)
dataset[:10]
def save_graph(graph,file_name):
    # initialize figure
plt.figure(num=None, figsize=(20, 20), dpi=80)
plt.axis('off')
fig = plt.figure(1)
pos = nx.random_layout(graph) #spring_layout(graph)
nx.draw_networkx_nodes(graph,pos)
nx.draw_networkx_edges(graph,pos)
nx.draw_networkx_labels(graph,pos)
#cut = 1.00
#xmax = cut * max(xx for xx, yy in pos.values())
#ymax = cut * max(yy for xx, yy in pos.values())
#plt.xlim(0, xmax)
#plt.ylim(0, ymax)
plt.savefig(file_name,bbox_inches="tight")
pylab.close()
del fig
times = dataset['Time'].unique()
#data = pd.DataFrame()
#data = data.fillna(0) # with 0s rather than NaNs
dTime = pd.DataFrame()
for time in times:
#print 'Time %i ' %time
dataTime0 = dataset.loc[dataset['Time'] == time]
list_vIDs = dataTime0.vID.tolist()
#print list_vIDs
dataTime = dataTime0.set_index("vID")
#index_dataTime = dataTime.index.values
#print dataTime
perm = list(permutations(list_vIDs,2))
#print perm
dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) +
(((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm]
dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'})
#Create the fields vID and To
dataDist['FromTo'] = dataDist.index
dataDist['From'] = dataDist.FromTo.str[0]
dataDist['To'] = dataDist.FromTo.str[1]
#I multiply by 100 in order to scale the number
dataDist['weight'] = (1/dataDist.dist)*100
#Delete the intermediate FromTo field
dataDist = dataDist.drop('FromTo', 1)
graph = nx.from_pandas_dataframe(dataDist, 'From','To',['weight'])
save_graph(graph,'D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\graphs\\%i_my_graph.png' %time)
|
vehicles/VehiclesTimeCycles.ipynb
|
lalonica/PhD
|
gpl-3.0
|
DIFFERENT, again!
These two datasets are anticorrelated. To see what this means, we can derive the correlation coefficients for the two datasets independently:
|
print(np.corrcoef(X, Y1)[0, 1])
print(np.corrcoef(X, Y2)[0, 1])
|
lectures/L23.ipynb
|
eds-uga/csci1360-fa16
|
mit
|
DDM vs Signal Detection Theory
Comparing DDM to Signal Detection - does d' correlate with DDM parameters?
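The notebook computes d' through its own `signal_detection` module; as background, d' is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of that formula (a hypothetical helper, not the module used below):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = Z(hit rate) - Z(false-alarm rate)
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

print(d_prime(80, 20, 20, 80))  # ≈ 1.683
```

Note that real implementations usually correct hit/false-alarm rates of exactly 0 or 1 before the z-transform, which this sketch omits.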
|
def get_d_primes(dataset, stim1, stim2, include_id=False):
d_primes = dict()
subject_ids = set(dataset.subj_idx)
for subject_id in subject_ids:
stim1_data = dataset.loc[
dataset['subj_idx'] == subject_id].loc[
dataset['stim'] == str(stim1)]
stim1_trials = len(stim1_data)
hits = len(stim1_data.loc[
stim1_data['response'] == 1.0])
stim2_data = dataset.loc[
dataset['subj_idx'] == subject_id].loc[
dataset['stim'] == str(stim2)]
stim2_trials = len(stim2_data)
fas = len(stim2_data.loc[
stim2_data['response'] == 0.0])
if not stim1_trials or not stim2_trials:
d_primes[subject_id] = None # N/A placeholder value
continue
d_prime = signal_detection.signal_detection(
n_stim1=stim1_trials,
n_stim2=stim2_trials,
hits=hits,
false_alarms=fas)['d_prime']
d_primes[subject_id] = d_prime
if not include_id:
return list(d_primes.values())
return d_primes
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
d' distributions
Pilot
|
plt.hist(get_d_primes(pilot_data, 'SS', 'US'))
plt.hist(get_d_primes(pilot_data, 'SS', 'CS'))
plt.hist(get_d_primes(pilot_data, 'SS', 'CP'))
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
Controls
|
plt.hist(get_d_primes(controls_data, 'SS', 'US'))
plt.hist(get_d_primes(controls_data, 'SS', 'CS'))
plt.hist(get_d_primes(controls_data, 'SS', 'CP'))
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
Patients
|
plt.hist(get_d_primes(patients_data, 'SS', 'US'))
plt.hist(get_d_primes(patients_data, 'SS', 'CS'))
plt.hist(list(filter(None, get_d_primes(patients_data, 'SS', 'CP'))))
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
Drift rate / d'
|
def match_dprime_to_driftrate(dataset, model, stim1, stim2):
subject_ids = set(dataset.subj_idx)
d_primes = get_d_primes(dataset, stim1, stim2, include_id=True)
for subject_id in subject_ids:
try:
d_prime = d_primes[subject_id]
v_stim1 = model.values['v_subj(' + stim1 + ').' + str(subject_id)]
v_stim2 = model.values['v_subj(' + stim2 + ').' + str(subject_id)]
v_diff = abs(v_stim2 - v_stim1)
yield (d_prime, v_diff)
except:
continue
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
SS vs US
|
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
SS vs CS
|
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
SS vs CP
|
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
Low d' comparisons
Compare ddm drift rate only with low d'
Ratcliff, R. (2014). Measuring psychometric functions with the diffusion model. Journal of Experimental Psychology: Human Perception and Performance, 40(2), 870-888.
http://dx.doi.org/10.1037/a0034954
Patients are the best candidates for this (SSvsCS, SSvsCP)
|
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data,
patients_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data,
patients_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
|
notebooks/model_comparisons.ipynb
|
CPernet/LanguageDecision
|
gpl-3.0
|
Get the data from the FITS file.
Here we loop over the header keywords to get the correct columns for the X/Y coordinates. We also parse the FITS header to get the data we need to project the X/Y values (which are integers from 0-->1000) into RA/dec coordinates.
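The keyword loop works roughly like this, shown on a plain dict standing in for the FITS header. The `TTYPEn` naming convention is standard for FITS binary tables, but the specific keyword values below are made up for illustration:

```python
# A FITS binary-table header names its columns with TTYPEn keywords;
# scan them to find which column numbers hold the X and Y coordinates.
hdr = {'TTYPE1': 'TIME', 'TTYPE2': 'X', 'TTYPE3': 'Y', 'TTYPE4': 'PI'}

xcol = ycol = None
for key, value in hdr.items():
    if not key.startswith('TTYPE'):
        continue
    n = int(key[5:])          # column number encoded in the keyword name
    if value == 'X':
        xcol = n
    elif value == 'Y':
        ycol = n

print(xcol, ycol)  # -> 2 3
```

The matching `TCRVLn`/`TCDLTn`/`TCRPXn` keywords for those column numbers then give the reference value, pixel scale, and reference pixel needed to project the integer X/Y values into RA/dec.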
|
#infile = '/Users/bwgref/science/solar/july_2016/data/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt'
infile = '/Users/bwgref/science/solar/data/Sol_16208/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt'
hdulist = fits.open(infile)
evtdata = hdulist[1].data
hdr = hdulist[1].header
hdulist.close()
|
notebooks/Convert_Example.ipynb
|
NuSTAR/nustar_pysolar
|
mit
|
Rotate to solar coordinates:
Variation on what we did to set up the pointing.
The important option here is how frequently the position of the Sun is recomputed; the default is once every 5 seconds.
Note that this can take a while to run (~a minute or two), so I recommend saving the output as a new FITS file (below).
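The "recompute every 5 seconds" idea amounts to binning the event timestamps so the solar position is evaluated once per bin rather than once per event. A hedged sketch of that grouping step (the function name and sample times are assumptions, not `nustar_pysolar` internals):

```python
import numpy as np

def time_bins(times, dt=5.0):
    """Assign each event timestamp to a dt-second bin; the solar
    position would be computed once per bin instead of per event."""
    return ((times - times[0]) // dt).astype(int)

times = np.array([0.0, 1.2, 4.9, 5.0, 9.9, 12.3])
print(time_bins(times))  # -> [0 0 0 1 1 2]
```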
|
reload(convert)
(newdata, newhdr) = convert.to_solar(evtdata, hdr)
|
notebooks/Convert_Example.ipynb
|
NuSTAR/nustar_pysolar
|
mit
|
Alternatively, use the convenience wrapper which automatically adds on the _sunpos.evt suffix:
|
convert.convert_file(infile)
|
notebooks/Convert_Example.ipynb
|
NuSTAR/nustar_pysolar
|
mit
|
So what just happened here? A decorator simply wrapped the function and modified its behaviour. Now let's see how we can rewrite this code using the @ symbol, which is what Python uses for decorators:
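For reference, `new_decorator` was defined in an earlier cell; it looks roughly like this (treat the exact printed strings in the wrapper as an assumption about that cell):

```python
def new_decorator(func):
    def wrap_func():
        print("Code would be here, before executing the func")
        func()   # call the wrapped function
        print("Code here will execute after the func()")
    return wrap_func

@new_decorator
def func_needs_decorator():
    print("This function is in need of a Decorator")

func_needs_decorator()
```

Because `wrap_func` replaces the original, the decorated function's `__name__` becomes `'wrap_func'` — `functools.wraps` is the usual fix when that matters.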
|
@new_decorator
def func_needs_decorator():
    print("This function is in need of a Decorator")

func_needs_decorator()
|
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Decorators-checkpoint.ipynb
|
yashdeeph709/Algorithms
|
apache-2.0
|
Data acquisition
This dataset is a classic, so many machine learning frameworks (sklearn included) ship an interface for it. In practice, though, we usually have to handle data from all sorts of sources, so here we fetch the data the most traditional way.
|
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text
row_name = ['sepal_length','sepal_width','petal_length','petal_width','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
dataset[:10]
|
ipynbs/supervised/Perceptron.ipynb
|
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
|
mit
|
Data preprocessing
Since the features are floats while the labels are categorical, the labels need to be encoded and the features standardized. We use z-scores for the normalization.
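The z-score is simply (x - mean) / std; a hand-rolled sketch of what `StandardScaler` computes per column (the sample values below are made-up `sepal_length`-like numbers):

```python
import numpy as np

x = np.array([5.1, 4.9, 4.7, 4.6, 5.0])   # a few sepal_length-like values
z = (x - x.mean()) / x.std()               # StandardScaler uses the population std
print(z.mean(), z.std())                   # mean ~ 0, std ~ 1
```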
|
encs = {}
encs["feature"] = StandardScaler()
encs["feature"].fit(dataset[row_name[:-1]])
table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1])
encs["label"]=LabelEncoder()
encs["label"].fit(dataset["label"])
table["label"] = encs["label"].transform(dataset["label"])
table[:10]
table.groupby("label").count()
|
ipynbs/supervised/Perceptron.ipynb
|
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
|
mit
|
Splitting the dataset
|
train_set,validation_set = train_test_split(table)
train_set.groupby("label").count()
validation_set.groupby("label").count()
|
ipynbs/supervised/Perceptron.ipynb
|
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
|
mit
|
Training the model
|
mlp = MLPClassifier(
hidden_layer_sizes=(100,50),
activation='relu',
solver='adam',
alpha=0.0001,
batch_size='auto',
learning_rate='constant',
learning_rate_init=0.001)
mlp.fit(train_set[row_name[:-1]], train_set["label"])
pre = mlp.predict(validation_set[row_name[:-1]])
|
ipynbs/supervised/Perceptron.ipynb
|
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
|
mit
|
Now define a general panel and compute its velocity.
|
# define panel
my_panel = Panel(x0=-0.7,y0=0.5,x1=0.5,y1=-0.4,gamma=-2)
# compute velocity on grid
u,v = my_panel.velocity(x,y)
# plot it
plot_uv(u,v) # plot the flow on the grid
my_panel.plot() # plot the panel
|
lessons/VortexSheet.ipynb
|
ultiyuan/test0
|
gpl-2.0
|
I've already pulled down The Shunned House from Project Gutenberg (https://www.gutenberg.org/wiki/Main_Page) and saved it as a text file called 'lovecraft.txt'. Here we'll load it, decoding as utf-8. Lastly, we'll instantiate a TextBlob object:
|
with open('lovecraft.txt', 'r', encoding='utf-8') as myfile:
    shunned = myfile.read()

tb = TextBlob(shunned)
|
textblob_lovecraft.ipynb
|
dagrha/textual-analysis
|
mit
|
Now we'll go through every sentence in the story and get the 'sentiment' of each one. Sentiment analysis in TextBlob returns a polarity and a subjectivity number. Here we'll just extract the polarity:
|
import csv

with open('shunned.csv', 'w', newline='') as text_file:
    writer = csv.writer(text_file)
    writer.writerow(['number', 'polarity'])
    for i, sentence in enumerate(tb.sentences):
        writer.writerow([i, sentence.sentiment.polarity])
|
textblob_lovecraft.ipynb
|
dagrha/textual-analysis
|
mit
|
Now we instantiate a dataframe by pulling in that csv:
|
df = pd.read_csv('shunned.csv', index_col=0)
|
textblob_lovecraft.ipynb
|
dagrha/textual-analysis
|
mit
|
Let's plot our data! First let's just look at how the sentiment polarity changes from sentence to sentence:
|
df.polarity.plot(figsize=(12,5), color='b', title='Sentiment Polarity for HP Lovecraft\'s The Shunned House')
plt.xlabel('Sentence number')
plt.ylabel('Sentiment polarity')
|
textblob_lovecraft.ipynb
|
dagrha/textual-analysis
|
mit
|
Very up and down from sentence to sentence! Some dark sentences (the ones below 0.0 polarity), some positive sentences (greater than 0.0 polarity), but overall it hovers around 0.0 polarity.
One thing that may be interesting to look at is how the sentiment changes over the course of the book. To examine that further, I'm going to create a new column in the dataframe which is the cumulative summation of the polarity rating, using the cumsum() pandas method:
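`cumsum` is just a running total, so the new column shows whether the story's mood trends up or down overall. A tiny sketch with made-up polarity values:

```python
import pandas as pd

# four made-up sentence polarities
s = pd.Series([0.5, -0.2, 0.1, -0.4])
print(s.cumsum().tolist())   # running total of polarity
```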
|
df['cum_sum'] = df.polarity.cumsum()
|
textblob_lovecraft.ipynb
|
dagrha/textual-analysis
|
mit
|