Here's what one bed (interval in striplog's vocabulary) looks like:
s[0]
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
It's easy to get the 'lithology' attribute from all the beds:
seq = [i.primary.lithology for i in s]
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
This is what we need!
m = Markov_chain.from_sequence(seq, strings_are_states=True, include_self=False)
m.normalized_difference
m.plot_norm_diff()
m.plot_graph()
docs/tutorial/15_Model_sequences_with_Markov_chains.ipynb
agile-geoscience/striplog
apache-2.0
The site http://freegeoip.net returns the location corresponding to your IP address; if you pass an IP as a parameter, it returns the location for that IP, as shown below.
print(getCountry("50.78.253.58"))
print(getCountry(""))
Python_Spider_Tutorial_07.ipynb
yttty/python3-scraper-tutorial
gpl-3.0
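The getCountry helper itself is not shown in this snippet; here is a minimal sketch of its response-parsing half, assuming a freegeoip-style JSON payload (the field names below are illustrative, not taken from the original notebook):

```python
import json

def parse_country(payload):
    # Pull the country code out of a freegeoip-style JSON response.
    # The "country_code" field name is an assumption based on the old API.
    return json.loads(payload).get("country_code")

sample = '{"ip": "50.78.253.58", "country_code": "US", "country_name": "United States"}'
print(parse_country(sample))  # US
```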
Parsing JSON with Python
import json

jsonString = '{"arrayOfNums":[{"number":0},{"number":1},{"number":2}],"arrayOfFruits":[{"fruit":"apple"},{"fruit":"banana"},{"fruit":"pear"}]}'
jsonObj = json.loads(jsonString)
Python_Spider_Tutorial_07.ipynb
yttty/python3-scraper-tutorial
gpl-3.0
JSON (JavaScript Object Notation) is a lightweight data-interchange format based on a subset of ECMAScript. JSON uses a text format that is completely language-independent, but follows conventions familiar from the C family of languages (C, C++, C#, Java, JavaScript, Perl, Python, and others), which makes it an ideal data-interchange language. It is easy for humans to read and write, and easy for machines to parse and generate (it is often used to reduce network transfer overhead). (From Baidu Baike.) Pretty-printing the jsonString above shows what it actually looks like: { "arrayOfNums": [ { "number": 0 }, ...
print(jsonObj.get("arrayOfNums"))
print(jsonObj.get("arrayOfNums")[1])
print(jsonObj.get("arrayOfNums")[1].get("number") + jsonObj.get("arrayOfNums")[2].get("number"))
print(jsonObj.get("arrayOfFruits")[2].get("fruit"))
Python_Spider_Tutorial_07.ipynb
yttty/python3-scraper-tutorial
gpl-3.0
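As a quick self-contained check of the parsing shown above, the document also survives a round-trip through json.dumps and json.loads:

```python
import json

jsonString = '{"arrayOfNums":[{"number":0},{"number":1},{"number":2}],"arrayOfFruits":[{"fruit":"apple"},{"fruit":"banana"},{"fruit":"pear"}]}'
jsonObj = json.loads(jsonString)

# Nested access works exactly as in the cells above
total = jsonObj["arrayOfNums"][1]["number"] + jsonObj["arrayOfNums"][2]["number"]
print(total)  # 3
print(jsonObj["arrayOfFruits"][2]["fruit"])  # pear

# Serializing back produces an equivalent document
assert json.loads(json.dumps(jsonObj)) == jsonObj
```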
Get quantiles from the input raster data. It is necessary to load the original raster in order to calculate its quantiles.
def raster2array(rasterfn):
    raster = gdal.Open(rasterfn)
    band = raster.GetRasterBand(1)
    return band.ReadAsArray()

g_array = raster2array('global_cumul_impact_2013_all_layers.tif')
g_array_f = g_array.flatten()
print('The total number of non-zero values in the raw raster dataset:', g_array_f.size - (g_arr...
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
The number of non-zero values is notably different from ESRI's figure of 414,347,791, roughly 300,000 fewer non-zero cells than what is calculated here. This suggests ESRI may be using a larger tolerance, i.e. a higher cut-off for what is considered small enough to be regarded as zero. Now, get the quantiles... this thr...
# the percentile function applied to the sliced array, i.e., those with values greater than 0
quantiles = [np.percentile(g_array_f[~(g_array_f == 0)], quantile) for quantile in [1, 3, 5, 10]]
quantiles
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
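The masked-percentile pattern used above can be exercised on a small synthetic array (the values here are made up for illustration; zeros stand in for land/no-data cells):

```python
import numpy as np

# Synthetic stand-in for the flattened raster
g = np.array([0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)

# Percentiles computed only over the non-zero cells, as in the cell above
nonzero = g[g != 0]
quantiles = [np.percentile(nonzero, q) for q in [1, 3, 5, 10]]
print(quantiles)
```

With linear interpolation over the ten non-zero values 1..10, the 10th percentile lands at 1.9, confirming that the zeros are excluded from the calculation.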
Analyse intersection result. The hypothetical biogeographical classification of the marine environment within EEZs is described as a combination of MEOW (Marine Ecoregions of the World), its visual representation (hereafter MEOW visual) up to 200 nautical miles, and the world's pelagic provinces. The spatial dat...
# calculate cell size in square kilometres
cell_size = 934.478 * 934.478 / 1000000
print(cell_size)

# the OBJECTID - ras_val table
input_data = pd.read_csv('result.csv')
# the attribute table containing information about province etc
input_attr = pd.read_csv('attr.csv')

print('\n'.join(['Threshold cut-off value: ' + str(threshold) f...
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
The next step is to apply the categorisations in the input_attr table. Replace the result_10 table with the corresponding result table if another threshold is used.
# join base to the attribute
attr_merge = pd.merge(input_attr, result_count, on='OBJECTID')
# join result to the above table
attr_merge_10 = pd.merge(attr_merge, result_10, how='left', on='OBJECTID', suffixes=('_base', '_result'))
# fill ras_val_result's NaN with 0, province and realms with None. This should h...
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
Further aggregation could be applied here. Visualisation and exploration of results
import seaborn as sns

g = sns.FacetGrid(result_agg_10, col="category")
g.map(plt.hist, 'per_ltt', bins=50, log=True)

# MEOW province (200m and 200 nautical miles combined)
result_agg_10_province = attr_merge_10.groupby(['PROVINCE']).apply(apply_func).reset_index()
sns.distplot(result_agg_10_province.per_ltt)
result_agg_10...
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
New threshold for within EEZ
input_data.OBJECTID.unique().size

# no zeros in the result data
input_data.ras_val.size
input_data.ras_val.min()

# percentage of EEZ water in relation to the entire ocean
input_data.ras_val.size / g_array_f[~(g_array_f == 0)].size

# all input_data are non-zero (zero indicates land and no_data)
input_data[~(input_data.r...
old_with_antarctica/wilderness_analysis_with_antarctica.ipynb
Yichuans/wilderness-wh
gpl-3.0
Introduction to Variables <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/guide/variable"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/bl...
import tensorflow as tf

# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Creating a variable. To create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)

# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
A variable looks and acts like a tensor, and in fact is a data structure backed by a tf.Tensor. Like tensors, it has a dtype and a shape, and can be exported to NumPy.
print("Shape: ", my_variable.shape)
print("DType: ", my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Most tensor operations work on variables as expected, although a variable cannot be reshaped.
print("A variable:", my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))

# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, [1, 4]))
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])

# Not allowed as it resizes the variable:
try:
    a.assign([1.0, 2.0, 3.0])
except Exception as e:
    print(f"{type(e).__name__}: {e}")
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
If you use a variable like a tensor in operations, you will usually operate on the backing tensor. Creating a new variable from an existing variable duplicates the backing tensor; two variables never share the same memory.
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])

# a and b are different
print(a.numpy())
print(b.numpy())

# There are other versions of assign
print(a.assign_add([2, 3]).numpy())  # [7. 9.]
print(a.assign_sub([7, 9]).numpy())  # [0. 0.]
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Lifecycles, naming, and watching. In Python-based TensorFlow, a tf.Variable instance has the same lifecycle as any other Python object: when there are no references to a variable, it is automatically deallocated. Variables can also be named, which helps you track and debug them. Two variables can be given the same name.
# Create a and b; they will have the same name but will be backed by
# different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having ...
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Variable names are preserved when saving and loading models. By default, variables in models acquire unique variable names automatically, so you don't need to assign them yourself unless you have a reason to. Although variables are important for differentiation, some variables do not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that does not need gradients is a training step counter.
step_counter = tf.Variable(1, trainable=False)
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Placing variables and tensors. For better performance, TensorFlow attempts to place tensors and variables on the fastest device compatible with their dtype, which means most variables are placed on a GPU if one is available. However, this behavior can be overridden. In this snippet, a float tensor and a variable are placed on the CPU even if a GPU is available. By turning on device placement logging (see Setup), you can see where variables are placed. Note: although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation. If you run this notebook on a GPU ...
with tf.device('CPU:0'):
    # Create some tensors
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c)
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
It's possible to set the location of a variable or tensor on one device and do the computation on another device. This introduces delay, as the data needs to be copied between the devices. You might do this, however, if you have multiple GPU workers but only want one copy of the variables.
with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

print(k)
site/ja/guide/variable.ipynb
tensorflow/docs-l10n
apache-2.0
Latitude, Longitude of Map
#center = [48.355, -124.642]  # Neah Bay
center = [41.75, -124.19]     # Crescent City
zoom = 11

m = Map(center=center, zoom=zoom)
m
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
Map Widget and drawing tool Definitions
zoom = 13
c = ipywidgets.Box()

topo_background = True  # Use topo as background rather than map?
if topo_background:
    m = Map(width='1000px', height='600px', center=center, zoom=zoom,
            default_tiles=TileLayer(url=u'http://otile1.mqcdn.com/tiles/1.0.0/sat/{z}/{x}/{y}.jpg'))
else:
    m = Map(width='1000px'...
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
On the map below, select the polygon tool or the rectangle tool and start clicking points. You can add several of each if you want. Notes: - If you delete one, you must then click 'save'. - The edit function doesn't save the edited version (needs fixing).
clear_m() display(m)
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
Now you can print the coordinates of the vertices: Note that 5 digits gives about 1 meter precision.
for r in polys:
    print("\nPolygon vertices:")
    for c in r:
        print('%10.5f, %10.5f' % c)

for r in rects:
    print("\nRectangle vertices:")
    for c in r:
        print('%10.5f, %10.5f' % c)
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
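The "5 digits gives about 1 meter" claim can be sanity-checked: one degree of latitude is roughly 111 km, so the fifth decimal place corresponds to about a meter (a rough figure; the longitude spacing shrinks with cos(latitude)):

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate; varies slightly with latitude

# Size of the fifth decimal place of a latitude, in meters
step = 1e-5 * METERS_PER_DEG_LAT
print(round(step, 2))  # ~1.11 m

# Longitude spacing shrinks with latitude, e.g. at Crescent City (41.75 N)
lon_step = step * math.cos(math.radians(41.75))
print(round(lon_step, 2))
```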
Re-execute the cell above the map to clear it and specify a new set of polygons. Other ways to print the vertices: depending on what you want to do with the output, you might want to print in a different format. Here are some examples... Rectangle as a tuple x1,y1,x2,y2 of corners: This format is used in specifying a GeoC...
for r in rects:
    print("\nCoordinates of lower left and upper right corner of rectangle:")
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    print("x1, y1, x2, y2 = %10.5f, %10.5f, %10.5f, %10.5f" % (x1, y1, x2, y2))
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
As tuples x and y: This format is used in specifying a GeoClaw fgmax rectangle or quadrilateral.
for r in rects:
    print("\nCoordinates of lower left and upper right corner of rectangle:")
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    print("x = %10.5f, %10.5f" % (x1, x2))
    print("y = %10.5f, %10.5f" % (y1, y2))

for r in polys:
    print("\nCoordinates of distinct vertices of pol...
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
To create kml files: You can create a set of kml files, one for each rectangle or polygon, with the code below. poly2kml was recently added to geoclaw.kmltools.
from clawpack.geoclaw import kmltools
reload(kmltools)

for i, r in enumerate(rects):
    x1 = r[0][0]
    x2 = r[2][0]
    y1 = r[0][1]
    y2 = r[2][1]
    name = "rect%i" % i
    kmltools.box2kml((x1, x2, y1, y2), name=name, verbose=True)

for i, r in enumerate(polys):
    x = [xy[0] for xy in r]
    y = [xy[1] for xy in ...
misc/ipyleaflet_polygon_selector.ipynb
rjleveque/binder_experiments
bsd-2-clause
robobrowser
from robobrowser import RoboBrowser

def post_to_yaml_loader(url, unglue_url="https://unglue.it/api/loader/yaml"):
    browser = RoboBrowser(history=True)
    browser.open(unglue_url)
    form = browser.get_forms()[0]
    form['repo_url'] = url
    # weird I have to manually set referer
    browser.session.he...
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
opds
from lxml import etree
import requests

opds_url = "https://unglue.it/api/opds/"
doc = etree.fromstring(requests.get(opds_url).content)
doc
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
Failed attempt with requests to submit to yaml loader
import requests
from lxml import etree
from lxml.cssselect import CSSSelector

unglue_url = "https://unglue.it/api/loader/yaml"

r = requests.get(unglue_url)
doc = etree.HTML(r.content)
sel = CSSSelector('input[name="csrfmiddlewaretoken"]')
csrftoken = sel(doc)[0].attrib.get('value')
csrftoken

r = requests.post(ungl...
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
travis webhooks For https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115 -- 2 webhooks were sent to http://requestb.in/wrr6l3wr?inspect: Travis webhook #1 for https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115 second webhook for https://travis-ci.org/GITenb...
import requests

raw_url_1 = (
    "https://gist.githubusercontent.com/rdhyee/7f33050732a09dfa93f3/raw/8abf5661911e7aedf434d464dd1a28b3d24d6f83/travis_webhook_1.json"
)
raw_url_2 = (
    "https://gist.githubusercontent.com/rdhyee/8dc04b8fe52a9fefe3c2/raw/8f9968f481df3f4d4ecd44624c2dc1b0a8e02a17/travis_webhook_2.json"
)
r1 ...
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
travis webhook authentication I think the documentation is incorrect. Instead of 'username/repository', just use the Travis-Repo-Slug header, which, I think, is just the full name of the repo, e.g., GITenberg/Adventures-of-Huckleberry-Finn_76. When Travis CI makes the POST request, a header named Authorization is inclu...
sent_token = "6fba7d2102f66b16139a54e1b434471f6fb64d20c0787ec773e92a5155fad4a9"

from github_settings import TRAVIS_TOKEN, username
from hashlib import sha256

# sha256 requires bytes in Python 3, so encode the concatenated string
sha256(('GITenberg/Adventures-of-Huckleberry-Finn_76' + TRAVIS_TOKEN).encode('utf-8')).hexdigest()
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
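Under the scheme described above, verifying an incoming webhook amounts to recomputing the digest from the Travis-Repo-Slug header plus the user's token and comparing it with the Authorization header. A minimal sketch (the token below is a dummy value, and the helper names are my own):

```python
from hashlib import sha256
import hmac

def travis_digest(repo_slug, travis_token):
    # sha256 over the concatenation of repo slug and token, hex-encoded
    return sha256((repo_slug + travis_token).encode('utf-8')).hexdigest()

def is_authorized(auth_header, repo_slug, travis_token):
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(auth_header, travis_digest(repo_slug, travis_token))

# Dummy values for illustration only
token = "not-a-real-token"
slug = "GITenberg/Adventures-of-Huckleberry-Finn_76"
digest = travis_digest(slug, token)
print(is_authorized(digest, slug, token))  # True
```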
testing my webhook implementation
import requests

url = "http://127.0.0.1:8000/api/travisci/webhook"

test_headers_url = \
    "https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a17964e0d43f6a35bbf03efe3f8a7d752/webhook_headers.txt"
test_body_url = \
    "https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a...
webhooks_for_GITenberg.ipynb
rdhyee/nypl50
apache-2.0
Bias-Variance using bootstrap
# initiate stuff
np.random.seed(2018)
err = []
bi = []
vari = []
n = 1000
n_boostraps = 1000
noise = 0.1

x = np.sort(np.random.uniform(0, 1, n)).reshape(-1, 1)
y = true_fun(x).reshape(-1, 1) + np.random.randn(len(x)).reshape(-1, 1) * noise
y_no_noise = true_fun(x)
degrees = np.arange(1, 16)

for degree in degrees:
    x_train, x_...
doc/Programs/VariousCodes/Reduced_dimensionality.ipynb
CompPhysics/MachineLearning
cc0-1.0
Bias-Variance using kFold CV
# initiate stuff again in case data was changed earlier
np.random.seed(2018)
noise = 0.1
N = 1000
k = 5

x = np.sort(np.random.uniform(0, 1, N)).reshape(-1, 1)
y = true_fun(x).reshape(-1, 1) + np.random.randn(len(x)).reshape(-1, 1) * noise
y_no_noise = true_fun(x)
degrees = np.arange(1, 16)
kfold = KFold(n_splits=k, shuffle=True, ...
doc/Programs/VariousCodes/Reduced_dimensionality.ipynb
CompPhysics/MachineLearning
cc0-1.0
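Since the bootstrap cell above is truncated, here is a compact, self-contained sketch of the bias-variance decomposition it sets up, using plain numpy polynomial fits (the model and the sine true function are stand-ins, not necessarily what the full notebook uses):

```python
import numpy as np

rng = np.random.default_rng(2018)
n, n_bootstraps, noise, degree = 200, 100, 0.1, 3

true_fun = lambda x: np.sin(2 * np.pi * x)
x = np.sort(rng.uniform(0, 1, n))
y = true_fun(x) + rng.normal(0, noise, n)
x_test = np.linspace(0.05, 0.95, 50)

# Fit one polynomial per bootstrap resample, predict on a common grid
preds = np.empty((n_bootstraps, x_test.size))
for b in range(n_bootstraps):
    idx = rng.integers(0, n, n)          # resample with replacement
    coeffs = np.polyfit(x[idx], y[idx], degree)
    preds[b] = np.polyval(coeffs, x_test)

# Bias^2: squared gap between the mean prediction and the true function
# Variance: spread of the bootstrap predictions around their mean
mean_pred = preds.mean(axis=0)
bias2 = np.mean((true_fun(x_test) - mean_pred) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 ~ {bias2:.4f}, variance ~ {variance:.4f}")
```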
Creating datasets. Now we create our own dataset (DataFrame). For this example we use several people with their first and last names, and put them, together with their age and the result of a test, into a table.
raw_data = {'first_name': ['Jason', 'Molly', 'Tina'],
            'last_name': ['Miller', 'Jacobsen', '.'],
            'age': ['42', '23', '79'],
            'score': ['104', '108', '120']}
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
Now we need to label the columns of the table. We achieve this with the following command, creating a list of strings given in the order of the columns.
df = pd.DataFrame(raw_data, columns=['first_name', 'last_name', 'age', 'score'])
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
We now need to convert this DataFrame into a CSV document and save it. For this we use the df.to_csv command, which saves the table into a document and exports it to the given file path.
df.to_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv')
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
Importing CSV files. To make this CSV file usable again, we import it with the pd.read_csv command. This adds another column containing the index numbers of the old DataFrame.
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv')
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
If we want to drop the header, this is easily done by appending the argument header=None. The top row is then replaced with numeric labels for the columns, which allow each cell to be identified unambiguously.
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv', header=None)
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
If we want to use our own column labels, we need to modify the command further. For this we use the argument names=['stringa','stringb'] to pass the strings as names. The header argument can then be omitted, since we are using a header, which ...
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv',
                 names=['UID', 'First Name', 'Last Name', 'Age', 'Score'])
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
We may now want to replace the numbers inserted in the first column of the table and use a different index. This is possible with the index column; the corresponding argument is index_col='index'.
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv', index_col='UID',
                 names=['UID', 'First Name', 'Last Name', 'Age', 'Score'])
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
We can now see that the original index column has been replaced by the UID. Several indexes can also be used; for this we pass a list of strings instead of a single string. It is important that we use our self-assigned names, not the original column names, wh...
df = pd.read_csv('/Users/kay/Downloads/pandas-cookbook-master-2/data/test.csv', index_col=['First Name', 'Last Name'],
                 names=['UID', 'First Name', 'Last Name', 'Age', 'Score'])
df
Journal/.ipynb_checkpoints/Pandas-checkpoint.ipynb
Ironlors/SmartIntersection-Ger
apache-2.0
Clearly there's a linear relationship between the amount of tip and the total bill. Let's try to find the trend line using linear regression in TensorFlow. In other words, let's try to build a predictor for tip, given total_bill. Put differently, let's try to find the slope (= gradient in >2D) of the trend line.
# When we're solving a linear regression problem using ML, we're basically trying to find the slope/gradient of the
# equation `y = W*X + b`.
# Or put differently: given a bunch of correlated datapoints for `y` and `X`,
# try to find the right values for `W` and `b` that minimize a given loss function (e.g. mean squared err...
machine-learning/tensorflow/tensorflow_linear_regression.ipynb
jorisroovers/machinelearning-playground
apache-2.0
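Since the TensorFlow cell above is truncated, here is a self-contained numpy sketch of the same idea: plain gradient descent on `y = W*X + b` with a mean-squared-error loss. The data is synthetic, not the tips dataset, and the learning rate and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)
true_W, true_b = 2.0, 1.0
y = true_W * X + true_b + rng.normal(0, 0.1, X.size)

W, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    err = W * X + b - y
    # Gradients of the mean squared error with respect to W and b
    W -= lr * 2 * np.mean(err * X)
    b -= lr * 2 * np.mean(err)

print(round(W, 2), round(b, 2))  # close to 2.0 and 1.0
```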
Let's plot the line defined by W and b:
fit['total_bill'] = tips['total_bill']
fit['res'] = w_val * tips['total_bill'] + b_val

plt = sns.relplot(x="total_bill", y="tip", data=tips)
plt = sns.lineplot(x='total_bill', y="res", data=fit, color="darkred")
machine-learning/tensorflow/tensorflow_linear_regression.ipynb
jorisroovers/machinelearning-playground
apache-2.0
Introduction to the Keras Tuner <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/keras/keras_tuner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/te...
import tensorflow as tf
from tensorflow import keras

import IPython
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
Install and import the Keras Tuner.
!pip install -U keras-tuner
import kerastuner as kt
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
Download and prepare the dataset. In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset. Load the data.
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()

# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
Define the model. When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel. You can define a hypermodel through two approaches: by using a model builder function, or by subclassing the HyperModel class of the Keras Tuner API. You can also use two pre-defined HyperModel classes, HyperXception and HyperResNet, for computer vision applications. In this tutorial, you use a model builder function to define the image classification model. The mo...
def model_builder(hp):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))

    # Tune the number of units in the first Dense layer
    # Choose an optimal value between 32-512
    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
    model.add(keras.layers.Dense(units=...
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
Instantiate the tuner and perform hypertuning. Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available: RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you use the Hyperband tuner. To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).
tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='intro_to_kt')
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket: the algorithm trains a large number of models for a few epochs and carries only the top-performing half forward to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log<sub><code>factor</code></sub>(max_epochs) and rounding it to the nearest integer. Before running the hyperparameter search, define a callback to clear the training ou...
class ClearTrainingOutput(tf.keras.callbacks.Callback):
    def on_train_end(*args, **kwargs):
        IPython.display.clear_output(wait=True)
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
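The bracket-size formula quoted above can be checked directly for the settings used in this tutorial (max_epochs=10, factor=3):

```python
import math

max_epochs, factor = 10, 3

# 1 + log_factor(max_epochs), rounded to the nearest integer
models_per_bracket = round(1 + math.log(max_epochs) / math.log(factor))
print(models_per_bracket)  # 3
```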
Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.model.fit, in addition to the callback above.
tuner.search(img_train, label_train, epochs=10, validation_data=(img_test, label_test), callbacks=[ClearTrainingOutput()])

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first den...
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
To finish this tutorial, retrain the model with the optimal hyperparameters from the search.
# Build the model with the optimal hyperparameters and train it on the data
model = tuner.hypermodel.build(best_hps)
model.fit(img_train, label_train, epochs=10, validation_data=(img_test, label_test))
site/ko/tutorials/keras/keras_tuner.ipynb
tensorflow/docs-l10n
apache-2.0
How to plot topomaps the way EEGLAB does. If you have previous EEGLAB experience, you may have noticed that topomaps (topoplots) generated using MNE-Python look a little different from those created in EEGLAB. If you prefer the EEGLAB style, this example will show you how to calculate the head sphere origin and radius to obta...
# Authors: Mikołaj Magnuski <mmagnuski@swps.edu.pl>
#
# License: BSD-3-Clause

import numpy as np
from matplotlib import pyplot as plt

import mne

print(__doc__)
stable/_downloads/b11aa17d34c637b8e89ed78c8bdf13e4/eeglab_head_sphere.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Calculate sphere origin and radius. EEGLAB plots the head outline at the level where the head circumference is measured in the 10-20 system (a line going through the Fpz, T8/T4, Oz and T7/T3 channels). MNE-Python places the head outline lower on the z dimension, at the level of the anatomical landmarks :term:LPA, RPA, and NAS &...
# first we obtain the 3d positions of selected channels
chs = ['Oz', 'Fpz', 'T7', 'T8']

# when the montage is set, it is transformed to the "head" coordinate frame
# that MNE uses internally, therefore we need to use
# ``fake_evoked.get_montage()`` to get these properly transformed coordinates
montage_head = fake_evoke...
stable/_downloads/b11aa17d34c637b8e89ed78c8bdf13e4/eeglab_head_sphere.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
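The cell above is truncated; fitting a sphere to a handful of 3-D channel positions can be done with a small algebraic least-squares fit. This is a generic sketch under my own assumptions, not necessarily the method the full MNE example uses:

```python
import numpy as np

def fit_sphere(points):
    # Algebraic least-squares sphere fit: ||p - c||^2 = r^2 rearranges to
    # 2*p.c + (r^2 - ||c||^2) = ||p||^2, which is linear in c and a constant.
    A = np.c_[2 * points, np.ones(len(points))]
    rhs = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic check: points on a sphere of radius 0.09 m centred at (0, 0, 0.04)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.0, 0.0, 0.04]) + 0.09 * dirs

center, radius = fit_sphere(pts)
print(np.round(center, 3), round(radius, 3))
```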
Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
live = preg[preg.outcome == 1]
live_cdf = thinkstats2.Cdf(live.prglngth)
first_cdf = thinkstats2.Cdf(live[live.birthord == 1].totalwgt_lb, label='first')
other_cdf = thinkstats2.Cdf(live[live.birthord > 1].totalwgt_lb, label='other')
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Display the CDF.
thinkplot.PrePlot(2)
thinkplot.Cdfs([first_cdf, other_cdf])
thinkplot.Show(xlabel='weight (pounds)', ylabel='CDF')
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Find out how much you weighed at birth, if you can, and compute CDF(x).
first_cdf.Percentile(50)
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children. Compute the percentile rank of your birthweight
first_cdf.Percentile(50)
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Compute the median birth weight by looking up the value associated with p=0.5.
first_cdf.Percentile(25), first_cdf.Percentile(75)
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75. Make a random selection from <tt>cdf</tt>.
first_cdf.Random()
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Draw a random sample from <tt>cdf</tt>.
first_cdf.Sample(10)
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks. Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
%time thinkplot.Pmf(thinkstats2.Pmf([random.random() for i in xrange(1000)]))
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Since the PMF doesn't work very well, try plotting the CDF instead.
thinkplot.Cdf(thinkstats2.Cdf([random.random() for i in xrange(10000)]))
resolved/chap04ex.ipynb
Nathx/think_stats
gpl-3.0
Drivers <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/agents/tutorials/4_drivers_tutorial"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.co...
!pip install tf-agents

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.policies import random_py_policy
from tf_agent...
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
tensorflow/docs-l10n
apache-2.0
Python drivers. The PyDriver class takes a Python environment, a Python policy, and a list of observers to update at each time step. The main method is run(), which steps the environment with actions from the policy until at least one of the following termination criteria is met: the number of steps reaches max_steps, or the number of episodes reaches max_episodes. The implementation is roughly as follows:

```python
class PyDriver(object):

    def __init__(self, env, policy, observers, max_steps=1, max_episodes=1):
        self._env = env
        self._policy = policy
        self._observers = ...
```
env = suite_gym.load('CartPole-v0')
policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(),
                                          action_spec=env.action_spec())
replay_buffer = []
metric = py_metrics.AverageReturnMetric()
observers = [replay_buffer.append, metric]
driver = py_driver.PyDriver(
    ...
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
tensorflow/docs-l10n
apache-2.0
TensorFlow drivers. TensorFlow also has drivers that are functionally similar to the Python drivers, but use TF environments, TF policies, TF observers, and so on. We currently have two TensorFlow drivers: DynamicStepDriver, which terminates after a given number of (valid) environment steps, and DynamicEpisodeDriver, which terminates after a given number of episodes. Let's look at an example of DynamicEpisodeDriver in action.
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)

tf_policy = random_tf_policy.RandomTFPolicy(action_spec=tf_env.action_spec(),
                                            time_step_spec=tf_env.time_step_spec())

num_episodes = tf_metrics.NumberOfEpisodes()
env_steps = tf_metrics.Env...
site/zh-cn/agents/tutorials/4_drivers_tutorial.ipynb
tensorflow/docs-l10n
apache-2.0
If you enter the source code above into a .py file or a jupyter notebook and run it with Python, a "gaussian_elimination.ipynb" file will be created. You can run it with jupyter notebook, or move to the folder containing the file in a console (cmd) and run it as shown below. bash jupyter notebook gaussian_elimination.ipynb gaussian_elimination.py Code structure: unlike the previous Lab, this Lab is organized into 3 Parts, and each Part has its own detailed pro...
def vector_scalar_multiplication(scalar_value, row_vector):
    return None

def vector_add_operation(vector_1, vector_2):
    return None

def vector_subtract_operation(vector_1, vector_2):
    return None

vector_a = [3, 2, 3]
vector_b = [5, 5, 4]
vector_c = [1, 2, 3]

print(vector_add_operation(vector_a, vector_c))...
assignment/ps2/gaussian_elimination.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Part #2 - gauss elimination module (point 3) Function name | Description --------------------|-------------------------- get_max_row_position | Returns the row number whose element in the given column of the matrix has the largest value (// This is confusing! I think "row index" is more accurate than "row number"); the returned row number must be greater than the row number currently being processed. divided_by_pivot_v...
def get_max_row_position(target_matrix, current_row_position, current_column_position):
    return None

matrix_X = [[12, 7, 3], [4, 5, 6], [7, 8, 9]]
matrix_Y = [[5, 8, 1, 2], [6, 7, 3, 0], [4, 5, 9, 1]]

print(get_max_row_position(matrix_X, 0, 0))
# Expected value : 0
# 12 > 4 > 7
matrix_X[0].index(12)

print(get_max_row_positio...
assignment/ps2/gaussian_elimination.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Part #3 - gauss elimination solution (point 3) The final step is to combine the modules written above into a single gauss elimination solution. The gauss elimination solution follows these steps: count the total number of rows of the input matrix; advance the pivot row one line at a time, starting from row 0 of the matrix; if the largest value in the current pivot column is located in another row, swap the two rows; once the pivot row and pivot column are determined, ...
def gauss_elimination_solution(target_matrix):
    return None

target_matrix = [[1.0, 1.0, -1.0, 8.0],
                 [-3.0, -1.0, 2.0, -11.0],
                 [-2.0, 1.0, 2.0, -3.0]]

print(gauss_elimination_solution(target_matrix))
# Expected value : [[1.0, 0.0, 0.0, -0.666666666666667], [0.0, 1.0, 0.0, 4.333333333333333], [0.0, 0.0, 1.0, -4.333333333333...
assignment/ps2/gaussian_elimination.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
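The expected values in the cell above can be cross-checked with numpy by splitting the augmented matrix [A | b] into A and b. This is only a verification aid, not the from-scratch implementation the assignment asks for:

```python
import numpy as np

# Same system as the assignment's target_matrix: [A | b]
A = np.array([[1.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)
print(x)  # approximately [-0.667, 4.333, -4.333]
```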
Submitting your results. If you submit the homework without problems, all the results below will be marked PASS.
import gachon_autograder_client as g_autograder

THE_TEMLABIO_ID = "#YOUR_ID"
PASSWORD = "#YOUR_PASSWORD"
ASSIGNMENT_FILE_NAME = "gaussian_elimination.ipynb"

# pass the file name defined above (the original passed an undefined ASSIGNMENT_NAME)
g_autograder.submit_assignment(THE_TEMLABIO_ID, PASSWORD, ASSIGNMENT_FILE_NAME)
assignment/ps2/gaussian_elimination.ipynb
TeamLab/Gachon_CS50_OR_KMOOC
mit
Loading the dataset 0750-0805 Description of the dataset is at: D:/zzzLola/PhD/DataSet/US101/US101_time_series/US-101-Main-Data/vehicle-trajectory-data/trajectory-data-dictionary.htm
c_dataset = ['vID','fID', 'tF', 'Time', 'lX', 'lY', 'gX', 'gY', 'vLen', 'vWid', 'vType','vVel', 'vAcc', 'vLane', 'vPrec', 'vFoll', 'spac','headway' ] dataset = pd.read_table('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\trajectories-0750am-0805am.txt', sep=r"\s+", header=None, names=c_dataset) ...
vehicles/VehiclesTimeCycles.ipynb
lalonica/PhD
gpl-3.0
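`sep=r"\s+"` tells `pd.read_table` to split each line on any run of whitespace. The same parsing, on a tiny made-up sample (the values below are hypothetical, not from the US-101 file), can be written with the standard `re` module:

```python
import re

cols = ['vID', 'fID', 'lX', 'lY']
sample = "1 10   6.45 100.2\n1 11  6.50 101.0\n"  # hypothetical two-row sample
rows = [dict(zip(cols, re.split(r"\s+", line.strip())))
        for line in sample.strip().split("\n")]
print(len(rows), rows[0]['lX'])  # 2 6.45
```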
15 min = 900 s = 9000 frames (at 100 ms per frame) // 9529 frames = 952.9 s = 15 min 52.9 s. The actual temporal length of this dataset is 15 min 52.9 s. It looks like the vehicles' timestamps match, which makes sense given the way the data is obtained: there is no GPS on the vehicles; instead the positions come from synchronized cameras localized at different build...
#Converting to meters dataset['lX'] = dataset.lX * 0.3048 dataset['lY'] = dataset.lY * 0.3048 dataset['gX'] = dataset.gX * 0.3048 dataset['gY'] = dataset.gY * 0.3048 dataset['vLen'] = dataset.vLen * 0.3048 dataset['vWid'] = dataset.vWid * 0.3048 dataset['spac'] = dataset.spac * 0.3048 dataset['vVel'] = dataset.vVel * 0...
vehicles/VehiclesTimeCycles.ipynb
lalonica/PhD
gpl-3.0
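The frame-to-duration arithmetic above can be checked directly, assuming the dataset's 10 Hz sampling (one frame every 100 ms, per the NGSIM documentation referenced earlier):

```python
FRAME_MS = 100          # assumed: one frame every 100 ms (10 Hz sampling)
n_frames = 9529         # frame count quoted in the text
seconds = n_frames * FRAME_MS / 1000
minutes = int(seconds // 60)
remainder = seconds - 60 * minutes
print(minutes, round(remainder, 1))  # 15 52.9
```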
For every time stamp, check how many vehicles are accelerating when the one behind is also accelerating (or not): - vehicle_acceleration vs preceding_vehicle_acceleration - vehicle_acceleration vs follower_vehicle_acceleration When is a vehicle changing lanes?
dataset['tF'].describe() des_all = dataset.describe() des_all des_all.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\description_allDataset_160502.csv', sep='\t', encoding='utf-8') dataset.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters_160502.txt', sep='\t', encoding='utf-8',index=False) #tabl...
vehicles/VehiclesTimeCycles.ipynb
lalonica/PhD
gpl-3.0
DIFFERENT, again! These two datasets are anticorrelated. To see what this means, we can derive the correlation coefficients for the two datasets independently:
print(np.corrcoef(X, Y1)[0, 1]) print(np.corrcoef(X, Y2)[0, 1])
lectures/L23.ipynb
eds-uga/csci1360-fa16
mit
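`np.corrcoef(X, Y)[0, 1]` reports Pearson's r, which is +1 for a perfect increasing linear relationship and -1 for a perfect decreasing one (anticorrelation). The same quantity in pure Python, on made-up data:

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient: the off-diagonal entry of np.corrcoef
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

X = list(range(10))
print(pearson_r(X, [2 * x + 1 for x in X]))    # perfectly correlated
print(pearson_r(X, [-3 * x + 5 for x in X]))   # perfectly anticorrelated
```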
DDM vs Signal Detection Theory Comparing DDM to Signal Detection - does d' correlate with DDM parameters?
def get_d_primes(dataset, stim1, stim2, include_id=False): d_primes = dict() subject_ids = set(dataset.subj_idx) for subject_id in subject_ids: stim1_data = dataset.loc[ dataset['subj_idx'] == subject_id].loc[ dataset['stim'] == str(stim1)] stim1_trials = len(stim1_da...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
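For reference, the signal-detection quantity behind `get_d_primes` is d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. A stdlib-only sketch (the exact hit/false-alarm definitions used in the notebook's function are assumptions here):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(H) - z(F): separation of the two internal distributions, in SD units
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(0.8, 0.2), 3))  # 1.683
```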
d' distributions Pilot
plt.hist(get_d_primes(pilot_data, 'SS', 'US')) plt.hist(get_d_primes(pilot_data, 'SS', 'CS')) plt.hist(get_d_primes(pilot_data, 'SS', 'CP'))
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
Controls
plt.hist(get_d_primes(controls_data, 'SS', 'US')) plt.hist(get_d_primes(controls_data, 'SS', 'CS')) plt.hist(get_d_primes(controls_data, 'SS', 'CP'))
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
Patients
plt.hist(get_d_primes(patients_data, 'SS', 'US')) plt.hist(get_d_primes(patients_data, 'SS', 'CS')) plt.hist(list(filter(None, get_d_primes(patients_data, 'SS', 'CP'))))
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
Drift rate / d'
def match_dprime_to_driftrate(dataset, model, stim1, stim2): subject_ids = set(dataset.subj_idx) d_primes = get_d_primes(dataset, stim1, stim2, include_id=True) for subject_id in subject_ids: try: d_prime = d_primes[subject_id] v_stim1 = model.values['v_subj(' + stim1 + ').' ...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
SS vs US
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'US')]) x = dprime_driftrate[:,0] y = dprime_driftrate[:,1] plt.scatter(x, y) scipy.stats.spearmanr(x, y) dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'US')]) x = dprime_driftrate[:,0...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
SS vs CS
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CS')]) x = dprime_driftrate[:,0] y = dprime_driftrate[:,1] plt.scatter(x, y) scipy.stats.spearmanr(x, y) dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CS')]) x = dprime_driftrate[:,0...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
SS vs CP
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CP')]) x = dprime_driftrate[:,0] y = dprime_driftrate[:,1] plt.scatter(x, y) scipy.stats.spearmanr(x, y) dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CP')]) x = dprime_driftrate[:,0...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
Low d' comparisons Compare ddm drift rate only with low d' Ratcliff, R. (2014). Measuring psychometric functions with the diffusion model. Journal of Experimental Psychology: Human Perception and Performance, 40(2), 870-888. http://dx.doi.org/10.1037/a0034954 Patients are the best candidates for this (SSvsCS, SSvsCP)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CS')]) x = dprime_driftrate[:,0] y = dprime_driftrate[:,1] plt.scatter(x, y) scipy.stats.spearmanr(x, y) dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data...
notebooks/model_comparisons.ipynb
CPernet/LanguageDecision
gpl-3.0
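`scipy.stats.spearmanr`, used throughout these cells, is Pearson's correlation computed on the ranks of the data. A minimal pure-Python version (no tie correction), for intuition only:

```python
def spearman_rho(xs, ys):
    # Spearman's rho = Pearson correlation of the ranks (ties not handled here)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))   # monotone increasing
print(spearman_rho([1, 2, 3, 4], [5, 4, 3, 2]))       # monotone decreasing
```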
Get the data from the FITS file. Here we loop over the header keywords to get the correct columns for the X/Y coordinates. We also parse the FITS header to get the data we need to project the X/Y values (which are integers from 0-->1000) into RA/dec coordinates.
#infile = '/Users/bwgref/science/solar/july_2016/data/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt' infile = '/Users/bwgref/science/solar/data/Sol_16208/20201002001/event_cl/nu20201002001B06_chu3_N_cl.evt' hdulist = fits.open(infile) evtdata = hdulist[1].data hdr = hdulist[1].header hdulist.close()
notebooks/Convert_Example.ipynb
NuSTAR/nustar_pysolar
mit
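Projecting integer X/Y detector coordinates into RA/dec typically uses the linear WCS keywords from the FITS header (CRPIX, CRVAL, CDELT). A minimal sketch of that transform, with entirely hypothetical header values and no rotation term (the real notebook parses these from `hdr`):

```python
def pix_to_world(x, y, hdr):
    # linear WCS: world = CRVAL + (pixel - CRPIX) * CDELT  (rotation ignored)
    ra = hdr['CRVAL1'] + (x - hdr['CRPIX1']) * hdr['CDELT1']
    dec = hdr['CRVAL2'] + (y - hdr['CRPIX2']) * hdr['CDELT2']
    return ra, dec

hdr = {'CRPIX1': 500.0, 'CRVAL1': 180.0, 'CDELT1': 0.0007,
       'CRPIX2': 500.0, 'CRVAL2': 0.0, 'CDELT2': 0.0007}  # hypothetical values
print(pix_to_world(500, 500, hdr))  # (180.0, 0.0)
```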
Rotate to solar coordinates: Variation on what we did to set up the pointing. The important option here is how frequently one wants to recompute the position of the Sun; the default is once every 5 seconds. Note that this can take a while to run (~minutes),...
reload(convert) (newdata, newhdr) = convert.to_solar(evtdata, hdr)
notebooks/Convert_Example.ipynb
NuSTAR/nustar_pysolar
mit
Alternatively, use the convenience wrapper which automatically adds on the _sunpos.evt suffix:
convert.convert_file(infile)
notebooks/Convert_Example.ipynb
NuSTAR/nustar_pysolar
mit
So what just happened here? A decorator simply wrapped the function and modified its behaviour. Now let's understand how we can rewrite this code using the @ symbol, which is what Python uses for decorators:
@new_decorator def func_needs_decorator(): print("This function is in need of a Decorator") func_needs_decorator()
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Decorators-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
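`new_decorator` is defined in an earlier cell of the bootcamp notebook. A self-contained version of the whole pattern might look like this (Python 3 print syntax; the wrapper's messages are illustrative):

```python
def new_decorator(func):
    # wrap func, running extra code before and after the original call
    def wrap_func():
        print("Code would be here, before executing the func")
        func()
        print("Code here will execute after the func()")
    return wrap_func

@new_decorator
def func_needs_decorator():
    print("This function is in need of a Decorator")

func_needs_decorator()
```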
Data acquisition This dataset is a classic, so many machine-learning frameworks provide an interface to it, and sklearn is no exception. More often, though, we have to handle data from all kinds of sources, so here we still fetch the data in the most traditional way.
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text row_name = ['sepal_length','sepal_width','petal_length','petal_width','label'] csv_list = csv_content.strip().split("\n") row_matrix = [line.strip().split(",") for line in csv_list] dataset = pd.DataFrame(row_...
ipynbs/supervised/Perceptron.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Data preprocessing The features are of float type while the labels are categorical, so the labels need to be encoded and the features standardized. We use the z-score for standardization.
encs = {} encs["feature"] = StandardScaler() encs["feature"].fit(dataset[row_name[:-1]]) table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1]) encs["label"]=LabelEncoder() encs["label"].fit(dataset["label"]) table["label"] = encs["label"].transform(dataset["label"]) table[:10]...
ipynbs/supervised/Perceptron.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
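The z-score that `StandardScaler` applies (subtract the column mean, divide by the population standard deviation) can be sketched in plain Python for a single column:

```python
def z_score(values):
    # z-score: subtract the mean, divide by the (population) standard deviation,
    # matching StandardScaler's default of ddof=0
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

scaled = z_score([1.0, 2.0, 3.0, 4.0, 5.0])
print([round(v, 3) for v in scaled])  # [-1.414, -0.707, 0.0, 0.707, 1.414]
```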
Splitting the dataset
train_set,validation_set = train_test_split(table) train_set.groupby("label").count() validation_set.groupby("label").count()
ipynbs/supervised/Perceptron.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Training the model
mlp = MLPClassifier( hidden_layer_sizes=(100,50), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001) mlp.fit(train_set[row_name[:-1]], train_set["label"]) pre = mlp.predict(validation_set[row_name[:-1]])
ipynbs/supervised/Perceptron.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
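Once `pre` is computed, the usual next step is to score it against the validation labels. sklearn's `accuracy_score` would do this; it reduces to the fraction of exact matches, sketched here:

```python
def accuracy(y_true, y_pred):
    # fraction of predictions that exactly match the true labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 0.75
```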
Now define a general panel and compute its velocity.
# define panel my_panel = Panel(x0=-0.7,y0=0.5,x1=0.5,y1=-0.4,gamma=-2) # compute velocity on grid u,v = my_panel.velocity(x,y) # plot it plot_uv(u,v) # plot the flow on the grid my_panel.plot() # plot the panel
lessons/VortexSheet.ipynb
ultiyuan/test0
gpl-2.0
I've already pulled down The Shunned House from Project Gutenberg (https://www.gutenberg.org/wiki/Main_Page) and saved it as a text file called 'lovecraft.txt'. Here we'll load it and decode it as utf-8. Lastly, we'll instantiate a TextBlob object:
with open('lovecraft.txt', 'r', encoding='utf-8') as myfile: ushunned = myfile.read() tb = TextBlob(ushunned)
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Now we'll go through every sentence in the story and get the 'sentiment' of each one. Sentiment analysis in TextBlob returns a polarity and a subjectivity number. Here we'll just extract the polarity:
paragraph = tb.sentences i = -1 for sentence in paragraph: i += 1 pol = sentence.sentiment.polarity if i == 0: write_type = 'w' with open('shunned.csv', write_type) as text_file: header = 'number,polarity\n' text_file.write(str(header)) write_type = 'a' with ...
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
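The loop above reopens `shunned.csv` once per sentence and toggles the write mode by hand. An equivalent sketch that opens the output exactly once (using an in-memory buffer and made-up polarity pairs so it stays self-contained):

```python
import io

pairs = [(0, 0.0), (1, -0.25), (2, 0.5)]   # stand-ins for (sentence number, polarity)
buf = io.StringIO()
buf.write('number,polarity\n')             # header written exactly once
for number, polarity in pairs:
    buf.write('{},{}\n'.format(number, polarity))
print(buf.getvalue().count('\n'))  # 4 lines: one header plus three rows
```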
Now we instantiate a dataframe by pulling in that csv:
df = pd.read_csv('shunned.csv', index_col=0)
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Let's plot our data! First let's just look at how the sentiment polarity changes from sentence to sentence:
df.polarity.plot(figsize=(12,5), color='b', title='Sentiment Polarity for HP Lovecraft\'s The Shunned House') plt.xlabel('Sentence number') plt.ylabel('Sentiment polarity')
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Very up and down from sentence to sentence! Some dark sentences (the ones below 0.0 polarity), some positive sentences (greater than 0.0 polarity), but overall it hovers around 0.0 polarity. One thing that may be interesting to look at is how the sentiment changes over the course of the book. To examine that furt...
df['cum_sum'] = df.polarity.cumsum()
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
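`Series.cumsum` is a running total: element i of the result is the sum of the first i+1 polarities. `itertools.accumulate` does the same for a plain list, shown here on made-up polarities:

```python
from itertools import accumulate

polarity = [0.2, -0.1, 0.4, -0.3]          # made-up per-sentence polarities
running = [round(v, 2) for v in accumulate(polarity)]
print(running)  # [0.2, 0.1, 0.5, 0.2]
```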