Example of a 2-D array with ndarray

import numpy as np

a2 = np.array([[1, 2, 3], [4, 5, 6]], dtype='float32')  # create a 2-D array with dtype float32
print('dtype:', a2.dtype)
print('size:', a2.size)
print('shape:', a2.shape)
print('ndim:', a2.ndim)
print('contents:', a2)

dtype: float32
size: 6
shape: (2, 3)
ndim: 2
contents: [[1. 2. 3.]
 [4. 5. 6.]]
MIT
notebooks/Chapter03/math_numpy.ipynb
tagomaru/ai_security
2. Vectors (1-D arrays): creating the vector a as a 1-D array

a = np.array([4, 1])
Scalar multiples of a vector

for k in (2, 0.5, -1):
    print(k * a)

[8 2]
[2. 0.5]
[-4 -1]
Vector addition and subtraction

b = np.array([1, 2])  # create vector b
print('a + b =', a + b)  # sum of vectors a and b
print('a - b =', a - b)  # difference of vectors a and b

a + b = [5 3]
a - b = [ 3 -1]
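Two other common vector operations, shown as a brief aside (np.dot and np.linalg.norm are standard NumPy calls, not part of the original notebook):

```python
import numpy as np

a = np.array([4, 1])
b = np.array([1, 2])

print(np.dot(a, b))       # inner product: 4*1 + 1*2 = 6
print(np.linalg.norm(a))  # Euclidean length: sqrt(4**2 + 1**2)
```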
3. Matrices (2-D arrays): creating matrices as 2-D arrays

A = np.array([[1, 2], [3, 4], [5, 6]])
B = np.array([[5, 6], [7, 8]])
print('A:\n', A)
print('A.shape:', A.shape)
print()
print('B:\n', B)
print('B.shape:', B.shape)

A:
 [[1 2]
 [3 4]
 [5 6]]
A.shape: (3, 2)
B:
 [[5 6]
 [7 8]]
B.shape: (2, 2)
Accessing the element of A at i = 3, j = 2 (indices are zero-based, so this is A[2][1])

print(A[2][1])

6
Transpose of A

print(A.T)

[[1 3 5]
 [2 4 6]]
Scalar multiple of a matrix

print(2 * A)

[[ 2  4]
 [ 6  8]
 [10 12]]
Matrix addition and subtraction

print('A + A:\n', A + A)  # sum of A and A
print()
print('A - A:\n', A - A)  # difference of A and A

A + A:
 [[ 2  4]
 [ 6  8]
 [10 12]]
A - A:
 [[0 0]
 [0 0]
 [0 0]]
Sum of matrices A and B (this raises a ValueError: A has shape (3, 2) and B has shape (2, 2), and the shapes cannot be broadcast together)

print(A + B)
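Because the shapes do not match, the addition above fails; a small sketch (an illustration added here, not part of the notebook) showing how to catch the error:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)
B = np.array([[5, 6], [7, 8]])          # shape (2, 2)

try:
    A + B
except ValueError:
    print('cannot add arrays of shapes', A.shape, 'and', B.shape)
```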
Matrix product AB

print(np.dot(A, B))

[[19 22]
 [43 50]
 [67 78]]
The product BA (this raises a ValueError: B is (2, 2) and A is (3, 2), so the inner dimensions do not match)

print(np.dot(B, A))
Hadamard product A $\circ$ A (element-wise multiplication)

print(A * A)

[[ 1  4]
 [ 9 16]
 [25 36]]
Product of matrix X and a row vector a (np.dot raises a ValueError here: X is (2, 5) and a is (1, 5), so the inner dimensions do not match)

X = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
a = np.array([[1, 2, 3, 4, 5]])
print('X.shape:', X.shape)
print('a.shape:', a.shape)
print(np.dot(X, a))

X.shape: (2, 5)
a.shape: (1, 5)
Product of matrix X and a column vector a

X = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
a = np.array([[1], [2], [3], [4], [5]])
print('X.shape:', X.shape)
print('a.shape:', a.shape)
Xa = np.dot(X, a)
print('Xa.shape:', Xa.shape)
print('Xa:\n', Xa)

X.shape: (2, 5)
a.shape: (5, 1)
Xa.shape: (2, 1)
Xa:
 [[ 40]
 [115]]
Product of matrix X and a 1-D array in NumPy

X = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
a = np.array([1, 2, 3, 4, 5])  # created as a 1-D array
print('X.shape:', X.shape)
print('a.shape:', a.shape)
Xa = np.dot(X, a)
print('Xa.shape:', Xa.shape)
print('Xa:\n', Xa)
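To make the contrast explicit, a small sketch (added for illustration) comparing the (5, 1) column-vector product with the 1-D product for the same X:

```python
import numpy as np

X = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
col = np.array([[1], [2], [3], [4], [5]])  # 2-D column vector, shape (5, 1)
vec = np.array([1, 2, 3, 4, 5])            # 1-D array, shape (5,)

print(np.dot(X, col).shape)  # (2, 1): the result stays 2-D
print(np.dot(X, vec).shape)  # (2,): the result collapses to 1-D
print(np.dot(X, vec))        # [ 40 115]
```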
4. The axes (axis) of an ndarray: computing the sum of all elements of A

np.sum(A)  # 21
Sum of A along axis = 0

print(np.sum(A, axis=0).shape)
print(np.sum(A, axis=0))

(2,)
[ 9 12]
Sum of A along axis = 1

print(np.sum(A, axis=1).shape)
print(np.sum(A, axis=1))

(3,)
[ 3  7 11]
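One related option not shown above: keepdims=True keeps the reduced axis with length 1, which is often convenient for broadcasting. A small sketch (added for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])
row_sums = np.sum(A, axis=1, keepdims=True)
print(row_sums.shape)  # (3, 1): the reduced axis is kept
print(row_sums)
```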
Example use of np.max

Y_hat = np.array([[3, 4], [6, 5], [7, 8]])  # create a 2-D array
print(np.max(Y_hat))          # no axis given
print(np.max(Y_hat, axis=1))  # axis=1

8
[4 6 8]
Example use of np.argmax

print(np.argmax(Y_hat))          # no axis given: index into the flattened array
print(np.argmax(Y_hat, axis=1))  # axis=1

5
[1 0 1]
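Since np.argmax without an axis indexes into the flattened array, the flat index can be converted back to a row/column pair with np.unravel_index; a small sketch (added for illustration):

```python
import numpy as np

Y_hat = np.array([[3, 4], [6, 5], [7, 8]])
flat = np.argmax(Y_hat)  # 5: index into the flattened array [3 4 6 5 7 8]
row, col = np.unravel_index(flat, Y_hat.shape)
print(int(row), int(col))  # 2 1: the maximum (8) is at row 2, column 1
```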
5. Arrays with three or more dimensions: creating an array that holds four copies of matrix A

A_arr = np.array([A, A, A, A])
print(A_arr.shape)

(4, 3, 2)
Sum of all elements of A_arr

np.sum(A_arr)  # 84
Sum of A_arr along axis = 0

print(np.sum(A_arr, axis=0).shape)
print(np.sum(A_arr, axis=0))

(3, 2)
[[ 4  8]
 [12 16]
 [20 24]]
Sum of A_arr along axis = (1, 2)

print(np.sum(A_arr, axis=(1, 2)))

[21 21 21 21]
Simple Linear Regression using an ANN, continuing from ["here"](https://colab.research.google.com/drive/1zTy_7Z5rfKHPKTTCWyou5EemqL8yBqih)
# importing libraries
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from IPython import display
display.set_matplotlib_formats('svg')
print('modules imported')

def build_and_train(x, y, learning_rate, n_epochs):
    ## building
    model = nn.Sequential(
        nn.Linear(1, 1),
        ...
MIT
ManipulatingRegressionSlopes.ipynb
ShashwatVv/naiveDL
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
Apache-2.0
Natural Language Processing in TensorFlow/Week 3 Sequence models/NLP_Course_Week_3_Exercise_Question Exploring overfitting in NLP - Glove Embedding.ipynb
mohameddhameem/TensorflowCertification
01.2 Scattering Compute Speed **NOT COMPLETED**

In this notebook, the speed of extracting scattering coefficients is computed.
import sys
import random
import os
sys.path.append('../src')
import warnings
warnings.filterwarnings("ignore")
import torch
from tqdm import tqdm
from kymatio.torch import Scattering2D
import time
import kymatio.scattering2d.backend as backend
##########################################################################...
RSA-MD
notebooks/01.2_scattering_compute_speed.ipynb
sgaut023/Chronic-Liver-Classification
3. Scattering Speed Test
# From: https://github.com/kymatio/kymatio/blob/0.1.X/examples/2d/compute_speed.py
# Benchmark setup
# --------------------
J = 3
L = 8
times = 10
devices = ['cpu', 'gpu']
scattering = Scattering2D(J, shape=(M, N), L=L, backend='torch_skcuda')
data = np.concatenate(dataset['img'], axis=0)
data = torch.from_numpy(data)
x...

==> Testing Float32 with torch backend, on CPU, forward
Elapsed time: 523.081820 [s / 10 evals], avg: 52.31 (s/batch)
==> Testing Float32 with torch backend, on GPU, forward
Elapsed time: 16.777041 [s / 10 evals], avg: 1.68 (s/batch)
CPU times: user 53min 2s, sys: 4min 47s, total: 57min 50s
Wall time: 9min 54s
*This notebook contains material from [nbpages](https://jckantor.github.io/nbpages) by Jeffrey Kantor (jeff at nd.edu). The text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode). The code is released under the [MIT license](https://opensource.org/licenses/MIT).*...
# IMPORT DATA FILES USED BY THIS NOTEBOOK
import os, requests

file_links = [("data/Stock_Data.csv", "https://jckantor.github.io/nbpages/data/Stock_Data.csv")]

# This cell has been added by nbpages. Run this cell to download data files required for this notebook.
for filepath, fileurl in file_links:
    stem, filena...
MIT
docs/02.04-Working-with-Data-and-Figures.ipynb
jckantor/nbcollection
2.4 Working with Data and Figures

2.4.1 Importing data

The following cell reads the data file `Stock_Data.csv` from the `data` subdirectory. The name of this file will appear in the data index.

import pandas as pd
df = pd.read_csv("data/Stock_Data.csv")
df.head()
2.4.2 Creating and saving figures

The following cell creates a figure `Stock_Data.png` in the `figures` subdirectory. The name of this file will appear in the figures index.

%matplotlib inline
import os
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")

fig, ax = plt.subplots(2, 1, figsize=(8, 5))
(df/df.iloc[0]).drop('VIX', axis=1).plot(ax=ax[0])
df['VIX'].plot(ax=ax[1])
ax[0].set_title('Normalized Indices')
ax[1].set_title('Volatility VIX')
ax[1].set_xlabel(...
Homework 4

These problem sets focus on list comprehensions, string operations and regular expressions.

Problem set 1: List slices and list comprehensions

Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called `numbers_str`:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
MIT
homeworkdata/Homework_4_Paolo_Rivas_Legua.ipynb
paolorivas/homeworkfoundations
In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in `numbers_str`, assigning the value of this expression to a variable `numbers`. If you do everything correctly, executing the cell should produce the output `985` (*not* `'985'`).
values = numbers_str.split(",")
numbers = [int(i) for i in values]
max(numbers)
Great! We'll be using the `numbers` list you created above in the next few problems.In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in `numbers`. Expected output: [506, 528, 550, 581, 699, 721, 736, 804, 855, 985] (Hint: use a slice.)
# test
print(sorted(numbers))
sorted(numbers)[10:]
In the cell below, write an expression that evaluates to a list of the integers from `numbers` that are evenly divisible by three, *sorted in numerical order*. Expected output: [120, 171, 258, 279, 528, 699, 804, 855]
[i for i in sorted(numbers) if i%3 == 0]
Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in `numbers` that are less than 100. In order to do this, you'll need to use the `sqrt` function from the `math` module, which I've already imported for you. Expected output: [2.6457...
from math import sqrt
[sqrt(i) for i in sorted(numbers) if i < 100]
Problem set 2: Still more list comprehensions

Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable `planets`. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make...
planets = [
    {'diameter': 0.382, 'mass': 0.06, 'moons': 0, 'name': 'Mercury', 'orbital_period': 0.24, 'rings': 'no', 'type': 'terrestrial'},
    {'diameter': 0.949, 'mass': 0.82, 'moons': 0, 'name': 'Venus', 'orbital_period': 0.62, 'rings': 'no', 'type': 'terrestrial'},
    {'diameter': 1.00, 'mass'...
Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output: ['Jupiter', 'Saturn', 'Uranus']
# Radius is half the diameter, so "radius greater than four Earth radii"
# is equivalent to "diameter greater than four Earth diameters".
earth_diameter = planets[2]['diameter']
[i['name'] for i in planets if i['diameter'] >= earth_diameter * 4]
In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: `446.79`
# A single expression would be sum(planet['mass'] for planet in planets);
# the loop below builds the same sum step by step.
mass_list = []
for planet in planets:
    mass_list.append(planet['mass'])
total = sum(mass_list)
total
Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word `giant` anywhere in the value for their `type` key. Expected output: ['Jupiter', 'Saturn', 'Uranus', 'Neptune']
[i['name'] for i in planets if 'giant' in i['type']]
*EXTREME BONUS ROUND*: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the [`key` parameter of the `sorted` function](https://docs.python.org/3.5/library/functions.html#sorted), which we haven't yet dis...
#Done in class
Problem set 3: Regular expressions In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's *The Road Not Taken*. M...
import re

poem_lines = ['Two roads diverged in a yellow wood,',
 'And sorry I could not travel both',
 'And be one traveler, long I stood',
 'And looked down one as far as I could',
 'To where it bent in the undergrowth;',
 '',
 'Then took the other, as just as fair,',
 'And having perhaps the better claim,',
 'Because...
In the cell above, I defined a variable `poem_lines` which has a list of lines in the poem, and `import`ed the `re` library. In the cell below, write a list comprehension (using `re.search()`) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four charac...
[line for line in poem_lines if re.search(r"\b\w{4}\b\s\b\w{4}\b", line)]
Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the `?` quantifier. Is there an existing character class, or a way to...
[line for line in poem_lines if re.search(r"(?:\s\w{5}\b$|\s\w{5}\b[.:;,]$)", line)]
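The same end-of-line test can also be written with the `?` quantifier the hint mentions; a sketch on two sample lines (added for illustration, with `[^\w\s]` standing in for "one optional trailing punctuation mark"):

```python
import re

lines = ['And be one traveler, long I stood',
         'To where it bent in the undergrowth;']
# \b\w{5} matches a five-letter word; [^\w\s]? optionally allows trailing punctuation
print([l for l in lines if re.search(r"\b\w{5}[^\w\s]?$", l)])
```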
Okay, now a slightly trickier one. In the cell below, I've created a string `all_lines` which evaluates to the entire text of the poem in one string. Execute this cell.
all_lines = " ".join(poem_lines)
Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should *not* include the `I`.) Hint: Use `re.findall()` and grouping! Expected output: ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
re.findall(r"\bI\s([a-z]+)", all_lines)  # the capture group keeps only the word that follows 'I'
Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
entrees = [
    "Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
    "Lavender and Pepperoni Sandwich $8.49",
    "Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
    "Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
    "Flank Steak with Lentils And Tabasco Peppe...
You'll need to pull out the name of the dish and the price of the dish. The `v` after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the `for` loop. Expected output:```[{'na...
menu = []
for dish in entrees:
    match = re.search(r"^(.*) \$(.*)", dish)
    vegetarian = re.search(r"v$", match.group(2))
    price = re.search(r"(?:\d\.\d\d|\d\d\.\d\d)", dish)
    if vegetarian is None:
        vegetarian = False
    else:
        vegetarian = True
    if match:
        dish = {
            'nam...
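A compact per-line sketch of the same wrangling (the regex and the dict keys here are illustrative assumptions, not the notebook's exact answer):

```python
import re

# hypothetical single menu line; the real notebook loops over `entrees`
line = "Lavender and Pepperoni Sandwich $8.49"
m = re.search(r"^(.*) \$(\d+\.\d{2})( - v)?$", line)
item = {'name': m.group(1),
        'price': float(m.group(2)),
        'vegetarian': m.group(3) is not None}
print(item)  # {'name': 'Lavender and Pepperoni Sandwich', 'price': 8.49, 'vegetarian': False}
```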
Used https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/xgboost/notebooks/census_training/train.py as a starting point and adjusted it for CatBoost.
# Google Cloud Libraries
from google.cloud import storage

# System Libraries
import datetime
import subprocess

# Data Libraries
import pandas as pd
import numpy as np

# ML Libraries
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEnc...

gs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212707/:
gs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212707/<catboost.core.CatBoostClassifier object at 0x7fdb929aa6d0>
gs://mchrestkha-demo-env-ml-examples/census/catboost_census_20200525_212852/:
gs://mchrestkha-demo-env-ml...
MIT
census/catboost/gcp_ai_platform/notebooks/catboost_census_notebook.ipynb
jared-burns/machine_learning_examples
Final models with hyperparameters tuned for Logistic Regression and XGBoost, using the selected features.
# Import the libraries
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn import linear_model, metrics, preprocessing, model_selection
from sklearn.preprocessing import StandardScaler
import xgboost as xgb

# Load the data
modeling_dataset = pd.read_csv('/content/drive/MyDrive/prediction/frac_clean...
MIT
MS-malware-suspectibility-detection/6-final-model/FinalModel.ipynb
Semiu/malware-detector
Dealing with errors after a run

In this example, we run the model on a list of three glaciers: two of them will end with errors, one because it already failed at preprocessing (i.e. prior to this run), and one during the run. We show how to analyze these errors and solve (some) of them, as described in the OGGM document...
# Locals
import oggm.cfg as cfg
from oggm import utils, workflow, tasks

# Libs
import os
import xarray as xr
import pandas as pd

# Initialize OGGM and set up the default run parameters
cfg.initialize(logging_level='WARNING')

# Here we override some of the default parameters
# How many grid points around the glacier?...
BSD-3-Clause
notebooks/deal_with_errors.ipynb
anoukvlug/tutorials
Error diagnostics
# Write the compiled output
utils.compile_glacier_statistics(gdirs);  # saved as glacier_statistics.csv in the WORKING_DIR folder
utils.compile_run_output(gdirs);  # saved as run_output.nc in the WORKING_DIR folder

# Read it
with xr.open_dataset(os.path.join(WORKING_DIR, 'run_output.nc')) as ds:
    ds = ds.load()
df_s...
- in the column *error_task*, we can see whether an error occurred and, if yes, during which task
- *error_msg* describes the actual error message
df_stats[['error_task', 'error_msg']]
We can also check which glacier failed at which task by using [compile_task_log](https://docs.oggm.org/en/latest/generated/oggm.utils.compile_task_log.html#oggm.utils.compile_task_log).
# also saved as task_log.csv in the WORKING_DIR folder - "append=False" replaces the existing one
utils.compile_task_log(gdirs,
                       task_names=['glacier_masks', 'compute_centerlines', 'flowline_model_run'],
                       append=False)
Error solving

RuntimeError: `Glacier exceeds domain boundaries, at year: 98.08333333333333`

To remove this error, increase the domain boundary **before** running `init_glacier_directories`! Be aware that this means more data has to be downloaded and the run takes more time. The available options for `cfg.PARAM...
# reset to recompute statistics
utils.mkdir(WORKING_DIR, reset=True)

# increase the number of grid points outside the glacier
cfg.PARAMS['border'] = 160
gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=4)
workflow.execute_entity_task(tasks.run_random_climate, gdirs, nyea...
Now `RGI60-11.00897` runs without errors!

Error: `Need a valid model_flowlines file.`

This error message in the log is misleading: it does not really describe the source of the error, which happened earlier in the processing chain. Therefore we can look instead into the glacier_statistics via [compile_glacier_statisti...
print('error_task: {}, error_msg: {}'.format(df_stats.loc['RGI60-11.03295']['error_task'],
                                              df_stats.loc['RGI60-11.03295']['error_msg']))
Now we have a better understanding of the error:
- OGGM cannot work with the geometry of this glacier and could therefore not make a gridded mask of the glacier outlines.
- There is no way to prevent this unless you find a better way to pre-process the geometry of this glacier.
- These glaciers have to be ignored! Less...
ds.dropna(dim='rgi_id') # here we can e.g. find the volume evolution
Goals

In the previous tutorial you studied the role of freezing models on a small dataset.
- Understand the role of freezing models in transfer learning on a fairly large dataset
- Why freeze/unfreeze base models in transfer learning
- Use the comparison feature to appropriately set this parameter on a custom dataset

You will b...
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1MRC58-oCdR1agFTWreDFqevjEOIWDnYZ' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1MRC5...
Apache-2.0
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/2.2) Understand the effect of freezing base model in transfer learning - 2 - pytorch.ipynb
take2rohit/monk_v1
Imports
# Using pytorch backend
# When installed using pip
from monk.pytorch_prototype import prototype

# When installed manually (uncomment the following)
# import os
# import sys
# sys.path.append("monk_v1/");
# sys.path.append("monk_v1/monk/");
# from monk.pytorch_prototype import prototype
Freeze the base network in densenet121 and train a classifier

Creating and managing experiments
- Provide a project name
- Provide an experiment name
- For a specific dataset, create a single project
- Inside each project, multiple experiments can be created
- Every experiment can have different hyper-parameters a...
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Freeze_Base_Network");

Pytorch Version: 1.2.0

Experiment Details
    Project: Project
    Experiment: Freeze_Base_Network
    Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Freeze_Ba...
This creates files and directories as per the following structure:

workspace
|
|--------Project
        |
        |-----Freeze_Base_Network
                |-----experiment-state.json
                ...
gtf.Default(dataset_path="skin_cancer_mnist_dataset/images",
            path_to_csv="skin_cancer_mnist_dataset/train_labels.csv",
            model_name="densenet121",
            freeze_base_network=True,  # Set this param as true
            ...

Dataset Details
    Train path: skin_cancer_mnist_dataset/images
    Val path: None
    CSV train path: skin_cancer_mnist_dataset/train_labels.csv
    CSV val path: None

Dataset Params
    Input Size: 224
    Batch Size: 4
    Data Shuffle: True
    Processors: 4
    Train-val split: 0.7
    Delimiter...
From the summary above:

Model Params
    Model name: densenet121
    Use Gpu: True
    Use pretrained: True
    Freeze base network: True

Another thing to notice from the summary:

Model Details
    Loading pretrained model
    Model Loaded on device
    Mod...
# Start Training
gtf.Train();
# Read the training summary generated once you run the cell and training is completed

Training Start

Epoch 1/5
----------
Best validation accuracy achieved: 74.77% (you may get a different result)

Unfreeze the base network in densenet121 and train a classifier

Creating and managing experiments
- Provide a project name
- Provide an experiment name
- For a specific dataset, create a single project
- Inside each project, multiple experimen...
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Unfreeze_Base_Network");

Pytorch Version: 1.2.0

Experiment Details
    Project: Project
    Experiment: Unfreeze_Base_Network
    Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Unfreez...
This creates files and directories as per the following structure:

workspace
|
|--------Project
        |
        |-----Freeze_Base_Network (Previously created)
        |
        |-----experiment-state.json...
gtf.Default(dataset_path="skin_cancer_mnist_dataset/images",
            path_to_csv="skin_cancer_mnist_dataset/train_labels.csv",
            model_name="densenet121",
            freeze_base_network=False,  # Set this param as false
            ...

Dataset Details
    Train path: skin_cancer_mnist_dataset/images
    Val path: None
    CSV train path: skin_cancer_mnist_dataset/train_labels.csv
    CSV val path: None

Dataset Params
    Input Size: 224
    Batch Size: 4
    Data Shuffle: True
    Processors: 4
    Train-val split: 0.7
    Delimiter...
From the summary above:

Model Params
    Model name: densenet121
    Use Gpu: True
    Use pretrained: True
    Freeze base network: False

Another thing to notice from the summary:

Model Details
    Loading pretrained model
    Model Loaded on device
    Mo...
# Start Training
gtf.Train();
# Read the training summary generated once you run the cell and training is completed

Training Start

Epoch 1/5
----------
Best validation accuracy achieved: 81.33% (you may get a different result)

Compare both experiments
# Invoke the comparison class
from monk.compare_prototype import compare
Creating and managing comparison experiments
- Provide a project name
# Create a project
gtf = compare(verbose=1);
gtf.Comparison("Compare-effect-of-freezing");
Comparison: - Compare-effect-of-freezing
This creates files and directories as per the following structure:

workspace
|
|--------comparison
        |
        |-----Compare-effect-of-freezing
                |------stats_best_val_...
gtf.Add_Experiment("Project", "Freeze_Base_Network");
gtf.Add_Experiment("Project", "Unfreeze_Base_Network");

Project - Project, Experiment - Freeze_Base_Network added
Project - Project, Experiment - Unfreeze_Base_Network added
Run Analysis
gtf.Generate_Statistics();
Generating statistics... Generated
Visualize and study comparison metrics

Training Accuracy Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/train_accuracy.png")
Training Loss Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/train_loss.png")
Validation Accuracy Curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/val_accuracy.png")
Validation loss curves
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/val_loss.png")
Caesar Cipher

A Caesar cipher, also known as a shift cipher, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would b...
msg = "The quick brown fox jumps over the lazy dog 123 !@#"
shift = 3

def getmsg():
    processedmsg = ''
    for x in msg:
        if x.isalpha():
            num = ord(x)
            num += shift
            if x.isupper():
                if num > ord('Z'):
                    num -= 26
                eli...
MIT
ipynb/Caesar Cipher.ipynb
davzoku/pyground
The for loop above inspects each letter in the message. chr(), the character function, takes an integer ordinal and returns a character, e.g. chr(65) outputs 'A' based on the ASCII table; ord(), the ordinal function, does the reverse, e.g. ord('A') gives 65. Based on the ASCII table, 'Z' with a shift of 3 would give us ']', which is undesirable....
encrypted=getmsg() print(encrypted)
Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj 123 !@#
MIT
ipynb/Caesar Cipher.ipynb
davzoku/pyground
Note that only alphabetic characters are encrypted.To decrypt, the algorithm is very similar.
shift=-shift msg=encrypted decrypted= getmsg() print(decrypted)
The quick brown fox jumps over the lazy dog 123 !@#
MIT
ipynb/Caesar Cipher.ipynb
davzoku/pyground
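The wrap-around arithmetic described above can be condensed with modulo 26; a self-contained sketch (the helper name `caesar` is illustrative, not from the notebook):

```python
def caesar(msg, shift):
    """Shift each letter by `shift` positions, wrapping within A-Z / a-z."""
    out = []
    for ch in msg:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # modulo 26 handles the wrap-around, e.g. 'Z' shifted by 3 -> 'C'
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)  # digits and punctuation pass through unchanged
    return ''.join(out)

encrypted = caesar("The quick brown fox jumps over the lazy dog 123 !@#", 3)
print(encrypted)              # Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj 123 !@#
print(caesar(encrypted, -3))  # negating the shift decrypts, recovering the original
```

Using the modulo instead of an explicit `if num > ord('Z')` check keeps encryption and decryption in one code path, since `% 26` also handles negative shifts.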
Basic Steps for Using TensorFlowWe use LinearRegression to predict house prices as the running example.- Evaluate the accuracy of the model's predictions with RMSE (root mean squared error)- Improve prediction accuracy by tuning the model's hyperparameters
from __future__ import print_function import math from IPython import display from matplotlib import cm from matplotlib import gridspec import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn import metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_...
california house dataframe: longitude latitude housing_median_age total_rooms total_bedrooms \ 840 -117.1 32.7 29.0 1429.0 293.0 15761 -122.4 37.8 52.0 3260.0 1535.0 2964 -117.8 34.1 23.0 ...
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
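RMSE itself is simple to compute directly; a minimal NumPy sketch with made-up values (not the housing data):

```python
import numpy as np

targets = np.array([200.0, 150.0, 320.0])      # true median house values (illustrative)
predictions = np.array([180.0, 170.0, 300.0])  # model outputs (illustrative)

# RMSE: square the errors, average them, then take the square root
rmse = np.sqrt(np.mean((predictions - targets) ** 2))
print(rmse)  # 20.0
```

Squaring penalizes large misses more heavily, and taking the root returns the metric to the units of the target, which makes RMSE directly comparable to the label's range.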
Examine the Data
# Use the pandas describe method to show summary statistics california_housing_df.describe()
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
Build the ModelIn this example we will predict the median house value, which serves as the label we learn, using the total number of rooms as the input feature. Step 1: Define Features and Configure Feature ColumnsTo import our data into TensorFlow, we need to specify the data type each feature contains. We mainly use the following two types:- Categorical data: textual data.- Numerical data: numeric (integer or floating-point) data, or data we want to treat as numeric.In TF we use the **feature column** construct to represent a feature's data type. A feature column stores only a description of the feature data; it does not contain the feature data itself.
# Define the input feature kl_feature = california_housing_df[['total_rooms']] # Configure total_rooms as a numeric feature column feature_columns = [tf.feature_column.numeric_column('total_rooms')]
[_NumericColumn(key='total_rooms', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)]
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
Step 2: Define the Target
# Define the target label targets = california_housing_df['median_house_value']
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
**Gradient clipping** caps the magnitude of gradients before they are applied; it helps ensure numerical stability and prevents exploding gradients. Step 3: Configure the Linear Regressor
# Configure a linear regression model with LinearRegressor, trained with the GradientDescentOptimizer kl_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001) # Apply gradient clipping to our optimizer via clip_gradients_by_norm; clipping ensures the gradients do not become too large during training, which would make gradient descent fail. kl_optimizer = tf.contrib.estimator.clip_gradients_by_norm(kl_optimizer, 5.0) # Configure the linear regression model with our feature columns and optimizer ho...
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
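The clipping applied by `clip_gradients_by_norm` rescales a gradient whenever its L2 norm exceeds a threshold. A NumPy sketch of the idea (not the TensorFlow implementation):

```python
import numpy as np

def clip_by_norm(grad, clip_norm):
    """Rescale grad so that its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        # Scale the whole vector down, preserving its direction
        return grad * (clip_norm / norm)
    return grad

g = np.array([3.0, 4.0])     # L2 norm = 5
print(clip_by_norm(g, 5.0))  # within the limit, returned unchanged: [3. 4.]
print(clip_by_norm(g, 1.0))  # scaled down to norm 1: [0.6 0.8]
```

Because the vector is scaled rather than clipped element-wise, the gradient's direction is preserved; only the step size is limited.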
Step 4: Define the Input FunctionTo import data into the LinearRegressor, we need to define an input function that tells TF how to preprocess the data, and how to batch, shuffle, and repeat it during model training. First we convert the pandas feature data into a dict of NumPy arrays, then use the Dataset API to build a Dataset object, split the data into batches of batch_size, and repeat for the specified number of epochs (num_epochs). **Note:** if the default num_epochs=None is passed to repeat(), the input data repeats indefinitely. shuffle: Bool, whether to shuffle the data buffer_size: the size of the dataset from which shuffle randomly samples
def kl_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains the house-price model on a single feature Args: features: DataFrame of features targets: DataFrame of targets batch_size: batch size shuffle: Bool. Whether to shuffle the data Return: a (features, labels) tuple for the next data batch """ # Convert the pandas data into a dict of np.array ...
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
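The batch/shuffle/repeat behavior described above can be mimicked with a plain-Python generator (a simplified sketch, not the Dataset API; `batches` is a hypothetical helper):

```python
import random

def batches(features, labels, batch_size=2, shuffle=True, num_epochs=1, seed=0):
    """Yield (features, labels) batches over num_epochs passes of the data."""
    rng = random.Random(seed)
    idx = list(range(len(features)))
    for _ in range(num_epochs):       # one loop iteration per epoch
        if shuffle:
            rng.shuffle(idx)          # fresh random order each epoch
        for start in range(0, len(idx), batch_size):
            chunk = idx[start:start + batch_size]
            yield [features[i] for i in chunk], [labels[i] for i in chunk]

for f, l in batches([10, 20, 30, 40], [1, 2, 3, 4], batch_size=2, shuffle=False):
    print(f, l)
# [10, 20] [1, 2]
# [30, 40] [3, 4]
```

Passing a larger `num_epochs` repeats the data; the real Dataset API additionally repeats indefinitely when `num_epochs=None`, which this sketch does not model.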
**Note:** For a more detailed reference on input functions and the Dataset API, see the [TF Developer's Guide](https://www.tensorflow.org/programmers_guide/datasets) Step 5: Train the ModelCall train() on linear_regressor to train the model
_ = house_linear_regressor.train(input_fn=lambda: kl_input_fn(kl_feature, targets), steps=100)
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
Step 6: Evaluate the Model**Note:** Training error measures how well the model fits the training data, but it does **not** measure how well the model generalizes to new data; we need to split the data to evaluate the model's ability to generalize.
# We make only one prediction pass, so set the epochs to 1 and turn off shuffling prediction_input_fn = lambda: kl_input_fn(kl_feature, targets, num_epochs=1, shuffle=False) # Call predict to generate predictions predictions = house_linear_regressor.predict(input_fn=prediction_input_fn) # Convert the predictions to a numpy array predictions = np.array([item['predictions'][0] for item in predictions]) # Print MSE and RMSE mean...
Min. median house value: 14.999 Max. median house value: 500.001 Difference between min and max: 485.002 Root mean squared error: 237.417
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
These results show the model is not performing well; we can apply some basic strategies to reduce the error.
calibration_data = pd.DataFrame() calibration_data["predictions"] = pd.Series(predictions) calibration_data["targets"] = pd.Series(targets) calibration_data.describe() # We can visualize the data and the line we have learned, sample = california_housing_df.sample(n=300) # get a uniformly distributed sample df # get the min and max of total_rooms x_0 = sample["total_rooms"].min() x_1 = sample["tota...
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
Model TuningThe code above, wrapped up for hyperparameter tuning
def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"): """Trains a linear regression model of one feature. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forwa...
_____no_output_____
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
**Exercise 1: Achieve an RMSE of 180 or below**
train_model(learning_rate=0.00002, steps=500, batch_size=5)
Training model... RMSE (on training data): period 00 : 225.63 period 01 : 214.42 period 02 : 204.04 period 03 : 194.97 period 04 : 186.60 period 05 : 180.80 period 06 : 175.66 period 07 : 171.74 period 08 : 168.96 period 09 : 167.23 Model training finished.
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
Heuristics for Model Tuning> Don't follow these rules blindly- Training error should steadily decrease, steeply at first, then gradually plateau as training converges.- If training has not converged, try running it for longer.- If the training error decreases too slowly, raising the learning rate may help it decrease faster.- But sometimes the exact opposite happens when the learning rate is too high: training error may decrease more slowly.- If the training error varies wildly, try lowering the learning rate.- A lower learning rate combined with more steps or a larger batch size is often a good combination.- Very small batch sizes can also cause instability. Try larger values such as 100 or 1000 first, then decrease until you see degradation. **Exercise 2: Try a Different Feature**We substitute the population feature.
train_model(learning_rate=0.00005, steps=500, batch_size=5, input_feature="population")
Training model... RMSE (on training data): period 00 : 222.79 period 01 : 209.51 period 02 : 198.00 period 03 : 189.59 period 04 : 182.78 period 05 : 179.35 period 06 : 177.30 period 07 : 176.11 period 08 : 175.97 period 09 : 176.51 Model training finished.
CC-BY-3.0
code/first_step_with_tensorflow.ipynb
kevinleeex/notes-for-mlcc
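The learning-rate heuristics above are easy to observe on a one-parameter problem; a toy gradient-descent sketch (unrelated to the housing data):

```python
# Minimize f(w) = (w - 3)^2 by gradient descent; the gradient is 2*(w - 3).
def descend(lr, steps=50, w=0.0):
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

print(descend(lr=0.1))  # converges close to the minimum at w = 3
print(descend(lr=1.1))  # too large: each step overshoots and the error grows
```

With lr=0.1 each update contracts the error by a factor of 0.8, so the iterate approaches the minimum; with lr=1.1 the factor is -1.2 in magnitude greater than one, so the iterates oscillate with growing amplitude, the divergence the heuristics warn about.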
test note* Jupyter must be launched in its container* The full testbed suite must already be running
!pip install --upgrade pip !pip install --force-reinstall ../lib/ait_sdk-0.1.7-py3-none-any.whl from pathlib import Path import pprint from ait_sdk.test.hepler import Helper import json # settings cell # mounted dir root_dir = Path('/workdir/root/ait') ait_name='eval_metamorphic_test_tf1.13' ait_version='0.1' ait_fu...
<Response [200]> {'OutParams': {'ReportUrl': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/469'}, 'Result': {'Code': 'D12000', 'Message': 'command invoke success.'}}
Apache-2.0
ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb
ads-ad-itcenter/qunomon.forked
Python APIProphet follows the `sklearn` model API. We create an instance of the `Prophet` class and then call its `fit` and `predict` methods. The input to Prophet is always a dataframe with two columns: `ds` and `y`. The `ds` (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a da...
import pandas as pd from fbprophet import Prophet df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv') df.head()
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
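The two-column `ds`/`y` input format can be built directly with pandas; a minimal sketch using synthetic values (not the Peyton Manning dataset):

```python
import numpy as np
import pandas as pd

# 'ds' holds dates in a pandas-parseable format; 'y' holds the numeric series
df = pd.DataFrame({
    'ds': pd.date_range('2020-01-01', periods=5, freq='D'),
    'y': np.log([100, 120, 90, 110, 105]),  # log-transform, mirroring the example
})
print(df.dtypes)  # ds is datetime64[ns], y is float64
```

Any frame with these two columns and types is a valid `fit` input; extra columns are ignored unless registered as regressors.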
We fit the model by instantiating a new `Prophet` object. Any settings to the forecasting procedure are passed into the constructor. Then you call its `fit` method and pass in the historical dataframe. Fitting should take 1-5 seconds.
m = Prophet() m.fit(df)
INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.
MIT
notebooks/quick_start.ipynb
timgates42/prophet
Predictions are then made on a dataframe with a column `ds` containing the dates for which a prediction is to be made. You can get a suitable dataframe that extends into the future a specified number of days using the helper method `Prophet.make_future_dataframe`. By default it will also include the dates from the hist...
future = m.make_future_dataframe(periods=365) future.tail()
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
The `predict` method will assign each row in `future` a predicted value which it names `yhat`. If you pass in historical dates, it will provide an in-sample fit. The `forecast` object here is a new dataframe that includes a column `yhat` with the forecast, as well as columns for components and uncertainty intervals.
forecast = m.predict(future) forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
You can plot the forecast by calling the `Prophet.plot` method and passing in your forecast dataframe.
fig1 = m.plot(forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
If you want to see the forecast components, you can use the `Prophet.plot_components` method. By default you'll see the trend, yearly seasonality, and weekly seasonality of the time series. If you include holidays, you'll see those here, too.
fig2 = m.plot_components(forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
An interactive figure of the forecast and components can be created with plotly. You will need to install plotly 4.0 or above separately, as it is not installed with fbprophet by default. You will also need to install the `notebook` and `ipywidgets` packages.
from fbprophet.plot import plot_plotly, plot_components_plotly plot_plotly(m, forecast) plot_components_plotly(m, forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
More details about the options available for each method are available in the docstrings, for example, via `help(Prophet)` or `help(Prophet.fit)`. The [R reference manual](https://cran.r-project.org/web/packages/prophet/prophet.pdf) on CRAN provides a concise list of all of the available functions, each of which has a ...
%%R library(prophet)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
First we read in the data and create the outcome variable. As in the Python API, this is a dataframe with columns `ds` and `y`, containing the date and numeric value respectively. The ds column should be YYYY-MM-DD for a date, or YYYY-MM-DD HH:MM:SS for a timestamp. As above, we use here the log number of views to Peyt...
%%R df <- read.csv('../examples/example_wp_log_peyton_manning.csv')
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
We call the `prophet` function to fit the model. The first argument is the historical dataframe. Additional arguments control how Prophet fits the data and are described in later pages of this documentation.
%%R m <- prophet(df)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet