Here is the explanation for the code above: 1. We create an empty set called result. 2. We create a variable called i and assign it to 0. 3. We create a variable called length and assign it to the length of the raw_string. 4. We create a while loop that will go on forever until i is equal ...
x = []
s = "a"
count = 0
print(x, ":", end="")
while x:
    print(s, end="")
    count = count + 1
    if count > 3:
        break
print("")
[] :
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
For Loops and the Range Function
In Python, `for` statements iterate over sequences and utilize the `in` keyword. Like `while` loops, `for` loops can contain `break` and `continue`. They can also contain `else` clauses, which are executed when the loop ends via something other than `break`.
raw_string = "hello world"
characters = set(raw_string)
for w in characters:
    print(w, ",", raw_string.count(w))
l , 3 r , 1 w , 1 e , 1 d , 1 h , 1 , 1 o , 2
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
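The `else` clause mentioned above fires only when a loop finishes without hitting `break`. A minimal, self-contained sketch (the `find_first` helper is illustrative, not from the notebook):

```python
def find_first(needle, haystack):
    """Return the first matching item, or None if the loop ends without break."""
    for item in haystack:
        if item == needle:
            break
    else:
        # reached only when no break occurred: needle is absent
        return None
    return item

print(find_first("l", "hello"))  # l
print(find_first("z", "hello"))  # None
```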
The `range()` function can be used to generate a sequence of numbers, which can then be used in a loop.
x = range(11)
for i in x:
    print(i, i * i)
0 0 1 1 2 4 3 9 4 16 5 25 6 36 7 49 8 64 9 81 10 100
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Minimum (inclusive) and maximum (exclusive) values can also be specified, as can an integer step size.
x = range(20, 31, 2)
for i in x:
    print(i)
20 22 24 26 28 30
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
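The step may also be negative, in which case `range` counts downward and the stop value is still exclusive. A quick sketch:

```python
# start at 30, stop before 19, step down by 2
x = range(30, 19, -2)
for i in x:
    print(i)  # prints 30 28 26 24 22 20, one per line
```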
Getting started
Initialize streams
[Stream](https://thermosteam.readthedocs.io/en/latest/Stream.html) objects define material flow rates along with their thermodynamic state. Before creating streams, a [Thermo](https://thermosteam.readthedocs.io/en/latest/Thermo.html) property package must be defined. Alternatively, we...
import biosteam as bst

bst.settings.set_thermo(['Water', 'Methanol'])
feed = bst.Stream(Water=50, Methanol=20)
feed.show()
Stream: s1 phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 50 Methanol 20
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Set prices for performing techno-economic analysis later:
feed.price = 0.15  # USD/kg
feed.cost  # USD/hr
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Process settings
Process settings include price of feeds and products, conditions of utilities, and the chemical engineering plant cost index. These should be set before simulating a system. Set the chemical engineering plant cost index:
bst.CE  # Default year is 2017
bst.CE = 603.1  # To year 2018
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Set [PowerUtility](../PowerUtility.txt) options:
bst.PowerUtility.price  # Default price (USD/kJ)
bst.PowerUtility.price = 0.065  # Adjust price
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Set [HeatUtility](../HeatUtility.txt) options via [UtilityAgent](../UtilityAgent.txt) objects, which are [Stream](https://thermosteam.readthedocs.io/en/latest/Stream.html) objects with additional attributes to describe a utility agent:
bst.HeatUtility.cooling_agents  # All available cooling agents
cooling_water = bst.HeatUtility.get_cooling_agent('cooling_water')
cooling_water.show()  # A UtilityAgent
# Price of regenerating the utility in USD/kmol
cooling_water.regeneration_price
# Other utilities may be priced for amount of heat transferred in USD/kJ ...
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Find design requirements and cost with Unit objects
[Creating a Unit](./Creating_a_Unit.ipynb) can be flexible. But in summary, a [Unit](../Unit.txt) object is initialized with an ID and unit-specific arguments. BioSTEAM includes [essential unit operations](../units/units.txt) with rigorous modeling and design algori...
from biosteam import units

# Specify vapor fraction and isobaric conditions
F1 = units.Flash('F1', V=0.5, P=101325)
F1.show()
Flash: F1 ins... [0] missing stream outs... [0] s2 phase: 'l', T: 298.15 K, P: 101325 Pa flow: 0 [1] s3 phase: 'l', T: 298.15 K, P: 101325 Pa flow: 0
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Note that, by default, Missing Stream objects are given to inputs, `ins`, and empty streams to outputs, `outs`:
F1.ins
F1.outs
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
You can connect streams by setting the `ins` and `outs`:
F1.ins[0] = feed
F1.show()
Flash: F1 ins... [0] s1 phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 50 Methanol 20 outs... [0] s2 phase: 'l', T: 298.15 K, P: 101325 Pa flow: 0 [1] s3 phase: 'l', T: 298.15 K, P: 101325 Pa flow: 0
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
To simulate the flash, use the `simulate` method:
F1.simulate()
F1.show()
Flash: F1 ins... [0] s1 phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 50 Methanol 20 outs... [0] s2 phase: 'g', T: 359.6 K, P: 101325 Pa flow (kmol/hr): Water 19 Methanol 16 [1] s3 phase: 'l', T: 359.6 K, P: 101325 Pa flow (kmol/hr)...
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Note that warnings notify you whether purchase cost correlations are out of range for the given design. This is fine for the example, but it's important to make sure that the process is well designed and that the cost correlations are suitable for the domain. The `results` method returns simulation results:
F1.results()  # Default returns DataFrame object with units
F1.results(with_units=False)  # Returns Series object without units
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Although BioSTEAM includes a large set of essential unit operations, many process specific unit operations are not yet available. In this case, you can create new [Unit subclasses](./Inheriting_from_Unit.ipynb) to model unit operations not yet available in BioSTEAM. Solve recycle loops and process specifications with ...
M1 = units.Mixer('M1')
S1 = units.Splitter('S1', outs=('liquid_recycle', 'liquid_product'),
                    split=0.5)  # Split to 0th output stream
F1.outs[0].ID = 'vapor_product'
F1.outs[1].ID = 'liquid'
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
You can [find unit operations and manage flowsheets](./Managing_flowsheets.ipynb) with the `main_flowsheet`:
bst.main_flowsheet.diagram()
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Connect streams and make a recycle loop using [-pipe- notation](./-pipe-_notation.ipynb):
feed = bst.Stream('feed', Methanol=100, Water=450)

# Broken down -pipe- notation
[S1-0, feed]-M1  # M1.ins[:] = [S1.outs[0], feed]
M1-F1            # F1.ins[:] = M1.outs
F1-1-S1          # S1.ins[:] = [F1.outs[1]]

# All together
[S1-0, feed]-M1-F1-1-S1;
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
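The `-pipe-` notation works because BioSTEAM objects overload the subtraction operator. A toy, library-free sketch of the idea (the `ToyUnit` class below is hypothetical, not BioSTEAM code):

```python
class ToyUnit:
    """Minimal object demonstrating pipe-style wiring via operator overloading."""

    def __init__(self, ID):
        self.ID = ID
        self.ins = []
        self.outs = []

    def __sub__(self, other):
        # unit - unit: feed this unit's outputs into the other's inputs
        if isinstance(other, ToyUnit):
            other.ins = self.outs
            return other
        return NotImplemented

    def __rsub__(self, streams):
        # [stream, ...] - unit: set the unit's inputs
        self.ins = list(streams)
        return self

M1 = ToyUnit('M1')
F1 = ToyUnit('F1')
['feed'] - M1 - F1           # wires feed -> M1 -> F1
print(M1.ins, F1.ins)        # -> ['feed'] []
```

Because `list` has no `__sub__` for these objects, Python falls back to the unit's `__rsub__`, which is what lets a list of streams appear on the left-hand side.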
Now let's check the diagram again:
bst.main_flowsheet.diagram(format='png')
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
[System](../System.txt) objects take care of solving recycle loops and simulating all unit operations. Although there are many ways of [creating a system](./Creating_a_System.ipynb), the recommended way is to use the flowsheet:
flowsheet_sys = bst.main_flowsheet.create_system('flowsheet_sys')
flowsheet_sys.show()
System: flowsheet_sys Highest convergence error among components in recycle stream S1-0 after 0 loops: - flow rate 0.00e+00 kmol/hr (0%) - temperature 0.00e+00 K (0%) ins... [0] feed phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 450 Methanol 100 outs... [0] vapor_product...
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Although not recommended due to the likelihood of human error, a [System](../System.txt) object may also be created by specifying an ID, a `recycle` stream, and a `path` of units to run element by element:
sys = bst.System('sys', path=(M1, F1, S1), recycle=S1-0)  # recycle=S1.outs[0]
sys.show()
System: sys Highest convergence error among components in recycle stream S1-0 after 0 loops: - flow rate 0.00e+00 kmol/hr (0%) - temperature 0.00e+00 K (0%) ins... [0] feed phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 450 Methanol 100 outs... [0] vapor_product phase...
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Simulate the System object:
sys.simulate()
sys.show()
System: sys Highest convergence error among components in recycle stream S1-0 after 4 loops: - flow rate 1.38e-01 kmol/hr (0.16%) - temperature 4.44e-03 K (0.0012%) ins... [0] feed phase: 'l', T: 298.15 K, P: 101325 Pa flow (kmol/hr): Water 450 Methanol 100 outs... [0] vapor_product ...
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Note how the recycle stream converged and all unit operations (including the flash vessel) were simulated:
F1.results()
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
You can retrieve summarized power and heat utilities from the system as well:
sys.power_utility.show()
for i in sys.heat_utilities:
    i.show()
HeatUtility: low_pressure_steam duty: 1.82e+07 kJ/hr flow: 470 kmol/hr cost: 94 USD/hr
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Once your system has been simulated, you can save a system report to view all results in an Excel spreadsheet:
# Try this on your computer and open Excel
# sys.save_report('Example.xlsx')
_____no_output_____
MIT
docs/tutorial/Getting_started.ipynb
yoelcortes/biosteam
Analyzing replicability of functional connectivity-based multivariate BWAS on the Human Connectome Project dataset
Imports
import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import Ridge from sklearn.svm import SVR from sklearn.model_selection import KFold, train_test_split from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA from joblib import Paral...
_____no_output_____
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
Load HCP data
We load functional network matrices (netmats) from the HCP1200 release, as published on ConnectomeDB: https://db.humanconnectome.org/ Due to licensing issues, the data is not supplied with the repository, but can be downloaded from ConnectomeDB. See [hcp_data/readme.md](hcp_data/readme.md) for more details...
# HCP data can be obtained from the connectomeDB # data is not part of this repository subjectIDs = pd.read_csv('hcp_data/subjectIDs.txt', header=None) netmats_pearson = pd.read_csv('hcp_data/netmats1_correlationZ.txt', sep=' ', header=None) netmats_pearson['ID...
_____no_output_____
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
Function to prepare target variable
def create_data(target='CogTotalComp_AgeAdj', feature_data=netmats_parcor): # it's a good practice to use pandas for merging, messing up subject order can be painful features = feature_data.columns df = behavior df = df.merge(feature_data, left_index=True, right_index=True, how='left') df = df.drop...
_____no_output_____
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
Function implementing a single bootstrap iteration
We define a workhorse function which:
- randomly samples the discovery and the replication datasets,
- creates cross-validated estimates of predictive performance within the discovery sample,
- finalizes the model by fitting it to the whole discovery sample (overfits the d...
def bootstrap_workhorse(X, y, sample_size, model, random_state, shuffle_y=False):
    # create discovery and replication samples by random sampling from the whole dataset (without replacement)
    # if shuffle_y is true, a null model is created by permuting y
    if shuffle_y:
        rng = np.random.default_rng(rando...
_____no_output_____
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
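The discovery/replication sampling step can be sketched without scikit-learn. This is an illustrative stand-in only (the real workhorse also fits models and permutes `y` for the null case); the `split_discovery_replication` helper is hypothetical:

```python
import random

def split_discovery_replication(n_total, sample_size, seed):
    """Draw two disjoint index samples (without replacement) from n_total rows."""
    rng = random.Random(seed)
    idx = list(range(n_total))
    rng.shuffle(idx)
    discovery = idx[:sample_size]
    replication = idx[sample_size:2 * sample_size]
    return discovery, replication

disc, repl = split_discovery_replication(n_total=1000, sample_size=100, seed=42)
print(len(disc), len(repl), set(disc) & set(repl))  # -> 100 100 set()
```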
All set, now we start the analysis.
Replicability with sample sizes n=50, 100, 200, 300 and max
Here we train a few different models on 100 bootstrap samples. We aggregate the results of our workhorse function over `n_bootstrap`=100 bootstrap cases (run in parallel). The whole process is repeated for all sample sizes, feta...
%%time random_state = 42 n_bootstrap = 100 features = { 'netmats_parcor': netmats_parcor, 'netmats_pearson': netmats_pearson } models = { 'PCA_SVR': Pipeline([('pca', PCA(n_components=0.5)), ('svr', SVR())]) } # We aggregate all results here: df = pd.DataFrame(columns=['connect...
***************************************************************** netmats_parcor PCA_SVR age 50 0.18451221232892587 0.18901378266057708 Replicability at alpha = 0.05 : 57.14285714285714 % Replicability at alpha = 0.01 : 14.285714285714285 % Replicability at alpha = 0.005 : 0.0 % Replicability at alpha = 0.001 : 0.0 % *...
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
Now we fit a simple Ridge regression (no feature selection, no hyperparameter optimization). This can be expected to perform better than SVR on small samples.
%%time random_state = 42 n_bootstrap = 100 features = { 'netmats_parcor': netmats_parcor, 'netmats_pearson': netmats_pearson } models = { 'ridge': Ridge() } # We aggregate all results here: df = pd.DataFrame(columns=['connectivity','model','target','n','r_discovery_cv','r_discovery_overfit','r_replicati...
***************************************************************** netmats_parcor ridge age 50 0.24233370132686197 0.2609198136325508 Replicability at alpha = 0.05 : 58.536585365853654 % Replicability at alpha = 0.01 : 14.634146341463413 % Replicability at alpha = 0.005 : 12.195121951219512 % Replicability at alpha = 0....
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
Null scenario with random target
To evaluate false positives with biased estimates.
%%time random_state = 42 n_bootstrap = 100 features = { 'netmats_parcor': netmats_parcor, 'netmats_pearson': netmats_pearson } models = { 'PCA_SVR': Pipeline([('pca', PCA(n_components=0.5)), ('svr', SVR())]) } # We aggregate all results here: df = pd.DataFrame(columns=['connect...
***************************************************************** netmats_parcor Ridge age 50 -0.014348756240858624 -0.019865509971863777 Replicability at alpha = 0.05 : 0.0 % Replicability at alpha = 0.01 : 0.0 % Replicability at alpha = 0.005 : 0.0 % Replicability at alpha = 0.001 : 0.0 % ****************************...
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
*See the notebook called 'plot_results.ipynb' for the results.*
model = Pipeline([('pca', PCA(n_components=0.5)), ('svr', SVR())]) random_state = 42 cv = KFold(10, shuffle=True, random_state=random_state) bar_data_svr = [] for target_var in ['age', 'CogTotalComp_AgeAdj', 'PMAT24_A_CR', 'Flanker_AgeAdj', 'CardSort_AgeAdj', 'PicSeq_AgeAdj']: print(target_var) X, y = create_...
age r = 0.45 p = 0.001 R2 = 20.0 % CogTotalComp_AgeAdj r = 0.4 p = 0.001 R2 = 16.2 % PMAT24_A_CR r = 0.25 p = 0.001 R2 = 6.3 % Flanker_AgeAdj r = 0.16 p = 0.001 R2 = 2.6 % CardSort_AgeAdj r = 0.17 p = 0.001 R2 = 2.8 % PicSeq_AgeAdj r = 0.23 p = 0.001 R2 = 5.5 %
MIT
multivariate_BWAS_replicability_analysis_FC.ipynb
spisakt/BWAS_comment
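The R² percentages printed above are simply squared correlations converted to percent (the printed r values are rounded, so the squares match the notebook's R² only approximately). A quick check of the conversion:

```python
# r -> R2 as a percentage; e.g. r = 0.45 gives R2 of about 20 %
for r in (0.45, 0.40, 0.25, 0.16, 0.17, 0.23):
    print(f"r = {r:.2f}  ->  R2 ~ {100 * r ** 2:.1f} %")
```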
Setup
pip install -U plotly
Requirement already up-to-date: plotly in /usr/local/lib/python3.6/dist-packages (4.14.3) Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from plotly) (1.15.0) Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from...
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Make sure that the scikit-learn version is 0.24.1.
!pip install --user --upgrade scikit-learn==0.24.1 import sklearn print('The scikit-learn version is {}.'.format(sklearn.__version__)) # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os import pandas as pd import plot...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Data Raw Description
**Features**
- `age`: Age
- `workclass`: Working Class (Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked)
- `education_level`: Level of Education (Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters...
import requests

r = requests.get('https://raw.githubusercontent.com/ngoeldner/Machine-Learning-Project/main/finding_donors/census.csv')
file_path = '/content/census.csv'
f = open(file_path, 'wb')
f.write(r.content)
df = pd.read_csv(file_path)
df.head()
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
lower_snake_case
df = df.rename( columns={ 'education-num': 'education_num', 'marital-status': 'marital_status', 'capital-gain': 'capital_gain', 'capital-loss': 'capital_loss', 'hours-per-week': 'hours_per_week', 'native-country': 'native_country', 'income': 'y' } ) df.hea...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Data Analysis In this part, we take a quick glance at the whole dataset, then split it and look more carefully at the training dataset.
df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 45222 entries, 0 to 45221 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45222 non-null int64 1 workclass 45222 non-null object 2 education_level 45222...
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
So we have more categorical columns than numeric ones. Later we will analyse which cat->num transformation is most appropriate and whether we should do any feature engineering beyond it.
numeric_columns = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Split
df_used = df.copy() df_used['y'] = df_used['y'] == '>50K' df_used.head() X = df_used.drop(['y'], axis=1) y = df_used['y'] from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler X_train_npre, X_test_npre, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Analysis
print(df_train_npre['y'].count())
print(df_train_npre['y'].value_counts(normalize=True))
true_y_prop = df_train_npre['y'].value_counts(normalize=True)[True]
print(true_y_prop)
df_train_npre.describe()
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Here we see some odd values, for example, working 99 hours/week (about 14 hours a day, 7 days a week). The almost 100K capital gain is a possible value, but it is clearly an outlier. Keeping these values in the training dataset may still help the model, unless we know they were generated from wrong da...
pd.options.plotting.backend = "matplotlib"  # plotly
df_train_npre[numeric_columns].hist(bins=50, figsize=(20, 15))
plt.show()
pd.options.plotting.backend = "plotly"  # plotly
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
In the next cells, we can notice the large number of instances with 0 in capital_gain and in capital_loss.
print(df_train_npre['capital_gain'].value_counts())
print(df_train_npre['capital_gain'].value_counts()[0] / df_train_npre.count()['capital_gain'])
print(df_train_npre['capital_loss'].value_counts())
print(df_train_npre['capital_loss'].value_counts()[0] / df_train_npre.count()['capital_loss'])
df_train_npre.head()
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Proportion of true values of y in each class
def plot_true_porcent(column): return ( df_train_npre .groupby( [column] ) ['y'] .value_counts(normalize=True) .xs(True, level=1) .plot(kind='bar') ) df_train_npre.groupby(['workclass'])['y'].value_counts(normalize=True) plot_true_porcent('work...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Preprocessing and Feature Engineering Pipeline
from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder, StandardScaler, FunctionTransformer from sklearn.impute import SimpleImputer from sklearn.base import BaseEstimator, TransformerMixin categorical_columns = ['workclass', 'marital_status',...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Partial pipeline to get the names of the new features
partial_pipeline = ColumnTransformer([ # ("num", StandardScaler(), numeric_columns), ("cat", OneHotEncoder(sparse=False, handle_unknown='ignore'), categorical_columns), # ('edu_level', MyCat2Num('education_level'), ['education_level']), # ('nat_country', MyCat2Num('native_country'), ['na...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Full Pipeline
class MyCat2Num(BaseEstimator, TransformerMixin): def __init__(self, column): # no *args or **kwargs self.column = column def fit(self, X, y): self.df_Xy = pd.concat([X,y], axis=1) self.column_y_true = (1-self.df_Xy.groupby([self.column])['y'].value_counts(normalize=True).xs(False, leve...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
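The custom `MyCat2Num` transformer above is a form of target encoding: each category is replaced by the fraction of positive targets observed for it in the training data. A minimal, pandas-free sketch of the idea (the `fit_target_encoding` helper is hypothetical, not the notebook's class):

```python
from collections import defaultdict

def fit_target_encoding(categories, targets):
    """Map each category to the mean of a boolean target within that category."""
    totals = defaultdict(lambda: [0, 0])  # category -> [positives, count]
    for cat, y in zip(categories, targets):
        totals[cat][0] += int(y)
        totals[cat][1] += 1
    return {cat: pos / n for cat, (pos, n) in totals.items()}

enc = fit_target_encoding(
    ['Private', 'State-gov', 'Private', 'Private'],
    [True, False, False, True],
)
print(enc)  # Private -> 2/3, State-gov -> 0.0
```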
Model Training
In this part, we train different models using GridSearchCV to search for the best hyperparameters. Since the dataset is not very large, we can test many hyperparameter combinations for each model.
from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier from sklearn.model_selection import GridSearchCV, StratifiedKFold from sklearn.svm import LinearSVC, SVC from sklearn.tree import D...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
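Grid search simply evaluates every combination of the listed hyperparameter values and keeps the best-scoring one. A library-free sketch of the loop GridSearchCV runs (the toy scoring function is a stand-in for cross-validated scoring):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Return the best-scoring parameter combination from a dict of value lists."""
    names = list(param_grid)
    best_params, best_score = None, float('-inf')
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# toy score peaking at alpha=0.001, l1_ratio=0.1 (illustrative only)
toy_score = lambda alpha, l1_ratio: -abs(alpha - 0.001) - abs(l1_ratio - 0.1)
best, score = grid_search(
    {'alpha': [0.00001, 0.0001, 0.001], 'l1_ratio': [0.05, 0.1, 0.3]},
    toy_score,
)
print(best)  # -> {'alpha': 0.001, 'l1_ratio': 0.1}
```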
SGDClassifier
parameters = {'estimator__l1_ratio':[0.025, 0.05, 0.1, 0.3, 0.9, 1], 'estimator__alpha':[0.00001, 0.0001, 0.001]} sgd_class = SGDClassifier(random_state=0, max_iter=200, penalty='elasticnet') preproc_sgd_class = Pipeline(steps= [('preproc', full_pipeline),('estimator', sgd_class)] , verbose=3) sgd_class_gscv = GridSear...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Logistic Regression
parameters = {'estimator__l1_ratio':[0.05, 0.1, 0.3,0.6, 0.9, 1], 'estimator__C':[0.1,1,10]} log_reg = LogisticRegression(solver='saga',penalty='elasticnet', random_state=0, max_iter=200) preproc_logreg = Pipeline(steps= [('preproc', full_pipeline),('estimator', log_reg)] , verbose=3) log_reg_gscv = GridSearchCV(estima...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Linear SVM
parameters = {'estimator__loss':['squared_epsilon_insensitive'], 'estimator__C':[0.00001,0.0001,0.001,0.01,0.1,1,10,100,1000]} # parameters = {'estimator__loss':['squared_epsilon_insensitive'], 'estimator__C':[1]} lin_svc = LinearSVC(dual=False, random_state=0) preproc_linsvc = Pipeline(steps= [('preproc', full_pipelin...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Nonlinear
# parameters = [{'kernel':['poly'], 'C':[0.001,0.01,0.1,1,10,100,300], 'degree':[2,3,4,5,6,7,8]}, # {'kernel':['rbf'], 'C':[0.001,0.01,0.1,1,10,100,300]}, # {'kernel':['sigmoid'], 'C':[0.001,0.01,0.1,1,10,100,300]} # ] parameters = [{'estimator__kernel':['poly'], 'est...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Random Forest
# parameters = {'max_leaf_nodes':[700, 800,850, 900,950, 1000]} parameters = {'estimator__n_estimators':[100, 200, 300, 400]} rf = RandomForestClassifier(random_state=0, max_leaf_nodes=950) preproc_rf = Pipeline(steps= [('preproc', full_pipeline),('estimator', rf)] , verbose=4) rf_gscv = GridSearchCV(estimator=preproc_...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Extra-Trees
# parameters = {'max_leaf_nodes':[1500, 1750, 2000, 2250, 2500]} parameters = {'estimator__n_estimators':[100, 200, 300, 400]} et = ExtraTreesClassifier(random_state=0, max_leaf_nodes=2000) preproc_et = Pipeline(steps= [('preproc', full_pipeline),('estimator', et)] , verbose=3) et_gscv = GridSearchCV(estimator=preproc_...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
AdaBoost
parameters = {'estimator__n_estimators':[100,300], 'estimator__learning_rate' : [1,2,3]} dt = DecisionTreeClassifier(random_state=0) ada = AdaBoostClassifier(base_estimator=dt, random_state=0) preproc_ada = Pipeline(steps= [('preproc', full_pipeline),('estimator', ada)] , verbose=4) ada_gscv = GridSearchCV(estimator=pr...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Gradient Boosting
parameters = {'estimator__n_estimators':[400, 500],'estimator__max_depth':[2,3], 'estimator__learning_rate' : [0.1, 1]} gb = GradientBoostingClassifier(random_state=0, loss='deviance', subsample=0.8) preproc_gb = Pipeline(steps= [('preproc', full_pipeline),('estimator', gb)] , verbose=4) gb_gscv = GridSearchCV(estima...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
XGBoost
parameters = { 'estimator__n_estimators' : [500], #400], "estimator__eta" : [0.05],# 0.10, 1],#0.15, 0.20, 0.25, 0.30 ], "estimator__max_depth" : [4],#3, 5, 6, 8],#, 10, 12, 15], "estimator__gamma" : [0.2],# 0.0,0.2 , 0.3, 0.4 ], #"colsample_bytree" : [ 0.3]#,...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
Final Evaluation, Final Training and Saving the Model We have chosen the SGDClassifier for the final model because it presented a good roc_auc score, which was not so different from the score of a much more complex model, the XGBoost. Also, the SGDClassifier allows online learning. Here, we evaluate the final model i...
from sklearn.metrics import roc_auc_score from sklearn.base import clone best_model = sgd_class_gscv.best_estimator_ y_pred = best_model.predict(X_test_npre) roc_auc_score(y_test, y_pred) sgd_class_gscv.best_params_ final_model = clone(sgd_class_gscv.best_estimator_) final_model.fit(X, y) import joblib filename = 'sgd_...
_____no_output_____
MIT
Charity.ipynb
robsonzagrejr/hobs_nico_charity
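The roc_auc score used above equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A small library-free sketch of that pairwise interpretation (assumes both classes are present; ties count as half):

```python
def roc_auc(y_true, scores):
    """AUC as the normalized count of correctly ordered positive/negative pairs."""
    pos = [s for y, s in zip(y_true, scores) if y]
    neg = [s for y, s in zip(y_true, scores) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([True, True, False, False], [0.9, 0.4, 0.6, 0.2]))  # -> 0.75
```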
Create a Pipeline
You can perform the various steps required to ingest data, train a model, and register the model individually by using the Azure ML SDK to run script-based experiments. However, in an enterprise environment it is common to encapsulate the sequence of discrete steps required to build a machine learning...
import azureml.core
from azureml.core import Workspace

# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
Ready to use Azure ML 1.26.0 to work with mls-dp100
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Prepare data
In your pipeline, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if you created it previously, the code will find the existing version).
from azureml.core import Dataset default_ds = ws.get_default_datastore() if 'diabetes dataset' not in ws.datasets: default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data target_path='diabetes-data/', # Put it in a folder path...
Dataset already registered.
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Create scripts for pipeline steps
Pipelines consist of one or more *steps*, which can be Python scripts, or specialized steps like a data transfer step that copies data from one location to another. Each step can run in its own compute context. In this exercise, you'll build a simple pipeline that contains two Python s...
import os

# Create a folder for the pipeline step files
experiment_folder = 'diabetes_pipeline'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder)
diabetes_pipeline
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Now let's create the first script, which will read data from the diabetes dataset and apply some simple pre-processing to remove any rows with missing data and normalize the numeric features so they're on a similar scale. The script includes an argument named **--prepped-data**, which references the folder where the resu...
%%writefile $experiment_folder/prep_diabetes.py # Import libraries import os import argparse import pandas as pd from azureml.core import Run from sklearn.preprocessing import MinMaxScaler # Get parameters parser = argparse.ArgumentParser() parser.add_argument("--input-data", type=str, dest='raw_dataset_id', help='raw...
Writing diabetes_pipeline/prep_diabetes.py
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Now you can create the script for the second step, which will train a model. The script includes an argument named **--training-folder**, which references the folder where the prepared data was saved by the previous step.
%%writefile $experiment_folder/train_diabetes.py # Import libraries from azureml.core import Run, Model import argparse import pandas as pd import numpy as np import joblib import os from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import roc_auc_...
Writing diabetes_pipeline/train_diabetes.py
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Prepare a compute environment for the pipeline steps
In this exercise, you'll use the same compute for both steps, but it's important to realize that each step is run independently; so you could specify different compute contexts for each step if appropriate. First, get the compute target you created in a previous lab (...
from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException cluster_name = "alazureml-cc0408" try: # Check for existing compute target pipeline_cluster = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing cluster, use it.') ex...
Found existing cluster, use it.
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
The compute will require a Python environment with the necessary package dependencies installed, so you'll need to create a run configuration.
from azureml.core import Environment from azureml.core.conda_dependencies import CondaDependencies from azureml.core.runconfig import RunConfiguration # Create a Python environment for the experiment diabetes_env = Environment("diabetes-pipeline-env") diabetes_env.python.user_managed_dependencies = False # Let Azure M...
'enabled' is deprecated. Please use the azureml.core.runconfig.DockerConfiguration object with the 'use_docker' param instead.
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Create and run a pipeline
Now you're ready to create and run a pipeline. First you need to define the steps for the pipeline, and any data references that need to be passed between them. In this case, the first step must write the prepared data to a folder that can be read by the second step. Since the steps will b...
from azureml.pipeline.core import PipelineData from azureml.pipeline.steps import PythonScriptStep # Get the training dataset diabetes_ds = ws.datasets.get("diabetes dataset") # Create a PipelineData (temporary Data Reference) for the model folder prepped_data_folder = PipelineData("prepped_data_folder", datastore=ws...
Pipeline steps defined
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
OK, you're ready to build the pipeline from the steps you've defined and run it as an experiment.
from azureml.core import Experiment from azureml.pipeline.core import Pipeline from azureml.widgets import RunDetails # Construct the pipeline pipeline_steps = [prep_step, train_step] pipeline = Pipeline(workspace=ws, steps=pipeline_steps) print("Pipeline is built.") # Create an experiment and run the pipeline experi...
Pipeline is built. Created step Prepare Data [50c39f8a][74934260-06aa-4c4d-a4cd-230f602c6eb9], (This step will run and generate new outputs) Created step Train and Register Model [6b28353a][1873f725-ebd6-4cab-a043-38d79745045a], (This step will run and generate new outputs) Submitted PipelineRun 5e79d151-67ed-40d1-9ebc...
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
A graphical representation of the pipeline experiment will be displayed in the widget as it runs. Keep an eye on the kernel indicator at the top right of the page; when it turns from **⚫** to **◯**, the code has finished running. You can also monitor pipeline runs in the **Experiments** page in [Azure Machine...
for run in pipeline_run.get_children():
    print(run.name, ':')
    metrics = run.get_metrics()
    for metric_name in metrics:
        print('\t', metric_name, ":", metrics[metric_name])
Train and Register Model : Accuracy : 0.9004444444444445 AUC : 0.8859105592722003 ROC : aml://artifactId/ExperimentRun/dcid.1d216aff-706b-4b92-a858-5de6e744d173/ROC_1617931737.png Prepare Data : raw_rows : 15000 processed_rows : 15000
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Assuming the pipeline was successful, a new model should be registered with a *Training context* tag indicating it was trained in a pipeline. Run the following code to verify this.
from azureml.core import Model for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',p...
diabetes_model version: 8 Training context : Pipeline AUC : 0.8859105592722003 Accuracy : 0.9004444444444445 diabetes_model version: 7 Training context : Inline Training AUC : 0.8760759241753321 Accuracy : 0.8876666666666667 diabetes_model version: 6 Training context : Inline Training AUC : 0.875108...
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Publish the pipeline
After you've created and tested a pipeline, you can publish it as a REST service.
# Publish the pipeline from the run
published_pipeline = pipeline_run.publish_pipeline(
    name="diabetes-training-pipeline", description="Trains diabetes model", version="1.0")
published_pipeline
_____no_output_____
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Note that the published pipeline has an endpoint, which you can see in the **Endpoints** page (on the **Pipeline Endpoints** tab) in [Azure Machine Learning studio](https://ml.azure.com). You can also find its URI as a property of the published pipeline object:
rest_endpoint = published_pipeline.endpoint
print(rest_endpoint)
https://eastasia.api.azureml.ms/pipelines/v1.0/subscriptions/c0a4d868-4fa1-4023-b058-13dfc12ea9be/resourceGroups/rg-dp100/providers/Microsoft.MachineLearningServices/workspaces/mls-dp100/PipelineRuns/PipelineSubmit/27573b5e-5a7c-4296-9439-29b2d6a03b52
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Call the pipeline endpoint
To use the endpoint, client applications need to make a REST call over HTTP. This request must be authenticated, so an authorization header is required. A real application would authenticate using a service principal, but to test this out, we'll use the authorization heade...
from azureml.core.authentication import InteractiveLoginAuthentication

interactive_auth = InteractiveLoginAuthentication()
auth_header = interactive_auth.get_authentication_header()
print("Authentication header ready.")
Authentication header ready.
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Now we're ready to call the REST interface. The pipeline runs asynchronously, so we'll get an identifier back, which we can use to track the pipeline experiment as it runs:
import requests

experiment_name = 'mslearn-diabetes-pipeline'
rest_endpoint = published_pipeline.endpoint
response = requests.post(rest_endpoint, headers=auth_header, json={"ExperimentName": experiment_name})
run_id = response.json()["Id"]
run_id
_____no_output_____
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Since you have the run ID, you can use it to wait for the run to complete.
> **Note**: The pipeline should complete quickly, because each step was configured to allow output reuse. This was done primarily for convenience and to save time in this course. In reality, you'd likely want the first step to run every time in c...
from azureml.pipeline.core.run import PipelineRun

published_pipeline_run = PipelineRun(ws.experiments[experiment_name], run_id)
published_pipeline_run.wait_for_completion(show_output=True)
PipelineRunId: 2285dce2-df5b-4c9f-86b7-7e51618aa387 Link to Azure Machine Learning Portal: https://ml.azure.com/runs/2285dce2-df5b-4c9f-86b7-7e51618aa387?wsid=/subscriptions/c0a4d868-4fa1-4023-b058-13dfc12ea9be/resourcegroups/rg-dp100/workspaces/mls-dp100&tid=ffb6df9b-a626-4119-8765-20cd966f4661 PipelineRun Status: Run...
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Schedule the Pipeline
Suppose the clinic for the diabetes patients collects new data each week, and adds it to the dataset. You could run the pipeline every week to retrain the model with the new data.
from azureml.pipeline.core import ScheduleRecurrence, Schedule # Submit the Pipeline every Monday at 00:00 UTC recurrence = ScheduleRecurrence(frequency="Week", interval=1, week_days=["Monday"], time_of_day="00:00") weekly_schedule = Schedule.create(ws, name="weekly-diabetes-training", ...
Pipeline scheduled.
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
You can retrieve the schedules that are defined in the workspace like this:
schedules = Schedule.list(ws)
schedules
_____no_output_____
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
You can check the latest run like this:
pipeline_experiment = ws.experiments.get('mslearn-diabetes-pipeline')
latest_run = list(pipeline_experiment.get_runs())[0]
latest_run.get_details()
_____no_output_____
MIT
08 - Create a Pipeline.ipynb
MeteorF/MS-AZ-DP100-Labs
Prepare Data to be Analyzed with Modulos AutoML
Note: For all of these operations to work, we rely on the data being sorted, as is done in the notebook DataCleaning.ipynb.
Imports
import pandas as pd
import matplotlib.pyplot as plt
import glob
import os
from IPython.display import display
import tqdm
from collections import Counter
import matplotlib

pd.options.display.max_columns = None

from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()

import numpy as np
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Configure path variables and number of samples
# Path where the cleaned data is stored fpath_clean_data_dir = 'clean_data/' # Path where the data ready for the ML analysis is stored and filename of output file fpath_prepared_data_dir = 'ready_data/' foldername_prepared_data = 'ai_basic_all/' # Number of unique Cow IDs to consider (the computation is very slow # I...
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Data Loading
Load all relevant tables into one dictionary. Note that we are not considering hm_BCS and hm_pregnancy in this first implementation.
# Columns with datetime entries & file names datetime_cols = {#'hm_BCS': ['BCS_date'], 'hm_lactation': ['calving_date'], 'hm_NSAIET': ['nsaiet_date'], 'hm_animal': ['birth_date'], 'hm_milkrecording': ['mlksmpl_date', 'lab_date'], 'hm_...
----- Reading in hm_lactation.csv ----- parity calving_date calving_ease idani_anon 0 1 2018-09-06 2 CHE000000000561 1 2 2019-09-15 2 CHE000000000561 2 1 2016-09-07 2 CHE000000000781 3 2 2017-08-05 1 CHE000000000781 4 3 201...
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
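The `datetime_cols` mapping above tells `pandas.read_csv` which columns to parse as timestamps. As a minimal illustration of that mechanism (the CSV text and column names here are a made-up stand-in for the real files):

```python
import io

import pandas as pd

# Hypothetical miniature of one of the cleaned CSV files
csv_text = "idani_anon,calving_date\nCHE000000000561,2018-09-06\n"

# parse_dates converts the listed columns to datetime64 while reading
df = pd.read_csv(io.StringIO(csv_text), parse_dates=['calving_date'])
print(df.dtypes)
```

Parsing at read time avoids a separate `pd.to_datetime` pass over each table afterwards.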
Data Manipulation & Enhancement
Remove all parity = 0 entries (i.e. inseminations recorded before the cow has even given birth and produced milk)
orig_rows = data_frames['hm_NSAIET'].shape[0] mask = np.argwhere(data_frames['hm_NSAIET']['parity'].values == 0).flatten() data_frames['hm_NSAIET'] = data_frames['hm_NSAIET'].drop(mask, axis=0).reset_index(drop=True) print('Removed {:} entries ({:.2f}%)'.format(orig_rows-data_frames['hm_NSAIET'].shape[0], ...
Removed 329889 entries (24.15%)
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
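The same filter can be expressed with plain boolean indexing instead of `np.argwhere` plus `drop`. A sketch on a hypothetical miniature of `hm_NSAIET` (the data here is invented for illustration):

```python
import pandas as pd

# Hypothetical miniature version of hm_NSAIET
df = pd.DataFrame({'idani_anon': ['A', 'A', 'B', 'B'],
                   'parity': [0, 1, 0, 2]})

# Boolean indexing keeps only rows with a non-zero parity
filtered = df[df['parity'] != 0].reset_index(drop=True)
print(filtered)
```

The result is identical to the mask-and-drop approach, with one fewer intermediate array.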
List the unique cow IDs by taking the intersection of all the tables that provide the necessary inputs for the prediction
# Tables necessary for the prediction ('hm_health' doesn't contain many cows and # one would have to throw away much data) fnames_necessary = [fname for fname in fnames if fname != 'hm_health'] # Select subset unique_cow_ids = [set(data_frames[fname]['idani_anon'].values) for fname in fnames_necessary] unique_cow_ids ...
Number of individual cows in sample: 180005
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
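The intersection step boils down to `set.intersection` over one ID set per table. A self-contained sketch with invented IDs:

```python
# Hypothetical ID sets extracted from three tables
ids_lactation = {'CHE01', 'CHE02', 'CHE03'}
ids_nsaiet = {'CHE02', 'CHE03', 'CHE04'}
ids_animal = {'CHE02', 'CHE03'}

# set.intersection keeps only cows present in every table
id_sets = [ids_lactation, ids_nsaiet, ids_animal]
unique_cow_ids = sorted(set.intersection(*id_sets))
print(unique_cow_ids)  # -> ['CHE02', 'CHE03']
```

Sorting the result makes the downstream per-cow loop deterministic.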
Convert parity to labels (= column used for prediction)
If the same parity number occurs multiple times, only the one with the most recent time stamp is considered a success. The others are considered failures. Parities that appear only once are considered a success by default.
def parity_to_label_for_single_cow(df): """ Function to return a new column called 'parity_labels', which contains True/False depending on the outcome of the artificial insemination. :param df: Subset of a Pandas dataframe containing all the relevant entries for a single cow :return: Column wit...
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
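The labeling rule above — within each parity, only the most recent insemination succeeded — can also be written with a `groupby`/`transform`. This is a sketch on invented records for a single cow, not the notebook's actual function:

```python
import pandas as pd

# Hypothetical insemination records for one cow:
# parity 1 was inseminated twice, parity 2 once
df = pd.DataFrame({
    'parity': [1, 1, 2],
    'nsaiet_date': pd.to_datetime(['2018-01-01', '2018-02-01', '2019-01-01']),
})

# Within each parity, only the most recent insemination counts as a success
latest = df.groupby('parity')['nsaiet_date'].transform('max')
df['parity_labels'] = (df['nsaiet_date'] == latest).astype(int)
print(df)
```

Parities that appear only once are automatically their own maximum, so they get label 1, matching the "success by default" rule.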
Convert labels for all cows (using unique_cow_ids from above)
ids_to_remove = 0 parity_labels_all = np.zeros(data_frames['hm_NSAIET'].shape[0], dtype=np.int) for cow_id in tqdm.tqdm(unique_cow_ids): left = data_frames['hm_NSAIET']["idani_anon"].searchsorted(cow_id, 'left') right = data_frames['hm_NSAIET']["idani_anon"].searchsorted(cow_id, 'right') single_cow = ...
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 180005/180005 [00:33<00:00, 5410.62it/s]
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Display all dataframes individually (sanity check)
data_frames['hm_lactation']
data_frames['hm_NSAIET']
data_frames['hm_animal']
data_frames['hm_milkrecording']
data_frames['hm_ebv']
data_frames['hm_health']
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Functions to combine hm_NSAIET with other datasets
def combine_nsaeit_with_milkrecording_single_cow(df_nsaiet, df_milkrec, columns_both='idani_anon'): """ Function combining the dataframes hm_NSAIET and hm_milkrecording for a single cow ID. The tables are combined such that for every insemination, the date of the previous milkrecording is chosen. :...
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
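Choosing, for every insemination, the date of the previous milk recording is the shape of a backward as-of merge, which pandas provides as `merge_asof`. A sketch on invented dates for a single cow (not the notebook's actual per-cow function, which loops over IDs explicitly):

```python
import pandas as pd

# Hypothetical inseminations and milk recordings for one cow
inseminations = pd.DataFrame({
    'nsaiet_date': pd.to_datetime(['2018-03-10', '2018-07-01']),
})
milk = pd.DataFrame({
    'mlksmpl_date': pd.to_datetime(['2018-03-01', '2018-06-15']),
    'milk_kg': [28.0, 25.5],
})

# direction='backward' picks, for each insemination, the most recent
# earlier milk recording (both frames must be sorted on their keys)
combined = pd.merge_asof(inseminations, milk,
                         left_on='nsaiet_date', right_on='mlksmpl_date',
                         direction='backward')
print(combined)
```

On real data one would add `by='idani_anon'` so the as-of matching stays within each cow.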
Merge all dataframes
datetime_cols = {#'hm_BCS': ['BCS_date'], 'hm_lactation': ['calving_date'], 'hm_NSAIET': ['nsaiet_date'], 'hm_animal': ['birth_date'], 'hm_milkrecording': ['mlksmpl_date', 'lab_date'], 'hm_ebv': False, # 'hm_pregnancy...
0%| | 341/180005 [00:52<7:37:04, 6.55it/s]
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Add columns with age and days since calving, drop datetime columns
# Add columns (deltas between dates) df_merged_all['age'] = (df_merged_all['nsaiet_date'] - df_merged_all['birth_date']).values // np.timedelta64(1, 'D') df_merged_all['days_since_calving'] = (df_merged_all['nsaiet_date'] - df_merged_all['calving_date']).values // np.timedelta64(1, 'D') df_merged_all['days_since_mlksam...
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
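The date-delta trick used above — floor-dividing a timedelta array by `np.timedelta64(1, 'D')` — yields integer day counts. A self-contained sketch with an invented pair of dates:

```python
import numpy as np
import pandas as pd

# Hypothetical birth and insemination dates
dates = pd.DataFrame({
    'birth_date': pd.to_datetime(['2015-05-01']),
    'nsaiet_date': pd.to_datetime(['2018-05-01']),
})

# Floor-divide the timedelta by one day to get an integer age in days
age_days = (dates['nsaiet_date'] - dates['birth_date']).values // np.timedelta64(1, 'D')
print(age_days)  # -> [1096]  (three years including the 2016 leap day)
```

The integer result drops the datetime dtype entirely, which is what the AutoML input format needs.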
Save data, create a dataset structure file for the AutoML platform, and tar the dataset
Save dataset
folderpath = fpath_prepared_data_dir + foldername_prepared_data
df_merged_all.to_csv(folderpath + 'data.csv', index=False)
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
Save dataset structure file (DSSF), which is needed for the AutoML analysis
# Content of DSSF dssf_string = ['[', ' {', ' \"name\": \"{}\",'.format(foldername_prepared_data[:-1]), ' \"path\": \"data.csv\",', ' \"type\": \"table\"', ' },', ' {', ' \"_vers...
[ { "name": "ai_basic_all", "path": "data.csv", "type": "table" }, { "_version": "0.1" } ]
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
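Rather than concatenating manually escaped strings, the same DSSF content can be produced with the standard `json` module, which guarantees valid JSON. A sketch assuming the same structure as the file printed above:

```python
import json

# Same structure as the DSSF shown in the output above
dssf = [
    {'name': 'ai_basic_all', 'path': 'data.csv', 'type': 'table'},
    {'_version': '0.1'},
]

# json.dumps produces valid, consistently escaped JSON text
dssf_text = json.dumps(dssf, indent=4)
print(dssf_text)
```

Writing it to disk is then `json.dump(dssf, open(path, 'w'), indent=4)` instead of joining quoted fragments by hand.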
Create a tarball of all the contents
!tar -cf {fpath_prepared_data_dir}{foldername_prepared_data[:-1]}.tar -C {fpath_prepared_data_dir} {foldername_prepared_data[:-1]}
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
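The shell `tar` call can also be done in pure Python with the standard `tarfile` module, which avoids depending on a system binary. A sketch that builds a throwaway stand-in for the `ready_data/ai_basic_all` folder (paths here are temporary, not the notebook's real ones):

```python
import os
import tarfile
import tempfile

# Build a throwaway dataset folder to archive (stand-in for ready_data/ai_basic_all)
workdir = tempfile.mkdtemp()
dataset_dir = os.path.join(workdir, 'ai_basic_all')
os.makedirs(dataset_dir)
with open(os.path.join(dataset_dir, 'data.csv'), 'w') as f:
    f.write('a,b\n1,2\n')

# Equivalent of: tar -cf ai_basic_all.tar -C <workdir> ai_basic_all
tar_path = os.path.join(workdir, 'ai_basic_all.tar')
with tarfile.open(tar_path, 'w') as tar:
    tar.add(dataset_dir, arcname='ai_basic_all')

with tarfile.open(tar_path) as tar:
    print(tar.getnames())
```

`arcname` plays the role of the `-C` flag: it makes the archive store paths relative to the dataset folder rather than the absolute temp path.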
Prepare a file for a regression task (predict optimal date for insemination)
foldername_prepared_data = 'ai_basic_all_predict_date/' !mkdir -p {fpath_prepared_data_dir}{foldername_prepared_data} # Remove all non-successful inseminations mask = df_merged_all['parity_labels'].values == 0 df_merged_subset = df_merged_all.drop(np.arange(mask.size)[mask], axis=0).reset_index(drop=True) folderpath =...
_____no_output_____
MIT
DataPreparation.ipynb
schoolofdata-ch/openfarming-Decision-Support
"Julia"> "Basics of the julia language"- author: Christopher Thiemann- toc: true- branch: master- badges: true- comments: true- categories: [julia ]- hide: true- search_exclude: true
answer = 43
x = 2
1 < x < 3
M = [1 0; 0 1]
typeof(size(M))
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Basics
Assignment
answer = 42
x, y, z = 1, [1:10; ], "A string"  # just like in Python!
x, y = y, x  # swap x and y
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Declaring Constants
const DATE_OF_BIRTH = 2012
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Commenting
1 + 1 # Hello, this is a comment!
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Delimited comment
1 + #= This comment is inside code! =# 1
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Chaining
x = y = z = 1  # right-to-left
0 < x < 3  # works
z = 10
b = 2
x < y < z < b  # works too!
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Function definition
function add_one(i)
    return i + 1  # Just a bit different to Python.
end

add_one(2)
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Insert LaTeX symbols
Now this is a cool feature: in a code cell, type for example \alpha followed by Tab.
Ξ² = 1
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Operators
Basic arithmetic works as expected
println(1 + 1)
println(1 - 3)
println(3 * 3)
println(4 / 2)
2 -2 9 2.0
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog