| repo_name | path | license | cells | types |
|---|---|---|---|---|
regardscitoyens/consultation_an
|
exploitation/analyse_quanti_theme5.ipynb
|
agpl-3.0
|
[
"%matplotlib inline\n\nimport json\nimport pandas as pd",
"Reading the data",
"def loadContributions(file, withsexe=False):\n contributions = pd.read_json(path_or_buf=file, orient=\"columns\")\n rows = [];\n rindex = [];\n for i in range(0, contributions.shape[0]):\n row = {};\n row['id'] = contributions['id'][i]\n rindex.append(contributions['id'][i])\n if (withsexe):\n if (contributions['sexe'][i] == 'Homme'):\n row['sexe'] = 0\n else:\n row['sexe'] = 1\n for question in contributions['questions'][i]:\n if (question.get('Reponse')) and question['texte'][0:5] != 'Conna' and question['titreQuestion'][-2:] != '34':\n row[question['titreQuestion']+' : '+question['texte']] = 1\n for criteres in question.get('Reponse'):\n # print(criteres['critere'].keys())\n row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1\n rows.append(row)\n df = pd.DataFrame(data=rows)\n df.fillna(0, inplace=True)\n return df\n\ndf = loadContributions('../data/EGALITE5.brut.json', True)\ndf.fillna(0, inplace=True)\ndf.index = df['id']\n#df.to_csv('consultation_an.csv', format='%d')\n#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']\ndf.head()",
"Build clustering model\nHere we build a k-means model and select the \"optimal\" number of clusters.\nHere we see that the optimal number of clusters is 2.",
"from sklearn.cluster import KMeans\nfrom sklearn import metrics\nimport numpy as np\nX = df.drop('id', axis=1).values\n\ndef train_kmeans(nb_clusters, X):\n kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)\n return kmeans\n#print(kmeans.predict(X))\n#kmeans.cluster_centers_\n\n\ndef select_nb_clusters():\n perfs = {};\n for nbclust in range(2,10):\n kmeans_model = train_kmeans(nbclust, X);\n labels = kmeans_model.labels_\n # from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index\n # we are in an unsupervised model. cannot get better!\n # perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);\n perfs[nbclust] = metrics.silhouette_score(X, labels);\n print(perfs);\n return perfs;\n\n\ndf['clusterindex'] = train_kmeans(4, X).predict(X)\n#df \n\nperfs = select_nb_clusters();\n# result :\n# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}\n\noptimal_nb_clusters = max(perfs, key=perfs.get);\n\nprint(\"optimal_nb_clusters\" , optimal_nb_clusters);",
"Build the optimal model and apply it",
"km_model = train_kmeans(optimal_nb_clusters, X);\ndf['clusterindex'] = km_model.predict(X)\nlGroupBy = df.groupby(['clusterindex']).mean();\n\ncluster_profile_counts = df.groupby(['clusterindex']).count();\ncluster_profile_means = df.groupby(['clusterindex']).mean();\nglobal_counts = df.count()\nglobal_means = df.mean()\n\n\n\n\ncluster_profile_counts.head(10)\n\n\ndf_profiles = pd.DataFrame();\nnbclusters = cluster_profile_means.shape[0]\ndf_profiles['clusterindex'] = range(nbclusters)\nfor col in cluster_profile_means.columns:\n if(col != \"clusterindex\"):\n df_profiles[col] = np.zeros(nbclusters)\n for cluster in range(nbclusters):\n df_profiles[col][cluster] = cluster_profile_means[col][cluster]\n# row.append(df[col].mean());\ndf_profiles.head()\n\n#print(df_profiles.columns) \n\nintereseting_columns = {};\nfor col in df_profiles.columns:\n if(col != \"clusterindex\"):\n global_mean = df[col].mean()\n diff_means_global = abs(df_profiles[col] - global_mean). max();\n # print(col , diff_means_global)\n if(diff_means_global > 0.05):\n intereseting_columns[col] = True\n \n#print(intereseting_columns)\n\n\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Cluster Profiles\nHere, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.\nSince this model is based on binary inputs, the best description of the clusters is the distribution of zeros and ones for each input (question).\nThe figure below gives the cluster profiles of this model, cluster 0 on the left and cluster 1 on the right. The questions where the clusters differ most have the highest bars.",
"interesting = list(intereseting_columns.keys())\ndf_profiles_sorted = df_profiles[interesting].sort_index(axis=1)\ndf_profiles_sorted.plot.bar(figsize =(1, 1))\ndf_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)\n\n\ndf_profiles_sorted.T\n\n#df_profiles.sort_index(axis=1).T",
"Analysis\nTheme 5: Protection against domestic violence\nTwo groups of people emerge:\n - 502 people who do not know about the \"téléphone grand danger\" (an emergency phone for victims in grave danger)\n - 336 who know about it\nThose who know about it are less likely to feel there is an information deficit about this measure, approve more of geolocating victims, and know about the protection-order provision, of which they are also more in favor. They are more supportive (though still a minority) of reserving occupancy of the home for the spouse who is the victim of the violence.\nPeople who do not know about the \"téléphone grand danger\" tend to attach somewhat more importance to legal aid and to psychological violence.\nSetting aside knowledge of the \"téléphone grand danger\" (and whether the open-ended question was answered), there are 4 groups:\n - 95 people with no particular profile\n - 276 people who think the protection order is a well-suited provision. They think information about the order is lacking, value being able to conceal one's address, and find geolocation appropriate\n - 353 people who find the protection-order provision poorly suited. They point to the lack of information about this provision and about the \"téléphone grand danger\", and place more value on better accounting for psychological violence\n - 114 people who think the level of information about the protection order is adequate. This subsample contains proportionally more men than the others."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
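The silhouette-based model selection used in the notebook above can be reproduced on synthetic data. This is a sketch, not the notebook's code: the two-blob data, the `k` range, and `n_init=10` are assumptions made for illustration; only the silhouette-score selection loop mirrors the original.

```python
# Sketch of the notebook's select_nb_clusters() idea on synthetic data.
# The two well-separated blobs are an assumption made for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn import metrics

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 4)),   # blob around 0
               rng.normal(1.0, 0.1, size=(50, 4))])  # blob around 1

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)
    scores[k] = metrics.silhouette_score(X, labels)   # higher is better

best_k = max(scores, key=scores.get)
print(best_k)  # 2, matching the two generated blobs
```

As in the notebook, the "optimal" number of clusters is simply the argmax of the score dictionary.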
ogaway/Game-Theory
|
StochasticEvolution/SimultaneousRevisions.ipynb
|
gpl-3.0
|
[
"What are simultaneous revisions?\n\"In each period t, all N players can change their action (if they want to).\" \nCoordination game\nConsider each player choosing a strategy under the following payoff table. \n[(4, 4), (0, 3)]\n[(3, 0), (2, 2)]\nThe mixed-strategy Nash equilibria are:\n(1, 0), (1, 0)\n(2/3, 1/3), (2/3, 1/3)\n(0, 1), (0, 1)\nSuppose the total population N is 5 and 3 people currently take strategy 1.\nThen, if we allow a player to be matched with themselves, 3/5 > 1/3, so taking strategy 1 is best for each player. \nTherefore, if everyone is given the chance to revise their strategy at once,\nthe probability that all 5 players choose strategy 1 in the next period is\n${}_5 C _5 (1-\\frac{\\epsilon}{2})^{5} (\\frac{\\epsilon}{2})^{0}$ (that is, $(1-\\frac{\\epsilon}{2})^{5}$),\nthe probability that 4 players choose strategy 1 is\n${}_5 C _4 (1-\\frac{\\epsilon}{2})^{4} (\\frac{\\epsilon}{2})^{1}$,\n...\nand the probability that 0 players choose strategy 1 is\n${}_5 C _0 (1-\\frac{\\epsilon}{2})^{0} (\\frac{\\epsilon}{2})^{5}$ (that is, $(\\frac{\\epsilon}{2})^{5}$). \nSo the number of players taking strategy 1 follows a binomial distribution.\nA convenient function for coding the binomial distribution is scipy.stats.binom. \nQuoting from its documentation:\n```\nNotes\nThe probability mass function for binom is:\nbinom.pmf(k) = choose(n, k) * p**k * (1-p)**(n-k)\nfor k in {0, 1,..., n}.\nbinom takes n and p as shape parameters.\n```\nThat is, it is called as binom.pmf(k, n, p), with the arguments defined as above. \nLet's try it out.",
"%matplotlib inline\nfrom scipy.stats import binom\nimport matplotlib.pyplot as plt ",
"Consider a situation where 3 people take strategy 1 in period t, and assume epsilon = 0.1.\nThe probability that all 5 players take strategy 1 in period t+1, $(1-\\frac{\\epsilon}{2})^{5}$, is:",
"epsilon = 0.1\nbinom.pmf(5, 5, 1-epsilon/2)",
"The probability that 4 players take strategy 1 in period t+1, ${}_5 C _4 (1-\\frac{\\epsilon}{2})^{4} (\\frac{\\epsilon}{2})^{1}$, is:",
"binom.pmf(4, 5, 1-epsilon/2)",
"The probability that 3 players take strategy 1 in period t+1, ${}_5 C _3 (1-\\frac{\\epsilon}{2})^{3} (\\frac{\\epsilon}{2})^{2}$, is:",
"binom.pmf(3, 5, 1-epsilon/2)",
"Plotting the probability for each number of players taking strategy 1 in period t+1:",
"P = [binom.pmf(i, 5, 1-epsilon/2) for i in range(5+1)]\nplt.plot(range(5+1), P)",
"If players choose rationally rather than experimenting, each should take strategy 1; consistently with this, the graph shows that the probability is higher the more players take strategy 1 in period t+1.\nApplying this result to the transition matrix: since we assumed 3 people took strategy 1 in period t, the probabilities computed above fill the row P[3]."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
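The transition-matrix row described at the end of the notebook above can be built without scipy; the helper below mirrors the quoted `binom.pmf` formula using `math.comb` (the function name `pmf` is ours, for illustration).

```python
from math import comb

N, eps = 5, 0.1
p = 1 - eps / 2  # probability that a player picks the best response (strategy 1)

def pmf(k, n, p):
    # Same value as scipy.stats.binom.pmf(k, n, p):
    # choose(n, k) * p**k * (1-p)**(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Row P[3] of the transition matrix: starting from 3 players on strategy 1,
# the probability that k players take strategy 1 next period, k = 0..5.
row = [pmf(k, N, p) for k in range(N + 1)]
print(row[5])  # (1 - eps/2)**5 = 0.95**5
```

The row sums to 1, and (as the plot in the notebook shows) its mass concentrates at k = 5.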
tcstewar/testing_notebooks
|
Implementing a Braindrop-like architecture in Nengo.ipynb
|
gpl-2.0
|
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport nengo\nimport numpy as np\nimport scipy.ndimage",
"Implementing Brainsim: A Braindrop-like simulation in Nengo\nI've had a number of people ask about building an emulator for Braindrop, an analog neuromorphic chip I helped develop https://ieeexplore.ieee.org/document/8591981 . Doing a \"full\" emulation is rather difficult, especially since you'd want to be able to adjust the level of detail on the emulation -- during development we created a variety of different emulators at different levels of abstraction, ranging from extremely low-level (nanosecond-scale simulation of individual transistors) to very high-level (just treating all the neurons as LIF and using floating-point multiplies instead of accumulators).\nThis notebook is an attempt at starting with the highest level of abstraction, and then adding a bit of detail. The idea here is to show people how to use Nengo to build simulations based around different architectural assumptions, and that this might help people who are making their own chips that will probably have very different assumptions than those made in Braindrop.\nNengo as a generic neural simulator\nThe simplest answer for how to use Nengo to implement your particular neuromorphic architecture is to just use Nengo as a normal neural simulator. You can define your own neuron class (see https://www.nengo.ai/nengo/examples/usage/rectified_linear.html for an example) and you can connect neurons however you want with the Connection class.\nHowever, this is a somewhat unsatisfying answer. Nengo also has a lot of built-in tools to help build up larger neural systems using separate sub-networks, each of which is optimized locally. (These tools are known as the Neural Engineering Framework, or NEF). It would be nice to be able to leverage these tools to help build up larger systems.\nFurthermore, Braindrop was specifically designed to be suitable for this sort of constructive approach to building systems out of neurons, so it seems like we should be able to make use of that when simulating Braindrop or any other similar system.\nDiffusors and Encoders\nOne of the basic principles of the NEF (and of Braindrop) is that you don't send inputs to one specific neuron. Rather, inputs are sent to a group of neurons (an Ensemble), and each neuron reacts to that input slightly differently. This is inspired by the sorts of population encoding seen in the brain -- many neurons tend to respond to the same inputs, but in differing amounts (i.e. they have different tuning curves).\nIn Braindrop, this is accomplished using tap-points and diffusors. Instead of sending an input to a particular neuron, current is injected into particular locations (tap-points) in a resistor mesh (the diffusor). Each neuron connects to a different point in the resistor mesh. This means that a neuron close to a tap-point will get a strong input, and those farther away will get weaker inputs. We define separate tap-points for each input dimension, and we have both positive and negative tap-points, so that a neuron close to a negative tap-point will get a strong input for a large negative value.\nDesigning this diffusor and characterizing how transistor mismatch affects this current flow (and tweaking it to give good distributions of tuning curves) was a large amount of the engineering effort behind Braindrop. So for this example, we're just going to make up a simple diffusor that has a square grid, no transistor mismatch, and has a Gaussian current spreading function. This would definitely have to be much more complicated for a more correct simulator, but the basic principle holds for the more detailed model.\nHere's a simple approximation of the diffusor:",
"def make_diffusor(shape, n_tap_points, spread, rng):\n mesh = np.zeros(shape)\n np.add.at(mesh, (rng.randint(shape[0], size=n_tap_points), rng.randint(shape[1], size=n_tap_points)), 1)\n np.add.at(mesh, (rng.randint(shape[0], size=n_tap_points), rng.randint(shape[1], size=n_tap_points)), -1)\n mesh = scipy.ndimage.gaussian_filter(mesh, sigma=spread)\n return mesh\n \nrng = np.random.RandomState(seed=0) \nplt.imshow(make_diffusor(shape=(16,16), n_tap_points=4, spread=2, rng=rng))\nplt.colorbar()\nplt.show()",
"This chooses 4 random positive tap-points and 4 random negative tap-points in a 16x16 ensemble of neurons, and shows how much current each neuron would see given an input of 1. In a more detailed model including mis-match, we'd also want to be more careful about choosing which tap-points to use in particular.\nIn general, there can be multiple inputs into the mesh, corresponding to multiple dimensions, so we can just generate multiple versions of this, choosing different tap-points.\nSince this characterizes a transformation between the D dimensions of the input and the N neurons, and since this transformation is linear, we can think of this as a set of connection weights between the input and the neurons. In Nengo and the NEF, this is often called an encoder and is one of the basic things that should be specified when defining an Ensemble.\nNengo also allows us to specify a gain and a bias for each neuron. In Braindrop, this is set by a combination of transistor mismatch and a few configuration bits per neuron, but here we will just randomly generate these.\nYou can also, of course, create your own neuron model, as described at https://www.nengo.ai/nengo/examples/usage/rectified_linear.html. For this example, we'll just use the built-in default LIF neuron model.\nLet's try this out making an Ensemble that has 2 input dimensions.",
"D = 2\nshape = (16,16)\nN = np.prod(shape)\nn_tap_points = 4\nspread = 2\n\n# create the encoder\nrng = np.random.RandomState(seed=0) \nencoder = np.array([make_diffusor(shape=shape, n_tap_points=n_tap_points, spread=spread, rng=rng).flatten() for i in range(D)])\nencoder_gain = np.linalg.norm(encoder, axis=0)\nencoder /= encoder_gain[None, :] # encoders are automatically normalized in nengo\n\n# create the gain and bias\ngain = 10+rng.randn(N)\nbias = rng.randn(N)\n\n# build the nengo model\nimport nengo\nmodel = nengo.Network()\nwith model:\n stim = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=D) # random input\n ens = nengo.Ensemble(n_neurons=N, dimensions=D, encoders=encoder.T,\n gain=gain, bias=bias)\n nengo.Connection(stim, ens)\n \n p_stim = nengo.Probe(stim)\n p_spikes = nengo.Probe(ens.neurons)\n\n ",
"You can interact with this model using the Nengo GUI and see the neural activity. Right-click on the ensemble of neurons and choose 'Firing pattern' to see the neural activity. Right-click on the stimulus and choose 'Sliders' to let you manually control the inputs. Press the Play button in the bottom-right to run the simulation.",
"import nengo_gui.jupyter\nnengo_gui.jupyter.InlineGUI(model, cfg='model.cfg')",
"We can also manually run the system and plot the results",
"sim = nengo.Simulator(model)\nwith sim:\n sim.run(10)\n\nplt.figure(figsize=(12,7))\nplt.subplot(2, 1, 1)\nplt.plot(sim.trange(), sim.data[p_stim])\nplt.subplot(2, 1, 2)\nimport nengo.utils.matplotlib\nnengo.utils.matplotlib.rasterplot(sim.trange(), sim.data[p_spikes])",
"Computing with neurons\nWe now have spiking activity that reflects the changing input. Now let's try to compute something using that input. For this example, we'll just compute the product between the two values.\nTo do this, Nengo will compute decoders. These are connection weights that will decode the spikes into some desired output value. With the addition of the decoders, we now have a standard single-hidden-layer network where the first set of weights are the encoders and the second set of weights are the decoders. Importantly, we do not have a neural non-linearity at the input or output -- the only neural non-linearities are in the hidden layer. But we can use this to approximate any function, as single-hidden-layer networks are universal function approximators.\nNengo automatically computes these weights for you when you make a Connection out of an Ensemble. In a more detailed simulator, we might constrain these weights by discretizing them or bounding them using a custom nengo.Solver class, but we don't do that here. Instead, we use the default where nengo will automatically feed random inputs into the network, record the resulting firing rates, and then use least-squares minimization to compute the optimal output weights.",
"import nengo\nmodel = nengo.Network()\nwith model:\n stim = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=D)\n ens = nengo.Ensemble(n_neurons=N, dimensions=D, encoders=encoder.T,\n gain=gain, bias=bias)\n nengo.Connection(stim, ens)\n \n output = nengo.Node(None, size_in=1)\n def my_function(x):\n return x[0]*x[1]\n nengo.Connection(ens, output, function=my_function, synapse=None)\n \n p_stim = nengo.Probe(stim)\n p_spikes = nengo.Probe(ens.neurons)\n p_output = nengo.Probe(output)\nwith nengo.Simulator(model, seed=0) as sim:\n sim.run(10)\n\n \nplt.figure(figsize=(12,4))\nplt.plot(sim.trange(), sim.data[p_output], label='raw decoded output')\nfilt = nengo.synapses.Lowpass(0.01) # for smoothing the output\nplt.plot(sim.trange(), filt.filt(sim.data[p_output]), label='smoothed output')\nideal = [my_function(x) for x in sim.data[p_stim]]\nplt.plot(sim.trange(), ideal, ls='--', lw=2, label='ideal output', c='k')\nplt.legend()\nplt.show()\n",
"Building larger systems\nNow that we can compute things, we can build larger systems by connecting together multiple Ensembles. Here, we make three ensembles, two of which will do the same \"compute the product\" calculation, and a third will compare the two results and see which one is bigger. That is, the overall system should output a 1 if a[0]*a[1] >b[0]*b[1], and otherwise output a -1.",
"# define a helper function to make it easier to define an Ensemble\ndef make_brainsim_ensemble(shape, dimensions, rng, n_tap_points=4, spread=2):\n N = np.prod(shape)\n encoder = np.array([make_diffusor(shape=shape, n_tap_points=n_tap_points, spread=spread, rng=rng).flatten() for i in range(dimensions)])\n encoder_gain = np.linalg.norm(encoder, axis=0)\n encoder /= encoder_gain[None, :]\n\n gain = 10+rng.randn(N)\n bias = rng.randn(N)\n \n ens = nengo.Ensemble(n_neurons=N, dimensions=dimensions, encoders=encoder.T,\n gain=gain, bias=bias) \n return ens\n\nrng = np.random.RandomState(seed=3) \nmodel = nengo.Network()\nwith model:\n stim_a = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=2)\n a = make_brainsim_ensemble((16,16), dimensions=2, rng=rng)\n nengo.Connection(stim_a, a)\n \n stim_b = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=2)\n b = make_brainsim_ensemble((16,16), dimensions=2, rng=rng)\n nengo.Connection(stim_b, b)\n \n compare = make_brainsim_ensemble((16,16), dimensions=1, rng=rng)\n nengo.Connection(a, compare, function=lambda x: x[0]*x[1])\n nengo.Connection(b, compare, function=lambda x: x[0]*x[1], transform=-1)\n \n output = nengo.Node(None, size_in=1)\n nengo.Connection(compare, output, function=lambda x: 1 if x>0 else -1)\n \n p_stim_a = nengo.Probe(stim_a)\n p_stim_b = nengo.Probe(stim_b)\n p_output = nengo.Probe(output)\n \nsim = nengo.Simulator(model, seed=3)\nwith sim:\n sim.run(10)\n ",
"And now we can see the results.",
"plt.figure(figsize=(12,4))\nfilt = nengo.synapses.Lowpass(0.01)\nplt.plot(sim.trange(), filt.filt(sim.data[p_output]))\n\nideal = np.where(sim.data[p_stim_a][:,0]*sim.data[p_stim_a][:,1]>sim.data[p_stim_b][:,0]*sim.data[p_stim_b][:,1], 1, -1)\nplt.plot(sim.trange(), ideal, ls='--', lw=2, c='k')\nplt.show()",
"The neurons are doing a decent job of computing the function. This could be further optimized by having better distributions of gains and biases in the neurons, or by better placement of tap-points.\nAdding Implementation Details: The Accumulator\nThe above code gives a basic simulation of a Braindrop-like system. However, there is one important component of Braindrop that should be added. In particular, so far we have assumed inputs and outputs are continuous values. In Braindrop, inputs and outputs are all spikes and they are generated by an Accumulator. Please see the Braindrop paper for more details about this approach, but since it is so important we should implement it in our Nengo version.\nTo do this, we implement a nengo.Process, which is how we implement arbitrary code in Nengo. For this accumulator, we just implement it with floating-point math (in Braindrop, this is done with 8-bit fixed-point).\nFor any accumulator, we must specify its maximum frequency (fmax). This is the number of spikes per second that it will generate given an input of 1. Here, we pick 1000.\n(Note that this implementation does allow the accumulator to spike multiple times per timestep. This is because this simulation will be run on a computer with some timestep (the default in Nengo is dt=0.001 seconds), but the actual hardware doesn't have a timestep as it is all event-driven, so it's perfectly capable of generating multiple spikes per timestep.)",
"class Accumulator(nengo.Process):\n def __init__(self, size_in, fmax=1000):\n self.fmax = fmax\n super().__init__(default_size_in=size_in, default_size_out=size_in)\n \n def make_step(self, shape_in, shape_out, dt, rng, state=None):\n state=np.zeros(shape_in)\n \n # this code will be run each time step\n def step_accumulator(t, x, state=state):\n state += x*(self.fmax*dt) # build up the accumulator\n spikes = np.round(state) # emit positive and negative spikes\n state -= spikes\n return spikes/(dt*self.fmax) # scale output by fmax\n return step_accumulator \n \nmodel = nengo.Network()\nwith model:\n stim = nengo.Node(np.sin)\n acc = nengo.Node(Accumulator(fmax=1000, size_in=1))\n nengo.Connection(stim, acc, synapse=None)\n \n p_stim = nengo.Probe(stim)\n p = nengo.Probe(acc)\nsim = nengo.Simulator(model, dt=0.001)\nwith sim:\n sim.run(6)\n\nplt.figure(figsize=(12,4))\nplt.plot(sim.trange(), sim.data[p], label='positive and negative output spikes')\nplt.plot(sim.trange(), sim.data[p_stim], label='input')\nplt.legend()\nplt.show()",
"Given these Accumulators, we can build the above model again, but this time interspersing Accumulators in the places where they would exist in the hardware.\nWe also take this opportunity to also correctly place the low-pass filter synapse. In Braindrop, this occurs after the accumulator and before the diffusor, so we use nengo to add a synapse that is a low-pass filter to that Connection.",
"rng = np.random.RandomState(seed=3) \nmodel = nengo.Network()\nwith model:\n stim_a = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=2)\n acc_a = nengo.Node(Accumulator(size_in=2))\n a = make_brainsim_ensemble((16,16), dimensions=2, rng=rng)\n nengo.Connection(stim_a, acc_a, synapse=None)\n nengo.Connection(acc_a, a, synapse=nengo.synapses.Lowpass(0.01))\n \n stim_b = nengo.Node(nengo.processes.WhiteSignal(period=10, high=0.5), size_out=2)\n acc_b = nengo.Node(Accumulator(size_in=2))\n b = make_brainsim_ensemble((16,16), dimensions=2, rng=rng)\n nengo.Connection(stim_b, acc_b, synapse=None)\n nengo.Connection(acc_b, b, synapse=nengo.synapses.Lowpass(0.01))\n \n compare = make_brainsim_ensemble((16,16), dimensions=1, rng=rng)\n acc_compare = nengo.Node(Accumulator(size_in=1))\n nengo.Connection(a, acc_compare, function=lambda x: x[0]*x[1], synapse=None)\n nengo.Connection(b, acc_compare, function=lambda x: x[0]*x[1], transform=-1, synapse=None)\n nengo.Connection(acc_compare, compare, synapse=nengo.synapses.Lowpass(0.01))\n \n output = nengo.Node(Accumulator(size_in=1))\n nengo.Connection(compare, output, function=lambda x: 1 if x>0 else -1, synapse=None)\n \n p_stim_a = nengo.Probe(stim_a)\n p_stim_b = nengo.Probe(stim_b)\n p_output = nengo.Probe(output)\n \nsim = nengo.Simulator(model, seed=3)\nwith sim:\n sim.run(10)\n\nplt.figure(figsize=(12,4))\nfilt = nengo.synapses.Lowpass(0.01)\nplt.plot(sim.trange(), filt.filt(sim.data[p_output]))\n\nideal = np.where(sim.data[p_stim_a][:,0]*sim.data[p_stim_a][:,1]>sim.data[p_stim_b][:,0]*sim.data[p_stim_b][:,1], 1, -1)\nplt.plot(sim.trange(), ideal, ls='--', lw=2, c='k')\nplt.show()",
"I hope this shows the basic process for using nengo to simulate some types of neuromorphic hardware! The next steps would be to add more details, depending on the hardware being simulated. We can also use this sort of approach to help design the hardware itself, by adjusting parameters in simulation and evaluating how accurate the resulting system is. This is the approach we used in https://ieeexplore.ieee.org/document/8351459."
] |
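The decoder solve that Nengo performs when you make a Connection out of an Ensemble can be sketched as plain least squares. Everything concrete here (rectified-linear tuning curves, the gain/bias ranges, decoding x**2) is an illustrative assumption, not the notebook's actual model; only the idea, record firing rates A and solve min ||A d - y||, matches the text.

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical rectified-linear tuning curves for 50 neurons over a 1-D input,
# standing in for the firing rates Nengo records when solving for decoders.
x = np.linspace(-1, 1, 200)[:, None]
encoders = rng.choice([-1.0, 1.0], size=50)
gain = rng.uniform(5.0, 15.0, size=50)
bias = rng.uniform(-10.0, 10.0, size=50)
A = np.maximum(0.0, x * encoders * gain + bias)  # (200, 50) rate matrix

target = x[:, 0] ** 2                            # function to decode
d, *_ = np.linalg.lstsq(A, target, rcond=None)   # least-squares decoders

rmse = np.sqrt(np.mean((A @ d - target) ** 2))
print(rmse)  # small: the decoded output approximates x**2
```

In a real Nengo model this solve happens inside the builder; a custom nengo.Solver, as the notebook mentions, would replace this least-squares step with a constrained one.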
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
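The accumulator logic in the notebook above can be isolated from Nengo entirely. This sketch keeps only the core update (accumulate, round, subtract) and returns raw signed spike counts rather than the fmax-scaled values the notebook's nengo.Process emits:

```python
import numpy as np

def accumulate(x, fmax=1000.0, dt=0.001):
    """Emit signed spike counts for each timestep of input signal x."""
    state = 0.0
    spikes = []
    for xi in x:
        state += xi * fmax * dt      # build up the accumulator
        s = np.round(state)          # emit positive/negative spikes
        state -= s
        spikes.append(s)
    return np.array(spikes)

out = accumulate(np.full(1000, 0.5))  # constant input of 0.5 for 1 s
print(out.sum())                      # 0.5 * fmax * 1 s = 500 spikes
```

With a constant input of 0.5 the accumulator spikes on every other step, so the spike rate tracks the input times fmax, just as in the hardware.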
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/solutions/text_classification.ipynb
|
apache-2.0
|
[
"Basic Text Classification\nOverview\nThis notebook demonstrates text classification starting from plain text files stored on disk. You'll train a binary classifier to perform sentiment analysis on an IMDB dataset. At the end of the notebook, there is an exercise for you to try, in which you'll train a multi-class classifier to predict the tag for a programming question on Stack Overflow.\nLearning Objective\nIn this notebook, you learn how to:\n\nPrepare the dataset for training\nUse a loss function and optimizer\nTrain the model\nEvaluate the model\nExport the model\n\nIntroduction\nThis notebook shows how to train a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete the target notebook first and then review this solution notebook.",
"# Import necessary libraries\nimport matplotlib.pyplot as plt\nimport os\nimport re\nimport shutil\nimport string\nimport tensorflow as tf\n\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import losses\n\n\n# Print the TensorFlow version\nprint(tf.__version__)",
"Sentiment analysis\nThis notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. This is an example of binary—or two-class—classification, an important and widely applicable kind of machine learning problem.\nYou'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.\nDownload and explore the IMDB dataset\nLet's download and extract the dataset, then explore the directory structure.",
"# Download the IMDB dataset\nurl = \"https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n\ndataset = tf.keras.utils.get_file(\"aclImdb_v1\", url,\n untar=True, cache_dir='.',\n cache_subdir='')\n\ndataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')\n\n# Explore the dataset\nos.listdir(dataset_dir)\n\ntrain_dir = os.path.join(dataset_dir, 'train')\nos.listdir(train_dir)",
"The aclImdb/train/pos and aclImdb/train/neg directories contain many text files, each of which is a single movie review. Let's take a look at one of them.",
"# Print the file content\nsample_file = os.path.join(train_dir, 'pos/1181_9.txt')\nwith open(sample_file) as f:\n print(f.read())",
"Load the dataset\nNext, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful text_dataset_from_directory utility, which expects a directory structure as follows.\nmain_directory/\n...class_a/\n......a_text_1.txt\n......a_text_2.txt\n...class_b/\n......b_text_1.txt\n......b_text_2.txt\nTo prepare a dataset for binary classification, you will need two folders on disk, corresponding to class_a and class_b. These will be the positive and negative movie reviews, which can be found in aclImdb/train/pos and aclImdb/train/neg. As the IMDB dataset contains additional folders, you will remove them before using this utility.",
"remove_dir = os.path.join(train_dir, 'unsup')\nshutil.rmtree(remove_dir)",
"Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. tf.data is a powerful collection of tools for working with data. \nWhen running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test. \nThe IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.",
"# Create the validation set\nbatch_size = 32\nseed = 42\n\nraw_train_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/train', \n batch_size=batch_size, \n validation_split=0.2, \n subset='training', \n seed=seed)",
"As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to model.fit. If you're new to tf.data, you can also iterate over the dataset and print out a few examples as follows.",
"# Print few examples\nfor text_batch, label_batch in raw_train_ds.take(1):\n for i in range(3):\n print(\"Review\", text_batch.numpy()[i])\n print(\"Label\", label_batch.numpy()[i])",
"Notice the reviews contain raw text (with punctuation and occasional HTML tags like <br/>). You will show how to handle these in the following section. \nThe labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the class_names property on the dataset.",
"print(\"Label 0 corresponds to\", raw_train_ds.class_names[0])\nprint(\"Label 1 corresponds to\", raw_train_ds.class_names[1])",
"Next, you will create a validation and test dataset. You will use the remaining 5,000 reviews from the training set for validation.\nNote: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap.",
"raw_val_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/train', \n batch_size=batch_size, \n validation_split=0.2, \n subset='validation', \n seed=seed)\n\nraw_test_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/test', \n batch_size=batch_size)",
"Prepare the dataset for training\nNext, you will standardize, tokenize, and vectorize the data using the helpful tf.keras.layers.TextVectorization layer. \nStandardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words, by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.\nAs you saw above, the reviews contain various HTML tags like <br />. These tags will not be removed by the default standardizer in the TextVectorization layer (which converts text to lowercase and strips punctuation by default, but doesn't strip HTML). You will write a custom standardization function to remove the HTML.\nNote: To prevent training-testing skew (also known as training-serving skew), it is important to preprocess the data identically at train and test time. To facilitate this, the TextVectorization layer can be included directly inside your model, as shown later in this tutorial.",
"def custom_standardization(input_data):\n lowercase = tf.strings.lower(input_data)\n stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')\n return tf.strings.regex_replace(stripped_html,\n '[%s]' % re.escape(string.punctuation),\n '')",
"Next, you will create a TextVectorization layer. You will use this layer to standardize, tokenize, and vectorize our data. You set the output_mode to int to create unique integer indices for each token.\nNote that you're using the default split function, and the custom standardization function you defined above. You'll also define some constants for the model, like an explicit maximum sequence_length, which will cause the layer to pad or truncate sequences to exactly sequence_length values.",
"max_features = 10000\nsequence_length = 250\n\n# TODO\n# Create the TextVectorization layer\nvectorize_layer = layers.TextVectorization(\n standardize=custom_standardization,\n max_tokens=max_features,\n output_mode='int',\n output_sequence_length=sequence_length)",
"Next, you will call adapt to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers.\nNote: It's important to only use your training data when calling adapt (using the test set would leak information).",
"# Make a text-only dataset (without labels), then call adapt\ntrain_text = raw_train_ds.map(lambda x, y: x)\nvectorize_layer.adapt(train_text)",
"Let's create a function to see the result of using this layer to preprocess some data.",
"def vectorize_text(text, label):\n text = tf.expand_dims(text, -1)\n return vectorize_layer(text), label\n\n# retrieve a batch (of 32 reviews and labels) from the dataset\ntext_batch, label_batch = next(iter(raw_train_ds))\nfirst_review, first_label = text_batch[0], label_batch[0]\nprint(\"Review\", first_review)\nprint(\"Label\", raw_train_ds.class_names[first_label])\nprint(\"Vectorized review\", vectorize_text(first_review, first_label))",
"As you can see above, each token has been replaced by an integer. You can look up the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer.",
"# Print the token (string) that each integer corresponds to\nprint(\"1287 ---> \",vectorize_layer.get_vocabulary()[1287])\nprint(\" 313 ---> \",vectorize_layer.get_vocabulary()[313])\nprint('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))",
"You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layer you created earlier to the train, validation, and test dataset.",
"# Apply the TextVectorization layer you created earlier to the train, validation, and test dataset\ntrain_ds = raw_train_ds.map(vectorize_text)\nval_ds = raw_val_ds.map(vectorize_text)\ntest_ds = raw_test_ds.map(vectorize_text)",
"Configure the dataset for performance\nThese are two important methods you should use when loading data to make sure that I/O does not become blocking.\n.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.\n.prefetch() overlaps data preprocessing and model execution while training. \nYou can learn more about both methods, as well as how to cache data to disk in the data performance guide.",
"AUTOTUNE = tf.data.AUTOTUNE\n\ntrain_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\ntest_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)",
"Create the model\nIt's time to create your neural network:",
"embedding_dim = 16\n\n# Create your neural network\nmodel = tf.keras.Sequential([\n layers.Embedding(max_features + 1, embedding_dim),\n layers.Dropout(0.2),\n layers.GlobalAveragePooling1D(),\n layers.Dropout(0.2),\n layers.Dense(1)])\n\nmodel.summary()",
"The layers are stacked sequentially to build the classifier:\n\nThe first layer is an Embedding layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding). To learn more about embeddings, check out the Word embeddings tutorial.\nNext, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\nThis fixed-length output vector is passed through a Dropout layer for regularization.\nThe last layer is densely connected (Dense) with a single output node, which produces an unnormalized logit for each review.\n\nLoss function and optimizer\nA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs an unnormalized logit (a single-unit layer with no activation), you'll use the losses.BinaryCrossentropy loss function with from_logits=True.\nNow, configure the model to use an optimizer and a loss function:",
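As an aside, the averaging done by the GlobalAveragePooling1D step described above can be illustrated with plain NumPy (an illustrative sketch of the operation, not the layer's actual implementation):

```python
import numpy as np

# A toy batch: 2 examples, sequence length 4, embedding dimension 3
x = np.arange(24, dtype=float).reshape(2, 4, 3)

# Global average pooling averages over the sequence axis, producing one
# fixed-length vector per example: (batch, sequence, embedding) -> (batch, embedding)
pooled = x.mean(axis=1)
print(pooled.shape)  # (2, 3)
```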
"# TODO\n# Configure the model to use an optimizer and a loss function\nmodel.compile(loss=losses.BinaryCrossentropy(from_logits=True),\n optimizer='adam',\n metrics=tf.metrics.BinaryAccuracy(threshold=0.0))",
"Train the model\nYou will train the model by passing the dataset object to the fit method.",
"# TODO\n# Train the model\nepochs = 10\nhistory = model.fit(\n train_ds,\n validation_data=val_ds,\n epochs=epochs)",
"Evaluate the model\nLet's see how the model performs. Two values will be returned: loss (a number which represents the error; lower values are better) and accuracy.",
"# TODO\n# Evaluate the model\nloss, accuracy = model.evaluate(test_ds)\n\nprint(\"Loss: \", loss)\nprint(\"Accuracy: \", accuracy)",
"This fairly naive approach achieves an accuracy of about 86%.\nCreate a plot of accuracy and loss over time\nmodel.fit() returns a History object that contains a dictionary with everything that happened during training:",
"history_dict = history.history\nhistory_dict.keys()",
"There are four entries: one for each monitored metric during training and validation. You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:",
"# Plot the loss over time\nacc = history_dict['binary_accuracy']\nval_acc = history_dict['val_binary_accuracy']\nloss = history_dict['loss']\nval_loss = history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\n# \"bo\" is for \"blue dot\"\nplt.plot(epochs, loss, 'bo', label='Training loss')\n# b is for \"solid blue line\"\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.show()\n\n# Plot the accuracy over time\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend(loc='lower right')\n\nplt.show()",
"In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.\nNotice the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.\nThis isn't the case for the validation loss and accuracy—they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data.\nFor this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the tf.keras.callbacks.EarlyStopping callback.\nExport the model\nIn the code above, you applied the TextVectorization layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model. To do so, you can create a new model using the weights you just trained.",
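The patience logic that tf.keras.callbacks.EarlyStopping implements can be sketched in plain Python (a simplified, hypothetical helper — the real callback also supports min_delta, baseline, and restoring the best weights):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch after which training would stop, i.e. once the
    validation loss has not improved for `patience` consecutive epochs, or
    None if training runs to completion."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Validation loss improves for three epochs, then stops improving
print(early_stop_epoch([0.60, 0.45, 0.40, 0.41, 0.43], patience=2))  # 5
```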
"# TODO\n# Export the model\nexport_model = tf.keras.Sequential([\n vectorize_layer,\n model,\n layers.Activation('sigmoid')\n])\n\nexport_model.compile(\n loss=losses.BinaryCrossentropy(from_logits=False), optimizer=\"adam\", metrics=['accuracy']\n)\n\n# Test it with `raw_test_ds`, which yields raw strings\nloss, accuracy = export_model.evaluate(raw_test_ds)\nprint(accuracy)",
"Inference on new data\nTo get predictions for new examples, you can simply call export_model.predict() on raw strings.",
"examples = [\n \"The movie was great!\",\n \"The movie was okay.\",\n \"The movie was terrible...\"\n]\n\nexport_model.predict(examples)",
"Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for train/test skew.\nThere is a performance difference to keep in mind when choosing where to apply your TextVectorization layer. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.\nVisit this tutorial to learn more about saving models.\nExercise: multi-class classification on Stack Overflow questions\nThis tutorial showed how to train a binary classifier from scratch on the IMDB dataset. As an exercise, you can modify this notebook to train a multi-class classifier to predict the tag of a programming question on Stack Overflow.\nA dataset has been prepared for you to use containing the body of several thousand programming questions (for example, \"How can I sort a dictionary by value in Python?\") posted to Stack Overflow. Each of these is labeled with exactly one tag (either Python, CSharp, JavaScript, or Java). Your task is to take a question as input, and predict the appropriate tag, in this case, Python. 
\nThe dataset you will work with contains several thousand questions extracted from the much larger public Stack Overflow dataset on BigQuery, which contains more than 17 million posts.\nAfter downloading the dataset, you will find it has a similar directory structure to the IMDB dataset you worked with previously:\ntrain/\n...python/\n......0.txt\n......1.txt\n...javascript/\n......0.txt\n......1.txt\n...csharp/\n......0.txt\n......1.txt\n...java/\n......0.txt\n......1.txt\nNote: To increase the difficulty of the classification problem, occurrences of the words Python, CSharp, JavaScript, or Java in the programming questions have been replaced with the word blank (as many questions contain the language they're about).\nTo complete this exercise, you should modify this notebook to work with the Stack Overflow dataset by making the following modifications:\n\n\nAt the top of your notebook, update the code that downloads the IMDB dataset with code to download the Stack Overflow dataset that has already been prepared. As the Stack Overflow dataset has a similar directory structure, you will not need to make many modifications.\n\n\nModify the last layer of your model to Dense(4), as there are now four output classes.\n\n\nWhen compiling the model, change the loss to tf.keras.losses.SparseCategoricalCrossentropy. This is the correct loss function to use for a multi-class classification problem, when the labels for each class are integers (in this case, they can be 0, 1, 2, or 3). In addition, change the metrics to metrics=['accuracy'], since this is a multi-class classification problem (tf.metrics.BinaryAccuracy is only used for binary classifiers).\n\n\nWhen plotting accuracy over time, change binary_accuracy and val_binary_accuracy to accuracy and val_accuracy, respectively.\n\n\nOnce these changes are complete, you will be able to train a multi-class classifier. \n\n\nLearning more\nThis tutorial introduced text classification from scratch. 
To learn more about the text classification workflow in general, check out the Text classification guide from Google Developers."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Yu-Group/scikit-learn-sandbox
|
jupyter/29_iRF_demo_sklearn.ipynb
|
mit
|
[
"Demo of the scikit-learn fork of iRF\n\nThe following is a demo of the scikit-learn iRF code.\n\nTypical Setup\nImport the required dependencies\n\nIn particular irf_utils and irf_jupyter_utils",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_breast_cancer\nimport numpy as np\nfrom functools import reduce\n\n# Needed for the scikit-learn wrapper function\nfrom sklearn.tree import irf_utils\nfrom sklearn.ensemble import RandomForestClassifier\nfrom math import ceil\n\n# Import our custom utilities\nfrom imp import reload\nfrom utils import irf_jupyter_utils\nreload(irf_jupyter_utils)",
"Step 1: Fit the Initial Random Forest\n\nJust fit every feature with equal weights, as in the usual random forest code, e.g. RandomForestClassifier in scikit-learn",
"# Load the dataset for reference, without shadowing the imported function\nbreast_cancer = load_breast_cancer()\n\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=20, \n feature_weight=None)",
"Check out the data",
"print(\"Training feature dimensions\", X_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Training outcome dimensions\", y_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test feature dimensions\", X_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test outcome dimensions\", y_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 2 rows of the training set features\", X_train[:2], sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 2 rows of the training set outcomes\", y_train[:2], sep = \":\\n\")",
"Step 2: Get all Random Forest and Decision Tree Data\n\nExtract into a single dictionary the random forest data and the data for all of its decision trees\nThis is as required for RIT purposes",
"all_rf_tree_data = irf_utils.get_rf_tree_data(\n rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)",
"STEP 3: Get the RIT data and produce RITs",
"np.random.seed(12)\nall_rit_tree_data = irf_utils.get_rit_tree_data(\n all_rf_tree_data=all_rf_tree_data,\n bin_class_type=1,\n M=100,\n max_depth=2,\n noisy_split=False,\n num_splits=2)",
"Perform Manual CHECKS on the irf_utils\n\nThese should be converted to unit tests and checked with nosetests -v test_irf_utils.py\n\nStep 4: Plot some Data\nList Ranked Feature Importances",
"# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfeature_importances_rank_idx = all_rf_tree_data['feature_importances_rank_idx']\nfeature_importances = all_rf_tree_data['feature_importances']\n\nfor f in range(X_train.shape[1]):\n print(\"%d. feature %d (%f)\" % (f + 1\n , feature_importances_rank_idx[f]\n , feature_importances[feature_importances_rank_idx[f]]))",
"Plot Ranked Feature Importances",
"# Plot the feature importances of the forest\nfeature_importances_std = all_rf_tree_data['feature_importances_std']\n\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1])\n , feature_importances[feature_importances_rank_idx]\n , color=\"r\"\n , yerr = feature_importances_std[feature_importances_rank_idx], align=\"center\")\nplt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()",
"Decision Tree 0 (First) - Get output\nCheck the output against the decision tree graph",
"# Now plot the trees individually\nirf_jupyter_utils.draw_tree(decision_tree = all_rf_tree_data['rf_obj'].estimators_[0])",
"Compare to our dict of extracted data from the tree",
"#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0'])\n\n# Count the number of samples passing through the leaf nodes\nsum(all_rf_tree_data['dtree0']['tot_leaf_node_values'])",
"Check output against the diagram",
"#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0']['all_leaf_paths_features'])",
"Run the iRF function\nWe will run the iRF with the following parameters\nData:\n\nbreast cancer binary classification data\nrandom state (for reproducibility): 2018\n\nWeighted RFs\n\nK: 5 iterations\nnumber of trees: 20\n\nBootstrap RFs\n\nproportion of bootstrap samples: 20%\nB: 30 bootstrap samples\nnumber of trees (bootstrap RFs): 5\n\nRITs (on the bootstrap RFs)\n\nM: 20 RITs per forest\nfilter label type: 1-class only\nMax Depth: 5\nNoisy Split: False\nNumber of splits at Node: 2 splits\n\nRunning the iRF is easy - single function call\n\nAll of the bootstrap and RIT complexity is handled through the key parameters passed to the main algorithm (as listed above)\nThis function call returns the following data:\nall RF weights\nall the K RFs that are iterated over\nall of the B bootstrap RFs that are run\nall the B*M RITs that are run on the bootstrap RFs\nthe stability score\n\n\n\nThis is a lot of data returned!\nWill be useful when we build the interface later\nLet's run it!",
"all_rf_weights, all_K_iter_rf_data, \\\nall_rf_bootstrap_output, all_rit_bootstrap_output, \\\nstability_score = irf_utils.run_iRF(X_train=X_train,\n X_test=X_test,\n y_train=y_train,\n y_test=y_test,\n K=5,\n n_estimators=20,\n B=30,\n random_state_classifier=2018,\n propn_n_samples=.2,\n bin_class_type=1,\n M=20,\n max_depth=5,\n noisy_split=False,\n num_splits=2,\n n_estimators_bootstrap=5)\n\nstability_score",
"Examine the stability scores",
"irf_jupyter_utils._get_histogram(stability_score, sort = True)",
"That's interesting - features 22, 27, 20, and 23 keep popping up!\nWe should probably look at the feature importances to understand whether there is a useful correlation\nExamine feature importances\nIn particular, let us see how they change over the K iterations of the random forest",
"for k in range(5): \n \n iteration = \"rf_iter{}\".format(k)\n \n feature_importances_std = all_K_iter_rf_data[iteration]['feature_importances_std']\n feature_importances_rank_idx = all_K_iter_rf_data[iteration]['feature_importances_rank_idx']\n feature_importances = all_K_iter_rf_data[iteration]['feature_importances']\n\n plt.figure(figsize=(8, 6))\n title = \"Feature importances; iteration = {}\".format(k)\n plt.title(title)\n plt.bar(range(X_train.shape[1])\n , feature_importances[feature_importances_rank_idx]\n , color=\"r\"\n , yerr = feature_importances_std[feature_importances_rank_idx], align=\"center\")\n plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx, rotation='vertical')\n plt.xlim([-1, X_train.shape[1]])\n plt.show() ",
"Some Observations\n\nNote that after 5 iterations, the most important features were found to be 22, 27, 7, and 23\nNow also recall that the most stable interactions were found to be '22_27', '7_22', '7_22_27', '23_27', '7_27', '22_23_27'\nGiven the overlap between these two plots, the results are not unreasonable here. \n\nExplore iRF Data Further\nWe can look at the decision paths of the Kth RF\nLet's look at the final iteration RF - the key validation metrics",
"irf_jupyter_utils.pretty_print_dict(all_K_iter_rf_data['rf_iter4']['rf_validation_metrics'])\n\n# Now plot the trees individually\nirf_jupyter_utils.draw_tree(decision_tree = all_K_iter_rf_data['rf_iter4']['rf_obj'].estimators_[0])",
"We can get this data quite easily in a convenient format",
"irf_jupyter_utils.pretty_print_dict(\n all_K_iter_rf_data['rf_iter4']['dtree0']['all_leaf_paths_features'])",
"This checks nicely against the plotted diagram above.\nIn fact - we can go further and plot some interesting data from the Decision Trees\n- This can help us understand variable interactions better",
"irf_jupyter_utils.pretty_print_dict(\n all_K_iter_rf_data['rf_iter4']['dtree0']['all_leaf_node_values'])",
"We can also look at the frequency that a feature appears along a decision path",
"irf_jupyter_utils._hist_features(all_K_iter_rf_data['rf_iter4'], n_estimators = 20, \\\n title = 'Frequency of features along decision paths : iteration = 4')",
"The most common features that appeared were 27,22,23, and 7. This matches well with the feature importance plot above. \nRun some Sanity Checks\nRun iRF for just 1 iteration - should be the uniform sampling version\nThis is just a sanity check: the feature importances from iRF after 1 iteration should match the feature importance from running a standard RF",
"all_K_iter_rf_data.keys()\nprint(all_K_iter_rf_data['rf_iter0']['feature_importances'])",
"Compare to the original single fitted random forest",
"rf = RandomForestClassifier(n_estimators=20, random_state=2018)\nrf.fit(X=X_train, y=y_train)\nprint(rf.feature_importances_)",
"And they match perfectly as expected.",
"#all_rf_weights['rf_weight1']\n\n#all_K_iter_rf_data\n\n#all_rf_bootstrap_output\n\n#all_rit_bootstrap_output\n\n#stability_score",
"End Wrapper test"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Ignotus/MachineLearning
|
Lab2.ipynb
|
mit
|
[
"Lab 2: Classification\nMachine Learning 1, September 2015\n\nThe lab exercises should be made in groups of two, three or four people.\nThe deadline is October 4th (Sunday) 23:59.\nAssignment should be sent to Philip Versteeg (p.j.j.p.versteeg@uva.nl). The subject line of your email should be \"lab#_lastname1_lastname2_lastname3\".\nPut your and your teammates' names in the body of the email\nAttach the .IPYNB (IPython Notebook) file containing your code and answers. Naming of the file follows the same rule as the subject line. For example, if the subject line is \"lab01_Kingma_Hu\", the attached file should be \"lab01_Kingma_Hu.ipynb\". Only use underscores (\"_\") to connect names, otherwise the files cannot be parsed.\n\nNotes on implementation:\n\nFor this notebook you need to answer a few theory questions, add them in the Markdown cell's below the question. Note: you can use Latex-style code in here.\nFocus on Part 1 the first week, and Part 2 the second week!\nYou should write your code and answers below the questions in this IPython Notebook.\nAmong the first lines of your notebook should be \"%pylab inline\". 
This imports all required modules, and your plots will appear inline.\nIf you have questions outside of the labs, post them on blackboard or email me.\nNOTE: Make sure we can run your notebook / scripts!\n\n$\\newcommand{\\bx}{\\mathbf{x}}$\n$\\newcommand{\\bw}{\\mathbf{w}}$\n$\\newcommand{\\bt}{\\mathbf{t}}$\n$\\newcommand{\\by}{\\mathbf{y}}$\n$\\newcommand{\\bm}{\\mathbf{m}}$\n$\\newcommand{\\bb}{\\mathbf{b}}$\n$\\newcommand{\\bS}{\\mathbf{S}}$\n$\\newcommand{\\ba}{\\mathbf{a}}$\n$\\newcommand{\\bz}{\\mathbf{z}}$\n$\\newcommand{\\bv}{\\mathbf{v}}$\n$\\newcommand{\\bq}{\\mathbf{q}}$\n$\\newcommand{\\bp}{\\mathbf{p}}$\n$\\newcommand{\\bh}{\\mathbf{h}}$\n$\\newcommand{\\bI}{\\mathbf{I}}$\n$\\newcommand{\\bX}{\\mathbf{X}}$\n$\\newcommand{\\bT}{\\mathbf{T}}$\n$\\newcommand{\\bPhi}{\\mathbf{\\Phi}}$\n$\\newcommand{\\bW}{\\mathbf{W}}$\n$\\newcommand{\\bV}{\\mathbf{V}}$",
"%matplotlib inline\nimport gzip, cPickle\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom pylab import *\nrcParams['legend.loc'] = 'best'",
"Part 1. Multiclass logistic regression\nScenario: you have a friend with one big problem: she's completely blind. You decided to help her: she has a special smartphone for blind people, and you are going to develop a mobile phone app that can do machine vision using the mobile camera: converting a picture (from the camera) to the meaning of the image. You decide to start with an app that can read handwritten digits, i.e. convert an image of handwritten digits to text (e.g. it would enable her to read precious handwritten phone numbers).\nA key building block for such an app would be a function predict_digit(x) that returns the digit class of an image patch $\\bx$. Since hand-coding this function is highly non-trivial, you decide to solve this problem using machine learning, such that the internal parameters of this function are automatically learned using machine learning techniques.\nThe dataset you're going to use for this is the MNIST handwritten digits dataset (http://yann.lecun.com/exdb/mnist/). You can load the data from mnist.pkl.gz we provided, using:",
"def load_mnist():\n f = gzip.open('mnist.pkl.gz', 'rb')\n data = cPickle.load(f)\n f.close()\n return data\n\n(x_train, t_train), (x_valid, t_valid), (x_test, t_test) = load_mnist()",
"The tuples represent train, validation and test sets. The first element (x_train, x_valid, x_test) of each tuple is a $N \\times M$ matrix, where $N$ is the number of datapoints and $M = 28^2 = 784$ is the dimensionality of the data. The second element (t_train, t_valid, t_test) of each tuple is the corresponding $N$-dimensional vector of integers, containing the true class labels.\nHere's a visualisation of the first 8 digits of the trainingset:",
"def plot_digits(data, numcols, shape=(28,28)):\n numdigits = data.shape[0]\n numrows = int(ceil(numdigits / float(numcols)))\n for i in range(0, numdigits):\n plt.subplot(numrows, numcols, i + 1) # subplot indices are 1-based\n plt.axis('off')\n plt.imshow(data[i].reshape(shape), interpolation='nearest', cmap='Greys')\n plt.show()\n \nplot_digits(x_train[0:8], numcols=4)",
"In multiclass logistic regression, the conditional probability of class label $j$ given the image $\\bx$ for some datapoint is given by:\n$ \\log p(t = j \\;|\\; \\bx, \\bb, \\bW) = \\log q_j - \\log Z$\nwhere $\\log q_j = \\bw_j^T \\bx + b_j$ (the log of the unnormalized probability of the class $j$), and $Z = \\sum_k q_k$ is the normalizing factor. $\\bw_j$ is the $j$-th column of $\\bW$ (a matrix of size $784 \\times 10$) corresponding to the class label, $b_j$ is the $j$-th element of $\\bb$.\nGiven an input image, the multiclass logistic regression model first computes the intermediate vector $\\log \\bq$ (of size $10 \\times 1$), using $\\log q_j = \\bw_j^T \\bx + b_j$, containing the unnormalized log-probabilities per class. \nThe unnormalized probabilities are then normalized by $Z$ such that $\\sum_j p_j = \\sum_j \\exp(\\log p_j) = 1$. This is done by $\\log p_j = \\log q_j - \\log Z$ where $Z = \\sum_j \\exp(\\log q_j)$. This is known as the softmax transformation, and is also used as a last layer of many classification neural network models, to ensure that the output of the network is a normalized distribution, regardless of the values of the second-to-last layer ($\\log \\bq$).\nWarning: when computing $\\log Z$, you are likely to encounter numerical problems. Save yourself countless hours of debugging and learn the log-sum-exp trick.\nThe network's output $\\log \\bp$ of size $10 \\times 1$ then contains the conditional log-probabilities $\\log p(t = j \\;|\\; \\bx, \\bb, \\bW)$ for each digit class $j$. In summary, the computations are done in this order:\n$\\bx \\rightarrow \\log \\bq \\rightarrow Z \\rightarrow \\log \\bp$\nGiven some dataset with $N$ independent, identically distributed datapoints, the log-likelihood is given by:\n$ \\mathcal{L}(\\bb, \\bW) = \\sum_{n=1}^N \\mathcal{L}^{(n)}$\nwhere we use $\\mathcal{L}^{(n)}$ to denote the partial log-likelihood evaluated over a single datapoint. 
It is important to see that the log-probability of the class label $t^{(n)}$ given the image, is given by the $t^{(n)}$-th element of the network's output $\\log \\bp$, denoted by $\\log p_{t^{(n)}}$:\n$\\mathcal{L}^{(n)} = \\log p(t = t^{(n)} \\;|\\; \\bx = \\bx^{(n)}, \\bb, \\bW) = \\log p_{t^{(n)}} = \\log q_{t^{(n)}} - \\log Z^{(n)}$\nwhere $\\bx^{(n)}$ and $t^{(n)}$ are the input (image) and class label (integer) of the $n$-th datapoint, and $Z^{(n)}$ is the normalizing constant for the distribution over $t^{(n)}$.\n1.1 Gradient-based stochastic optimization\n1.1.1 Derive gradient equations (20 points)\nDerive the equations for computing the (first) partial derivatives of the log-likelihood w.r.t. all the parameters, evaluated at a single datapoint $n$.\nYou should start deriving the equations for $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$ for each $j$. For clarity, we'll use the shorthand $\\delta^q_j = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$.\nFor $j = t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log p_j}\n\\frac{\\partial \\log p_j}{\\partial \\log q_j}\n+ \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= 1 \\cdot 1 - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n= 1 - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\nFor $j \\neq t^{(n)}$:\n$\n\\delta^q_j\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log Z}\n\\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j} \n= - \\frac{\\partial \\log Z}{\\partial Z} \n\\frac{\\partial Z}{\\partial \\log q_j}\n$\nComplete the above derivations for $\\delta^q_j$ by further developing $\\frac{\\partial \\log Z}{\\partial Z}$ and $\\frac{\\partial Z}{\\partial \\log q_j}$. Both are quite simple. 
For these it doesn't matter whether $j = t^{(n)}$ or not.\nGiven your equations for computing the gradients $\\delta^q_j$ it should be quite straightforward to derive the equations for the gradients of the parameters of the model, $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}}$ and $\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}$. The gradients for the biases $\\bb$ are given by:\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j}\n= \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial b_j}\n= \\delta^q_j\n\\cdot 1\n= \\delta^q_j\n$\nThe equation above gives the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element of $\\bb$, so the vector $\\nabla_\\bb \\mathcal{L}^{(n)}$ with all derivatives of $\\mathcal{L}^{(n)}$ w.r.t. the bias parameters $\\bb$ is: \n$\n\\nabla_\\bb \\mathcal{L}^{(n)} = \\mathbf{\\delta}^q\n$\nwhere $\\mathbf{\\delta}^q$ denotes the vector of size $10 \\times 1$ with elements $\\mathbf{\\delta}_j^q$.\nThe (not fully developed) equation for computing the derivative of $\\mathcal{L}^{(n)}$ w.r.t. a single element $W_{ij}$ of $\\bW$ is:\n$\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} =\n\\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n= \\mathbf{\\delta}_j^q\n\\frac{\\partial \\log q_j}{\\partial W_{ij}}\n$\nWhat is $\\frac{\\partial \\log q_j}{\\partial W_{ij}}$? 
Complete the equation above.\nIf you want, you can give the resulting equation in vector format ($\\nabla_{\\bw_j} \\mathcal{L}^{(n)} = ...$), like we did for $\\nabla_\\bb \\mathcal{L}^{(n)}$.\nAnswer:\n$\\dfrac{\\partial \\log Z}{\\partial Z} = \\dfrac{1}{Z}$\n$\\dfrac{\\partial Z}{\\partial \\log q_j} = \\dfrac{\\partial \\sum_k q_k}{\\partial \\log q_j} = \\dfrac{\\partial q_j}{\\partial \\log q_j} = \\dfrac{\\partial \\exp(\\log q_j)}{\\partial \\log q_j} = \\exp(\\log q_j)$\nFor $j = t^{(n)}$: $\\delta_j^q = 1 - \\dfrac{1}{Z}\\exp(\\log q_j)$\nFor $j \\neq t^{(n)}$: $\\delta_j^q = -\\dfrac{1}{Z}\\exp(\\log q_j)$\n$ \\log q_j = \\mathbf{w}_j^T \\mathbf{x} + b_j = b_j + \\sum_{i} W_{ij} x_i \\Rightarrow \\dfrac{\\partial \\log q_j}{\\partial W_{ij}} = x_i \\Rightarrow \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} = \\delta_j^q x_i$\n1.1.2 Implement gradient computations (10 points)\nImplement the gradient calculations you derived in the previous question. Write a function logreg_gradient(x, t, w, b) that returns the gradients $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (for each $j$) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$, i.e. the first partial derivatives of the log-likelihood w.r.t. the parameters $\\bW$ and $\\bb$, evaluated at a single datapoint (x, t).\nThe computation will contain roughly the following intermediate variables:\n$\n\\log \\bq \\rightarrow Z \\rightarrow \\log \\bp\\,,\\, \\mathbf{\\delta}^q\n$\nfollowed by computation of the gradient vectors $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ (contained in a $784 \\times 10$ matrix) and $\\nabla_{\\bb} \\mathcal{L}^{(n)}$ (a $10 \\times 1$ vector).",
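The log-sum-exp trick mentioned in the warning above can be demonstrated in plain NumPy; factoring out the maximum before exponentiating gives the same mathematical result without overflow:

```python
import numpy as np

log_q = np.array([1000.0, 1001.0, 1002.0])  # naive np.exp(log_q) overflows to inf

# Stable computation: for a = max_j log q_j,
#   log Z = log sum_j exp(log q_j) = a + log sum_j exp(log q_j - a)
a = np.max(log_q)
log_Z = a + np.log(np.sum(np.exp(log_q - a)))
print(log_Z)  # ~1002.41, whereas np.log(np.sum(np.exp(log_q))) gives inf
```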
"def logreg_gradient(x, t, W, b):\n log_q = W.T.dot(x) + b\n\n # Log-sum-exp trick\n a = np.max(log_q)\n logZ = a + np.log(np.sum(np.exp(log_q - a)))\n Z = np.exp(logZ)\n grad_b = -np.exp(log_q) / Z\n grad_b[t] += 1\n grad_w = x[np.newaxis].T.dot(grad_b[np.newaxis])\n return grad_w, grad_b\n\ndef init_weights(num_of_features, num_of_class):\n w = (np.random.random_sample((num_of_features, num_of_class)) - 0.5) * 0.001\n b = (np.random.random_sample(num_of_class) - 0.5) * 0.001\n return w, b\n\nnp.random.seed(0)\nnum_of_features = x_train.shape[1]\nnum_of_class = 10\n\n# Test\nw, b = init_weights(num_of_features, num_of_class)\n\nrandom_item_id = np.random.randint(x_train.shape[0])\ngrad_w, grad_b = logreg_gradient(x_train[random_item_id], t_train[random_item_id], w, b)\nprint grad_w.shape, grad_b.shape",
"1.1.3 Stochastic gradient descent (10 points)\nWrite a function sgd_iter(x_train, t_train, w, b) that performs one iteration of stochastic gradient descent (SGD), and returns the new weights. It should go through the training set once in randomized order, call logreg_gradient(x, t, w, b) for each datapoint to get the gradients, and update the parameters using a small learning rate (e.g. 1E-4). Note that in this case we're maximizing the likelihood function, so we should actually be performing gradient ascent... For more information about SGD, see Bishop 5.2.4 or an online source (e.g. https://en.wikipedia.org/wiki/Stochastic_gradient_descent)",
"def compute_log_likelihood(x, t, w, b):\n log_q = x.dot(w) + b\n a = np.max(log_q, axis=1)\n log_Z = a + np.log(np.sum(np.exp((log_q.T - a).T), axis=1))\n log_likelihood = np.sum(np.array([log_q[index, value] for (index,), value in np.ndenumerate(t)]) - log_Z)\n return log_likelihood\n\ndef sgd_iter(x_train, t_train, w, b, lr=0.001):\n \"\"\"\n No need to return w and b: the in-place updates modify them by reference.\n \"\"\"\n indexes = np.arange(len(x_train))\n np.random.shuffle(indexes)\n\n for index in indexes:\n grad_w, grad_b = logreg_gradient(x_train[index], t_train[index], w, b)\n w += lr * grad_w\n b += lr * grad_b\n\n# Test\nw, b = init_weights(num_of_features, num_of_class)\nprint 'Likelihood:', compute_log_likelihood(x_train, t_train, w, b)\nsgd_iter(x_train, t_train, w, b)\nprint 'Likelihood:', compute_log_likelihood(x_train, t_train, w, b)\n\nprint w.shape, b.shape",
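The gradient and likelihood functions above rely on the log-sum-exp trick (subtracting the maximum logit before exponentiating). A minimal standalone sketch of why the naive computation fails for large logits:

```python
import numpy as np

log_q = np.array([1000.0, 1000.0, 999.0])

# Naive: exp(1000) overflows to inf, so log Z comes out as inf
with np.errstate(over='ignore'):
    naive_log_Z = np.log(np.sum(np.exp(log_q)))

# Stable: factor out the maximum first, so the largest exponent is exp(0) = 1
a = np.max(log_q)
stable_log_Z = a + np.log(np.sum(np.exp(log_q - a)))
```

Both expressions are mathematically identical, but only the second one stays finite in floating point.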
"1.2. Train\n1.2.1 Train (10 points)\nPerform a handful of training iterations through the training set. Plot (in one graph) the conditional log-probability of the training set and validation set after each iteration.",
"w, b = init_weights(num_of_features, num_of_class)\n\nloglik_train = [compute_log_likelihood(x_train, t_train, w, b)]\nloglik_valid = [compute_log_likelihood(x_valid, t_valid, w, b)]\n\nfor i in xrange(20):\n sgd_iter(x_train, t_train, w, b)\n loglik_train.append(compute_log_likelihood(x_train, t_train, w, b))\n loglik_valid.append(compute_log_likelihood(x_valid, t_valid, w, b))\nprint loglik_train\nprint loglik_valid\n\nr = np.arange(len(loglik_valid))\nfigure = plt.figure()\nfigure.set_size_inches((14, 10))\nplt.subplot(221)\nplt.plot(r, loglik_train, label='Training set')\nplt.plot(r, loglik_valid, label='Validation set')\nplt.ylabel('Likelihood')\nplt.xlabel('Iteration')\nplt.grid(True)\nplt.legend()\n\nplt.subplot(222)\nplt.plot(r, loglik_train)\nplt.title('log likelihood changes for the training set')\nplt.ylabel('Likelihood')\nplt.xlabel('Iteration')\nplt.grid(True)\n\nplt.subplot(223)\nplt.plot(r, loglik_valid)\nplt.title('log likelihood changes for the validation set')\n\nplt.ylabel('Likelihood')\nplt.xlabel('Iteration')\nplt.grid(True)\n\nplt.show()",
"1.2.2 Visualize weights (10 points)\nVisualize the resulting parameters $\\bW$ after a few iterations through the training set, by treating each column of $\\bW$ as an image. If you want, you can use or edit the plot_digits(...) above.",
"plot_digits(w.T, numcols=5)",
"1.2.3. Visualize the 8 hardest and 8 easiest digits (10 points)\nVisualize the 8 digits in the validation set with the highest probability of the true class label under the model.\nAlso plot the 8 digits that were assigned the lowest probability.\nAsk yourself if these results make sense.",
"# Ranking by highest probability is the same as ranking by log(q(j = t)), so we use log_qt directly,\n# which is easier from the computational perspective\nlog_q = x_valid.dot(w) + b\n\nlog_qt = np.array([log_q[index, value] for (index,), value in np.ndenumerate(t_valid)]) \nsorted_by_log_qt = np.argsort(log_qt)\neasiest = sorted_by_log_qt[-8:]\nfor i in range(8):\n plt.subplot(1, 8, i + 1)\n plt.axis('off')\n plt.imshow(x_valid[easiest[i]].reshape((28, 28)), interpolation='nearest', cmap='Greys')\nplt.title('Easiest 8 digits')\nplt.show()\n\nhardest = sorted_by_log_qt[:8]\nfor i in range(8):\n plt.subplot(1, 8, i + 1)\n plt.axis('off')\n plt.imshow(x_valid[hardest[i]].reshape((28, 28)), interpolation='nearest', cmap='Greys')\nplt.title('Hardest 8 digits')\nplt.show()",
"Part 2. Multilayer perceptron\nYou discover that the predictions by the logistic regression classifier are not good enough for your application: the model is too simple. You want to increase the accuracy of your predictions by using a better model. For this purpose, you're going to use a multilayer perceptron (MLP), a simple kind of neural network. The perceptron will have a single hidden layer $\\bh$ with $L$ elements. The parameters of the model are $\\bV$ (connections between input $\\bx$ and hidden layer $\\bh$), $\\ba$ (the biases/intercepts of $\\bh$), $\\bW$ (connections between $\\bh$ and $\\log q$) and $\\bb$ (the biases/intercepts of $\\log q$).\nThe conditional probability of the class label $j$ is given by:\n$\\log p(t = j \\;|\\; \\bx, \\bb, \\bW) = \\log q_j - \\log Z$\nwhere $q_j$ are again the unnormalized probabilities per class, and $Z = \\sum_j q_j$ is again the probability normalizing factor. Each $q_j$ is computed using:\n$\\log q_j = \\bw_j^T \\bh + b_j$\nwhere $\\bh$ is an $L \\times 1$ vector with the hidden layer activations (of a hidden layer with size $L$), and $\\bw_j$ is the $j$-th column of $\\bW$ (an $L \\times 10$ matrix). Each element of the hidden layer is computed from the input vector $\\bx$ using:\n$h_j = \\sigma(\\bv_j^T \\bx + a_j)$\nwhere $\\bv_j$ is the $j$-th column of $\\bV$ (a $784 \\times L$ matrix), $a_j$ is the $j$-th element of $\\ba$, and $\\sigma(.)$ is the so-called sigmoid activation function, defined by:\n$\\sigma(x) = \\frac{1}{1 + \\exp(-x)}$\nNote that this model is almost equal to the multiclass logistic regression model, but with an extra 'hidden layer' $\\bh$. 
The activations of this hidden layer can be viewed as features computed from the input, where the feature transformation ($\\bV$ and $\\ba$) is learned.\n2.1 Derive gradient equations (20 points)\nState (shortly) why $\\nabla_{\\bb} \\mathcal{L}^{(n)}$ is equal to the earlier (multiclass logistic regression) case, and why $\\nabla_{\\bw_j} \\mathcal{L}^{(n)}$ is almost equal to the earlier case.\nLike in multiclass logistic regression, you should use intermediate variables $\\mathbf{\\delta}_j^q$. In addition, you should use intermediate variables $\\mathbf{\\delta}_j^h = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial h_j}$.\nGiven an input image, roughly the following intermediate variables should be computed:\n$\n\\log \\bq \\rightarrow Z \\rightarrow \\log \\bp \\rightarrow \\mathbf{\\delta}^q \\rightarrow \\mathbf{\\delta}^h\n$\nwhere $\\mathbf{\\delta}_j^h = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\bh_j}$.\nGive the equations for computing $\\mathbf{\\delta}^h$, and for computing the derivatives of $\\mathcal{L}^{(n)}$ w.r.t. $\\bW$, $\\bb$, $\\bV$ and $\\ba$. 
\nYou can use the convenient fact that $\\frac{\\partial}{\\partial x} \\sigma(x) = \\sigma(x) (1 - \\sigma(x))$.\nAnswer:\n$\\mathcal{L}^{(n)} = \\log q_j - \\log Z = \\mathbf{w}_j^T \\mathbf{h} + b_j - \\log Z = \\sum_{i=1}^L W_{ij}\\sigma(\\mathbf{v}_i^T \\mathbf{x} + a_i) + b_j - \\log Z = \\sum_{i=1}^L W_{ij}\\sigma(\\sum_{k=1}^{784} v_{ki}x_k + a_i) + b_j - \\log Z$\n$\\delta^q_j = \\frac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}$\nFor $j = t^{(n)}$: $\\delta_j^q = 1 - \\dfrac{1}{Z}\\exp(\\log q_j)$\nFor $j \\neq t^{(n)}$: $\\delta_j^q = -\\dfrac{1}{Z}\\exp(\\log q_j)$\nSince $h_i$ influences every $\\log q_j$, we sum over the classes:\n$\\delta_i^h = \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial h_i} = \\sum_j \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j} \\dfrac{\\partial \\log q_j}{\\partial h_i} = \\sum_j \\delta_j^q W_{ij}$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j} = \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j} \\dfrac{\\partial \\log q_j}{\\partial b_j} = \\delta_j^q$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} = \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial \\log q_j}\\dfrac{\\partial \\log q_j}{\\partial W_{ij}} = \\delta_j^q h_i$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial a_i} = \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial h_i} \\dfrac{\\partial h_i}{\\partial (\\mathbf{v_i}^T\\mathbf{x} + a_i)} \\dfrac{\\partial(\\mathbf{v_i}^T\\mathbf{x} + a_i)}{\\partial a_i} = \\delta_i^h \\sigma(\\mathbf{v_i}^T\\mathbf{x} + a_i)(1 - \\sigma(\\mathbf{v_i}^T\\mathbf{x} + a_i)) = \\delta_i^h h_i (1 - h_i)$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial v_{ki}} = \\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial h_i} \\dfrac{\\partial h_i}{\\partial(\\mathbf{v_i}^T\\mathbf{x} + a_i)} \\dfrac{\\partial(\\mathbf{v_i}^T\\mathbf{x} + a_i)}{\\partial v_{ki}} = \\delta_i^h \\sigma(\\mathbf{v_i}^T\\mathbf{x} + a_i)(1 - \\sigma(\\mathbf{v_i}^T\\mathbf{x} + a_i))x_k = \\delta_i^h h_i(1 - h_i)x_k$\n2.2 MAP optimization (10 points)\nYou derived equations for finding the maximum 
likelihood solution of the parameters. Explain, in a few sentences, how you could extend this approach so that it optimizes towards a maximum a posteriori (MAP) solution of the parameters, with a Gaussian prior on the parameters. \nAnswer:\nA zero-mean Gaussian prior on the weights adds a log-prior term to the objective, so instead of the log-likelihood we maximize the log-posterior. Up to an additive constant, this amounts to subtracting quadratic (L2) penalty terms, and each weight gradient gains a corresponding weight-decay term:\n$\\mathcal{L}^{(n)} = \\log q_j - \\log Z - \\dfrac{\\alpha_1}{2}\\sum_{k=1}^{784}\\sum_{i=1}^L v_{ki}^2 - \\dfrac{\\alpha_2}{2} \\sum_{i=1}^L\\sum_{j=1}^{10} W_{ij}^2 $\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial b_j} = \\delta_j^q$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial W_{ij}} = \\delta_j^q h_i - \\alpha_2 W_{ij}$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial a_i} = \\delta_i^h h_i (1 - h_i)$\n$\\dfrac{\\partial \\mathcal{L}^{(n)}}{\\partial v_{ki}} = \\delta_i^h h_i (1 - h_i)x_k - \\alpha_1 v_{ki}$\n<!-- Bishop 5.5.1 Section -->\n\n2.3. Implement and train a MLP (15 points)\nImplement an MLP model with a single hidden layer, and code to train the model.",
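Before implementing the full model, the backpropagation equations above can be checked numerically on a tiny random network. This is only a sketch under stated assumptions: the layer sizes (6 inputs, 4 hidden units, 3 classes) and all names are made up, and the Gaussian prior is omitted so the analytic gradient matches a plain finite difference of the log-likelihood.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_ll(x, t, W, b, V, a):
    # log p(t | x) for the MLP: h = sigma(V^T x + a), log q = W^T h + b
    h = sigmoid(V.T.dot(x) + a)
    log_q = W.T.dot(h) + b
    m = log_q.max()
    return log_q[t] - (m + np.log(np.exp(log_q - m).sum()))

def mlp_grads(x, t, W, b, V, a):
    h = sigmoid(V.T.dot(x) + a)
    log_q = W.T.dot(h) + b
    q = np.exp(log_q - log_q.max())
    delta_q = -(q / q.sum()); delta_q[t] += 1.0   # delta^q
    delta_h = W.dot(delta_q)                      # delta^h_i = sum_j W_ij delta^q_j
    grad_a = delta_h * h * (1 - h)
    # returns dL/dW, dL/db, dL/dV, dL/da
    return np.outer(h, delta_q), delta_q, np.outer(x, grad_a), grad_a

rng = np.random.RandomState(1)
x, t = rng.randn(6), 1
V, a = rng.randn(6, 4) * 0.1, rng.randn(4) * 0.1
W, b = rng.randn(4, 3) * 0.1, rng.randn(3) * 0.1

gW, gb, gV, ga = mlp_grads(x, t, W, b, V, a)

# Finite-difference checks on one element of V and one element of b
eps = 1e-6
V2 = V.copy(); V2[2, 0] += eps
num_dV = (mlp_ll(x, t, W, b, V2, a) - mlp_ll(x, t, W, b, V, a)) / eps
b2 = b.copy(); b2[1] += eps
num_db = (mlp_ll(x, t, W, b2, V, a) - mlp_ll(x, t, W, b, V, a)) / eps
```

Agreement between the numeric and analytic values confirms the chain-rule bookkeeping through the hidden layer.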
"def mlperceptron_gradient(x, t, W, b, V, a, a1=0.001, a2=0.001):\n h = 1 / (1 + np.exp(-x.dot(V) - a))\n log_q = h.dot(W) + b\n\n # Log-sum-exp trick; use a separate name so the bias vector a is not shadowed\n amax = np.max(log_q)\n logZ = amax + np.log(np.sum(np.exp(log_q - amax)))\n Z = np.exp(logZ)\n\n grad_b = -np.exp(log_q) / Z\n grad_b[t] += 1\n\n grad_W = h[np.newaxis].T.dot(grad_b[np.newaxis]) - a2 * W\n\n grad_h = W.dot(grad_b) # delta^h: sum over all classes j of W_ij * delta^q_j\n grad_a = grad_h * h * (1 - h)\n\n grad_V = x[np.newaxis].T.dot(grad_a[np.newaxis]) - a1 * V\n return grad_W, grad_b, grad_V, grad_a\n\ndef init_mlperceptron_weights(num_of_features, num_of_class, L):\n W = (np.random.random_sample((L, num_of_class)) - 0.5) * 0.001\n b = (np.random.random_sample(num_of_class) - 0.5) * 0.001\n V = (np.random.random_sample((num_of_features, L)) - 0.5) * 0.001\n a = (np.random.random_sample(L) - 0.5) * 0.001\n return W, b, V, a\n\n# Test\nnp.random.seed(0)\nL = 50\nW, b, V, a = init_mlperceptron_weights(num_of_features, num_of_class, L)\ngrad_W, grad_b, grad_V, grad_a = mlperceptron_gradient(x_train[0], t_train[0], W, b, V, a)\nprint grad_W.shape, grad_b.shape, grad_V.shape, grad_a.shape\n\ndef sgd_mlperceptron_iter(x_train, t_train, W, b, V, a, a1=0.001, a2=0.001, lr=0.001):\n indexes = np.arange(len(x_train))\n np.random.shuffle(indexes)\n\n for index in indexes:\n grad_W, grad_b, grad_V, grad_a = mlperceptron_gradient(x_train[index], t_train[index], W, b, V, a, a1, a2)\n W += lr * grad_W\n b += lr * grad_b\n V += lr * grad_V\n a += lr * grad_a\n\n# Test\nnp.random.seed(0)\nL = 50\nW, b, V, a = init_mlperceptron_weights(num_of_features, num_of_class, L)\nsgd_mlperceptron_iter(x_train, t_train, W, b, V, a)\n\ndef compute_log_likelihood(x, t, W, b, V, a):\n h = 1 / (1 + np.exp(-x.dot(V) - a))\n log_q = h.dot(W) + b\n\n # Row-wise log-sum-exp; again avoid shadowing the bias vector a\n amax = np.max(log_q, axis=1)\n log_Z = amax + np.log(np.sum(np.exp((log_q.T - amax).T), axis=1))\n\n log_likelihood = np.sum(np.array([log_q[index, value] for (index,), value in np.ndenumerate(t)]) - log_Z)\n return log_likelihood\n\n# Test\ncompute_log_likelihood(x_train, t_train, W, b, V, 
a)\n\nimport time\ndef train_mlperceptron(W, b, V, a, a1=0.001, a2=0.001, lr=0.001, num_iter=20, cl=True):\n loglik_train = []\n if cl:\n loglik_train.append(compute_log_likelihood(x_train, t_train, W, b, V, a))\n for i in xrange(num_iter):\n t1 = time.time()\n sgd_mlperceptron_iter(x_train, t_train, W, b, V, a, a1, a2, lr)\n print 'dt =', (time.time() - t1)\n if cl:\n loglik_train.append(compute_log_likelihood(x_train, t_train, W, b, V, a))\n\n return loglik_train\n\n# Test\nW, b, V, a = init_mlperceptron_weights(num_of_features, num_of_class, L)\nlikelihood = train_mlperceptron(W, b, V, a, num_iter=20)\n\nr = np.arange(len(likelihood))\nfigure = plt.figure()\nfigure.set_size_inches((14, 10))\nplt.plot(r, likelihood)\nplt.title('log likelihood changes for the training set')\nplt.ylabel('Likelihood')\nplt.xlabel('Iteration')\nplt.grid(True)\n\ndef predict_mlperceptron(W, b, V, a, x):\n h = 1 / (1 + np.exp(-x.dot(V) - a))\n log_q = h.dot(W) + b\n return np.argmax(log_q, axis=1)",
"We got 91.9% accuracy on the validation set without model selection:",
"# Test\nprediction = predict_mlperceptron(W, b, V, a, x_valid)\ncorrectness = t_valid == prediction\naccuracy = len(correctness[correctness == True]) / float(len(t_valid))\nprint accuracy\n\nfigure = plt.figure()\nfigure.set_size_inches((16, 10))\nplot_digits(V.T, numcols=5)",
"2.3.1. Less than 250 misclassifications on the test set (10 bonus points)\nYou receive an additional 10 bonus points if you manage to train a model with very high accuracy: at most 2.5% misclassified digits on the test set. Note that the test set contains 10000 digits, so your model should misclassify at most 250 digits. This should be achievable with an MLP model with one hidden layer. See results of various models at http://yann.lecun.com/exdb/mnist/index.html. To reach such a low error rate, you probably need a very high $L$ (many hidden units), probably $L > 200$, and apply a strong Gaussian prior on the weights. In this case you are allowed to use the validation set for training.\nYou are allowed to add additional layers, and use convolutional networks, although that is probably not required to reach 2.5% misclassifications.",
"# Ignore this cell if you don't want to wait for a century\nbest_L = 0\nbest_a1 = 0\nbest_a2 = 0\nbest_accuracy = 0\n\nfor a1 in [0.001, 0.005, 0.01]:\n for a2 in [0.001, 0.005, 0.01]:\n for L in [150, 200, 250, 300]:\n np.random.seed(0)\n W, b, V, a = init_mlperceptron_weights(num_of_features, num_of_class, L)\n train_mlperceptron(W, b, V, a, a1=a1, a2=a2, num_iter=5, cl=False)\n prediction = predict_mlperceptron(W, b, V, a, x_valid)\n correctness = t_valid == prediction\n accuracy = len(correctness[correctness == True]) / float(len(t_valid))\n print 'L =', L, 'a1 =', a1, 'a2 =', a2, 'accuracy =', accuracy\n if accuracy > best_accuracy:\n best_L = L\n best_a1 = a1\n best_a2 = a2\n best_accuracy = accuracy\n\nprint best_a1, best_a2, best_L\n\nnp.random.seed(0)\nbest_a1 = 0.001\nbest_a2 = 0.001\nbest_L = 200\nW, b, V, a = init_mlperceptron_weights(num_of_features, num_of_class, best_L)\n# L = 150 a1 = 0.001 a2 = 0.001 accuracy = 0.8807\n# L = 200 a1 = 0.001 a2 = 0.001 accuracy = 0.8813\n# L = 250 a1 = 0.001 a2 = 0.001 accuracy = 0.8806\n# L = 300 a1 = 0.001 a2 = 0.001 accuracy = 0.8789\n# L = 150 a1 = 0.001 a2 = 0.005 accuracy = 0.8738\n# L = 200 a1 = 0.001 a2 = 0.005 accuracy = 0.8779\n# L = 250 a1 = 0.001 a2 = 0.005 accuracy = 0.8761\n\ntrain_mlperceptron(W, b, V, a, a1=best_a1, a2=best_a2, num_iter=20, cl=False)\n\nprediction = predict_mlperceptron(W, b, V, a, x_valid)\ncorrectness = t_valid == prediction\naccuracy = len(correctness[correctness == True]) / float(len(t_valid))\nprint accuracy\n\n# Copies of the parameters; the bias copies are named bias1/bias2 so they\n# don't shadow the prior strengths a1/a2\nW1 = np.copy(W)\nb1 = np.copy(b)\nV1 = np.copy(V)\nbias1 = np.copy(a)\ntrain_mlperceptron(W1, b1, V1, bias1, a1=best_a1, a2=best_a2, num_iter=5, cl=False)\n\nprediction = predict_mlperceptron(W1, b1, V1, bias1, x_valid)\ncorrectness = t_valid == prediction\naccuracy = len(correctness[correctness == True]) / float(len(t_valid))\nprint accuracy\n\nW2 = np.copy(W1)\nb2 = np.copy(b1)\nV2 = np.copy(V1)\nbias2 = np.copy(bias1)\ntrain_mlperceptron(W2, b2, V2, bias2, a1=best_a1, a2=best_a2, num_iter=5, cl=False)\n\nprediction = predict_mlperceptron(W2, b2, V2, bias2, x_valid)\ncorrectness = t_valid == prediction\naccuracy = len(correctness[correctness == True]) / float(len(t_valid))\nprint accuracy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
prisae/blog-notebooks
|
MX_Oaxaca.ipynb
|
cc0-1.0
|
[
"Oaxaca\nMaps for https://mexico.werthmuller.org/besucherreisen/oaxaca.",
"import numpy as np\nimport travelmaps2 as tm\nfrom matplotlib import pyplot as plt\ntm.setup(dpi=200)",
"1. Map",
"fig_x = tm.plt.figure(figsize=(tm.cm2in([11, 6])))\n\n# Locations\nMDF = [19.433333, -99.133333] # Mexico City\nOAX = [16.898056, -96.414167] # Oaxaca\n\n# Create basemap\nm_x = tm.Basemap(width=3500000, height=2300000, resolution='c', projection='tmerc', lat_0=24, lon_0=-102)\n\n# Plot image\nm_x.warpimage('./data/TravelMap/HYP_HR_SR_OB_DR/HYP_HR_SR_OB_DR.tif')\n\n# Put a shade over non-Mexican countries\ncountries = ['USA', 'BLZ', 'GTM', 'HND', 'SLV', 'NIC', 'CUB']\ntm.country(countries, m_x, fc='.8', ec='.3', lw=.5, alpha=.6)\n\n# Fill states\nfcs = 32*['none']\necs = 32*['k']\nlws = 32*[.2,]\ntm.country('MEX', bmap=m_x, fc=fcs, ec=ecs, lw=lws, adm=1)\necs = 32*['none']\necs[19] = 'r'\nlws = 32*[1,]\ntm.country('MEX', bmap=m_x, fc=fcs, ec=ecs, lw=lws, adm=1)\n\n# Add visited cities\ntm.city(OAX, 'Oaxaca', m_x, offs=[0, -2], halign=\"center\")\ntm.city(MDF, 'Mexiko-Stadt', m_x, offs=[-.6, .6], halign=\"right\")\n\n# Save-path\n#fpath = '../mexico.werthmuller.org/content/images/oaxaca/'\n#tm.plt.savefig(fpath+'MapOaxaca.png', bbox_inches='tight')\ntm.plt.show()",
"2. Profile\nRoute drawn in Google Earth, exported, and subsequently converted at http://www.gpsvisualizer.com.",
"fig_p, ax = plt.subplots(figsize=(tm.cm2in([10.8, 5])))\n\n# Switch off axis and ticks\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.spines['left'].set_visible(False)\nax.spines['bottom'].set_visible(False)\nax.xaxis.set_ticks_position('none')\nax.yaxis.set_ticks_position('none')\n\n# Get data\npdat = np.loadtxt('./data/Mexico/OaxacaProfile.txt', delimiter='\\t', skiprows=1)\n\n# Plot city names and kilometers\nplt.annotate('Mexiko Stadt', (0, 5500), horizontalalignment='center', rotation='vertical')\nplt.annotate('Puebla (129 km)', (129, 6000), horizontalalignment='center', rotation='vertical')\nplt.annotate('Tehuacan (252 km)', (252, 6000), horizontalalignment='center', rotation='vertical')\nplt.annotate('Oaxaca (465 km)', (465, 5400), horizontalalignment='center', rotation='vertical')\n\n# Ticks, hlines, axis\nplt.xticks(())\nplt.yticks((1000, 1500, 2000, 2500, 3000, 3500), ('1000 m', '', '2000 m', '', '3000 m', ''))\nplt.hlines([1000, 2000, 3000], -100, 500, colors='.8')\nplt.hlines([1500, 2500, 3500], -100, 500, colors='.8', lw=.5)\nplt.axis([-20, 486, 500, 6000])\n\n# Plot data\nplt.plot(pdat[:,3], pdat[:,2])\n\n# Save-path\n#fpath = '../mexico.werthmuller.org/content/images/oaxaca/'\n#plt.savefig(fpath+'Profile.png', bbox_inches='tight')\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ModSim
|
python/soln/chap11.ipynb
|
gpl-2.0
|
[
"Chapter 11\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *",
"In this chapter, we develop a model of an epidemic as it spreads in a\nsusceptible population, and use it to evaluate the effectiveness of\npossible interventions.\nMy presentation of the model in the next few chapters is based on an excellent article by David Smith and Lang Moore: \"The SIR Model for Spread of Disease,\" Journal of Online Mathematics and its Applications, December 2001, available at http://modsimpy.com/sir.\nThe Freshman Plague\nEvery year at Olin College, about 90 new students come to campus from\naround the country and the world. Most of them arrive healthy and happy, but usually at least one brings with them some kind of infectious disease. A few weeks later, predictably, some fraction of the incoming class comes down with what we call \"The Freshman Plague\".\nIn this chapter we introduce a well-known model of infectious disease,\nthe Kermack-McKendrick model, and use it to explain the progression of\nthe disease over the course of the semester, predict the effect of\npossible interventions (like immunization) and design the most effective intervention campaign.\nSo far we have done our own modeling; that is, we've chosen physical\nsystems, identified factors that seem important, and made decisions\nabout how to represent them. In this chapter we start with an existing\nmodel and reverse-engineer it. Along the way, we consider the modeling\ndecisions that went into it and identify its capabilities and\nlimitations.\nThe SIR model\nThe Kermack-McKendrick model is a simple version of an SIR model,\nso-named because it considers three categories of people:\n\n\nS: People who are \"susceptible\", that is, capable of\n contracting the disease if they come into contact with someone who\n is infected.\n\n\nI: People who are \"infectious\", that is, capable of passing\n along the disease if they come into contact with someone\n susceptible.\n\n\nR: People who are \"recovered\". 
In the basic version of the\n model, people who have recovered are considered to be immune to\n reinfection. That is a reasonable model for some diseases, but not\n for others, so it should be on the list of assumptions to reconsider\n later.\n\n\nLet's think about how the number of people in each category changes over time. Suppose we know that people with the disease are infectious for a period of 4 days, on average. If 100 people are infectious at a\nparticular point in time, and we ignore the particular time each one\nbecame infected, we expect about 1 out of 4 to recover on any particular day.\nPutting that a different way, if the time between recoveries is 4 days, the recovery rate is about 0.25 recoveries per day, which we'll denote with the Greek letter gamma, $\\gamma$, or the variable name gamma.\nIf the total number of people in the population is $N$, and the fraction currently infectious is $i$, the total number of recoveries we expect per day is $\\gamma i N$.\nNow let's think about the number of new infections. Suppose we know that each susceptible person comes into contact with 1 person every 3 days, on average, in a way that would cause them to become infected if the other person is infected. 
We'll denote this contact rate with the Greek letter beta, $\\beta$.\nIt's probably not reasonable to assume that we know $\\beta$ ahead of\ntime, but later we'll see how to estimate it based on data from previous outbreaks.\nIf $s$ is the fraction of the population that's susceptible, $s N$ is\nthe number of susceptible people, $\\beta s N$ is the number of contacts per day, and $\\beta s i N$ is the number of those contacts where the other person is infectious.\nIn summary:\n\n\nThe number of recoveries we expect per day is $\\gamma i N$; dividing by $N$ yields the fraction of the population that recovers in a day, which is $\\gamma i$.\n\n\nThe number of new infections we expect per day is $\\beta s i N$;\n dividing by $N$ yields the fraction of the population that gets\n infected in a day, which is $\\beta s i$.\n\n\nThis model assumes that the population is closed; that is, no one\narrives or departs, so the size of the population, $N$, is constant.\nThe SIR equations\nIf we treat time as a continuous quantity, we can write differential\nequations that describe the rates of change for $s$, $i$, and $r$ (where $r$ is the fraction of the population that has recovered):\n$$\\begin{aligned}\n\\frac{ds}{dt} &= -\\beta s i \\\\\n\\frac{di}{dt} &= \\beta s i - \\gamma i \\\\\n\\frac{dr}{dt} &= \\gamma i\n\\end{aligned}$$ \nTo avoid cluttering the equations, I leave it implied that $s$ is a function of time, $s(t)$, and likewise for $i$ and $r$.\nSIR models are examples of compartment models, so-called because\nthey divide the world into discrete categories, or compartments, and\ndescribe transitions from one compartment to another. Compartments are\nalso called stocks and transitions between them are called\nflows.\nIn this example, there are three stocks---susceptible, infectious, and\nrecovered---and two flows---new infections and recoveries. 
Compartment\nmodels are often represented visually using stock and flow diagrams (see http://modsimpy.com/stock).\nThe following figure shows the stock and flow diagram for an SIR\nmodel.\nStocks are represented by rectangles, flows by arrows. The widget in the middle of the arrows represents a valve that controls the rate of flow; the diagram shows the parameters that control the valves.\nImplementation\nFor a given physical system, there are many possible models, and for a\ngiven model, there are many ways to represent it. For example, we can\nrepresent an SIR model as a stock-and-flow diagram, as a set of\ndifferential equations, or as a Python program. The process of\nrepresenting a model in these forms is called implementation. In\nthis section, we implement the SIR model in Python.\nI'll represent the initial state of the system using a State object\nwith state variables S, I, and R; they represent the fraction of\nthe population in each compartment.\nWe can initialize the State object with the number of people in each compartment, assuming there is one infected student in a class of 90:",
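To make the two flows concrete before building the full model, here is a small numeric sketch (plain Python, independent of the ModSim library) using the chapter's numbers: a contact every 3 days, a 4-day infectious period, and one infected student out of 90.

```python
beta = 1.0 / 3     # contact rate (per day): one contact every 3 days
gamma = 1.0 / 4    # recovery rate (per day): infectious for 4 days on average

s = 89.0 / 90      # fraction of the population that is susceptible
i = 1.0 / 90       # fraction of the population that is infected

infected_per_day = beta * s * i    # fraction newly infected in one day
recovered_per_day = gamma * i      # fraction recovering in one day
```

With these numbers, recoveries initially outpace nothing in particular; both flows are tiny because only one student is infected, which is why the outbreak takes a few weeks to get going.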
"init = State(S=89, I=1, R=0)\ninit",
"And then convert the numbers to fractions by dividing by the total:",
"from numpy import sum\n\ninit /= sum(init)\ninit",
"For now, let's assume we know the time between contacts and time between\nrecoveries:",
"tc = 3 # time between contacts in days \ntr = 4 # recovery time in days",
"We can use them to compute the parameters of the model:",
"beta = 1 / tc # contact rate in per day\ngamma = 1 / tr # recovery rate in per day",
"Now we need a System object to store the parameters and initial\nconditions. The following function takes the system parameters and returns a new System object:",
"def make_system(beta, gamma):\n init = State(S=89, I=1, R=0)\n init /= sum(init)\n\n t0 = 0\n t_end = 7 * 14\n\n return System(init=init, t0=t0, t_end=t_end,\n beta=beta, gamma=gamma)",
"The default value for t_end is 14 weeks, about the length of a\nsemester.\nHere's what the System object looks like.",
"system = make_system(beta, gamma)\nsystem",
"The update function\nAt any point in time, the state of the system is represented by a\nState object with three variables, S, I and R. So I'll define an\nupdate function that takes as parameters a State object, the current\ntime, and a System object:",
"def update_func(state, t, system):\n s, i, r = state\n\n infected = system.beta * i * s \n recovered = system.gamma * i\n \n s -= infected\n i += infected - recovered\n r += recovered\n \n return State(S=s, I=i, R=r)",
"The first line uses a feature we have not seen before, multiple\nassignment. The value on the right side is a State object that\ncontains three values. The left side is a sequence of three variable\nnames. The assignment does just what we want: it assigns the three\nvalues from the State object to the three variables, in order.\nThe variables s, i and r, are lowercase to distinguish them\nfrom the state variables, S, I and R.\nThe update function computes infected and recovered as a fraction of the population, then updates s, i and r. The return value is a State that contains the updated values.\nWhen we call update_func like this:",
"state = update_func(init, 0, system)\nstate",
"You might notice that this version of update_func does not use one of its parameters, t. I include it anyway because update functions\nsometimes depend on time, and it is convenient if they all take the same parameters, whether they need them or not.\nRunning the simulation\nNow we can simulate the model over a sequence of time steps:",
"from numpy import arange\n\ndef run_simulation(system, update_func):\n state = system.init\n\n for t in arange(system.t0, system.t_end):\n state = update_func(state, t, system)\n\n return state",
"The parameters of run_simulation are the System object and the\nupdate function. The System object contains the parameters, initial\nconditions, and values of t0 and t_end.\nWe can call run_simulation like this:",
"system = make_system(beta, gamma)\nfinal_state = run_simulation(system, update_func)",
"The result is the final state of the system:",
"final_state",
"This result indicates that after 14 weeks (98 days), about 52% of the\npopulation is still susceptible, which means they were never infected,\nless than 1% are actively infected, and 48% have recovered, which means they were infected at some point.\nCollecting the results\nThe previous version of run_simulation only returns the final state,\nbut we might want to see how the state changes over time. We'll consider two ways to do that: first, using three TimeSeries objects, then using a new object called a TimeFrame.\nHere's the first version:",
"\n\ndef run_simulation(system, update_func):\n S = TimeSeries()\n I = TimeSeries()\n R = TimeSeries()\n\n state = system.init\n t0 = system.t0\n S[t0], I[t0], R[t0] = state\n \n for t in arange(system.t0, system.t_end):\n state = update_func(state, t, system)\n S[t+1], I[t+1], R[t+1] = state\n \n return S, I, R",
"First, we create TimeSeries objects to store the results. Notice that\nthe variables S, I, and R are TimeSeries objects now.\nNext we initialize state, t0, and the first elements of S, I and\nR.\nInside the loop, we use update_func to compute the state of the system\nat the next time step, then use multiple assignment to unpack the\nelements of state, assigning each to the corresponding TimeSeries.\nAt the end of the function, we return the values S, I, and R. This\nis the first example we have seen where a function returns more than one\nvalue.\nNow we can run the function like this:",
"system = make_system(beta, gamma)\nS, I, R = run_simulation(system, update_func)",
"We'll use the following function to plot the results:",
"def plot_results(S, I, R):\n S.plot(style='--', label='Susceptible')\n I.plot(style='-', label='Infected')\n R.plot(style=':', label='Resistant')\n decorate(xlabel='Time (days)',\n ylabel='Fraction of population')",
"And run it like this:",
"plot_results(S, I, R)",
"Notice that it takes about three weeks (21 days) for the outbreak to get going, and about six weeks (42 days) before it peaks. The fraction of the population that's infected is never very high, but it adds up. In total, almost half the population gets sick.\nNow with a TimeFrame\nIf the number of state variables is small, storing them as separate\nTimeSeries objects might not be so bad. But a better alternative is to use a TimeFrame, which is another object defined in the ModSim\nlibrary.\nA TimeFrame is a kind of DataFrame, which we have used earlier.\nHere's a more concise version of run_simulation using a TimeFrame:",
"def run_simulation(system, update_func):\n frame = TimeFrame(columns=system.init.index)\n frame.loc[system.t0] = system.init\n \n for t in arange(system.t0, system.t_end):\n frame.loc[t+1] = update_func(frame.loc[t], t, system)\n \n return frame",
"The first line creates an empty TimeFrame with one column for each\nstate variable. Then, before the loop starts, we store the initial\nconditions in the TimeFrame at t0. Based on the way we've been using\nTimeSeries objects, it is tempting to write:\nframe[system.t0] = system.init\nBut when you use the bracket operator with a TimeFrame or DataFrame, it selects a column, not a row. \nTo select a row, we have to use loc, like this:\nframe.loc[system.t0] = system.init\nSince the value on the right side is a State, the assignment matches\nup the index of the State with the columns of the TimeFrame; that\nis, it assigns the S value from system.init to the S column of\nframe, and likewise with I and R.\nWe use the same feature to write the loop more concisely, assigning the State we get from update_func directly to the next row of\nframe.\nFinally, we return frame. We can call this version of run_simulation like this:",
"results = run_simulation(system, update_func)",
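The bracket-versus-loc distinction described above can be verified with ordinary pandas, since a TimeFrame behaves like a DataFrame. A small standalone sketch (not part of the ModSim library):

```python
import pandas as pd

frame = pd.DataFrame(columns=['S', 'I', 'R'])
frame.loc[0] = [0.99, 0.01, 0.0]  # .loc selects (here: creates) a ROW by label

col = frame['S']    # the bracket operator selects a COLUMN
row = frame.loc[0]  # .loc selects a row; its index matches the column names
```

Assigning a sequence (or a State-like object) to `frame.loc[t]` fills one row, which is exactly what the loop in run_simulation does.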
"And plot the results like this:",
"plot_results(results.S, results.I, results.R)",
"As with a DataFrame, we can use the dot operator to select columns\nfrom a TimeFrame.\nSummary\nExercises\nExercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?\nHint: what is the change in S between the beginning and the end of the simulation?",
"# Solution\n\ntc = 4 # time between contacts in days \ntr = 5 # recovery time in days\n\nbeta = 1 / tc # contact rate in per day\ngamma = 1 / tr # recovery rate in per day\n\nsystem = make_system(beta, gamma)\ns_0 = system.init.S\n\nfinal = run_simulation(system, update_func)\ns_end = final.S[system.t_end]\n\ns_0 - s_end"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
alexvmarch/atomic
|
exatomic/static/exatomic_demo.ipynb
|
apache-2.0
|
[
"Welcome to exatomic! Let's get started",
"import pandas as pd\nimport numpy as np\nimport exatomic",
"Here are some test demo containers to play around with",
"u = exatomic.Universe()\nu",
"exatomic universes in principle contain a QM/MD calculation or set of calculations\nThe following dataframes are currently supported as properties of the universe with their associated required columns\n<li> Frame -- ['atom_count']\n<li> Atom -- ['symbol', 'x', 'y', 'z', 'frame']\n<li> Two -- ['distance', 'atom0', 'atom1', 'frame']\n\nThis constitutes all the required information to visualize an MD trajectory (or geometry optimization, etc.). However, there are more dataframes that allow for increased functionality.\n<li> AtomicField\n<li> Molecule\n<li> Overlap\n<li> BasisSet\n\nAn exhaustive list can be found in the documentation or on readthedocs.org\n\n### There are convenience methods for immediate access to your data\nexatomic.XYZ('/path/to/xyz/or/trajectory')\n\nexatomic.Cube('/path/to/cube/file')\n\n### Try it out!",
"#myxyz = exatomic.XYZ('../data/examples/porphyrin.xyz')\nmyxyz = exatomic.XYZ('porphyrin.xyz')\n\nmyxyz.head()",
"Just a textfile...?",
"myxyz.atom.head() # Atomic units are used throughout the exatomic package\n\nmyuni = myxyz.to_universe()\nmyuni.two.head()\n\nmyuni",
"There we go. Our porphyrin looks pretty good. Check out the GUI controls in the animation\nSo what happened above?\nexatomic.XYZ is a wrapper around exatomic.Editor, the base class for dealing with file I/O in exatomic. The base class has a to_universe method which converts an exatomic.Editor to an exatomic.Universe, which ships our data to javscript to be visualized right in a widget in the notebook.\nSo... Avogadro in the notebook? Surely it won't scale...",
"from exa.relational import Isotope\nimport random\n\nnat = 10**4 # Be careful changing this value...\nx = nat**0.5 * np.random.rand(nat)\ny = nat**0.5 * np.random.rand(nat)\nz = nat**0.5 * np.random.rand(nat)\nsymbols = Isotope.to_frame().drop_duplicates('symbol')['symbol'].tolist()\nsymbol = [random.choice(symbols) for i in range(nat)]\natom = pd.DataFrame.from_dict({'x': x, 'y': y, 'z': z, 'symbol': symbol})\natom['frame'] = 0\nscuni = exatomic.Universe(atom=atom)\n\nscuni"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aaossa/Dear-Notebooks
|
More/FacebookGraphAPI_ES.ipynb
|
gpl-3.0
|
[
"Facebook Graph API v2.5\nEn este IPython Notebook se anotarán algunos usos básicos la API que provee Facebook.",
"import json\nimport requests\n\nBASE = \"https://graph.facebook.com\"\nVERSION = \"v2.5\"\n\n# Si queremos imprimir los json de respuesta \n# de una forma mas agradable a la vista podemos usar\ndef print_pretty(jsonstring, indent=4, sort_keys=False):\n print(json.dumps(jsonstring, indent=indent, sort_keys=sort_keys))",
"Tomamos el access token temporal creado en Graph API Explorer. Si queremos crear uno que sea permanente podemos usar las instrucciones de esta pregunta de StackOverflow o de esta otra pregunta. Alternativamente, podemos crear un token de acceso con nuestra id y clave:",
"with open(\"credentials\") as f:\n access_token = str(f.read().splitlines()[0])",
"Partiremos con lo más simple. Una consulta GET para obtener información sobre nosotros mismos.\n\nGET /me\n\nEl token de acceso se envía como parámetro, junto con los campos que queremos obtener de la consulta:",
"url = \"{}/{}/me\".format(BASE, VERSION)\nparams = {\n \"access_token\": access_token,\n \"fields\": [\"id\", \"name\"]\n}\nreq = requests.get(url, params=params)\nprint_pretty(req.json())\n\nmy_id = req.json()[\"id\"]\nmy_name = req.json()[\"name\"]",
"Ahora, publicaremos un estado. Esta request nos retornará la id del post, que será publicado con visibilidad \"Solo para mi\" (Only me)\n\nPOST /me/feed",
"url = \"{}/{}/me/feed\".format(BASE, VERSION)\nparams = {\n \"access_token\": access_token,\n \"message\": \"Este estado lo publiqué usando la API de Facebook :O\"\n}\nreq = requests.post(url, params=params)\nstatus_id = req.json()[\"id\"]\nprint(\"status_id = {}\".format(status_id))",
"Luego, podemos directamente borrar un estado solo si lo publicamos usando la API:\n\nDELETE /{status-id}",
"url = \"{}/{}/{}\".format(BASE, VERSION, status_id)\nparams = {\n \"access_token\": access_token\n}\nreq = requests.delete(url, params = params)\nprint_pretty(req.json()) "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jokedurnez/RequiredEffectSize
|
Figure2_CorrSimulation/.ipynb_checkpoints/Correlation_simulation-checkpoint.ipynb
|
mit
|
[
"This notebook generates random synthetic fMRI data and a random behavioral regressor, and performs a standard univariate analysis to find correlations between the two. It is meant to demonstrate how easy it is to find seemingly impressive correlations with fMRI data when multiple tests are not properly controlled for. \nIn order to run this code, you must first install the standard Scientific Python stack (e.g. using anaconda) along with following additional dependencies:\n* nibabel\n* nilearn\n* statsmodels\n* nipype\nIn addition, this notebook assumes that FSL is installed and that the FSLDIR environment variable is defined.",
"import numpy\nimport nibabel\nimport os\nimport nilearn.plotting\nimport matplotlib.pyplot as plt\nfrom statsmodels.regression.linear_model import OLS\nimport nipype.interfaces.fsl as fsl\nimport scipy.stats\n\nif not 'FSLDIR' in os.environ.keys():\n raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set')\n\n%matplotlib inline\n",
"Set up default parameters. We use 32 subjects, which is the median sample size of the set of fMRI studies published between 2011 and 2015 that were estimated from Neurosynth in the paper. We use a heuristic correction for multiple comparisons of p<0.001 and 10 voxels, like that show by Eklund et al. (2016, PNAS) to result in Type I error rates of 0.6-0.9.",
"pthresh=0.001 # cluster forming threshold\ncthresh=10 # cluster extent threshold\nnsubs=32 # number of subjects",
"In order to recreate the figure from the paper exactly, we need to fix the random seed so that it will generate exactly the same random data. If you wish to generate new data, then set the recreate_paper_figure variable to False and rerun the notebook.",
"recreate_paper_figure=True\nif recreate_paper_figure:\n seed=61974\nelse:\n seed=numpy.ceil(numpy.random.rand()*100000).astype('int')\n print(seed)\n\nnumpy.random.seed(seed)",
"Use the standard MNI152 2mm brain mask as the mask for the generated data",
"maskimg=os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')\nmask=nibabel.load(maskimg)\nmaskdata=mask.get_data()\nmaskvox=numpy.where(maskdata>0)\nprint('Mask includes %d voxels'%len(maskvox[0]))",
"Generate a dataset for each subject. fMRI data within the mask are generated using a Gaussian distribution (mean=1000, standard deviation=100). Behavioral data are generated using a Gaussian distribution (mean=100, standard deviation=1).",
"imgmean=1000 # mean activation within mask\nimgstd=100 # standard deviation of noise within mask\nbehavmean=100 # mean of behavioral regressor\nbehavstd=1 # standard deviation of behavioral regressor\n\ndata=numpy.zeros((maskdata.shape + (nsubs,)))\n\nfor i in range(nsubs):\n tmp=numpy.zeros(maskdata.shape)\n tmp[maskvox]=numpy.random.randn(len(maskvox[0]))*imgstd+imgmean\n data[:,:,:,i]=tmp\n\nnewimg=nibabel.Nifti1Image(data,mask.get_affine(),mask.get_header())\nnewimg.to_filename('fakedata.nii.gz')\nregressor=numpy.random.randn(nsubs,1)*behavstd+behavmean\nnumpy.savetxt('regressor.txt',regressor)",
"Spatially smooth data using a 6 mm FWHM Gaussian kernel",
"smoothing_fwhm=6 # FWHM in millimeters\n\nsmooth=fsl.IsotropicSmooth(fwhm=smoothing_fwhm,\n in_file='fakedata.nii.gz',\n out_file='fakedata_smooth.nii.gz')\nsmooth.run()",
"Use FSL's GLM tool to run a regression at each voxel",
"glm = fsl.GLM(in_file='fakedata_smooth.nii.gz', \n design='regressor.txt', \n out_t_name='regressor_tstat.nii.gz',\n demean=True)\nglm.run()",
"Use FSL's cluster tool to identify clusters of activation that exceed the specified cluster-forming threshold",
"tcut=scipy.stats.t.ppf(1-pthresh,nsubs-1)\ncl = fsl.Cluster()\ncl.inputs.threshold = tcut\ncl.inputs.in_file = 'regressor_tstat.nii.gz'\ncl.inputs.out_index_file='tstat_cluster_index.nii.gz'\nresults=cl.run()",
"Generate a plot showing the brain-behavior relation from the top cluster",
"clusterimg=nibabel.load(cl.inputs.out_index_file)\nclusterdata=clusterimg.get_data()\nindices=numpy.unique(clusterdata)\n\nclustersize=numpy.zeros(len(indices))\nclustermean=numpy.zeros((len(indices),nsubs))\nindvox={}\nfor c in range(1,len(indices)):\n indvox[c]=numpy.where(clusterdata==c) \n clustersize[c]=len(indvox[c][0])\n for i in range(nsubs):\n tmp=data[:,:,:,i]\n clustermean[c,i]=numpy.mean(tmp[indvox[c]])\ncorr=numpy.corrcoef(regressor.T,clustermean[-1])\n\nprint('Found %d clusters exceeding p<%0.3f and %d voxel extent threshold'%(c,pthresh,cthresh))\nprint('Largest cluster: correlation=%0.3f, extent = %d voxels'%(corr[0,1],len(indvox[c][0])))\n\n# set cluster to show - 0 is the largest, 1 the second largest, and so on\ncluster_to_show=0\n\n# translate this variable into the index of indvox\ncluster_to_show_idx=len(indices)-cluster_to_show-1\n\n# plot the (circular) relation between fMRI signal and \n# behavioral regressor in the chosen cluster\n\nplt.scatter(regressor.T,clustermean[cluster_to_show_idx])\nplt.title('Correlation = %0.3f'%corr[0,1],fontsize=14)\nplt.xlabel('Fake behavioral regressor',fontsize=18)\nplt.ylabel('Fake fMRI data',fontsize=18)\nm, b = numpy.polyfit(regressor[:,0], clustermean[cluster_to_show_idx], 1)\naxes = plt.gca()\nX_plot = numpy.linspace(axes.get_xlim()[0],axes.get_xlim()[1],100)\nplt.plot(X_plot, m*X_plot + b, '-')\nplt.savefig('scatter.png',dpi=600)",
"Generate a thresholded statistics image for display",
"tstat=nibabel.load('regressor_tstat.nii.gz').get_data()\nthresh_t=clusterdata.copy()\ncutoff=numpy.min(numpy.where(clustersize>cthresh))\nthresh_t[thresh_t<cutoff]=0\nthresh_t=thresh_t*tstat\nthresh_t_img=nibabel.Nifti1Image(thresh_t,mask.get_affine(),mask.get_header())",
"Generate a figure showing the location of the selected activation focus.",
"mid=len(indvox[cluster_to_show_idx][0])/2\ncoords=numpy.array([indvox[cluster_to_show_idx][0][mid],\n indvox[cluster_to_show_idx][1][mid],\n indvox[cluster_to_show_idx][2][mid],1]).T\nmni=mask.get_qform().dot(coords)\nnilearn.plotting.plot_stat_map(thresh_t_img,\n os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain.nii.gz'),\n threshold=cl.inputs.threshold,\n cut_coords=mni[:3])\nplt.savefig('slices.png',dpi=600)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mikekestemont/wuerzb15
|
Chapter 5 - Vectorization.ipynb
|
mit
|
[
"Vectorization\nVectorization is a crucial and basic step in stylometric research: it refers to the process of turning text into numbers. More precisely, it refers to the creation of the two-dimensional X matrix, which we have been carelessly importing so far: in this matrix, the rows represent documents and the columns represent stylometric features, such as word frequencies. Vectorization is therefore closely related to feature extraction, or determining which stylistic properties should be extracted from documents to arrive at a reliable corpus representation, which is useful for stylometric research. While feature extraction has been a popular research topic in, for instance, authorship studies, there exist few reliable practical introductions to the topic. This is a shame: vectorization is a foundational preprocessing step in stylometry and it has a huge impact on all subsequent analytical steps. It is a pity that most papers are very explicit about the preprocessing steps taken, so that many practical questions remain unanswered:\n- Was punctuation removed?\n- Were texts lowercased?\n- What about character n-grams across word boundaries?\n- Were pronouns deleted before or after calculating relative frequencies?\n- Was culling performed before or after segmenting texts into samples?\n- etc.\nThis chapter is therefore entirely devoted to this important topic - partially to help raise awareness. The course repository comes with a module called vectorization.py, which contains a Vectorizer object which has been developed in the context of the work on pystyl. If you import this class, you can check out the documentation:",
"from vectorization import Vectorizer\nhelp(Vectorizer)",
"As you can see, this vectorizer offers easy access to a variety of vectorization strategies. All this code is based on sklearn library, but seamlessly wraps around the different modules which are needed. Importantly, the Vectorizer offers access to a number of vectorization pipelines that are common in stylometry, but much less in other fields of Machine Learning. Let us load a larger corpus this time and test the vectorizer:",
"import glob\nimport os\n\nauthors, titles, texts = [], [], []\nfor filename in glob.glob('data/victorian_large/*.txt'):\n with open(filename, 'r') as f:\n text = f.read()\n author, title = os.path.basename(filename).replace('.txt', '').split('_')\n authors.append(author)\n titles.append(title)\n texts.append(text)",
"As you can see we loop over the txt-files under the data/victorian_large directory and end up with three lists (authors, titles, and the actual texts) which can easily zipped together:",
"for t, a in zip(titles, authors):\n print(t, 'by', a)",
"Let us start with some basic preprocessing. The function preprocess() below lowercases each text and only retains alphabetic characters (and whitespace). Additionally, to speed things up a bit, we truncate each document after the 200K first characters:",
"def preprocess(text, max_len=200000):\n return ''.join([c for c in text.lower()\n if c.isalpha() or c.isspace()])[:max_len]",
"Let us apply this new function:",
"for i in range(len(texts)):\n texts[i] = preprocess(texts[i])",
"We can now instantiate our vectorizer with some traditional settings; we will extract the 100 most frequent words and scale them using the per-column standard deviation. We limit this vectorizer to word unigrams (ngram_size=1) and we specify the 'culling' rule (cf. min_df) that words should be present in at least 70% of all texts:",
"vectorizer = Vectorizer(mfi=100,\n vector_space='tf_std',\n ngram_type='words',\n ngram_size=1,\n min_df=0.7)",
"We can now use this object to vectorize our lists of documents:",
"X = vectorizer.vectorize(texts)\nprint(X.shape)",
"As requested, we indeed seem to have obtained a two-dimensional matrix, with for each text 100 feature columns. To find out to which words these columns correspond, we can access the vectorizer's feature_names attribute:",
"print(vectorizer.feature_names)",
"Having such a module is great, but it also hides a lot of the interfacing which is needed with sklearn. In the next paragraphs, we will have a lot at the preprocessing functionality which sklearn offers to deal with text.\nInteger frequencies\nCalculating absolute frequencies of, for instance, words in texts is something that is rarely done in stylometry. Because we often work with texts of unequal length, it is typically safer to back off to relative frequencies. Nevertheless, it is good to know that sklearn's supports the extraction of absolute counts with its CountVectorizer:",
"from sklearn.feature_extraction.text import CountVectorizer\nvec = CountVectorizer(max_features=100)\nX = vec.fit_transform(texts)\nprint(X.shape)",
"Here, we immediately use the important max_features parameter, which controls how many of the most frequent words will be returned (cf. the 2nd dimension of the vectorized matrix). Notice that all the vectorization methods discussed here. are implemented as unsupervised methods in sklearn, so that they all have fit() and transform() methods. One warning is important here: if we check the data type of the matrix being returned, we see that this is not a simple np.array:",
"type(X)",
"We see that sklearn by default returns a so-called sparse matrix, which only explicitly stores non-zero values. While this very efficient for larger datasets, there are many methods which cannot deal with such sparse matrices - such as sklearn's PCAobject, to give but one example. If memory is not that much of an issue, it is always safer to convert the sparse matrix back to a 'dense' array:",
"X = X.toarray()\ntype(X)",
"Finally, to access the names of the features which correspond to our columns in X, we can access the following function:",
"print(vec.get_feature_names())",
"Of course, with max_features set to 100, this list is dominated by function words, which are typically the most frequent items in a corpus of texts. Note that extracting binary features - which simply records the absence or presence of items in texts - is also supported, although this is of course even less common in current stylometry:",
"vec = CountVectorizer(max_features=50, binary=True)\nX = vec.fit_transform(texts).toarray()\nprint(X)",
"When using only 50 top-frequent features, it is logical that these are present in most of our texts.\nReal-valued Frequencies\nWorking with relative frequencies is much more common in stylometry. Although this is somewhat counter-intuitive, sklearn does not have a dedicated function for this vectorization strategy. Rather, one must work with the TfidfVectorizer, which can be imported as follows:",
"from sklearn.feature_extraction.text import TfidfVectorizer",
"Tfidf stands for term-frequency/inverse-document-frequency. This particular vectorization method is one of the golden oldies in Information Retrieval: it gives more importance to rare words in texts, by dividing the relative frequency of a 'term' (i.e. 'word') in a document by the inverse of the document. Thus, the rarer a word, the more its importance will be boosted in this model. Note how this model in fact captures the inverse intuition of Burrows's Delta, which gives more weight to highly common words. Tfidf is not very common in stylometry or authorship attribution in particular, although one could easily argue that it is not necessarily useless: if a rare word occurs in two anonymous texts, this does seem to increase the likelihood that both documents were authored by the same individual. In many ways, the TfidfVectorizer can be parametrized in the same way as the CountVectorizer, the main exception being that it will eventually yield a matrix of real number, instead of integers:",
"vec = TfidfVectorizer(max_features=10)\nX = vec.fit_transform(texts).toarray()\nprint(vec.get_feature_names())\nprint(X)",
"To create a vector space that simple has relative frequencies (which have not been normalized using IDF's), we can simple add the following parameter:",
"vec = TfidfVectorizer(max_features=10,\n use_idf=False)\nX = vec.fit_transform(texts).toarray()\nprint(vec.get_feature_names())",
"Of course, the list of features extracted is not altered by changing this argument, but they values will have changed.\nFeature types\nSo far, we have only considered word frequencies as stylometric style markers - where we naively define a word as a space-free string of alphabetic characters. Implicitly, we have been setting the analyzer argument to 'word':",
"vec = TfidfVectorizer(max_features=10,\n analyzer='word')\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())",
"It becomes clear, therefore, that sklearn is performing some sort of tokenization internally. Inconveniently, it also removes certain words: can you find out which?\nTo override this default behaviour, we need a little hack. One common solution is to create our own analyzer (i.e. tokenizer) function and pass that to our vectorizer:",
"def identity(x):\n return x.split()\n\nvec = TfidfVectorizer(max_features=10,\n analyzer=identity,\n use_idf=False)\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())",
"Does this solve our issue?\nAdditionally, sklearn supports the extraction of character n-grams, which are also a common feature type in stylometry. Interestingly, it allows us to specify an ngram_range: can you figure out what it achieves? (Executing the block below might take a while...)",
"vec = TfidfVectorizer(max_features=10,\n analyzer='char',\n ngram_range=(2, 2))\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())\n\nvec = TfidfVectorizer(max_features=30,\n analyzer='char',\n ngram_range=(2, 3))\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())",
"Here, we have to watch out of course, because specifying such ranges will interfere with the max_features parameter. Because bigrams are much more frequent than tetragrams, for instance, the tetragrams might never make it to to frequency table, if the max_features paramater isn't high enough! Naturally we could gain more control over this extraction process, by running two independent vectorizers, and stacking their respective outcomes:",
"vec = TfidfVectorizer(max_features=50,\n analyzer='char',\n ngram_range=(2, 2))\nX1 = vec.fit_transform(texts).toarray()\n\nvec = TfidfVectorizer(max_features=100,\n analyzer='char',\n ngram_range=(3, 3))\nX2 = vec.fit_transform(texts).toarray()\n\nimport numpy as np\nprint(X1.shape)\nprint(X2.shape)\nX = np.hstack((X1, X2))\nprint(X.shape)",
"Here, we finally obtain a matrix with all features.\nControlling the vocabulary\nIn this final section, it is worth discussing another set of parameters in the signatures of the sklearn vectorizers, that are especially useful for stylometric research. Culling is a good issue to start with. Although 'culling' is used in a number of different meanings, it typically means that we remove words which aren't well distributed enough over the texts in the corpus. If a specific word - e.g. a character's name - is extremely frequent in only one text, it might end in our list of most frequent features, even though it doesn't scale well to other texts. Using 'culling' we specify the minimum proportion of documents in which a feature should occur, before it is allowed inside the vectorizer's vocabulary. In the sklearn vectorizers, this culling property can be set using the min_df argument. Here we see of the 1000 columns we requested, only 615 remain because of the culling:",
"vec = TfidfVectorizer(max_features=1000, min_df=.95)\nX = vec.fit_transform(texts)\nprint(X.shape[1])",
"Likewise, it is also possible to specify a max_df, or the proportion of documents in which an item should occur. This setting might be useful if you wish to remove the focus in your experiments on function words only, and also take into consideration some items from lower frequency strata.",
"vec = TfidfVectorizer(max_features=100, max_df=.40)\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())",
"As you can see, the max_df takes us away from the high-frequence function words, with a lot of proper nouns coming through. By the way: make sure that you specify min_df and max_df as floats: it you specify them as integers, sklearn will interpret these number as the minimum or maximum number of individual documents in which a term should occur.\nFinally, it is good to know that we can also manually specify vocabularies, through the vocabulary argument. This way, we can exercise a much tighther control over which words go into a procedure - and manually remove words from a previous analysis, if necessary.",
"vec = TfidfVectorizer(vocabulary=('my', 'i', 'we'))\nX = vec.fit_transform(texts)\nprint(vec.get_feature_names())",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
stable/_downloads/2455121b46e43615a45b660a36d0ad93/30_epochs_metadata.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Working with Epoch metadata\nThis tutorial shows how to add metadata to ~mne.Epochs objects, and\nhow to use Pandas query strings <pandas:indexing.query> to select and\nplot epochs based on metadata properties.\nFor this tutorial we'll use a different dataset than usual: the\nkiloword-dataset, which contains EEG data averaged across 75 subjects\nwho were performing a lexical decision (word/non-word) task. The data is in\n~mne.Epochs format, with each epoch representing the response to a\ndifferent stimulus (word). As usual we'll start by importing the modules we\nneed and loading the data:",
"import os\nimport numpy as np\nimport pandas as pd\nimport mne\n\nkiloword_data_folder = mne.datasets.kiloword.data_path()\nkiloword_data_file = os.path.join(kiloword_data_folder,\n 'kword_metadata-epo.fif')\nepochs = mne.read_epochs(kiloword_data_file)",
"Viewing Epochs metadata\n.. sidebar:: Restrictions on metadata DataFrames\nMetadata dataframes are less flexible than typical\n :class:Pandas DataFrames <pandas.DataFrame>. For example, the allowed\n data types are restricted to strings, floats, integers, or booleans;\n and the row labels are always integers corresponding to epoch numbers.\n Other capabilities of :class:DataFrames <pandas.DataFrame> such as\n :class:hierarchical indexing <pandas.MultiIndex> are possible while the\n ~mne.Epochs object is in memory, but will not survive saving and\n reloading the ~mne.Epochs object to/from disk.\nThe metadata attached to ~mne.Epochs objects is stored as a\n:class:pandas.DataFrame containing one row for each epoch. The columns of\nthis :class:~pandas.DataFrame can contain just about any information you\nwant to store about each epoch; in this case, the metadata encodes\ninformation about the stimulus seen on each trial, including properties of\nthe visual word form itself (e.g., NumberOfLetters, VisualComplexity)\nas well as properties of what the word means (e.g., its Concreteness) and\nits prominence in the English lexicon (e.g., WordFrequency). Here are all\nthe variables; note that in a Jupyter notebook, viewing a\n:class:pandas.DataFrame gets rendered as an HTML table instead of the\nnormal Python output block:",
"epochs.metadata",
"Viewing the metadata values for a given epoch and metadata variable is done\nusing any of the Pandas indexing <pandas:/reference/indexing.rst>\nmethods such as :obj:~pandas.DataFrame.loc,\n:obj:~pandas.DataFrame.iloc, :obj:~pandas.DataFrame.at,\nand :obj:~pandas.DataFrame.iat. Because the\nindex of the dataframe is the integer epoch number, the name- and index-based\nselection methods will work similarly for selecting rows, except that\nname-based selection (with :obj:~pandas.DataFrame.loc) is inclusive of the\nendpoint:",
"print('Name-based selection with .loc')\nprint(epochs.metadata.loc[2:4])\n\nprint('\\nIndex-based selection with .iloc')\nprint(epochs.metadata.iloc[2:4])",
"Modifying the metadata\nLike any :class:pandas.DataFrame, you can modify the data or add columns as\nneeded. Here we convert the NumberOfLetters column from :class:float to\n:class:integer <int> data type, and add a :class:boolean <bool> column\nthat arbitrarily divides the variable VisualComplexity into high and low\ngroups.",
"epochs.metadata['NumberOfLetters'] = \\\n epochs.metadata['NumberOfLetters'].map(int)\n\nepochs.metadata['HighComplexity'] = epochs.metadata['VisualComplexity'] > 65\nepochs.metadata.head()",
"Selecting epochs using metadata queries\nAll ~mne.Epochs objects can be subselected by event name, index, or\n:term:slice (see tut-section-subselect-epochs). But\n~mne.Epochs objects with metadata can also be queried using\nPandas query strings <pandas:indexing.query> by passing the query\nstring just as you would normally pass an event name. For example:",
"print(epochs['WORD.str.startswith(\"dis\")'])",
"This capability uses the :meth:pandas.DataFrame.query method under the\nhood, so you can check out the documentation of that method to learn how to\nformat query strings. Here's another example:",
"print(epochs['Concreteness > 6 and WordFrequency < 1'])",
"Note also that traditional epochs subselection by condition name still works;\nMNE-Python will try the traditional method first before falling back on rich\nmetadata querying.",
"epochs['solenoid'].plot_psd()",
"One use of the Pandas query string approach is to select specific words for\nplotting:",
"words = ['typhoon', 'bungalow', 'colossus', 'drudgery', 'linguist', 'solenoid']\nepochs['WORD in {}'.format(words)].plot(n_channels=29)",
"Notice that in this dataset, each \"condition\" (A.K.A., each word) occurs only\nonce, whereas with the sample-dataset dataset each condition (e.g.,\n\"auditory/left\", \"visual/right\", etc) occurred dozens of times. This makes\nthe Pandas querying methods especially useful when you want to aggregate\nepochs that have different condition names but that share similar stimulus\nproperties. For example, here we group epochs based on the number of letters\nin the stimulus word, and compare the average signal at electrode Pz for\neach group:",
"evokeds = dict()\nquery = 'NumberOfLetters == {}'\nfor n_letters in epochs.metadata['NumberOfLetters'].unique():\n evokeds[str(n_letters)] = epochs[query.format(n_letters)].average()\n\nmne.viz.plot_compare_evokeds(evokeds, cmap=('word length', 'viridis'),\n picks='Pz')",
"Metadata can also be useful for sorting the epochs in an image plot. For\nexample, here we order the epochs based on word frequency to see if there's a\npattern to the latency or intensity of the response:",
"sort_order = np.argsort(epochs.metadata['WordFrequency'])\nepochs.plot_image(order=sort_order, picks='Pz')",
"Although there's no obvious relationship in this case, such analyses may be\nuseful for metadata variables that more directly index the time course of\nstimulus processing (such as reaction time).\nAdding metadata to an Epochs object\nYou can add a metadata :class:~pandas.DataFrame to any\n~mne.Epochs object (or replace existing metadata) simply by\nassigning to the :attr:~mne.Epochs.metadata attribute:",
"new_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'],\n index=range(len(epochs)))\nepochs.metadata = new_metadata\nepochs.metadata.head()",
"You can remove metadata from an ~mne.Epochs object by setting its\nmetadata to None:",
"epochs.metadata = None"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/adanet
|
adanet/examples/tutorials/adanet_tpu.ipynb
|
apache-2.0
|
[
"Copyright 2018 The AdaNet Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"AdaNet on TPU\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nAdaNet supports training on Google's custom machine learning accelerators known\nas Tensor Processing Units (TPU). Conveniently, we provide adanet.TPUEstimator\nwhich handles TPU support behind the scenes. There are only a few minor changes\nneeded to switch from adanet.Estimator to adanet.TPUEstimator. We highlight\nthe necessary changes in this tutorial.\nIf the reader is not familiar with AdaNet, it is recommended they take a look\nat\nThe AdaNet Objective\nand in particular\nCustomizing AdaNet\nas this tutorial builds upon the latter.\nNOTE: you must provide a valid GCS bucket to use TPUEstimator.\nTo begin, we import the necessary packages, obtain the Colab's TPU master\naddress, and give the TPU permissions to write to our GCS Bucket. Follow the\ninstructions\nhere\nto connect to a Colab TPU runtime.",
"#@test {\"skip\": true}\n# If you're running this in Colab, first install the adanet package:\n!pip install adanet\n\nimport functools\nimport json\nimport os\nimport time\n\nimport adanet\nfrom google.colab import auth\nimport tensorflow.compat.v1 as tf\n\nBUCKET = '' #@param {type: 'string'}\nMODEL_DIR = 'gs://{}/{}'.format(\n BUCKET, time.strftime('adanet-tpu-estimator/%Y-%m-%d-%H-%M-%S'))\n\nMASTER = ''\nif 'COLAB_TPU_ADDR' in os.environ:\n auth.authenticate_user()\n\n MASTER = 'grpc://' + os.environ['COLAB_TPU_ADDR']\n\n # Authenticate TPU to use GCS Bucket.\n with tf.Session(MASTER) as sess:\n with open('/content/adc.json', 'r') as file_:\n auth_info = json.load(file_)\n tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)\n\n\n# The random seed to use.\nRANDOM_SEED = 42",
"Fashion MNIST\nWe focus again on the Fashion MNIST dataset and download the data via Keras.",
"(x_train, y_train), (x_test, y_test) = (\n tf.keras.datasets.fashion_mnist.load_data())",
"input_fn Changes\nThere are two minor changes we must make to input_fn to support running on\nTPU:\n\n\nTPUs dynamically shard the input data depending on the number of cores used.\n Because of this, we augment input_fn to take a dictionary params\n argument. When running on TPU, params contains a batch_size field with\n the appropriate batch size.\n\n\nOnce the input is batched, we drop the last batch if it is smaller than\n batch_size. This can simply be done by specifying drop_remainder=True to\n the\n tf.data.Dataset.batch()\n function. It is important to specify this option since TPUs do not support\n dynamic shapes. Note that we only drop the remainder batch during training\n since evaluation is still done on the CPU.",
"FEATURES_KEY = \"images\"\n\n\ndef generator(images, labels):\n \"\"\"Returns a generator that returns image-label pairs.\"\"\"\n\n def _gen():\n for image, label in zip(images, labels):\n yield image, label\n\n return _gen\n\n\ndef preprocess_image(image, label):\n \"\"\"Preprocesses an image for an `Estimator`.\"\"\"\n image = image / 255.\n image = tf.reshape(image, [28, 28, 1])\n features = {FEATURES_KEY: image}\n return features, label\n\n\ndef input_fn(partition, training, batch_size):\n \"\"\"Generate an input_fn for the Estimator.\"\"\"\n\n def _input_fn(params): # TPU: specify `params` argument.\n\n # TPU: get the TPU set `batch_size`, if available.\n batch_size_ = params.get(\"batch_size\", batch_size)\n\n if partition == \"train\":\n dataset = tf.data.Dataset.from_generator(\n generator(x_train, y_train), (tf.float32, tf.int32), ((28, 28), ()))\n elif partition == \"predict\":\n dataset = tf.data.Dataset.from_generator(\n generator(x_test[:10], y_test[:10]), (tf.float32, tf.int32),\n ((28, 28), ()))\n else:\n dataset = tf.data.Dataset.from_generator(\n generator(x_test, y_test), (tf.float32, tf.int32), ((28, 28), ()))\n\n if training:\n dataset = dataset.shuffle(10 * batch_size_, seed=RANDOM_SEED).repeat()\n\n # TPU: drop the remainder batch when training on TPU.\n dataset = dataset.map(preprocess_image).batch(\n batch_size_, drop_remainder=training)\n iterator = dataset.make_one_shot_iterator()\n features, labels = iterator.get_next()\n return features, labels\n\n return _input_fn",
"model_fn Changes\nWe use a similar CNN architecture as used in the\nCustomizing AdaNet\ntutorial. The only TPU specific change we need to make is to wrap the optimizer\nin a\ntf.contrib.tpu.CrossShardOptimizer.",
"#@title Define the Builder and Generator\nclass SimpleCNNBuilder(adanet.subnetwork.Builder):\n \"\"\"Builds a CNN subnetwork for AdaNet.\"\"\"\n\n def __init__(self, learning_rate, max_iteration_steps, seed):\n \"\"\"Initializes a `SimpleCNNBuilder`.\n\n Args:\n learning_rate: The float learning rate to use.\n max_iteration_steps: The number of steps per iteration.\n seed: The random seed.\n\n Returns:\n An instance of `SimpleCNNBuilder`.\n \"\"\"\n self._learning_rate = learning_rate\n self._max_iteration_steps = max_iteration_steps\n self._seed = seed\n\n def build_subnetwork(self,\n features,\n logits_dimension,\n training,\n iteration_step,\n summary,\n previous_ensemble=None):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n images = list(features.values())[0]\n kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed)\n x = tf.keras.layers.Conv2D(\n filters=16,\n kernel_size=3,\n padding=\"same\",\n activation=\"relu\",\n kernel_initializer=kernel_initializer)(\n images)\n x = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(x)\n x = tf.keras.layers.Flatten()(x)\n x = tf.keras.layers.Dense(\n units=64, activation=\"relu\", kernel_initializer=kernel_initializer)(\n x)\n\n logits = tf.keras.layers.Dense(\n units=10, activation=None, kernel_initializer=kernel_initializer)(\n x)\n\n complexity = tf.constant(1)\n\n return adanet.Subnetwork(\n last_layer=x,\n logits=logits,\n complexity=complexity,\n persisted_tensors={})\n\n def build_subnetwork_train_op(self,\n subnetwork,\n loss,\n var_list,\n labels,\n iteration_step,\n summary,\n previous_ensemble=None):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n\n learning_rate = tf.train.cosine_decay(\n learning_rate=self._learning_rate,\n global_step=iteration_step,\n decay_steps=self._max_iteration_steps)\n optimizer = tf.train.MomentumOptimizer(learning_rate, .9)\n # TPU: wrap the optimizer in a CrossShardOptimizer.\n optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)\n return optimizer.minimize(loss=loss, var_list=var_list)\n\n def build_mixture_weights_train_op(self, loss, var_list, logits, labels,\n iteration_step, summary):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n return tf.no_op(\"mixture_weights_train_op\")\n\n @property\n def name(self):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n return \"simple_cnn\"\n\n\nclass SimpleCNNGenerator(adanet.subnetwork.Generator):\n \"\"\"Generates a `SimpleCNN` at each iteration.\"\"\"\n\n def __init__(self, learning_rate, max_iteration_steps, seed=None):\n \"\"\"Initializes a `Generator` that builds `SimpleCNNs`.\n\n Args:\n learning_rate: The float learning rate to use.\n max_iteration_steps: The number of steps per iteration.\n seed: The random seed.\n\n Returns:\n An instance of `Generator`.\n \"\"\"\n self._seed = seed\n self._dnn_builder_fn = functools.partial(\n SimpleCNNBuilder,\n learning_rate=learning_rate,\n max_iteration_steps=max_iteration_steps)\n\n def generate_candidates(self, previous_ensemble, iteration_number,\n previous_ensemble_reports, all_reports):\n \"\"\"See `adanet.subnetwork.Generator`.\"\"\"\n seed = self._seed\n # Change the seed according to the iteration so that each subnetwork\n # learns something different.\n if seed is not None:\n seed += iteration_number\n return [self._dnn_builder_fn(seed=seed)]",
"Launch TensorBoard\nLet's run TensorBoard to visualize model training over time. We'll use ngrok to tunnel traffic to localhost.\nThe instructions for setting up TensorBoard were obtained from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/\nRun the next cells and follow the link to see the TensorBoard in a new tab.",
"#@test {\"skip\": true}\n\nget_ipython().system_raw(\n 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'\n .format(MODEL_DIR)\n)\n\n# Install ngrok binary.\n! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n! unzip ngrok-stable-linux-amd64.zip\n\nprint(\"Follow this link to open TensorBoard in a new tab.\")\nget_ipython().system_raw('./ngrok http 6006 &')\n! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])\"\n\n",
"Using adanet.TPUEstimator to Train and Evaluate\nFinally, we switch from adanet.Estimator to adanet.TPUEstimator. There are\ntwo last changes needed:\n\nUpdate the RunConfig to be a\n tf.contrib.tpu.RunConfig.\n We supply the TPU master address and set iterations_per_loop=200. This\n choice is fairly arbitrary in our case. A good practice is to set it to the\n number of steps in between summary writes and metric evals.\nFinally, we specify the use_tpu and batch_size parameters of\n adanet.TPUEstimator.\n\nThere is an important thing to note about the batch_size: each TPU chip\nconsists of 2 cores with 4 shards each. In the\nCustomizing AdaNet\ntutorial, a batch_size of 64 was used. To be consistent we use the same\nbatch_size per shard and drop the number of training steps accordingly. In\nother words, since we're running on one TPU we set batch_size=64*8=512 and\ntrain_steps=1000. In the ideal case, since we drop the train_steps by 5x,\nthis means we're training 5x faster!",
"#@title AdaNet Parameters\nLEARNING_RATE = 0.25 #@param {type:\"number\"}\nTRAIN_STEPS = 1000 #@param {type:\"integer\"}\nBATCH_SIZE = 512 #@param {type:\"integer\"}\nADANET_ITERATIONS = 2 #@param {type:\"integer\"}\n\n# TPU: switch `tf.estimator.RunConfig` to `tf.contrib.tpu.RunConfig`.\n# The main required changes are specifying `tpu_config` and `master`.\nconfig = tf.contrib.tpu.RunConfig(\n tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=200),\n master=MASTER,\n save_checkpoints_steps=200,\n save_summary_steps=200,\n tf_random_seed=RANDOM_SEED)\n\nhead = tf.contrib.estimator.multi_class_head(\n n_classes=10, loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE)\nmax_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS\n# TPU: switch `adanet.Estimator` to `adanet.TPUEstimator`.\ntry:\n estimator = adanet.TPUEstimator(\n head=head,\n subnetwork_generator=SimpleCNNGenerator(\n learning_rate=LEARNING_RATE,\n max_iteration_steps=max_iteration_steps,\n seed=RANDOM_SEED),\n max_iteration_steps=max_iteration_steps,\n evaluator=adanet.Evaluator(\n input_fn=input_fn(\"train\", training=False, batch_size=BATCH_SIZE),\n steps=None),\n adanet_loss_decay=.99,\n config=config,\n model_dir=MODEL_DIR,\n # TPU: specify `use_tpu` and the batch_size parameters.\n use_tpu=True,\n # We evaluate on CPU since train_and_evaluate() will shut the TPU down\n # after evaluating the first time. However, AdaNet fully supports\n # evaluating on TPU.\n eval_on_tpu=False,\n train_batch_size=BATCH_SIZE,\n eval_batch_size=32)\nexcept tf.errors.InvalidArgumentError as e:\n raise Exception(\n \"Invalid GCS Bucket: you must provide a valid GCS bucket in the \"\n \"`BUCKET` form field of the first cell.\") from e\n\nresults, _ = tf.estimator.train_and_evaluate(\n estimator,\n train_spec=tf.estimator.TrainSpec(\n input_fn=input_fn(\"train\", training=True, batch_size=BATCH_SIZE),\n max_steps=TRAIN_STEPS),\n eval_spec=tf.estimator.EvalSpec(\n input_fn=input_fn(\"test\", training=False, batch_size=BATCH_SIZE),\n steps=None,\n start_delay_secs=1,\n throttle_secs=1,\n ))\n\nprint(\"Accuracy:\", results[\"accuracy\"])\nprint(\"Loss:\", results[\"average_loss\"])",
"Conclusion\nThat was easy! With very few changes we were able to transform our original\nestimator into one which can harness the power of TPUs."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ResearchComputing/xsede_2015
|
pyspark/02_parpivot.ipynb
|
mit
|
[
"Example 2: A fast parallel pivot, or preparing for time series analysis",
"from pyspark import SparkConf, SparkContext\nfrom collections import OrderedDict\nfrom functools import partial\n\npartitions = 64\nparcsv = sc.textFile(\"/user/milroy/lustre_timeseries.csv\", partitions)\nparcsv.take(5)",
"Each of these lines contains 6 semi-colon delimited columns: hostname, metric name, value reported, type, units, and Unix epoch time. Can we assume all do? The example data is an excerpt of one day of Lustre data, but we have hundreds of full days which may contain dropped writes and malformed data. I'll apply a filter to the data to select all lines with six columns.\nSometimes it isn't evident whether filters are needed until a succeeding RDD action fails.",
"filtered = parcsv.filter(lambda line: len(line.split(';')) == 6)",
"As seen above, the lines are Unicode, but in anticipation of necessary transformations the timestamp and values will need to be cast to appropriate types. We'll need to create a function that takes each line as an argument and returns a 4-tuple (quadruple?), organized to facilitate intuitive indexing. Let's pick the following ordering: (timestamp, host, metric, value). We don't need the other values, so they are discarded.\nSince the values in the third column are currently Unicode, a try-except structure is used to attempt to cast them to floats. If unsuccessful we set them to zero rather than NaN, since NaNs don't work with the forthcoming eigendecomposition.\nAn alternative to the try-except would be to apply a filter for lines whose third column can't be cast as a float. I haven't compared the performance between these two.",
"def cast(line):\n try:\n val = float(str(line.split(';')[2]))\n except:\n val = 0.0\n return (int(line.split(';')[5]), line.split(';')[0], \n line.split(';')[1], val)\n\nparsed = filtered.map(cast)",
"Metrics aren't reported continuously, nor are the monitoring systems flawless. We need to assemble a unique set (dictionary) of metrics for the pivot, but they must be ordered to make sure the covariance structure (for PCA) isn't distorted. \nPySpark's \".distinct()\" method accomplishes this; we issue a \".collect()\" as well to assign the RDD's values to a variable.",
"columns = parsed.map(lambda x: x[2]).distinct().collect()\nbasedict = dict((metric, 0.0) for metric in columns)",
"Now we create an ordered dictionary to preserve the metric (and consequently, column) ordering. If we did not create this OrderedDict, the keys' ordering may be permuted. This will render the eigendecomposition of the covariance matrix meaningless.\nThe object is broadcast to all executors to be used in a future mapped function.",
"ordered = sc.broadcast(OrderedDict(sorted(basedict.items(), key=lambda y: y[0])))",
"The two functions below are adapted from user patricksurry's answer to this Stack Overflow question: http://stackoverflow.com/questions/30260015/reshaping-pivoting-data-in-spark-rdd-and-or-spark-dataframes. Beware, patricksurry's answer is predominantly serial!",
"def combine(u1, u2):\n u1.update(u2)\n return u1\n\ndef sequential(u, v):\n if not u:\n u = {}\n u[v[2]] = v[3]\n return u",
"We need to perform an aggregation by key. This operation takes two functions as arguments: the sequential and combination functions. The sequential op constructs a dictionary from (metric, value) in each row, and the combine op combines row dictionaries based on identical (timestamp, host) keys.\n<img src=\"aggregateByKey.png\">",
"aggregated = parsed.keyBy(lambda row: (row[0], row[1])).aggregateByKey(\n None, sequential, combine)",
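To make the two-function contract concrete, here is a hypothetical pure-Python emulation of the sequential step on a tiny in-memory sample (not Spark code; it covers only a single partition, so the combine step that merges per-partition dictionaries is not exercised):

```python
def aggregate_by_key(rows):
    """Fold (timestamp, host, metric, value) rows into
    {(timestamp, host): {metric: value}} dictionaries."""
    acc = {}
    for row in rows:
        key = (row[0], row[1])
        u = acc.get(key)   # per-key accumulator, None on first hit
        if not u:          # same logic as `sequential` above
            u = {}
        u[row[2]] = row[3]
        acc[key] = u
    return acc

sample = [
    (1000, "host1", "read_bytes", 5.0),
    (1000, "host1", "write_bytes", 2.0),
    (1000, "host2", "read_bytes", 7.0),
]
print(aggregate_by_key(sample))
# {(1000, 'host1'): {'read_bytes': 5.0, 'write_bytes': 2.0},
#  (1000, 'host2'): {'read_bytes': 7.0}}
```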
"Now we need to impose the structure of our OrderedDict on each aggregated key, value pair. We create a new function to copy our canonical dictionary (of ordered keys, and 0.0 values) and update it with the dictionaries created in the aggregateByKey step.",
"def mergedicts(new):\n tmp = ordered.value.copy()\n tmp.update(new[1])\n return new[0], tmp\n\npivoted = aggregated.map(mergedicts)",
"Let's take a look at the results.",
"final_ordered = pivoted.takeOrdered(10, key=lambda x: x[0])\n\nfinal_ordered[0][0]",
"To sort the entire RDD, we use a sortByKey.",
"final_sorted = pivoted.sortByKey(keyfunc= lambda k: k[0])\n\nfinal_dict = final_sorted.map(lambda row: row[1].values())",
"Writing the lists to disk takes quite a long time. This is not optimized for Hadoop and does not write in parallel. An exercise for the reader!",
"final_dict.coalesce(2).saveAsTextFile(\"/home/milroy/pyspark/processed.txt\")",
"Now on to Scala Spark for time series PCA\nNow exit the pyspark shell, and run spark-shell with the following options.",
"spark-shell --master $MASTER --driver-memory 12g\n\nimport org.apache.spark.mllib.linalg.Matrix\nimport org.apache.spark.mllib.linalg.distributed.RowMatrix\nimport org.apache.spark.mllib.linalg.{Vector, Vectors}\n\nval datafilePattern = \"/user/milroy/pivoted.txt\"\nval lustreData = sc.textFile(datafilePattern).cache()\n\nval vecData = lustreData.map(line => line.split(\",\").map(\n line => line.drop(1).dropRight(1)).map(\n v => v.toDouble)).map(arr => Vectors.dense(arr))\nval rmat: RowMatrix = new RowMatrix(vecData)\nval pc: Matrix = rmat.computePrincipalComponents(15)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mrcinv/GSEA.py
|
Leukemia.ipynb
|
mit
|
[
"GSEA analysis on leukemia dataset",
"%load_ext autoreload\n%autoreload 2\n\nfrom gsea import *\nimport numpy as np\n%pylab\n\n%matplotlib inline",
"Load data",
"genes, D, C = read_expression_file(\"data/leukemia.txt\")\ngene_sets, gene_set_names = read_genesets_file(\"data/pathways.txt\", genes)\ngene_set_hash = {}\nfor i in range(len(gene_sets)):\n gene_set_hash[gene_set_names[i][0]] = {'indexes':gene_sets[i],'desc':gene_set_names[i][1]}\n\n# verify that the dimensions make sense\nlen(genes),D.shape,len(C)",
"Enrichment score calculations\nWe graphically present the calculation of ES.",
"L,r = rank_genes(D,C)",
"See if the first genes in L are indeed correlated with C",
"scatter(D[L[1],:],C)\n\nscatter(D[L[-1],:],C)\n\nscatter(D[L[1000],:],C)",
"Graphical illustration of ES calculations",
"p_exp = 1\ndef plot_es_calculations(name, L, r):\n S = gene_set_hash[name]['indexes']\n N = len(L)\n S_mask = np.zeros(N)\n S_mask[S] = 1\n # reorder gene set mask\n S_mask = S_mask[L]\n N_R = sum(abs(r*S_mask)**p_exp)\n P_hit = np.cumsum(abs(r*S_mask)**p_exp)/N_R if N_R!=0 else np.zeros_like(S_mask)\n N_H = len(S)\n P_mis = np.cumsum((1-S_mask))/(N-N_H) if N!=N_H else np.zeros_like(S_mask)\n idx = np.argmax(abs(P_hit - P_mis))\n print(\"ES =\", P_hit[idx]-P_mis[idx])\n f, axarr = plt.subplots(3, sharex=True)\n axarr[0].plot(S_mask)\n axarr[0].set_title('gene set %s' % name)\n axarr[1].plot(r)\n axarr[1].set_title('correlation with phenotype')\n axarr[2].plot(P_hit-P_mis)\n axarr[2].set_title('random walk')\n\nL,r = rank_genes(D,C)\nplot_es_calculations('CBF_LEUKEMIA_DOWNING_AML', L, r)",
"Random phenotype labels\nNow let's assign phenotype labels randomly. Is the ES much different?",
"N, k = D.shape\npi = np.array([np.random.randint(0,2) for i in range(k)])\nL, r = rank_genes(D,pi)\nprint(pi)\nplot_es_calculations('CBF_LEUKEMIA_DOWNING_AML', L, r)",
"GSEA analysis",
"# use `n_jobs=-1` to use all cores\n%time order, NES, p_values = gsea(D, C, gene_sets, n_jobs=-1)\n\nfrom IPython.display import display, Markdown\ns = \"| geneset | NES | p-value | number of genes in geneset |\\n |-------|---|---|---|\\n \"\nfor i in range(len(order)):\n s = s + \"| **%s** | %.3f | %.7f | %d |\\n\" % (gene_set_names[order[i]][0], NES[i], p_values[i], len(gene_sets[order[i]]))\ndisplay(Markdown(s))",
"Multiple hypothesis testing\nWe present two example gene sets: one with a high NES and low p-value, and one with a low NES and a high p-value. We plot histograms of the null distribution of ES.",
"name = 'DNA_DAMAGE_SIGNALLING'\nL,r = rank_genes(D,C)\nplot_es_calculations(name, L, r)\n\nn = 1000\nS = gene_set_hash[name]['indexes']\nL, r = rank_genes(D,C)\nES = enrichment_score(L,r,S)\nES_pi = np.zeros(n)\nfor i in range(n):\n pi = np.array([np.random.randint(0,2) for i in range(k)])\n L, r = rank_genes(D,pi)\n ES_pi[i] = enrichment_score(L,r,S)\n\nhist(ES_pi,bins=100)\nplot([ES,ES],[0,20],'r-',label=\"ES(S)\")\ntitle(\"Histogram of ES values for random phenotype labels.\\nRed line is ES for the selected gene set.\")\n\n\nname = 'tcrPathway'\nL,r = rank_genes(D,C)\nplot_es_calculations(name, L, r)\n\nn = 1000\nS = gene_set_hash[name]['indexes']\nL, r = rank_genes(D,C)\nES = enrichment_score(L,r,S)\nES_pi = np.zeros(n)\nfor i in range(n):\n pi = np.array([np.random.randint(0,2) for i in range(k)])\n L, r = rank_genes(D,pi)\n ES_pi[i] = enrichment_score(L,r,S)\n\nhist(ES_pi,bins=100)\nplot([ES,ES],[0,20],'r-',label=\"ES(S)\")\ntitle(\"Histogram of ES values for random phenotype labels.\\nRed line is ES for the selected gene set.\")\n",
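Each histogram above implicitly defines an empirical p-value: the fraction of permuted labelings whose ES is at least as extreme as the observed one. A standalone numpy sketch of that estimate, using a hand-made null sample instead of the real permutations (the +1 correction is a common permutation-test convention, not necessarily what the gsea module does):

```python
import numpy as np

def empirical_p_value(es_observed, es_null):
    """Fraction of null ES values at least as extreme as the observed ES,
    one-sided in the direction of the observed sign, with a +1 correction
    so the estimate is never exactly zero."""
    es_null = np.asarray(es_null)
    if es_observed >= 0:
        hits = np.sum(es_null >= es_observed)
    else:
        hits = np.sum(es_null <= es_observed)
    return (hits + 1) / (len(es_null) + 1)

print(empirical_p_value(0.25, [0.1, 0.2, 0.3, -0.1]))  # 0.4
```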
"Performance optimizations",
"%timeit L,R = rank_genes(D,C)\n\n%timeit ES = enrichment_score(L,r,S)\n\n%prun order, NES, p_values = gsea(D, C, gene_sets)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pyreaclib/pyreaclib
|
pynucastro/library/tabular/use_tabulated_rate.ipynb
|
bsd-3-clause
|
[
"Tabulated weak nuclear reaction rates\nThe reaction rate parameterizations in pynucastro/library/tabular were obtained from:\nToshio Suzuki, Hiroshi Toki and Ken'ichi Nomoto (2016):\nELECTRON-CAPTURE AND beta-DECAY RATES FOR sd-SHELL NUCLEI IN STELLAR ENVIRONMENTS RELEVANT TO HIGH-DENSITY O–NE–MG CORES. The Astrophysical Journal, 817, 163\nNote: You must have the seaborn package in your PYTHONPATH.",
"import pynucastro as pyrl",
"Load a tabulated rate",
"al_mg = pyrl.Rate(\"al28--mg28-toki\")",
"A human readable string describing the rate, and the nuclei involved",
"print(al_mg)",
"Evaluate the electron capture rate [s$^{-1}$] at a given temperature (T [K]) and $Y_e$-weighted density ($\\rho Y_e$ [g/cm$^3$])",
"al_mg.eval(T=1.e8,rhoY=1.e9)",
"Plot the rate as a function of temperature and density on a heat map.",
"al_mg.plot()",
"Another example:",
"ne_f = pyrl.Rate(\"ne23--f23-toki\")\nprint(ne_f)\n\nne_f.plot()",
"Working with a group of rates",
"files = [\"c13-pg-n14-nacr\",\n \"n13--c13-wc12\",\n \"c12-c12n-mg23-cf88\",\n \"o14-ap-f17-Ha96c\",\n \"mg23--na23-toki\",\n \"na23--ne23-toki\",\n \"n13-pg-o14-lg06\",\n \"c12-c12p-na23-cf88\"]\nrc = pyrl.RateCollection(files)\nrc.plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
leyhline/vix-term-structure
|
classification.ipynb
|
mit
|
[
"import operator\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams, cm\nfrom mpl_toolkits.mplot3d import Axes3D\nimport tensorflow.contrib.keras as keras\n\nfrom vixstructure.data import LongPricesDataset\n\nrcParams[\"figure.figsize\"] = 12, 6\n\ndataset = LongPricesDataset(\"data/8_m_settle.csv\", \"data/expirations.csv\")\n\nx = dataset.term_structure.data_frame\ny = x.apply(lambda x: [np.nan] + [2*x.iloc[i] - x.iloc[i-1] - x.iloc[i+1] for i in range(1, 7)] + [np.nan],\n axis=1).iloc[:, 1:-1]\n\ny.describe()\n\ndef splitted_dataset(x, y, validation_split = 0.15, test_split = 0.15):\n assert len(x) == len(y)\n val_length = int(len(x) * validation_split / 2)\n test_length = int(len(x) * test_split / 2)\n x_fst = x.iloc[:int(len(x) / 2)]\n x_snd = x.iloc[int(len(x) / 2):]\n y_fst = y.iloc[:int(len(y) / 2)]\n y_snd = y.iloc[int(len(y) / 2):]\n x_train, y_train = (x_fst.iloc[:-(val_length + test_length)].append(x_snd.iloc[:-(val_length + test_length)]),\n y_fst.iloc[:-(val_length + test_length)].append(y_snd.iloc[:-(val_length + test_length)]))\n x_val, y_val = (x_fst.iloc[-(val_length + test_length):-test_length].append(x_snd.iloc[-(val_length + test_length):-test_length]),\n y_fst.iloc[-(val_length + test_length):-test_length].append(y_snd.iloc[-(val_length + test_length):-test_length]))\n x_test, y_test = (x_fst.iloc[-test_length:].append(x_snd.iloc[-test_length:]),\n y_fst.iloc[-test_length:].append(y_snd[-test_length:]))\n return (x_train, y_train), (x_val, y_val), (x_test, y_test)",
"Classification\nUse three categories, thresholding the value y with:\n\n+ → y > 0.1\n0 → -0.1 ≤ y ≤ 0.1\n- → y < -0.1",
"def get_categories(y):\n y = y.dropna()\n plus = (0.1 < y)\n zero = (-0.1 <= y) & (y <= 0.1)\n minus = (y < -0.1)\n return plus, zero, minus\n\ndef get_count(plus, zero, minus):\n return pd.concat(map(operator.methodcaller(\"sum\"), [plus, zero, minus]), axis=1, keys=[\"plus\", \"zero\", \"minus\"])\n\nplus, zero, minus = get_categories(y)\ncount = get_count(plus, zero, minus)\ncount.plot.bar(figsize=(8, 3))\nplt.legend((\"> 0.1\", \"≈ 0\", \"< -0.1\"), framealpha=0.7, loc=9)\nplt.title(\"Number of samples per category.\")\nplt.xticks(range(6), range(1, 7))\nplt.xlabel(\"Leg\")\nplt.ylabel(\"Samples\")\nplt.savefig(\"classification-samples-per-category.pdf\", format=\"pdf\", dpi=300, bbox_inches=\"tight\")\nplt.show()\n\nscaling = count.apply(lambda x: [1 / (xi / x[0]) for xi in x], axis=1)\nscaling\n# The necessary scaling factor",
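Note that the expression 1 / (xi / x[0]) is simply x[0] / xi: each category gets a weight inversely proportional to its count, relative to the first ("plus") category. A tiny self-contained sketch with invented counts:

```python
import pandas as pd

# Hypothetical per-category sample counts for one leg.
counts = pd.Series({"plus": 400, "zero": 200, "minus": 100})

# Same formula as `scaling` above, written as x[0] / xi.
weights = counts.iloc[0] / counts
print(weights.tolist())  # [1.0, 2.0, 4.0]
```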
"That is not very balanced. Better to weight the zero and minus categories roughly twice as heavily.",
"for dataset, name in zip(splitted_dataset(x, y), (\"Training Set\", \"Validation Set\", \"Test Set\")):\n cats = get_categories(dataset[1])\n cnt = get_count(*cats)\n cnt.plot.bar(figsize=(12, 3))\n plt.title(name)\nplt.show()",
"Even the split datasets look similar. Seems there is no problem with doubling the underrepresented categories.\nNow for the mapping\n$N \\times M \\to N \\times M \\times 3$",
"target = np.dstack((plus.values, zero.values, minus.values)).astype(float)\ntarget.shape\n\nscaled_target = target * scaling.values\nscaled_target.shape\n\ninputs = x.dropna().values\ninputs = np.diff(inputs, axis=1)\ninputs.shape",
"Testwise training\nOnly for first leg",
"leg_nr = 0\n(x_train, y_train), (x_val, y_val), (x_test, y_test) = splitted_dataset(\n pd.DataFrame(inputs), pd.DataFrame(target[:,leg_nr,:]))\n\ninput_layer = keras.layers.Input(shape=(7,), name=\"inputs\")\nhidden_layer = keras.layers.Dense(30, activation=\"relu\", name=\"hidden\")(input_layer)\noutput_layer = keras.layers.Dense(3, activation=\"softmax\", name=\"predictions\")(hidden_layer)\nmodel = keras.models.Model(inputs=input_layer, outputs=output_layer)\nprint(model.summary())\nmodel.compile(optimizer=\"Adam\",\n loss='categorical_crossentropy',\n metrics=['accuracy'])\nmodel.fit(x_train.values[:-1], y_train.values[1:], epochs=100, batch_size=32,\n validation_data=(x_val.values[:-1], y_val.values[1:]),\n class_weight={k:v for k, v in enumerate(scaling.iloc[leg_nr].values)})\n\ntest_pred = model.predict(x_test.values[:-1])\n\ndef accuracy(y1, y2):\n return np.equal(np.argmax(y1, axis=-1), np.argmax(y2, axis=-1)).sum() / len(y1)\n\n# Predicted accuracy\naccuracy(test_pred, y_test.values[1:])\n\n# Naive accuracy\naccuracy(y_test.values[:-1], y_test.values[1:])",
"Conclusion:\nClassification works equally badly.",
"results_dict = {}\n\n# Try for all the legs:\n#days = 1\nfor days in range(1, 23, 3):\n network_predictions = []\n naive_predictions = []\n for leg_nr in range(6):\n (x_train, y_train), (x_val, y_val), (x_test, y_test) = splitted_dataset(\n pd.DataFrame(inputs), pd.DataFrame(target[:,leg_nr,:]))\n input_layer = keras.layers.Input(shape=(7,), name=\"inputs\")\n hidden_layer = keras.layers.Dense(30, activation=\"relu\", name=\"hidden\")(input_layer)\n output_layer = keras.layers.Dense(3, activation=\"softmax\", name=\"predictions\")(hidden_layer)\n model = keras.models.Model(inputs=input_layer, outputs=output_layer)\n model.compile(optimizer=\"Adam\",\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n model.fit(x_train.values[:-days], y_train.values[days:], epochs=100, batch_size=32,\n validation_data=(x_val.values[:-days], y_val.values[days:]),\n class_weight={k:v for k, v in enumerate(scaling.iloc[leg_nr].values)},\n verbose=0)\n test_pred = model.predict(x_test.values[:-days])\n pred_acc = accuracy(test_pred, y_test.values[days:])\n naive_acc = accuracy(y_test.values[:-days], y_test.values[days:])\n network_predictions.append(pred_acc)\n naive_predictions.append(naive_acc)\n results = pd.DataFrame([pd.Series(naive_predictions), pd.Series(network_predictions)])\n results.columns = [\"V\" + str(i) for i in range(1, 7)]\n results.index = [\"Naive\", \"Network\"]\n results_dict[days] = results\n\nresults_dict[1].T.plot.bar(figsize=(4, 4), width=0.8)\nplt.legend(loc=\"lower center\")\nplt.axhline(results_dict[1].loc[\"Naive\"].mean(), color=\"#1f77b4\")\nplt.axhline(results_dict[1].loc[\"Network\"].mean(), color=\"#ff7f0e\")\nplt.ylabel(\"Accuracy\")\nplt.xticks(np.arange(6), list(range(1, 7)))\nplt.savefig(\"classification-1.pdf\", format=\"pdf\", dpi=300, bbox_inches=\"tight\")\nplt.show()\n\nplt.figure(figsize=(4,4))\nplt.plot(list(results_dict.keys()),\n [results_dict[i].loc[\"Naive\"].mean() for i in results_dict],\n list(results_dict.keys()),\n 
[results_dict[i].loc[\"Network\"].mean() for i in results_dict],\n linewidth=2)\nplt.legend((\"Naive\", \"Network\"))\nplt.ylabel(\"Mean accuracy\")\nplt.axhline(0.5, color=\"grey\", alpha=0.75)\nplt.xlim(1, 22)\nplt.grid(axis=\"x\")\nplt.xticks(list(results_dict.keys()))\nplt.savefig(\"classification-all.pdf\", format=\"pdf\", dpi=300, bbox_inches=\"tight\")\nplt.show()"
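The naive baseline above is pure persistence: predict that the class observed `days` steps earlier simply repeats. The `accuracy` helper is not shown in this excerpt, so the argmax-agreement version below is an assumption, but it makes the baseline self-contained:

```python
import numpy as np

def accuracy(pred, truth):
    # Assumed helper: fraction of samples whose predicted class
    # (argmax over the 3 class columns) matches the true class.
    return np.mean(np.argmax(pred, axis=1) == np.argmax(truth, axis=1))

# Persistence baseline on a toy one-hot class sequence of shape (n, 3):
y = np.eye(3)[[0, 0, 1, 1, 2, 2, 0]]
days = 1
# Compare each label against the label `days` steps earlier.
naive_acc = accuracy(y[:-days], y[days:])
```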
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/a09c964ce37825f750704113fa863276/plot_mne_inverse_envelope_correlation_volume.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute envelope correlations in volume source space\nCompute envelope correlations of orthogonalized activity [1] [2] in source\nspace using resting state CTF data in a volume source space.",
"# Authors: Eric Larson <larson.eric.d@gmail.com>\n# Sheraz Khan <sheraz@khansheraz.com>\n# Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\n\nimport mne\nfrom mne.beamformer import make_lcmv, apply_lcmv_epochs\nfrom mne.connectivity import envelope_correlation\nfrom mne.preprocessing import compute_proj_ecg, compute_proj_eog\n\ndata_path = mne.datasets.brainstorm.bst_resting.data_path()\nsubjects_dir = op.join(data_path, 'subjects')\nsubject = 'bst_resting'\ntrans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')\nbem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')\nraw_fname = op.join(data_path, 'MEG', 'bst_resting',\n 'subj002_spontaneous_20111102_01_AUX.ds')\ncrop_to = 60.",
"Here we do some things in the name of speed, such as crop (which will\nhurt SNR) and downsample. Then we compute SSP projectors and apply them.",
"raw = mne.io.read_raw_ctf(raw_fname, verbose='error')\nraw.crop(0, crop_to).load_data().pick_types(meg=True, eeg=False).resample(80)\nraw.apply_gradient_compensation(3)\nprojs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)\nprojs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')\nraw.info['projs'] += projs_ecg\nraw.info['projs'] += projs_eog\nraw.apply_proj()\ncov = mne.compute_raw_covariance(raw) # compute before band-pass of interest",
"Now we band-pass filter our data and create epochs.",
"raw.filter(14, 30)\nevents = mne.make_fixed_length_events(raw, duration=5.)\nepochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,\n baseline=None, reject=dict(mag=8e-13), preload=True)\ndel raw",
"Compute the forward and inverse",
"# This source space is really far too coarse, but we do this for speed\n# considerations here\npos = 15. # 1.5 cm is very broad, done here for speed!\nsrc = mne.setup_volume_source_space('bst_resting', pos, bem=bem,\n subjects_dir=subjects_dir, verbose=True)\nfwd = mne.make_forward_solution(epochs.info, trans, src, bem)\ndata_cov = mne.compute_covariance(epochs)\nfilters = make_lcmv(epochs.info, fwd, data_cov, 0.05, cov,\n pick_ori='max-power', weight_norm='nai')\ndel fwd",
"Compute label time series and do envelope correlation",
"epochs.apply_hilbert() # faster to do in sensor space\nstcs = apply_lcmv_epochs(epochs, filters, return_generator=True)\ncorr = envelope_correlation(stcs, verbose=True)",
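Conceptually, an envelope correlation is just the Pearson correlation of analytic-signal amplitudes. The toy numpy/scipy sketch below illustrates that idea on synthetic signals; note it omits the pairwise orthogonalization that MNE's `envelope_correlation` performs to suppress zero-lag source leakage:

```python
import numpy as np
from scipy.signal import hilbert

# Two carriers sharing the same slow amplitude modulation
t = np.linspace(0, 1, 1000)
am = 1 + 0.5 * np.sin(2 * np.pi * 2 * t)       # shared 2 Hz envelope
x = am * np.sin(2 * np.pi * 40 * t)            # 40 Hz carrier
y = am * np.sin(2 * np.pi * 40 * t + 1.0)      # phase-shifted carrier

# Envelope = magnitude of the analytic signal
env_x = np.abs(hilbert(x))
env_y = np.abs(hilbert(y))

# Envelope correlation: high here because the envelopes co-vary,
# even though the carriers are out of phase
corr = np.corrcoef(env_x, env_y)[0, 1]
```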
"Compute the degree and plot it",
"degree = mne.connectivity.degree(corr, 0.15)\nstc = mne.VolSourceEstimate(degree, src[0]['vertno'], 0, 1, 'bst_resting')\nbrain = stc.plot(\n src, clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',\n subjects_dir=subjects_dir, mode='glass_brain')",
"References\n.. [1] Hipp JF, Hawellek DJ, Corbetta M, Siegel M, Engel AK (2012)\n Large-scale cortical correlation structure of spontaneous\n oscillatory activity. Nature Neuroscience 15:884–890\n.. [2] Khan S et al. (2018). Maturation trajectories of cortical\n resting-state networks depend on the mediating frequency band.\n Neuroimage 174:57–68"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
elenduuche/deep-learning
|
batch-norm/Batch_Normalization_Lesson.ipynb
|
mit
|
[
"Batch Normalization – Lesson\n\nWhat is it?\nWhat are its benefits?\nHow do we add it to a network?\nLet's see it work!\nWhat are you hiding?\n\nWhat is Batch Normalization?<a id='theory'></a>\nBatch normalization was introduced in Sergey Ioffe's and Christian Szegedy's 2015 paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that, instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called \"batch\" normalization because during training, we normalize each layer's inputs by using the mean and variance of the values in the current mini-batch.\nWhy might this help? Well, we know that normalizing the inputs to a network helps the network learn. But a network is a series of layers, where the output of one layer becomes the input to another. That means we can think of any layer in a neural network as the first layer of a smaller network.\nFor example, imagine a 3 layer network. Instead of just thinking of it as a single network with inputs, layers, and outputs, think of the output of layer 1 as the input to a two layer network. This two layer network would consist of layers 2 and 3 in our original network. \nLikewise, the output of layer 2 can be thought of as the input to a single layer network, consisting only of layer 3.\nWhen you think of it like that - as a series of neural networks feeding into each other - then it's easy to imagine how normalizing the inputs to each layer would help. It's just like normalizing the inputs to any other neural network, but you're doing it at every layer (sub-network).\nBeyond the intuitive reasons, there are good mathematical reasons why it helps the network learn better, too. It helps combat what the authors call internal covariate shift. This discussion is best handled in the paper and in Deep Learning, a book you can read online, written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 
Specifically, check out the batch normalization section of Chapter 8: Optimization for Training Deep Models.\nBenefits of Batch Normalization<a id=\"benefits\"></a>\nBatch normalization optimizes network training. It has been shown to have several benefits:\n1. Networks train faster – Each training iteration will actually be slower because of the extra calculations during the forward pass and the additional hyperparameters to train during back propagation. However, it should converge much more quickly, so training should be faster overall. \n2. Allows higher learning rates – Gradient descent usually requires small learning rates for the network to converge. And as networks get deeper, their gradients get smaller during back propagation so they require even more iterations. Using batch normalization allows us to use much higher learning rates, which further increases the speed at which networks train. \n3. Makes weights easier to initialize – Weight initialization can be difficult, and it's even more difficult when creating deeper networks. Batch normalization seems to allow us to be much less careful about choosing our initial starting weights.\n4. Makes more activation functions viable – Some activation functions do not work well in some situations. Sigmoids lose their gradient pretty quickly, which means they can't be used in deep networks. And ReLUs often die out during training, where they stop learning completely, so we need to be careful about the range of values fed into them. Because batch normalization regulates the values going into each activation function, non-linearities that don't seem to work well in deep networks actually become viable again.\n5. Simplifies the creation of deeper networks – Because of the first 4 items listed above, it is easier to build and faster to train deeper neural networks when using batch normalization. And it's been shown that deeper networks generally produce better results, so that's great.\n6. 
Provides a bit of regularization – Batch normalization adds a little noise to your network. In some cases, such as in Inception modules, batch normalization has been shown to work as well as dropout. But in general, consider batch normalization as a bit of extra regularization, possibly allowing you to reduce some of the dropout you might add to a network. \n7. May give better results overall – Some tests seem to show batch normalization actually improves the training results. However, it's really an optimization to help train faster, so you shouldn't think of it as a way to make your network better. But since it lets you train networks faster, that means you can iterate over more designs more quickly. It also lets you build deeper networks, which are usually better. So when you factor in everything, you're probably going to end up with better results if you build your networks with batch normalization.\nBatch Normalization in TensorFlow<a id=\"implementation_1\"></a>\nThis section of the notebook shows you one way to add batch normalization to a neural network built in TensorFlow. \nThe following cell imports the packages we need in the notebook and loads the MNIST dataset to use in our experiments. However, the tensorflow package contains all the code you'll actually need for batch normalization.",
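The core calculation is compact enough to write out directly. The numpy sketch below mirrors what `tf.layers.batch_normalization` does (batch statistics during training, exponentially averaged population statistics for inference); the momentum and epsilon values are illustrative assumptions, not taken from this notebook:

```python
import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               training, momentum=0.99, eps=1e-3):
    """Minimal batch-norm forward pass over a (batch, features) array."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        # Keep exponentially averaged population estimates for inference
        running_mean[:] = momentum * running_mean + (1 - momentum) * mean
        running_var[:] = momentum * running_var + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta  # learned scale and shift

x = np.random.RandomState(0).normal(loc=5.0, scale=3.0, size=(64, 4))
gamma, beta = np.ones(4), np.zeros(4)
rm, rv = np.zeros(4), np.ones(4)
out = batch_norm(x, gamma, beta, rm, rv, training=True)
# During training the outputs are normalized per mini-batch,
# so each feature of `out` has mean ~0 and std ~1.
```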
"# Import necessary packages\nimport tensorflow as tf\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Import MNIST data so we have something for our experiments\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)",
"Neural network classes for testing\nThe following class, NeuralNet, allows us to create identical neural networks with and without batch normalization. The code is heavily documented, but there is also some additional discussion later. You do not need to read through it all before going through the rest of the notebook, but the comments within the code blocks may answer some of your questions.\nAbout the code:\n\nThis class is not meant to represent TensorFlow best practices – the design choices made here are to support the discussion related to batch normalization.\nIt's also important to note that we use the well-known MNIST data for these examples, but the networks we create are not meant to be good for performing handwritten character recognition. We chose this network architecture because it is similar to the one used in the original paper, which is complex enough to demonstrate some of the benefits of batch normalization while still being fast to train.",
"class NeuralNet:\n def __init__(self, initial_weights, activation_fn, use_batch_norm):\n \"\"\"\n Initializes this object, creating a TensorFlow graph using the given parameters.\n \n :param initial_weights: list of NumPy arrays or Tensors\n Initial values for the weights for every layer in the network. We pass these in\n so we can create multiple networks with the same starting weights to eliminate\n training differences caused by random initialization differences.\n The number of items in the list defines the number of layers in the network,\n and the shapes of the items in the list define the number of nodes in each layer.\n e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would \n create a network with 784 inputs going into a hidden layer with 256 nodes,\n followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activation function on the output layer.\n e.g. 
Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param use_batch_norm: bool\n Pass True to create a network that uses batch normalization; False otherwise\n Note: this network will not use batch normalization on layers that do not have an\n activation function.\n \"\"\"\n # Keep track of whether or not this network uses batch normalization.\n self.use_batch_norm = use_batch_norm\n self.name = \"With Batch Norm\" if use_batch_norm else \"Without Batch Norm\"\n\n # Batch normalization needs to do different calculations during training and inference,\n # so we use this placeholder to tell the graph which behavior to use.\n self.is_training = tf.placeholder(tf.bool, name=\"is_training\")\n\n # This list is just for keeping track of data we want to plot later.\n # It doesn't actually have anything to do with neural nets or batch normalization.\n self.training_accuracies = []\n\n # Create the network graph, but it will not actually have any real values until after you\n # call train or test\n self.build_network(initial_weights, activation_fn)\n \n def build_network(self, initial_weights, activation_fn):\n \"\"\"\n Build the graph. The graph still needs to be trained via the `train` method.\n \n :param initial_weights: list of NumPy arrays or Tensors\n See __init__ for description. \n :param activation_fn: Callable\n See __init__ for description. \n \"\"\"\n self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])\n layer_in = self.input_layer\n for weights in initial_weights[:-1]:\n layer_in = self.fully_connected(layer_in, weights, activation_fn) \n self.output_layer = self.fully_connected(layer_in, initial_weights[-1])\n \n def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. 
If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n # Since this class supports both options, only use batch normalization when\n # requested. However, do not use it on the final layer, which we identify\n # by its lack of an activation function.\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n # (See later in the notebook for more details.)\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n # Apply batch normalization to the linear combination of the inputs and weights\n batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\n\n # Now apply the activation function, *after* the normalization.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. 
\n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n\n def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):\n \"\"\"\n Trains the model on the MNIST training dataset.\n \n :param session: Session\n Used to run training graph operations.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param training_batches: int\n Number of batches to train.\n :param batches_per_sample: int\n How many batches to train before sampling the validation accuracy.\n :param save_model_as: string or None (default None)\n Name to use if you want to save the trained model.\n \"\"\"\n # This placeholder will store the target labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define loss and optimizer\n cross_entropy = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))\n \n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n if self.use_batch_norm:\n # If we don't include the update ops as dependencies on the train step, the \n # tf.layers.batch_normalization layers won't update their population statistics,\n # which will cause the model to fail at inference time\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n else:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n \n # Train for the appropriate number of batches. 
(tqdm is only for a nice timing display)\n for i in tqdm.tqdm(range(training_batches)):\n # We use batches of 60 just because the original paper did. You can use any size batch you like.\n batch_xs, batch_ys = mnist.train.next_batch(60)\n session.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\n \n # Periodically test accuracy against the 5k validation images and store it for plotting later.\n if i % batches_per_sample == 0:\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n self.training_accuracies.append(test_accuracy)\n\n # After training, report accuracy against test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,\n labels: mnist.validation.labels,\n self.is_training: False})\n print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))\n\n # If you want to use this model later for inference instead of having to retrain it,\n # just construct it with the same parameters and then pass this file to the 'test' function\n if save_model_as:\n tf.train.Saver().save(session, save_model_as)\n\n def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):\n \"\"\"\n Tests a trained model on the MNIST testing dataset.\n\n :param session: Session\n Used to run the testing graph operations.\n :param test_training_accuracy: bool (default False)\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n Note: in real life, *always* perform inference using the population mean and variance.\n This parameter exists just to support demonstrating what happens if you don't.\n :param include_individual_predictions: bool (default False)\n This function 
always performs an accuracy test against the entire test set. But if this parameter\n is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n and accuracy.\n :param restore_from: string or None (default None)\n Name of a saved model if you want to test with previously saved weights.\n \"\"\"\n # This placeholder will store the true labels for each mini batch\n labels = tf.placeholder(tf.float32, [None, 10])\n\n # Define operations for testing\n correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n # If provided, restore from a previously saved model\n if restore_from:\n tf.train.Saver().restore(session, restore_from)\n\n # Test against all of the MNIST test data\n test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,\n labels: mnist.test.labels,\n self.is_training: test_training_accuracy})\n print('-'*75)\n print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))\n\n # If requested, perform tests predicting individual values rather than batches\n if include_individual_predictions:\n predictions = []\n correct = 0\n\n # Do 200 predictions, 1 at a time\n for i in range(200):\n # This is a normal prediction using an individual test case. However, notice\n # we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.\n # Remember that will tell it whether it should use the batch mean & variance or\n # the population estimates that were calculated while training the model.\n pred, corr = session.run([tf.argmax(self.output_layer,1), accuracy],\n feed_dict={self.input_layer: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n self.is_training: test_training_accuracy})\n correct += corr\n\n predictions.append(pred[0])\n\n print(\"200 Predictions:\", predictions)\n print(\"Accuracy on 200 samples:\", correct/200)\n",
"There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.\nWe add batch normalization to layers inside the fully_connected function. Here are some important points about that code:\n1. Layers with batch normalization do not include a bias term.\n2. We use TensorFlow's tf.layers.batch_normalization function to handle the math. (We show lower-level ways to do this later in the notebook.)\n3. We tell tf.layers.batch_normalization whether or not the network is training. This is an important step we'll talk about later.\n4. We add the normalization before calling the activation function.\nIn addition to that code, the training step is wrapped in the following with statement:\npython\nwith tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\nThis line actually works in conjunction with the training parameter we pass to tf.layers.batch_normalization. Without it, TensorFlow's batch normalization layer will not operate correctly during inference.\nFinally, whenever we train the network or perform inference, we use the feed_dict to set self.is_training to True or False, respectively, like in the following line:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nWe'll go into more details later, but next we want to show some experiments that use this code and test networks with and without batch normalization.\nBatch Normalization Demos<a id='demos'></a>\nThis section of the notebook trains various networks with and without batch normalization to demonstrate some of the benefits mentioned earlier. \nWe'd like to thank the author of this blog post Implementing Batch Normalization in TensorFlow. 
That post provided the idea of - and some of the code for - plotting the differences in accuracy during training, along with the idea for comparing multiple networks using the same initial weights.\nCode to support testing\nThe following two functions support the demos we run in the notebook. \nThe first function, plot_training_accuracies, simply plots the values found in the training_accuracies lists of the NeuralNet objects passed to it. If you look at the train function in NeuralNet, you'll see that while it's training the network, it periodically measures validation accuracy and stores the results in that list. It does that just to support these plots.\nThe second function, train_and_test, creates two neural nets - one with and one without batch normalization. It then trains them both and tests them, calling plot_training_accuracies to plot how their accuracies changed over the course of training. The really important thing about this function is that it initializes the starting weights for the networks outside of the networks and then passes them in. This lets it train both networks from the exact same starting weights, which eliminates performance differences that might result from (un)lucky initial weights.",
"def plot_training_accuracies(*args, **kwargs):\n \"\"\"\n Displays a plot of the accuracies calculated during training to demonstrate\n how many iterations it took for the model(s) to converge.\n \n :param args: One or more NeuralNet objects\n You can supply any number of NeuralNet objects as unnamed arguments \n and this will display their training accuracies. Be sure to call `train` on\n the NeuralNets before calling this function.\n :param kwargs: \n You can supply any named parameters here, but `batches_per_sample` is the only\n one we look for. It should match the `batches_per_sample` value you passed\n to the `train` function.\n \"\"\"\n fig, ax = plt.subplots()\n\n batches_per_sample = kwargs['batches_per_sample']\n \n for nn in args:\n ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),\n nn.training_accuracies, label=nn.name)\n ax.set_xlabel('Training steps')\n ax.set_ylabel('Accuracy')\n ax.set_title('Validation Accuracy During Training')\n ax.legend(loc=4)\n ax.set_ylim([0,1])\n plt.yticks(np.arange(0, 1.1, 0.1))\n plt.grid(True)\n plt.show()\n\ndef train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):\n \"\"\"\n Creates two networks, one with and one without batch normalization, then trains them\n with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.\n \n :param use_bad_weights: bool\n If True, initialize the weights of both networks to wildly inappropriate weights;\n if False, use reasonable starting weights.\n :param learning_rate: float\n Learning rate used during gradient descent.\n :param activation_fn: Callable\n The function used for the output of each hidden layer. The network will use the same\n activation function on every hidden layer and no activation function on the output layer.\n e.g. 
Pass tf.nn.relu to use ReLU activations on your hidden layers.\n :param training_batches: (default 50000)\n Number of batches to train.\n :param batches_per_sample: (default 500)\n How many batches to train before sampling the validation accuracy.\n \"\"\"\n # Use identical starting weights for each network to eliminate differences in\n # weight initialization as a cause for differences seen in training performance\n #\n # Note: The networks will use these weights to define the number of and shapes of\n # its layers. The original batch normalization paper used 3 hidden layers\n # with 100 nodes in each, followed by a 10 node output layer. These values\n # build such a network, but feel free to experiment with different choices.\n # However, the input size should always be 784 and the final output should be 10.\n if use_bad_weights:\n # These weights should be horrible because they have such a large standard deviation\n weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,100), scale=5.0).astype(np.float32),\n np.random.normal(size=(100,10), scale=5.0).astype(np.float32)\n ]\n else:\n # These weights should be good because they have such a small standard deviation\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n # Just to make sure the TensorFlow's default graph is empty before we start another\n # test, because we don't bother using different graphs or scoping and naming \n # elements carefully in this sample code.\n tf.reset_default_graph()\n\n # build two versions of same network, 1 without and 1 with batch normalization\n nn = NeuralNet(weights, activation_fn, False)\n bn = NeuralNet(weights, activation_fn, True)\n 
\n # train and test the two models\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n nn.train(sess, learning_rate, training_batches, batches_per_sample)\n bn.train(sess, learning_rate, training_batches, batches_per_sample)\n \n nn.test(sess)\n bn.test(sess)\n \n # Display a graph of how validation accuracies changed during training\n # so we can compare how the models trained and when they converged\n plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)\n",
"Comparisons between identical networks, with and without batch normalization\nThe next series of cells train networks with various settings to show the differences with and without batch normalization. They are meant to clearly demonstrate the effects of batch normalization. We include a deeper discussion of batch normalization later in the notebook.\nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.relu)",
"As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.\nIf you look at the raw speed, you can see that without batch normalization we were computing over 1100 batches per second, whereas with batch normalization that goes down to just over 500. However, batch normalization allows us to perform fewer iterations and converge in less time overall. (We only trained for 50 thousand batches here so we could plot the comparison.)\nThe following creates two networks with the same hyperparameters used in the previous example, but only trains for 2000 iterations.",
"train_and_test(False, 0.01, tf.nn.relu, 2000, 50)",
"As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80% – the network with batch normalization hits that mark after around 200 iterations! (Note: if you run the code yourself, you'll see slightly different results each time because the starting weights - while the same for each model - are different for each run.)\nIn the above example, you should also notice that the networks trained fewer batches per second than what you saw in the previous example. That's because much of the time we're tracking is actually spent periodically performing inference to collect data for the plots. In this example we perform that inference every 50 batches instead of every 500, so generating the plot for this example requires 10 times the overhead for the same 2000 iterations.\nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and reasonable starting weights.",
"train_and_test(False, 0.01, tf.nn.sigmoid)",
"With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.relu)",
"Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.\nThe next cell shows the same test again. The network with batch normalization performs the same way, and the other suffers from the same problem again, but it manages to train longer before it happens.",
"train_and_test(False, 1, tf.nn.relu)",
"In both of the previous examples, the network with batch normalization manages to get over 98% accuracy, and gets near that result almost immediately. The higher learning rate allows the network to train extremely fast.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and reasonable starting weights.",
"train_and_test(False, 1, tf.nn.sigmoid)",
"In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.\nThe cell below shows a similar pair of networks trained for only 2000 iterations.",
"train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)",
"As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.relu)",
"With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and reasonable starting weights.",
"train_and_test(False, 2, tf.nn.sigmoid)",
"Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.\nHowever, look at the plot below where we train models with the same parameters but only 2000 iterations. As usual, batch normalization lets it train faster.",
"train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)",
"In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient. \nThe following creates two networks using a ReLU activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.relu)",
"As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them. \nThe following creates two networks using a sigmoid activation function, a learning rate of 0.01, and bad starting weights.",
"train_and_test(True, 0.01, tf.nn.sigmoid)",
"Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all. \nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.<a id=\"successful_example_lr_1\"></a>",
"train_and_test(True, 1, tf.nn.relu)",
"The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.\nThe following creates two networks using a sigmoid activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.sigmoid)",
"Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.<a id=\"successful_example_lr_2\"></a>",
"train_and_test(True, 2, tf.nn.relu)",
"We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.\nThe following creates two networks using a sigmoid activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.sigmoid)",
"In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.\nFull Disclosure: Batch Normalization Doesn't Fix Everything\nBatch normalization isn't magic and it doesn't work every time. Weights are still randomly initialized and batches are chosen at random during training, so you never know exactly how training will go. Even for these tests, where we use the same initial weights for both networks, we still get different weights each time we run.\nThis section includes two examples that show runs when batch normalization did not help at all.\nThe following creates two networks using a ReLU activation function, a learning rate of 1, and bad starting weights.",
"train_and_test(True, 1, tf.nn.relu)",
"When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)\nThe following creates two networks using a ReLU activation function, a learning rate of 2, and bad starting weights.",
"train_and_test(True, 2, tf.nn.relu)",
"When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning. \nNote: Both of the above examples use extremely bad starting weights, along with learning rates that are too high. While we've shown batch normalization can overcome bad values, we don't mean to encourage actually using them. The examples in this notebook are meant to show that batch normalization can help your networks train better. But these last two examples should remind you that you still want to try to use good network design choices and reasonable starting weights. It should also remind you that the results of each attempt to train a network are a bit random, even when using otherwise identical architectures.\nBatch Normalization: A Detailed Look<a id='implementation_2'></a>\nThe layer created by tf.layers.batch_normalization handles all the details of implementing batch normalization. Many students will be fine just using that and won't care about what's happening at the lower levels. However, some students may want to explore the details, so here is a short explanation of what's really happening, starting with the equations you're likely to come across if you ever read about batch normalization. \nIn order to normalize the values, we first need to find the average value for the batch. 
If you look at the code, you can see that this is not the average value of the batch inputs, but the average value coming out of any particular layer before we pass it through its non-linear activation function and then feed it as an input to the next layer.\nWe represent the average as $\\mu_B$, which is simply the sum of all of the values $x_i$ divided by the number of values, $m$ \n$$\n\\mu_B \\leftarrow \\frac{1}{m}\\sum_{i=1}^m x_i\n$$\nWe then need to calculate the variance, or mean squared deviation, represented as $\\sigma_{B}^{2}$. If you aren't familiar with statistics, that simply means for each value $x_i$, we subtract the average value (calculated earlier as $\\mu_B$), which gives us what's called the \"deviation\" for that value. We square the result to get the squared deviation. Sum up the results of doing that for each of the values, then divide by the number of values, again $m$, to get the average, or mean, squared deviation.\n$$\n\\sigma_{B}^{2} \\leftarrow \\frac{1}{m}\\sum_{i=1}^m (x_i - \\mu_B)^2\n$$\nOnce we have the mean and variance, we can use them to normalize the values with the following equation. For each value, it subtracts the mean and divides by the (almost) standard deviation. (You've probably heard of standard deviation many times, but if you have not studied statistics you might not know that the standard deviation is actually the square root of the mean squared deviation.)\n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAbove, we said \"(almost) standard deviation\". That's because the real standard deviation for the batch is calculated by $\\sqrt{\\sigma_{B}^{2}}$, but the above formula adds the term epsilon, $\\epsilon$, before taking the square root. The epsilon can be any small, positive constant - in our code we use the value 0.001. It is there partially to make sure we don't try to divide by zero, but it also acts to increase the variance slightly for each batch. 
\nWhy increase the variance? Statistically, this makes sense because even though we are normalizing one batch at a time, we are also trying to estimate the population distribution – the total training set, which is itself an estimate of the larger population of inputs your network wants to handle. The variance of a population is higher than the variance for any sample taken from that population, so increasing the variance a little bit for each batch helps take that into account. \nAt this point, we have a normalized value, represented as $\\hat{x_i}$. But rather than use it directly, we multiply it by a gamma value, $\\gamma$, and then add a beta value, $\\beta$. Both $\\gamma$ and $\\beta$ are learnable parameters of the network and serve to scale and shift the normalized value, respectively. Because they are learnable just like weights, they give your network some extra knobs to tweak during training to help it learn the function it is trying to approximate. \n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe now have the final batch-normalized output of our layer, which we would then pass to a non-linear activation function like sigmoid, tanh, ReLU, Leaky ReLU, etc. In the original batch normalization paper (linked in the beginning of this notebook), they mention that there might be cases when you'd want to perform the batch normalization after the non-linearity instead of before, but it is difficult to find any uses like that in practice.\nIn NeuralNet's implementation of fully_connected, all of this math is hidden inside the following line, where linear_output serves as the $x_i$ from the equations:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nThe next section shows you how to implement the math directly. 
\nBatch normalization without the tf.layers package\nOur implementation of batch normalization in NeuralNet uses the high-level abstraction tf.layers.batch_normalization, found in TensorFlow's tf.layers package.\nHowever, if you would like to implement batch normalization at a lower level, the following code shows you how.\nIt uses tf.nn.batch_normalization from TensorFlow's neural net (nn) package.\n1) You can replace the fully_connected function in the NeuralNet class with the below code and everything in NeuralNet will still work like it did before.",
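To make the four equations above concrete, here is a minimal NumPy sketch of the same math; the batch values, gamma, and beta here are made up for illustration, and gamma/beta are initialized the way the network would start them before any training:

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(64, 3) * 4.0 + 10.0    # x_i: linear outputs for one batch (m=64, 3 nodes)
epsilon = 0.001                            # same small constant used in the notebook

mu_B = x.mean(axis=0)                      # batch mean, one value per node
sigma2_B = ((x - mu_B) ** 2).mean(axis=0)  # batch variance (mean squared deviation)
x_hat = (x - mu_B) / np.sqrt(sigma2_B + epsilon)  # normalize with the "(almost) std dev"

gamma = np.ones(3)                         # learnable scale, starts at 1 (no effect yet)
beta = np.zeros(3)                         # learnable shift, starts at 0 (no effect yet)
y = gamma * x_hat + beta                   # final batch-normalized output

# With gamma=1 and beta=0, the output has (near) zero mean and unit variance.
print(y.mean(axis=0), y.var(axis=0))
```

Because gamma starts at one and beta at zero, this reproduces plain normalization; during training the network would learn other values for them.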
"def fully_connected(self, layer_in, initial_weights, activation_fn=None):\n \"\"\"\n Creates a standard, fully connected layer. Its number of inputs and outputs will be\n defined by the shape of `initial_weights`, and its starting weight values will be\n taken directly from that same parameter. If `self.use_batch_norm` is True, this\n layer will include batch normalization, otherwise it will not. \n \n :param layer_in: Tensor\n The Tensor that feeds into this layer. It's either the input to the network or the output\n of a previous layer.\n :param initial_weights: NumPy array or Tensor\n Initial values for this layer's weights. The shape defines the number of nodes in the layer.\n e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256 \n outputs. \n :param activation_fn: Callable or None (default None)\n The non-linearity used for the output of the layer. If None, this layer will not include \n batch normalization, regardless of the value of `self.use_batch_norm`. \n e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n \"\"\"\n if self.use_batch_norm and activation_fn:\n # Batch normalization uses weights as usual, but does NOT add a bias term. This is because \n # its calculations include gamma and beta variables that make the bias term unnecessary.\n weights = tf.Variable(initial_weights)\n linear_output = tf.matmul(layer_in, weights)\n\n num_out_nodes = initial_weights.shape[-1]\n\n # Batch normalization adds additional trainable variables: \n # gamma (for scaling) and beta (for shifting).\n gamma = tf.Variable(tf.ones([num_out_nodes]))\n beta = tf.Variable(tf.zeros([num_out_nodes]))\n\n # These variables will store the mean and variance for this layer over the entire training set,\n # which we assume represents the general population distribution.\n # By setting `trainable=False`, we tell TensorFlow not to modify these variables during\n # back propagation. 
Instead, we will assign values to these variables ourselves. \n pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)\n\n # Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.\n # This is the default value TensorFlow uses.\n epsilon = 1e-3\n\n def batch_norm_training():\n # Calculate the mean and variance for the data coming out of this layer's linear-combination step.\n # The [0] defines an array of axes to calculate over.\n batch_mean, batch_variance = tf.nn.moments(linear_output, [0])\n\n # Calculate a moving average of the training data's mean and variance while training.\n # These will be used during inference.\n # Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter\n # \"momentum\" to accomplish this and defaults it to 0.99\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n\n # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n # This is necessary because those two operations are not actually in the graph\n # connecting the linear_output and batch_normalization layers, \n # so TensorFlow would otherwise just skip them.\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def batch_norm_inference():\n # During inference, use our estimated population mean and variance to normalize the layer\n return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\n\n # Use `tf.cond` as a sort of if-check. 
When self.is_training is True, TensorFlow will execute \n # the operation returned from `batch_norm_training`; otherwise it will execute the graph\n # operation returned from `batch_norm_inference`.\n batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)\n \n # Pass the batch-normalized layer output through the activation function.\n # The literature states there may be cases where you want to perform the batch normalization *after*\n # the activation function, but it is difficult to find any uses of that in practice.\n return activation_fn(batch_normalized_output)\n else:\n # When not using batch normalization, create a standard layer that multiplies\n # the inputs and weights, adds a bias, and optionally passes the result \n # through an activation function. \n weights = tf.Variable(initial_weights)\n biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))\n linear_output = tf.add(tf.matmul(layer_in, weights), biases)\n return linear_output if not activation_fn else activation_fn(linear_output)\n",
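The moving-average update used above for the population statistics (pop = pop * decay + batch * (1 - decay), with decay = 0.99) can be sketched on its own in NumPy. The batch distribution below is made up for illustration: after many batches, the estimates converge to the true mean and variance even though each starts at the same zero/one initialization the layer uses.

```python
import numpy as np

decay = 0.99                  # matches the "momentum" default of 0.99
pop_mean = np.zeros(4)        # starts at zeros, like tf.zeros above
pop_variance = np.ones(4)     # starts at ones, like tf.ones above

np.random.seed(1)
for _ in range(2000):         # one update per training batch
    batch = np.random.randn(128, 4) * 2.0 + 5.0   # true mean 5, true variance 4
    batch_mean = batch.mean(axis=0)
    batch_variance = batch.var(axis=0)
    pop_mean = pop_mean * decay + batch_mean * (1 - decay)
    pop_variance = pop_variance * decay + batch_variance * (1 - decay)

# The running estimates approach the true population mean (5) and variance (4).
print(pop_mean, pop_variance)
```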
"This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points:\n\nIt explicitly creates variables to store gamma, beta, and the population mean and variance. These were all handled for us in the previous version of the function.\nIt initializes gamma to one and beta to zero, so they start out having no effect in this calculation: $y_i \\leftarrow \\gamma \\hat{x_i} + \\beta$. However, during training the network learns the best values for these variables using back propagation, just like networks normally do with weights.\nUnlike gamma and beta, the variables for population mean and variance are marked as untrainable. That tells TensorFlow not to modify them during back propagation. Instead, the lines that call tf.assign are used to update these variables directly.\nTensorFlow won't automatically run the tf.assign operations during training because it only evaluates operations that are required based on the connections it finds in the graph. To get around that, we add this line: with tf.control_dependencies([train_mean, train_variance]): before we run the normalization operation. This tells TensorFlow it needs to run those operations before running anything inside the with block. \nThe actual normalization math is still mostly hidden from us, this time using tf.nn.batch_normalization.\ntf.nn.batch_normalization does not have a training parameter like tf.layers.batch_normalization did. However, we still need to handle training and inference differently, so we run different code in each case using the tf.cond operation.\nWe use the tf.nn.moments function to calculate the batch mean and variance.\n\n2) The current version of the train function in NeuralNet will work fine with this new version of fully_connected. 
However, it uses these lines to ensure population statistics are updated when using batch normalization: \npython\nif self.use_batch_norm:\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nelse:\n train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\nOur new version of fully_connected handles updating the population statistics directly. That means you can also simplify your code by replacing the above if/else condition with just this line:\npython\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)\n3) And just in case you want to implement every detail from scratch, you can replace this line in batch_norm_training:\npython\nreturn tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - batch_mean) / tf.sqrt(batch_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAnd replace this line in batch_norm_inference:\npython\nreturn tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)\nwith these lines:\npython\nnormalized_linear_output = (linear_output - pop_mean) / tf.sqrt(pop_variance + epsilon)\nreturn gamma * normalized_linear_output + beta\nAs you can see in each of the above substitutions, the two lines of replacement code simply implement the following two equations directly. 
The first line calculates the following equation, with linear_output representing $x_i$ and normalized_linear_output representing $\\hat{x_i}$: \n$$\n\\hat{x_i} \\leftarrow \\frac{x_i - \\mu_B}{\\sqrt{\\sigma_{B}^{2} + \\epsilon}}\n$$\nAnd the second line is a direct translation of the following equation:\n$$\ny_i \\leftarrow \\gamma \\hat{x_i} + \\beta\n$$\nWe still use the tf.nn.moments operation to implement the other two equations from earlier – the ones that calculate the batch mean and variance used in the normalization step. If you really wanted to do everything from scratch, you could replace that line, too, but we'll leave that to you. \nWhy the difference between training and inference?\nIn the original function that uses tf.layers.batch_normalization, we tell the layer whether or not the network is training by passing a value for its training parameter, like so:\npython\nbatch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)\nAnd that forces us to provide a value for self.is_training in our feed_dict, like we do in this example from NeuralNet's train function:\npython\nsession.run(train_step, feed_dict={self.input_layer: batch_xs, \n labels: batch_ys, \n self.is_training: True})\nIf you looked at the low level implementation, you probably noticed that, just like with tf.layers.batch_normalization, we need to do slightly different things during training and inference. But why is that?\nFirst, let's look at what happens when we don't. The following function is similar to train_and_test from earlier, but this time we are only testing one network and instead of plotting its accuracy, we perform 200 predictions on test inputs, one input at a time. We can use the test_training_accuracy parameter to test the network in training or inference modes (the equivalent of passing True or False to the feed_dict for is_training).",
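The training/inference split can be sketched in plain Python/NumPy, standing in for the two tf.cond branches (batch_norm_training vs batch_norm_inference); the inputs and population statistics below are illustrative values, not taken from the trained network:

```python
import numpy as np

def batch_normalize(x, gamma, beta, pop_mean, pop_var, training, eps=1e-3):
    if training:                                    # batch_norm_training branch:
        mean, var = x.mean(axis=0), x.var(axis=0)   # use this batch's own statistics
    else:                                           # batch_norm_inference branch:
        mean, var = pop_mean, pop_var               # use the stored population estimates
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

np.random.seed(2)
x = np.random.randn(32, 5) + 3.0   # a batch of 32 inputs, 5 nodes, true mean ~3

out_train = batch_normalize(x, 1.0, 0.0, np.full(5, 3.0), np.ones(5), training=True)
out_infer = batch_normalize(x, 1.0, 0.0, np.full(5, 3.0), np.ones(5), training=False)

# Training mode centers the batch exactly; inference mode only centers it
# approximately, because it relies on the (estimated) population statistics.
print(out_train.mean(), out_infer.mean())
```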
"def batch_norm_test(test_training_accuracy):\n \"\"\"\n :param test_training_accuracy: bool\n If True, perform inference with batch normalization using batch mean and variance;\n if False, perform inference with batch normalization using estimated population mean and variance.\n \"\"\"\n\n weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,100), scale=0.05).astype(np.float32),\n np.random.normal(size=(100,10), scale=0.05).astype(np.float32)\n ]\n\n tf.reset_default_graph()\n\n # Train the model\n bn = NeuralNet(weights, tf.nn.relu, True)\n \n # First train the network\n with tf.Session() as sess:\n tf.global_variables_initializer().run()\n\n bn.train(sess, 0.01, 2000, 2000)\n\n bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)",
"In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.",
"batch_norm_test(True)",
"As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The \"batches\" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer. \nNote: If you re-run that cell, you might get a different value from what we showed. That's because the specific weights the network learns will be different every time. But whatever value it is, it should be the same for all 200 predictions.\nTo overcome this problem, the network does not just normalize the batch at each layer. It also maintains an estimate of the mean and variance for the entire population. So when we perform inference, instead of letting it \"normalize\" all the values using their own means and variance, it uses the estimates of the population mean and variance that it calculated while training. \nSo in the following example, we pass False for test_training_accuracy, which tells the network that we want it to perform inference with the population statistics it calculates during training.",
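A quick NumPy sketch shows why a "batch" of exactly one input always normalizes to zeros, whatever the input values are:

```python
import numpy as np

eps = 1e-3
single = np.array([[2.7, -1.4, 9.9]])   # a "batch" containing exactly one input

mean = single.mean(axis=0)              # equals the input itself
var = single.var(axis=0)                # always 0 for a single example
normalized = (single - mean) / np.sqrt(var + eps)

# Every layer value normalizes to 0, so every prediction looks identical.
print(normalized)
```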
"batch_norm_test(False)",
"As you can see, now that we're using the estimated population mean and variance, we get a 97% accuracy. That means it guessed correctly on 194 of the 200 samples – not too bad for something that trained in under 4 seconds. :)\nConsiderations for other network types\nThis notebook demonstrates batch normalization in a standard neural network with fully connected layers. You can also use batch normalization in other types of networks, but there are some special considerations.\nConvNets\nConvolution layers consist of multiple feature maps. (Remember, the depth of a convolutional layer refers to its number of feature maps.) And the weights for each feature map are shared across all the inputs that feed into the layer. Because of these differences, batch normalizing convolutional layers requires batch/population mean and variance per feature map rather than per node in the layer.\nWhen using tf.layers.batch_normalization, be sure to pay attention to the order of your convolutional dimensions.\nSpecifically, you may want to set a different value for the axis parameter if your layers have their channels first instead of last. \nIn our low-level implementations, we used the following line to calculate the batch mean and variance:\npython\nbatch_mean, batch_variance = tf.nn.moments(linear_output, [0])\nIf we were dealing with a convolutional layer, we would calculate the mean and variance with a line like this instead:\npython\nbatch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)\nThe second parameter, [0,1,2], tells TensorFlow to calculate the batch mean and variance over each feature map. (The three axes are the batch, height, and width.) And setting keep_dims to False tells tf.nn.moments not to return values with the same size as the inputs. 
Specifically, it ensures we get one mean/variance pair per feature map.\nRNNs\nBatch normalization can work with recurrent neural networks, too, as shown in the 2016 paper Recurrent Batch Normalization. It's a bit more work to implement, but basically involves calculating the means and variances per time step instead of per layer. You can find an example where someone extended tf.nn.rnn_cell.RNNCell to include batch normalization in this GitHub repo."
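A NumPy sketch of the per-feature-map calculation, mirroring tf.nn.moments(conv_layer, [0, 1, 2]) on an NHWC tensor (the shapes below are illustrative):

```python
import numpy as np

np.random.seed(3)
conv_layer = np.random.randn(8, 28, 28, 16)     # NHWC: batch, height, width, 16 feature maps

# Average over batch, height, and width so each feature map
# gets exactly one mean/variance pair.
batch_mean = conv_layer.mean(axis=(0, 1, 2))
batch_variance = conv_layer.var(axis=(0, 1, 2))

print(batch_mean.shape, batch_variance.shape)   # one value per feature map
```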
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NeuroDataDesign/seelviz
|
Tony/ipynb/FA Visualizations Final.ipynb
|
apache-2.0
|
[
"Fractional Anisotropy Maps - Steps and Results\nOn Thursday, we showed Greg the output of the first step of the CAPTURE pipeline - namely, after modifying the CAPTURE MATLAB pipeline to accept TIFF files (originally it only took TIFs), we were able to generate two structure tensors from a TIFF stack of Aut1367 originally for use in Ilastik analysis. The main steps for the generation of the structure tensors are explained in a separate viewer (we showed Greg this) on Thursday: http://nbviewer.jupyter.org/github/NeuroDataDesign/seelviz/blob/gh-pages/Tony/ipynb/Generating%20Structure%20Tensors.ipynb \nThere were two separate structure tensors generated by the CAPTURE pipeline - one was \"DTK\" (which could be used later in the Diffusion ToolKit process) and the other was \"FSL\" (an alternate file format). We realized at office hours that the structure tensors (which were 5000 x 5000 x 5 x 6) each were the \"lower triangular\" values from the structures.\nFrom there, we first tried to use the DTK file directly inside Diffusion ToolKit, but were informed that the \"file appeared to be corrupted/missing data\". Only the FSL format seemed to have properly saved all the image data (likely because it was run first during the MATLAB script, and because generating the structure tensors froze Tony's computer, so the DTK file format was corrupted). Thus, all analysis was done on the FSL file. \nFrom there, we followed the DiPy tutorial/ndmg code that was suitable for generating FA maps (as recommended by Greg).",
"from dipy.reconst.dti import fractional_anisotropy, color_fa\nfrom argparse import ArgumentParser\nfrom scipy import ndimage\nimport os\nimport re\nimport numpy as np\nimport nibabel as nb\nimport sys\nimport matplotlib\n\nmatplotlib.use('Agg') # very important above pyplot import\nimport matplotlib.pyplot as plt\n\nimport vtk\n\nfrom dipy.reconst.dti import from_lower_triangular\n\nimg = nb.load('../../../../../Desktop/result/dogsig1_gausig2.3/v100_ch0_tensorfsl_dogsig1_gausig2.3.nii')\n\ndata = img.get_data()\n\n# Output is the structure tensor generated from a lower triangular structure tensor (which data is)\noutput = from_lower_triangular(data)",
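Conceptually, from_lower_triangular expands the six unique values stored per voxel into a full symmetric 3x3 tensor. Here is a sketch of that expansion in plain NumPy; the element ordering shown (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz) is our understanding of dipy's lower-triangular convention, so verify it against your dipy version before relying on it:

```python
import numpy as np

def lower_triangular_to_tensor(d6):
    """Expand six lower-triangular values into a symmetric 3x3 tensor.

    Assumes the ordering (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz).
    """
    dxx, dxy, dyy, dxz, dyz, dzz = d6
    return np.array([[dxx, dxy, dxz],
                     [dxy, dyy, dyz],
                     [dxz, dyz, dzz]])

tensor = lower_triangular_to_tensor([1.0, 0.1, 2.0, 0.2, 0.3, 3.0])
print(tensor)
```

This is why the stored arrays have a trailing dimension of 6: a symmetric tensor only needs its six unique entries.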
"Subsampling:\nWe added this step because the calculation of RGB/eigenvalues/eigenvectors took much too long on the full file. Even still, with small sizes like 25x25, the last VTK rendering step took significant amounts of time. In the pipeline we'll have to find a more efficient way to compute these; we suspect we're missing something, since this step shouldn't take so long.",
"output_ds = output[4250:4300, 250:300, :, :, :]\n\nprint output.shape\nprint output_ds.shape\n\nFA = fractional_anisotropy(output_ds)\n\nFA = np.clip(FA, 0, 1)\n\nFA[np.isnan(FA)] = 0\n\nprint FA.shape\n\nfrom dipy.reconst.dti import decompose_tensor\n\nevalues, evectors = decompose_tensor(output_ds)\n\nprint evectors[..., 0, 0].shape\nprint evectors.shape[-2:]\n\nprint FA[:, :, :, 0].shape\n\n## To satisfy requirements for RGB\nRGB = color_fa(FA[:, :, :, 0], evectors)\n\nnb.save(nb.Nifti1Image(np.array(255 * RGB, 'uint8'), img.get_affine()), 'tensor_rgb_upper.nii.gz')\n\nprint('Computing tensor ellipsoids in a random part')\n\nfrom dipy.data import get_sphere\nsphere = get_sphere('symmetric724')\n\nfrom dipy.viz import fvtk\n\nren = fvtk.ren()\nevals = evalues[:, :, :]\nevecs = evectors[:, :, :]\n\nprint \"printing evals:\"\nprint evals\n\nprint \"printing evecs\"\nprint evecs\n\ncfa = RGB[:, :, :]\n\ncfa = cfa / cfa.max()\n\nprint \"printing cfa\"\nprint cfa\n\nfvtk.add(ren, fvtk.tensor(evals, evecs, cfa, sphere))\n\nfrom IPython.display import Image\ndef vtk_show(renderer, width=400, height=300):\n \"\"\"\n Takes vtkRenderer instance and returns an IPython Image with the rendering.\n \"\"\"\n renderWindow = vtk.vtkRenderWindow()\n renderWindow.SetOffScreenRendering(1)\n renderWindow.AddRenderer(renderer)\n renderWindow.SetSize(width, height)\n renderWindow.Render()\n \n windowToImageFilter = vtk.vtkWindowToImageFilter()\n windowToImageFilter.SetInput(renderWindow)\n windowToImageFilter.Update()\n \n writer = vtk.vtkPNGWriter()\n writer.SetWriteToMemory(1)\n writer.SetInputConnection(windowToImageFilter.GetOutputPort())\n writer.Write()\n data = str(buffer(writer.GetResult()))\n \n return Image(data)",
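For reference, the fractional_anisotropy call above computes, per voxel, FA = sqrt(1.5 * sum((lam - mean(lam))^2) / sum(lam^2)) from the three tensor eigenvalues. A small standalone sketch of that formula:

```python
import numpy as np

def fa_from_eigenvalues(lam):
    """Fractional anisotropy from a voxel's three eigenvalues."""
    lam = np.asarray(lam, dtype=float)
    num = ((lam - lam.mean()) ** 2).sum()
    den = (lam ** 2).sum()
    return np.sqrt(1.5 * num / den)

iso = fa_from_eigenvalues([1.0, 1.0, 1.0])    # isotropic voxel -> FA = 0
aniso = fa_from_eigenvalues([1.0, 0.0, 0.0])  # fully anisotropic voxel -> FA = 1
print(iso, aniso)
```

This is also why the pipeline clips FA to [0, 1]: the formula is bounded by those values, and any excursions come from numerical noise or degenerate tensors.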
"Results:",
"# x = 4250:4300, y = 250:300, z = : on Tony's computer (doesn't show anything)\n# Thus, all results were displayed after running on Albert's computer\nvtk_show(ren)",
"Rendered tensor ellipsoid views (images omitted from this export):\n\nx = [0, 25], y = [25, 50]: six views and one raw slice.\n\nx = [1000, 1025], y = [1025, 1050]: four views and one raw slice.\n\nx = [4025, 4050], y = [250, 300]: five views and one raw slice."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/statespace_structural_harvey_jaeger.ipynb
|
bsd-3-clause
|
[
"Detrending, Stylized Facts and the Business Cycle\nIn an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as \"structural time series models\") to derive stylized facts of the business cycle.\nTheir paper begins:\n\"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step\nin macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic\nproperties of the data and (2) present meaningful information.\"\n\nIn particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.\nstatsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import display, Latex",
"Unobserved Components\nThe unobserved components model available in statsmodels can be written as:\n$$\ny_t = \\underbrace{\\mu_{t}}_{\\text{trend}} + \\underbrace{\\gamma_{t}}_{\\text{seasonal}} + \\underbrace{c_{t}}_{\\text{cycle}} + \\sum_{j=1}^k \\underbrace{\\beta_j x_{jt}}_{\\text{explanatory}} + \\underbrace{\\varepsilon_t}_{\\text{irregular}}\n$$\nsee Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.\nTrend\nThe trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.\n$$\n\\begin{align}\n\\underbrace{\\mu_{t+1}}_{\\text{level}} & = \\mu_t + \\nu_t + \\eta_{t+1} \\qquad & \\eta_{t+1} \\sim N(0, \\sigma_\\eta^2) \\\\\n\\underbrace{\\nu_{t+1}}_{\\text{trend}} & = \\nu_t + \\zeta_{t+1} & \\zeta_{t+1} \\sim N(0, \\sigma_\\zeta^2)\n\\end{align}\n$$\nwhere the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.\nFor both elements (level and trend), we can consider models in which:\n\nThe element is included vs excluded (if the trend is included, there must also be a level included).\nThe element is deterministic vs stochastic (i.e. 
whether or not the variance of the error term is restricted to be zero)\n\nThe only additional parameters to be estimated via MLE are the variances of any included stochastic components.\nThis leads to the following specifications:\n| | Level | Trend | Stochastic Level | Stochastic Trend |\n|----------------------------------------------------------------------|-------|-------|------------------|------------------|\n| Constant | ✓ | | | |\n| Local Level <br /> (random walk) | ✓ | | ✓ | |\n| Deterministic trend | ✓ | ✓ | | |\n| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |\n| Local linear trend | ✓ | ✓ | ✓ | ✓ |\n| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |\nSeasonal\nThe seasonal component is written as:\n<span>$$\n\\gamma_t = - \\sum_{j=1}^{s-1} \\gamma_{t+1-j} + \\omega_t \\qquad \\omega_t \\sim N(0, \\sigma_\\omega^2)\n$$</span>\nThe periodicity (number of seasons) is s, and the defining characteristic is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.\nThe variants of this model are:\n\nThe periodicity s\nWhether or not to make the seasonal effects stochastic.\n\nIf the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).\nCycle\nThe cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. 
For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between \"1.5 and 12 years\" (see Durbin and Koopman).\nThe cycle is written as:\n<span>$$\n\\begin{align}\nc_{t+1} & = c_t \\cos \\lambda_c + c_t^* \\sin \\lambda_c + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\\\nc_{t+1}^* & = -c_t \\sin \\lambda_c + c_t^* \\cos \\lambda_c + \\tilde \\omega_t^* & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$</span>\nThe parameter $\\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).\nIrregular\nThe irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.\n$$\n\\varepsilon_t \\sim N(0, \\sigma_\\varepsilon^2)\n$$\nIn some cases, we may want to generalize the irregular component to allow for autoregressive effects:\n$$\n\\varepsilon_t = \\rho(L) \\varepsilon_{t-1} + \\epsilon_t, \\qquad \\epsilon_t \\sim N(0, \\sigma_\\epsilon^2)\n$$\nIn this case, the autoregressive parameters would also be estimated via MLE.\nRegression effects\nWe may want to allow for explanatory variables by including additional terms\n<span>$$\n\\sum_{j=1}^k \\beta_j x_{jt}\n$$</span>\nor for intervention effects by including\n<span>$$\n\\begin{align}\n\\delta w_t \\qquad \\text{where} \\qquad w_t & = 0, \\qquad t < \\tau, \\\\\n& = 1, \\qquad t \\ge \\tau\n\\end{align}\n$$</span>\nThese additional parameters could be estimated via MLE or by including them as components of the state space formulation.\nData\nFollowing Harvey and Jaeger, we will consider the following time series:\n\nUS real GNP, \"output\", (GNPC96)\nUS GNP implicit 
price deflator, \"prices\", (GNPDEF)\nUS monetary base, \"money\", (AMBSL)\n\nThe time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.\nAll data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.",
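Before fitting, it can help to see what a stochastic level actually looks like. The local level special case of the trend component above can be simulated in a few lines (a sketch only, with arbitrary variances; this is not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_eta, sigma_eps = 200, 0.1, 0.5

# Local level model: mu_{t+1} = mu_t + eta_{t+1},  y_t = mu_t + eps_t
mu = np.cumsum(rng.normal(0, sigma_eta, n))   # random-walk level
y = mu + rng.normal(0, sigma_eps, n)          # observed series

print(y.shape)  # (200,)
```

Plotting `y` against `mu` shows a slowly wandering "intercept" of the kind the stochastic level is designed to track.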
"# Datasets\nfrom pandas_datareader.data import DataReader\n\n# Get the raw data\nstart = '1948-01'\nend = '2008-01'\nus_gnp = DataReader('GNPC96', 'fred', start=start, end=end)\nus_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)\nus_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()\nrecessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]\n\n# Construct the dataframe\ndta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)\ndta.columns = ['US GNP','US Prices','US monetary base']\ndta.index.freq = dta.index.inferred_freq\ndates = dta.index._mpl_repr()",
"To get a sense of these three variables over the timeframe, we can plot them:",
"# Plot the data\nax = dta.plot(figsize=(13,3))\nylim = ax.get_ylim()\nax.xaxis.grid()\nax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);",
"Model\nSince the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:\n$$\ny_t = \\underbrace{\\mu_{t}}_{\\text{trend}} + \\underbrace{c_{t}}_{\\text{cycle}} + \\underbrace{\\varepsilon_t}_{\\text{irregular}}\n$$\nThe irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:\n\nLocal linear trend (the \"unrestricted\" model)\nSmooth trend (the \"restricted\" model, since we are forcing $\\sigma_\\eta = 0$)\n\nBelow, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. The other way is to use string names which map to various specifications.",
"# Model specifications\n\n# Unrestricted model, using string specification\nunrestricted_model = {\n 'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Unrestricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# local linear trend model with a stochastic damped cycle:\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }\n\n# The restricted model forces a smooth trend\nrestricted_model = {\n 'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Restricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# smooth trend model with a stochastic damped cycle. Notice\n# that the difference from the local linear trend model is that\n# `stochastic_level=False` here.\n# restricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }",
"We now fit the following models:\n\nOutput, unrestricted model\nPrices, unrestricted model\nPrices, restricted model\nMoney, unrestricted model\nMoney, restricted model",
"# Output\noutput_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)\noutput_res = output_mod.fit(method='powell', disp=False)\n\n# Prices\nprices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)\nprices_res = prices_mod.fit(method='powell', disp=False)\n\nprices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)\nprices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)\n\n# Money\nmoney_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)\nmoney_res = money_mod.fit(method='powell', disp=False)\n\nmoney_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)\nmoney_restricted_res = money_restricted_mod.fit(method='powell', disp=False)",
"Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.",
"print(output_res.summary())",
"For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.\nThe plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.",
"fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));",
"Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importance of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but differ in the particulars from, the values in their table.",
"# Create Table I\ntable_i = np.zeros((5,6))\n\nstart = dta.index[0]\nend = dta.index[-1]\ntime_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)\nmodels = [\n ('US GNP', time_range, 'None'),\n ('US Prices', time_range, 'None'),\n ('US Prices', time_range, r'$\\sigma_\\eta^2 = 0$'),\n ('US monetary base', time_range, 'None'),\n ('US monetary base', time_range, r'$\\sigma_\\eta^2 = 0$'),\n]\nindex = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])\nparameter_symbols = [\n r'$\\sigma_\\zeta^2$', r'$\\sigma_\\eta^2$', r'$\\sigma_\\kappa^2$', r'$\\rho$',\n r'$2 \\pi / \\lambda_c$', r'$\\sigma_\\varepsilon^2$',\n]\n\ni = 0\nfor res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):\n if res.model.stochastic_level:\n (sigma_irregular, sigma_level, sigma_trend,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n else:\n (sigma_irregular, sigma_level,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n sigma_trend = np.nan\n period_cycle = 2 * np.pi / frequency_cycle\n \n table_i[i, :] = [\n sigma_level*1e7, sigma_trend*1e7,\n sigma_cycle*1e7, damping_cycle, period_cycle,\n sigma_irregular*1e7\n ]\n i += 1\n \npd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')\ntable_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)\ntable_i"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ML4DS/ML4all
|
R1.Intro_Regression/regression_intro_student.ipynb
|
mit
|
[
"Introduction to Regression.\nAuthor: Jerónimo Arenas García (jarenas@tsc.uc3m.es)\n Jesús Cid Sueiro (jcid@tsc.uc3m.es)\n\nNotebook version: 1.1 (Sep 12, 2017)\n\nChanges: v.1.0 - First version. Extracted from regression_intro_knn v.1.0.\n v.1.1 - Compatibility with python 2 and python 3",
"# Import some libraries that will be necessary for working with data and displaying plots\n\n# To visualize plots in the notebook\n%matplotlib inline \n\nimport numpy as np\nimport scipy.io # To read matlab files\nimport pandas as pd # To read data tables from csv files\n\n# For plots and graphical results\nimport matplotlib \nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D \nimport pylab\n\n# For the student tests (only for python 2)\nimport sys\nif sys.version_info.major==2:\n from test_helper import Test\n\n# That's default image size for this interactive session\npylab.rcParams['figure.figsize'] = 9, 6 ",
"1. The regression problem\nThe goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_0, X_1, \\ldots, X_{m-1}$ (that we will collect in a single vector $\\bf X$).\nRegression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.\n<img src=\"figs/block_diagram.png\" width=400>\nThe only information available to estimate the relation between the inputs and the target is a dataset $\\mathcal D$ containing several observations of all variables.\n$$\\mathcal{D} = \\{{\\bf x}_{k}, s_{k}\\}_{k=0}^{K-1}$$\nThe dataset $\\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\\bf x}$, computes an output $\\hat{s} = f({\\bf x})$ that is a good prediction of the true value of the target, $s$.\n<img src=\"figs/predictor.png\" width=300>\n2. Examples of regression problems.\nThe <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems. \n\n\n<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken from these suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).\n\n\n<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>.\n\n\nWe can load these datasets as follows:",
"from sklearn import datasets\n\n# Load the dataset. Select it by uncommenting the appropriate line\nD_all = datasets.load_boston()\n#D_all = datasets.load_diabetes()\n\n# Extract data and data parameters.\nX = D_all.data # Input data matrix (one observation per row)\nS = D_all.target # Target variables\nn_samples = X.shape[0] # Number of observations\nn_vars = X.shape[1] # Number of input variables",
"This dataset contains",
"print(n_samples)",
"observations of the target variable and",
"print(n_vars)",
"input variables.\n3. Scatter plots\n3.1. 2D scatter plots\nWhen the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>.\nPython methods plot and scatter from the matplotlib package can be used for these graphical representations.",
"# Compute the subplot grid layout\nnrows = 4\nncols = 1 + (X.shape[1]-1)//nrows # integer division, so plt.subplot gets an int\n\n# Some adjustment for the subplot.\npylab.subplots_adjust(hspace=0.2)\n\n# Plot all variables\nfor idx in range(X.shape[1]):\n ax = plt.subplot(nrows,ncols,idx+1)\n ax.scatter(X[:,idx], S) # <-- This is the key command\n ax.get_xaxis().set_ticks([])\n ax.get_yaxis().set_ticks([])\n plt.ylabel('Target')\n ",
"3.2. 3D Plots\nWith the addition of a third coordinate, plot and scatter can be used for 3D plotting.\nExercise 1:\nSelect the diabetes dataset. Visualize the target versus components 2 and 4. (You can get more info about the <a href=http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter>scatter</a> command and an <a href=http://matplotlib.org/examples/mplot3d/scatter3d_demo.html>example of use</a> in the <a href=http://matplotlib.org/index.html> matplotlib</a> documentation)",
"# <SOL>\n# </SOL>",
"4. Evaluating a regression task\nIn order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\\hat{s})$. Two common losses are\n\nSquare error: $l(s, \\hat{s}) = (s - \\hat{s})^2$\nAbsolute error: $l(s, \\hat{s}) = |s - \\hat{s}|$\n\nNote that both the square and absolute errors are functions of the estimation error $e = s-{\\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. In such a case, the following cost would better fit our needs: $l(s,{\\hat s}) = s^2 \\left(s-{\\hat s}\\right)^2$.",
"# In this section we will plot together the square and absolute errors\ngrid = np.linspace(-3,3,num=100)\nplt.plot(grid, grid**2, 'b-', label='Square error')\nplt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')\nplt.xlabel('Error')\nplt.ylabel('Cost')\nplt.legend(loc='best')\nplt.show()",
"The overall prediction performance is computed as the average of the loss computed over a set of samples:\n$${\\bar R} = \\frac{1}{K}\\sum_{k=0}^{K-1} l\\left(s_k, \\hat{s}_k\\right)$$\nExercise 2:\nThe dataset in file 'datasets/x01.csv', taken from <a href=\"http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt\">here</a> records the average weight of the brain and body for a number of mammal species.\n* Represent a scatter plot of the target variable versus the one-dimensional input.\n* Plot, over the same plot, the prediction function given by $S = 1.2 X$\n* Compute the average square error for the given dataset.",
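As a toy illustration of the average loss $\bar R$ (synthetic numbers only, unrelated to the mammals dataset of the exercise):

```python
import numpy as np

s = np.array([1.0, 2.0, 3.0])        # true targets
s_hat = np.array([1.5, 2.0, 2.0])    # predictions

# Average of the loss over the K = 3 samples
R_square = np.mean((s - s_hat) ** 2)   # average square error
R_abs = np.mean(np.abs(s - s_hat))     # average absolute error
print(R_square, R_abs)  # ~0.4167 and 0.5
```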
"# Load dataset in arrays X and S\ndf = pd.read_csv('datasets/x01.csv', sep=',', header=None)\nX = df.values[:,0]\nS = df.values[:,1]\n\n# <SOL>\n# </SOL>\n\nif sys.version_info.major==2:\n Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error')\nelse:\n np.testing.assert_almost_equal(R, 153781.943889, decimal=4)\n print(\"Test passed\")",
"4.1. Training and test data\nThe major goal of the regression problem is that the predictor makes good predictions for arbitrary new inputs, not taken from the dataset used by the regression algorithm. \nThus, in order to evaluate the prediction accuracy of some regression algorithm, we need some data, not used during the predictor design, to test the performance of the predictor under new data. To do so, the original dataset is usually divided into (at least) two disjoint sets:\n\nTraining set, $\\cal{D}_{\\text{train}}$: Used by the regression algorithm to determine predictor $f$.\nTest set, $\\cal{D}_{\\text{test}}$: Used to evaluate the performance of the regression algorithm.\n\nA good regression algorithm uses $\\cal{D}_{\\text{train}}$ to obtain a predictor with small average loss based on $\\cal{D}_{\\text{test}}$\n$$\n{\\bar R}_{\\text{test}} = \\frac{1}{K_{\\text{test}}} \n\\sum_{ ({\\bf x},s) \\in \\mathcal{D}_{\\text{test}}} l(s, f({\\bf x}))\n$$\nwhere $K_{\\text{test}}$ is the size of the test set.\n5. Parametric and non-parametric regression models\nGenerally speaking, we can distinguish two approaches when designing a regression model:\n\nParametric approach: In this case, the estimation function is given <i>a priori</i> a parametric form, and the goal of the design is to find the most appropriate values of the parameters according to a certain goal\n\nFor instance, we could assume a linear expression\n $${\\hat s} = f({\\bf x}) = {\\bf w}^\\top {\\bf x}$$\n and adjust the parameter vector in order to minimize the average of the quadratic error over the training data. This is known as least-squares regression, and we will study it in a future session.\n\nNon-parametric approach: In this case, the analytical shape of the regression model is not assumed <i>a priori</i>."
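The train/test division described above can be sketched with a random split (the 80/20 fraction below is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)   # toy inputs, one observation per row
S = np.arange(10)                  # toy targets

# Shuffle the indices, then cut into disjoint train and test parts
perm = rng.permutation(len(S))
n_train = int(0.8 * len(S))
train_idx, test_idx = perm[:n_train], perm[n_train:]

X_train, S_train = X[train_idx], S[train_idx]
X_test, S_test = X[test_idx], S[test_idx]
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```

The key point is that `test_idx` never overlaps `train_idx`, so the test average loss estimates performance on genuinely unseen data.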
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
graphistry/pygraphistry
|
demos/demos_by_use_case/logs/aws_vpc_flow_cloudwatch/vpc_flow.ipynb
|
bsd-3-clause
|
[
"AWS CloudWatch VPC Flow Logs <> Graphistry\nAnalyze CloudWatch logs with Graphistry, for example using VPC flow logs to map an account\nThis example directly uses the AWS CLI for CloudWatch API access. You can also work from S3 or systems like Athena.\nInstalls & Configure\nSet aws_access_key_id and aws_secret_access_key below, or pull them from your environment",
"!pip install graphistry -q\n!pip install awscli -q\n\n!aws configure set region us-west-2\n!aws configure set aws_access_key_id \"FILL_ME_IN\"\n!aws configure set aws_secret_access_key \"FILL_ME_IN\"\n\nimport pandas as pd\nimport json\nimport graphistry\n\n# To specify Graphistry account & server, use:\n# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')\n# For more options, see https://github.com/graphistry/pygraphistry#configure\n",
"Record logs\nIf you do not already have logs, you can record VPC flow logs from your EC2 console:\n * Services -> EC2 -> Network Interfaces -> select interface(s) -> Action -> create flow log\n * Send to cloudwatch; use default settings for IAM and elsewhere\n * When enough data is available, stop logging\nDownload & summarize logs\n\nPick a log group from those available\nFetch: See AWS docs on filter-log-events\nLoad into a dataframe\nCompute summary stats",
"!aws logs describe-log-groups\n\n!aws logs filter-log-events --log-group-name VPCFlowDemo > data.json\n!ls -al data.json\n\nwith open('data.json', 'r') as f:\n data = json.load(f)\ndf = pd.DataFrame([x['message'].split(\" \") for x in data['events']])\ndf.columns = cols = ['version', 'accountid', 'interfaceid', 'src_ip', 'dest_ip', 'src_port', 'dest_port', 'protocol', 'packets', 'bytes', 'time_start', 'time_end', 'action', 'status']\n\nprint('# rows', len(df))\ndf.sample(3)\n\n# Int->Float for precision errors\ndf2 = df.copy()\nfor c in ['packets', 'bytes']:\n df2[c] = df2[c].astype(float)\n\nsummary_df = df2\\\n .groupby(['src_ip', 'dest_ip', 'interfaceid', 'dest_port', 'protocol', 'action', 'status'])\\\n .agg({\n 'time_start': ['min', 'max'],\n 'time_end': ['min', 'max'],\n 'packets': ['min', 'max', 'sum', 'count'],\n 'bytes': ['min', 'max', 'sum', 'count']\n }).reset_index()\nsummary_df.columns = [(\" \".join(x)).strip().replace(\" \", \"_\") for x in list(summary_df.columns)]\nprint('# rows', len(summary_df))\nsummary_df.sample(3)",
"Plot",
"hg = graphistry.hypergraph(\n summary_df,\n entity_types=['src_ip', 'dest_ip'], #'dest_port', 'interfaceid', 'action', ...\n direct=True)\nhg['graph'].bind(edge_title='bytes_sum').plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DCPROGS/HJCFIT
|
exploration/CH82.ipynb
|
gpl-3.0
|
[
"CH82 Model\nThe following tries to reproduce Fig 8 from Hawkes, Jalali, Colquhoun (1992).\nFirst we create the $Q$-matrix for this particular model. Please note that the units are different from other publications.",
"%matplotlib notebook\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom dcprogs.likelihood import QMatrix\n\ntau = 1e-4\nqmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ], \n [ 2./3., -1502./3., 0, 500, 0 ], \n [ 15, 0, -2065, 50, 2000 ], \n [ 0, 15000, 4000, -19000, 0 ], \n [ 0, 0, 10, 0, -10 ] ], 2)\nqmatrix.matrix /= 1000.0",
"We first reproduce the top two panels showing $\\mathrm{det} W(s)$ for open and shut times.\nThese quantities can be accessed using dcprogs.likelihood.DeterminantEq. The plots are done using a standard plotting function from the dcprogs.likelihood package as well.",
"from dcprogs.likelihood import plot_roots, DeterminantEq\n\nfig = plt.figure()\nax = fig.add_subplot(1, 2, 1)\nplot_roots(DeterminantEq(qmatrix, 0.2), ax=ax)\nax.set_xlabel('Laplace $s$')\nax.set_ylabel('$\\\\mathrm{det} ^{A}W(s)$')\n\nax = fig.add_subplot(1, 2, 2)\nplot_roots(DeterminantEq(qmatrix, 0.2).transpose(), ax=ax)\nax.set_xlabel('Laplace $s$')\nax.set_ylabel('$\\\\mathrm{det} ^{F}W(s)$')\nax.yaxis.tick_right()\nax.yaxis.set_label_position(\"right\")\n\nfig.tight_layout()",
"Then we want to plot panels c and d showing the excess shut and open-time probability densities $(\\tau = 0.2)$. To do this we need to access each exponential that makes up the approximate survivor function. We could use:",
"from dcprogs.likelihood import ApproxSurvivor\napprox = ApproxSurvivor(qmatrix, tau)\ncomponents = approx.af_components\nprint(components[:1])",
"The list components above contains 2-tuples with the weight (as a matrix) and the exponent (or root) for each exponential component in $^{A}R_{\\mathrm{approx}}(t)$. We could then create Python functions pdf(t) for each exponential component, as is done below for one of the roots:",
"from dcprogs.likelihood import MissedEventsG\n\nweight, root = components[1]\neG = MissedEventsG(qmatrix, tau)\n# Note: the sum below is equivalent to a scalar product with u_F\ncoefficient = sum(np.dot(eG.initial_occupancies, np.dot(weight, eG.af_factor)))\npdf = lambda t: coefficient * np.exp(t * root)",
"The initial occupancies, as well as the $Q_{AF}e^{-Q_{FF}\\tau}$ factor, are obtained directly from the object implementing the missed event likelihood $^{e}G(t)$.\nHowever, there is a convenience function that does all the above in the package. Since it is generally of little use, it is not currently exported to the dcprogs.likelihood namespace. So we create below a plotting function that uses it.",
"from dcprogs.likelihood._methods import exponential_pdfs\n\ndef plot_exponentials(qmatrix, tau, x=None, ax=None, nmax=2, shut=False):\n from dcprogs.likelihood import missed_events_pdf\n\n if x is None: x = np.arange(0, 5*tau, tau/10)\n pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)\n graphb = [x, pdf(x+tau), '-k']\n functions = exponential_pdfs(qmatrix, tau, shut=shut)\n plots = ['.r', '.b', '.g']\n together = None\n for f, p in zip(functions[::-1], plots):\n if together is None: together = f(x+tau)\n else: together = together + f(x+tau)\n graphb.extend([x, together, p])\n\n if ax is None: plt.plot(*graphb)\n else: ax.plot(*graphb)\n\nfig = plt.figure()\nax = fig.add_subplot(1, 2, 1)\nax.set_xlabel('time $t$ (ms)')\nax.set_ylabel('Excess open-time probability density $f_{\\\\bar{\\\\tau}=0.2}(t)$')\nplot_exponentials(qmatrix, 0.2, shut=False, ax=ax)\n\nax = fig.add_subplot(1, 2, 2)\nplot_exponentials(qmatrix, 0.2, shut=True, ax=ax)\nax.set_xlabel('time $t$ (ms)')\nax.set_ylabel('Excess shut-time probability density $f_{\\\\bar{\\\\tau}=0.2}(t)$')\nax.yaxis.tick_right()\nax.yaxis.set_label_position(\"right\")\nfig.tight_layout()",
"Finally, we create the last plot (e), and throw in an (f) for good measure.",
"fig = plt.figure()\nax = fig.add_subplot(1, 2, 1)\nax.set_xlabel('time $t$ (ms)')\nax.set_ylabel('Excess open-time probability density $f_{\\\\bar{\\\\tau}=0.5}(t)$')\nplot_exponentials(qmatrix, 0.5, shut=False, ax=ax)\n\nax = fig.add_subplot(1, 2, 2)\nplot_exponentials(qmatrix, 0.5, shut=True, ax=ax)\nax.set_xlabel('time $t$ (ms)')\nax.set_ylabel('Excess shut-time probability density $f_{\\\\bar{\\\\tau}=0.5}(t)$')\nax.yaxis.tick_right()\nax.yaxis.set_label_position(\"right\")\n\nfig.tight_layout()\n\nfrom dcprogs.likelihood import QMatrix, MissedEventsG\n\ntau = 1e-4\nqmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ], \n [ 2./3., -1502./3., 0, 500, 0 ], \n [ 15, 0, -2065, 50, 2000 ], \n [ 0, 15000, 4000, -19000, 0 ], \n [ 0, 0, 10, 0, -10 ] ], 2)\neG = MissedEventsG(qmatrix, tau, 2, 1e-8, 1e-8)\nmeG = MissedEventsG(qmatrix, tau)\nt = 3.5* tau\n\nprint(eG.initial_CHS_occupancies(t) - meG.initial_CHS_occupancies(t))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_stats_cluster_spatio_temporal_2samp.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"2 samples permutation test on source data with spatio-temporal clustering\nTests if the source space data are significantly different between\n2 groups of subjects (simulated here using one subject's data).\nThe multiple comparisons problem is addressed with a cluster-level\npermutation test across space and time.",
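The permutation-test principle behind this example can be illustrated on toy one-dimensional data. This sketch shows only the generic two-sample permutation idea, not the spatio-temporal clustering performed below:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.5, 1, 30)   # toy "group 1" values
b = rng.normal(0.0, 1, 30)   # toy "group 2" values

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

# Re-shuffle group labels many times and see how often the shuffled
# difference is at least as extreme as the observed one
n_perm = 1000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:30].mean() - perm[30:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = (count + 1) / (n_perm + 1)
print(p_value)
```

The spatio-temporal version below applies the same logic, but the test statistic is computed over clusters of adjacent vertices and time points, which is what controls for multiple comparisons.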
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Eric Larson <larson.eric.d@gmail.com>\n# License: BSD (3-clause)\n\nimport os.path as op\n\nimport numpy as np\nfrom scipy import stats as stats\n\nimport mne\nfrom mne import spatial_src_connectivity\nfrom mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nstc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'\nsubjects_dir = data_path + '/subjects'\nsrc_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'\n\n# Load stc to in common cortical space (fsaverage)\nstc = mne.read_source_estimate(stc_fname)\nstc.resample(50, npad='auto')\n\n# Read the source space we are morphing to\nsrc = mne.read_source_spaces(src_fname)\nfsave_vertices = [s['vertno'] for s in src]\nstc = mne.morph_data('sample', 'fsaverage', stc, grade=fsave_vertices,\n smooth=20, subjects_dir=subjects_dir)\nn_vertices_fsave, n_times = stc.data.shape\ntstep = stc.tstep\n\nn_subjects1, n_subjects2 = 7, 9\nprint('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))\n\n# Let's make sure our results replicate, so set the seed.\nnp.random.seed(0)\nX1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10\nX2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10\nX1[:, :, :] += stc.data[:, :, np.newaxis]\n# make the activity bigger for the second set of subjects\nX2[:, :, :] += 3 * stc.data[:, :, np.newaxis]\n\n# We want to compare the overall activity levels for each subject\nX1 = np.abs(X1) # only magnitude\nX2 = np.abs(X2) # only magnitude",
"Compute statistic\nTo use an algorithm optimized for spatio-temporal clustering, we\njust pass the spatial connectivity matrix (instead of spatio-temporal)",
"print('Computing connectivity.')\nconnectivity = spatial_src_connectivity(src)\n\n# Note that X needs to be a list of multi-dimensional arrays of shape\n# samples (subjects_k) x time x space, so we permute dimensions\nX1 = np.transpose(X1, [2, 1, 0])\nX2 = np.transpose(X2, [2, 1, 0])\nX = [X1, X2]\n\n# Now let's actually do the clustering. This can take a long time...\n# Here we set the threshold quite high to reduce computation.\np_threshold = 0.0001\nf_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,\n                                        n_subjects1 - 1, n_subjects2 - 1)\nprint('Clustering.')\nT_obs, clusters, cluster_p_values, H0 = clu =\\\n    spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,\n                                 threshold=f_threshold)\n# Now select the clusters that are sig. at p < 0.05 (note that this value\n# is multiple-comparisons corrected).\ngood_cluster_inds = np.where(cluster_p_values < 0.05)[0]",
"Visualize the clusters",
"print('Visualizing clusters.')\n\n# Now let's build a convenient representation of each cluster, where each\n# cluster becomes a \"time point\" in the SourceEstimate\nfsave_vertices = [np.arange(10242), np.arange(10242)]\nstc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,\n vertices=fsave_vertices,\n subject='fsaverage')\n\n# Let's actually plot the first \"time point\" in the SourceEstimate, which\n# shows all the clusters, weighted by duration\nsubjects_dir = op.join(data_path, 'subjects')\n# blue blobs are for condition A != condition B\nbrain = stc_all_cluster_vis.plot('fsaverage', hemi='both', colormap='mne',\n views='lateral', subjects_dir=subjects_dir,\n time_label='Duration significant (ms)')\nbrain.save_image('clusters.png')"
] |
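The cluster-forming threshold in the example above comes from the F distribution. A minimal sketch of that single step in plain SciPy (the group sizes and p-value are taken from the example; everything else is just a round-trip check, not part of the MNE pipeline):

```python
from scipy import stats

# Group sizes and cluster-forming p-value, as in the example above
n_subjects1, n_subjects2 = 7, 9
p_threshold = 0.0001

# F value whose upper-tail probability equals p_threshold / 2
f_threshold = stats.f.ppf(1.0 - p_threshold / 2.0,
                          n_subjects1 - 1, n_subjects2 - 1)

# Round-trip check: the survival function recovers p_threshold / 2
print(f_threshold, stats.f.sf(f_threshold, n_subjects1 - 1, n_subjects2 - 1))
```

Any F value above this threshold is considered for cluster formation; the actual significance comes from the permutation distribution, not from this parametric cutoff.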
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_dics_source_power.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute source power using a DICS beamformer\nCompute a Dynamic Imaging of Coherent Sources (DICS) filter from single-trial\nactivity to estimate source power for two frequencies of interest.\nThe original reference for DICS is:\nGross et al. Dynamic imaging of coherent sources: Studying neural interactions\nin the human brain. PNAS (2001) vol. 98 (2) pp. 694-699",
"# Author: Roman Goj <roman.goj@gmail.com>\n# Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_epochs\nfrom mne.beamformer import dics_source_power\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'",
"Read raw data",
"raw = mne.io.read_raw_fif(raw_fname)\nraw.info['bads'] = ['MEG 2443']  # 1 bad MEG channel\n\n# Set picks\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,\n                       stim=False, exclude='bads')\n\n# Read epochs\nevent_id, tmin, tmax = 1, -0.2, 0.5\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n                    picks=picks, baseline=(None, 0), preload=True,\n                    reject=dict(grad=4000e-13, mag=4e-12))\nevoked = epochs.average()\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Computing the data and noise cross-spectral density matrices\n# The time-frequency window was chosen on the basis of spectrograms from\n# example time_frequency/plot_time_frequency.py\n# As fsum is False, csd_epochs returns a list of CrossSpectralDensity\n# instances that can then be passed to dics_source_power\ndata_csds = csd_epochs(epochs, mode='multitaper', tmin=0.04, tmax=0.15,\n                       fmin=15, fmax=30, fsum=False)\nnoise_csds = csd_epochs(epochs, mode='multitaper', tmin=-0.11,\n                        tmax=-0.001, fmin=15, fmax=30, fsum=False)\n\n# Compute DICS spatial filter and estimate source power\nstc = dics_source_power(epochs.info, forward, noise_csds, data_csds)\n\nclim = dict(kind='value', lims=[1.6, 1.9, 2.2])\nfor i, csd in enumerate(data_csds):\n    message = 'DICS source power at %0.1f Hz' % csd.frequencies[0]\n    brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,\n                     time_label=message, figure=i, clim=clim)\n    brain.set_data_time_index(i)\n    brain.show_view('lateral')\n    # Uncomment line below to save images\n    # brain.save_image('DICS_source_power_freq_%d.png' % csd.frequencies[0])"
] |
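The core quantity in the DICS pipeline above is the cross-spectral density between channels. A toy sketch of a single-window cross-spectrum between two simulated channels (the 20 Hz rhythm, phase lag, and noise level are made up; `csd_epochs` does this properly with multitapers over many epochs):

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq, n_times = 200.0, 400
t = np.arange(n_times) / sfreq

# Two channels sharing a 20 Hz rhythm with a phase lag, plus noise
x = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(n_times)
y = np.sin(2 * np.pi * 20 * t + 0.5) + 0.1 * rng.standard_normal(n_times)

# Cross-spectrum: FFT of one channel times the conjugate FFT of the other
X, Y = np.fft.rfft(x), np.fft.rfft(y)
csd = X * np.conj(Y)
freqs = np.fft.rfftfreq(n_times, 1.0 / sfreq)

peak = freqs[np.argmax(np.abs(csd))]
print(peak)  # the shared 20 Hz rhythm dominates the cross-spectrum
```

The magnitude of `csd` peaks at the shared frequency; its phase encodes the lag between the two channels.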
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
StingraySoftware/notebooks
|
DynamicalPowerspectrum/DynamicalPowerspectrum_tutorial_[real_data].ipynb
|
mit
|
[
"Dynamical Power Spectra (on real data)",
"%matplotlib inline\n\n# load auxiliary libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy.io import fits\n\n# import stingray\nimport stingray\n\nplt.style.use('seaborn-talk')",
"It all starts with a light curve...\nOpen the event file with astropy.io.fits",
"f = fits.open('emr_cleaned.fits')",
"The time resolution is stored in the header of the first extension under the keyword TIMEDEL",
"dt = f[1].header['TIMEDEL']",
"The column TIME of the first extension stores the time of each event",
"toa = f[1].data['Time']",
"Let's create a Lightcurve from the events' times of arrival with a given time resolution",
"lc = stingray.Lightcurve.make_lightcurve(toa=toa, dt=dt)\n\nlc.plot()",
"DynamicalPowerspectrum\nLet's create a dynamical powerspectrum with a segment size of 16 s and a \"leahy\" normalization of the powers",
"dynspec = stingray.DynamicalPowerspectrum(lc=lc, segment_size=16, norm='leahy')",
"The dyn_ps attribute stores the power matrix, each column corresponds to the powerspectrum of each segment of the light curve",
"dynspec.dyn_ps",
"To plot the DynamicalPowerspectrum matrix, we use the attributes time and freq to set the extent of the image axes. Have a look at the documentation of matplotlib's imshow().",
"extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)\n\nplt.imshow(dynspec.dyn_ps, origin=\"lower\", aspect=\"auto\", vmin=1.98, vmax=3.0,\n           interpolation=\"none\", extent=extent)\nplt.colorbar()\nplt.ylim(700, 850)\n\nprint(\"The dynamical powerspectrum has {} frequency bins and {} time bins\".format(len(dynspec.freq), len(dynspec.time)))",
"Rebinning in Frequency",
"print(\"The current frequency resolution is {}\".format(dynspec.df))",
"Let's rebin to a frequency resolution of 2 Hz and using the average of the power",
"dynspec.rebin_frequency(df_new=2.0, method=\"average\")\n\nprint(\"The new frequency resolution is {}\".format(dynspec.df))",
"Let's see how the Dynamical Powerspectrum looks now",
"extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)\nplt.imshow(dynspec.dyn_ps, origin=\"lower\", aspect=\"auto\", vmin=1.98, vmax=3.0,\n interpolation=\"none\", extent=extent)\nplt.colorbar()\nplt.ylim(500, 1000)\n\nextent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)\nplt.imshow(dynspec.dyn_ps, origin=\"lower\", aspect=\"auto\", vmin=2.0, vmax=3.0,\n interpolation=\"none\", extent=extent)\nplt.colorbar()\nplt.ylim(700,850)",
"Rebin time\nLet's try to improve the visualization by rebinning our matrix along the time axis",
"print(\"The current time resolution is {}\".format(dynspec.dt))",
"Let's rebin to a time resolution of 64 s",
"dynspec.rebin_time(dt_new=64.0, method=\"average\")\n\nprint(\"The new time resolution is {}\".format(dynspec.dt))\n\nextent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)\nplt.imshow(dynspec.dyn_ps, origin=\"lower\", aspect=\"auto\", vmin=2.0, vmax=3.0,\n interpolation=\"none\", extent=extent)\nplt.colorbar()\nplt.ylim(700,850)",
"Trace maximum\nLet's use the method trace_maximum() to find the index of the maximum of each powerspectrum in a certain frequency range, for example, between 755 and 782 Hz.",
"tracing = dynspec.trace_maximum(min_freq=755, max_freq=782)",
"This is what the trace function looks like",
"plt.plot(dynspec.time, dynspec.freq[tracing], color='red', alpha=1)\nplt.show()",
"Let's plot it on top of the dynamic spectrum",
"extent = min(dynspec.time), max(dynspec.time), min(dynspec.freq), max(dynspec.freq)\nplt.imshow(dynspec.dyn_ps, origin=\"lower\", aspect=\"auto\", vmin=2.0, vmax=3.0,\n interpolation=\"none\", extent=extent, alpha=0.7)\nplt.colorbar()\nplt.ylim(740,800)\nplt.plot(dynspec.time, dynspec.freq[tracing], color='red', lw=3, alpha=1)\nplt.show()",
"The spike at 400 Hz is probably a statistical fluctuation; tracing by the maximum power can be dangerous!\nWe will implement better methods in the future, stay tuned ;)"
] |
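The dynamical power spectrum built above with Stingray can be sketched in plain NumPy: chop the light curve into equal segments and take one periodogram per segment. This toy version uses made-up Poisson counts and skips Stingray's Leahy normalization (and its rows are time bins, whereas `dyn_ps` stores one column per segment):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, seg_len, n_seg = 0.01, 256, 8

# Hypothetical light curve: Poisson counts per time bin
counts = rng.poisson(100, size=seg_len * n_seg).astype(float)

# One periodogram per segment -> rows are time bins, columns frequency bins
segments = counts.reshape(n_seg, seg_len)
dyn_ps = np.abs(np.fft.rfft(segments, axis=1)) ** 2
freq = np.fft.rfftfreq(seg_len, dt)

print(dyn_ps.shape)   # (8, 129)
print(freq[-1])       # Nyquist frequency: 50.0 Hz
```

The trade-off the notebook's rebinning addresses is visible here: longer segments give finer frequency resolution but coarser time resolution.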
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
smorton2/think-stats
|
code/chap10ex.ipynb
|
gpl-3.0
|
[
"Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport random\n\nimport thinkstats2\nimport thinkplot",
"Least squares\nOne more time, let's load up the NSFG data.",
"import first\nlive, firsts, others = first.MakeFrames()\nlive = live.dropna(subset=['agepreg', 'totalwgt_lb'])\nages = live.agepreg\nweights = live.totalwgt_lb",
"The following function computes the intercept and slope of the least squares fit.",
"from thinkstats2 import Mean, MeanVar, Var, Std, Cov\n\ndef LeastSquares(xs, ys):\n meanx, varx = MeanVar(xs)\n meany = Mean(ys)\n\n slope = Cov(xs, ys, meanx, meany) / varx\n inter = meany - slope * meanx\n\n return inter, slope",
"Here's the least squares fit to birth weight as a function of mother's age.",
"inter, slope = LeastSquares(ages, weights)\ninter, slope",
"The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.",
"inter + slope * 25",
"And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).",
"slope * 10",
"The following function evaluates the fitted line at the given xs.",
"def FitLine(xs, inter, slope):\n fit_xs = np.sort(xs)\n fit_ys = inter + slope * fit_xs\n return fit_xs, fit_ys",
"And here's an example.",
"fit_xs, fit_ys = FitLine(ages, inter, slope)",
"Here's a scatterplot of the data with the fitted line.",
"thinkplot.Scatter(ages, weights, color='blue', alpha=0.1, s=10)\nthinkplot.Plot(fit_xs, fit_ys, color='white', linewidth=3)\nthinkplot.Plot(fit_xs, fit_ys, color='red', linewidth=2)\nthinkplot.Config(xlabel=\"Mother's age (years)\",\n ylabel='Birth weight (lbs)',\n axis=[10, 45, 0, 15],\n legend=False)",
"Residuals\nThe following function computes the residuals.",
"def Residuals(xs, ys, inter, slope):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n res = ys - (inter + slope * xs)\n return res",
"Now we can add the residuals as a column in the DataFrame.",
"live['residual'] = Residuals(ages, weights, inter, slope)",
"To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.\nFirst I'll make the groups and compute the average age in each group.",
"bins = np.arange(10, 48, 3)\nindices = np.digitize(live.agepreg, bins)\ngroups = live.groupby(indices)\n\nage_means = [group.agepreg.mean() for _, group in groups][1:-1]\nage_means",
"Next I'll compute the CDF of the residuals in each group.",
"cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]",
"The following function plots percentiles of the residuals against the average age in each group.",
"def PlotPercentiles(age_means, cdfs):\n thinkplot.PrePlot(3)\n for percent in [75, 50, 25]:\n weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]\n label = '%dth' % percent\n thinkplot.Plot(age_means, weight_percentiles, label=label)",
"The following figure shows the 25th, 50th, and 75th percentiles.\nCurvature in the residuals suggests a non-linear relationship.",
"PlotPercentiles(age_means, cdfs)\n\nthinkplot.Config(xlabel=\"Mother's age (years)\",\n ylabel='Residual (lbs)',\n xlim=[10, 45])",
"Sampling distribution\nTo estimate the sampling distribution of inter and slope, I'll use resampling.",
"def SampleRows(df, nrows, replace=False):\n    \"\"\"Choose a sample of rows from a DataFrame.\n\n    df: DataFrame\n    nrows: number of rows\n    replace: whether to sample with replacement\n\n    returns: DataFrame\n    \"\"\"\n    indices = np.random.choice(df.index, nrows, replace=replace)\n    sample = df.loc[indices]\n    return sample\n\ndef ResampleRows(df):\n    \"\"\"Resamples rows from a DataFrame.\n\n    df: DataFrame\n\n    returns: DataFrame\n    \"\"\"\n    return SampleRows(df, len(df), replace=True)",
"The following function resamples the given dataframe and returns lists of estimates for inter and slope.",
"def SamplingDistributions(live, iters=101):\n t = []\n for _ in range(iters):\n sample = ResampleRows(live)\n ages = sample.agepreg\n weights = sample.totalwgt_lb\n estimates = LeastSquares(ages, weights)\n t.append(estimates)\n\n inters, slopes = zip(*t)\n return inters, slopes",
"Here's an example.",
"inters, slopes = SamplingDistributions(live, iters=1001)",
"The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.",
"def Summarize(estimates, actual=None):\n mean = Mean(estimates)\n stderr = Std(estimates, mu=actual)\n cdf = thinkstats2.Cdf(estimates)\n ci = cdf.ConfidenceInterval(90)\n print('mean, SE, CI', mean, stderr, ci)",
"Here's the summary for inter.",
"Summarize(inters)",
"And for slope.",
"Summarize(slopes)",
"Exercise: Use ResampleRows and generate a list of estimates for the mean birth weight. Use Summarize to compute the SE and CI for these estimates.",
"# Solution goes here",
"Visualizing uncertainty\nTo show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.",
"for slope, inter in zip(slopes, inters):\n fxs, fys = FitLine(age_means, inter, slope)\n thinkplot.Plot(fxs, fys, color='gray', alpha=0.01)\n \nthinkplot.Config(xlabel=\"Mother's age (years)\",\n ylabel='Residual (lbs)',\n xlim=[10, 45])",
"Or we can make a neater (and more efficient) plot by computing fitted lines and finding percentiles of the fits for each value of the independent variable.",
"def PlotConfidenceIntervals(xs, inters, slopes, percent=90, **options):\n fys_seq = []\n for inter, slope in zip(inters, slopes):\n fxs, fys = FitLine(xs, inter, slope)\n fys_seq.append(fys)\n\n p = (100 - percent) / 2\n percents = p, 100 - p\n low, high = thinkstats2.PercentileRows(fys_seq, percents)\n thinkplot.FillBetween(fxs, low, high, **options)",
"This example shows the confidence interval for the fitted values at each mother's age.",
"PlotConfidenceIntervals(age_means, inters, slopes, percent=90, \n color='gray', alpha=0.3, label='90% CI')\nPlotConfidenceIntervals(age_means, inters, slopes, percent=50,\n color='gray', alpha=0.5, label='50% CI')\n\nthinkplot.Config(xlabel=\"Mother's age (years)\",\n ylabel='Residual (lbs)',\n xlim=[10, 45])",
"Coefficient of determination\nThe coefficient compares the variance of the residuals to the variance of the dependent variable.",
"def CoefDetermination(ys, res):\n return 1 - Var(res) / Var(ys)",
"For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.",
"inter, slope = LeastSquares(ages, weights)\nres = Residuals(ages, weights, inter, slope)\nr2 = CoefDetermination(weights, res)\nr2",
"We can confirm that $R^2 = \\rho^2$:",
"print('rho', thinkstats2.Corr(ages, weights))\nprint('R', np.sqrt(r2)) ",
"To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable: in effect, the RMSE of a guess at birth weight with and without taking mother's age into account.",
"print('Std(ys)', Std(weights))\nprint('Std(res)', Std(res))",
"As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.",
"var_ys = 15**2\nrho = 0.72\nr2 = rho**2\nvar_res = (1 - r2) * var_ys\nstd_res = np.sqrt(var_res)\nstd_res",
"Hypothesis testing with slopes\nHere's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.",
"class SlopeTest(thinkstats2.HypothesisTest):\n\n def TestStatistic(self, data):\n ages, weights = data\n _, slope = thinkstats2.LeastSquares(ages, weights)\n return slope\n\n def MakeModel(self):\n _, weights = self.data\n self.ybar = weights.mean()\n self.res = weights - self.ybar\n\n def RunModel(self):\n ages, _ = self.data\n weights = self.ybar + np.random.permutation(self.res)\n return ages, weights",
"And it is.",
"ht = SlopeTest((ages, weights))\npvalue = ht.PValue()\npvalue",
"Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.",
"ht.actual, ht.MaxTestStat()",
"We can also use resampling to estimate the sampling distribution of the slope.",
"sampling_cdf = thinkstats2.Cdf(slopes)",
"The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.\nTo compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.",
"thinkplot.PrePlot(2)\nthinkplot.Plot([0, 0], [0, 1], color='0.8')\nht.PlotCdf(label='null hypothesis')\n\nthinkplot.Cdf(sampling_cdf, label='sampling distribution')\n\nthinkplot.Config(xlabel='slope (lbs / year)',\n ylabel='CDF',\n xlim=[-0.03, 0.03],\n legend=True, loc='upper left')",
"Here's how to get a p-value from the sampling distribution.",
"pvalue = sampling_cdf[0]\npvalue",
"Resampling with weights\nResampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.\nThe following function resamples rows with probabilities proportional to weights.",
"def ResampleRowsWeighted(df, column='finalwgt'):\n weights = df[column]\n cdf = thinkstats2.Cdf(dict(weights))\n indices = cdf.Sample(len(weights))\n sample = df.loc[indices]\n return sample",
"We can use it to estimate the mean birthweight and compute SE and CI.",
"iters = 100\nestimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()\n for _ in range(iters)]\nSummarize(estimates)",
"And here's what the same calculation looks like if we ignore the weights.",
"estimates = [thinkstats2.ResampleRows(live).totalwgt_lb.mean()\n for _ in range(iters)]\nSummarize(estimates)",
"The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.\nExercises\nExercise: Using the data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this where one of the variables is log-transformed? If you were trying to guess someone’s weight, how much would it help to know their height?\nLike the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?\nRead the BRFSS data and extract heights and log weights.",
"import brfss\n\ndf = brfss.ReadBrfss(nrows=None)\ndf = df.dropna(subset=['htm3', 'wtkg2'])\nheights, weights = df.htm3, df.wtkg2\nlog_weights = np.log10(weights)",
"Estimate intercept and slope.",
"# Solution goes here",
"Make a scatter plot of the data and show the fitted line.",
"# Solution goes here",
"Make the same plot but apply the inverse transform to show weights on a linear (not log) scale.",
"# Solution goes here",
"Plot percentiles of the residuals.",
"# Solution goes here",
"Compute correlation.",
"# Solution goes here",
"Compute coefficient of determination.",
"# Solution goes here",
"Confirm that $R^2 = \\rho^2$.",
"# Solution goes here",
"Compute Std(ys), which is the RMSE of predictions that don't use height.",
"# Solution goes here",
"Compute Std(res), the RMSE of predictions that do use height.",
"# Solution goes here",
"How much does height information reduce RMSE?",
"# Solution goes here",
"Use resampling to compute sampling distributions for inter and slope.",
"# Solution goes here",
"Plot the sampling distribution of slope.",
"# Solution goes here",
"Compute the p-value of the slope.",
"# Solution goes here",
"Compute the 90% confidence interval of slope.",
"# Solution goes here",
"Compute the mean of the sampling distribution.",
"# Solution goes here",
"Compute the standard deviation of the sampling distribution, which is the standard error.",
"# Solution goes here",
"Resample rows without weights, compute mean height, and summarize results.",
"# Solution goes here",
"Resample rows with weights. Note that the weight column in this dataset is called finalwt.",
"# Solution goes here"
] |
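The LeastSquares and Residuals helpers used throughout the notebook reduce to a few lines of NumPy. A self-contained sketch on synthetic data with known parameters (the true intercept 6.8 and slope 0.02 below are invented for the check, loosely echoing the birth-weight fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: known intercept 6.8 and slope 0.02, plus noise
xs = rng.uniform(10, 45, size=500)
ys = 6.8 + 0.02 * xs + rng.normal(0, 0.1, size=500)

# Least squares: slope = Cov(x, y) / Var(x), intercept from the means
meanx, meany = xs.mean(), ys.mean()
slope = ((xs - meanx) * (ys - meany)).mean() / xs.var()
inter = meany - slope * meanx

res = ys - (inter + slope * xs)
print(inter, slope, res.mean())  # estimates near 6.8 and 0.02; residual mean ~0
```

The zero mean of the residuals is not a check of fit quality; it holds by construction for any least squares line that includes an intercept.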
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a_soft/td1a_unit_test_ci.ipynb
|
mit
|
[
"1A.soft - Unit tests, setup and software engineering\nWe always check that code works when we write it, but that does not mean it will keep working in the future. The robustness of a piece of code comes from everything built around it to make sure it keeps running correctly.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\nfrom pyensae.graphhelper import draw_diagram",
"A short story\nSuppose you have implemented three functions that depend on one another: the function f3 uses the functions f1 and f2.",
"draw_diagram(\"blockdiag { f0 -> f1 -> f3; f2 -> f3;}\")",
"Six months later, you create a function f5 that calls a function f4 and the function f2.",
"draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color=\"red\"]; f4 -> f5 [color=\"red\"]; }')",
"Oh, and by the way, in doing so you modified the function f2, and you have somewhat forgotten what the function f3 did... In short, you do not know whether the function f3 will be affected by the change introduced in f2. This is the kind of problem you run into every day when writing software with several people over a long period of time. This notebook presents the classic building blocks used to ensure the robustness of a piece of software.\n\nunit tests\na source control system\ncode coverage\ncontinuous integration\nwriting a setup\nwriting the documentation\npublishing on PyPI\n\nWriting a function\nAny function that performs a computation, for example a function that solves a quadratic equation.",
"def solve_polynom(a, b, c):\n # ....\n return None",
"Writing a unit test\nA unit test is a function that makes sure another function returns the expected result. The simplest approach is to use the standard unittest module and to leave notebooks behind in favor of files. Other options include pytest and nose.\nCoverage\nCode coverage is the set of lines executed by the unit tests. That does not always mean they are correct, only that they were executed one or more times without raising an error. The simplest module is coverage. It produces reports like this one: mlstatpy/coverage.\nCreating a GitHub account\nGitHub is the site hosting the code of most open-source projects. Create an account if you do not have one (it is free for open-source projects), then create a project and push your code into it. Your computer needs:\n\ngit\nGitHub Desktop\n\nYou can read GitHub Pour les Nuls : Pas de Panique, Lancez-Vous ! (Première Partie) and, of course, search the web.\nNote\nEverything you put on GitHub for an open-source project is publicly accessible. Make sure you do not publish anything personal. A GitHub account is also one of the first things a recruiter will look at.\nContinuous integration\nContinuous integration aims to reduce the time between a modification and its release to production. Typically, a developer makes a change and a machine runs all the unit tests. If they pass, we conclude that the software works from every angle and can safely be made available to users. To sum up, continuous integration consists of running a battery of tests as soon as a modification is detected. If everything works, the software is built and ready to be shared, or deployed if it is a website.\nHere again, for open-source projects it is possible to find sites offering this service for free:\n\ntravis - Linux\nappveyor - Windows - one job at a time, no longer than an hour.\ncircle-ci - Linux and Mac OSX (paid)\nGitLab-ci\n\nApart from GitLab-ci, these services run the unit tests on machines hosted by each of these companies. You need to register on the site, define a .travis.yml, .appveyor.yml or circle.yml file, and then activate the project on the corresponding site. Some examples are available at pyquickhelper or scikit-learn. The file must be added to the project on GitHub and activated on the chosen continuous integration site. The slightest modification will trigger a new build.\nMost of these sites let you insert a badge to indicate that the build works.",
"from IPython.display import Image\ntry:\n im = Image(\"https://travis-ci.com/sdpython/ensae_teaching_cs.png\")\nexcept TimeoutError:\n im = None\nim\n\nfrom IPython.display import SVG\ntry:\n im = SVG(\"https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg\")\nexcept TimeoutError:\n im = None\nim",
"There are badges for just about everything.\nWriting a setup\nThe setup.py file determines how the Python module must be installed for a user who did not develop it. How to build a setup: setup.\nWriting the documentation\nThe most widely used tool is sphinx. Will you manage to use it?\nLast step: PyPI\nPyPI is a server that makes a module available to everyone. You just have to upload the module... Packaging and Distributing Projects or How to submit a package to PyPI. PyPI also supports badges.",
"try:\n im = SVG(\"https://badge.fury.io/py/ensae_teaching_cs.svg\")\nexcept TimeoutError:\n im = None\nim"
] |
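The solve_polynom function is left empty in the notebook as an exercise; here is one hedged way to fill it in and give it the kind of unit test the text describes (a sketch, not the course's official solution):

```python
import math
import unittest

def solve_polynom(a, b, c):
    """Return the sorted real roots of a*x**2 + b*x + c = 0, or None if there are none."""
    delta = b * b - 4 * a * c
    if delta < 0:
        return None
    r = math.sqrt(delta)
    return sorted(((-b - r) / (2 * a), (-b + r) / (2 * a)))

class TestSolvePolynom(unittest.TestCase):
    # Each test checks one behavior of the function
    def test_two_roots(self):
        self.assertEqual(solve_polynom(1, -3, 2), [1.0, 2.0])

    def test_double_root(self):
        self.assertEqual(solve_polynom(1, 2, 1), [-1.0, -1.0])

    def test_no_real_root(self):
        self.assertIsNone(solve_polynom(1, 0, 1))
```

Running python -m unittest on the file executes all three tests; a continuous integration service like the ones listed above would do the same on every commit, which is exactly how a later change to f2 gets caught before it silently breaks f3.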
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyladiesMx/Pyladies_ifc
|
3. Functions/Functions.ipynb
|
mit
|
[
"Welcome to another Pyladies meetup!!\nIn this session we will learn to create our own functions in Python. But first, what are functions?\nA function in Python is an organized, reusable block of code that performs a task. Remember the functions we have already used in Python: for example, when we wanted to know how many elements a list has, we used the function len. Python already ships with a large collection of functions you can use (so we don't have to reinvent the wheel every time we need something), and here is a list of the functions built into Python.\nUsing functions in Python\nAs I have said on several occasions, all functions in Python have the same structure, illustrated below:\nname + parentheses + arguments\nIn the case of len, the structure is the following:\nlen(list)\nlen takes as its argument the list or array whose length you want to know. Once the function has been executed, it returns an object (which will obviously be what we asked for).",
"animales = ['perro', 'gato', 'perico']\n\nlen(animales)\n\nanimales[1]\n\nx = 4\n\ntype(int('43'))",
"Exercise 1.\nEach of you will pick one of Python's built-in functions and explain it to the rest of the group.\nCreating your own functions in Python\nNow that you are more familiar with Python's built-in functions, you will notice that there won't always be one that does what you need. So, how can I write my own functions?\nIn Python the way to do it is the following:\nFirst, you have to make it clear to Python that the block of code (or small program) you are about to write is going to be a function; for this you write def, which is short for define.\nThen you have to invent a name for your function. In theory you can call it whatever you want; however, it is good practice in Python to name your functions so that when you read them months or years later you can clearly remember what they do.\nAfter writing def and the name of the function comes something crucial for creating functions. Based on the structure of the functions built into Python, what do you think it is...\n... Exactly!! The arguments!!\nThis part is crucial because this is where you get the information needed to produce a result. We will see this later.\nThen comes the block of code you want to execute, which may consist of complex operations and data transformations.\nFinally, so that it is clear to Python what it must give you back at the end of the function, you need to write return followed by what will be the result of the function.\nThe structure for defining functions looks like this:\ndef function_name(argument 1, argument 2, ... , argument n):\noperation 1\n\noperation 2\n\nresult = operation 1 + operation 2\n\nreturn result\n\nLet's write a small function as an example.",
"def cuadrado(numero):\n    '''Return the square of a number;\n    takes a number as its argument'''\n    resultado = numero**2\n    return resultado\n\n# Let's test the function\n\ncuadrado(8)\n\ncuadrado(8.0)\n\ncuadrado(-8)",
"Question break\n3..\n2..\n1\nVery good! Now it's your turn :)\nExercise 2\nWrite a function that draws a loading bar with a percentage. Say we want to draw 35%; then the result of running the function would be:\n[#######-------------] 35%",
"def barras(porcentaje):\n gatos = (porcentaje*20)//100\n guiones = 20 - gatos\n print('['+'#'* gatos + '-' * guiones + ']'+str(porcentaje)+'%')\n\nbarras(167)\n\n gatos = (35*20)//100\n\nprint(20*'gatos')",
"Ahora prueba tu función con estos valores de porcentage:\n * 12.5%\n * 167 %\n * -20 *\nEjercicio 3\nEscribe una función que te diga cuantas consonantes hay en una palabra. Ejemplo: La palabra \"carroza\" tiene 4 consonantes\nArgumentos predeterminados\nHay ocasiones en las cuales los argumentos para una función que vamos a crear los vamos a ocuparemos cotidianamente o simplemente tienen más sentido y para no tener que escribirlos cada vez que llamamos a la función lo que podemos hacer es definirlos desde el momento en el que estamos creando una función\nVamos a asumir que yo quiero hacer una función que eleve a la potencia n un número x. Digamos que de acuerdo a mi experiencia, la mayoría de las personas quiere saber el cuadrado de 4. Lo que hago entonces es una función que tenga como argumentos predeterminados el 4 y el 2... Veamos el ejemplo",
"def exponente(numero=4, exponente=2):\n '''Toma un número y lo eleva a la potencia de otro'''\n resultado = numero**exponente\n return resultado",
"Ahora veamos que pasa cuando llamamos a la función",
"exponente()",
"Esto no significa que la función que acabo de escribir sea definitiva y no pueda yo modificarla para sacar las potencias con otros números. Como veremos a continuación, la función puede tomar cualquier número. Sólo tenemos que hacerlo explícito esta vez...",
"exponente(4, 0.5)\n\nexponente(5, -1)\n\nexponente(0.5, 2)",
"Ahora te toca a ti\nEjercicio 4\nModifica la función que escribiste en el ejercicio número dos para que la barra de porcentage por default la llene con el símbolo de \"#\" pero que si quieres, puedas cambiar los signos por \"@\" o cualquier otro símbolo permitido en python... Tal vez con ❤ si sientes que es apropiado por el mes del amor y la amistad! (Puedes encontrar más caracteres de unicode aquí)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.21/_downloads/7bbeb6a728b7d16c6e61cd487ba9e517/plot_morph_volume_stc.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Morph volumetric source estimate\nThis example demonstrates how to morph an individual subject's\n:class:mne.VolSourceEstimate to a common reference space. We achieve this\nusing :class:mne.SourceMorph. Pre-computed data will be morphed based on\nan affine transformation and a nonlinear registration method\nknown as Symmetric Diffeomorphic Registration (SDR) by\n:footcite:AvantsEtAl2008.\nTransformation is estimated from the subject's anatomical T1 weighted MRI\n(brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain)\n<https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage>__.\nAfterwards the transformation will be applied to the volumetric source\nestimate. The result will be plotted, showing the fsaverage T1 weighted\nanatomical MRI, overlaid with the morphed volumetric source estimate.",
"# Author: Tommy Clausner <tommy.clausner@gmail.com>\n#\n# License: BSD (3-clause)\nimport os\n\nimport nibabel as nib\nimport mne\nfrom mne.datasets import sample, fetch_fsaverage\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\nfrom nilearn.plotting import plot_glass_brain\n\nprint(__doc__)",
"Setup paths",
"sample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')\nfname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')\n\nfname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',\n 'brain.mgz')\nfetch_fsaverage(subjects_dir) # ensure fsaverage src exists\nfname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif'",
"Compute example data. For reference see\nsphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py\nLoad data:",
"evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))\ninverse_operator = read_inverse_operator(fname_inv)\n\n# Apply inverse operator\nstc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, \"dSPM\")\n\n# To save time\nstc.crop(0.09, 0.09)",
"Get a SourceMorph object for VolSourceEstimate\nsubject_from can typically be inferred from\n:class:src <mne.SourceSpaces>,\nand subject_to is set to 'fsaverage' by default. subjects_dir can be\nNone when set in the environment. In that case SourceMorph can be initialized\ntaking src as only argument. See :class:mne.SourceMorph for more\ndetails.\nThe default parameter setting for zooms will cause the reference volumes\nto be resliced before computing the transform. A value of '5' would cause\nthe function to reslice to an isotropic voxel size of 5 mm. The higher this\nvalue the less accurate but faster the computation will be.\nThe recommended way to use this is to morph to a specific destination source\nspace so that different subject_from morphs will go to the same space.`\nA standard usage for volumetric data reads:",
"src_fs = mne.read_source_spaces(fname_src_fsaverage)\nmorph = mne.compute_source_morph(\n inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir,\n niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed\n src_to=src_fs, verbose=True)",
"Apply morph to VolSourceEstimate\nThe morph can be applied to the source estimate data, by giving it as the\nfirst argument to the :meth:morph.apply() <mne.SourceMorph.apply> method:",
"stc_fsaverage = morph.apply(stc)",
"Convert morphed VolSourceEstimate into NIfTI\nWe can convert our morphed source estimate into a NIfTI volume using\n:meth:morph.apply(..., output='nifti1') <mne.SourceMorph.apply>.",
"# Create mri-resolution volume of results\nimg_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')",
"Plot results",
"# Load fsaverage anatomical image\nt1_fsaverage = nib.load(fname_t1_fsaverage)\n\n# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)\ndisplay = plot_glass_brain(t1_fsaverage,\n title='subject results to fsaverage',\n draw_cross=False,\n annotate=True)\n\n# Add functional data as overlay\ndisplay.add_overlay(img_fsaverage, alpha=0.75)",
"Reading and writing SourceMorph from and to disk\nAn instance of SourceMorph can be saved, by calling\n:meth:morph.save <mne.SourceMorph.save>.\nThis methods allows for specification of a filename under which the morph\nwill be save in \".h5\" format. If no file extension is provided, \"-morph.h5\"\nwill be appended to the respective defined filename::\n>>> morph.save('my-file-name')\n\nReading a saved source morph can be achieved by using\n:func:mne.read_source_morph::\n>>> morph = mne.read_source_morph('my-file-name-morph.h5')\n\nOnce the environment is set up correctly, no information such as\nsubject_from or subjects_dir must be provided, since it can be\ninferred from the data and used morph to 'fsaverage' by default, e.g.::\n>>> morph.apply(stc)\n\nReferences\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch2-Problem_2-06.ipynb
|
unlicense
|
[
"Excercises Electric Machinery Fundamentals\nChapter 2\nProblem 2-6",
"%pylab notebook\n%precision 4",
"Description\nA 1000-VA 230/115-V transformer has been tested to determine its equivalent circuit. The results of the\ntests are shown below:\n| Open-circuit test | Short-circuit test |\n|---------------------|--------------------|\n| (on secondary side) | (on primary side) |\n| $V_{OC} = 115\\,V$ | $V_{SC} = 17.1\\,V$ |\n| $I_{OC} = 0.11\\,A$ | $I_{SC} = 8.7\\,A$ (should really be 4.35 A) |\n| $P_{OC} = 3.9\\,W$ | $P_{SC} = 38.1\\,W$ |\n<hr>\n\nNote: As was correctly pointed out in the lecture, the current $I_{sc}$ is actually to large by a factor of 2.\nI assume it simply being an oversight an the correct value given should be $I_{SC} = 4.35\\,A = I_n$ (i.e., equal to the nominal current which can be calculated from $I_n = \\frac{1000\\,VA}{230\\,V}$).\nThe following solution is based on the incorrectly given current of $I_{SC} = 8.7\\,A$ but you can simply set the value of Isc below to 4.35 and run \"Cell → Run all\" to recaculate all values based on the correct $I_{SC}$.\nSee also Example 2-2 in the book which is similar to this problem but where the correct values were used.\n<hr>",
"Voc = 115.0 # [V] \nIoc = 0.11 # [A]\nPoc = 3.9 # [W]\nVsc = 17.1 # [V] \nIsc = 8.7 # [A] replace with 4.35 so see alternative(correct solutions)\nPsc = 38.1 # [W]",
"(a)\n\nFind the equivalent circuit of this transformer referred to the low-voltage side.\n\n(b)\n\nFind the transformer’s voltage regulation at rated conditions and \nfor 0.8 PF lagging\nfor 1.0 PF\nfor 0.8 PF leading.\n\n(c)\n\nDetermine the transformer’s efficiency at rated conditions and 0.8 PF lagging.\n\nSOLUTION\n(a)\nSolution based on active and reactive power:\nOpen-circuit test results (ignore $R_{EQ}$ and $X_{EQ}$ and all values referred to the secondary side):",
"Rc = Voc**2/Poc\nRc\n\nSoc = Voc*Ioc\nQoc = sqrt(Soc**2 - Poc**2)\nXm = Voc**2/Qoc\nXm",
"Short-circuit test results (ignore $R_C$ and $X_M$ and all values referred to the primary side):",
"Req = Psc/Isc**2\nReq\n\nSsc = Vsc*Isc\nQsc = sqrt(Ssc**2 - Psc**2)\nXeq = Qsc/Isc**2\nXeq",
"Solution based on complex angle:",
"Yex_amp = Ioc/Voc\nYex_amp\n\ntheta_ex = arccos(Poc/Soc)\nYex = Yex_amp * exp(-1j*theta_ex)\nYex\n\nRc = 1/real(Yex)\nRc\n\nXm = -1/imag(Yex)\nXm\n\nZeq_amp = Vsc/Isc\nZeq_amp\n\ntheta_eq = arccos(Psc/Ssc)\nZeq = Zeq_amp * exp(1j*theta_eq)\nZeq\n\n\nReq = 1*real(Zeq)\nReq\n\nXeq = 1*imag(Zeq)\nXeq",
"The resulting equivalent circuit is:\nTo convert the equivalent circuit to the secondary side, divide each series impedance by the square of the\nturns ratio ( a = 230/115 = 2). Note that the excitation branch elements are already on the secondary side.\nThe resulting equivalent circuit is shown below:\n<img src=\"figs/FigC_2-18b.jpg\" width=\"50%\">",
"a = 230/115\na\n\nRcs = Rc # measurements were already done on the secondary side\nRcs\n\nXms = Xm # measurements were already done on the secondary side\nabs(Xms)\n\nReqs = Req/a**2\nReqs\n\nXeqs = Xeq/a**2\nXeqs",
"(b)\nTo find the required voltage regulation, we will use the equivalent circuit of the transformer referred to the secondary side. The rated secondary current is",
"Sn = 1000.0 # [VA] nominal apparent power\nVs = 115.0 # [VA] nominal secondary voltage\nIs_amp = Sn/Vs # amplitude of the current\nIs_amp",
"We will now calculate the primary voltage referred to the secondary side and use the voltage regulation equation for each power factor. The calulations are taking the secondary voltage $V_S = 115\\,V\\angle 0^\\circ$ as a reference. \n$$ \\vec{V_P}' = \\vec{V_S} + Z_{EQ} \\vec{I_S}$$\n1) 0.8 PF lagging",
"PF = 0.8\ntheta = -arccos(PF)\nIs = Is_amp *exp(1j*theta)\nprint('Is = {:.1f} A ∠{:.1f}°'.format(\n abs(Is), theta/pi*180))\n\nVps_lagg = Vs + (Reqs + 1j*Xeqs)*Is\nVps_lagg_angle = arctan(imag(Vps_lagg)/real(Vps_lagg))\nprint('Vps = {:.1f} V ∠{:.2f}°'.format(\n abs(Vps_lagg), Vps_lagg_angle/pi*180))",
"And the voltage regulation with:\n$$ VR = \\frac{V_P/a-V_S}{V_S} = \\frac{V_P'-V_S}{V_S} $$\nis therefore:",
"VR = (abs(Vps_lagg) - abs(Vs)) / abs(Vs) * 100 # in percent\nprint('VR = {:.2f} %'.format(VR))\nprint('===========')",
"2) 1.0 PF",
"PF = 1.0\ntheta = arccos(PF)\nIs = Is_amp *exp(1j*theta)\nprint('Is = {:.1f} A ∠{:.1f}°'.format(\n abs(Is), angle(Is, deg=True)))\n\nVps = Vs + (Reqs + 1j*Xeqs)*Is\nVps_angle = arctan(imag(Vps)/real(Vps))\nprint('Vps = {:.1f} V ∠{:.2f}°'.format(\n abs(Vps), Vps_angle/pi*180))",
"And the voltage regulation with:\n$VR = \\frac{V_P/a-V_S}{V_S} = \\frac{V_P'-V_S}{V_S}$\nis therefore:",
"VR = (abs(Vps) - abs(Vs)) / abs(Vs) * 100 # in percent\nprint('VR = {:.2f} %'.format(VR))\nprint('===========')",
"3) 0.8 PF leading",
"PF = 0.8\ntheta = +arccos(0.8)\nIs = Is_amp *exp(1j*theta)\nprint('Is = {:.1f} A ∠{:.1f}°'.format(\n abs(Is), theta/pi*180))\n\nVps_lead = Vs + (Reqs + 1j*Xeqs)*Is\nVps_lead_angle = arctan(imag(Vps_lead)/real(Vps_lead))\nprint('Vps = {:.1f} V ∠{:.1f}°'.format(\n abs(Vps_lead), Vps_lead_angle/pi*180))",
"And the voltage regulation with: $VR = \\frac{V_P/a-V_S}{V_S} = \\frac{V_P'-V_S}{V_S}$ is therefore:",
"VR = (abs(Vps_lead) - abs(Vs)) / abs(Vs) * 100 # in percent\nprint('VR = {:.2f} %'.format(VR))\nprint('============')",
"(c)\nAt rated conditions and 0.8 PF lagging, the output power of this transformer is:\n$$ P_\\text{OUT} = V_s I_s \\cos(\\theta) = V_s I_s \\cdot PF $$",
"Pout = abs(Vs) * abs(Is) * PF\nPout",
"watts. The copper and core losses of this transformer are $P_\\text{CU} = I_s^2 \\cdot R_{\\text{EQ}_s}$:",
"Pcu = abs(Is)**2 * Reqs\nPcu",
"watts. And the core losses are $P_\\text{core} = V_P'^2/R_c$:",
"Pcore = abs(Vps_lagg)**2 / Rcs\nPcore",
"watts. Therefore the efficiency of this transformer at these conditions is\n$$ \\eta = \\frac{P_\\text{OUT}}{P_\\text{OUT}+P_\\text{CU}+P_\\text{core}} \\times 100\\,\\%$$",
"eta = Pout / (Pout + Pcu + Pcore)\nprint('η = {:.1f} %'.format(eta*100))\nprint('==========')",
"Note:\nOne might argue that $115\\,V\\angle 0^\\circ$ really should be seen as the no-load voltage on the secondary side and not as the full-load voltage. I.e., $V_P/a = 115\\,V\\angle 0^\\circ$. Think about what kind of effect this has on the voltage regulation and efficiency calculations in (b) and (c) above.\nWhat would be changed in the equations above in order to calculate this variant?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Applied-Groundwater-Modeling-2nd-Ed/Chapter_4_problems-1
|
P4.3_Flopy_Hubbertville_areal_model.ipynb
|
gpl-2.0
|
[
"<img src=\"AW&H2015.tiff\" style=\"float: left\">\n<img src=\"flopylogo.png\" style=\"float: center\">\nProblem P4.3 Hubbertville Areal Model\nIn Problem P4.3 from page 173-174 in Anderson, Woessner and Hunt (2015), we are asked to construct an areal 2D model to assess impacts of pumping. The town of Hubbertville is planning to expand its water supply by constructing a pumping well in an unconfined gravel aquifer (Fig. P4.3). The well is designed to pump constantly at a rate of 20,000 m3/day. Well construction was halted by the State Fish and Game Service who manage the Green Swamp Conservation area. The agency claimed that pumping would “significantly reduce” groundwater discharge to the swamp and damage waterfowl habitat. The town claimed the fully penetrating river boundary to the north and the groundwater divide located near the center of the valley would prevent any change in flow to the swamp.\nPart a.\nConstruct a 2D areal steady-state model of the aquifer between the river and swamp for conditions prior to pumping using the information in Fig. P4.3. Represent the river and swamp boundaries as constant head boundaries with head set at 1000 m. The side boundaries are no-flow boundaries. Justify this assignment of boundary conditions. Use a constant nodal spacing of 500 m. Run the model and produce a contour map of heads. Draw the water-table profile in a north-south cross-section and label the simulated groundwater divide between the river and the swamp. Compute the discharge to Green Swamp.\nIn this notebook, we will work through the problem using MODFLOW and the Python tool set Flopy. Notice how much code is reused from P4.1 because the variable names remained the same.\n<img src=\"P4.3_figure.tiff\" style=\"float: center\">\nBelow is an iPython Notebook that builds a Python MODFLOW model for this problem and plots results. 
See the Github wiki associated with this Chapter for information on one suggested installation and setup configuration for Python and iPython Notebook.\n[Acknowledgements: This tutorial was created by Randy Hunt and all failings are mine. The exercise here has benefited greatly from the online Flopy tutorial and example notebooks developed by Chris Langevin and Joe Hughes for the USGS Spring 2015 Python Training course GW1774]\nCreating the Model\nIn this example, we will create a simple groundwater flow model by following the tutorial included on the Flopy website. We will make a few small changes so that the tutorial works with our file structure.\nVisit the tutorial website here.\nSetup the Notebook Environment and Import Flopy\nLoad a few standard libraries, and then load flopy.",
"%matplotlib inline\nimport sys\nimport os\nimport shutil\nimport numpy as np\nfrom subprocess import check_output\n\n# Import flopy\nimport flopy",
"Setup a New Directory and Change Paths\nFor this tutorial, we will work in a new subdirectory underneath the directory where the notebook is located. We can use some fancy Python tools to help us manage the directory creation. Note that if you encounter path problems with this workbook, you can stop and then restart the kernel and the paths will be reset.",
"# Set the name of the path to the model working directory\ndirname = \"P4-3_Hubbertville\"\ndatapath = os.getcwd()\nmodelpath = os.path.join(datapath, dirname)\nprint 'Name of model path: ', modelpath\n\n# Now let's check if this directory exists. If not, then we will create it.\nif os.path.exists(modelpath):\n print 'Model working directory already exists.'\nelse:\n print 'Creating model working directory.'\n os.mkdir(modelpath)",
"Define the Model Extent, Grid Resolution, and Characteristics\nIt is normally good practice to group things that you might want to change into a single code block. This makes it easier to make changes and rerun the code.",
"# model domain and grid definition\n# for clarity, user entered variables are all caps; python syntax are lower case or mixed case\n# In a contrast to P4.1 and P4.2, this is an areal 2D model\nLX = 4500.\nLY = 11000. # note that there is an added 500m on the top and bottom to represent the boundary conditions,that leaves an aqufier lenght of 10000 m \nZTOP = 1030. # the system is unconfined so set the top above land surface so that the water table never > layer top\nZBOT = 980.\nNLAY = 1\nNROW = 22\nNCOL = 9\nDELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)\nDELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)\nDELV = (ZTOP - ZBOT) / NLAY\nBOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)\nHK = 50.\nVKA = 1.\nRCH = 0.001\nWELLQ = 0. #recall MODFLOW convention, negative means pumped out of the model domain (=aquifer)\nprint \"DELR =\", DELR, \" DELC =\", DELC, ' DELV =', DELV\nprint \"BOTM =\", BOTM\nprint \"Recharge =\", RCH \nprint \"Pumping well rate =\", WELLQ\n",
"Create the MODFLOW Model Object\nCreate a flopy MODFLOW object: flopy.modflow.Modflow.",
"# Assign name and create modflow model object\nmodelname = 'P4-3'\n#exe_name = os.path.join(datapath, 'mf2005.exe') # for Windows OS\nexe_name = os.path.join(datapath, 'mf2005') # for Mac OS\nprint 'Model executable: ', exe_name\nMF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)",
"Discretization Package\nCreate a flopy discretization package object: flopy.modflow.ModflowDis.",
"# Create the discretization object\nTOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)\n\nDIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,\n top=TOP, botm=BOTM[1:], laycbd=0)\n# print DIS_PACKAGE #uncomment this on far left to see information about the flopy object",
"Basic Package\nCreate a flopy basic package object: flopy.modflow.ModflowBas.",
"# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!\nIBOUND[:, -1, :] = -1 #-1 is Python for last in array\nprint IBOUND\n\nSTRT = 1015 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1010 m through out model domain\nSTRT[:, 0, :] = 1000. # river stage for setting constant head\nSTRT[:, -1, :] = 1000. # wetland stage for setting constant head\nprint STRT\n\nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object",
"Layer Property Flow Package\nCreate a flopy layer property flow package object: flopy.modflow.ModflowLpf.",
"LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=HK, vka=VKA) # we defined the K and anisotropy at top of file\n# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object",
"Well Package\nCreate a flopy output control object: flopy.modflow.ModflowWel.",
"WEL_PACKAGE = flopy.modflow.ModflowWel(MF, stress_period_data=[0,6,4,WELLQ]) # remember python 0 index, layer 0 = layer 1 in MF\n#print WEL_PACKAGE # uncomment this at far left to see the information about the flopy WEL object",
"Output Control\nCreate a flopy output control object: flopy.modflow.ModflowOc.",
"OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output\n# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object",
"Preconditioned Conjugate Gradient Solver\nCreate a flopy pcg package object: flopy.modflow.ModflowPcg.",
"PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5) \n# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object",
"Recharge Package\nCreate a flopy pcg package object: flopy.modflow.ModflowRch.",
"RCH_PACKAGE = flopy.modflow.ModflowRch(MF, rech=RCH)\n# print RCH_PACKAGE # uncomment this at far left to see the information about the flopy RCH object",
"Writing the MODFLOW Input Files\nBefore we create the model input datasets, we can do some directory cleanup to make sure that we don't accidently use old files.",
"#Before writing input, destroy all files in folder to prevent reusing old files\n#Here's the working directory\nprint modelpath\n#Here's what's currently in the working directory\nmodelfiles = os.listdir(modelpath)\nprint modelfiles\n\n#delete these files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()",
"Yup. It's that simple, the model datasets are written using a single command (mf.write_input).\nCheck in the model working directory and verify that the input files have been created. Or if you might just add another cell, right after this one, that prints a list of all the files in our model directory. The path we are working in is returned from this next block.",
"# return current working directory\nprint \"You can check the newly created files in\", modelpath\n",
"Running the Model\nFlopy has several methods attached to the model object that can be used to run the model. They are run_model, run_model2, and run_model3. Here we use run_model3, which will write output to the notebook.",
"silent = False #Print model output to screen?\npause = False #Require user to hit enter? Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)",
"Post Processing the Results\nTo read heads from the MODFLOW binary output file, we can use the flopy.utils.binaryfile module. Specifically, we can use the HeadFile object from that module to extract head data arrays.",
"#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(15,13))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(1000., 1011., 0.5)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 11000, 500)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"Hubbertville contour map\")\nAX1.text(2000, 10500, r\"River\", fontsize=10, color=\"blue\")\nAX1.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"green\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"Hubbertville color flood\")\nAX2.text(2000, 10500, r\"River\", fontsize=10, color=\"black\")\nAX2.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"black\")\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)\n",
"Look at the bottom of the MODFLOW output file (ending with a *.list) and note the water balance reported.",
"#look at the head in column = 4 from headobj, and then plot it\n#print HEAD along a column; COL is a variable that allows us to change this easily\nCOL = 4\nprint HEAD[0,:,COL]\n\n# we see this is what we want, but is flipped because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it \nY = np.flipud(HEAD[0,:,COL])\nprint Y\n\n#for our cross section create X-coordinates to match with heads\nXCOORD = np.arange(0, 11000, 500) + 250\nprint XCOORD\n\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(1, 1, 1)\nTITLE = 'cross section of head along Column = ({0})'.format(COL)\nax.set_title(TITLE)\nax.set_xlabel('y')\nax.set_ylabel('head')\nax.set_xlim(0, 11000.)\nax.set_ylim(980.,1020.)\nax.text(10480, 998, r\"River\", fontsize=10, color=\"blue\",rotation='vertical')\nax.text(300, 998, r\"Green Swamp\", fontsize=10, color=\"green\",rotation='vertical')\nax.text(5300,1009., r\"Groundwater Divide\", fontsize=10, color=\"black\",rotation='vertical')\nax.plot(XCOORD, Y)\n\n#calculate the flux to Green Swamp\nHEAD_ADJACENT_CELLS = HEAD[0,-2,:]\nprint \"heads in cells next to Green Swamp =\", HEAD_ADJACENT_CELLS\nFLUX_TO_SWAMP = 0\nTHICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness\nfor NODEHEAD in HEAD_ADJACENT_CELLS:\n NODEFLUX = (HK * ((NODEHEAD-1000.)/(DELC)) * (DELR * THICK)) # Q = KIA\n FLUX_TO_SWAMP += NODEFLUX\n print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX\nprint \"Total Flux to Swamp =\", FLUX_TO_SWAMP, \"cubic meters per day\"\n\n#calculate the flux to River\nHEAD_ADJACENT_CELLS = HEAD[0,1,:]\nprint \"heads in cells next to River =\", HEAD_ADJACENT_CELLS\nFLUX_TO_RIVER = 0\nTHICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness\nfor NODEHEAD in HEAD_ADJACENT_CELLS:\n NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA\n 
FLUX_TO_RIVER += NODEFLUX\n print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX\nprint \"Total Flux to River =\", FLUX_TO_RIVER, \"cubic meters per day\"\n\nprint 'Flux to Green Swamp =', FLUX_TO_SWAMP, ' Flux to River =', FLUX_TO_RIVER\nBCFLUX = FLUX_TO_SWAMP + FLUX_TO_RIVER\nQ = WELLQ * -1\nprint 'Flux to BCs =', BCFLUX,', Well pumping =', Q,', Total Vol Out =', BCFLUX+Q, 'cubic meters per day'",
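The node-flux loops above apply Darcy's law, Q = K i A, one boundary cell at a time. A minimal standalone check with round numbers (the values are illustrative, not taken from the model output):

```python
# Darcy's law for one boundary cell: Q = K * i * A
K = 50.0                    # [m/d] horizontal hydraulic conductivity (HK above)
dh, dL = 0.5, 500.0         # [m] head difference over one cell spacing (DELC)
thick, width = 25.0, 500.0  # [m] saturated thickness and cell width (DELR)
i = dh / dL                 # hydraulic gradient
A = thick * width           # cross-sectional flow area
Q = K * i * A
print(Q)                    # 625.0 cubic meters per day for this node
```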
"Testing your Skills\n\n\nDoes the total volumetric flux out equal that reported in the MODFLOW list file?\n\n\nExperiment with horizontal grid resolution, well location, recharge, pumping rate, and aquifer characteristics. Rerun the model and post process to evaluate the effects.\n\n\nP4.3 Part b.\nUsing the steady-state heads derived in (a), locate the groundwater divide in the central portion of the valley. Run the model using first a no-flow boundary and then a specified head boundary at the location of the groundwater divide.\nCompare results with those in part (a). Compute the discharge to Green Swamp under each representation. What is the effect of assigning an internal boundary on the results?",
"# let's print out some heads by row to see which has the highest head (=the gw divide); don't forget arrays are zero-based!\nprint HEAD[0,9,:]\n\nprint HEAD[0,10,:]\n\nprint HEAD[0,11,:]\n\nprint HEAD[0,12,:]\n\n# Rows 10 and 11 have highest heads; let's save these rows of heads for later\nROW10_HEAD = HEAD[0,10,:]\nROW11_HEAD = HEAD[0,11,:]\n#let's reset Rows 10 and 11 to a no flow boundary (set that row to 0 in the MODFLOW IBOUND array)\nIBOUND[:, 10, :] = 0 \nIBOUND[:, 11, :] = 0 \nprint IBOUND\n\n#we have to update the MODFLOW's BAS Package with the new IBOUND array \nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n\n# added MODFLOW solver here again for testing of solver convergence; the problem will solve with these settings\n#PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5)\n# but you can play with other settings then execute the code blocks from here on down to see effect on convergence\nPCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5) \n\n#Before writing input, destroy all files in folder to prevent reusing old files\n#Here's the working directory\nprint modelpath\n#Here's what's currently in the working directory\nmodelfiles = os.listdir(modelpath)\nprint modelfiles\n\n#delete these files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()\nprint \"New MODFLOW input files = \", modelfiles\nprint \"You can check the newly created files in\", modelpath\n\n\n#rerun MODFLOW-2005\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? 
Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n\n#As before, let's look at the results and compare to P4-3 Part a.\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()\n\n#-999.99 is the Inactive node flag so we'll use our previous contour settings\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(15,13))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(1000., 1011., 0.5)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 11000, 500)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"Hubbertville contour map\")\nAX1.text(2000, 10500, r\"River\", fontsize=10, color=\"blue\")\nAX1.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"green\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"Hubbertville 
color flood\")\nAX2.text(2000, 10500, r\"River\", fontsize=10, color=\"black\")\nAX2.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"black\")\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)\n\nCOL = 4\n# recall we need to flip because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it \nY = np.flipud(HEAD[0,:,COL])\nprint Y\n\n#for our cross section create X-coordinates to match with heads\nXCOORD = np.arange(0, 11000, 500) + 250\nprint XCOORD\n\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(1, 1, 1)\nTITLE = 'cross section of head along Column = ({0})'.format(COL)\nax.set_title(TITLE)\nax.set_xlabel('y')\nax.set_ylabel('head')\nax.set_xlim(0, 11000.)\nax.set_ylim(980.,1020.)\nax.text(10480, 998, r\"River\", fontsize=10, color=\"blue\",rotation='vertical')\nax.text(300, 998, r\"Green Swamp\", fontsize=10, color=\"green\",rotation='vertical')\nax.text(5400,1006., r\"Groundwater Divide / Inactive cells\", fontsize=10, color=\"black\",rotation='vertical')\nax.plot(XCOORD, Y)\n\n#calculate the flux to Green Swamp\nHEAD_ADJACENT_CELLS = HEAD[0,-2,:]\nprint \"heads in cells next to Green Swamp =\", HEAD_ADJACENT_CELLS\nFLUX_TO_SWAMP_NO_FLOW = 0\nTHICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness\nfor NODEHEAD in HEAD_ADJACENT_CELLS:\n NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA\n FLUX_TO_SWAMP_NO_FLOW += NODEFLUX\n print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX\nprint \"Total Flux to Swamp (No Flow) =\", FLUX_TO_SWAMP_NO_FLOW, \"cubic meters per day\"",
"Why is there less water in the system than when the gw divide was not simulated with no-flow cells?\nNote that this problem is harder to solve. To see non-convergence in MODFLOW, set the dampening in block In [36] to 1.0\nNow let's try a specified head for the groundwater divide",
"# Rows 10 and 11 had highest heads; reset Row 10 and 11 to a specified head boundary (set that row to -1 in the MODFLOW IBOUND array)\nIBOUND[:, 10, :] = -1 \nIBOUND[:, 11, :] = -1 \nprint IBOUND\n\n#MODFLOW uses the starting heads to set the specified head boundary elevations\n#we need to reset the starting heads in Rows 10 and 11 to what they were originally\n#recall we saved these heads, and can print them to check\nprint \"Row 10 heads =\", ROW10_HEAD\nprint \"Row 11 heads =\", ROW11_HEAD\n\nSTRT[:, 10, :] = ROW10_HEAD # setting starting heads Row 10 to heads calculated in Part a.\nSTRT[:, 11, :] = ROW11_HEAD # setting starting heads Row 10 to heads calculated in Part a.\nprint STRT\n\n#we have to update the MODFLOW's BAS Package with the new STRT heads \nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n\n#delete old files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()\nprint \"New MODFLOW input files = \", modelfiles\nprint \"You can check the newly created files in\", modelpath\n\n\n#rerun MODFLOW-2005\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? 
Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n\n#As before, let's look at the results and compare to P4-3 Part a.\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(15,13))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(1000., 1011., 0.5)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 11000, 500)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"Hubbertville contour map\")\nAX1.text(2000, 10500, r\"River\", fontsize=10, color=\"blue\")\nAX1.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"green\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"Hubbertville color flood\")\nAX2.text(2000, 10500, r\"River\", fontsize=10, color=\"black\")\nAX2.text(1800, 340, r\"Green Swamp\", fontsize=10, 
color=\"black\")\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)\n\n#as before let's plot a north-south cross section\nCOL = 4\n# recall we need to flip because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it \nY = np.flipud(HEAD[0,:,COL])\n#for our cross section create X-coordinates to match with heads\nXCOORD = np.arange(0, 11000, 500) + 250\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(1, 1, 1)\nTITLE = 'cross section of head along Column = ({0})'.format(COL)\nax.set_title(TITLE)\nax.set_xlabel('y')\nax.set_ylabel('head')\nax.set_xlim(0, 11000.)\nax.set_ylim(980.,1020.)\nax.text(10480, 998, r\"River\", fontsize=10, color=\"blue\",rotation='vertical')\nax.text(300, 998, r\"Green Swamp\", fontsize=10, color=\"green\",rotation='vertical')\nax.text(5400,1007., r\"Groundwater Divide\", fontsize=10, color=\"black\",rotation='vertical')\nax.plot(XCOORD, Y)\n\n#calculate the flux to Green Swamp\nHEAD_ADJACENT_CELLS = HEAD[0,-2,:]\nprint \"heads in cells next to Green Swamp =\", HEAD_ADJACENT_CELLS\nFLUX_TO_SWAMP_SPEC_HEAD = 0\nTHICK = (HEAD[0,-2,5]+1000.)/2 - ZBOT #the thickness is approximated using the average saturated thickness\nfor NODEHEAD in HEAD_ADJACENT_CELLS:\n NODEFLUX = (HK * (NODEHEAD-1000.)/(DELC) * DELR * THICK) # Q = KIA\n FLUX_TO_SWAMP_SPEC_HEAD += NODEFLUX\n print 'gradient =', (NODEHEAD-1000)/(DELC), ' Kh =', HK, ' thickness=', THICK, ' Grid spacing =', DELC, ' Node flux =', NODEFLUX\nprint \"Total Flux to Swamp (Specified Head) =\", FLUX_TO_SWAMP_SPEC_HEAD, \"cubic meters per day\"\n\n#let's compare the three formulations: \n#1) gw divide simulated; 2) gw divide as no flow BC; and 3) gw divide as specified head BC\nprint \"Flux to Swamp (simulated) =\", FLUX_TO_SWAMP\nprint \"Flux to Swamp (no flow) = \", FLUX_TO_SWAMP_NO_FLOW\nprint \"Flux to Swamp (spec head) = \", FLUX_TO_SWAMP_SPEC_HEAD",
"Why when the gw divide is simulated as a no-flow might the total flux to Green Swamp be different (and lower)?\nP4.3 Part c.\nRun the model in part (a) again but this time use a HDB to represent the river. The stage of the river is 1000 m and the width is 500 m. The vertical hydraulic conductivity of the riverbed sediments is 5 m/day and the thickness of the sediments is 1 m. The elevation of the bottom of the sediments is 995 m. Compare results with those in part (a).",
"#We have to recreate the IBOUND and STRT heads of Part a. \n#This is just copied directly from Part a. above to start clean\n# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!\nIBOUND[:, -1, :] = -1 #-1 is Python for last in array\nprint IBOUND\n\n#BUT in Part c. the river to the north is a head-dependent BCs, not specified head\n#so we have to change it from -1 (specified head) to 1 (active cells)\nIBOUND[:, 0, :] = 1 #don't forget arrays are zero-based!\nprint IBOUND\n\n#In the same way, we need to reset the Starting Head array, but only need \n# values set for the specified head boundary used for Green Swamp in the south\nSTRT = 1015 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1010 m through out model domain\nSTRT[:, -1, :] = 1000. # wetland stage for setting constant head\nprint STRT\n\n#we have to update the MODFLOW's BAS Package with the new IBOUND and STRT heads \nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n\n#now we need to add a HDB - the RIV Package is a good choice\n#recall that a RIV node has a river stage, a conductance, and a bottom elevation\nRIV_STAGE = 1000.\nKv_RIVER = 5.\nb_RIVER = 1.\nWIDTH_RIVER = 500.\nSED_BOT_RIVER = 995.\n# conductance = leakance x cross-sectional area\n# leakance = Kv/b\nRIV_LEAKANCE = Kv_RIVER / b_RIVER\nprint \"River sediment leakance =\", RIV_LEAKANCE\n\n\n#area is the nodal area, DELR x DELC which was entered above\nprint \"DELR = \", DELR\nprint \"River width =\", WIDTH_RIVER\nprint \"River area in node =\", DELR * WIDTH_RIVER\n\n#conductance is leakance x area\nRIV_COND = RIV_LEAKANCE * DELR * WIDTH_RIVER\nprint 'River Conductance =', RIV_COND\n\n#We enter RIV Package data by \"layer-row-column-data\" = lrcd\nstress_period_data = [\n [0, 0, 0, RIV_STAGE, RIV_COND, 
SED_BOT_RIVER], #layer, row, column, stage, conductance, river bottom\n [0, 0, 1, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #remember Python indexing is zero based\n [0, 0, 2, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 3, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 4, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 5, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 6, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 7, RIV_STAGE, RIV_COND, SED_BOT_RIVER], \n [0, 0, 8, RIV_STAGE, RIV_COND, SED_BOT_RIVER]]\n\n\n\nprint stress_period_data\n\nriv = flopy.modflow.ModflowRiv(MF, stress_period_data=stress_period_data)\n\n#delete old files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()\nprint \"New MODFLOW input files = \", modelfiles\nprint \"You can check the newly created files in\", modelpath\n\n\n#rerun MODFLOW-2005\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? 
Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n\n\n#As before, let's look at the results and compare to P4-3 Part a.\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(15,13))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(1000., 1011., 0.5)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 11000, 500)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"Hubbertville contour map\")\nAX1.text(2000, 10500, r\"River\", fontsize=10, color=\"blue\")\nAX1.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"green\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"Hubbertville color flood\")\nAX2.text(2000, 10500, r\"River\", fontsize=10, color=\"black\")\nAX2.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"black\")\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)\ncbar = FIG.colorbar(cax, orientation='vertical', 
shrink=0.45)\n\n#as before let's plot a north-south cross section\nCOL = 4\n# recall we need to flip because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it \nY = np.flipud(HEAD[0,:,COL])\n#for our cross section create X-coordinates to match with heads\nXCOORD = np.arange(0, 11000, 500) + 250\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(1, 1, 1)\nTITLE = 'cross section of head along Column = ({0})'.format(COL)\nax.set_title(TITLE)\nax.set_xlabel('y')\nax.set_ylabel('head')\nax.set_xlim(0, 11000.)\nax.set_ylim(980.,1020.)\nax.text(10480, 998, r\"River\", fontsize=10, color=\"blue\",rotation='vertical')\nax.text(300, 998, r\"Green Swamp\", fontsize=10, color=\"green\",rotation='vertical')\nax.text(5400,1007., r\"Groundwater Divide\", fontsize=10, color=\"black\",rotation='vertical')\nax.plot(XCOORD, Y)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()",
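The RIV-package conductance used in Parts c and d is simply leakance (Kv/b) multiplied by the riverbed area within the node. A minimal standalone sketch, assuming DELR = 500 m as in this grid:

```python
# River-node conductance: C = (Kv / b) * (DELR * river width)
Kv_RIVER = 5.0    # vertical K of riverbed sediments, m/day
b_RIVER = 1.0     # sediment thickness, m
DELR = 500.0      # node length along the river, m (as in this grid)

def riv_conductance(width):
    leakance = Kv_RIVER / b_RIVER      # 1/day
    return leakance * DELR * width     # m^2/day

print(riv_conductance(500.0))  # Part c: 500 m wide river
print(riv_conductance(5.0))    # Part d: 5 m wide river, 100x smaller conductance
```

Shrinking the river width from 500 m to 5 m cuts the conductance by a factor of 100, which is why the narrow-river case exchanges much less water with the aquifer.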
"P4.3 Part d.\nRun the model in part (c) again but this time assume the width of the river is 5 m. What is the effect of reducing the width of the river?",
"WIDTH_RIVER = 5.\n#area is the nodal area, DELR x DELC which was entered above\nprint \"DELR = \", DELR\nprint \"River width =\", WIDTH_RIVER\nprint \"River area in node =\", DELR * WIDTH_RIVER\n#conductance is leakance x area\nprint 'River Leakance = ', RIV_LEAKANCE\nRIV_COND = RIV_LEAKANCE * DELR * WIDTH_RIVER\nprint 'River Conductance =', RIV_COND\n\n#We enter RIV Package data by \"layer-row-column-data\" = lrcd\nstress_period_data = [\n [0, 0, 0, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #layer, row, column, stage conductance, river bottom\n [0, 0, 1, RIV_STAGE, RIV_COND, SED_BOT_RIVER], #remember Python indexing is zero based\n [0, 0, 2, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 3, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 4, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 5, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 6, RIV_STAGE, RIV_COND, SED_BOT_RIVER],\n [0, 0, 7, RIV_STAGE, RIV_COND, SED_BOT_RIVER], \n [0, 0, 8, RIV_STAGE, RIV_COND, SED_BOT_RIVER]]\n\n\n\nprint stress_period_data\n\nriv = flopy.modflow.ModflowRiv(MF, stress_period_data=stress_period_data)\n#delete old files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n \n\n#Now write the model input files and rerun MODFLOW\nMF.write_input()\nprint \"New MODFLOW input files = \", modelfiles\nprint \"You can check the newly created files in\", modelpath\n#rerun MODFLOW-2005\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? 
Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n\n\n#As before, let's look at the results and compare to P4-3 Part a.\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(15,13))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(1000., 1011., 0.5)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 11000, 500)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"Hubbertville contour map\")\nAX1.text(2000, 10500, r\"River\", fontsize=10, color=\"blue\")\nAX1.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"green\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"Hubbertville color flood\")\nAX2.text(2000, 10500, r\"River\", fontsize=10, color=\"black\")\nAX2.text(1800, 340, r\"Green Swamp\", fontsize=10, color=\"black\")\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest', vmin=998.2)\ncbar = FIG.colorbar(cax, orientation='vertical', 
shrink=0.45)\n\n#as before let's plot a north-south cross section\nCOL = 4\n# recall we need to flip because MODFLOW's array does not = Python, so we reverse the order (flip them) and call it \nY = np.flipud(HEAD[0,:,COL])\n#for our cross section create X-coordinates to match with heads\nXCOORD = np.arange(0, 11000, 500) + 250\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(1, 1, 1)\nTITLE = 'cross section of head along Column = ({0})'.format(COL)\nax.set_title(TITLE)\nax.set_xlabel('y')\nax.set_ylabel('head')\nax.set_xlim(0, 11000.)\nax.set_ylim(980.,1020.)\nax.text(10480, 998, r\"River\", fontsize=10, color=\"blue\",rotation='vertical')\nax.text(300, 998, r\"Green Swamp\", fontsize=10, color=\"green\",rotation='vertical')\nax.text(5400,1007., r\"Groundwater Divide\", fontsize=10, color=\"black\",rotation='vertical')\nax.plot(XCOORD, Y)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()",
"How different are the results from the two river widths? Why might this be?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
UKPLab/deeplearning4nlp-tutorial
|
2015-10_Lecture/Lecture2/code/2_MNIST.ipynb
|
apache-2.0
|
[
"Handwritten Digit Recognition with Theano\nIn this tutorial we will train a feed forward network / multi-layer-perceptron (MLP) to recognize handwritten digits using pure Theano.\nFor a long version see: http://deeplearning.net/tutorial/mlp.html\nLayout\nThe layout of our network \n<img src=\"http://deeplearning.net/tutorial/_images/mlp.png\">\nSource of image: http://deeplearning.net/tutorial/mlp.html\nOur networks has 3 layers\n- Input layer, $28*28=786$ dimensional (the pixels of the images)\n- A hidden layer\n- A Softmax layer\nIn order to make our lives easier, we will create the following files / classes / components:\n- HiddenLayer - To model a hidden layer\n- SoftmaxLayer - To model a softmax layer\n- MLP - Combines several hidden & softmax layers together to form a MLP\n- One file for reading the data and training the network\nHiddenLayer\nThe hidden layer computes the following function:\n$$\\text{output} = \\tanh(xW + b)$$\nThe matrix $W$ will be initialized Glorot-style (see 1. Lecture).\nThis is the class we will use for the hidden layer:",
"import numpy \nimport theano\nimport theano.tensor as T\nclass HiddenLayer(object):\n def __init__(self, rng, input, n_in, n_out, W=None, b=None, activation=T.tanh):\n \"\"\"\n :param rng: Random number generator, for reproducable results\n :param input: Symbolic Theano variable for the input\n :param n_in: Number of incoming units\n :param n_out: Number of outgoing units\n :param W: Weight matrix\n :param b: Bias\n :param activation: Activation function to use\n \"\"\"\n self.input = input\n self.rng = rng\n self.n_in = n_in\n self.n_out = n_out\n self.activation=activation\n \n \n if W is None: #Initialize Glorot Style\n W_values = numpy.asarray(rng.uniform(\n low=-numpy.sqrt(6. / (n_in + n_out)),\n high=numpy.sqrt(6. / (n_in + n_out)),\n size=(n_in, n_out)), dtype=theano.config.floatX)\n if activation == theano.tensor.nnet.sigmoid or activation == theano.tensor.nnet.hard_sigmoid or activation == theano.tensor.nnet.ultra_fast_sigmoid:\n W_values *= 4\n\n W = theano.shared(value=W_values, name='W')\n\n if b is None: #Initialize bias to zeor\n b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)\n b = theano.shared(value=b_values, name='b')\n\n self.W = W\n self.b = b\n\n # Put your code here: Implement a function to compute activation(x*W+b)\n ",
"Softmax Layer\nThe softmax-layer computes: \n$$\\text{output} = \\text{softmax}(xW+b)$$\nAs for the hidden layer, we allow the parameterization of the number of neurons. The weight matrix and bias vector is initialized to zero.\nAs we performt a single label classification task, we use the negative log-likelihood as error function:\n$$E(x,W,b) = -log(o_y)$$\nwith $o_y$ the output for label $y$.",
"import numpy\nimport theano\nimport theano.tensor as T\n\n\nclass SoftmaxLayer(object):\n def __init__(self, input, n_in, n_out):\n self.W = theano.shared(value=numpy.zeros((n_in, n_out), \n dtype=theano.config.floatX), name='W')\n self.b = theano.shared(value=numpy.zeros((n_out,),\n dtype=theano.config.floatX), name='b')\n \n # Put your code here, implement a function to compute softmax(x*W+b)",
"MLP\nOur Multi-Layer-Perceptron now plugs everything together, i.e. one hidden layer and the softmax layer.",
"import numpy\nimport theano\nimport theano.tensor as T\n\nclass MLP(object):\n def __init__(self, rng, input, n_in, n_hidden, n_out):\n \"\"\"\n :param rng: Our random number generator\n :param input: Input variable (the data)\n :param n_in: Input dimension\n :param n_hidden: Hidden size\n :param n_out: Output size\n \"\"\"\n #Put your code here to build the neural network \n ",
"Read data + train the network\nFinally we have all blocks to create a MLP for the MNIST dataset.\nYou find the MNIST dataset in the data dir. Otherwise you can obtain it from http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz",
"import cPickle\nimport gzip\nimport os\nimport sys\nimport timeit\nimport numpy as np\nimport theano\nimport theano.tensor as T\n\n\n# Load the pickle file for the MNIST dataset.\ndataset = 'data/mnist.pkl.gz'\n\nf = gzip.open(dataset, 'rb')\ntrain_set, dev_set, test_set = cPickle.load(f)\nf.close()\n\n#train_set contains 2 entries, first the X values, second the Y values\ntrain_x, train_y = train_set\ndev_x, dev_y = dev_set\ntest_x, test_y = test_set\n\n#Created shared variables for these sets (for performance reasons)\ntrain_x_shared = theano.shared(value=np.asarray(train_x, dtype='float32'), name='train_x')\ntrain_y_shared = theano.shared(value=np.asarray(train_y, dtype='int32'), name='train_y')\n\n\nprint \"Shape of train_x-Matrix: \",train_x_shared.get_value().shape\nprint \"Shape of train_y-vector: \",train_y_shared.get_value().shape\nprint \"Shape of dev_x-Matrix: \",dev_x.shape\nprint \"Shape of test_x-Matrix: \",test_x.shape\n\n###########################\n#\n# Start to build the model\n#\n###########################\n\n# Hyper parameters\nhidden_units = 50\nlearning_rate = 0.01\nbatch_size = 20\n\n# Put your code here to build the training and predict_labels function",
"Time to train the model\nNow we can train our model by calling train_model(mini_batch_index). To predict labels, we can use the function predict_labels(data).",
"# Train your network on mini batches"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/05_review/labs/5_train.ipynb
|
apache-2.0
|
[
"Training on Cloud AI Platform\nLearning Objectives\n- Use CAIP to run a distributed training job\nIntroduction\nAfter having testing our training pipeline both locally and in the cloud on a susbset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model. \nThis notebook illustrates how to do distributed training and hyperparameter tuning on Cloud AI Platform. \nTo start, we'll set up our environment variables as before.",
"PROJECT = \"cloud-training-demos\" # Replace with your PROJECT\nBUCKET = \"cloud-training-bucket\" # Replace with your BUCKET\nREGION = \"us-central1\" # Choose an available region for Cloud AI Platform\nTFVERSION = \"1.14\" # TF version for CAIP to use\n\nimport os\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\nos.environ[\"TFVERSION\"] = TFVERSION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION",
"Next, we'll look for the preprocessed data for the babyweight model and copy it over if it's not there.",
"%%bash\nif ! gsutil ls -r gs://$BUCKET | grep -q gs://$BUCKET/babyweight/preproc; then\n gsutil mb -l ${REGION} gs://${BUCKET}\n # copy canonical set of preprocessed files if you didn't do previous notebook\n gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}\nfi\n\n%%bash\ngsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*",
"In the previous labs we developed our TensorFlow model and got it working on a subset of the data. Now we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.\nTrain on Cloud AI Platform\nTraining on Cloud AI Platform requires two things:\n- Configuring our code as a Python package\n- Using gcloud to submit the training code to Cloud AI Platform\nMove code into a Python package\nA Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.\nThe bash command touch creates an empty file in the specified location, the directory babyweight should already exist.",
"%%bash\ntouch babyweight/trainer/__init__.py",
"We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.\nExercise 1\nThe cell below write the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguements for the following variables\n- nnsize which represents the hidden layer sizes to use for DNN feature columns\n- nembeds which represents the embedding size of a cross of n key real-valued parameters\n- train_examples which represents the number of examples (in thousands) to run the training job\n- eval_steps which represents the positive number of steps for which to evaluate model\n- pattern which specifies a pattern that has to be in input files. For example '00001-of' would process only one shard. For this variable, set 'of' to be the default. \nBe sure to include a default value for the parsed arguments above and specfy the type if necessary.",
"%%writefile babyweight/trainer/task.py\nimport argparse\nimport json\nimport os\n\nimport tensorflow as tf\n\nfrom . import model\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--bucket\",\n help=\"GCS path to data. We assume that data is in \\\n gs://BUCKET/babyweight/preproc/\",\n required=True\n )\n parser.add_argument(\n \"--output_dir\",\n help=\"GCS location to write checkpoints and export models\",\n required=True\n )\n parser.add_argument(\n \"--batch_size\",\n help=\"Number of examples to compute gradient over.\",\n type=int,\n default=512\n )\n parser.add_argument(\n \"--job-dir\",\n help=\"this model ignores this field, but it is required by gcloud\",\n default=\"junk\"\n )\n \n # TODO: Your code goes here\n \n # TODO: Your code goes here\n \n # TODO: Your code goes here\n \n # TODO: Your code goes here\n \n # TODO: Your code goes here\n \n # Parse arguments\n args = parser.parse_args()\n arguments = args.__dict__\n\n # Pop unnecessary args needed for gcloud\n arguments.pop(\"job-dir\", None)\n\n # Assign the arguments to the model variables\n output_dir = arguments.pop(\"output_dir\")\n model.BUCKET = arguments.pop(\"bucket\")\n model.BATCH_SIZE = arguments.pop(\"batch_size\")\n model.TRAIN_STEPS = (\n arguments.pop(\"train_examples\") * 1000) / model.BATCH_SIZE\n model.EVAL_STEPS = arguments.pop(\"eval_steps\")\n print (\"Will train for {} steps using batch_size={}\".format(\n model.TRAIN_STEPS, model.BATCH_SIZE))\n model.PATTERN = arguments.pop(\"pattern\")\n model.NEMBEDS = arguments.pop(\"nembeds\")\n model.NNSIZE = arguments.pop(\"nnsize\")\n print (\"Will use DNN size of {}\".format(model.NNSIZE))\n\n # Append trial_id to path if we are doing hptuning\n # This code can be removed if you are not using hyperparameter tuning\n output_dir = os.path.join(\n output_dir,\n json.loads(\n os.environ.get(\"TF_CONFIG\", \"{}\")\n ).get(\"task\", {}).get(\"trial\", \"\")\n )\n\n # Run the training job\n 
model.train_and_evaluate(output_dir)",
"In the same way we can write to the file model.py the model that we developed in the previous notebooks. \nExercise 2\nComplete the TODOs in the code cell below to create out model.py. We'll use the code we wrote for the Wide & Deep model. Look back at your 3_tensorflow_wide_deep notebook and copy/paste the necessary code from that notebook into its place in the cell below.",
"%%writefile babyweight/trainer/model.py\nimport shutil\nimport numpy as np\nimport tensorflow as tf\n\ntf.logging.set_verbosity(tf.logging.INFO)\n\nBUCKET = None # set from task.py\nPATTERN = \"of\" # gets all files\n\n# Determine CSV and label columns\n# TODO: Your code goes here\n\n# Set default values for each CSV column\n# TODO: Your code goes here\n\n# Define some hyperparameters\nTRAIN_STEPS = 10000\nEVAL_STEPS = None\nBATCH_SIZE = 512\nNEMBEDS = 3\nNNSIZE = [64, 16, 4]\n\n# Create an input function reading a file using the Dataset API\n# Then provide the results to the Estimator API\ndef read_dataset(prefix, mode, batch_size):\n def _input_fn():\n def decode_csv(value_column):\n # TODO: Your code goes here\n \n # Use prefix to create file path\n file_path = \"gs://{}/babyweight/preproc/{}*{}*\".format(\n BUCKET, prefix, PATTERN)\n\n # Create list of files that match pattern\n file_list = tf.gfile.Glob(filename=file_path)\n\n # Create dataset from file list\n # TODO: Your code goes here\n \n # In training mode, shuffle the dataset and repeat indefinitely\n # TODO: Your code goes here\n \n dataset = # TODO: Your code goes here\n\n # This will now return batches of features, label\n return dataset\n return _input_fn\n\n# Define feature columns\ndef get_wide_deep():\n # TODO: Your code goes here\n return wide, deep\n\n\n# Create serving input function to be able to serve predictions later using provided inputs\ndef serving_input_fn():\n # TODO: Your code goes here\n return tf.estimator.export.ServingInputReceiver(\n features=features, receiver_tensors=feature_placeholders)\n\n# create metric for hyperparameter tuning\ndef my_rmse(labels, predictions):\n pred_values = predictions[\"predictions\"]\n return {\"rmse\": tf.metrics.root_mean_squared_error(\n labels=labels, predictions=pred_values)}\n\n# Create estimator to train and evaluate\ndef train_and_evaluate(output_dir):\n # TODO: Your code goes here",
"Train locally\nAfter moving the code to a package, make sure it works as a standalone. Note, we incorporated the --pattern and --train_examples flags so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change the pattern so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...\nExercise 3\nFill in the missing code in the TODOs below so that we can run a very small training job over a single file (i.e. use the pattern equal to \"00000-of-\") with 1 train step and 1 eval step",
"%%bash\necho \"bucket=${BUCKET}\"\nrm -rf babyweight_trained\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight\npython -m trainer.task \\\n --bucket= # TODO: Your code goes here\n --output_dir= # TODO: Your code goes here\n --job-dir=./tmp \\\n --pattern= # TODO: Your code goes here\n --train_examples= # TODO: Your code goes here\n --eval_steps= # TODO: Your code goes here",
"Making predictions\nThe JSON below represents an input into your prediction model. Write the input.json file below with the next cell, then run the prediction locally to assess whether it produces predictions correctly.",
"%%writefile inputs.json\n{\"is_male\": \"True\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}\n{\"is_male\": \"False\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}",
"Exercise 4\nFinish the code in cell below to run a local prediction job on the inputs.json file we just created. You will need to provide two additional flags\n- one for model-dir specifying the location of the model binaries\n- one for json-instances specifying the location of the json file on which you want to predict",
"%%bash\nMODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)\necho $MODEL_LOCATION\ngcloud ai-platform local predict # TODO: Your code goes here",
"Training on the Cloud with CAIP\nOnce the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> an hour </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.\nExercise 5\nLook at the TODOs in the code cell below and fill in the missing information. Some of the required flags are already there for you. You will need to provide the rest.",
"%%bash\nOUTDIR=gs://${BUCKET}/babyweight/trained_model\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngcloud ai-platform jobs submit training $JOBNAME \\\n --region= # TODO: Your code goes here\n --module-name= # TODO: Your code goes here\n --package-path= # TODO: Your code goes here\n --job-dir= # TODO: Your code goes here\n --staging-bucket=gs://$BUCKET \\\n --scale-tier= #TODO: Your code goes here\n --runtime-version= #TODO: Your code goes here\n -- \\\n --bucket=${BUCKET} \\\n --output_dir=${OUTDIR} \\\n --train_examples=200000",
"When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word \"dict\" and saw that the last line was:\n<pre>\nSaving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186\n</pre>\nThe final RMSE was 1.03 pounds.\n<h2> Optional: Hyperparameter tuning </h2>\n<p>\nAll of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.xml and pass it as --configFile.\nThis step will take <b>1 hour</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.\n\n\n#### **Exercise 6**\n\nWe need to create a .yaml file to pass with our hyperparameter tuning job. Fill in the TODOs below for each of the parameters we want to include in our hyperparameter search.",
"%writefile hyperparam.yaml\ntrainingInput:\n scaleTier: STANDARD_1\n hyperparameters:\n hyperparameterMetricTag: rmse\n goal: MINIMIZE\n maxTrials: 20\n maxParallelTrials: 5\n enableTrialEarlyStopping: True\n params:\n - parameterName: batch_size\n type: # TODO: Your code goes here\n minValue: # TODO: Your code goes here\n maxValue: # TODO: Your code goes here\n scaleType: # TODO: Your code goes here\n - parameterName: nembeds\n type: # TODO: Your code goes here\n minValue: # TODO: Your code goes here\n maxValue: # TODO: Your code goes here\n scaleType: # TODO: Your code goes here\n - parameterName: nnsize\n type: # TODO: Your code goes here\n minValue: # TODO: Your code goes here\n maxValue: # TODO: Your code goes here\n scaleType: # TODO: Your code goes here\n\n%%bash\nOUTDIR=gs://${BUCKET}/babyweight/hyperparam\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngcloud ai-platform jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=trainer.task \\\n --package-path=$(pwd)/babyweight/trainer \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=STANDARD_1 \\\n --config=hyperparam.yaml \\\n --runtime-version=$TFVERSION \\\n -- \\\n --bucket=${BUCKET} \\\n --output_dir=${OUTDIR} \\\n --eval_steps=10 \\\n --train_examples=20000",
"<h2> Repeat training </h2>\n<p>\nThis time with tuned parameters (note last line)",
"%%bash\nOUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngcloud ai-platform jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=trainer.task \\\n --package-path=$(pwd)/babyweight/trainer \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=STANDARD_1 \\\n --runtime-version=$TFVERSION \\\n -- \\\n --bucket=${BUCKET} \\\n --output_dir=${OUTDIR} \\\n --train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281",
"Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
python/geolocate_a_city_or_country.ipynb
|
mit
|
[
"Title: Geolocate A City Or Country \nSlug: geolocate_a_city_or_country\nSummary: Geolocate a city or country. \nDate: 2016-09-21 12:00\nCategory: Python\nTags: Other\nAuthors: Chris Albon \nThis tutorial creates a function that attempts to take a city and country and return its latitude and longitude. But when the city is unavailable (which is often be the case), the returns the latitude and longitude of the center of the country.\nPreliminaries",
"from geopy.geocoders import Nominatim\ngeolocator = Nominatim()\nimport numpy as np",
"Create Geolocation Function",
"def geolocate(city=None, country=None):\n '''\n Inputs city and country, or just country. Returns the lat/long coordinates of \n either the city if possible, if not, then returns lat/long of the center of the country.\n '''\n \n # If the city exists,\n if city != None:\n # Try\n try:\n # To geolocate the city and country\n loc = geolocator.geocode(str(city + ',' + country))\n # And return latitude and longitude\n return (loc.latitude, loc.longitude)\n # Otherwise\n except:\n # Return missing value\n return np.nan\n # If the city doesn't exist\n else:\n # Try\n try:\n # Geolocate the center of the country\n loc = geolocator.geocode(country)\n # And return latitude and longitude \n return (loc.latitude, loc.longitude)\n # Otherwise\n except:\n # Return missing value\n return np.nan",
"Geolocate A City And Country",
"# Geolocate a city and country\ngeolocate(city='Austin', country='USA')",
"Geolocate Just A Country",
"# Geolocate just a country\ngeolocate(country='USA')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/tf-estimator-tutorials
|
00_Miscellaneous/tf_train_eval_export/Tutorial - TensorFlow from Estimators to Keras.ipynb
|
apache-2.0
|
[
"TensorFlow: From Estimators to Keras\n\nBuilding a custom TensorFlow estimator (as a reference)\nUse Census classification dataset\nCreate feature columns from the estimator\nImplement a tf.data input_fn\nCreate a custom estimator using tf.keras.layers\nTrain and evaluate the model\n\n\nBuilding a Functional Keras model and using tf.data APIs\nModify the input_fn to process categorical features\nBuild a Functional Keras Model\nUse the input_fn to fit the Keras model\nConfigure epochs and validation\nConfigure callbacks for early stopping and checkpoints\n\n\nSave and Load Keras model\nExport Keras model to saved_model\nConverting Keras model to estimator\nConcluding Remarks\n\n<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/tf-estimator-tutorials/blob/master/00_Miscellaneous/tf_train_eval_export/Tutorial%20-%20TensorFlow%20from%20Estimators%20to%20Keras.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"import math\nimport os\nimport pandas as pd\nimport numpy as np\nfrom datetime import datetime\n\nimport tensorflow as tf\nfrom tensorflow import data\n\nprint(\"TensorFlow : {}\".format(tf.__version__))\n\nSEED = 19831060",
"Download the Data",
"DATA_DIR='data'\n!mkdir $DATA_DIR\n!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR\n!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR\nTRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')\nEVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')\n\nTRAIN_DATA_SIZE = 32561\nEVAL_DATA_SIZE = 16278",
"Dataset Metadata",
"HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',\n 'marital_status', 'occupation', 'relationship', 'race', 'gender',\n 'capital_gain', 'capital_loss', 'hours_per_week',\n 'native_country', 'income_bracket']\n\nHEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],\n [0], [0], [0], [''], ['']]\n\nNUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']\nCATEGORICAL_FEATURE_NAMES = ['gender', 'race', 'education', 'marital_status', 'relationship', \n 'workclass', 'occupation', 'native_country']\n\nFEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES\nTARGET_NAME = 'income_bracket'\nTARGET_LABELS = [' <=50K', ' >50K']\nWEIGHT_COLUMN_NAME = 'fnlwgt'\nNUM_CLASSES = len(TARGET_LABELS)\n\ndef get_categorical_features_vocabolary():\n data = pd.read_csv(TRAIN_DATA_FILE, names=HEADER)\n return {\n column: list(data[column].unique()) \n for column in data.columns if column in CATEGORICAL_FEATURE_NAMES\n }\n\nfeature_vocabolary = get_categorical_features_vocabolary()\nprint(feature_vocabolary)",
"Building a TensorFlow Custom Estimator\n\nCreating feature columns\nCreating model_fn\nCreate estimator using the model_fn\nDefine data input_fn\nDefine Train and evaluate experiment\nRun experiment with parameters\n\n1. Create feature columns",
"def create_feature_columns():\n \n feature_columns = []\n \n for column in NUMERIC_FEATURE_NAMES:\n feature_column = tf.feature_column.numeric_column(column)\n feature_columns.append(feature_column)\n \n for column in CATEGORICAL_FEATURE_NAMES:\n vocabolary = feature_vocabolary[column]\n embed_size = int(math.sqrt(len(vocabolary)))\n feature_column = tf.feature_column.embedding_column(\n tf.feature_column.categorical_column_with_vocabulary_list(column, vocabolary), \n embed_size)\n feature_columns.append(feature_column)\n \n return feature_columns\n",
"2. Create model_fn\n\nUse feature columns to create input_layer\nUse tf.keras.layers to define the model architecutre and output\nUse binary_classification_head for create EstimatorSpec",
"def model_fn(features, labels, mode, params):\n \n is_training = True if mode == tf.estimator.ModeKeys.TRAIN else False\n \n # model body\n def _inference(features, mode, params):\n \n feature_columns = create_feature_columns()\n input_layer = tf.feature_column.input_layer(features=features, feature_columns=feature_columns)\n dense_inputs = input_layer\n for i in range(len(params.hidden_units)):\n dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)\n dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense, training=is_training)\n dense_inputs = dense_dropout\n fully_connected = dense_inputs \n logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)\n return logits\n \n # model head\n head = tf.contrib.estimator.binary_classification_head(\n label_vocabulary=TARGET_LABELS,\n weight_column=WEIGHT_COLUMN_NAME\n )\n \n return head.create_estimator_spec(\n features=features,\n mode=mode,\n logits=_inference(features, mode, params),\n labels=labels,\n optimizer=tf.train.AdamOptimizer(params.learning_rate)\n )\n ",
"3. Create estimator",
"def create_estimator(params, run_config):\n \n feature_columns = create_feature_columns()\n \n estimator = tf.estimator.Estimator(\n model_fn,\n params=params,\n config=run_config\n )\n \n return estimator",
"4. Data Input Function",
"def make_input_fn(file_pattern, batch_size, num_epochs, \n mode=tf.estimator.ModeKeys.EVAL):\n \n def _input_fn():\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=file_pattern,\n batch_size=batch_size,\n column_names=HEADER,\n column_defaults=HEADER_DEFAULTS,\n label_name=TARGET_NAME,\n field_delim=',',\n use_quote_delim=True,\n header=False,\n num_epochs=num_epochs,\n shuffle= (mode==tf.estimator.ModeKeys.TRAIN)\n )\n return dataset\n \n return _input_fn",
"5. Experiment Definition",
"def train_and_evaluate_experiment(params, run_config):\n \n # TrainSpec ####################################\n train_input_fn = make_input_fn(\n TRAIN_DATA_FILE,\n batch_size=params.batch_size,\n num_epochs=None,\n mode=tf.estimator.ModeKeys.TRAIN\n )\n \n train_spec = tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps=params.traning_steps\n )\n ############################################### \n \n # EvalSpec ####################################\n eval_input_fn = make_input_fn(\n EVAL_DATA_FILE,\n num_epochs=1,\n batch_size=params.batch_size,\n )\n\n eval_spec = tf.estimator.EvalSpec(\n name=datetime.utcnow().strftime(\"%H%M%S\"),\n input_fn = eval_input_fn,\n steps=None,\n start_delay_secs=0,\n throttle_secs=params.eval_throttle_secs\n )\n ###############################################\n\n tf.logging.set_verbosity(tf.logging.INFO)\n \n if tf.gfile.Exists(run_config.model_dir):\n print(\"Removing previous artefacts...\")\n tf.gfile.DeleteRecursively(run_config.model_dir)\n \n print('')\n estimator = create_estimator(params, run_config)\n print('')\n \n time_start = datetime.utcnow() \n print(\"Experiment started at {}\".format(time_start.strftime(\"%H:%M:%S\")))\n print(\".......................................\") \n\n tf.estimator.train_and_evaluate(\n estimator=estimator,\n train_spec=train_spec, \n eval_spec=eval_spec\n )\n\n time_end = datetime.utcnow() \n print(\".......................................\")\n print(\"Experiment finished at {}\".format(time_end.strftime(\"%H:%M:%S\")))\n print(\"\")\n time_elapsed = time_end - time_start\n print(\"Experiment elapsed time: {} seconds\".format(time_elapsed.total_seconds()))\n \n return estimator\n",
"6. Run Experiment with Parameters",
"MODELS_LOCATION = 'models/census'\nMODEL_NAME = 'dnn_classifier'\nmodel_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)\n\nparams = tf.contrib.training.HParams(\n batch_size=200,\n traning_steps=1000,\n hidden_units=[100, 70, 50],\n learning_rate=0.01,\n dropout_prob=0.2,\n eval_throttle_secs=0,\n)\n\nstrategy = None\nnum_gpus = len([device_name for device_name in tf.contrib.eager.list_devices()\n if '/device:GPU' in device_name])\n\nif num_gpus > 1:\n strategy = tf.contrib.distribute.MirroredStrategy()\n params.batch_size = int(math.ceil(params.batch_size / num_gpus))\n\nrun_config = tf.estimator.RunConfig(\n tf_random_seed=SEED,\n save_checkpoints_steps=200,\n keep_checkpoint_max=3,\n model_dir=model_dir,\n train_distribute=strategy\n)\n\ntrain_and_evaluate_experiment(params, run_config)",
"Building a Keras Model\n\nImplement a data input_fn process the data for the Keras model\nCreate the Keras model\nCreate the callbacks\nRun the experiment\n\n1. Data input_fn\nA typical way of feed data into Keras is to convert it to a numpy array and pass it to the model.fit() function of the model. However, in other (probably more parctical) cases, all the data may not fit into the memory of your worker. Thus, you woud need to either create a reader that reads your data chuck by chuck, and pass it to model.fit_generator(), or to use the tf.data.Dataset APIs, which are much easier to use.\nIn the input_fn, \n1. Create a CSV dataset (similar to the one used with the TensorFlow Custom Estimator)\n2. Create lookups for categorical features vocabolary to numerical index\n3. Process the dataset features to:\n * extrat the instance weight column\n * convert the categorical features to numerical index",
"def make_keras_input_fn(file_pattern, batch_size, mode=tf.estimator.ModeKeys.EVAL):\n \n mapping_tables = {}\n \n mapping_tables[TARGET_NAME] = tf.contrib.lookup.index_table_from_tensor(\n mapping=tf.constant(TARGET_LABELS))\n\n for feature_name in CATEGORICAL_FEATURE_NAMES:\n mapping_tables[feature_name] = tf.contrib.lookup.index_table_from_tensor(\n mapping=tf.constant(feature_vocabolary[feature_name]))\n try:\n tf.tables_initializer().run(session=tf.keras.backend.get_session()) \n except:\n pass\n \n def _process_features(features, target):\n \n weight = features.pop(WEIGHT_COLUMN_NAME)\n target = mapping_tables[TARGET_NAME].lookup(target)\n for feature in CATEGORICAL_FEATURE_NAMES:\n features[feature] = mapping_tables[feature].lookup(features[feature])\n return features, target, weight\n \n def _input_fn():\n \n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=file_pattern,\n batch_size=batch_size,\n column_names=HEADER,\n column_defaults=HEADER_DEFAULTS,\n label_name=TARGET_NAME,\n field_delim=',',\n use_quote_delim=True,\n header=False,\n shuffle= (mode==tf.estimator.ModeKeys.TRAIN)\n ).map(_process_features)\n\n return dataset\n \n return _input_fn",
"2. Create the keras model\n\nCreate the model architecture: because Keras models do not suppurt feature columns (yet), we need to create:\nOne input for each feature\nEmbedding layer for each categorical feature\nSigmoid output\n\n\nCompile the model",
"def create_model(params):\n \n inputs = []\n to_concat = []\n\n for column in HEADER:\n if column not in [WEIGHT_COLUMN_NAME, TARGET_NAME]:\n if column in NUMERIC_FEATURE_NAMES:\n numeric_input = tf.keras.layers.Input(shape=(1, ), name=column, dtype='float32')\n inputs.append(numeric_input)\n to_concat.append(numeric_input)\n else:\n categorical_input = tf.keras.layers.Input(shape=(1, ), name=column, dtype='int32')\n inputs.append(categorical_input)\n vocabulary_size = len(feature_vocabolary[column])\n embed_size = int(math.sqrt(vocabulary_size))\n embedding = tf.keras.layers.Embedding(input_dim=vocabulary_size, \n output_dim=embed_size)(categorical_input)\n reshape = tf.keras.layers.Reshape(target_shape=(embed_size, ))(embedding)\n to_concat.append(reshape)\n \n input_layer = tf.keras.layers.Concatenate(-1)(to_concat) \n dense_inputs = input_layer\n for i in range(len(params.hidden_units)):\n dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)\n dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense)#, training=is_training)\n dense_inputs = dense_dropout\n fully_connected = dense_inputs \n logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)\n \n sigmoid = tf.keras.layers.Activation(activation='sigmoid', name='probability')(logits)\n\n # keras model\n model = tf.keras.models.Model(inputs=inputs, outputs=sigmoid)\n \n model.compile(\n loss='binary_crossentropy', \n optimizer='adam', \n metrics=['accuracy']\n )\n\n return model\n\n!mkdir $model_dir/checkpoints\n!ls $model_dir",
"3. Define callbacks\n\nEarly stopping callback\nCheckpoints callback",
"callbacks = [\n tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2),\n tf.keras.callbacks.ModelCheckpoint(\n os.path.join(model_dir,'checkpoints', 'model-{epoch:02d}.h5'), \n monitor='val_loss', \n period=1)\n]\n\nfrom keras.utils.training_utils import multi_gpu_model\nmodel = create_model(params)\n# model = multi_gpu_model(model, gpus=4) # This is to train the model with multiple GPUs\nmodel.summary()",
"4. Run experiment\nWhen using out-of-memory dataset, that is, reading data chuck by chuck from file(s) and feeding it to the model (using the tf.data.Dataset APIs), you usually do not know the size of the dataset. Thus, beside the number of epochs required to train the model for, you need to specify how many step is considered as an epoch. \nThis is not required when use an in-memory (numpy array) dataset, since the size of the dataset is know to the model, hence how many steps here are in the epoch.\nIn our experiment, we know the size of our dataset, thus we compute the steps_per_epoch as: training data size /batch size",
"train_data = make_keras_input_fn(\n TRAIN_DATA_FILE,\n batch_size=params.batch_size,\n mode=tf.estimator.ModeKeys.TRAIN\n)()\n\n\nvalid_data = make_keras_input_fn(\n EVAL_DATA_FILE,\n batch_size=params.batch_size,\n mode=tf.estimator.ModeKeys.EVAL\n)()\n\nsteps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE/float(params.batch_size)))\nmodel.fit(\n train_data, \n epochs=5, \n steps_per_epoch=steps_per_epoch,\n validation_data=valid_data,\n validation_steps=steps_per_epoch,\n callbacks=callbacks\n)\n\n!ls $model_dir/checkpoints",
"Save and Load Keras Model for Prediction",
"keras_model_dir = os.path.join(model_dir, 'keras_classifier.h5')\nmodel.save(keras_model_dir)\nprint(\"Keras model saved to: {}\".format(keras_model_dir))\nmodel = tf.keras.models.load_model(keras_model_dir)\nprint(\"Keras model loaded.\")\n\npredict_data = make_keras_input_fn(\n EVAL_DATA_FILE,\n batch_size=5,\n mode=tf.estimator.ModeKeys.EVAL\n )()\n\npredictions = map(\n lambda probability: TARGET_LABELS[0] if probability <0.5 else TARGET_LABELS[1], \n model.predict(predict_data, steps=1)\n)\n\nprint(list(predictions))",
"Export Keras Model as saved_model for tf.Serving",
"os.environ['MODEL_DIR'] = model_dir\nexport_dir = os.path.join(model_dir, 'export')\n\nfrom tensorflow.contrib.saved_model.python.saved_model import keras_saved_model\nkeras_saved_model.save_keras_model(model, export_dir)\n\n%%bash\n\nsaved_models_base=${MODEL_DIR}/export/\nsaved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)\necho ${saved_model_dir}\nls ${saved_model_dir}\nsaved_model_cli show --dir=${saved_model_dir} --all",
"Convert to Estimator for Distributed Training...",
"estimator = tf.keras.estimator.model_to_estimator(model)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DiCarloLab-Delft/PycQED_py3
|
examples/MeasurementControl - adaptive sampling use cases.ipynb
|
mit
|
[
"Tutorial: Measurement Control - adaptive sampling\nAuthor(s): Victor Negîrneac\nLast update: 2020-03-25\nThis is an advanced tutorial that focuses on adaptive sampling. If you are new to PycQED measurements and mesurements flow control, take a look first at PycQED_py3/examples/MeasurementControl.ipynb. It covers the basics of measurement control, soft(ware) and hard(ware) measurements, etc..\nContents covered in this notebook\nWe can mix soft(ware) and hard(ware) measurements but for simplicity this notebook focuses on the soft measurements. In the \"normal\" soft (vs \"adaptive\" soft) measurements MC is in charge of the measurement loop and consecutively sets and gets datapoints according to a pre-determined list of points, usually a rectangular-like regular grid (i.e. uniform sampling in each dimension).\nOn the other hand, for an adaptive measurment the datapoints are determined dynamically during the measurement loop itself. Any optimization falls into this case. Furthermore, here we focus on soft adaptive measurements. (I would call hard adaptive a sampling algorithm running on an FPGA.)\nThis tutorial is structured in the following way, a sampling problem is stated and a possible solution based on adaptive sampling is shown to highlight the available features. We will start with a few uniform sampling examples to showcase the advatages provided by the adaptive sampling approach.\nFuture reproducibility of this notebook\nPycQED and its dependencies are a rapidly evolving repositories therefore this notebook might stop working properly with the latest packages at any moment in the future. In order to always be able to reproduce this notebook, below you can find the software versions used in this tutorial as well as the commit hash of PycQED at the moment of writing.\nNB: if you run the two cells below you will have to git reset the file to get the original output back",
"from pycqed.utilities import git_utils as gu\nimport pycqed as pq\nprint_output = True\npycqed_path = pq.__path__[0]\n#status_pycqed, _ = gu.git_status(repo_dir=pycqed_path, print_output=print_output)\nlast_commit_pycqed, _ = gu.git_get_last_commit(repo_dir=pycqed_path, author=None, print_output=print_output)\n\nfrom platform import python_version\npython_v = python_version()\n\nipython_v = !ipython --version\njupyterlab_v = !jupyter-lab --version\n\nprint()\nprint(\"Python version: \", python_v)\nprint(\"iPython version: \", ipython_v[0])\nprint(\"Jupyter Lab version: \", jupyterlab_v[0])\n\n# In case you are not able to run this notebook you can setup a virtual env with the following pacakges\n\n!pip list",
"Import required modules",
"%matplotlib inline\nimport adaptive\nimport matplotlib.pyplot as plt\nimport pycqed as pq\nimport numpy as np\nfrom pycqed.measurement import measurement_control as mc\n#from pycqed.measurement.sweep_functions import None_Sweep\n#import pycqed.measurement.detector_functions as det\n\nfrom qcodes import station\nstation = station.Station()\n\nimport pycqed.analysis_v2.measurement_analysis as ma2\n\nfrom importlib import reload\nfrom pycqed.utilities.general import print_exception",
"Creating an instance of MeasurementControl",
"MC = mc.MeasurementControl('MC',live_plot_enabled=True, verbose=True)\nMC.station = station\nstation.add_component(MC)\n\nMC.persist_mode(True) # Turns on and off persistent plotting from previous run\nMC.verbose(True)\nMC.plotting_interval(.4)\nMC.live_plot_enabled(True)",
"Create instruments used in the experiment\nWe will use a dummy instrument behaving like a Chevron measurement",
"import pycqed.instrument_drivers.physical_instruments.dummy_instruments as di\nreload(di)\n\nintr_name = \"dummy_chevron\"\nif intr_name in station.components.keys():\n # Reset instr if it exists from previously running this cell\n station.close_and_remove_instrument(intr_name)\n del dummy_chevron\n \ndummy_chevron = di.DummyChevronAlignmentParHolder(intr_name)\nstation.add_component(dummy_chevron)",
"Problem: How to observe the features of a 1D function with the minimum number of sampling points?\nSimple 1D uniform sweep",
"dummy_chevron.delay(.001)\ndummy_chevron.noise(0.05)\ndummy_chevron.t(20e-9)\ndummy_chevron.detuning_swt_spt(12.5e9)\n\nnpoints = 20\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.4 * dummy_chevron.amp_center_2()]\n\nMC.soft_avg(1)\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_sweep_points(np.linspace(bounds[0], bounds[-1], npoints))\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D uniform'\ndat = MC.run(label, mode=\"1D\")\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Adaptive 1D sampling",
"MC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': adaptive.Learner1D,\n 'bounds': bounds,\n 'goal': lambda l: l.npoints >= npoints\n })\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D adaptive'\ndat = MC.run(label, mode=\"adaptive\")\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Adaptive 1D sampling, poor choice of bounds + noise\nIn this example it didn't found the peak and started sampling a noisy area (mind the plot range)",
"bounds = [0.7 * dummy_chevron.amp_center_2(), 1.6 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': adaptive.Learner1D,\n 'bounds': bounds,\n 'goal': lambda l: l.npoints >= npoints\n })\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D adaptive fail'\ndat = MC.run(label, mode=\"adaptive\")\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Of course it is not bullet proof, poor choice of boundaries and noise might get into the way, but we can help it a bit\nTo achieve this we use tools that are available in PycQED.\nNB: The tools (loss and goal making functions) in pycqed.utilities.learner1D_minimizer require the use of a modified verion of the learner: pycqed.utilities.learner1D_minimizer.Learner1D_Minimizer.\nOther issues might arise and the Learner1D_Minimizer is flexible to be adjusted for other cases.\nWe can impose minum sampling priority to segments with length below certain distance, i.e. force minimum resolution",
"from adaptive.learner.learner1D import default_loss\nfrom pycqed.utilities import learner1D_minimizer as l1dm\nreload(l1dm)\n\ndummy_chevron.delay(.0)\n\n# Live plotting has a significant overhead, uncomment to see difference\n# MC.live_plot_enabled(False)\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: l.npoints >= npoints,\n 'loss_per_interval': l1dm.mk_res_loss_func(\n default_loss_func=default_loss,\n # do not split segments that are x3 smaller than uniform sampling\n min_distance=(bounds[-1] - bounds[0]) / npoints / 3)\n })\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D adaptive segment size'\ndat = MC.run(label, mode=\"adaptive\")\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"High resolution for reference of unerlying model (no noise)",
"dummy_chevron.delay(.05)\ndummy_chevron.noise(.0)\ndummy_chevron.detuning_swt_spt(2 * np.pi * 2e9)\n\nnpoints = 100\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.4 * dummy_chevron.amp_center_2()]\n\nMC.soft_avg(1)\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_sweep_points(np.linspace(bounds[0], bounds[-1], npoints))\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D uniform HR'\ndat = MC.run(label, mode=\"1D\")\nma2.Basic1DAnalysis(label=label, close_figs=False)\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: l.npoints >= npoints,\n 'loss_per_interval': l1dm.mk_res_loss_func(\n default_loss_func=default_loss,\n # do not split segments that are x3 smaller than uniform sampling\n min_distance=(bounds[-1] - bounds[0]) / npoints / 3)\n })\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D adaptive HR'\ndat = MC.run(label, mode=\"adaptive\")\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"For more cool animations and other examples of adaptive sampling, visit the adaptive package documentation and tutorials.\nProblem: How to maximize this noisy function?\nFor the 1D case there are probably many decent solutions, but this also serves to give some intuition for the N-dimensional generalization.\nTip:\nMany optimizers are minimizers by design or by default. MC detects the minimize: False option when passed to set_adaptive_function_parameters so that any minimizer can be used as a maximizer.\nMeet the SKOptLearner\nThis is a wrapper included in the adaptive package that wraps around scikit-optimize.\nInteresting features include optimization over integers and Bayesian optimization (not based on gradients, therefore it has some noise resilience), besides the N-dim capabilities.\nNB: might not be appropriate for functions that are quick to evaluate, as the model that it builds under the hood might be computationally expensive.\nNB2: due to some probabilistic factors inside this learner, plus the noise in our functions, it will take a distinct number of iterations to reach the maximum on each run.\nNB3: from experience it might get stuck at the boundaries sometimes; some configuration exploration might be required to make it work for your case",
"dummy_chevron.delay(.2)\ndummy_chevron.noise(.05)\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.6 * dummy_chevron.amp_center_2()]\nnpoints = 30 # Just in case\n\ntarget_f = 0.99\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': adaptive.SKOptLearner,\n # this one has its own paramters, might require exploring it\n 'dimensions': [bounds], \n 'base_estimator': \"GP\", \n 'acq_func': \"gp_hedge\",\n 'acq_optimizer': \"lbfgs\",\n 'goal': lambda l: l.npoints >= npoints,\n 'minimize': False,\n 'f_termination': target_f,\n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize skopt'\ntry:\n dat = MC.run(label, mode=\"adaptive\")\nexcept StopIteration as e:\n print(e)\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Meet the homemade Learner1D_Minimizer (and its tools)\n(built with the blood, sweat and tears of master students)",
"dummy_chevron.delay(.2)\ndummy_chevron.noise(.05)\n\n# If the optimal point were in the middle of the bounds it would be trivial to find\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.6 * dummy_chevron.amp_center_2()]\nnpoints = 40 # Just in case\n\nloss = l1dm.mk_minimization_loss_func(max_no_improve_in_local=4)\ngoal = l1dm.mk_minimization_goal_func()\n\ntarget_f = 0.999\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n # the modified learner requires the call of a dedicated goal function that takes care of certain things\n # goal(learner) always returns False so that it can be chained with the user goal\n # mind the sign! This can easily lead to mistakes: the learner will get the inverse of our detector output\n 'goal': lambda l: goal(l) or l.npoints >= npoints or l.last_min <= -target_f,\n 'loss_per_interval': loss,\n 'minimize': False,\n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Problem: What if I want to fit the peak and maximize the number of points on it?\nThis solution was developed for flux bias calibration through chevron alignment, see pycqed.instrument_drivers.meta_instrument.device_object_CCL.measure_chevron_1D_bias_sweep",
"# We want a specific number of points on the peak in order to fit\nminimizer_threshold = 0.6\nmax_pnts_beyond_threshold = 20\n\n# For this case there is a dedicated goal func\ngoal = l1dm.mk_min_threshold_goal_func(\n    max_pnts_beyond_threshold=max_pnts_beyond_threshold\n)\n# and a specific option in the loss function\nloss = l1dm.mk_minimization_loss_func(\n    threshold=-minimizer_threshold)\n\nadaptive_pars = {\n    \"adaptive_function\": l1dm.Learner1D_Minimizer,\n    \"goal\": lambda l: goal(l) or l.npoints > npoints,\n    \"bounds\": bounds,\n    \"loss_per_interval\": loss,\n    \"minimize\": False,\n}\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters(adaptive_pars)\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize peak points for fit'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Problem: What if I know I am already in a local optimum and just want to converge?\nNote that we are using a noisy model",
"# Set the maximum sampling points budget\nnpoints = 40\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=4,\n converge_at_local=True)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.92 * dummy_chevron.amp_center_2(), 1.08 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize already in local'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Problem: What if I want to converge to the first local optimum that is below a threshold?\nBlindly converging to the local optimum might be an issue with noise or outliers; setting a threshold is safer",
"# Set the maximum sampling points budget\nnpoints = 20\ntarget_f = 0.8\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=4,\n converge_below=-target_f)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n \n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize converge first local'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Problem: What if I want to sample mostly optimal regions but also understand the landscape?\nMuch of the logic of the Learner1D_Minimizer relies on the balance between sampling around the best-seen optimal values and the size of the largest segments of the landscape.",
"dummy_chevron.noise(.05)\nnpoints = 50\ninterval_weight = 100.\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=4,\n interval_weight=interval_weight)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n \n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"interval_weight is a knob that controls the bias towards large intervals vs intervals that share a point close to the best-seen optimum.\ninterval_weight was arbitrarily defined to take values in the range [0., 1000.]. You need to play a bit to get a feeling for what value your particular case requires. The default should already give reasonable results.\ninterval_weight=0. sets maximum sampling priority on the intervals containing the best-seen optimal point.",
"npoints = 50\ninterval_weight = 0.\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=4,\n interval_weight=interval_weight)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n \n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"interval_weight=1000. sets maximum priority on the largest interval, which translates into uniform sampling.",
"npoints = 50\ninterval_weight = 1000.\nmax_no_improve_in_local = 4\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=max_no_improve_in_local,\n interval_weight=interval_weight)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n \n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Note that it still found the global optimum.\nThe bias between large intervals and optimal values is the strategy used to search for the global optimum. Every time the learner finds a new best optimum, the corresponding intervals get maximum sampling priority, and that priority persists for max_no_improve_in_local new samples.\nYou may set max_no_improve_in_local=2 to almost fully impose uniform sampling, or increase it to make the sampler more persistent in exploring local optima.",
"npoints = 50\ninterval_weight = 1000.\nmax_no_improve_in_local = 2\n\nloss = l1dm.mk_minimization_loss_func(\n max_no_improve_in_local=max_no_improve_in_local,\n interval_weight=interval_weight)\ngoal = l1dm.mk_minimization_goal_func()\n\nbounds = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\nMC.set_sweep_function(dummy_chevron.amp)\nMC.set_adaptive_function_parameters({\n 'adaptive_function': l1dm.Learner1D_Minimizer,\n 'bounds': bounds,\n 'goal': lambda l: goal(l) or l.npoints >= npoints,\n 'loss_per_interval': loss,\n 'minimize': False,\n \n})\n\nMC.set_detector_function(dummy_chevron.frac_excited)\nlabel = '1D maximize'\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Problem: What if I would like to run two adaptive samplers with distinct settings in the same domain?\nUse case: sample the chevron on both sides (negative and positive amplitudes).\nThe distinct settings here are the boundaries, which correspond to a small positive-amplitude region and a small negative-amplitude region. The basic way of achieving this is to run two distinct experiments, i.e. call MC.run(...) twice, and end up with two files that need merging later.\nThere is a new MC feature that runs an outer loop over a list of adaptive samplers! Everything is kept in the same dataset.\nWe could potentially run distinct types of adaptive samplers and/or optimizers in the same dataset, though this has not been tested yet.",
"adaptive_sampling_pts = 50\nmax_no_improve_in_local = 4\nmax_pnts_beyond_threshold = 15\n\namps = [0.6 * dummy_chevron.amp_center_2(), 1.8 * dummy_chevron.amp_center_2()]\n\ngoal = l1dm.mk_min_threshold_goal_func(\n max_pnts_beyond_threshold=max_pnts_beyond_threshold)\nloss = l1dm.mk_minimization_loss_func(\n threshold=-minimizer_threshold, interval_weight=100.0)\n\nadaptive_pars_pos = {\n \"adaptive_function\": l1dm.Learner1D_Minimizer,\n \"goal\": lambda l: goal(l) or l.npoints > adaptive_sampling_pts,\n \"bounds\": amps,\n \"loss_per_interval\": loss,\n \"minimize\": False,\n}\n\nadaptive_pars_neg = {\n \"adaptive_function\": l1dm.Learner1D_Minimizer,\n \"goal\": lambda l: goal(l) or l.npoints > adaptive_sampling_pts,\n # NB: order of the bounds matters, mind negative numbers ordering\n \"bounds\": np.flip(-np.array(amps), 0),\n \"loss_per_interval\": loss,\n \"minimize\": False,\n}\n\nMC.set_sweep_function(dummy_chevron.amp)\nadaptive_pars = {\n \"multi_adaptive_single_dset\": True,\n \"adaptive_pars_list\": [adaptive_pars_pos, adaptive_pars_neg],\n}\n\nMC.set_adaptive_function_parameters(adaptive_pars)\nlabel = \"1D multi_adaptive_single_dset\"\ndat = MC.run(label, mode=\"adaptive\")\n\nma2.Basic1DAnalysis(label=label, close_figs=False)",
"Get ready for the mind blow\nProblem: What if we want to do the same thing but also sweep a few points linearly in a second dimension?\nExample: sweep the flux bias so that it can be calibrated to align the chevrons.\nMC has an extra loop that allows for that as well!\nIt is used in pycqed.instrument_drivers.meta_instrument.device_object_CCL.measure_chevron_1D_bias_sweep",
"import warnings\nimport logging\nlog = logging.getLogger()\nlog.setLevel(\"ERROR\")\n\ndummy_chevron.delay(0.1)\nMC.plotting_interval(1.)\n\nadaptive_sampling_pts = 40\nmax_no_improve_in_local = 4\nmax_pnts_beyond_threshold = 10\n\namps = [0.6 * dummy_chevron.amp_center_2(), 1.4 * dummy_chevron.amp_center_2()]\n\ngoal = l1dm.mk_min_threshold_goal_func(\n    max_pnts_beyond_threshold=max_pnts_beyond_threshold)\nloss = l1dm.mk_minimization_loss_func(\n    threshold=-minimizer_threshold, interval_weight=100.0)\n\nadaptive_pars_pos = {\n    \"adaptive_function\": l1dm.Learner1D_Minimizer,\n    \"goal\": lambda l: goal(l) or l.npoints > adaptive_sampling_pts,\n    \"bounds\": amps,\n    \"loss_per_interval\": loss,\n    \"minimize\": False,\n}\n\nadaptive_pars_neg = {\n    \"adaptive_function\": l1dm.Learner1D_Minimizer,\n    \"goal\": lambda l: goal(l) or l.npoints > adaptive_sampling_pts,\n    # NB: order of the bounds matters, mind negative numbers ordering\n    \"bounds\": np.flip(-np.array(amps), 0),\n    \"loss_per_interval\": loss,\n    \"minimize\": False,\n}\n\nflux_bias_par = dummy_chevron.flux_bias\nmv_bias_by = [-150e-6, 150e-6, 75e-6]\nflux_bias_par(180e-6)\n\n# Mind that the order matters: linearly swept pars go at the end\nMC.set_sweep_functions([dummy_chevron.amp, flux_bias_par])\nadaptive_pars = {\n    \"multi_adaptive_single_dset\": True,\n    \"adaptive_pars_list\": [adaptive_pars_pos, adaptive_pars_neg],\n    \"extra_dims_sweep_pnts\": flux_bias_par() + np.array(mv_bias_by),\n}\n\nMC.set_adaptive_function_parameters(adaptive_pars)\nlabel = \"1D multi_adaptive_single_dset extra_dims_sweep_pnts\"\n\nwith warnings.catch_warnings():\n    # ignore some warnings; interpolation needs some extra features to support this mode\n    warnings.simplefilter(\"ignore\")\n    dat = MC.run(label, mode=\"adaptive\")\n\nlog.setLevel(\"WARNING\")\nma2.Basic2DInterpolatedAnalysis(label=label, close_figs=False)",
"Take it to the next level(s): LearnerND_Minimizer\nIt works in a similar way to the Learner1D_Minimizer\nTODO\n\n[ ] 2D adaptive\n[ ] ND adaptive"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
antoniomezzacapo/qiskit-tutorial
|
community/aqua/general/evolution.ipynb
|
apache-2.0
|
[
"Using Qiskit Aqua's quantum evolution functionality\nThis notebook demonstrates how to realize quantum evolution using the Qiskit Aqua library.\nFurther information is available for the algorithms in the github repo aqua/readme.md\nFirst, an Operator instance is created for our randomly generated Hamiltonian. We also randomly generate an initial quantum state state_in.",
"import numpy as np\nfrom qiskit_aqua.operator import Operator\nfrom qiskit_aqua import get_initial_state_instance\n\nnum_qubits = 2\nevo_time = 1\ntemp = np.random.random((2 ** num_qubits, 2 ** num_qubits))\nh1 = temp + temp.T\nqubitOp = Operator(matrix=h1)\nstate_in = get_initial_state_instance('CUSTOM')\nstate_in.init_args(num_qubits, state='random')",
"With the operator and the initial state, we can easily compute the groundtruth evolution result as follows.",
"from scipy.linalg import expm\n\nstate_in_vec = state_in.construct_circuit('vector')\ngroundtruth = expm(-1.j * h1 * evo_time) @ state_in_vec\nprint('The directly computed groundtruth evolution result state is\\n{}.'.format(groundtruth))",
"The evolve method as provided by the Operator class also provides the ability to compute the evolution groundtruth via the same matrix and vector multiplication. Therefore, we can also compute the evolution's groundtruth result state as follows, which we can easily verify to be the same as the groundtruth we just computed.",
"groundtruth_evolution = qubitOp.evolve(state_in_vec, evo_time, 'matrix', 0)\nprint('The groundtruth evolution result as computed by the Dynamics algorithm is\\n{}.'.format(groundtruth_evolution))\nnp.testing.assert_allclose(groundtruth_evolution, groundtruth)",
"Next, let's actually build the quantum circuit, which involves the circuit for putting the system in the specified initial state, and the actual evolution circuit corresponding to the operator we generated, for which, let's, for example, use the 3rd order suzuki expansion.",
"from qiskit import QuantumCircuit, QuantumRegister\n\nquantum_registers = QuantumRegister(qubitOp.num_qubits)\ncircuit = state_in.construct_circuit('circuit', quantum_registers)\ncircuit += qubitOp.evolve(\n None, evo_time, 'circuit', 1,\n quantum_registers=quantum_registers,\n expansion_mode='suzuki',\n expansion_order=3\n)",
"With the circuit built, we can now execute the circuit to get the evolution result. We use the statevector_simulator backend for the purpose of this demonstration.",
"from qiskit.wrapper import execute as q_execute\nfrom qiskit import Aer\n\nbackend = Aer.get_backend('statevector_simulator')\n\njob = q_execute(circuit, backend)\ncircuit_execution_result = np.asarray(job.result().get_statevector(circuit))\nprint('The evolution result state from executing the Dynamics circuit is\\n{}.'.format(circuit_execution_result))",
"We can then check the fidelity between the groundtruth and the circuit_execution_result.",
"from qiskit.tools.qi.qi import state_fidelity\n\nprint('Fidelity between the groundtruth and the circuit result states is {}.'.format(\n state_fidelity(groundtruth, circuit_execution_result)\n))",
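For intuition about what state_fidelity computes: for pure states given as statevectors it is just the squared overlap $|\langle a|b\rangle|^2$. A minimal standalone sketch in plain NumPy (independent of Qiskit, for illustration only):

```python
import numpy as np

def pure_state_fidelity(a, b):
    """Fidelity of two pure states given as statevectors: |<a|b>|^2."""
    a = np.asarray(a, dtype=complex) / np.linalg.norm(a)
    b = np.asarray(b, dtype=complex) / np.linalg.norm(b)
    return np.abs(np.vdot(a, b)) ** 2

# Identical states have fidelity 1; orthogonal states have fidelity 0.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
print(pure_state_fidelity(plus, plus))   # 1.0
print(pure_state_fidelity(plus, minus))  # 0.0
```

A circuit result close to the groundtruth therefore yields a fidelity close to 1.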
"As seen, the fidelity is very close to 1, indicating that the quantum circuit produced is a good approximation of the intended evolution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ledeprogram/algorithms
|
class6/donow/Gruen_Gianna_6_donow.ipynb
|
gpl-3.0
|
[
"1. Import the necessary packages to read in the data, plot, and create a linear regression model",
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport statsmodels.formula.api as smf",
"2. Read in the hanford.csv file",
"df = pd.read_csv('hanford.csv')\ndf",
"3. Calculate the basic descriptive statistics on the data",
"df.describe()\n\niqr = df.quantile(q=0.75) - df.quantile(q=0.25)\niqr\n\nual = df.quantile(q=0.75) + (iqr * 1.5)\nual\n\nlal = df.quantile(q=0.25) - (iqr * 1.5)\nlal",
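The ual/lal bounds computed above follow the standard 1.5 * IQR outlier rule (Q3 + 1.5 * IQR and Q1 - 1.5 * IQR). A standalone sketch of the same rule on toy values (made-up data, not the hanford dataset):

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])  # 100 is an outlier

# Quartiles and interquartile range
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Upper/lower adequate limits, as in the cells above
upper = q3 + 1.5 * iqr
lower = q1 - 1.5 * iqr

outliers = data[(data < lower) | (data > upper)]
print(outliers)  # [100]
```

Values outside [lower, upper] are flagged as potential outliers.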
"4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?",
"df.corr()",
"Yes, it seems very likely that there's a correlation worth investigating\n5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure",
"lm = smf.ols(formula=\"Mortality~Exposure\",data=df).fit()\nlm.params\n\n\nintercept, slope = lm.params\n\nexposure_input = input(\"Type in an exposure you'd like to know the mortality for:\")\nif exposure_input:\n    prediction = (float(lm.params['Exposure']) * float(exposure_input)) + (float(lm.params['Intercept']))\n    print(prediction)",
"6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)",
"fig, ax = plt.subplots(figsize=(7,7))\nplt.style.use('ggplot')\n\nax = df.plot(ax = ax, kind= 'scatter', x = 'Exposure', y = 'Mortality')\nplt.plot(df['Exposure'],slope*df['Exposure']+intercept, color=\"red\", linewidth=2)\n\nr = df.corr()['Exposure']['Mortality']\nr\n\ncoefficient_determination = r **2\ncoefficient_determination",
"7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10",
"prediction = (float(lm.params['Exposure']) * 10 + (float(lm.params['Intercept'])))\nprint(prediction)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mdeff/ntds_2016
|
algorithms/02_ex_clustering.ipynb
|
mit
|
[
"A Network Tour of Data Science\n Xavier Bresson, Winter 2016/17\nExercise 4 - Code 2 : Unsupervised Learning\nUnsupervised Clustering with Kernel K-Means",
"# Load libraries\n\n# Math\nimport numpy as np\n\n# Visualization \n%matplotlib notebook \nimport matplotlib.pyplot as plt\nplt.rcParams.update({'figure.max_open_warning': 0})\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom scipy import ndimage\n\n# Print output of LFR code\nimport subprocess\n\n# Sparse matrix\nimport scipy.sparse\nimport scipy.sparse.linalg\n\n# 3D visualization\nimport pylab\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import pyplot\n\n# Import data\nimport scipy.io\n\n# Import functions in lib folder\nimport sys\nsys.path.insert(1, 'lib')\n\n# Import helper functions\n%load_ext autoreload\n%autoreload 2\nfrom lib.utils import construct_kernel\nfrom lib.utils import compute_kernel_kmeans_EM\nfrom lib.utils import compute_kernel_kmeans_spectral\nfrom lib.utils import compute_purity\n\n# Import distance function\nimport sklearn.metrics.pairwise\n\n# Remove warnings\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# Load MNIST raw data images\nmat = scipy.io.loadmat('datasets/mnist_raw_data.mat')\nX = mat['Xraw']\nn = X.shape[0]\nd = X.shape[1]\nCgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()\nnc = len(np.unique(Cgt))\nprint('Number of data =',n)\nprint('Data dimensionality =',d);\nprint('Number of classes =',nc);",
"Question 1a: What is the clustering accuracy of standard/linear K-Means?<br>\nHint: You may use functions Ker=construct_kernel(X,'linear') to compute the\nlinear kernel and [C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_classes,Ker,Theta,10) with Theta= np.ones(n) to run the standard K-Means algorithm, and accuracy = compute_purity(C_computed,C_solution,n_clusters) that returns the\naccuracy.",
"# Your code here\n",
"Question 1b: What is the clustering accuracy for the kernel K-Means algorithm with<br>\n(1) Gaussian Kernel for the EM approach and the Spectral approach?<br>\n(2) Polynomial Kernel for the EM approach and the Spectral approach?<br>\nHint: You may use functions Ker=construct_kernel(X,'gaussian') and Ker=construct_kernel(X,'polynomial',[1,0,2]) to compute the non-linear kernels<br>\nHint: You may use functions C_kmeans,__ = compute_kernel_kmeans_EM(K,Ker,Theta,10) for the EM kernel KMeans algorithm and C_kmeans,__ = compute_kernel_kmeans_spectral(K,Ker,Theta,10) for the Spectral kernel K-Means algorithm.<br>",
"# Your code here\n",
"Question 1c: What is the clustering accuracy for the kernel K-Means algorithm with<br>\n(1) KNN_Gaussian Kernel for the EM approach and the Spectral approach?<br>\n(2) KNN_Cosine_Binary Kernel for the EM approach and the Spectral approach?<br>\nYou can test for the value KNN_kernel=50.<br>\nHint: You may use functions Ker = construct_kernel(X,'kNN_gaussian',KNN_kernel)\nand Ker = construct_kernel(X,'kNN_cosine_binary',KNN_kernel) to compute the\nnon-linear kernels.",
"# Your code here\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ko/tutorials/structured_data/imbalanced_data.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Classification on imbalanced data\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/structured_data/imbalanced_data\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/imbalanced_data.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/imbalanced_data.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a> </td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/structured_data/imbalanced_data.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\nThis tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in the other. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions out of 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data.\nThis tutorial contains complete code to:\n\nLoad a CSV file using Pandas.\nCreate train, validation, and test sets.\nDefine and train a model using Keras (including setting class weights).\nEvaluate the model using various metrics (including precision and recall).\nTry common techniques for dealing with imbalanced data, such as:\nClass weighting\nOversampling\n\n\n\nSetup",
"import tensorflow as tf\nfrom tensorflow import keras\n\nimport os\nimport tempfile\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nimport sklearn\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\nmpl.rcParams['figure.figsize'] = (12, 10)\ncolors = plt.rcParams['axes.prop_cycle'].by_key()['color']",
"Data processing and exploration\nThe Kaggle Credit Card Fraud dataset\nPandas is a Python library with many helpful utilities for loading and working with structured data. It can be used to download CSVs into a Pandas DataFrame.\nNote: This dataset was collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and on the DefeatFraud project page.",
"file = tf.keras.utils\nraw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')\nraw_df.head()\n\nraw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()",
"Examine the class label imbalance\nLet's look at the dataset imbalance:",
"neg, pos = np.bincount(raw_df['Class'])\ntotal = neg + pos\nprint('Examples:\\n Total: {}\\n Positive: {} ({:.2f}% of total)\\n'.format(\n total, pos, 100 * pos / total))",
"This shows the small fraction of positive samples.\nClean, split, and normalize the data\nThe raw data has a few issues. First, the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.",
"cleaned_df = raw_df.copy()\n\n# You don't want the `Time` column.\ncleaned_df.pop('Time')\n\n# The `Amount` column covers a huge range. Convert to log-space.\neps = 0.001 # 0 => 0.1¢\ncleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)",
"Split the dataset into train, validation, and test sets. The validation set is used during model fitting to evaluate the loss and any metrics, but the model is not fit to this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where overfitting is a significant concern due to the lack of training data.",
"# Use a utility from sklearn to split and shuffle your dataset.\ntrain_df, test_df = train_test_split(cleaned_df, test_size=0.2)\ntrain_df, val_df = train_test_split(train_df, test_size=0.2)\n\n# Form np arrays of labels and features.\ntrain_labels = np.array(train_df.pop('Class'))\nbool_train_labels = train_labels != 0\nval_labels = np.array(val_df.pop('Class'))\ntest_labels = np.array(test_df.pop('Class'))\n\ntrain_features = np.array(train_df)\nval_features = np.array(val_df)\ntest_features = np.array(test_df)",
"Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and the standard deviation to 1.\nNote: The StandardScaler is only fit using the train_features, to be sure the model is not peeking at the validation or test sets.",
"scaler = StandardScaler()\ntrain_features = scaler.fit_transform(train_features)\n\nval_features = scaler.transform(val_features)\ntest_features = scaler.transform(test_features)\n\ntrain_features = np.clip(train_features, -5, 5)\nval_features = np.clip(val_features, -5, 5)\ntest_features = np.clip(test_features, -5, 5)\n\n\nprint('Training labels shape:', train_labels.shape)\nprint('Validation labels shape:', val_labels.shape)\nprint('Test labels shape:', test_labels.shape)\n\nprint('Training features shape:', train_features.shape)\nprint('Validation features shape:', val_features.shape)\nprint('Test features shape:', test_features.shape)\n",
"Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.\nLook at the data distribution\nNext, compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:\n\nDo these distributions make sense?\nYes. You've normalized the input, and these are mostly concentrated in the +/- 2 range.\n\n\nCan you see the difference between the distributions?\nYes, the positive examples contain a much higher rate of extreme values.",
"pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)\nneg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)\n\nsns.jointplot(pos_df['V5'], pos_df['V6'],\n kind='hex', xlim=(-5,5), ylim=(-5,5))\nplt.suptitle(\"Positive distribution\")\n\nsns.jointplot(neg_df['V5'], neg_df['V6'],\n kind='hex', xlim=(-5,5), ylim=(-5,5))\n_ = plt.suptitle(\"Negative distribution\")",
"Define the model and metrics\nDefine a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:",
"METRICS = [\n keras.metrics.TruePositives(name='tp'),\n keras.metrics.FalsePositives(name='fp'),\n keras.metrics.TrueNegatives(name='tn'),\n keras.metrics.FalseNegatives(name='fn'), \n keras.metrics.BinaryAccuracy(name='accuracy'),\n keras.metrics.Precision(name='precision'),\n keras.metrics.Recall(name='recall'),\n keras.metrics.AUC(name='auc'),\n keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve\n]\n\ndef make_model(metrics=METRICS, output_bias=None):\n if output_bias is not None:\n output_bias = tf.keras.initializers.Constant(output_bias)\n model = keras.Sequential([\n keras.layers.Dense(\n 16, activation='relu',\n input_shape=(train_features.shape[-1],)),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(1, activation='sigmoid',\n bias_initializer=output_bias),\n ])\n\n model.compile(\n optimizer=keras.optimizers.Adam(learning_rate=1e-3),\n loss=keras.losses.BinaryCrossentropy(),\n metrics=metrics)\n\n return model",
"Understanding useful metrics\nNotice that a few of the metrics defined above can be computed by the model and will be helpful when evaluating performance.\n\nFalse negatives and false positives are samples that were incorrectly classified.\nTrue negatives and true positives are samples that were correctly classified.\nAccuracy is the percentage of examples correctly classified.\n\n\n$\\frac{\\text{true samples}}{\\text{total samples}}$\n\n\nPrecision is the percentage of predicted positives that were correctly classified.\n\n\n$\\frac{\\text{true positives}}{\\text{true positives + false positives}}$\n\n\nRecall is the percentage of actual positives that were correctly classified.\n\n\n$\\frac{\\text{true positives}}{\\text{true positives + false negatives}}$\n\n\nAUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.\nAUPRC refers to the Area Under the Curve of the Precision-Recall curve. This metric computes precision-recall pairs for different probability thresholds.\n\nNote: Accuracy is not a helpful metric for this task. You can have 99.8%+ accuracy on this task by predicting False all the time.\nRead more:\n\nTrue vs. False and Positive vs. Negative\nAccuracy\nPrecision and Recall\nROC-AUC\nRelationship between Precision-Recall and ROC curves\n\nBaseline model\nBuild the model\nNow create and train your model using the function defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048. This is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, the model would likely have no fraudulent transactions to learn from.\nNote: this model will not handle the class imbalance well. You will improve it later in this tutorial.",
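To make the metric formulas concrete, here is a quick sanity check with made-up confusion counts (not from this dataset), showing how accuracy can look excellent while recall is poor on imbalanced data:

```python
# Toy confusion counts for a highly imbalanced problem (hypothetical numbers):
tp, fp, tn, fn = 30, 20, 99_900, 50

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"accuracy:  {accuracy:.4f}")   # 0.9993 -- looks great...
print(f"precision: {precision:.4f}")  # 0.6000
print(f"recall:    {recall:.4f}")     # 0.3750 -- ...but most positives are missed
```

This is exactly why the tutorial tracks precision, recall, and AUPRC rather than relying on accuracy.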
"EPOCHS = 100\nBATCH_SIZE = 2048\n\nearly_stopping = tf.keras.callbacks.EarlyStopping(\n monitor='val_auc', \n verbose=1,\n patience=10,\n mode='max',\n restore_best_weights=True)\n\nmodel = make_model()\nmodel.summary()",
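As a quick standalone check of the metric formulas defined above, the three ratios can be computed directly from confusion counts. The counts here (`tp`, `fp`, `tn`, `fn`) are made-up illustrative numbers, not output from the model:

```python
# Hand-computed accuracy, precision, and recall from toy confusion counts.
tp, fp, tn, fn = 80, 10, 900, 10  # illustrative values only

accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction classified correctly
precision = tp / (tp + fp)                   # predicted positives that are real
recall = tp / (tp + fn)                      # real positives that were caught

print(accuracy, precision, recall)
```

This is the same arithmetic `keras.metrics.BinaryAccuracy`, `Precision`, and `Recall` perform internally from their accumulated counts.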
"Run the model to test it:",
"model.predict(train_features[:10])",
"Optional: Set the correct initial bias.\nThese initial guesses are not great. You know the dataset is imbalanced. Setting the output layer's bias to reflect that (see: A Recipe for Training Neural Networks: \"init well\") can help with initial convergence.\nWith the default bias initialization, the loss should be about math.log(2) = 0.69314.",
"results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)\nprint(\"Loss: {:0.4f}\".format(results[0]))",
"The correct bias to set can be derived from:\n$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$ $$ b_0 = -log_e(1/p_0 - 1) $$ $$ b_0 = log_e(pos/neg)$$",
"initial_bias = np.log([pos/neg])\ninitial_bias",
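As a standalone sanity check of the identity above (the class counts here are made up; the notebook's `pos` and `neg` come from the actual dataset), the sigmoid of $b_0 = \log(pos/neg)$ should recover the base rate $p_0$:

```python
import math

# Made-up class counts standing in for the notebook's pos/neg.
pos, neg = 492, 284315

p0 = pos / (pos + neg)      # base rate of the positive class
b0 = math.log(pos / neg)    # proposed initial output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# sigmoid(b0) reproduces the base rate
print(b0, sigmoid(b0), p0)
```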
"Set that as the initial bias, and the model will give much more reasonable initial guesses.\nIt should be near: pos/total = 0.0018",
"model = make_model(output_bias=initial_bias)\nmodel.predict(train_features[:10])",
"With this initialization, the initial loss should be approximately:\n$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$",
"results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)\nprint(\"Loss: {:0.4f}\".format(results[0]))",
"This initial loss is about 50 times less than it would have been with naive initialization.\nThis way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. It also makes it easier to read plots of the loss during training.\nCheckpoint the initial weights\nTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file and load them into each model before training:",
"initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')\nmodel.save_weights(initial_weights)",
"Confirm that the bias fix helps\nBefore moving on, quickly confirm that the careful bias initialization actually helped.\nTrain the model for 20 epochs, with and without this careful initialization, and compare the losses:",
"model = make_model()\nmodel.load_weights(initial_weights)\nmodel.layers[-1].bias.assign([0.0])\nzero_bias_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=20,\n validation_data=(val_features, val_labels), \n verbose=0)\n\nmodel = make_model()\nmodel.load_weights(initial_weights)\ncareful_bias_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=20,\n validation_data=(val_features, val_labels), \n verbose=0)\n\ndef plot_loss(history, label, n):\n # Use a log scale to show the wide range of values.\n plt.semilogy(history.epoch, history.history['loss'],\n color=colors[n], label='Train '+label)\n plt.semilogy(history.epoch, history.history['val_loss'],\n color=colors[n], label='Val '+label,\n linestyle=\"--\")\n plt.xlabel('Epoch')\n plt.ylabel('Loss')\n \n plt.legend()\n\nplot_loss(zero_bias_history, \"Zero Bias\", 0)\nplot_loss(careful_bias_history, \"Careful Bias\", 1)",
"The above figure makes it clear: in terms of validation loss, this careful initialization gives a clear advantage.\nTrain the model",
"model = make_model()\nmodel.load_weights(initial_weights)\nbaseline_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n callbacks=[early_stopping],\n validation_data=(val_features, val_labels))",
"Check training history\nIn this section, you will produce plots of the model's accuracy and loss on the training and validation sets. These are useful for checking for overfitting, which you can learn more about in the Overfit and underfit tutorial.\nAdditionally, you can produce these plots for any of the metrics created above. False negatives are included as an example.",
"def plot_metrics(history):\n metrics = ['loss', 'auc', 'precision', 'recall']\n for n, metric in enumerate(metrics):\n name = metric.replace(\"_\",\" \").capitalize()\n plt.subplot(2,2,n+1)\n plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')\n plt.plot(history.epoch, history.history['val_'+metric],\n color=colors[0], linestyle=\"--\", label='Val')\n plt.xlabel('Epoch')\n plt.ylabel(name)\n if metric == 'loss':\n plt.ylim([0, plt.ylim()[1]])\n elif metric == 'auc':\n plt.ylim([0.8,1])\n else:\n plt.ylim([0,1])\n\n plt.legend()\n\n\nplot_metrics(baseline_history)",
"Note: The validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.\nEvaluate metrics\nYou can use a confusion matrix to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label:",
"train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)\n\ndef plot_cm(labels, predictions, p=0.5):\n cm = confusion_matrix(labels, predictions > p)\n plt.figure(figsize=(5,5))\n sns.heatmap(cm, annot=True, fmt=\"d\")\n plt.title('Confusion matrix @{:.2f}'.format(p))\n plt.ylabel('Actual label')\n plt.xlabel('Predicted label')\n\n print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])\n print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])\n print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])\n print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])\n print('Total Fraudulent Transactions: ', np.sum(cm[1]))",
"Evaluate the model on the test dataset and display the results for the metrics created above:",
"baseline_results = model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(model.metrics_names, baseline_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_baseline)",
"If the model had predicted everything perfectly, this would be a diagonal matrix where the off-diagonal values, which indicate incorrect predictions, would be zero. In this case, the matrix shows relatively few false positives, meaning that relatively few legitimate transactions were incorrectly flagged. However, you would likely want even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives allow fraudulent transactions to go through, whereas a false positive may only cause an email to be sent to a customer asking them to verify their card activity.\nPlot the ROC\nNow plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.",
"def plot_roc(name, labels, predictions, **kwargs):\n fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)\n\n plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)\n plt.xlabel('False positives [%]')\n plt.ylabel('True positives [%]')\n plt.xlim([-0.5,20])\n plt.ylim([80,100.5])\n plt.grid(True)\n ax = plt.gca()\n ax.set_aspect('equal')\n\nplot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\nplt.legend(loc='lower right')",
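Returning to the threshold trade-off discussed above, here is a minimal standalone sketch (using synthetic scores, not this model's predictions) showing that lowering the decision threshold converts false negatives into false positives:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.RandomState(0)
# 900 synthetic negatives with mostly low scores, 100 positives with high scores
labels = np.r_[np.zeros(900), np.ones(100)].astype(int)
scores = np.r_[rng.beta(2, 5, 900), rng.beta(5, 2, 100)]

counts = {}
for p in (0.5, 0.2):
    tn, fp, fn, tp = confusion_matrix(labels, scores > p).ravel()
    counts[p] = {'fp': int(fp), 'fn': int(fn)}
    print('threshold={}: FP={}, FN={}'.format(p, fp, fn))
```

At the lower threshold, more negatives are flagged (FP rises) while fewer positives are missed (FN falls); the ROC curve traces this trade-off across all thresholds.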
"Plot the AUPRC\nNow plot the AUPRC: the area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model.",
"def plot_prc(name, labels, predictions, **kwargs):\n precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)\n\n plt.plot(recall, precision, label=name, linewidth=2, **kwargs)\n plt.xlabel('Recall')\n plt.ylabel('Precision')\n plt.grid(True)\n ax = plt.gca()\n ax.set_aspect('equal')\n\nplot_prc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_prc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\nplt.legend(loc='lower right')",
"The precision appears relatively high, but the recall and the area under the ROC curve (AUC) are not as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, and this is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction incorrectly flagged as fraudulent) may decrease user satisfaction.\nClass weights\nCalculate class weights\nThe goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want the classifier to heavily weight the few examples that are available. You can do this by passing Keras a weight for each class through a parameter. These cause the model to \"pay more attention\" to examples from the under-represented class.",
"# Scaling by total/2 helps keep the loss to a similar magnitude.\n# The sum of the weights of all examples stays the same.\nweight_for_0 = (1 / neg) * (total / 2.0)\nweight_for_1 = (1 / pos) * (total / 2.0)\n\nclass_weight = {0: weight_for_0, 1: weight_for_1}\n\nprint('Weight for class 0: {:.2f}'.format(weight_for_0))\nprint('Weight for class 1: {:.2f}'.format(weight_for_1))",
"Train a model with class weights\nNow try re-training and evaluating the model with class weights to see how that affects the predictions.\nNote: Using class_weights changes the range of the loss. This may affect the stability of training, depending on the optimizer. Optimizers whose step size depends on the magnitude of the gradient, like tf.keras.optimizers.SGD, may fail. The optimizer used here, tf.keras.optimizers.Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses of the two models are not comparable.",
"weighted_model = make_model()\nweighted_model.load_weights(initial_weights)\n\nweighted_history = weighted_model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n callbacks=[early_stopping],\n validation_data=(val_features, val_labels),\n # The class weights go here\n class_weight=class_weight) ",
"Check training history",
"plot_metrics(weighted_history)",
"Evaluate metrics",
"train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)\n\nweighted_results = weighted_model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(weighted_model.metrics_names, weighted_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_weighted)",
"Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, both types of error have a cost (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either), so carefully consider the trade-offs between these different types of errors for your application.\nPlot the ROC",
"plot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_roc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_roc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\n\nplt.legend(loc='lower right')",
"Plot the AUPRC",
"plot_prc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_prc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_prc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_prc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\n\nplt.legend(loc='lower right')",
"Oversampling\nOversample the minority class\nA related approach is to resample the dataset by oversampling the minority class.",
"pos_features = train_features[bool_train_labels]\nneg_features = train_features[~bool_train_labels]\n\npos_labels = train_labels[bool_train_labels]\nneg_labels = train_labels[~bool_train_labels]",
"Using NumPy\nYou can balance the dataset manually by choosing the right number of random indices from the positive examples:",
"ids = np.arange(len(pos_features))\nchoices = np.random.choice(ids, len(neg_features))\n\nres_pos_features = pos_features[choices]\nres_pos_labels = pos_labels[choices]\n\nres_pos_features.shape\n\nresampled_features = np.concatenate([res_pos_features, neg_features], axis=0)\nresampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)\n\norder = np.arange(len(resampled_labels))\nnp.random.shuffle(order)\nresampled_features = resampled_features[order]\nresampled_labels = resampled_labels[order]\n\nresampled_features.shape",
"Using tf.data\nIf you're using tf.data, the easiest way to produce balanced examples is to start with a positive and a negative dataset and merge them. See the tf.data guide for more examples.",
"BUFFER_SIZE = 100000\n\ndef make_ds(features, labels):\n ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()\n ds = ds.shuffle(BUFFER_SIZE).repeat()\n return ds\n\npos_ds = make_ds(pos_features, pos_labels)\nneg_ds = make_ds(neg_features, neg_labels)",
"Each dataset provides (feature, label) pairs:",
"for features, label in pos_ds.take(1):\n print(\"Features:\\n\", features.numpy())\n print()\n print(\"Label: \", label.numpy())",
"Merge the two together using experimental.sample_from_datasets:",
"resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])\nresampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)\n\nfor features, label in resampled_ds.take(1):\n print(label.numpy().mean())",
"To use this dataset, you'll need the number of steps per epoch.\nThe definition of \"epoch\" in this case is less clear. Say it's the number of batches required to see each negative example once:",
"resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)\nresampled_steps_per_epoch",
"Train on the oversampled data\nNow try training the model with the resampled dataset instead of using class weights to see how these methods compare.\nNote: Because the data was balanced by replicating the positive examples, the total dataset size is larger and each epoch runs for more training steps.",
"resampled_model = make_model()\nresampled_model.load_weights(initial_weights)\n\n# Reset the bias to zero, since this dataset is balanced.\noutput_layer = resampled_model.layers[-1] \noutput_layer.bias.assign([0])\n\nval_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()\nval_ds = val_ds.batch(BATCH_SIZE).prefetch(2) \n\nresampled_history = resampled_model.fit(\n resampled_ds,\n epochs=EPOCHS,\n steps_per_epoch=resampled_steps_per_epoch,\n callbacks=[early_stopping],\n validation_data=val_ds)",
"If the training process considered the whole dataset at each gradient update, this oversampling would be basically identical to class weighting.\nBut when training the model batch-wise, as here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.\nThis smoother gradient signal makes it easier to train the model.\nCheck training history\nNote that the distributions of the metrics will be different here, because the training data has a totally different distribution from the validation and test data.",
"plot_metrics(resampled_history)",
"Re-train\nBecause training is easier on the balanced data, the above training procedure may overfit quickly.\nSo break up the epochs to give tf.keras.callbacks.EarlyStopping finer control over when to stop training.",
"resampled_model = make_model()\nresampled_model.load_weights(initial_weights)\n\n# Reset the bias to zero, since this dataset is balanced.\noutput_layer = resampled_model.layers[-1] \noutput_layer.bias.assign([0])\n\nresampled_history = resampled_model.fit(\n resampled_ds,\n # These are not real epochs\n steps_per_epoch=20,\n epochs=10*EPOCHS,\n callbacks=[early_stopping],\n validation_data=(val_ds))",
"Re-check training history",
"plot_metrics(resampled_history)",
"Evaluate metrics",
"train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)\n\nresampled_results = resampled_model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(resampled_model.metrics_names, resampled_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_resampled)",
"Plot the ROC",
"plot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_roc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_roc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\nplot_roc(\"Train Resampled\", train_labels, train_predictions_resampled, color=colors[2])\nplot_roc(\"Test Resampled\", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')\nplt.legend(loc='lower right')",
"Plot the AUPRC",
"plot_prc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_prc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_prc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_prc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\nplot_prc(\"Train Resampled\", train_labels, train_predictions_resampled, color=colors[2])\nplot_prc(\"Test Resampled\", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')\nplt.legend(loc='lower right')",
"Applying this tutorial to your problem\nImbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first: do your best to collect as many samples as possible, and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade-offs between different types of errors."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Geosyntec/pycvc
|
examples/Data Summaries.ipynb
|
bsd-3-clause
|
[
"CVC Data Summaries\nSet up the basic working environment",
"%matplotlib inline\n\nimport os\nimport sys\nimport datetime\nimport warnings\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas\nimport seaborn\nseaborn.set(style='ticks', context='paper')\n\nimport wqio\nimport pybmpdb\nimport pynsqd\n\nimport pycvc\n\nmin_precip = 1.9999\npalette = seaborn.color_palette('deep', n_colors=6)\npybmpdb.setMPLStyle()\nPOCs = [p['cvcname'] for p in filter(lambda p: p['include'], pycvc.info.POC_dicts)]\n\nif wqio.testing.checkdep_tex() is None:\n warnings.warn(\"LaTeX not found on system path. You will not be able to compile ISRs to PDF files\", UserWarning)",
"Hydrologic Relationships\n$V_{\\mathrm{runoff, \\ LV1}} = \\max\\left(0,\\: -12.05 + 2.873\\, D_{\\mathrm{precip}} + 0.863 \\, \\Delta t \\right)$",
"def LV1_runoff(row):\n return max(0, -12.0 + 2.87 * row['total_precip_depth'] + 0.863 * row['duration_hours'])",
"ED-1\n$\\log \\left(V_{\\mathrm{runoff, \\ ED1}}\\right) = 1.58 + 0.000667 \\, I_{\\mathrm{max}} + 0.0169 \\, D_{\\mathrm{precip}} $\n$V_{\\mathrm{bypass, \\ ED1}} = \\max \\left(0,\\: -26.4 + 0.184 \\, I_{\\mathrm{max}} + 1.22 \\, D_{\\mathrm{precip}} \\right)$\n$V_{\\mathrm{inflow, \\ ED1}} = \\max \\left(0,\\: V_{\\mathrm{runoff, \\ ED1}} - V_{\\mathrm{bypass, \\ ED1}} \\right)$",
"def ED1_runoff(row):\n return 10**(1.58 + 0.000667 * row['peak_precip_intensity'] + 0.0169 * row['total_precip_depth'] )\n\ndef ED1_bypass(row):\n return max(0, -26.4 + 0.184 * row['peak_precip_intensity'] + 1.22 * row['total_precip_depth'])\n\ndef ED1_inflow(row):\n return max(0, ED1_runoff(row) - ED1_bypass(row))",
"LV-2\n$\\log \\left(V_{\\mathrm{runoff, \\ LV2}}\\right) = 1.217 + 0.00622 \\, I_{\\mathrm{max}} + 0.0244 \\, D_{\\mathrm{precip}} $\n$V_{\\mathrm{bypass, \\ LV2}} = 0$\n$V_{\\mathrm{inflow, \\ LV2}} = \\max \\left(0,\\: V_{\\mathrm{runoff, \\ LV2}} - V_{\\mathrm{bypass, \\ LV2}} \\right)$",
"def LV2_runoff(row):\n return 10**(1.22 + 0.00622 * row['peak_precip_intensity'] + 0.0244 * row['total_precip_depth'] )\n\ndef LV2_bypass(row):\n return 0\n\ndef LV2_inflow(row):\n return max(0, LV2_runoff(row) - LV2_bypass(row))",
"LV-4\n$\\log \\left(V_{\\mathrm{runoff, \\ LV4}}\\right) = 1.35 + 0.00650 \\, I_{\\mathrm{max}} + 0.00940 \\, D_{\\mathrm{precip}} $\n$V_{\\mathrm{bypass, \\ LV4}} = \\max \\left(0,\\: 7.37 + 0.0370 \\, I_{\\mathrm{max}} + 0.112 \\, D_{\\mathrm{precip}} \\right)$\n$V_{\\mathrm{inflow, \\ LV4}} = \\max \\left(0,\\: V_{\\mathrm{runoff, \\ LV4}} - V_{\\mathrm{bypass, \\ LV4}} \\right)$",
"def LV4_runoff(row):\n return 10**(1.35 + 0.00650 * row['peak_precip_intensity'] + 0.00940 * row['total_precip_depth'] )\n\ndef LV4_bypass(row):\n return max(0, 7.36 + 0.0370 * row['peak_precip_intensity'] + 0.112 * row['total_precip_depth'])\n\ndef LV4_inflow(row):\n return max(0, LV4_runoff(row) - LV4_bypass(row))",
"Water quality loading relationship\n$ M_{\\mathrm{runoff}} = V_{\\mathrm{runoff}} \\times \\hat{\\mathbb{C}}_{\\mathrm{inflow}}\\left(\\mathrm{landuse,\\ season}\\right) $\n$ M_{\\mathrm{bypass}} = V_{\\mathrm{bypass}} \\times \\hat{\\mathbb{C}}_{\\mathrm{inflow}}\\left(\\mathrm{landuse,\\ season}\\right) $\n$ M_{\\mathrm{inflow}} = M_{\\mathrm{runoff}} - M_{\\mathrm{bypass}} $\n$ M_{\\mathrm{outflow}} = V_{\\mathrm{outflow}} \\times \\mathbb{C}_{\\mathrm{outflow}} $\nLoad External Data (this takes a while)",
"bmpdb = pycvc.external.bmpdb(palette[3], 'D')\nnsqdata = pycvc.external.nsqd(palette[2], 'd')",
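Before moving on, here is a minimal numeric sketch of the water quality loading relationships defined above. The volumes and concentrations are made-up illustrative values, not CVC data, and the unit choices (m³ and mg/L) are assumptions:

```python
# Assumed event volumes (m3) and concentrations (mg/L) for one storm
runoff_m3, bypass_m3, outflow_m3 = 120.0, 20.0, 85.0
c_inflow = 0.45    # assumed influent median for some parameter
c_outflow = 0.10   # assumed measured effluent concentration

L_PER_M3 = 1000.0  # unit conversion so loads come out in mg

load_runoff = runoff_m3 * L_PER_M3 * c_inflow     # M_runoff = V_runoff * C_inflow
load_bypass = bypass_m3 * L_PER_M3 * c_inflow     # M_bypass = V_bypass * C_inflow
load_inflow = load_runoff - load_bypass           # M_inflow = M_runoff - M_bypass
load_outflow = outflow_m3 * L_PER_M3 * c_outflow  # M_outflow = V_outflow * C_outflow

print(load_inflow, load_outflow)
```

The pollutant removal implied by the BMP is then `load_inflow - load_outflow` for each parameter and storm.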
"Load CVC Database",
"cvcdbfile = \"C:/users/phobson/Desktop/cvc.accdb\"\ncvcdb = pycvc.Database(cvcdbfile, nsqdata, bmpdb, testing=False)",
"Define the site object for the reference site and compute its median values (\"influent\" to other sites)",
"LV1 = pycvc.Site(db=cvcdb, siteid='LV-1', raingauge='LV-1', tocentry='Lakeview Control', \n isreference=True, influentmedians=pycvc.wqstd_template(),\n runoff_fxn=LV1_runoff, minprecip=min_precip,\n color=palette[1], marker='s')",
"Lakeview BMP sites get their \"influent\" data from LV-1",
"LV_Influent = (\n LV1.wqdata\n .query(\"sampletype == 'composite'\")\n .groupby(by=['season', 'parameter', 'units'])['concentration']\n .median()\n .reset_index()\n .rename(columns={'concentration': 'influent median'}) \n)\nLV_Influent.head()",
"Elm Drive's \"influent\" data come from NSQD",
"ED_Influent = (\n cvcdb.nsqdata\n .medians.copy()\n .rename(columns={'NSQD Medians': 'influent median'})\n)\nED_Influent.head()",
"Remaining site objects",
"ED1 = pycvc.Site(db=cvcdb, siteid='ED-1', raingauge='ED-1',\n tocentry='Elm Drive', influentmedians=ED_Influent, \n minprecip=min_precip, isreference=False,\n runoff_fxn=ED1_runoff, bypass_fxn=ED1_bypass,\n inflow_fxn=ED1_inflow, color=palette[0], marker='o')\n\nLV2 = pycvc.Site(db=cvcdb, siteid='LV-2', raingauge='LV-1',\n tocentry='Lakeview Grass Swale', influentmedians=LV_Influent, \n minprecip=min_precip, isreference=False,\n runoff_fxn=LV2_runoff, bypass_fxn=LV2_bypass,\n inflow_fxn=LV2_inflow, color=palette[4], marker='^')\n\nLV4 = pycvc.Site(db=cvcdb, siteid='LV-4', raingauge='LV-1',\n tocentry=r'Lakeview Bioswale 1$^{\\mathrm{st}}$ South Side', \n influentmedians=LV_Influent, \n minprecip=min_precip, isreference=False,\n runoff_fxn=LV4_runoff, bypass_fxn=LV4_bypass,\n inflow_fxn=LV4_inflow, color=palette[5], marker='v')",
"Fix ED-1 storm that had two composite samples",
"ED1.hydrodata.data.loc['2012-08-10 23:50:00':'2012-08-11 05:20', 'storm'] = 0\nED1.hydrodata.data.loc['2012-08-11 05:30':, 'storm'] += 1",
"Replace total inflow volume with estimate from simple method for 2013-07-08 storm",
"storm_date = datetime.date(2013, 7, 8)\nfor site in [ED1, LV1, LV2, LV4]:\n bigstorm = site.storm_info.loc[site.storm_info.start_date.dt.date == storm_date].index[0]\n inflow = site.drainagearea.simple_method(site.storm_info.loc[bigstorm, 'total_precip_depth'])\n site.storm_info.loc[bigstorm, 'inflow_m3'] = inflow\n site.storm_info.loc[bigstorm, 'runoff_m3'] = np.nan\n site.storm_info.loc[bigstorm, 'bypass_m3'] = np.nan",
"High-level summaries\nHydrologic Summary",
"with pandas.ExcelWriter(\"output/xlsx/CVCHydro_StormStats.xlsx\") as hydrofile,\\\n pandas.ExcelWriter(\"output/xlsx/CVCHydro_StormInfo.xlsx\") as stormfile:\n for site in [ED1, LV1, LV2, LV4]:\n site.storm_info.to_excel(stormfile, sheet_name=site.siteid)\n site.storm_stats().to_excel(hydrofile, sheet_name=site.siteid)",
"Hydrologic Pairplots\n(failing on LV-2's outflow is expected)",
"for site in [ED1, LV2, LV4]:\n for by in ['year', 'outflow', 'season']:\n try:\n site.hydro_pairplot(by=by)\n except Exception as e:\n print('failed on {}, {} ({})'.format(site.siteid, by, e))",
"Prevalence Tables",
"with pandas.ExcelWriter('output/xlsx/CVCWQ_DataInventory.xlsx') as prev_tables:\n for site in [ED1, LV1, LV2, LV4]:\n stype = 'composite'\n site.prevalence_table()[stype].to_excel(prev_tables, sheet_name='{}'.format(site.siteid))",
"Concentrations Stats",
"with pandas.ExcelWriter('output/xlsx/CVCWQ_ConcStats.xlsx') as concfile:\n for site in [ED1, LV1, LV2, LV4]:\n concs = site.wq_summary('concentration', sampletype='composite').T\n concs.to_excel(concfile, sheet_name=site.siteid, na_rep='--')",
"Load Stats",
"with pandas.ExcelWriter('output/xlsx/CVCWQ_LoadStats.xlsx') as loadstats:\n for site in [ED1, LV1, LV2, LV4]:\n load = (\n site.wq_summary('load_outflow', sampletype='composite')\n .stack(level='parameter')\n .stack(level='load_units')\n )\n load.to_excel(loadstats, sheet_name=site.siteid, na_rep='--')",
"Tidy Data",
"with pandas.ExcelWriter('output/xlsx/CVCWQ_TidyData.xlsx') as tidyfile:\n for site in [ED1, LV1, LV2, LV4]:\n site.tidy_data.to_excel(tidyfile, sheet_name=site.siteid, na_rep='--')",
"Total Loads Summary",
"with pandas.ExcelWriter('output/xlsx/CVCWQ_LoadTotals.xlsx') as loadfile:\n for site in [ED1, LV1, LV2, LV4]:\n loads = site.load_totals(sampletype='composite')\n loads.to_excel(loadfile, sheet_name=site.siteid, na_rep='--')",
"Analysis",
"seaborn.set(style='ticks', context='paper')\npybmpdb.setMPLStyle()",
"Individual Storm Reports\n(requires $\\LaTeX$)",
"for site in [ED1, LV1, LV2, LV4]:\n print('\\n----Compiling ISR for {0}----'.format(site.siteid))\n site.allISRs('composite', version='draft')",
"Precip-outflow scatter plots",
"for site in [ED1, LV1, LV2, LV4]:\n print('\\n----Summarizing {0}----'.format(site.siteid))\n \n site.hydro_jointplot(\n xcol='total_precip_depth', \n ycol='outflow_mm', \n conditions=\"outflow_mm > 0\", \n one2one=True\n )\n\n site.hydro_jointplot(\n xcol='antecedent_days', \n ycol='outflow_mm', \n conditions=\"outflow_mm > 0\", \n one2one=False\n )\n\n site.hydro_jointplot(\n xcol='total_precip_depth', \n ycol='antecedent_days', \n conditions=\"outflow_mm == 0\", \n one2one=False\n )\n \n site.hydro_jointplot(\n xcol='peak_precip_intensity', \n ycol='peak_outflow', \n conditions=None, \n one2one=False\n )\n \n plt.close('all')",
"WQ Comparison\nLists of sites to compare",
"site_lists = [\n [ED1],\n [LV1, LV2, LV4],\n]",
"Individual Figures",
"for sl in site_lists:\n print('\\n----Comparing {}----'.format(', '.join([s.siteid for s in sl])))\n for poc in POCs:\n print(' ' + poc)\n \n wqcomp = pycvc.summary.WQComparison(sl, 'composite', poc, nsqdata, bmpdb)\n \n wqcomp.seasonalBoxplots(load=False, finalOutput=True)\n wqcomp.seasonalBoxplots(load=True, finalOutput=True)\n \n wqcomp.landuseBoxplots(finalOutput=True)\n wqcomp.bmpCategoryBoxplots(finalOutput=True)\n \n wqcomp.parameterStatPlot(finalOutput=True)\n wqcomp.parameterStatPlot(load=True, finalOutput=True)\n \n wqcomp.parameterTimeSeries(finalOutput=True) \n wqcomp.parameterTimeSeries(load=True, finalOutput=True) \n\n plt.close('all')",
"Megafigures",
"for sl in site_lists:\n print('\\n----Megafigs with {}----'.format(', '.join([s.siteid for s in sl])))\n \n # construct the megafigures\n mf1 = pycvc.summary.WQMegaFigure(sl, 'composite', POCs[:6], 1, nsqdata, bmpdb)\n mf2 = pycvc.summary.WQMegaFigure(sl, 'composite', POCs[6:], 2, nsqdata, bmpdb)\n for n, mf in enumerate([mf1, mf2]):\n print('\\tTime Series {0}'.format(n+1))\n mf.timeseriesFigure(load=False)\n mf.timeseriesFigure(load=True)\n\n print('\\tStat plots {0}'.format(n+1))\n mf.statplotFigure(load=False)\n mf.statplotFigure(load=True)\n\n print('\\tBMPDB Boxplots {0}'.format(n+1))\n mf.bmpCategoryBoxplotFigure()\n\n print('\\tNSQD Boxplots {0}'.format(n+1))\n mf.landuseBoxplotFigure()\n\n print('\\tSeasonal Boxplots {0}'.format(n+1))\n mf.seasonalBoxplotFigure(load=False)\n mf.seasonalBoxplotFigure(load=True)\n \n plt.close('all')",
"Unsampled loading estimates\nWarning: Site objects (e.g., ED1) have hidden _unsampled_load_estimates methods that return load estimates for unsampled storms using the estimated median influent concentrations and median effluent concentrations. It is highly recommended that you aggregate these data and not draw conclusions about individual storms.\nThe cell below aggregates the data for each parameter, season, and whether the storms produced outflow. The results (sums) are then saved to an Excel file, one tab for each site.",
"cols = [\n 'duration_hours', 'total_precip_depth_mm', \n 'runoff_m3', 'bypass_m3', 'inflow_m3', 'outflow_m3', \n 'load_runoff', 'load_bypass', 'load_inflow', 'load_outflow',\n]\n\nwith pandas.ExcelWriter(\"output/xlsx/CVCHydro_UnsampledLoadEstimates.xlsx\") as unsampled_file:\n for site in [ED1, LV1, LV2, LV4]:\n loads = (\n site._unsampled_load_estimates()\n .groupby(['season', 'has_outflow', 'parameter', 'load_units'])\n .sum()\n .select(lambda c: c in cols, axis=1)\n .reset_index()\n )\n loads.to_excel(unsampled_file, sheet_name=site.siteid)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyDataTokyo/pydata-tokyo-tutorial-1
|
pydatatokyo_tutorial_ml.ipynb
|
mit
|
[
"PyData.Tokyo Tutorial & Hackathon #1\nIn addition to the monthly meetups for intermediate and advanced users, PyData.Tokyo runs tutorial events aimed at training beginners. This event focuses on the following topics:\n\nLoading data\nPreprocessing and cleaning data\nAggregation and statistical analysis\nData visualization\nBuilding a classification model with machine learning\nValidating the model's classification results\n\nThe goal of this tutorial is to build practical skills by coding against real data. The case study is building a survivor-prediction model from the Titanic passenger data. By training machine learning algorithms on passengers' age, sex, and other attributes, even beginners can predict survivors with close to 80% accuracy.\nEvent details: http://pydatatokyo.connpass.com/event/11860/\nTutorial repository: https://github.com/PyDataTokyo/pydata-tokyo-tutorial-1\nTwitter: @PyDataTokyo\n\nTutorial Part 2: \"Machine Learning\"\nGoals of Part 2\nIn Part 2 of the tutorial, you will use scikit-learn, Python's machine learning library, to learn about two things:\n- Building a classification model with machine learning\n- Validating the classification results\nPackages used\n\nPython 3.6.0\nIPython 5.1.0\nnumpy 1\npandas 0.15.2\nmatplotlib 1.4.3\nscikit-learn 0.15.2\n\nData used\nTitanic passenger data: Titanic: Machine Learning from Disaster\nNote: a Kaggle account is required to download the data.\nInstructor\nPyData.Tokyo organizer Hideki Tanaka (@atelierhide)\nHe discovered the appeal of Python × data in Silicon Valley. He later became interested in deep learning, and speaking at PyCon JP 2014 led him to start PyData.Tokyo. While working as an optical design engineer for camera lenses, he takes part in the Marsface Project (@marsfaceproject), which uses image recognition to search for structures on the surfaces of Mars and other planets of the solar system.\nAgenda\n\nBackground\nImporting libraries and preparing the data\nSurvivor prediction with the gender model and evaluation of the predictions\nSurvivor prediction with logistic regression\nCross-validation\nSurvivor prediction with a decision tree\nGrid search\n\n\n1. Background: the sinking of the Titanic\nOn April 15, 1912, the Titanic sank on her maiden voyage after colliding with an iceberg. Of the 2,224 passengers and crew, 1,502 lost their lives.\nOne reason the sinking claimed so many victims was that not enough lifeboats were provided. Luck of course played a large part in survival, but patterns can also be seen among the survivors. For example, women and children (whom the men helped first) and upper-class passengers tended to have higher survival rates.",
"from IPython.display import Image\nImage(url='http://graphics8.nytimes.com/images/section/learning/general/onthisday/big/0415_big.gif')",
"2. Importing libraries and preparing the data\nFirst, import the required libraries.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.metrics import accuracy_score, classification_report, confusion_matrix\nfrom sklearn.model_selection import train_test_split, cross_val_score, KFold, GridSearchCV\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nfrom IPython.display import Image\n\n# Configure pandas\npd.set_option('chained_assignment', None)\n\n# Use matplotlib's ggplot style so the plots look a little nicer\nplt.style.use('ggplot')\nplt.rc('xtick.major', size=0)\nplt.rc('ytick.major', size=0)",
"Load the two datasets into pandas DataFrames. train.csv is the training data (labeled data) with survival information attached to each passenger. test.csv is the test data for predicting survival and submitting to Kaggle, so it carries no survival information.",
"df_train = pd.read_csv('data/train.csv')\ndf_test = pd.read_csv('data/test.csv')",
"Let's inspect the two datasets. You can see that only df_train has the survival information (Survived).",
"df_train.tail()\n\ndf_test.tail()",
"3. Survivor prediction with the gender model and evaluation of the predictions\nThe data analysis in the first half of the tutorial showed that women have a higher survival probability than men. First, as the simplest possible model, let's consider a model that predicts survivors by sex alone (the gender model).\nSelecting the data to use\nExtract the sex data and the survival information from the training data. Features are conventionally named x and the target labels y. Here, the feature is sex and the target is the survival information. When extracting a single feature (sex), we use lowercase x, which denotes a vector, but when using two or more features we use uppercase X, which denotes a matrix. Uppercase X will appear later.",
"x = df_train['Sex']\ny = df_train['Survived']",
"Prediction with the gender model\nPredict survivors with the gender model. The gender model assumes that all women survived (1) and all men died (0). The pred in y_pred is short for prediction. Let's compute it with pandas' map.",
"y_pred = x.map({'female': 1, 'male': 0}).astype(int)",
"Evaluating the predictions\nWe evaluate the predicted values. First let's compute the accuracy, using accuracy_score.",
"print('Accuracy: {:.3f}'.format(accuracy_score(y, y_pred)))",
"We obtained an accuracy of 78.7%. Understanding the data and forming a hypothesis lets even a simple model reach high accuracy. The evaluation metric differs from competition to competition on Kaggle; in the Titanic competition the metric is accuracy.<br>\nOther metrics are also easy to compute with scikit-learn. Let's compute Precision, Recall, and F1-score with classification_report.",
"print(classification_report(y, y_pred))",
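As a cross-check on what `classification_report` prints, the per-class metrics can be computed by hand from the confusion-matrix counts. This is a minimal sketch using a tiny made-up label vector, not the Titanic data:

```python
# Hypothetical labels: 1 = survived, 0 = not survived
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# Count true positives, false positives and false negatives for class 1
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of those predicted "survived", the fraction that really did
recall = tp / (tp + fn)     # of the true survivors, the fraction we found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
print(precision, recall, f1)
```

Swapping in the real y and y_pred from the cells above should reproduce the class-1 row of classification_report.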
"A confusion matrix (Confusion Matrix) is very useful for understanding prediction results. Let's compute it with scikit-learn's [confusion_matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) and visualize the result with matplotlib.",
"cm = confusion_matrix(y, y_pred)\nprint(cm)\n\ndef plot_confusion_matrix(cm):\n fig, ax = plt.subplots()\n im = ax.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)\n ax.set_title('Confusion Matrix')\n fig.colorbar(im)\n\n target_names = ['not survived', 'survived']\n\n tick_marks = np.arange(len(target_names))\n ax.set_xticks(tick_marks)\n ax.set_xticklabels(target_names, rotation=45)\n ax.set_yticks(tick_marks)\n ax.set_yticklabels(target_names)\n ax.set_ylabel('True label')\n ax.set_xlabel('Predicted label')\n fig.tight_layout()\n\nplot_confusion_matrix(cm)",
"Predicting survivors from the test data\nExercise\nJust as with the training data, predict survivors from the test data to be submitted to Kaggle.\nSample solution",
"x_test = df_test['Sex']\ny_test_pred = x_test.map({'female': 1, 'male': 0}).astype(int)",
"Creating the submission file for Kaggle\nWe create a CSV file of the predictions for submission to Kaggle. The data that must appear in the CSV file are PassengerId and Survived (the predicted survival values). We build a DataFrame for the submission with pandas and save it in CSV format using to_csv.",
"df_kaggle = pd.DataFrame({'PassengerId': df_test['PassengerId'], 'Survived':np.array(y_test_pred)})\ndf_kaggle.to_csv('kaggle_gendermodel.csv', index=False)\n\ndf_kaggle.head()",
"Submit the kaggle_gendermodel.csv you created to Kaggle and check your score and ranking. You are all Kagglers now!\n\n4. Survivor Prediction with Logistic Regression\nHere we learn to use the machine learning algorithms implemented in scikit-learn, starting with the most basic linear model.\nSelecting the data to use\nThe gender model predicted survivors using only sex; to raise the accuracy, let's use other features too. The analysis in Part 1 showed that survival rates are higher for women, for younger passengers, and for higher passenger classes; we use this as our hypothesis. In addition to sex, we select age (Age) and passenger class (Pclass) as features.",
"X = df_train[['Age', 'Pclass', 'Sex']]\ny = df_train['Survived']",
"Let's check the feature DataFrame.",
"X.tail()",
"Age has missing values. If the training data were large enough, we could get away with simply not using the missing rows, but our training data is not very large, so we fill the missing values instead. Part 1 of the tutorial introduced several techniques for this; here we use the overall mean.",
"X['AgeFill'] = X['Age'].fillna(X['Age'].mean())\nX = X.drop(['Age'], axis=1)",
"Sex contains the values male and female, but scikit-learn cannot handle categorical values like these, so female and male must be converted to numbers. We map female to 0 and male to 1, creating a new column, Gender.",
"X['Gender'] = X['Sex'].map({'female': 0, 'male': 1}).astype(int)\n\nX.tail()",
"Next we create a new feature (Pclass_Gender) that expresses the hypothesis that women (Gender=0) in higher passenger classes (Pclass=1) have higher survival rates. The smaller the value of Pclass_Gender, the higher the survival rate.",
"X['Pclass_Gender'] = X['Pclass'] + X['Gender']\n\nX.tail()",
"This time we use two features, Pclass_Gender and AgeFill. We remove the features we no longer need with drop.",
"X = X.drop(['Pclass', 'Sex', 'Gender'], axis=1)\n\nX.head()",
"Let's visualize the data to check whether the hypothesis that survival is higher for younger passengers, for women, and for higher passenger classes is correct. The horizontal axis shows age and the vertical axis shows Pclass_Gender.",
"np.random.seed(0)\n\nxmin, xmax = -5, 85\nymin, ymax = 0.5, 4.5\n\n# y == 0: did not survive, y == 1: survived\nindex_notsurvived = y[y==0].index\nindex_survived = y[y==1].index\n\nfig, ax = plt.subplots()\ncm = plt.cm.RdBu\ncm_bright = ListedColormap(['#FF0000', '#0000FF'])\nsc = ax.scatter(X.loc[index_notsurvived, 'AgeFill'],\n X.loc[index_notsurvived, 'Pclass_Gender']+(np.random.rand(len(index_notsurvived))-0.5)*0.1,\n color='r', label='Not Survived', alpha=0.3)\nsc = ax.scatter(X.loc[index_survived, 'AgeFill'],\n X.loc[index_survived, 'Pclass_Gender']+(np.random.rand(len(index_survived))-0.5)*0.1,\n color='b', label='Survived', alpha=0.3)\nax.set_xlabel('AgeFill')\nax.set_ylabel('Pclass_Gender')\nax.set_xlim(xmin, xmax)\nax.set_ylim(ymin, ymax)\nax.legend(bbox_to_anchor=(1.4, 1.03))\nplt.show()",
"How does it look? The hypothesis holds: survivors are concentrated in the lower left of the plot.\nSplitting the training data\nIn machine learning, using all of the data for training leaves no way to evaluate the model properly, so we split the data into a training part and an evaluation part. This is easy with scikit-learn's train_test_split. Here we use 80% of the data for training and 20% for evaluation. The val in x_val and y_val is short for validation.",
"X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.8, random_state=1)\n\nX_train\n\nprint('Num of Training Samples: {}'.format(len(X_train)))\nprint('Num of Validation Samples: {}'.format(len(X_val)))",
"Prediction with logistic regression\nWe use logistic regression (LogisticRegression), a linear model. clf is short for classifier.",
"clf = LogisticRegression()",
"We train on the training split created above.",
"clf.fit(X_train, y_train)",
"Training is now complete. Because the dataset is small, it finishes in no time.<br>Next we predict survivors, which is just as easy. We predict for both the training split and the validation split.",
"y_train_pred = clf.predict(X_train)\ny_val_pred = clf.predict(X_val)",
"Let's evaluate the results.",
"print('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))\nprint('Accuracy on Validation Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))\n\ncm = confusion_matrix(y_val, y_val_pred)\nprint(cm)\n\nplot_confusion_matrix(cm)",
"Let's check what kind of decision boundary logistic regression has learned, visualizing it with matplotlib.",
"h = 0.02\nxmin, xmax = -5, 85\nymin, ymax = 0.5, 4.5\nxx, yy = np.meshgrid(np.arange(xmin, xmax, h), np.arange(ymin, ymax, h))\nZ = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\nZ = Z.reshape(xx.shape)\n\nfig, ax = plt.subplots()\nlevels = np.linspace(0, 1.0, 5)\ncm = plt.cm.RdBu\ncm_bright = ListedColormap(['#FF0000', '#0000FF'])\ncontour = ax.contourf(xx, yy, Z, cmap=cm, levels=levels, alpha=0.8)\nax.scatter(X_train.iloc[:, 0], X_train.iloc[:, 1]+(np.random.rand(len(X_train))-0.5)*0.1, c=y_train, cmap=cm_bright)\nax.scatter(X_val.iloc[:, 0], X_val.iloc[:, 1]+(np.random.rand(len(X_val))-0.5)*0.1, c=y_val, cmap=cm_bright, alpha=0.5)\nax.set_xlabel('AgeFill')\nax.set_ylabel('Pclass_Gender')\nax.set_xlim(xmin, xmax)\nax.set_ylim(ymin, ymax)\nfig.colorbar(contour)\n\nx1 = xmin\nx2 = xmax\ny1 = -1*(clf.intercept_[0]+clf.coef_[0][0]*xmin)/clf.coef_[0][1]\ny2 = -1*(clf.intercept_[0]+clf.coef_[0][0]*xmax)/clf.coef_[0][1]\nax.plot([x1, x2] ,[y1, y2], 'k--')\n\nplt.show()",
"Logistic regression uses the given features to decide a boundary (the dashed line in the plot) between passengers who survived and those who did not. This boundary is called a hyperplane or a decision boundary (Decision Boundary). You could say that the goal of a classification problem in machine learning is to find this boundary. Different algorithms determine the boundary in different ways and give different results. Let's compare with SVM (Support Vector Machines), which is widely used across many areas of machine learning; we omit the details of the algorithm.",
"clf_log = LogisticRegression()\nclf_svc_lin = SVC(kernel='linear', probability=True)\nclf_svc_rbf = SVC(kernel='rbf', probability=True)\ntitles = ['Logistic Regression', 'SVC with Linear Kernel', 'SVC with RBF Kernel',]\n\nh = 0.02\nxmin, xmax = -5, 85\nymin, ymax = 0.5, 4.5\nxx, yy = np.meshgrid(np.arange(xmin, xmax, h), np.arange(ymin, ymax, h))\n\nfig, axes = plt.subplots(1, 3, figsize=(12,4))\nlevels = np.linspace(0, 1.0, 5)\ncm = plt.cm.RdBu\ncm_bright = ListedColormap(['#FF0000', '#0000FF'])\nfor i, clf in enumerate((clf_log, clf_svc_lin, clf_svc_rbf)):\n clf.fit(X, y)\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n axes[i].contourf(xx, yy, Z, cmap=cm, levels=levels, alpha=0.8)\n axes[i].scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=cm_bright)\n axes[i].set_title(titles[i])\n axes[i].set_xlabel('AgeFill')\n axes[i].set_ylabel('Pclass_Gender')\n axes[i].set_xlim(xmin, xmax)\n axes[i].set_ylim(ymin, ymax)\n fig.tight_layout()",
"Overfitting\nAs the plots above show, SVC with RBF Kernel can form a boundary with a complex shape, but that is not always a good thing: the model can perform well on the training data yet poorly on the validation data. This is called overfitting. The more complex the algorithm, the more caution is needed.",
"clf = SVC(kernel='rbf', probability=True)\nclf.fit(X_train, y_train)\n\ny_train_pred = clf.predict(X_train)\ny_val_pred = clf.predict(X_val)\n\nprint('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))\nprint('Accuracy on Validation Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))",
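Overfitting is easiest to see on synthetic data. The sketch below (deliberately unrelated to the Titanic data) fits polynomials of increasing degree to noisy samples of a sine curve; the training error keeps falling as the model gets more flexible, while the validation error need not:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.shape)

# Alternate points between a training set and a validation set
x_tr, y_tr = x[0::2], y[0::2]
x_va, y_va = x[1::2], y[1::2]

def errors(degree):
    # Least-squares polynomial fit on the training points only
    coeffs = np.polyfit(x_tr, y_tr, degree)
    err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    return err_tr, err_va

results = {deg: errors(deg) for deg in (1, 3, 12)}
for deg, (err_tr, err_va) in results.items():
    print(deg, round(err_tr, 3), round(err_va, 3))
```

Degree 12 will typically fit the 15 training points almost perfectly while doing worse between them, which is the same kind of gap seen for SVC with the RBF kernel.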
"5. Cross-Validation\nExercise\nWe explained that the data is split into training and validation parts in order to evaluate the model, but is the result the same when the split changes? Vary the random_state passed to train_test_split and check for yourself.\nSample solution",
"X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.8, random_state=33)\n\nclf = LogisticRegression()\nclf.fit(X_train, y_train)\n\ny_train_pred = clf.predict(X_train)\ny_val_pred = clf.predict(X_val)\n\nprint('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))\nprint('Accuracy on Validation Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))",
"The result depends on which part of the data is used for training. Cross-validation is a technique for dealing with this. Here we use K-fold cross-validation: the data is split into K subsets, the model is trained on K-1 of them and validated on the remaining one, this is repeated K times, and the results are averaged. For example, with 5-fold cross-validation we create 5 subsets, each containing 20% of the samples, and then train on 80% of the samples and validate on the remaining 20%, repeating this 5 times.",
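The K-fold splitting scheme can be sketched in plain Python. With K=5 folds over 10 sample indices (no shuffling, for clarity), each fold serves once as the validation set while the remaining four are used for training:

```python
n_samples, k = 10, 5
indices = list(range(n_samples))
fold_size = n_samples // k  # each fold holds 20% of the samples

for fold in range(k):
    # The current fold is held out for validation ...
    val = indices[fold * fold_size:(fold + 1) * fold_size]
    # ... and everything else is used for training
    train = [i for i in indices if i not in val]
    print('fold', fold, 'train:', train, 'val:', val)
```

scikit-learn's KFold does the same bookkeeping (plus optional shuffling) for arbitrary dataset sizes.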
"Image(url='http://scott.fortmann-roe.com/docs/docs/MeasuringError/crossvalidation.png')",
"This is implemented in scikit-learn as well (KFold and cross_val_score). Let's define K-fold cross-validation as a function and run it.",
"def cross_val(clf, X, y, K=5, random_state=0):\n cv = KFold(K, shuffle=True, random_state=random_state)\n scores = cross_val_score(clf, X, y, cv=cv)\n return scores\n\ncv = KFold(5, shuffle=True, random_state=0)\ncv\n\nclf = LogisticRegression()\nscores = cross_val(clf, X, y)\nprint('Scores:', scores)\nprint('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))",
"Using three or more features\nWith three or more features, we can train in the same way and find the hyperplane.",
"X = df_train[['Age', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']]\ny = df_train['Survived']\nX_test = df_test[['Age', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']]\n\nX.tail()",
"Exercise\nAs before, fill the missing ages with the mean and convert sex to numbers.\nSample solution",
"X['AgeFill'] = X['Age'].fillna(X['Age'].mean())\nX_test['AgeFill'] = X_test['Age'].fillna(X['Age'].mean())\n\nX = X.drop(['Age'], axis=1)\nX_test = X_test.drop(['Age'], axis=1)",
"You can also use scikit-learn's LabelEncoder to convert sex to numbers.",
"le = LabelEncoder()\nle.fit(X['Sex'])\nX['Gender'] = le.transform(X['Sex'])\nX_test['Gender'] = le.transform(X_test['Sex'])\nclasses = {gender: i for (i, gender) in enumerate(le.classes_)}\nprint(classes)\n\nX.tail()",
"One-hot Encoding\nLike Sex, the port of embarkation (Embarked) cannot be used as-is and must be converted to numbers. It takes three values: S, C, and Q. In a case like this we create new features with a technique called one-hot encoding (also known as one-of-K encoding), using pandas' get_dummies.",
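get_dummies expands one categorical column into a 0/1 indicator column per category. The same idea in plain Python, on a hypothetical embarkation column (the column names mimic get_dummies with prefix='Embarked'):

```python
embarked = ['S', 'C', 'Q', 'S', 'C']
categories = sorted(set(embarked))  # ['C', 'Q', 'S']

# One 0/1 indicator column per category
one_hot = {
    'Embarked_' + c: [1 if v == c else 0 for v in embarked]
    for c in categories
}
for name, col in one_hot.items():
    print(name, col)
```

Each row has exactly one 1 across the indicator columns, which is where the name "one-hot" comes from.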
"X = X.join(pd.get_dummies(X['Embarked'], prefix='Embarked'))\nX_test = X_test.join(pd.get_dummies(X_test['Embarked'], prefix='Embarked'))\n\nX.tail()",
"Let's drop the features we no longer need.",
"X = X.drop(['Sex', 'Embarked'], axis=1)\nX_test = X_test.drop(['Sex', 'Embarked'], axis=1)",
"We evaluate with logistic regression plus cross-validation.",
"clf = LogisticRegression()\nscores = cross_val(clf, X, y)\nprint('Scores:', scores)\nprint('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))",
"The score improved!\n\n6. Survivor Prediction with a Decision Tree\nThe decision tree is one of the most frequently used machine learning methods. Because it can explain the factors behind a classification as a tree structure, it is very easy to interpret.",
"clf = DecisionTreeClassifier(criterion='entropy', max_depth=2, min_samples_leaf=2)\nscores = cross_val(clf, X, y, 5)\nprint('Scores:', scores)\nprint('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))\n\nImage(url='https://raw.githubusercontent.com/PyDataTokyo/pydata-tokyo-tutorial-1/master/images/titanic_decision_tree.png')",
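The criterion='entropy' option means each split is chosen to maximize information gain, i.e. the drop in entropy from a parent node to its children. A hand computation on hypothetical survival labels (the split itself is made up for illustration):

```python
from math import log2

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits
    n = len(labels)
    return -sum(
        (labels.count(c) / n) * log2(labels.count(c) / n)
        for c in set(labels)
    )

# Hypothetical survival labels before and after splitting on gender
parent = [1, 1, 1, 0, 0, 0, 0, 0]  # 3 survived, 5 died
left = [1, 1, 1, 0]                # e.g. female passengers
right = [0, 0, 0, 0]               # e.g. male passengers

# Information gain = parent entropy - size-weighted child entropy
weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
gain = entropy(parent) - weighted
print(round(entropy(parent), 3), round(gain, 3))
```

The tree greedily picks, at every node, the feature and threshold with the largest such gain.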
"Exercise\nChange the decision tree's parameters and compare the scores.\nSample solution",
"clf = DecisionTreeClassifier(criterion='entropy', max_depth=3, min_samples_leaf=2)\nscores = cross_val(clf, X, y, 5)\nprint('Scores:', scores)\nprint('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))",
"7. Grid Search\nGrid search is a convenient feature that varies a classifier's parameters over specified ranges and finds the combination of parameters with the best score.",
"clf = DecisionTreeClassifier(criterion='entropy', max_depth=2, min_samples_leaf=2)\n\nparam_grid = {'max_depth': [2, 3, 4, 5], 'min_samples_leaf': [2, 3, 4, 5]}\ncv = KFold(5, shuffle=True, random_state=0)\n\ngrid_search = GridSearchCV(clf, param_grid, cv=cv, n_jobs=-1, verbose=1, return_train_score=True)\ngrid_search.fit(X, y)",
"We check the best score and the best parameter combination.",
"print('Scores: {:.3f}'.format(grid_search.best_score_))\nprint('Best Parameter Choice:', grid_search.best_params_)",
"We can also inspect all of the results.",
"grid_search.cv_results_\n\ngrid_search.cv_results_['mean_test_score']\n\nscores = grid_search.cv_results_['mean_test_score'].reshape(4, 4)\n\nfig, ax = plt.subplots()\ncm = plt.cm.Blues\nmat = ax.matshow(scores, cmap=cm)\nax.set_xlabel('min_samples_leaf')\nax.set_ylabel('max_depth')\nax.set_xticklabels(['']+param_grid['min_samples_leaf'])\nax.set_yticklabels(['']+param_grid['max_depth'])\nfig.colorbar(mat)\nplt.show()",
"We make predictions with the best parameter combination.",
"y_test_pred = grid_search.predict(X_test)",
"Let's create the CSV file for submission to Kaggle and check the result.",
"df_kaggle = pd.DataFrame({'PassengerId': df_test['PassengerId'], 'Survived':np.array(y_test_pred)})\ndf_kaggle.to_csv('kaggle_decisiontree.csv', index=False)",
"This concludes Part 2 of the tutorial. Use what you have learned here to aim for an even higher score!\nReferences\n\nBuilding Machine Learning Systems with Python\nLearning scikit-learn: Machine Learning in Python\nTutorial on scikit-learn and IPython for parallel machine learning\nPyData NYC 2014 tutorial on the more advanced features of scikit-learn"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quole/gensim
|
docs/notebooks/annoytutorial.ipynb
|
lgpl-2.1
|
[
"Similarity Queries using Annoy Tutorial\nThis tutorial is about using the Annoy (Approximate Nearest Neighbors Oh Yeah) library for similarity queries with a Word2Vec model built with gensim.\nWhy use Annoy?\nThe current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.\nPrerequisites\nAdditional libraries needed for this tutorial:\n- annoy\n- psutil\n- matplotlib\nOutline\n\nDownload Text8 Corpus\nBuild Word2Vec Model\nConstruct AnnoyIndex with model & make a similarity query\nVerify & Evaluate performance\nEvaluate relationship of num_trees to initialization time and accuracy\nWork with Google's word2vec C formats",
"# pip install watermark\n%reload_ext watermark\n%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib",
"1. Download Text8 Corpus",
"import os.path\nif not os.path.isfile('text8'):\n !wget -c http://mattmahoney.net/dc/text8.zip\n !unzip text8.zip",
"Import & Set up Logging\nI'm not going to set up logging because of the verbose output it produces in notebooks, but if you want it, uncomment the lines in the cell below.",
"LOGS = False\n\nif LOGS:\n import logging\n logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"2. Build Word2Vec Model",
"from gensim.models import Word2Vec, KeyedVectors\nfrom gensim.models.word2vec import Text8Corpus\n\n# using params from Word2Vec_FastText_Comparison\n\nlr = 0.05\ndim = 100\nws = 5\nepoch = 5\nminCount = 5\nneg = 5\nloss = 'ns'\nt = 1e-4\n\n# Same values as used for fastText training above\nparams = {\n 'alpha': lr,\n 'size': dim,\n 'window': ws,\n 'iter': epoch,\n 'min_count': minCount,\n 'sample': t,\n 'sg': 1,\n 'hs': 0,\n 'negative': neg\n}\n\nmodel = Word2Vec(Text8Corpus('text8'), **params)\nprint(model)",
"See the Word2Vec tutorial for how to initialize and save this model.\nComparing the traditional implementation and the Annoy approximation",
"#Set up the model and vector that we are using in the comparison\ntry:\n from gensim.similarities.index import AnnoyIndexer\nexcept ImportError:\n raise ValueError(\"SKIP: Please install the annoy indexer\")\n\nmodel.init_sims()\nannoy_index = AnnoyIndexer(model, 100)\n\n# Dry run to make sure both indices are fully in RAM\nvector = model.wv.syn0norm[0]\nmodel.most_similar([vector], topn=5, indexer=annoy_index)\nmodel.most_similar([vector], topn=5)\n\nimport time\nimport numpy as np\n\ndef avg_query_time(annoy_index=None, queries=1000):\n \"\"\"\n Average query time of a most_similar method over 1000 random queries,\n uses annoy if given an indexer\n \"\"\"\n total_time = 0\n for _ in range(queries):\n rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]\n start_time = time.clock()\n model.most_similar([rand_vec], topn=5, indexer=annoy_index)\n total_time += time.clock() - start_time\n return total_time / queries\n\nqueries = 10000\n\ngensim_time = avg_query_time(queries=queries)\nannoy_time = avg_query_time(annoy_index, queries=queries)\nprint(\"Gensim (s/query):\\t{0:.5f}\".format(gensim_time))\nprint(\"Annoy (s/query):\\t{0:.5f}\".format(annoy_time))\nspeed_improvement = gensim_time / annoy_time\nprint (\"\\nAnnoy is {0:.2f} times faster on average on this particular run\".format(speed_improvement))",
"This speedup factor is by no means constant and will vary greatly from run to run and is particular to this data set, BLAS setup, Annoy parameters (as tree size increases the speedup factor decreases), machine specifications, among other factors.\n\nNote: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized.\nNote: Gensim's 'most_similar' method uses numpy operations in the form of a dot product, whereas Annoy's method doesn't. If 'numpy' on your machine is using one of the BLAS libraries like ATLAS or LAPACK, it'll run on multiple cores (only if your machine has multicore support). Check the SciPy Cookbook for more details.\n\n3. Construct AnnoyIndex with model & make a similarity query\nCreating an indexer\nAn instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index\nAnnoyIndexer() takes two parameters:\nmodel: A Word2Vec or Doc2Vec model\nnum_trees: A positive integer. num_trees affects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial.\nNow that we are ready to make a query, let's find the top 5 most similar words to \"science\" in the Text8 corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.",
"# 100 trees are being used in this example\nannoy_index = AnnoyIndexer(model, 100)\n# Derive the vector for the word \"science\" in our model\nvector = model[\"science\"]\n# The instance of AnnoyIndexer we just created is passed \napproximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = model.most_similar([vector], topn=11)\nprint(\"\\nNormal (not Annoy-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)",
"Analyzing the results\nThe closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for \"science\". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words.\n4. Verify & Evaluate performance\nPersisting Indexes\nYou can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object.",
"fname = '/tmp/mymodel.index'\n\n# Persist index to disk\nannoy_index.save(fname)\n\n# Load index back\nif os.path.exists(fname):\n annoy_index2 = AnnoyIndexer()\n annoy_index2.load(fname)\n annoy_index2.model = model\n\n# Results should be identical to above\nvector = model[\"science\"]\napproximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2)\nfor neighbor in approximate_neighbors2:\n print(neighbor)\n \nassert approximate_neighbors == approximate_neighbors2",
"Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.\nSave memory by memory-mapping indices saved to disk\nThe Annoy library has a useful feature: indices can be memory-mapped from disk. This saves memory when the same index is used by several processes.\nBelow are two snippets of code. The first one creates a separate index for each process. The second snippet shares the index between two processes via memory-mapping. The second example uses less total RAM because the index is shared.",
"# Remove verbosity from code below (if logging active)\n\nif LOGS:\n logging.disable(logging.CRITICAL)\n\nfrom multiprocessing import Process\nimport os\nimport psutil",
"Bad example: Two processes load the Word2vec model from disk and create their own Annoy indices from that model.",
"%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model[\"science\"]\n annoy_index = AnnoyIndexer(new_model,100)\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()",
"Good example: Two processes load both the Word2vec model and the index from disk and memory-map the index.",
"%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model[\"science\"]\n annoy_index = AnnoyIndexer()\n annoy_index.load('/tmp/mymodel.index')\n annoy_index.model = new_model\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()",
"5. Evaluate relationship of num_trees to initialization time and accuracy",
"import matplotlib.pyplot as plt\n%matplotlib inline",
"Build dataset of Initialization times and accuracy measures",
"exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]\n\nx_values = []\ny_values_init = []\ny_values_accuracy = []\n\nfor x in range(1, 300, 10):\n x_values.append(x)\n start_time = time.time()\n annoy_index = AnnoyIndexer(model, x)\n y_values_init.append(time.time() - start_time)\n approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)\n top_words = [result[0] for result in approximate_results]\n y_values_accuracy.append(len(set(top_words).intersection(exact_results)))",
"Plot results",
"plt.figure(1, figsize=(12, 6))\nplt.subplot(121)\nplt.plot(x_values, y_values_init)\nplt.title(\"num_trees vs initalization time\")\nplt.ylabel(\"Initialization time (s)\")\nplt.xlabel(\"num_trees\")\nplt.subplot(122)\nplt.plot(x_values, y_values_accuracy)\nplt.title(\"num_trees vs accuracy\")\nplt.ylabel(\"% accuracy\")\nplt.xlabel(\"num_trees\")\nplt.tight_layout()\nplt.show()",
"Initialization:\nInitialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus; in the graph above the text8 corpus was used.\nAccuracy:\nIn this dataset, the accuracy seems logarithmically related to the number of trees. We see an improvement in accuracy with more trees, but the relationship is nonlinear.\n6. Work with Google word2vec files\nOur model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a KeyedVectors object.",
"# To export our model as text\nmodel.wv.save_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# View the first 3 lines of the exported file\n\n# The first line has the total number of entries and the vector dimension count. \n# The next lines have a key (a string) followed by its vector.\nwith open('/tmp/vectors.txt') as myfile:\n for i in range(3):\n print(myfile.readline().strip())\n\n# To import a word2vec text model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# To export our model as binary\nmodel.wv.save_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To import a word2vec binary model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees)\nannoy_index = AnnoyIndexer(wv, 100)\nannoy_index.save('/tmp/mymodel.index')\n\n# Load and test the saved word vectors and saved annoy index\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\nannoy_index = AnnoyIndexer()\nannoy_index.load('/tmp/mymodel.index')\nannoy_index.model = wv\n\nvector = wv[\"cat\"]\napproximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = wv.most_similar([vector], topn=11)\nprint(\"\\nNormal (not Annoy-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)",
"Recap\nIn this notebook we used the Annoy module to build an indexed approximation of our word embeddings. To do so, we did the following steps:\n1. Download Text8 Corpus\n2. Build Word2Vec Model\n3. Construct AnnoyIndex with model & make a similarity query\n4. Verify & Evaluate performance\n5. Evaluate relationship of num_trees to initialization time and accuracy\n6. Work with Google's word2vec C formats"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lwcook/horsetail-matching
|
notebooks/Targets.ipynb
|
mit
|
[
"In this tutorial we examine the effect of changing the target(s) on the results of a horsetail matching optimization. \nWe'll use TP3 from the demo problems. We also define a function for easy plotting using matplotlib.",
"from horsetailmatching import HorsetailMatching, GaussianParameter\nfrom horsetailmatching.demoproblems import TP3\n\nfrom scipy.optimize import minimize\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plotHorsetail(theHM, c='b', label=''):\n (q, h, t), _, _ = theHM.getHorsetail()\n plt.plot(q, h, c=c, label=label)\n plt.plot(t, h, c=c, linestyle='dashed')\n plt.xlim([-10, 10])",
"In the following code we setup a horsetail matching optimization using test problem 3, and then run optimizations under three targets: a standard target, a risk averse target, and a very risk averse target.",
"u1 = GaussianParameter()\n\ndef standardTarget(h):\n return 0.\n\ntheHM = HorsetailMatching(TP3, u1, ftarget=standardTarget, samples_prob=5000)\nsolution1 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',\n constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])\ntheHM.evalMetric(solution1.x)\nprint(solution1)\nplotHorsetail(theHM, c='b', label='Standard')\n\ndef riskAverseTarget(h):\n return 0. - 3.*h**3.\n\ntheHM.ftarget=riskAverseTarget\nsolution2 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',\n constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])\ntheHM.evalMetric(solution2.x)\nprint(solution2)\nplotHorsetail(theHM, c='g', label='Risk Averse')\n\ndef veryRiskAverseTarget(h):\n return 1. - 10.*h**10.\n\ntheHM.ftarget=veryRiskAverseTarget\nsolution3 = minimize(theHM.evalMetric, x0=0.6, method='COBYLA',\n constraints=[{'type': 'ineq', 'fun': lambda x: x}, {'type': 'ineq', 'fun': lambda x: 1-x}])\ntheHM.evalMetric(solution3.x)\nprint(solution3)\nplotHorsetail(theHM, c='r', label='Very Risk Averse')\n\nplt.xlim([-10, 5])\nplt.ylim([0, 1])\nplt.xlabel('Quantity of Interest')\nplt.legend(loc='lower left')\nplt.plot()\nplt.show()",
"We can see that changing the target has changed how much influence is put on different parts of the CDF in the optimization. The more risk averse the target the more the optimizer will try to minimize the highest values of q over the CDF.\nIn the next tutorial we'll illustrate how you can use surrogates within horsetail matching so that if evaluating the quantity of interest is expensive, we can use fewer evaluations: http://nbviewer.jupyter.org/github/lwcook/horsetail-matching/blob/master/notebooks/Surrogates.ipynb\nFor other tutorials, please visit http://www-edc.eng.cam.ac.uk/aerotools/horsetailmatching/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Kaggle/learntools
|
notebooks/deep_learning/raw/ex_tpus.ipynb
|
apache-2.0
|
[
"In this exercise, you'll make your first submission to the Petals to the Metal competition. You'll learn how to accept the competition rules, run a notebook on Kaggle that uses (free!) TPUs, and how to submit your results to the leaderboard. \nWe won't cover the code in detail here, but if you'd like to dive into the details, you're encouraged to check out the tutorial notebook.\nJoin the competition\nBegin by joining the competition. Open a new window with the competition page, and click on the \"Rules\" tab.\nThis takes you to the rules acceptance page. You must accept the competition rules in order to participate. These rules govern how many submissions you can make per day, the maximum team size, and other competition-specific details. Click on \"I Understand and Accept\" to indicate that you will abide by the competition rules.\nCommit your Notebook\nCommitting your notebook will run a fresh copy of the notebook start to finish, saving a copy of the submission.csv file as output.\nFirst, click on the Save Version button in the upper right.\n<figure>\n<img src=\"https://i.imgur.com/ebMUMSq.png\" alt=\"The blue Save Version button.\" width=300>\n</figure>\n\nChoose Advanced Settings.\n<figure>\n<img src=\"https://i.imgur.com/sx9l1fL.png\" alt=\"Advanced Settings in the Version menu.\" width=600>\n</figure>\n\nSelect Run with TPU for this session from the dropdown menu and click the blue Save button.\n<figure>\n<img src=\"https://i.imgur.com/1cB5ykf.png\" alt=\"The Accelerator dropdown menu.\" width=600>\n</figure>\n\nSelect Save & Run All (Commit) and click the blue Save button.\n<figure>\n<img src=\"https://i.imgur.com/YeJLsNG.png\" alt=\"The Save Version menu.\" width=600>\n</figure>\n\nThe commit may take a while to finish (about 10-15 min), but there's no harm in doing something else while it's running and coming back later.\nThis generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the Save Version button. This pulls up a list of versions on the right of the screen. Click on the ellipsis (...) to the right of the most recent version, and select Open in Viewer. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.\nMake a Submission\nNow you're ready to make a submission! Click on the Output heading in the menu to the right of the notebook.\n<figure>\n<img src=\"https://i.imgur.com/thKwt1q.png\" alt=\"The Output heading.\" width=300>\n</figure>\n\nAnd finally you'll submit the predictions! Just look for the blue Submit button. After clicking it, you should shortly be on the leaderboard!\n<figure>\n<img src=\"https://i.imgur.com/j00mDeI.png\" alt=\"The Save Version menu.\" width=600>\n</figure>\n\nCode\nThe code reproduces the code we covered together in the tutorial. If you commit the notebook by following the instructions above, then the code is run for you.\nLoad Helper Functions",
"from petal_helper import *",
"Create Distribution Strategy",
"# Detect TPU, return appropriate distribution strategy\ntry:\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() \n print('Running on TPU ', tpu.master())\nexcept ValueError:\n tpu = None\n\nif tpu:\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\nelse:\n strategy = tf.distribute.get_strategy() \n\nprint(\"REPLICAS: \", strategy.num_replicas_in_sync)",
"Loading the Competition Data",
"ds_train = get_training_dataset()\nds_valid = get_validation_dataset()\nds_test = get_test_dataset()\n\nprint(\"Training:\", ds_train)\nprint(\"Validation:\", ds_valid)\nprint(\"Test:\", ds_test)",
"Explore the Data\nTry using some of the helper functions described in the Getting Started tutorial to explore the dataset.",
"print(\"Number of classes: {}\".format(len(CLASSES)))\n\nprint(\"First five classes, sorted alphabetically:\")\nfor name in sorted(CLASSES)[:5]:\n print(name)\n\nprint(\"Number of training images: {}\".format(NUM_TRAINING_IMAGES))",
"Examine the shape of the data.",
"print(\"Training data shapes:\")\nfor image, label in ds_train.take(3):\n print(image.numpy().shape, label.numpy().shape)\nprint(\"Training data label examples:\", label.numpy())\n\nprint(\"Test data shapes:\")\nfor image, idnum in ds_test.take(3):\n print(image.numpy().shape, idnum.numpy().shape)\nprint(\"Test data IDs:\", idnum.numpy().astype('U')) # U=unicode string",
"Peek at training data.",
"one_batch = next(iter(ds_train.unbatch().batch(20)))\ndisplay_batch_of_images(one_batch)",
"Define Model",
"with strategy.scope():\n pretrained_model = tf.keras.applications.VGG16(\n weights='imagenet',\n include_top=False,\n input_shape=[*IMAGE_SIZE, 3]\n )\n pretrained_model.trainable = False\n\n model = tf.keras.Sequential([\n # To a base pretrained on ImageNet to extract features from images...\n pretrained_model,\n # ... attach a new head to act as a classifier.\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(len(CLASSES), activation='softmax')\n ])\n model.compile(\n optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['sparse_categorical_accuracy'],\n )\n\nmodel.summary()",
"Train Model",
"# Define the batch size. This will be 16 with TPU off and 128 with TPU on\nBATCH_SIZE = 16 * strategy.num_replicas_in_sync\n\n# Define training epochs for committing/submitting. (TPU on)\nEPOCHS = 12\nSTEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE\n\nhistory = model.fit(\n ds_train,\n validation_data=ds_valid,\n epochs=EPOCHS,\n steps_per_epoch=STEPS_PER_EPOCH,\n)",
"Examine training curves.",
"display_training_curves(\n history.history['loss'],\n history.history['val_loss'],\n 'loss',\n 211,\n)\ndisplay_training_curves(\n history.history['sparse_categorical_accuracy'],\n history.history['val_sparse_categorical_accuracy'],\n 'accuracy',\n 212,\n)",
"Validation\nCreate a confusion matrix.",
"cmdataset = get_validation_dataset(ordered=True)\nimages_ds = cmdataset.map(lambda image, label: image)\nlabels_ds = cmdataset.map(lambda image, label: label).unbatch()\n\ncm_correct_labels = next(iter(labels_ds.batch(NUM_VALIDATION_IMAGES))).numpy()\ncm_probabilities = model.predict(images_ds)\ncm_predictions = np.argmax(cm_probabilities, axis=-1)\n\nlabels = range(len(CLASSES))\ncmat = confusion_matrix(\n cm_correct_labels,\n cm_predictions,\n labels=labels,\n)\ncmat = (cmat.T / cmat.sum(axis=1)).T # normalize\n\nscore = f1_score(\n cm_correct_labels,\n cm_predictions,\n labels=labels,\n average='macro',\n)\nprecision = precision_score(\n cm_correct_labels,\n cm_predictions,\n labels=labels,\n average='macro',\n)\nrecall = recall_score(\n cm_correct_labels,\n cm_predictions,\n labels=labels,\n average='macro',\n)\ndisplay_confusion_matrix(cmat, score, precision, recall)",
"Look at examples from the dataset, with true and predicted classes.",
"dataset = get_validation_dataset()\ndataset = dataset.unbatch().batch(20)\nbatch = iter(dataset)\n\nimages, labels = next(batch)\nprobabilities = model.predict(images)\npredictions = np.argmax(probabilities, axis=-1)\ndisplay_batch_of_images((images, labels), predictions)",
"Test Predictions\nCreate predictions to submit to the competition.",
"test_ds = get_test_dataset(ordered=True)\n\nprint('Computing predictions...')\ntest_images_ds = test_ds.map(lambda image, idnum: image)\nprobabilities = model.predict(test_images_ds)\npredictions = np.argmax(probabilities, axis=-1)\nprint(predictions)\n\nprint('Generating submission.csv file...')\n\n# Get image ids from test set and convert to integers\ntest_ids_ds = test_ds.map(lambda image, idnum: idnum).unbatch()\ntest_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U')\n\n# Write the submission file\nnp.savetxt(\n 'submission.csv',\n np.rec.fromarrays([test_ids, predictions]),\n fmt=['%s', '%d'],\n delimiter=',',\n header='id,label',\n comments='',\n)\n\n# Look at the first few predictions\n!head submission.csv",
"Going Further\nNow that you've joined the Petals to the Metal competition, why not try your hand at improving the model and see if you can climb the ranks! If you're looking for ideas, the original flower competition, Flower Classification with TPUs, has a wealth of information in its notebooks and discussion forum. Check it out!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quantopian/research_public
|
notebooks/data/eventvestor.clinical_trials/notebook.ipynb
|
apache-2.0
|
[
"EventVestor: Clinical Trials\nIn this notebook, we'll take a look at EventVestor's Clinical Trials dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents announcements of key phases of clinical trials by biotech/pharmaceutical companies.\nBlaze\nBefore we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets.\nBlaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research is just not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.\nIt is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.\nHelpful links:\n* Query building for Blaze\n* Pandas-to-Blaze dictionary\n* SQL-to-Blaze dictionary.\nOnce you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:\n\nfrom odo import odo\nodo(expr, pandas.DataFrame)\n\nFree samples and limits\nOne other key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.\nThere is a free version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.\nWith the preamble in place, let's get started:",
"# import the dataset\nfrom quantopian.interactive.data.eventvestor import clinical_trials\n# or if you want to import the free dataset, use:\n# from quantopian.interactive.data.eventvestor import clinical_trials_free\n\n# import data operations\nfrom odo import odo\n# import other libraries we will use\nimport pandas as pd\n\n# Let's use Blaze's dshape() to understand the data a bit\nclinical_trials.dshape\n\n# And how many rows are there?\n# N.B. we're using a Blaze function to do this, not len()\nclinical_trials.count()\n\n# Let's see what the data looks like. We'll grab the first three rows.\nclinical_trials[:3]",
"Let's go over the columns:\n- event_id: the unique identifier for this clinical trial.\n- asof_date: EventVestor's timestamp of event capture.\n- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.\n- symbol: stock ticker symbol of the affected company.\n- event_type: this should always be Clinical Trials.\n- event_headline: a short description of the event.\n- clinical_phase: phases include 0, I, II, III, IV, Pre-Clinical\n- clinical_scope: types of scope include additional indications, all indications, limited indications\n- clinical_result: result types include negative, partial, positive\n- product_name: name of the drug being investigated.\n- event_rating: this is always 1. The meaning of this is uncertain.\n- timestamp: this is our timestamp on when we registered the data.\n- sid: the equity's unique identifier. Use this instead of the symbol.\nWe've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.\nWe can select columns and rows with ease. Below, we'll fetch all phase-3 announcements. We'll only display the columns for the sid and the drug name.",
"phase_three = clinical_trials[clinical_trials.clinical_phase == \"Phase III\"][['timestamp', 'sid','product_name']].sort('timestamp')\n# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.\nphase_three",
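Once a reduced Blaze expression has been converted with odo, the same kind of selection runs in plain pandas. A toy frame with hypothetical values (not real EventVestor records) illustrates the idiom:

```python
import pandas as pd

# toy frame mimicking a few clinical_trials columns (hypothetical values)
trials = pd.DataFrame({
    'sid': [100, 100, 200],
    'clinical_phase': ['Phase III', 'Phase I', 'Phase III'],
    'product_name': ['Drug A', 'Drug B', 'Drug C'],
})

# same selection as the Blaze query above, now in pandas
phase_three = trials[trials.clinical_phase == 'Phase III'][['sid', 'product_name']]
print(len(phase_three))  # 2
```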
"Finally, suppose we want a DataFrame of GlaxoSmithKline Phase-III announcements, sorted in descending order by date:",
"gsk_sid = symbols('GSK').sid\n\ngsk = clinical_trials[clinical_trials.sid==gsk_sid].sort('timestamp',ascending=False)\ngsk_df = odo(gsk, pd.DataFrame)\n# now filter down to the Phase III trials\ngsk_df = gsk_df[gsk_df.clinical_phase==\"Phase III\"]\ngsk_df"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
barjacks/pythonrecherche
|
Kursteilnehmer/Thierry Seiler/06 /01 Rückblick For-Loop-Übungen.ipynb
|
mit
|
[
"10 For-Loop Review Exercises\nIn parts of the following exercises, the code has been replaced with \"XXX\". In every exercise, the task is to insert the correct code and then run the cell. \n1. Print all of these prime numbers:",
"primeNumbers = [2, 3, 5, 7]\nfor prime in primeNumbers:\n print(prime)",
"2. Print all the numbers from 0 to 4:",
"for x in range(5):\n print(x)",
"3. Print the numbers 3, 4, 5:",
"for x in range(3, 6):\n print(x)\n",
"4. Build a for loop that prints all the even numbers lower than 237.",
"numbers = [\n 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,\n 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,\n 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,\n 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,\n 815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,\n 958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,\n 743, 527\n]\n\n# Your code goes here:\n\nfor number in numbers:\n if number%2 == 0 and number < 237:\n print(number)\n\n# Solution:",
"5. Add up all the numbers in the list",
"added = 0\n\nfor number in numbers:\n added += number\n\nprint(added)\n\n# Solution:",
"6. Add up only the numbers that are even",
"new_list = []\nfor elem in numbers:\n if elem%2 == 0:\n new_list.append(elem)\nsum(new_list)",
"7. Use a for loop to print Hello World 5 times in a row",
"for i in range(5):\n print(\"Hello World\")\n\n# Solution",
"8. Write a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip: have a look here at Python's comparison operators.",
"l=[]\nfor i in range(2000, 3200):\n if (i%7==0) and (i%5!=0):\n l.append(str(i))\n\nprint(','.join(l))",
"9. Write a for loop that converts the numbers in the following list from int to str.",
"lst = list(range(45, 99))\nindex = 0\nprint(lst)\nfor number in lst:\n lst[index] = str(number)\n index += 1",
"10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B.",
"new_list = []\nfor elem in lst:\n if '4' in elem:\n elem = elem.replace('4', 'A')\n if '5' in elem:\n elem = elem.replace('5', 'B')\n new_list.append(elem)\n\nnew_list"
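For reference, the same replacement can be written without an explicit loop using str.translate (an alternative approach, not one of the exercise solutions):

```python
# build the int list as strings, as in exercise 9
lst = [str(n) for n in range(45, 99)]

# translation table: digit 4 -> A, digit 5 -> B
table = str.maketrans({'4': 'A', '5': 'B'})
new_list = [s.translate(table) for s in lst]
print(new_list[:3])  # ['AB', 'A6', 'A7']
```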
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
QuantEcon/QuantEcon.notebooks
|
ddp_ex_MF_7_6_5_py.ipynb
|
bsd-3-clause
|
[
"DiscreteDP Example: Water Management\nDaisuke Oyama\nFaculty of Economics, University of Tokyo\nFrom Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002,\nSection 7.6.5",
"%matplotlib inline\n\nimport itertools\nimport numpy as np\nfrom scipy import sparse\nimport matplotlib.pyplot as plt\nfrom quantecon.markov import DiscreteDP\n\nmaxcap = 30\nn = maxcap + 1 # Number of states\nm = n # Number of actions\n\na1, b1 = 14, 0.8\na2, b2 = 10, 0.4\nF = lambda x: a1 * x**b1 # Benefit from irrigation\nU = lambda c: a2 * c**b2 # Benefit from recreational consumption c = s - x\n\nprobs = [0.1, 0.2, 0.4, 0.2, 0.1]\nsupp_size = len(probs)\n\nbeta = 0.9",
"Product formulation",
"# Reward array\nR = np.empty((n, m))\nfor s, x in itertools.product(range(n), range(m)):\n R[s, x] = F(x) + U(s-x) if x <= s else -np.inf\n\n# Transition probability array\nQ = np.zeros((n, m, n))\nfor s, x in itertools.product(range(n), range(m)):\n if x <= s:\n for j in range(supp_size):\n Q[s, x, np.minimum(s-x+j, n-1)] += probs[j]\n\n# Create a DiscreteDP\nddp = DiscreteDP(R, Q, beta)\n\n# Solve the dynamic optimization problem (by policy iteration)\nres = ddp.solve()\n\n# Number of iterations\nres.num_iter\n\n# Optimal policy\nres.sigma\n\n# Optimal value function\nres.v\n\n# Simulate the controlled Markov chain for num_rep times\n# and compute the average\ninit = 0\nnyrs = 50\nts_length = nyrs + 1\nnum_rep = 10**4\nave_path = np.zeros(ts_length)\nfor i in range(num_rep):\n path = res.mc.simulate(ts_length, init=init)\n ave_path = (i/(i+1)) * ave_path + (1/(i+1)) * path\n\nave_path\n\n# Stationary distribution of the Markov chain\nstationary_dist = res.mc.stationary_distributions[0]\n\nstationary_dist\n\n# Plot sigma, v, ave_path, stationary_dist\nhspace = 0.3\nfig, axes = plt.subplots(2, 2, figsize=(12, 8+hspace))\nfig.subplots_adjust(hspace=hspace)\n\naxes[0, 0].plot(res.sigma, '*')\naxes[0, 0].set_xlim(-1, 31)\naxes[0, 0].set_ylim(-0.5, 5.5)\naxes[0, 0].set_xlabel('Water Level')\naxes[0, 0].set_ylabel('Irrigation')\naxes[0, 0].set_title('Optimal Irrigation Policy')\n\naxes[0, 1].plot(res.v)\naxes[0, 1].set_xlim(0, 30)\ny_lb, y_ub = 300, 700\naxes[0, 1].set_ylim(y_lb, y_ub)\naxes[0, 1].set_yticks(np.linspace(y_lb, y_ub, 5, endpoint=True))\naxes[0, 1].set_xlabel('Water Level')\naxes[0, 1].set_ylabel('Value')\naxes[0, 1].set_title('Optimal Value Function')\n\naxes[1, 0].plot(ave_path)\naxes[1, 0].set_xlim(0, nyrs)\ny_lb, y_ub = 0, 15\naxes[1, 0].set_ylim(y_lb, y_ub)\naxes[1, 0].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True))\naxes[1, 0].set_xlabel('Year')\naxes[1, 0].set_ylabel('Water Level')\naxes[1, 0].set_title('Average Optimal State Path')\n\naxes[1, 1].bar(range(n), stationary_dist, align='center')\naxes[1, 1].set_xlim(-1, n)\ny_lb, y_ub = 0, 0.15\naxes[1, 1].set_ylim(y_lb, y_ub+0.01)\naxes[1, 1].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True))\naxes[1, 1].set_xlabel('Water Level')\naxes[1, 1].set_ylabel('Probability')\naxes[1, 1].set_title('Stationary Distribution')\n\nplt.show()",
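As a quick sanity check on the transition probabilities constructed above: every feasible state-action pair must map to a probability distribution over next-period water levels. A standalone, stdlib-only sketch mirroring the same loop (not part of the original notebook):

```python
import itertools

maxcap = 30
n = maxcap + 1        # states: water levels 0..30
m = n                 # actions: irrigation amounts
probs = [0.1, 0.2, 0.4, 0.2, 0.1]   # rainfall distribution

# Q[s][x][s'] = P(next level = s' | level s, irrigation x), feasible iff x <= s
Q = [[[0.0] * n for _ in range(n)] for _ in range(n)]
for s, x in itertools.product(range(n), range(m)):
    if x <= s:
        for j, p in enumerate(probs):
            Q[s][x][min(s - x + j, n - 1)] += p

# every feasible row must sum to one
for s, x in itertools.product(range(n), range(m)):
    if x <= s:
        assert abs(sum(Q[s][x]) - 1.0) < 1e-12
print("all feasible (s, x) rows sum to 1")
```

The same invariant applies to the sparse state-action formulation below: each row of the L-by-n matrix Q should sum to one.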
"State-action pairs formulation",
"# Arrays of state and action indices\nS = np.arange(n)\nX = np.arange(m)\nS_left = S.reshape(n, 1) - X.reshape(1, n)\ns_indices, a_indices = np.where(S_left >= 0)\n\n# Reward vector\nS_left = S_left[s_indices, a_indices]\nR = F(X[a_indices]) + U(S_left)\n\n# Transition probability array\nL = len(S_left)\nQ = sparse.lil_matrix((L, n))\nfor i, s_left in enumerate(S_left):\n for j in range(supp_size):\n Q[i, np.minimum(s_left+j, n-1)] += probs[j]\n\n# Create a DiscreteDP\nddp = DiscreteDP(R, Q, beta, s_indices, a_indices)\n\n# Solve the dynamic optimization problem (by policy iteration)\nres = ddp.solve()\n\n# Number of iterations\nres.num_iter\n\n# Simulate the controlled Markov chain for num_rep times\n# and compute the average\ninit = 0\nnyrs = 50\nts_length = nyrs + 1\nnum_rep = 10**4\nave_path = np.zeros(ts_length)\nfor i in range(num_rep):\n path = res.mc.simulate(ts_length, init=init)\n ave_path = (i/(i+1)) * ave_path + (1/(i+1)) * path\n\n# Stationary distribution of the Markov chain\nstationary_dist = res.mc.stationary_distributions[0]\n\n# Plot sigma, v, ave_path, stationary_dist\nhspace = 0.3\nfig, axes = plt.subplots(2, 2, figsize=(12, 8+hspace))\nfig.subplots_adjust(hspace=hspace)\n\naxes[0, 0].plot(res.sigma, '*')\naxes[0, 0].set_xlim(-1, 31)\naxes[0, 0].set_ylim(-0.5, 5.5)\naxes[0, 0].set_xlabel('Water Level')\naxes[0, 0].set_ylabel('Irrigation')\naxes[0, 0].set_title('Optimal Irrigation Policy')\n\naxes[0, 1].plot(res.v)\naxes[0, 1].set_xlim(0, 30)\ny_lb, y_ub = 300, 700\naxes[0, 1].set_ylim(y_lb, y_ub)\naxes[0, 1].set_yticks(np.linspace(y_lb, y_ub, 5, endpoint=True))\naxes[0, 1].set_xlabel('Water Level')\naxes[0, 1].set_ylabel('Value')\naxes[0, 1].set_title('Optimal Value Function')\n\naxes[1, 0].plot(ave_path)\naxes[1, 0].set_xlim(0, nyrs)\ny_lb, y_ub = 0, 15\naxes[1, 0].set_ylim(y_lb, y_ub)\naxes[1, 0].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True))\naxes[1, 0].set_xlabel('Year')\naxes[1, 0].set_ylabel('Water Level')\naxes[1, 0].set_title('Average Optimal State Path')\n\naxes[1, 1].bar(range(n), stationary_dist, align='center')\naxes[1, 1].set_xlim(-1, n)\ny_lb, y_ub = 0, 0.15\naxes[1, 1].set_ylim(y_lb, y_ub+0.01)\naxes[1, 1].set_yticks(np.linspace(y_lb, y_ub, 4, endpoint=True))\naxes[1, 1].set_xlabel('Water Level')\naxes[1, 1].set_ylabel('Probability')\naxes[1, 1].set_title('Stationary Distribution')\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/uhhh
|
code/.ipynb_checkpoints/DavidWest - Analysis-checkpoint.ipynb
|
apache-2.0
|
[
"Assignment 8 Analysis\nIn this assignment, I walk through the data performing various analyses",
"# Import necessary libraries\n% matplotlib inline\n\nfrom matplotlib import pyplot as plt\nfrom sklearn.decomposition import PCA\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom pylab import *\nimport csv\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Read in the data\ndata = open('../data/data.csv', 'r').readlines()\nfieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']\nreader = csv.reader(data)\nnext(reader)  # skip the header row\n\nrows = [[int(col) for col in row] for row in reader]\n\n# Format the data sorted by physical location on slide\nsorted_x = sorted(list(set([r[0] for r in rows])))\nsorted_y = sorted(list(set([r[1] for r in rows])))\nsorted_z = sorted(list(set([r[2] for r in rows])))\n\n# Synapse data (as in rows from last code block) considering only unmasked synapses\nunmaskedSynapses = ([r[-1] for r in rows if r[-2] != 0])\n\n# Synapse data considering only unmasked synapses with non-zero density values\nunmaskedSynapsesNoZero = ([r[-1] for r in rows if r[-2] != 0 if r[-1] !=0])\n\nreal_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))\nfor r in rows:\n real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]",
"Kernel Density Estimation",
"unmaskedSynapses = np.asarray(unmaskedSynapses)\nsns.kdeplot(unmaskedSynapses)",
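For intuition about what sns.kdeplot computes: a Gaussian kernel density estimate is simply an average of Gaussian bumps centred on the samples. A stdlib-only sketch on toy data (not the actual synapse values):

```python
import math

def kde(samples, x, bandwidth):
    # average of Gaussian bumps centred on each sample
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / norm

data = [180, 190, 195, 200, 205, 210, 220, 250]   # toy "density" values
xs = [0.5 * i for i in range(1001)]               # grid 0 .. 500

# trapezoid rule: a proper density estimate integrates to ~1
dx = 0.5
total = sum(0.5 * (kde(data, xs[i], 10.0) + kde(data, xs[i + 1], 10.0)) * dx
            for i in range(len(xs) - 1))
print(round(total, 3))  # ~ 1.0
```

seaborn additionally chooses the bandwidth automatically, which is the main thing this sketch leaves out.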
"Examining xy density projections through image z layers\nThis didn't work the way I wanted, but there is more I can do here.",
"count = 0\nfor i in sorted_z:\n unmaskedSynapsesNoZero = ([r[-1] for r in rows if r[-2] != 0 if r[-1] !=0 if r[2] == i])\n # convert to pandas dataframe\n df = pd.DataFrame(unmaskedSynapsesNoZero, columns=['xy'])\n df.plot()\n count = count + 1\n \n# df\n \n# syn_den = sns.load_dataset(df)",
"Single-layer heat map",
"count = 0\nfor i in sorted_z:\n df = pd.DataFrame(real_volume[:,:,count])\n# print real_volume[:,:,count]\n# ax = sns.heatmap(df)\n count = count + 1\n \nax = sns.heatmap(df, yticklabels=False, xticklabels=False)\n \n# syn_den = sns.load_dataset(df)\n ",
"Cluster map",
"cmap = sns.cubehelix_palette(as_cmap=True, rot=-.3, light=1)\ng = sns.clustermap(df, cmap=cmap, linewidths=.5)",
"Cluster map with standardization across columns",
"# standardize data across all columns\ng = sns.clustermap(df, standard_scale=1)",
"Analysis using Principal Components\nPCA from single 2D Layer\nHere we use a single z layer and find the principle components",
"pca = PCA(n_components=5)\npca.fit(df)\nprint(pca.explained_variance_ratio_)",
"Explained Variance",
"# make a square figure and axes\nfigure(1, figsize=(6,6))\nax = axes([0.1, 0.1, 0.8, 0.8])\n\n# The slices will be ordered and plotted counter-clockwise.\nlabels = 'PC1', 'PC2', 'PC3', 'PC4', 'PC5'\nfracs = pca.explained_variance_ratio_\nprint(fracs)\n\npie(fracs, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)\n\ntitle('First 5 PC Explained Variance', bbox={'facecolor':'0.8', 'pad':5})\n\nshow()",
"We can see how the first five principal components explain variance in the 2D z-layers (\"z\" w.r.t. image, not brain) \nPCA from all three dimensions\nWe want to use PCA to go from 3D data --> 2D. This can help us visualize the data, and will reorient our view, hopefully in such a way that we can see brain layers.\nFirst, let's see what the 3D data looks like:",
"#Attempt to plot the entire data set in 3d in order to find some other characteristics?\n#The plot does not look that great. Too dense. Needs adjustment\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n#print unmaskedSynapsesNoZero\nx = []\ny = []\nz = []\nval = []\n\nfor r in rows:\n if r[-2] != 0:\n if r[-1] !=0:\n x.append(r[0])\n y.append(r[1])\n z.append(r[2])\n \nax.scatter(x, y, z, c='b')\nplt.show()",
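The 3D-to-2D reduction described above can be sketched directly with NumPy: centring the data and keeping the two strongest right-singular vectors is exactly PCA with two components. Synthetic points stand in for the synapse data here:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 3D cloud with three different spreads
points = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 0.5])

# PCA via SVD of the centred data; rows of Vt are the principal directions
centred = points - points.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
projected = centred @ Vt[:2].T   # keep the two strongest components

print(projected.shape)  # (500, 2)
```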
"This plot is too messy. It also doesn't say much about the density at particular locations. To solve this, we need to do two things:\n* Make the scatter marker size based on the density\n* Find a threshold and ignore the least dense points\nI'll start by making a function that does these plots so we don't copy-paste.",
"def make3DSynapsePlot(thresh):\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n\n x = []\n y = []\n z = []\n d = []\n val = []\n\n for r in rows:\n if r[-2] != 0:\n # Filtering \n if r[-1] > thresh:\n x.append(r[0])\n y.append(r[1])\n z.append(r[2])\n d.append(r[-1])\n\n ax.scatter(x, y, z, c='b')\n plt.show()",
"We know from our histogram that there is a density peak around 190. Let's start by filtering out points below this peak.",
"make3DSynapsePlot(190)",
"This is still too dense. Let's really crank up the threshold to 280.\nNote: In the future, I want to use standard deviation to determine this. While this would be easy on the raw data, we actually want to use the standard deviation on the second gaussian given by gmm. (@richie)",
"make3DSynapsePlot(280)",
"That's more like it! Now, let's add in some density information. We can base the bubble size by the density at that location.",
"def make3DSynapsePlot(thresh):\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n\n x = []\n y = []\n z = []\n d = []\n val = []\n\n for r in rows:\n if r[-2] != 0:\n # Filtering \n if r[-1] > thresh:\n x.append(r[0])\n y.append(r[1])\n z.append(r[2])\n d.append(r[-1])\n \n #Scale and raise it to appropriate power\n dens = [((i-min(d))**1.2) for i in d]\n\n ax.scatter(x, y, z, c='b', s=dens)\n fig.suptitle('3D Distribution of Synapses', fontsize=20)\n return ax\n\n\nax = make3DSynapsePlot(310)\nax.set_xlabel('x-axis', fontsize=16)\nax.set_ylabel('y-axis', fontsize=16)\nax.set_zlabel('z-axis', fontsize=16)\nax.view_init(elev=60, azim=125)\nplt.show()\n\ndef plot_figs(fig_num, elev, azim):\n fig = plt.figure(fig_num, figsize=(4, 3))\n plt.clf() #clear the figure\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=elev, azim=azim)\n\n ax.scatter(x[::10], y[::10], z[::10], c='b', marker='+', alpha=.4)\n Y = np.c_[x, y, z]\n\n # Using SciPy's SVD, this would be:\n # _, pca_score, V = scipy.linalg.svd(Y, full_matrices=False)\n\n pca = PCA(n_components=3)\n pca.fit(Y)\n pca_score = pca.explained_variance_ratio_\n V = pca.components_\n\n x_pca_axis, y_pca_axis, z_pca_axis = V.T * pca_score / pca_score.min()\n\n x_pca_axis, y_pca_axis, z_pca_axis = 3 * V.T\n x_pca_plane = np.r_[x_pca_axis[:2], - x_pca_axis[1::-1]]\n y_pca_plane = np.r_[y_pca_axis[:2], - y_pca_axis[1::-1]]\n z_pca_plane = np.r_[z_pca_axis[:2], - z_pca_axis[1::-1]]\n x_pca_plane.shape = (2, 2)\n y_pca_plane.shape = (2, 2)\n z_pca_plane.shape = (2, 2)\n ax.plot_surface(x_pca_plane, y_pca_plane, z_pca_plane)\n ax.w_xaxis.set_ticklabels([])\n ax.w_yaxis.set_ticklabels([])\n ax.w_zaxis.set_ticklabels([])\n\n\nelev = -40\nazim = -80\nplot_figs(1, elev, azim)\n\n# elev = 30\n# azim = 20\n# plot_figs(2, elev, azim)\n\nplt.show()",
"Transforming data to PC",
"pca.transform(df)\n\n# Plot the transformed data\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nx = []\ny = []\nz = []\nval = []\n\nfor r in rows:\n if r[-2] != 0:\n if r[-1] !=0:\n x.append(r[0])\n y.append(r[1])\n z.append(r[2])\n \nax.scatter(x, y, z, c='b')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
simulkade/FVTool
|
Examples/External/Diffusion1DSpherical_Analytic-vs-FVTool-vs-Fipy/diffusion1Dspherical_analytic_vs_FVTool_vs_Fipy.ipynb
|
bsd-2-clause
|
[
"In order to be run, this Jupyter Python notebook requires that a number of specific Python modules (e.g. Fipy) are available; see all import statements. Also, it requires GNU Octave for executing the diffusion1Dspherical_FVTool.m script (to be placed in the same directory as this notebook) that calculates numerical solutions using FVTool.\nGNU Octave: https://www.gnu.org/software/octave/\nFipy: https://www.ctcms.nist.gov/fipy/\nFVTool: https://github.com/simulkade/FVTool",
"import sys\nimport numpy as np\nfrom numpy import sqrt, exp, pi\nfrom scipy.special import erf\nimport scipy.io as spio\nimport fipy as fp\nimport matplotlib.pyplot as plt\nfrom fipy import numerix\nimport os.path\n\nprint('Python: ', sys.version)\nprint('Fipy v: ', fp.__version__)\nprint('Solver: ', fp.DefaultSolver)",
"Diffusion of an initial sphere into an infinite medium\nM.H.V. Werts, 2020\nHere, we study the diffusion equation in an infinite medium with the initial condition that all matter is homogeneously distributed in a sphere radius $a$, and no matter is outside of this sphere.\nReference: J. Crank (1975) \"The Mathematics of Diffusion\", 2nd Ed., \n Clarendon Press (Oxford), pages 29-30, Equation 3.8, Figure 3.1\nA system of spherical symmetry in a spherical coordinate system, i.e. \"1D spherical\", space coordinate $r$. Time $t$.\nSystem parameters:\n$a$ : radius of initial sphere; \n$c_0$ : concentration in initial sphere; \n$D$ : diffusion coefficient\nSimple diffusion equation:\n$$\\frac{\\partial c}{\\partial t} = D \\nabla^2 c$$\nwith initial condition:\n$$ \nc(t = 0, r \\leqslant a) = c_0 \\quad \\ \nc(t = 0, r > a) = 0\n$$",
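Since the problem is spherically symmetric, the Laplacian above reduces to its radial part. This standard identity is what the "1D spherical" formulation discretises (stated here for reference):

$$\frac{\partial c}{\partial t} = D\left(\frac{\partial^2 c}{\partial r^2} + \frac{2}{r}\frac{\partial c}{\partial r}\right) = \frac{D}{r^2}\frac{\partial}{\partial r}\left(r^2\,\frac{\partial c}{\partial r}\right)$$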
"a = 1.0\nC_0 = 1.0\nD = 1.0",
"Analytic solution\nFrom Crank's \"Mathematics of Diffusion\" (2nd Ed., 1975), Chapter 3, we take Eqn 3.8 and Fig. 3.1.\nCrank's Eqn 3.8 is coded below as the function C_sphere_infmed(r, t)",
"# this evaluates Crank (1975), eqn. (3.8)\n# a quick and dirty definition, with implicit globals\n# should be checked against Carslaw & Jaeger\n# but this form DOES reproduce Crank, fig. (3.1)\ndef C_sphere_infmed(r, t):\n term1 = erf((a-r)/(2*sqrt(D*t))) + erf((a+r)/(2*sqrt(D*t)))\n term2b = exp(-(a-r)**2/(4*D*t)) - exp(-(a+r)**2/(4*D*t))\n term2a = (D*t/pi) \n C = 0.5 * C_0 * term1 - C_0/r * sqrt(term2a) * term2b\n return C\n\nrr = np.linspace(0.001,4,1000)\nplt.figure(2)\nplt.clf()\nplt.title('analytic solution for $a = 1$, $D = 1$')\nplt.plot(rr, C_sphere_infmed(rr,0.0001),label='t~0')\nplt.plot(rr, C_sphere_infmed(rr,0.0625),label='t=0.0625')\nplt.plot(rr, C_sphere_infmed(rr,0.25),label='t=0.25')\nplt.plot(rr, C_sphere_infmed(rr,1.0),label='t=1.0')\nplt.ylabel('$C / C_0$')\nplt.xlabel('r')\nplt.legend()",
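One independent way to validate C_sphere_infmed: diffusion into an infinite medium conserves mass, so the integral of $4\pi r^2 C$ over $r$ should stay equal to the initial amount $(4/3)\pi a^3 C_0$ at any $t$. A standalone check, duplicating the formula with the math module and $a = C_0 = D = 1$:

```python
import math

a, C_0, D = 1.0, 1.0, 1.0

def C_sphere_infmed(r, t):
    # same expression as Crank (1975), eqn 3.8, coded above
    s = 2 * math.sqrt(D * t)
    term1 = math.erf((a - r) / s) + math.erf((a + r) / s)
    term2b = math.exp(-(a - r) ** 2 / (4 * D * t)) - math.exp(-(a + r) ** 2 / (4 * D * t))
    return 0.5 * C_0 * term1 - C_0 / r * math.sqrt(D * t / math.pi) * term2b

t = 0.25
dr = 1e-3
# midpoint rule on (0, 12]; C is negligible beyond r ~ 5 at this t
mass = sum(4 * math.pi * r * r * C_sphere_infmed(r, t) * dr
           for r in (dr * (i + 0.5) for i in range(12000)))
print(round(mass, 3))  # ~ (4/3)*pi ~ 4.189
```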
"The figure above is consistent with Figure 3.1 from Crank's book, demonstrating probable correctness of our code for evaluation of the analytic solution.\nComparison with numerical solutions by FVTool and Fipy\nIn the following we will reproduce the curves at the different time points using FiPy and FVTool. FVTool is run separately (since it is Matlab/Octave) by calling it directly from this notebook. The results are stored in '.mat' files and are retrieved here.\nWe calculate the solution with Fipy and pull in the results from FVTool. Then, we compare with the analytic solution.\nFVTool\nConcerning FVTool, it was observed that a very fine grid needs to be used. 50 cells is woefully insufficient (very imprecise result), 100 is slightly better, 500 seems to do OK, and 1000 cells on 10 units width is better still. We finally used 2000 cells over 10 units. We use smaller time-steps than with Fipy (0.0625/20 instead of 0.0625/10), but this does not change the final result much.\nFirst we call command-line Octave...\nThis generates '.mat' files which contain the solutions at the same time-points as in Crank's figure.",
"# only call the script if the first result file does not exist\nif not os.path.exists('diffusion1Dspherical_FVTool_tstep20.mat'):\n !octave diffusion1Dspherical_FVTool.m\nelse:\n print('result file exists. did not run FVTool.')",
"Fipy\nFipy does not have a 1D spherical grid. Therefore we use a 2D cylindrical grid with a variable rs which contains the distance to the origin for each cell center.\nWe experimented with number of cells (moving to 200x200 instead of 120x120 does not change final result much)",
"boxsize = 120\nmsh = fp.CylindricalGrid2D(nr = boxsize, nz = boxsize, Lr = 10.0, Lz = 10.0)\nrc = msh.cellCenters[0]\nzc = msh.cellCenters[1]\nrs = numerix.sqrt(rc**2 + zc**2)",
"First, we define the concentration cell variable and immediately initialize it with the initial condition",
"c = fp.CellVariable(name = 'c', mesh = msh, value = 0.)\n\nc[rs<a] = C_0",
"The boundary conditions may be Dirichlet on the outer boundaries (right and top), and zero-flux on the inner boundaries (bottom and left). Zero-flux is implicit: if nothing is specified we have zero-flux.\nIn the present case there is no difference between Dirichlet and zero-flux on the outer boundaries, which shows that our grid is large enough to emulate an infinite medium.",
"## if this is commented out, we are using zero-flux!\n#c.constrain(0., where = msh.facesRight)\n#c.constrain(0., where = msh.facesTop)",
"We still need to define our transport equation",
"eq = fp.TransientTerm(var = c) == fp.DiffusionTerm(coeff = D, var = c)",
"Now, we initialize time $t$, and take the first time step.",
"t = 0. # master time\ndeltat = 0.0625/10\n\neq.solve(dt=deltat); t+=deltat",
"For plotting the Fipy solution after this first careful timestep, we only plot the values of the cells at the lowest $z$ ($z = 0$).",
"plt.plot(rs.value[0:boxsize],c.value[0:boxsize])\nplt.title('t = {0:.5f}'.format(t))",
"Now we take 9 additional steps to arrive at $t = 0.0625$, the first time point in Crank's figure. Subsequently, we plot the concentration profile and compare it with the analytic profile and the result from FVTool.",
"# nine additional steps\nfor i in range(9):\n eq.solve(dt=deltat); t+=deltat\n\nrr = np.linspace(0.001,5,1000) # for plotting analytic solution\n\n# get result from FVTool\nlm = spio.loadmat('diffusion1Dspherical_FVTool_tstep20.mat')\nfvr = lm['x']\nfvc = lm['cval']\n\nplt.plot(rr, C_sphere_infmed(rr,t), label = 'analytic')\nplt.plot(rs.value[0:boxsize],c.value[0:boxsize], label = 'FiPy')\nplt.plot(fvr,fvc, label = 'FVTool')\nplt.title('t = {0:.5f}'.format(t))\nplt.ylabel('$C / C_0$')\nplt.xlabel('r')\nplt.legend()\n\n# thirty additional steps to arrive at next curve\nfor i in range(30):\n eq.solve(dt=deltat); t+=deltat\n\n# get result from FVTool\nlm = spio.loadmat('diffusion1Dspherical_FVTool_tstep80.mat')\nfvr = lm['x']\nfvc = lm['cval']\n\nplt.plot(rr, C_sphere_infmed(rr,t), label = 'analytic')\nplt.plot(rs.value[0:boxsize],c.value[0:boxsize], label = 'FiPy')\nplt.plot(fvr,fvc, label = 'FVTool')\nplt.title('t = {0:.5f}'.format(t))\nplt.ylabel('$C / C_0$')\nplt.xlabel('r')\nplt.legend()",
"We can also calculate the difference between the analytic solution and the numerical results from Fipy and FVTool.",
"plt.plot(rs.value[1:boxsize],c.value[1:boxsize]-C_sphere_infmed(rs.value[1:boxsize],t), label = 'FiPy')\nplt.plot(fvr[1:],fvc[1:]-C_sphere_infmed(fvr[1:],t), label = 'FVTool')\nplt.title('t = {0:.5f}'.format(t))\nplt.ylabel('error')\nplt.xlabel('r')\nplt.legend()\n\nfor i in range(120):\n eq.solve(dt=deltat); t+=deltat\n\n# get result from FVTool\nlm = spio.loadmat('diffusion1Dspherical_FVTool_tstep320.mat')\nfvr = lm['x']\nfvc = lm['cval']\n\nplt.plot(rr, C_sphere_infmed(rr,t), label = 'analytic')\nplt.plot(rs.value[0:boxsize],c.value[0:boxsize], label = 'FiPy')\nplt.plot(fvr,fvc, label = 'FVTool')\nplt.title('t = {0:.5f}'.format(t))\nplt.ylabel('$C / C_0$')\nplt.xlabel('r')\nplt.legend()\n\nplt.plot(rs.value[1:boxsize],c.value[1:boxsize]-C_sphere_infmed(rs.value[1:boxsize],t), label = 'FiPy')\nplt.plot(fvr[1:],fvc[1:]-C_sphere_infmed(fvr[1:],t), label = 'FVTool')\nplt.title('t = {0:.5f}'.format(t))\nplt.ylabel('error')\nplt.xlabel('r')\nplt.legend()",
"Conclusion\nThis notebook shows that once the computational parameters for Fipy and FVTool have suitable values, close agreement is obtained with the analytic solution from [Crank 1975] for this particular diffusion problem. It also shows how to call FVTool and how to set up a simple calculation in spherical symmetry with Fipy.\nIt might be of interest to analyze the subtle differences between the analytic and numerical solutions. The FVTool calculation may be made more efficient by using unevenly sized cells: smaller cells near the initial sphere, larger cells farther away from the origin."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jorgemauricio/INIFAP_Course
|
ejercicios/Avanzados/Ejercicio_Llamadas_911_Solucion.ipynb
|
mit
|
[
"911 Calls Exercise - Solution\nFor this exercise we will analyze data on 911 calls from Kaggle. The data contains the following fields:\n\nlat : String variable, Latitude\nlng: String variable, Longitude\ndesc: String variable, Description of the emergency type\nzip: String variable, Zip code\ntitle: String variable, Title\ntimeStamp: String variable, YYYY-MM-DD HH:MM:SS\ntwp: String variable, Township\naddr: String variable, Address\ne: String variable, Dummy variable (always 1)\n\nData and Setup\n\n Import numpy and pandas",
"import numpy as np\nimport pandas as pd",
"Import visualization libraries and set %matplotlib inline.",
"import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('whitegrid')\n%matplotlib inline",
"Read in the csv file as a dataframe called df",
"df = pd.read_csv('911.csv')",
"Check the info() of the df",
"df.info()",
"Check the head of df",
"df.head(3)",
"Basic Questions\n What are the top 5 zipcodes for 911 calls?",
"df['zip'].value_counts().head(5)",
"What are the top 5 townships (twp) for 911 calls?",
"df['twp'].value_counts().head(5)",
"Take a look at the 'title' column, how many unique title codes are there?",
"df['title'].nunique()",
"Creating new features\n In the title column there are \"Reasons/Departments\" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called \"Reason\" that contains this string value. \nFor example, if the title column value is EMS: BACK PAINS/INJURY, the Reason column value would be EMS.",
"df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])",
"What is the most common Reason for a 911 call based off of this new column?",
"df['Reason'].value_counts()",
"Now use seaborn to create a countplot of 911 calls by Reason.",
"sns.countplot(x='Reason',data=df,palette='viridis')",
"Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?",
"type(df['timeStamp'].iloc[0])",
"You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.",
"df['timeStamp'] = pd.to_datetime(df['timeStamp'])",
"You can now grab specific attributes from a Datetime object by calling them. For example:\ntime = df['timeStamp'].iloc[0]\ntime.hour\n\nYou can use Jupyter's tab method to explore the various attributes you can call. Now that the entries in the timeStamp column are actually DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column; reference the solutions if you get stuck on this step.",
"df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)\ndf['Month'] = df['timeStamp'].apply(lambda time: time.month)\ndf['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)",
"Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week: \ndmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}",
"dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}\n\ndf['Day of Week'] = df['Day of Week'].map(dmap)",
"Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.",
"sns.countplot(x='Day of Week',data=df,hue='Reason',palette='viridis')\n\n# To relocate the legend\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)",
"Now do the same for Month:",
"sns.countplot(x='Month',data=df,hue='Reason',palette='viridis')\n\n# To relocate the legend\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)",
"Did you notice something strange about the Plot?",
"# It is missing some months! 9,10, and 11 are not there.",
"You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way, possibly with a simple line plot that fills in the missing months. In order to do this, we'll need to do some work with pandas...\n Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.",
"byMonth = df.groupby('Month').count()\nbyMonth.head()",
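The count() aggregation above is conceptually just a per-key tally. A minimal pure-Python sketch of the same idea, on invented month values rather than the real 911 data:

```python
# groupby('Month').count() as a per-key tally, library-free
from collections import Counter

months = [1, 1, 2, 3, 3, 3, 12]          # hypothetical 'Month' column
calls_per_month = Counter(months)         # month -> number of rows

# sorted by key, like the DataFrame index
by_month = dict(sorted(calls_per_month.items()))
```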
"Now create a simple plot off of the dataframe indicating the count of calls per month.",
"# Could be any column\nbyMonth['twp'].plot()",
"Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.",
"sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())",
"Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.",
"df['Date']=df['timeStamp'].apply(lambda t: t.date())",
"Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.",
"df.groupby('Date').count()['twp'].plot()\nplt.tight_layout()",
"Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call",
"df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot()\nplt.title('Traffic')\nplt.tight_layout()\n\ndf[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot()\nplt.title('Fire')\nplt.tight_layout()\n\ndf[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot()\nplt.title('EMS')\nplt.tight_layout()",
"Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!",
"dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack()\ndayHour.head()",
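The groupby-plus-unstack step above is a pivot: tally counts per (day, hour) pair, then spread the hour level into columns. A pure-Python sketch of that reshaping, on made-up (day, hour) pairs rather than the real call data:

```python
# groupby(['Day of Week','Hour']).count().unstack() as a two-level
# tally pivoted into a table: rows = days, columns = hours
from collections import Counter

calls = [("Mon", 0), ("Mon", 0), ("Mon", 1), ("Tue", 1)]  # invented rows
tally = Counter(calls)                    # (day, hour) -> count

days = sorted({d for d, _ in calls})
hours = sorted({h for _, h in calls})

# missing (day, hour) combinations become 0 (pandas would give NaN)
table = {d: {h: tally.get((d, h), 0) for h in hours} for d in days}
```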
"Now create a HeatMap using this new DataFrame.",
"plt.figure(figsize=(12,6))\nsns.heatmap(dayHour,cmap='viridis')",
"Now create a clustermap using this DataFrame.",
"sns.clustermap(dayHour,cmap='viridis')",
"Now repeat these same plots and operations, for a DataFrame that shows the Month as the column.",
"dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack()\ndayMonth.head()\n\nplt.figure(figsize=(12,6))\nsns.heatmap(dayMonth,cmap='viridis')\n\nsns.clustermap(dayMonth,cmap='viridis')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jasontlam/snorkel
|
tutorials/workshop/Workshop_5_Advanced_Preprocessing.ipynb
|
apache-2.0
|
[
"<img align=\"left\" src=\"imgs/logo.jpg\" width=\"50px\" style=\"margin-right:10px\">\nSnorkel Workshop: Extracting Spouse Relations <br> from the News\nPart 5: Preprocessing & Building the Snorkel Database\nIn this tutorial, we will walk through the process of using Snorkel to identify mentions of spouses in a corpus of news articles. \nPart I: Preprocessing\nIn this notebook, we preprocess several documents using Snorkel utilities, parsing them into a simple hierarchy of component parts of our input data, which we refer to as contexts. We'll also create candidates out of these contexts, which are the objects we want to classify, in this case, possible mentions of spouses. Finally, we'll load some gold labels for evaluation.\nAll of this preprocessed input data is saved to a database. (Connection strings can be specified by setting the SNORKELDB environment variable. In Snorkel, if no database is specified, then a SQLite database at ./snorkel.db is created by default--so no setup is needed here!)\nInitializing a SnorkelSession\nFirst, we initialize a SnorkelSession, which manages a connection to a database automatically for us, and will enable us to save intermediate results. If we don't specify any particular database (see commented-out code below), then it will automatically create a SQLite database in the background for us:",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\n\n# Connect to the database backend and initalize a Snorkel session\nfrom lib.init import *\n\n# Here, we just set how many documents we'll process for automatic testing- you can safely ignore this!\nn_docs = 1000 if 'CI' in os.environ else 2591",
"Loading the Corpus\nNext, we load and pre-process the corpus of documents.\nConfiguring a DocPreprocessor\nWe'll start by defining a TSVDocPreprocessor class to read in the documents, which are stored in a tab-seperated value format as pairs of document names and text.",
"from snorkel.parser import TSVDocPreprocessor\n\ndoc_preprocessor = TSVDocPreprocessor('data/articles.tsv', max_docs=n_docs)",
"Running a CorpusParser\nWe'll use Spacy, an NLP preprocessing tool, to split our documents into sentences and tokens, and provide named entity annotations.",
"from snorkel.parser.spacy_parser import Spacy\nfrom snorkel.parser import CorpusParser\n\ncorpus_parser = CorpusParser(parser=Spacy())\n%time corpus_parser.apply(doc_preprocessor, count=n_docs, parallelism=1)",
"We can then use simple database queries (written in the syntax of SQLAlchemy, which Snorkel uses) to check how many documents and sentences were parsed:",
"from snorkel.models import Document, Sentence\n\nprint(\"Documents:\", session.query(Document).count())\nprint(\"Sentences:\", session.query(Sentence).count())",
"Generating Candidates\nThe next step is to extract candidates from our corpus. A Candidate in Snorkel is an object for which we want to make a prediction. In this case, the candidates are pairs of people mentioned in sentences, and our task is to predict which pairs are described as married in the associated text.\nDefining a Candidate schema\nWe now define the schema of the relation mention we want to extract (which is also the schema of the candidates). This must be a subclass of Candidate, and we define it using a helper function. Here we'll define a binary spouse relation mention which connects two Span objects of text. Note that this function will create the table in the database backend if it does not exist:",
"from snorkel.models import candidate_subclass\n\nSpouse = candidate_subclass('Spouse', ['person1', 'person2'])",
"Writing a basic CandidateExtractor\nNext, we'll write a basic function to extract candidate spouse relation mentions from the corpus. The Spacy parser we used performs named entity recognition for us.\nWe will extract Candidate objects of the Spouse type by identifying, for each Sentence, all pairs of ngrams (up to 7-grams, matching the n_max=7 below) that were tagged as people. We do this with three objects:\n\n\nA ContextSpace defines the \"space\" of all candidates we even potentially consider; in this case we use the Ngrams subclass, and look for all n-grams up to 7 words long\n\n\nA Matcher heuristically filters the candidates we use. In this case, we just use a pre-defined matcher which looks for all n-grams tagged by Spacy as \"PERSON\"\n\n\nA CandidateExtractor combines this all together!",
"from snorkel.candidates import Ngrams, CandidateExtractor\nfrom snorkel.matchers import PersonMatcher\n\nngrams = Ngrams(n_max=7)\nperson_matcher = PersonMatcher(longest_match_only=True)\ncand_extractor = CandidateExtractor(Spouse, \n [ngrams, ngrams], [person_matcher, person_matcher],\n symmetric_relations=False)",
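The Ngrams/Matcher/CandidateExtractor pipeline amounts to enumerating contiguous token spans and filtering them with a predicate. A library-free sketch of that idea on a toy tagged sentence; the tokens, tags, and the trivial "person" matcher are all invented here (and n_max is 2 for brevity, where the notebook uses 7):

```python
# toy stand-ins for a parsed Sentence with NER tags
tokens = ["Barack", "Obama", "married", "Michelle", "Obama"]
tags   = ["PERSON", "PERSON", "O",       "PERSON",   "PERSON"]

def ngrams(tokens, n_max):
    """All contiguous spans up to n_max tokens, as (start, end) pairs."""
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + n_max, len(tokens)) + 1):
            yield (i, j)

def person_matcher(span):
    """Keep spans whose tokens are all tagged PERSON (a crude stand-in
    for snorkel's PersonMatcher)."""
    i, j = span
    return all(t == "PERSON" for t in tags[i:j])

people = [" ".join(tokens[i:j]) for (i, j) in ngrams(tokens, 2)
          if person_matcher((i, j))]
```

The real extractor then takes the cross product of such person spans within a sentence to form Spouse candidates.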
"Next, we'll split up the documents into train, development, and test splits; and collect the associated sentences.\nNote that we'll filter out a few sentences that mention more than five people. These lists are unlikely to contain spouses.",
"from snorkel.models import Document\nfrom lib.util import number_of_people\n\ndocs = session.query(Document).order_by(Document.name).all()\n\ntrain_sents = set()\ndev_sents = set()\ntest_sents = set()\n\nfor i, doc in enumerate(docs):\n for s in doc.sentences:\n if number_of_people(s) <= 5:\n if i % 10 == 8:\n dev_sents.add(s)\n elif i % 10 == 9:\n test_sents.add(s)\n else:\n train_sents.add(s)",
"Finally, we'll apply the candidate extractor to the three sets of sentences. The results will be persisted in the database backend.",
"%%time\nfor i, sents in enumerate([train_sents, dev_sents, test_sents]):\n cand_extractor.apply(sents, split=i, parallelism=1)\n print(\"Number of candidates:\", session.query(Spouse).filter(Spouse.split == i).count())",
"Loading Gold Labels\nFinally, we'll load gold labels for development and evaluation. Even though Snorkel is designed to create labels for data, we still use gold labels to evaluate the quality of our models. Fortunately, we need far less labeled data to evaluate a model than to train it.",
"from lib.util import load_external_labels\n\n%time load_external_labels(session, Spouse, annotator_name='gold')",
"Next, in Part II, we will work towards building a model to predict these labels with high accuracy using data programming"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mgeier/jupyter-presentation
|
jupyter-presentation.ipynb
|
cc0-1.0
|
[
"Using Jupyter/IPython for Teaching\n<p xmlns:dct=\"http://purl.org/dc/terms/\">\n <a rel=\"license\"\n href=\"http://creativecommons.org/publicdomain/zero/1.0/\">\n <img src=\"http://i.creativecommons.org/p/zero/1.0/88x31.png\" style=\"border-style: none;\" alt=\"CC0\" />\n </a>\n</p>\n\nThis notebook is meant to be a slide show.\nIf it doesn't look like a slide show, you probably have to install RISE:\npython3 -m pip install rise --user\npython3 -m notebook.nbextensions install --python rise --user\npython3 -m notebook.nbextensions enable --python rise --user\n\nIf it doesn't work, you might have to use python instead of python3.\nAfter the installation (and after re-loading the Jupyter notebook), you will have a new item in the toolbar which allows you to start the presentation.\nWhat is Jupyter?\n\nformerly known as IPython (\"interactive Python\")\nan interactive terminal and a browser-based notebook\n\nhttps://jupyter.org/\n\n\ncan be used with different programming languages:\n\n\nJulia (http://julialang.org/)\n\n\nPython (https://www.python.org/)\n\n\nR (http://www.r-project.org/)\n\n\nand many others ...\n\n\nWhat's so great about the Jupyter notebook?\n\n\nmix of text, code and results\n\n\nmedia\n\n\nimages, audio, video\n\n\nanything a web browser can display\n\n\nequations\n\n\nOne notebook, many uses\n\n\ninteractive local use\n\n\nstatic online HTML pages on http://nbviewer.jupyter.org/\n\n\ninteractive online use at https://mybinder.org/\n\n\nnbconvert\n\n\nHTML\n\n\n$\\mathrm{\\LaTeX}$ $\\to$ PDF\n\n\n.py files\n\n\n...\n\n\nslide shows!\n\n\nHTML5 <audio> tag\n<audio src=\"data/singing.wav\" controls>Your browser does not support the audio element.</audio>\nsinging.wav by www.openairlib.net; CC BY-SA.\nLoading Audio Data in Python",
"import soundfile as sf\nsig, fs = sf.read('data/singing.wav')",
"Plotting",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport numpy as np\n\nt = np.arange(len(sig)) / fs\nplt.plot(t, sig)\nplt.xlabel('time / seconds')\nplt.grid()",
"Spectrogram\nSquared magnitude of the Short Time Fourier Transform (STFT)\n$$|\\text{STFT}{x[n]}(m, \\omega)|^2 = \\left| \\sum_{n=-\\infty}^\\infty x[n]w[n-m] \\text{e}^{-j \\omega n}\\right|^2$$",
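Each column of the spectrogram is the squared DFT magnitude of one windowed frame. A minimal stdlib-only sketch of that inner computation on a single synthetic frame (a pure cosine landing exactly on bin 2, rectangular window, not the audio file):

```python
# |DFT|^2 of one frame: the building block of the spectrogram
import cmath
from math import cos, pi

N = 8
k0 = 2                                    # cosine exactly on bin 2
frame = [cos(2 * pi * k0 * n / N) for n in range(N)]

def power_spectrum(x):
    """Squared DFT magnitudes of one (already windowed) frame."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * pi * k * n / N)
                    for n in range(N))) ** 2
            for k in range(N)]

P = power_spectrum(frame)
# for a real signal, bins 0..N/2 suffice (the rest mirror them)
peak_bin = max(range(N // 2 + 1), key=lambda k: P[k])
```

plt.specgram does this frame by frame (with an FFT and a non-rectangular window by default) and stacks the columns over time.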
"plt.specgram(sig, Fs=fs)\nplt.ylabel('frequency / Hz')\nplt.xlabel('time / seconds')\nplt.ylim(0, 10000);",
"Symbolic Math",
"%matplotlib inline\nimport sympy as sp\n\nsp.init_printing()\n\nt, sigma, omega = sp.symbols(('t', 'sigma', 'omega'))\n\nsigma = -2\nomega = 10\ns = sigma + sp.I * omega\nx = sp.exp(s * t)\nx\n\nsp.plotting.plot(sp.re(x),(t, 0, 2 * sp.pi), ylim=[-2, 2], ylabel='Re{$e^{st}$}')\nsp.plotting.plot(sp.im(x),(t, 0, 2 * sp.pi), ylim=[-2, 2], ylabel='Im{$e^{st}$}');",
"Example notebooks\nexercises for the lecture \"communication acoustics\"\nThat's it for now!\n<p xmlns:dct=\"http://purl.org/dc/terms/\">\n <a rel=\"license\"\n href=\"http://creativecommons.org/publicdomain/zero/1.0/\">\n <img src=\"http://i.creativecommons.org/p/zero/1.0/88x31.png\" style=\"border-style: none;\" alt=\"CC0\" />\n </a>\n <br />\n To the extent possible under law,\n <span rel=\"dct:publisher\" resource=\"[_:publisher]\">the person who associated CC0</span>\n with this work has waived all copyright and related or neighboring\n rights to this work.\n</p>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bird-house/birdy
|
notebooks/demo/agu2018_demo.ipynb
|
apache-2.0
|
[
"AGU 2018 Demo\nThis notebook shows how to use birdy's high-level interface to WPS processes. \nHere we access a test server called Emu offering a dozen or so dummy processes. \nThe shell interface",
"%%bash\nexport WPS_SERVICE=\"http://localhost:5000/wps?Service=WPS&Request=GetCapabilities&Version=1.0.0\"\nbirdy -h\n\n%%bash\nexport WPS_SERVICE=\"http://localhost:5000/wps?Service=WPS&Request=GetCapabilities&Version=1.0.0\"\nbirdy hello -h\n\n%%bash\nexport WPS_SERVICE=\"http://localhost:5000/wps?Service=WPS&Request=GetCapabilities&Version=1.0.0\"\nbirdy hello --name stranger",
"The python interface\nThe WPSClient function creates a mock python module whose functions actually call a remote WPS process. The docstring and signature of the function are dynamically created from the remote's process description. If you type wps. and then press Tab, you should see a drop-down list of available processes. Simply call help on each process or type ? after the process to print the docstring for that process.",
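The "mock python module" idea, functions plus docstrings generated at runtime from a service description, can be sketched with stdlib tools alone. The process names and descriptions below are invented stand-ins for what a GetCapabilities/DescribeProcess response would provide, not a real WPS:

```python
from types import SimpleNamespace

# hypothetical service description (stand-in for DescribeProcess output)
description = {
    "hello": {"doc": "Say hello.", "inputs": ["name"]},
    "sleep": {"doc": "Sleep for a while.", "inputs": []},
}

def make_client(desc):
    ns = SimpleNamespace()
    for name, meta in desc.items():
        def call(*args, _name=name, **kwargs):
            # a real client would send an Execute request here
            return ("executed", _name, args, kwargs)
        call.__name__ = name
        call.__doc__ = meta["doc"]     # this is what help()/? would show
        setattr(ns, name, call)
    return ns

wps = make_client(description)
result = wps.hello("stranger")
```

Binding `_name=name` as a default argument pins the process name per function; a plain closure over `name` would late-bind and make every generated function call the last process in the loop.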
"from birdy import WPSClient\nurl = \"http://localhost:5000/wps?Service=WPS&Request=GetCapabilities&Version=1.0.0\"\nwps = WPSClient(url, verify=False)\nhelp(wps.binaryoperatorfornumbers)",
"Type wps. and then press Tab; you should see a drop-down list of available processes.",
"# wps.",
"Process execution\nProcesses are executed by calling the function. Each process instantaneously returns a WPSExecute object. The actual output values of the process are obtained by calling the get method. This get method returns a namedtuple storing the process outputs as native python objects.",
"resp = wps.binaryoperatorfornumbers(1, 2, operator='add')\nprint(resp)\nresp.get()",
"For instance, the inout function returns a wide variety of data types (float, integers, dates, etc) all of which are converted into a corresponding python type.",
"wps.inout().get()",
"Retrieving outputs by references\nFor ComplexData objects, WPS servers often return a reference to the output (an http link) instead of the actual data. This is useful if that output is to serve as an input to another process, so as to avoid passing large files back and forth for nothing. \nWith birdy, the default return values are the references themselves, but it's also possible to download these references in the background and convert them into python objects. To trigger this automatic conversion, set asobj to True when calling the get method. In the example below, we're using a dummy process called output_formats, whose first output is a netCDF file, and second output is a json file. With asobj=True, the netCDF file is opened and returned as a netcdf4.Dataset instance, and the json file is converted into a dictionary.",
"# NBVAL_SKIP\n# This cell is failing due to an unauthenticated SSL certificate\nout = wps.output_formats()\nnc, json = out.get()\nprint(out.get())\nds, json = out.get(asobj=True)\nprint(json)\nds",
"Progress bar\nIt's possible to display a progress bar when calling a process. The interface to do so at the moment goes like this. Note that the cancel button does not do much here, as the WPS server does not support interruption requests.",
"wps = WPSClient(\n 'http://localhost:5000/wps', \n progress=True)\nresp = wps.sleep()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UWPRG/Python
|
tutorials/MetaD countours.ipynb
|
mit
|
[
"Jim's notebook on contour plots, showing a projection of 2D data on top of the contour plot",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nunbiasedCVs = np.genfromtxt('NVT_monitor/COLVAR', comments='#')\nbiasedCVs = np.genfromtxt('MetaD/COLVAR', comments='#')\nunbiasedCVsHOT = np.genfromtxt('NVT_monitor/hot/COLVAR', comments='#')\n",
"Plotting biased and unbiased CVS",
"%matplotlib inline\n\nfig = plt.figure(figsize=(6,6)) \naxes = fig.add_subplot(111)\nstride=5\nxlabel='$\\Phi$'\nylabel='$\\Psi$'\n\naxes.plot(biasedCVs[::stride,1],biasedCVs[::stride,2],marker='o',markersize=4,linestyle='none')\naxes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=4,linestyle='none',markerfacecolor='yellow')\n\naxes.set_xlabel(xlabel, fontsize=20)\naxes.set_ylabel(ylabel, fontsize=20)\n\n\nplt.show()",
"Plotting contour plot of biased FES",
"#read the data in from a text file \nfesdata = np.genfromtxt('MetaD/fes.dat',comments='#');\n\nfesdata = fesdata[:,0:3]\n#what was your grid size? this calculates it \ndim=int(np.sqrt(np.size(fesdata)/3))\n\n#some post-processing to be compatible with contourf \nX=np.reshape(fesdata[:,0],[dim,dim],order=\"F\") #order F was 20% faster than A/C\nY=np.reshape(fesdata[:,1],[dim,dim],order=\"F\") \nZ=np.reshape((fesdata[:,2]-np.min(fesdata[:,2]))/4.184,[dim,dim],order=\"F\") #convert to kcal/mol\n\n#what spacing do you want? assume units are in kJ/mol\nspacer=1\nlines=20\nlevels=np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)\n\n\nfig=plt.figure(figsize=(10,8)) \naxes = fig.add_subplot(111)\n\n\nplt.contourf(X, Y, Z, levels, cmap=plt.cm.bone,)\nplt.colorbar()\nplt.xlabel('$\\Phi$')\nplt.ylabel('$\\Psi$')\naxes.set_xlabel(xlabel, fontsize=20)\naxes.set_ylabel(ylabel, fontsize=20)\n\n\nstride=10\n#axes.plot(biasedCVs[::stride,1],biasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='cyan')\naxes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='blue')\n#axes.plot(unbiasedCVsHOT[::stride,1],unbiasedCVsHOT[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='red')\n\nunbiasedCVs = np.genfromtxt('NVT_monitor/other_basin/COLVAR',comments='#');\nstride=5\naxes.plot(unbiasedCVs[::stride,1],unbiasedCVs[::stride,2],marker='o',markersize=8,linestyle='none',markerfacecolor='yellow')\n\n\n\nplt.savefig('fes_bias.png')\nplt.show()\n"
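The order="F" reshape above reads the flat fes.dat values in column-major (Fortran) order, so element (i, j) of the dim x dim grid sits at flat index i + j*dim. A tiny stdlib-only illustration on made-up numbers (not the real free-energy surface):

```python
# column-major reshape by hand: flat index i + j*dim -> grid[i][j]
flat = [0.0, 1.0, 2.0, 3.0]   # hypothetical 2x2 free-energy values
dim = 2

Z = [[flat[i + j * dim] for j in range(dim)] for i in range(dim)]
# np.reshape(flat, [dim, dim], order="F") produces the same layout
```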
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ehongdata/Network-Analysis-Made-Simple
|
3. Hubs and Paths (Student).ipynb
|
mit
|
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nfrom collections import Counter\n\n%matplotlib inline\n\n# Load the pickled data without the new individuals added in the previous notebook.\n\nG = nx.read_gpickle('Synthetic Social Network.pkl')",
"Hubs: How do we evaluate the importance of some individuals in a network?\nWithin a social network, there will be certain individuals who perform certain important functions. For example, there may be hyper-connected individuals who are connected to many, many more people. They would be of use in the spreading of information. Alternatively, if this were a disease contact network, identifying them would be useful in stopping the spread of diseases. How would one identify these people?\nApproach 1: Neighbors\nOne way we could compute this is to find out the number of people an individual is connected to. NetworkX lets us do this by giving us a G.neighbors(node) function.",
"# Let's find out the number of neighbors that individual #7 has.\nG.neighbors(7)",
"Exercise\nCan you create a ranked list of the importance of each individual, based on the number of neighbors they have? \nHint: One suggested output would be a list of tuples, where the first element in each tuple is the node ID (an integer number), and the second element is a list of its neighbors.\nHint: Python's sorted(iterable, key=lambda x:...., reverse=True) function may be of help here.\nApproach 2: Degree Centrality\nThe number of other nodes that one node is connected to is a measure of its centrality. NetworkX implements a degree centrality, which is defined as the number of neighbors that a node has normalized to the number of individuals it could be connected to in the entire graph. This is accessed by using nx.degree_centrality(G)",
"nx.degree_centrality(G)",
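Degree centrality itself is just a node's degree divided by n - 1, the number of other nodes it could possibly be connected to. A quick check of that definition on a toy adjacency dict (invented, not the pickled social network):

```python
# degree centrality = degree / (n - 1), computed by hand
adj = {0: [1, 2], 1: [0], 2: [0], 3: []}   # made-up 4-node graph

n = len(adj)
degree_centrality = {node: len(nbrs) / (n - 1)
                     for node, nbrs in adj.items()}
```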
"If you inspect the dictionary closely, you will find that node 19 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.\nThere are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.\nThe NetworkX API docs that document the centrality measures are here: http://networkx.github.io/documentation/networkx-1.9.1/reference/algorithms.centrality.html\nExercises\n\nCan you create a histogram of the distribution of degree centralities? (1-2 min)\nCan you create a histogram of the distribution of number of neighbors? (1-2 min)\nCan you create a scatterplot of the degree centralities against number of neighbors? (1-2 min)\nIf I have n nodes, then how many possible edges are there in total, assuming self-edges are allowed? What if self-edges are not allowed?\n\nTime: 3-6 min.\nHint: You may want to use:\nplt.hist(list_of_values)\n\nand \nplt.scatter(x_values, y_values)\n\nIf you know the Matplotlib API, feel free to get fancy :).",
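For exercise 4 above, one way to reason about the edge count (assuming an undirected graph) is to enumerate unordered node pairs directly; the brute-force check below confirms the closed forms n(n+1)/2 with self-edges and n(n-1)/2 without:

```python
# counting possible undirected edges on n nodes, verified by enumeration
from itertools import combinations, combinations_with_replacement

n = 5
with_self = len(list(combinations_with_replacement(range(n), 2)))
without_self = len(list(combinations(range(n), 2)))

assert with_self == n * (n + 1) // 2       # self-edges allowed
assert without_self == n * (n - 1) // 2    # self-edges not allowed
```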
"# Your answer here.\n",
"Paths in a Network\nGraph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths that connect two nodes in the network. \nUsing the synthetic social network, we will figure out how to answer the following questions:\n\nHow long will it take for a message to spread through this group of friends? (making some assumptions, of course)\nHow do we find the shortest path to get from individual A to individual B?\n\nShortest Path",
"nx.draw(G, with_labels=True)",
"Let's say we wanted to find the shortest path between two nodes. How would we approach this? One approach is what one would call a breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search). While not necessarily the fastest, it is the easiest to conceptualize.\nThe approach is essentially as such:\n\n1. Begin with a queue containing only the starting node.\n2. Add the neighbors of that node to the queue.\n3. If the destination node is present in the queue, end.\n4. Otherwise, for each node in the queue:\n   1. Remove the node from the queue.\n   2. Add the neighbors of the node to the queue, checking whether the destination node is among them.\n   3. If the destination node is present, break; otherwise, repeat step 4.\n\nExercise\nTry implementing this algorithm in a function called path_exists(node1, node2, G).\nThe function should take in two nodes, node1 and node2, and the graph G that they belong to, and return a Boolean that indicates whether a path exists between those two nodes or not.",
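Before tackling the networkx version, the queue-based procedure can be sketched on a plain adjacency dict; the toy graph below is invented for illustration:

```python
# breadth-first reachability on an adjacency dict
from collections import deque

def path_exists_dict(adj, node1, node2):
    """Return True if node2 is reachable from node1 in adj."""
    visited = set()
    queue = deque([node1])
    while queue:
        node = queue.popleft()
        if node == node2:
            return True
        if node in visited:
            continue          # already expanded this node
        visited.add(node)
        queue.extend(adj[node])
    return False

toy = {1: [2], 2: [1, 3], 3: [2], 4: []}   # node 4 is disconnected
```

The visited set is what keeps the search from looping forever on cycles; the networkx exercise is the same idea with G.neighbors(node) supplying the adjacency.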
"def path_exists(node1, node2, G):\n \"\"\"\n This function checks whether a path exists between two nodes (node1, node2) in graph G.\n \"\"\"\n",
"And testing the function on a few test cases:\n\n18 and any other node (should return False)\n29 and 26 (should return True)",
"path_exists(18, 5, G)\npath_exists(29, 26, G)",
"Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to always implement this on our own. :-)\nhttp://networkx.lanl.gov/reference/generated/networkx.algorithms.shortest_paths.generic.has_path.html#networkx.algorithms.shortest_paths.generic.has_path",
"nx.has_path(G, 18, 5)",
"NetworkX also has other shortest path algorithms implemented. \nhttps://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.shortest_paths.unweighted.predecessor.html#networkx.algorithms.shortest_paths.unweighted.predecessor\nWe can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another.",
"nx.draw(G, with_labels=True)",
"nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes. (Not all paths are guaranteed to be found.)",
"nx.shortest_path(G, 4, 14)",
"Incidentally, the node list is in order as well - we will travel through 19 and 17 in that order to get from 14 from 4.\nExercise\nWrite a function that extracts the edges in the shortest path between two nodes and puts them into a new graph, and draws it to the screen. It should also return an error if there is no path between the two nodes. (~5 min) \nHint: You may want to use G.subgraph(iterable_of_nodes) to extract just the nodes and edges of interest from the graph G. One coding pattern to consider is this:\nnewG = G.subgraph(nodes_of_interest)\n\nnewG will be comprised of the nodes of interest and the edges that connect them.",
"# Possible Answer:\n\ndef extract_path_edges(G, source, target):\n \"\"\"\n Fill in the code below.\n \"\"\"\n\n \n# Test your function with the following block of code.\nnewG = extract_path_edges(G, 1, 14)\nnx.draw(newG, with_labels=True)",
"Exercise\nSince we've been drawing some graphs to screen, we might as well draw a few other things while we're on a roll.\nWrite a function that extracts only node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to screen. (~5 min.)",
"# Possible Answer\n\ndef extract_neighbor_edges(G, node):\n \"\"\"\n Fill in code below.\n \"\"\"\n\n \n# Test your function with the following block of code.\nfig = plt.figure(0)\nnewG = extract_neighbor_edges(G, 19)\nnx.draw(newG, with_labels=True)",
"Challenge Exercises (optional)\nLet's try some other problems that build on the NetworkX API. (10 min.)\nRefer to the following for the relevant functions:\nhttps://networkx.github.io/documentation/latest/reference/algorithms.shortest_paths.html\n\nIf we want a message to go from one person to another person, and we assume that the message takes 1 day for the initial step and 1 additional day per step in the transmission chain (i.e. the first step takes 1 day, the second step takes 2 days etc.), how long will the message take to spread from any two given individuals? Write a function to compute this.\nWhat is the distribution of message spread times from person to person? What about chain lengths?\nAre there certain individuals who consistently show up in the chain? (Hint: you might wish to use the following functions/objects:\nCounter object from the collections module \ncombinations function from the itertools module.\nall_shortest_paths(G, node1, node2) which is part of the networkX algorithms.\n\n\nAs a bonus, if you were able to compute the answer to question 3, can you plot a histogram of the number of times each node shows up in a connecting path?",
"# Your answer to Question 1:\n# All we need here is the length of the path.\n\ndef compute_transmission_time(G, source, target):\n \"\"\"\n Fill in code below.\n \"\"\"\n# Test with the following line of code.\ncompute_transmission_time(G, 14, 4) \n\n# Your answer to Question 2:\n# We need to know the length of every single shortest path between every pair of nodes.\n# If we don't put a source and target into the nx.shortest_path_length(G) function call, then\n# we get a dictionary of dictionaries, where all source-->target-->lengths are shown.\n\n\n\n# Your answer to Question 3:\n# You may want to use the Counter object from collections, as well as combinations from itertools.\nfrom collections import Counter\nfrom itertools import combinations\n\n\n# Your answer to Question 4:\n# Hint: You may want to use bar graphs or histograms.\nplt.bar(totals.keys(), totals.values())",
"Hubs Revisited\nIt looks like individual 19 is an important person of some sorts - if a message has to be passed through the network in the shortest time possible, then usually it'll go through person 19. Such a person has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.\nhttp://en.wikipedia.org/wiki/Betweenness_centrality",
"btws = nx.betweenness_centrality(G, normalized=False)\nplt.bar(btws.keys(), btws.values())",
"Exercise\nPlot betweeness centrality against degree centrality for the synthetic social network above.\nThink about it...\nFrom the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of a reason why?\nWhat would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below?",
"nx.draw(nx.barbell_graph(5, 1))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dariox2/CADL
|
session-5/lecture-5.ipynb
|
apache-2.0
|
[
"Session 5: Generative Models\n<p class=\"lead\">\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning with Google's Tensorflow</a><br />\n<a href=\"http://pkmital.com\">Parag K. Mital</a><br />\n<a href=\"https://www.kadenze.com\">Kadenze, Inc.</a>\n</p>\n\n<a name=\"learning-goals\"></a>\nLearning Goals\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nIntroduction\nGenerative Adversarial Networks\nInput Pipelines\nGAN/DCGAN\nExtensions\n\n\nRecurrent Networks\nBasic RNN Cell\nLSTM RNN Cell\nGRU RNN Cell\n\n\nCharacter Langauge Model\nSetting up the Data\nCreating the Model\nLoss\nClipping the Gradient\nTraining\nExtensions\n\n\nDRAW Network\nFuture\nHomework\nExamples\nReading\n\n<!-- /MarkdownTOC -->",
"# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n',\n 'You should consider updating to Python 3.4.0 or',\n 'higher as the libraries built for this course',\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda'\n 'and then restart `jupyter notebook`:\\n',\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n from scipy.ndimage.filters import gaussian_filter\n import IPython.display as ipyd\n import tensorflow as tf\n from libs import utils, gif, datasets, dataset_utils, nb_utils\nexcept ImportError as e:\n print(\"Make sure you have started notebook in the same directory\",\n \"as the provided zip file which includes the 'libs' folder\",\n \"and the file 'utils.py' inside of it. You will NOT be able\",\n \"to complete this assignment unless you restart jupyter\",\n \"notebook inside the directory created by extracting\",\n \"the zip file or cloning the github repo.\")\n print(e)\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")",
"<a name=\"introduction\"></a>\nIntroduction\nSo far we've seen the basics of neural networks, how they can be used for encoding large datasets, or for predicting labels. We've also seen how to interrogate the deeper representations that networks learn in order to help with their objective, and how ampliyfing some of these objectives led to creating deep dream. Finally, we saw how the representations in deep nets trained on object recognition are capable of representing both style and content, and how we could independently manipulate a new image to have the style of one image, and the content of another.\nIn this session we'll start to explore some more generative models. We've already seen how an autoencoder is composed of both an encoder which takes an input and represents it into some hidden state vector. From this hidden state vector, a decoder is capable of resynthsizing the original input, though with some loss. So think back to the the decoders that we've already built. It has an internal state, and from that state, it can express the entire distribution of the original data, that is, it can express any possible image that is has seen.\nWe call that a generative model as it is capable of generating the distribution of the data. Contrast this to the latter half of Session 3 when we saw how ot label an image using supervised learning. This model is really trying to discriminate the data distribution based on the extra labels that we have. 
So this is another helpful distinction with machine learning algorithms, ones that are generative and others that are discriminative.\nIn this session, we'll explore more generative models, and states can be used to generate data in two other very powerful generative networks, one based on game theory called the generative adversarial network, and another capable of remembering and forgetting over time, allowing us to model dynamic content and sequences, called the recurrent neural network.\n<a name=\"generative-adversarial-networks\"></a>\nGenerative Adversarial Networks\nIn session 3, we were briefly introduced to the Variational Autoencoder. This network was very powerful because it encompasses a very strong idea. And that idea is measuring distance not necessarily based on pixels, but in some \"semantic space\". And I mentioned then that we'd see another type of network capable of generating even better images of CelebNet.\nSo this is where we're heading...\nWe're now going to see how to do that using what's called the generative adversarial network.\nThe generative adversarial network is actually two networks. One called the generator, and another called the discriminator. The basic idea is the generator is trying to create things which look like the training data. So for images, more images that look like the training data. The discriminator has to guess whether what its given is a real training example. Or whether its the output of the generator. By training one after another, you ensure neither are ever too strong, but both grow stronger together. The discriminator is also learning a distance function! This is pretty cool because we no longer need to measure pixel-based distance, but we learn the distance function entirely!\nThe Generative Adversarial Network, or GAN, for short, are in a way, very similar to the autoencoder we created in session 3. Or at least the implementation of it is. 
The discriminator is a lot like the encoder part of this network, except instead of going down to the 64 dimensions we used in our autoencoder, we'll reduce our input down to a single value, yes or no, 0 or 1, denoting yes its a true training example, or no, it's a generated one.\nAnd the generator network is exactly like the decoder of the autoencoder. Except, there is nothing feeding into this inner layer. It is just on its own. From whatever vector of hidden values it starts off with, it will generate a new example meant to look just like the training data. One pitfall of this model is there is no explicit encoding of an input. Meaning, you can't take an input and find what would possibly generate it. However, there are recent extensions to this model which make it more like the autoencoder framework, allowing it to do this.\n<a name=\"input-pipelines\"></a>\nInput Pipelines\nBefore we get started, we're going to need to work with a very large image dataset, the CelebNet dataset. In session 1, we loaded this dataset but only grabbed the first 1000 images. That's because loading all 200 thousand images would take up a lot of memory which we'd rather not have to do. And in Session 3 we were introduced again to the CelebNet and Sita Sings the Blues which required us to load a lot of images. I glossed over the details of the input pipeline then so we could focus on learning the basics of neural networks. But I think now we're ready to see how to handle some larger datasets.\nTensorflow provides operations for takinga list of files, using that list to load the data pointed to it, decoding that file's data as an image, and creating shuffled minibatches. 
All of this is put into a queue and managed by queuerunners and coordinators.\nAs you may have already seen in the Variational Autoencoder's code, I've provided a simple interface for creating such an input pipeline using image files which will also apply cropping and reshaping of images in the pipeline so you don't have to deal with any of it. Let's see how we can use it to load the CelebNet dataset.\nLet's first get the list of all the CelebNet files:",
"import tensorflow as tf\nfrom libs.datasets import CELEB\nfiles = CELEB()",
"And then create our input pipeline to create shuffled minibatches and crop the images to a standard shape. This will require us to specify the list of files, how large each minibatch is, how many epochs we want to run for, and how we want the images to be cropped.",
"from libs.dataset_utils import create_input_pipeline\nbatch_size = 100\nn_epochs = 10\ninput_shape = [218, 178, 3]\ncrop_shape = [64, 64, 3]\ncrop_factor = 0.8\nbatch = create_input_pipeline(\n files=files,\n batch_size=batch_size,\n n_epochs=n_epochs,\n crop_shape=crop_shape,\n crop_factor=crop_factor,\n shape=input_shape)",
"Then when we are ready to use the batch generator, we'll need to create a Coordinator and specify this to tensorflow using the start_queue_runners method in order to provide the data:",
"sess = tf.Session()\ncoord = tf.train.Coordinator()\nthreads = tf.train.start_queue_runners(sess=sess, coord=coord)",
"We can grab our data using our batch generator like so:",
"batch_xs = sess.run(batch)\n# We get batch_size at a time, so 100\nprint(batch_xs.shape)\n# The datatype is float32 since what is what we use in the tensorflow graph\n# And the max value still has the original image range from 0-255\nprint(batch_xs.dtype, np.max(batch_xs.dtype))\n# So to plot it, we'll need to divide by 255.\nplt.imshow(batch_xs[0] / 255.0)",
"Let's see how to make use of this while we train a generative adversarial network!\n<a name=\"gandcgan\"></a>\nGAN/DCGAN\nInside the libs directory, you'll find gan.py which shows how to create a generative adversarial network with or without convolution, and how to train it using the CelebNet dataset. Let's step through the code and then I'll show you what it's capable of doing.\n-- Code demonstration not transcribed. -- \n<a name=\"extensions\"></a>\nExtensions\nSo it turns out there are a ton of very fun and interesting extensions when you have a model in this space. It turns out that you can perform addition in the latent space. I'll just show you Alec Radford's code base on github to show you what that looks like.\n<a name=\"recurrent-networks\"></a>\nRecurrent Networks\nUp until now, all of the networks that we've learned and worked with really have no sense of time. They are static. They cannot remember sequences, nor can they understand order outside of the spatial dimensions we offer it. Imagine for instance that we wanted a network capable of reading. As input, it is given one letter at a time. So let's say it were given the letters 'n', 'e', 't', 'w', 'o', 'r', and we wanted it to learn to output 'k'. It would need to be able to reason about inputs it received before the last one it received, the letters before 'r'. But it's not just letters.\nConsider the way we look at the world. We don't simply download a high resolution image of the world in front of us. We move our eyes. Each fixation takes in new information and each of these together in sequence help us perceive and act. That again is a sequential process.\nRecurrent neural networks let us reason about information over multiple timesteps. They are able to encode what it has seen in the past as if it has a memory of its own. It does this by basically creating one HUGE network that expands over time. It can reason about the current timestep by conditioning on what it has already seen. 
By giving it many sequences as batches, it can learn a distribution over sequences which can model the current timestep given the previous timesteps. But in order for this to be practical, we specify at each timestep, or each time it views an input, that the weights in each new timestep cannot change. We also include a new matrix, H, which reasons about the past timestep, connecting each new timestep. For this reason, we can just think of recurrent networks as ones with loops in it.\nOther than that, they are exactly like every other network we've come across! They will have an input and an output. They'll need a loss or an objective function to optimize which will relate what we want the network to output for some given set of inputs. And they'll be trained with gradient descent and backprop.\n<a name=\"basic-rnn-cell\"></a>\nBasic RNN Cell\nThe basic recurrent cell can be used in tensorflow as tf.nn.rnn_cell.BasicRNNCell. Though for most complex sequences, especially longer sequences, this is almost never a good idea. That is because the basic RNN cell does not do very well as time goes on. To understand why this is, we'll have to learn a bit more about how backprop works. When we perform backrprop, we're multiplying gradients from the output back to the input. As the network gets deeper, there are more multiplications along the way from the output to the input.\nSame for recurrent networks. Remember, their just like a normal feedforward network with each new timestep creating a new layer. So if we're creating an infinitely deep network, what will happen to all our multiplications? Well if the derivatives are all greater than 1, then they will very quickly grow to infinity. And if they are less than 1, then they will very quickly grow to 0. That makes them very difficult to train in practice. The problem is known in the literature as the exploding or vanishing gradient problem. 
Luckily, we don't have to figure out how to solve it, because some very clever people have already come up with a solution, in 1997!, yea, what were you doing in 1997. Probably not coming up with they called the long-short-term-memory, or LSTM.\n<a name=\"lstm-rnn-cell\"></a>\nLSTM RNN Cell\nThe mechanics of this are unforunately far beyond the scope of this course, but put simply, it uses a combinations of gating cells to control its contents and by having gates, it is able to block the flow of the gradient, avoiding too many multiplications during backprop. For more details, I highly recommend reading: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.\nIn tensorflow, we can make use of this cell using tf.nn.rnn_cell.LSTMCell.\n<a name=\"gru-rnn-cell\"></a>\nGRU RNN Cell\nOne last cell type is worth mentioning, the gated recurrent unit, or GRU. Again, beyond the scope of this class. Just think of it as a simplifed version of the LSTM with 2 gates instead of 4, though that is not an accurate description. In Tensorflow we can use this with tf.nn.rnn_cell.GRUCell.\n<a name=\"character-langauge-model\"></a>\nCharacter Langauge Model\nWe'll now try a fun application of recurrent networks where we try to model a corpus of text, one character at a time. The basic idea is to take one character at a time and try to predict the next character in sequence. Given enough sequences, the model is capable of generating entirely new sequences all on its own.\n<a name=\"setting-up-the-data\"></a>\nSetting up the Data\nFor data, we're going to start with text. You can basically take any text file that is sufficiently long, as we'll need a lot of it, and try to use this. This website seems like an interesting place to begin: http://textfiles.com/directory.html and project guttenberg https://www.gutenberg.org/browse/scores/top. http://prize.hutter1.net/ also has a 50k euro reward for compressing wikipedia. Let's try w/ Alice's Adventures in Wonderland by Lewis Carroll:",
"%pylab\nimport tensorflow as tf\nfrom six.moves import urllib\nf, _ = urllib.request.urlretrieve('https://www.gutenberg.org/cache/epub/11/pg11.txt', 'alice.txt')\nwith open(f, 'r') as fp:\n txt = fp.read()",
"And let's find out what's inside this text file by creating a set of all possible characters.",
"vocab = list(set(txt))\nlen(txt), len(vocab)",
"Great so we now have about 164 thousand characters and 85 unique characters in our vocabulary which we can use to help us train a model of language. Rather than use the characters, we'll convert each character to a unique integer. We'll later see that when we work with words, we can achieve a similar goal using a very popular model called word2vec: https://www.tensorflow.org/versions/r0.9/tutorials/word2vec/index.html\nWe'll first create a look up table which will map a character to an integer:",
"encoder = dict(zip(vocab, range(len(vocab))))\ndecoder = dict(zip(range(len(vocab)), vocab))",
"<a name=\"creating-the-model\"></a>\nCreating the Model\nFor our model, we'll need to define a few parameters.",
"# Number of sequences in a mini batch\nbatch_size = 100\n\n# Number of characters in a sequence\nsequence_length = 100\n\n# Number of cells in our LSTM layer\nn_cells = 256\n\n# Number of LSTM layers\nn_layers = 2\n\n# Total number of characters in the one-hot encoding\nn_chars = len(vocab)",
"Now create the input and output to the network. Rather than having batch size x number of features; or batch size x height x width x channels; we're going to have batch size x sequence length.",
"X = tf.placeholder(tf.int32, [None, sequence_length], name='X')\n\n# We'll have a placeholder for our true outputs\nY = tf.placeholder(tf.int32, [None, sequence_length], name='Y')",
"Now remember with MNIST that we used a one-hot vector representation of our numbers. We could transform our input data into such a representation. But instead, we'll use tf.nn.embedding_lookup so that we don't need to compute the encoded vector. Let's see how this works:",
"# we first create a variable to take us from our one-hot representation to our LSTM cells\nembedding = tf.get_variable(\"embedding\", [n_chars, n_cells])\n\n# And then use tensorflow's embedding lookup to look up the ids in X\nXs = tf.nn.embedding_lookup(embedding, X)\n\n# The resulting lookups are concatenated into a dense tensor\nprint(Xs.get_shape().as_list())",
"To create a recurrent network, we're going to need to slice our sequences into individual inputs. That will give us timestep lists which are each batch_size x input_size. Each character will then be connected to a recurrent layer composed of n_cells LSTM units.",
"# Let's create a name scope for the operations to clean things up in our graph\nwith tf.name_scope('reslice'):\n Xs = [tf.squeeze(seq, [1])\n for seq in tf.split(1, sequence_length, Xs)]",
"Now we'll create our recurrent layer composed of LSTM cells.",
"cells = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_cells, state_is_tuple=True)",
"We'll initialize our LSTMs using the convenience method provided by tensorflow. We could explicitly define the batch size here or use the tf.shape method to compute it based on whatever X is, letting us feed in different sizes into the graph.",
"initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)",
"Great now we have a layer of recurrent cells and a way to initialize them. If we wanted to make this a multi-layer recurrent network, we could use the MultiRNNCell like so:",
"if n_layers > 1:\n cells = tf.nn.rnn_cell.MultiRNNCell(\n [cells] * n_layers, state_is_tuple=True)\n initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)",
"In either case, the cells are composed of their outputs as modulated by the LSTM's output gate, and whatever is currently stored in its memory contents. Now let's connect our input to it.",
"# this will return us a list of outputs of every element in our sequence.\n# Each output is `batch_size` x `n_cells` of output.\n# It will also return the state as a tuple of the n_cells's memory and\n# their output to connect to the time we use the recurrent layer.\noutputs, state = tf.nn.rnn(cells, Xs, initial_state=initial_state)\n\n# We'll now stack all our outputs for every cell\noutputs_flat = tf.reshape(tf.concat(1, outputs), [-1, n_cells])",
"For our output, we'll simply try to predict the very next timestep. So if our input sequence was \"networ\", our output sequence should be: \"etwork\". This will give us the same batch size coming out, and the same number of elements as our input sequence.",
"with tf.variable_scope('prediction'):\n W = tf.get_variable(\n \"W\",\n shape=[n_cells, n_chars],\n initializer=tf.random_normal_initializer(stddev=0.1))\n b = tf.get_variable(\n \"b\",\n shape=[n_chars],\n initializer=tf.random_normal_initializer(stddev=0.1))\n\n # Find the output prediction of every single character in our minibatch\n # we denote the pre-activation prediction, logits.\n logits = tf.matmul(outputs_flat, W) + b\n\n # We get the probabilistic version by calculating the softmax of this\n probs = tf.nn.softmax(logits)\n\n # And then we can find the index of maximum probability\n Y_pred = tf.argmax(probs, 1)",
"<a name=\"loss\"></a>\nLoss\nOur loss function will take the reshaped predictions and targets, and compute the softmax cross entropy.",
"with tf.variable_scope('loss'):\n # Compute mean cross entropy loss for each output.\n Y_true_flat = tf.reshape(tf.concat(1, Y), [-1])\n loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, Y_true_flat)\n mean_loss = tf.reduce_mean(loss)",
"<a name=\"clipping-the-gradient\"></a>\nClipping the Gradient\nNormally, we would just create an optimizer, give it a learning rate, and tell it to minize our loss. But with recurrent networks, we can help out a bit by telling it to clip gradients. That helps with the exploding gradient problem, ensureing they can't get any bigger than the value we tell it. We can do that in tensorflow by iterating over every gradient and variable, and changing their value before we apply their update to every trainable variable.",
"with tf.name_scope('optimizer'):\n optimizer = tf.train.AdamOptimizer(learning_rate=0.001)\n gradients = []\n clip = tf.constant(5.0, name=\"clip\")\n for grad, var in optimizer.compute_gradients(mean_loss):\n gradients.append((tf.clip_by_value(grad, -clip, clip), var))\n updates = optimizer.apply_gradients(gradients)",
"We could also explore other methods of clipping the gradient based on a percentile of the norm of activations or other similar methods, like when we explored deep dream regularization. But the LSTM has been built to help regularize the network through its own gating mechanisms, so this may not be the best idea for your problem. Really, the only way to know is to try different approaches and see how it effects the output on your problem.\n<a name=\"training\"></a>\nTraining",
"sess = tf.Session()\ninit = tf.initialize_all_variables()\nsess.run(init)\n\ncursor = 0\nit_i = 0\nwhile True:\n Xs, Ys = [], []\n for batch_i in range(batch_size):\n if (cursor + sequence_length) >= len(txt) - sequence_length - 1:\n cursor = 0\n Xs.append([encoder[ch]\n for ch in txt[cursor:cursor + sequence_length]])\n Ys.append([encoder[ch]\n for ch in txt[cursor + 1: cursor + sequence_length + 1]])\n\n cursor = (cursor + sequence_length)\n Xs = np.array(Xs).astype(np.int32)\n Ys = np.array(Ys).astype(np.int32)\n\n loss_val, _ = sess.run([mean_loss, updates],\n feed_dict={X: Xs, Y: Ys})\n print(it_i, loss_val)\n\n if it_i % 500 == 0:\n p = sess.run([Y_pred], feed_dict={X: Xs})[0]\n preds = [decoder[p_i] for p_i in p]\n print(\"\".join(preds).split('\\n'))\n\n it_i += 1",
"<a name=\"extensions-1\"></a>\nExtensions\nThere are also certainly a lot of additions we can add to speed up or help with training including adding dropout or using batch normalization that I haven't gone into here. Also when dealing with variable length sequences, you may want to consider using a special token to denote the last character or element in your sequence.\nAs for applications, completley endless. And I think that is really what makes this field so exciting right now. There doesn't seem to be any limit to what is possible right now. You are not just limited to text first of all. You may want to feed in MIDI data to create a piece of algorithmic music. I've tried it with raw sound data and this even works, but it requires a lot of memory and at least 30k iterations to run before it sounds like anything. Or perhaps you might try some other unexpected text based information, such as encodings of image data like JPEG in base64. Or other compressed data formats. Or perhaps you are more adventurous and want to try using what you've learned here with the previous sessions to add recurrent layers to a traditional convolutional model.\n<a name=\"future\"></a>\nFuture\nIf you're still here, then I'm really excited for you and to see what you'll create. By now, you've seen most of the major building blocks with neural networks. From here, you are only limited by the time it takes to train all of the interesting ideas you'll have. But there is still so much more to discover, and it's very likely that this entire course is already out of date, because this field just moves incredibly fast. In any case, the applications of these techniques are still fairly stagnant, so if you're here to see how your creative practice could grow with these techniques, then you should already have plenty to discover.\nI'm very excited about how the field is moving. Often, it is very hard to find labels for a lot of data in a meaningful and consistent way. 
But there is a lot of interesting stuff starting to emerge in the unsupervised models. Those are the models that just take data in, and the computer reasons about it. And even more interesting is the combination of general purpose learning algorithms. That's really where reinforcement learning is starting to shine. But that's for another course, perhaps.\n<a name=\"reading\"></a>\nReading\nIan J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks. 2014.\nhttps://arxiv.org/abs/1406.2661\nIan J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. 2014.\nAlec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 2015.\nhttps://arxiv.org/abs/1511.06434\nEmily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus. \nDeep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. 2015.\narxiv.org/abs/1506.05751\nAnders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther. Autoencoding beyond pixels using a learned similarity metric. 2015.\nhttps://arxiv.org/abs/1512.09300\nVincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. Adversarially Learned Inference. 2016.\nhttps://arxiv.org/abs/1606.00704\nIlya Sutskever, James Martens, and Geoffrey Hinton. Generating Text with Recurrent Neural Networks, ICML 2011. \nA. Graves. Generating sequences with recurrent neural networks. In Arxiv preprint, arXiv:1308.0850, 2013.\nT. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in\nNeural Information Processing Systems, pages 3111–3119, 2013.\nJ. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. 
Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12, 2014.\nYoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush. Character-Aware Neural Language Models. 2015.\nhttps://arxiv.org/abs/1508.06615"
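The base64 suggestion in the Extensions section can be sketched with the standard library alone. This is a minimal sketch, not part of the course code: it turns arbitrary raw bytes (a JPEG, a compressed file, anything) into newline-wrapped base64 text that a character-level model could train on, and shows the inverse so generated text can be decoded back into bytes.

```python
import base64


def bytes_to_training_text(raw: bytes, line_width: int = 76) -> str:
    """Encode arbitrary bytes as base64 text, wrapped into fixed-width
    lines so a character-level model also sees some newline structure."""
    encoded = base64.b64encode(raw).decode("ascii")
    return "\n".join(encoded[i:i + line_width]
                     for i in range(0, len(encoded), line_width))


def training_text_to_bytes(text: str) -> bytes:
    """Invert the encoding: strip line breaks and decode back to bytes."""
    return base64.b64decode(text.replace("\n", ""))


# Round trip: anything the model eventually generates in this alphabet
# can be decoded back into raw bytes the same way.
sample = bytes(range(256))
assert training_text_to_bytes(bytes_to_training_text(sample)) == sample
```

The small base64 alphabet (64 symbols plus padding and newline) keeps the model's output space tiny compared to raw bytes; the trade-off is a 4/3 blow-up in sequence length, so expect correspondingly longer training runs.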
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/messy-consortium/cmip6/models/sandbox-1/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-1\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon-oceanic waters treatment in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how treatment of isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, e.g. 50 km (Equator)-100 km or 0.1-0.5 degrees, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and comment on the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z* vertical coordinate in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momentum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momentum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different from that of active tracers ? If so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean*\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean*\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean*\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean*\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean*\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
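Every code cell in the ES-DOC notebook above repeats the same two-call fill-in pattern: `DOC.set_id(...)` selects a CMIP6 property, then `DOC.set_value(...)` records a choice for it. A minimal, self-contained sketch of that pattern follows; the `Doc` class here is a hypothetical stand-in, since the real `DOC` object is supplied by the ES-DOC notebook tooling and validates values against the CMIP6 controlled vocabulary.

```python
# Hypothetical stand-in for the ES-DOC DOC object (assumption: the real
# object is provided by the ES-DOC environment and validates choices).
class Doc:
    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, prop_id):
        # Select which CMIP6 property the next set_value() call fills in.
        self._current_id = prop_id

    def set_value(self, value):
        # Record the chosen value for the currently selected property.
        self.properties[self._current_id] = value

DOC = Doc()
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
DOC.set_value("Harmonic")  # one of the "Valid Choices" listed above
```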
diging/methods
|
1.7. Latent Dirichlet Allocation/1.7. Finding concepts in texts - Latent Dirichlet Allocation.ipynb
|
gpl-3.0
|
[
"%pylab inline",
"1.7. Finding concepts in texts - Latent Dirichlet Allocation\nLatent Semantic Analysis provided a powerful way to begin interrogating relationships among texts.\nIn this notebook we use the gensim implementation of Online LDA (Hoffman et al. 2010), an alternative to the typical Gibbs-sampling MCMC approach.",
"import nltk\nfrom tethne.readers import zotero\nimport matplotlib.pyplot as plt\nfrom nltk.corpus import stopwords\nimport gensim\nimport networkx as nx\nimport pandas as pd\n\nfrom collections import defaultdict, Counter\n\nwordnet = nltk.WordNetLemmatizer()\nstemmer = nltk.SnowballStemmer('english')\nstoplist = stopwords.words('english')\n\n\ntext_root = '../data/EmbryoProjectTexts/files'\nzotero_export_path = '../data/EmbryoProjectTexts'\n\ncorpus = nltk.corpus.PlaintextCorpusReader(text_root, 'https.+')\nmetadata = zotero.read(zotero_export_path, index_by='link', follow_links=False)\n\ndef normalize_token(token):\n \"\"\"\n Convert token to lowercase and lemmatize using WordNet.\n\n Parameters\n ----------\n token : str\n\n Returns\n -------\n token : str\n \"\"\"\n return wordnet.lemmatize(token.lower())\n\ndef filter_token(token):\n \"\"\"\n Evaluate whether or not to retain ``token``.\n\n Parameters\n ----------\n token : str\n\n Returns\n -------\n keep : bool\n \"\"\"\n token = token.lower()\n return token not in stoplist and token.isalpha() and len(token) > 2",
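To see what `normalize_token` and `filter_token` do without the NLTK dependencies, here is a simplified, stdlib-only sketch: the inline `STOPLIST` and plain lowercasing are stand-ins for NLTK's English stopword list and the WordNet lemmatizer.

```python
# Simplified stand-ins for the NLTK-backed helpers (assumptions noted):
# - STOPLIST replaces stopwords.words('english')
# - lowercasing replaces WordNet lemmatization
STOPLIST = {'the', 'of', 'and', 'in', 'is', 'a'}

def normalize_token(token):
    return token.lower()

def filter_token(token):
    token = token.lower()
    # Keep alphabetic, non-stopword tokens longer than two characters.
    return token not in STOPLIST and token.isalpha() and len(token) > 2

raw = ['The', 'embryo', 'of', '1953', 'is', 'studied']
tokens = [normalize_token(t) for t in raw if filter_token(t)]
print(tokens)  # ['embryo', 'studied']
```

Stopwords and the numeral are dropped; the surviving tokens are normalized.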
"We will represent our documents as a list of lists. Each sub-list contains tokens in the document.",
"documents=[[normalize_token(token) \n for token in corpus.words(fileids=[fileid])\n if filter_token(token)]\n for fileid in corpus.fileids()]\n\nyears = [metadata[fileid].date for fileid in corpus.fileids()]",
"Further filtering\nLDA in Python is a bit computationally expensive, so anything we can do to cut down on \"noise\" will help. Let's take a look at wordcounts and documentcounts to see whether we can narrow in on more useful terms.",
"wordcounts = nltk.FreqDist([token for document in documents for token in document])\n\nwordcounts.plot(20)\n\ndocumentcounts = nltk.FreqDist([token for document in documents for token in set(document)])\n\ndocumentcounts.plot(80)",
"Here we filter the tokens in each document, preserving the shape of the corpus.",
"filtered_documents = [[token for token in document \n if wordcounts[token] < 2000\n and 1 < documentcounts[token] < 350]\n for document in documents]",
"It's easier to compute over integers, so we use a Dictionary to create a mapping between words and their integer/id representation.",
"dictionary = gensim.corpora.Dictionary(filtered_documents)",
"The doc2bow() converts a document (series of tokens) into a bag-of-words representation.",
"documents_bow = [dictionary.doc2bow(document) for document in filtered_documents]",
"We're ready to fit the model! We pass our BOW-transformed documents, our dictionary, and the number of topics. update_every=0 disables an \"online\" feature in the sampler (used for very very large corpora), and passes=20 tells the sampler to pass over the whole corpus 20 times.",
"model = gensim.models.LdaModel(documents_bow, \n id2word=dictionary,\n num_topics=20, \n update_every=0,\n passes=20)\n\nfor i, topic in enumerate(model.print_topics(num_topics=20, num_words=5)):\n print i, ':', topic\n\ndocuments_lda = model[documents_bow]\n\ndocuments_lda[6]\n\ntopic_counts = defaultdict(Counter)\nfor year, document in zip(years, documents_lda):\n for topic, representation in document:\n topic_counts[topic][year] += 1.\n\ntopics_over_time = pd.DataFrame(columns=['Topic', 'Year', 'Count'])\n\ni = 0\nfor topic, yearcounts in topic_counts.iteritems():\n for year, count in yearcounts.iteritems():\n topics_over_time.loc[i] = [topic, year, count]\n i += 1\n\ntopics_over_time\n\ntopic_0_over_time = topics_over_time[topics_over_time.Topic == 0]\n\nplt.bar(topic_0_over_time.Year, topic_0_over_time.Count)\nplt.ylabel('Number of documents')\nplt.show()\n\nfrom scipy.spatial import distance\n\ndistance.cosine"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/building_production_ml_systems/solutions/1_training_at_scale.ipynb
|
apache-2.0
|
[
"Training at scale with AI Platform Training Service\nLearning Objectives:\n 1. Learn how to organize your training code into a Python package\n 1. Train your model using cloud infrastructure via Google Cloud AI Platform Training Service\n 1. (optional) Learn how to run your training package using Docker containers and push training Docker images on a Docker registry\nIntroduction\nIn this notebook we'll make the jump from training locally, to do training in the cloud. We'll take advantage of Google Cloud's AI Platform Training Service. \nAI Platform Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.",
"from google import api_core\nfrom google.cloud import bigquery",
"Change your project name and bucket name in the cell below if necessary.",
"# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\nOUTDIR = f\"gs://{BUCKET}/taxifare/data\"\n\n%env PROJECT=$PROJECT\n%env BUCKET=$BUCKET\n%env REGION=$REGION\n%env OUTDIR=$OUTDIR\n%env TFVERSION=2.5",
"Confirm below that the bucket is regional and its region equals to the specified region:",
"%%bash\ngsutil ls -Lb gs://$BUCKET | grep \"gs://\\|Location\"\necho $REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set ai_platform/region $REGION",
"Create BigQuery tables\nIf you have not already created a BigQuery dataset for our data, run the following cell:",
"bq = bigquery.Client(project=PROJECT)\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\ntry:\n bq.create_dataset(dataset)\n print(\"Dataset created\")\nexcept api_core.exceptions.Conflict:\n print(\"Dataset already exists\")",
"Let's create a table with 1 million examples.\nNote that the order of columns is exactly what was in our CSV files.",
"%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0",
"Make the validation dataset be 1/10 the size of the training dataset.",
"%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_valid_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0",
"Export the tables as CSV files",
"%%bash\n\necho \"Deleting current contents of $OUTDIR\"\ngsutil -m -q rm -rf $OUTDIR\n\necho \"Extracting training data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_training_data \\\n $OUTDIR/taxi-train-*.csv\n\necho \"Extracting validation data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_valid_data \\\n $OUTDIR/taxi-valid-*.csv\n\ngsutil ls -l $OUTDIR\n\n!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2",
"Make code compatible with AI Platform Training Service\nIn order to make our code compatible with AI Platform Training Service we need to make the following changes:\n\nUpload data to Google Cloud Storage \nMove code into a trainer Python package\nSubmit training job with gcloud to train on AI Platform\n\nUpload data to Google Cloud Storage (GCS)\nCloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.\nTo do this run the notebook 0_export_data_from_bq_to_gcs.ipynb, which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:",
"!gsutil ls gs://$BUCKET/taxifare/data",
"Move code into a Python package\nThe first thing to do is to convert your training code snippets into a regular Python package.\nA Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.\nCreate the package directory\nOur package directory contains 3 files:",
"ls ./taxifare/trainer/",
"Paste existing code into model.py\nA Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file.\nIn the cell below, we write the contents of the cell into model.py packaging the model we \ndeveloped in the previous labs so that we can deploy it to AI Platform Training Service.",
"%%writefile ./taxifare/trainer/model.py\n\"\"\"Data prep, train and evaluate DNN model.\"\"\"\n\nimport datetime\nimport logging\nimport os\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow import feature_column as fc\nfrom tensorflow.keras import activations, callbacks, layers, models\n\nlogging.info(tf.version.VERSION)\n\n\nCSV_COLUMNS = [\n \"fare_amount\",\n \"pickup_datetime\",\n \"pickup_longitude\",\n \"pickup_latitude\",\n \"dropoff_longitude\",\n \"dropoff_latitude\",\n \"passenger_count\",\n \"key\",\n]\n\n# inputs are all float except for pickup_datetime which is a string\nSTRING_COLS = [\"pickup_datetime\"]\nLABEL_COLUMN = \"fare_amount\"\nDEFAULTS = [[0.0], [\"na\"], [0.0], [0.0], [0.0], [0.0], [0.0], [\"na\"]]\nDAYS = [\"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\"]\n\n\ndef features_and_labels(row_data):\n for unwanted_col in [\"key\"]:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label\n\n\ndef load_dataset(pattern, batch_size, num_repeat):\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=pattern,\n batch_size=batch_size,\n column_names=CSV_COLUMNS,\n column_defaults=DEFAULTS,\n num_epochs=num_repeat,\n shuffle_buffer_size=1000000,\n )\n return dataset.map(features_and_labels)\n\n\ndef create_train_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=None)\n return dataset.prefetch(1)\n\n\ndef create_eval_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=1)\n return dataset.prefetch(1)\n\n\ndef parse_datetime(s):\n if not isinstance(s, str):\n s = s.numpy().decode(\"utf-8\")\n return datetime.datetime.strptime(s, \"%Y-%m-%d %H:%M:%S %Z\")\n\n\ndef euclidean(params):\n lon1, lat1, lon2, lat2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n return tf.sqrt(londiff * londiff + latdiff * latdiff)\n\n\ndef get_dayofweek(s):\n ts = parse_datetime(s)\n return 
DAYS[ts.weekday()]\n\n\n@tf.function\ndef dayofweek(ts_in):\n return tf.map_fn(\n lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in\n )\n\n\n@tf.function\ndef fare_thresh(x):\n return 60 * activations.relu(x)\n\n\ndef transform(inputs, numeric_cols, nbuckets):\n # Pass-through columns\n transformed = inputs.copy()\n del transformed[\"pickup_datetime\"]\n\n feature_columns = {\n colname: fc.numeric_column(colname) for colname in numeric_cols\n }\n\n # Scaling longitude from range [-70, -78] to [0, 1]\n for lon_col in [\"pickup_longitude\", \"dropoff_longitude\"]:\n transformed[lon_col] = layers.Lambda(\n lambda x: (x + 78) / 8.0, name=f\"scale_{lon_col}\"\n )(inputs[lon_col])\n\n # Scaling latitude from range [37, 45] to [0, 1]\n for lat_col in [\"pickup_latitude\", \"dropoff_latitude\"]:\n transformed[lat_col] = layers.Lambda(\n lambda x: (x - 37) / 8.0, name=f\"scale_{lat_col}\"\n )(inputs[lat_col])\n\n # Adding Euclidean dist (no need to be accurate: NN will calibrate it)\n transformed[\"euclidean\"] = layers.Lambda(euclidean, name=\"euclidean\")(\n [\n inputs[\"pickup_longitude\"],\n inputs[\"pickup_latitude\"],\n inputs[\"dropoff_longitude\"],\n inputs[\"dropoff_latitude\"],\n ]\n )\n feature_columns[\"euclidean\"] = fc.numeric_column(\"euclidean\")\n\n # hour of day from timestamp of form '2010-02-08 09:17:00+00:00'\n transformed[\"hourofday\"] = layers.Lambda(\n lambda x: tf.strings.to_number(\n tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32\n ),\n name=\"hourofday\",\n )(inputs[\"pickup_datetime\"])\n feature_columns[\"hourofday\"] = fc.indicator_column(\n fc.categorical_column_with_identity(\"hourofday\", num_buckets=24)\n )\n\n latbuckets = np.linspace(0, 1, nbuckets).tolist()\n lonbuckets = np.linspace(0, 1, nbuckets).tolist()\n b_plat = fc.bucketized_column(\n feature_columns[\"pickup_latitude\"], latbuckets\n )\n b_dlat = fc.bucketized_column(\n feature_columns[\"dropoff_latitude\"], latbuckets\n )\n b_plon = 
fc.bucketized_column(\n feature_columns[\"pickup_longitude\"], lonbuckets\n )\n b_dlon = fc.bucketized_column(\n feature_columns[\"dropoff_longitude\"], lonbuckets\n )\n ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)\n dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)\n pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)\n feature_columns[\"pickup_and_dropoff\"] = fc.embedding_column(pd_pair, 100)\n\n return transformed, feature_columns\n\n\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model(nbuckets, nnsize, lr, string_cols):\n numeric_cols = set(CSV_COLUMNS) - {LABEL_COLUMN, \"key\"} - set(string_cols)\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype=\"float32\")\n for colname in numeric_cols\n }\n inputs.update(\n {\n colname: layers.Input(name=colname, shape=(), dtype=\"string\")\n for colname in string_cols\n }\n )\n\n # transforms\n transformed, feature_columns = transform(inputs, numeric_cols, nbuckets)\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)\n\n x = dnn_inputs\n for layer, nodes in enumerate(nnsize):\n x = layers.Dense(nodes, activation=\"relu\", name=f\"h{layer}\")(x)\n output = layers.Dense(1, name=\"fare\")(x)\n\n model = models.Model(inputs, output)\n lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)\n model.compile(optimizer=lr_optimizer, loss=\"mse\", metrics=[rmse, \"mse\"])\n\n return model\n\n\ndef train_and_evaluate(hparams):\n batch_size = hparams[\"batch_size\"]\n nbuckets = hparams[\"nbuckets\"]\n lr = hparams[\"lr\"]\n nnsize = hparams[\"nnsize\"]\n eval_data_path = hparams[\"eval_data_path\"]\n num_evals = hparams[\"num_evals\"]\n num_examples_to_train_on = hparams[\"num_examples_to_train_on\"]\n output_dir = hparams[\"output_dir\"]\n train_data_path = hparams[\"train_data_path\"]\n\n timestamp = datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\")\n savedmodel_dir = 
os.path.join(output_dir, \"export/savedmodel\")\n model_export_path = os.path.join(savedmodel_dir, timestamp)\n checkpoint_path = os.path.join(output_dir, \"checkpoints\")\n tensorboard_path = os.path.join(output_dir, \"tensorboard\")\n\n if tf.io.gfile.exists(output_dir):\n tf.io.gfile.rmtree(output_dir)\n\n model = build_dnn_model(nbuckets, nnsize, lr, STRING_COLS)\n logging.info(model.summary())\n\n trainds = create_train_dataset(train_data_path, batch_size)\n evalds = create_eval_dataset(eval_data_path, batch_size)\n\n steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)\n\n checkpoint_cb = callbacks.ModelCheckpoint(\n checkpoint_path, save_weights_only=True, verbose=1\n )\n tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)\n\n history = model.fit(\n trainds,\n validation_data=evalds,\n epochs=num_evals,\n steps_per_epoch=max(1, steps_per_epoch),\n verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch\n callbacks=[checkpoint_cb, tensorboard_cb],\n )\n\n # Exporting the model with default serving function.\n model.save(model_export_path)\n return history\n",
"Modify code to read data from and write checkpoint files to GCS\nIf you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.)\nThis is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both.\nWe specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file.",
"%%writefile taxifare/trainer/task.py\n\"\"\"Argument definitions for model training code in `trainer.model`.\"\"\"\n\nimport argparse\n\nfrom trainer import model\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--batch_size\",\n help=\"Batch size for training steps\",\n type=int,\n default=32,\n )\n parser.add_argument(\n \"--eval_data_path\",\n help=\"GCS location pattern of eval files\",\n required=True,\n )\n parser.add_argument(\n \"--nnsize\",\n help=\"Hidden layer sizes (provide space-separated sizes)\",\n nargs=\"+\",\n type=int,\n default=[32, 8],\n )\n parser.add_argument(\n \"--nbuckets\",\n help=\"Number of buckets to divide lat and lon with\",\n type=int,\n default=10,\n )\n parser.add_argument(\n \"--lr\", help=\"learning rate for optimizer\", type=float, default=0.001\n )\n parser.add_argument(\n \"--num_evals\",\n help=\"Number of times to evaluate model on eval data training.\",\n type=int,\n default=5,\n )\n parser.add_argument(\n \"--num_examples_to_train_on\",\n help=\"Number of examples to train on.\",\n type=int,\n default=100,\n )\n parser.add_argument(\n \"--output_dir\",\n help=\"GCS location to write checkpoints and export models\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_path\",\n help=\"GCS location pattern of train files containing eval URLs\",\n required=True,\n )\n parser.add_argument(\n \"--job-dir\",\n help=\"this model ignores this field, but it is required by gcloud\",\n default=\"junk\",\n )\n args = parser.parse_args()\n hparams = args.__dict__\n hparams.pop(\"job-dir\", None)\n\n model.train_and_evaluate(hparams)\n",
"Run trainer module package locally\nNow we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step.",
"%%bash\n\nEVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*\nTRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*\nOUTPUT_DIR=./taxifare-model\n\ntest ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR}\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n \npython3 -m trainer.task \\\n--eval_data_path $EVAL_DATA_PATH \\\n--output_dir $OUTPUT_DIR \\\n--train_data_path $TRAIN_DATA_PATH \\\n--batch_size 5 \\\n--num_examples_to_train_on 100 \\\n--num_evals 1 \\\n--nbuckets 10 \\\n--lr 0.001 \\\n--nnsize 32 8",
"Run your training package on Cloud AI Platform\nOnce the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:\n- jobid: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness\n- region: Cloud region to train in. See here for supported AI Platform Training Service regions\nThe arguments before -- \\ are for AI Platform Training Service.\nThe arguments after -- \\ are sent to our task.py.\nBecause this is on the entire dataset, it will take a while. You can monitor the job from the GCP console in the Cloud AI Platform section.",
"%%bash\n\n# Output directory and jobID\nOUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)\nJOBID=taxifare_$(date -u +%Y%m%d_%H%M%S)\necho ${OUTDIR} ${REGION} ${JOBID}\ngsutil -m rm -rf ${OUTDIR}\n\n# Model and training hyperparameters\nBATCH_SIZE=50\nNUM_EXAMPLES_TO_TRAIN_ON=100\nNUM_EVALS=100\nNBUCKETS=10\nLR=0.001\nNNSIZE=\"32 8\"\n\n# GCS paths\nGCS_PROJECT_PATH=gs://$BUCKET/taxifare\nDATA_PATH=$GCS_PROJECT_PATH/data\nTRAIN_DATA_PATH=$DATA_PATH/taxi-train*\nEVAL_DATA_PATH=$DATA_PATH/taxi-valid*\n\n#TODO 2\ngcloud ai-platform jobs submit training $JOBID \\\n --module-name=trainer.task \\\n --package-path=taxifare/trainer \\\n --staging-bucket=gs://${BUCKET} \\\n --python-version=3.7 \\\n --runtime-version=${TFVERSION} \\\n --region=${REGION} \\\n -- \\\n --eval_data_path $EVAL_DATA_PATH \\\n --output_dir $OUTDIR \\\n --train_data_path $TRAIN_DATA_PATH \\\n --batch_size $BATCH_SIZE \\\n --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \\\n --num_evals $NUM_EVALS \\\n --nbuckets $NBUCKETS \\\n --lr $LR \\\n --nnsize $NNSIZE ",
"(Optional) Run your training package using Docker container\nAI Platform Training also supports training in custom containers, allowing users to bring their own Docker containers with any pre-installed ML framework or algorithm to run on AI Platform Training. \nIn this last section, we'll see how to submit a Cloud training job using a customized Docker image. \nContainerizing our ./taxifare/trainer package involves 3 steps:\n\nWriting a Dockerfile in ./taxifare\nBuilding the Docker image\nPushing it to the Google Cloud container registry in our GCP project\n\nThe Dockerfile specifies\n1. How the container needs to be provisioned so that all the dependencies in our code are satisfied\n2. Where to copy our trainer Package in the container\n3. What command to run when the container is ran (the ENTRYPOINT line)",
"%%writefile ./taxifare/Dockerfile\nFROM gcr.io/deeplearning-platform-release/tf2-cpu.2-5:latest\n# TODO 3\n\nCOPY . /code\n\nWORKDIR /code\n\nENTRYPOINT [\"python3\", \"-m\", \"trainer.task\"]\n\nPROJECT_DIR = !cd ./taxifare &&pwd\nPROJECT_DIR = PROJECT_DIR[0]\nIMAGE_NAME = \"taxifare_training_container\"\nDOCKERFILE = f\"{PROJECT_DIR}/Dockerfile\"\nIMAGE_URI = f\"gcr.io/{PROJECT}/{IMAGE_NAME}\"\n\n%env PROJECT_DIR=$PROJECT_DIR\n%env IMAGE_NAME=$IMAGE_NAME\n%env DOCKERFILE=$DOCKERFILE\n%env IMAGE_URI=$IMAGE_URI\n\n!docker build $PROJECT_DIR -f $DOCKERFILE -t $IMAGE_URI\n\n!docker push $IMAGE_URI",
"Remark: If you prefer to build the container image from the command line, we have written a script for that ./taxifare/scripts/build.sh. This script reads its configuration from the file ./taxifare/scripts/env.sh. You can configure these arguments the way you want in that file. You can also simply type make build from within ./taxifare to build the image (which will invoke the build script). Similarly, we wrote the script ./taxifare/scripts/push.sh to push the Docker image, which you can also trigger by typing make push from within ./taxifare.\nTrain using a custom container on AI Platform\nTo submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:\n- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness\n- master-image-uri: The uri of the Docker image we pushed in the Google Cloud registry\n- region: Cloud region to train in. See here for supported AI Platform Training Service regions\nThe arguments before -- \\ are for AI Platform Training Service.\nThe arguments after -- \\ are sent to our task.py.\nYou can track your job and view logs using cloud console.",
"%%bash\n\n# Output directory and jobID\nOUTDIR=gs://${BUCKET}/taxifare/trained_model\nJOBID=taxifare_container_$(date -u +%Y%m%d_%H%M%S)\necho ${OUTDIR} ${REGION} ${JOBID}\ngsutil -m rm -rf ${OUTDIR}\n\n# Model and training hyperparameters\nBATCH_SIZE=50\nNUM_EXAMPLES_TO_TRAIN_ON=100\nNUM_EVALS=100\nNBUCKETS=10\nNNSIZE=\"32 8\"\n\n# AI-Platform machines to use for training\nMACHINE_TYPE=n1-standard-4\nSCALE_TIER=CUSTOM\n\n# GCS paths.\nGCS_PROJECT_PATH=gs://$BUCKET/taxifare\nDATA_PATH=$GCS_PROJECT_PATH/data\nTRAIN_DATA_PATH=$DATA_PATH/taxi-train*\nEVAL_DATA_PATH=$DATA_PATH/taxi-valid*\n\nIMAGE_NAME=taxifare_training_container\nIMAGE_URI=gcr.io/$PROJECT/$IMAGE_NAME\n\ngcloud ai-platform jobs submit training $JOBID \\\n --staging-bucket=gs://$BUCKET \\\n --region=$REGION \\\n --master-image-uri=$IMAGE_URI \\\n --master-machine-type=$MACHINE_TYPE \\\n --scale-tier=$SCALE_TIER \\\n -- \\\n --eval_data_path $EVAL_DATA_PATH \\\n --output_dir $OUTDIR \\\n --train_data_path $TRAIN_DATA_PATH \\\n --batch_size $BATCH_SIZE \\\n --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \\\n --num_evals $NUM_EVALS \\\n --nbuckets $NBUCKETS \\\n --nnsize $NNSIZE \n",
"Remark: If you prefer submitting your jobs for training on the AI Platform using the command line, we have written the ./taxifare/scripts/submit.sh for you (that you can also invoke using make submit from within ./taxifare). As the other scripts, it reads it configuration variables from ./taxifare/scripts/env.sh.\nCopyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/hadgem3-gc31-hm/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: HADGEM3-GC31-HM\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the component coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the component coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endorheic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum conservation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
piskvorky/gensim
|
docs/notebooks/pivoted_document_length_normalisation.ipynb
|
lgpl-2.1
|
[
"Pivoted Document Length Normalization\nBackground\nIn many cases, normalizing the tfidf weights for each term favors weight of terms of the documents with shorter length. The pivoted document length normalization scheme counters the effect of this bias for short documents by making tfidf independent of the document length.\nThis is achieved by tilting the normalization curve along the pivot point defined by user with some slope.\nRoughly following the equation:\npivoted_norm = (1 - slope) * pivot + slope * old_norm\nThis scheme is proposed in the paper Pivoted Document Length Normalization by Singhal, Buckley and Mitra.\nOverall this approach can in many cases help increase the accuracy of the model where the document lengths are hugely varying in the entire corpus.\nIntroduction\nThis guide demonstrates how to perform pivoted document length normalization.\nWe will train a logistic regression to distinguish between text from two different newsgroups.\nOur results will show that using pivoted document length normalization yields a better model (higher classification accuracy).",
"#\n# Download our dataset\n#\nimport gensim.downloader as api\nnws = api.load(\"20-newsgroups\")\n\n#\n# Pick texts from relevant newsgroups, split into training and test set.\n#\ncat1, cat2 = ('sci.electronics', 'sci.space')\n\n#\n# X_* contain the actual texts as strings.\n# Y_* contain labels, 0 for cat1 (sci.electronics) and 1 for cat2 (sci.space)\n#\nX_train = []\nX_test = []\ny_train = []\ny_test = []\n\nfor i in nws:\n if i[\"set\"] == \"train\" and i[\"topic\"] == cat1:\n X_train.append(i[\"data\"])\n y_train.append(0)\n elif i[\"set\"] == \"train\" and i[\"topic\"] == cat2:\n X_train.append(i[\"data\"])\n y_train.append(1)\n elif i[\"set\"] == \"test\" and i[\"topic\"] == cat1:\n X_test.append(i[\"data\"])\n y_test.append(0)\n elif i[\"set\"] == \"test\" and i[\"topic\"] == cat2:\n X_test.append(i[\"data\"])\n y_test.append(1)\n\nfrom gensim.parsing.preprocessing import preprocess_string\nfrom gensim.corpora import Dictionary\n\nid2word = Dictionary([preprocess_string(doc) for doc in X_train])\ntrain_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_train]\ntest_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_test]\n\nprint(len(X_train), len(X_test))\n\n# We perform our analysis on top k documents which is almost top 10% most scored documents\nk = len(X_test) // 10\n\nfrom gensim.sklearn_api.tfidf import TfIdfTransformer\nfrom sklearn.linear_model import LogisticRegression\nfrom gensim.matutils import corpus2csc\n\n# This function returns the model accuracy and indivitual document prob values using\n# gensim's TfIdfTransformer and sklearn's LogisticRegression\ndef get_tfidf_scores(kwargs):\n tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus)\n\n X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T\n X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T\n\n clf = LogisticRegression().fit(X_train_tfidf, y_train)\n\n model_accuracy = 
clf.score(X_test_tfidf, y_test)\n doc_scores = clf.decision_function(X_test_tfidf)\n\n return model_accuracy, doc_scores",
"Get TFIDF scores for corpus without pivoted document length normalisation",
"params = {}\nmodel_accuracy, doc_scores = get_tfidf_scores(params)\nprint(model_accuracy)\n\nimport numpy as np\n\n# Sort the document scores by their scores and return a sorted list\n# of document score and corresponding document lengths.\ndef sort_length_by_score(doc_scores, X_test):\n doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1])\n doc_leng = np.empty(len(doc_scores))\n\n ds = np.empty(len(doc_scores))\n\n for i, _ in enumerate(doc_scores):\n doc_leng[i] = len(X_test[_[0]])\n ds[i] = _[1]\n\n return ds, doc_leng\n\n\nprint(\n \"Normal cosine normalisation favors short documents as our top {} \"\n \"docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}\"\n .format(\n k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(), \n sort_length_by_score(doc_scores, X_test)[1].mean()\n )\n)",
"Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha.",
"best_model_accuracy = 0\noptimum_slope = 0\nfor slope in np.arange(0, 1.1, 0.1):\n params = {\"pivot\": 10, \"slope\": slope}\n\n model_accuracy, doc_scores = get_tfidf_scores(params)\n\n if model_accuracy > best_model_accuracy:\n best_model_accuracy = model_accuracy\n optimum_slope = slope\n\n print(\"Score for slope {} is {}\".format(slope, model_accuracy))\n\nprint(\"We get best score of {} at slope {}\".format(best_model_accuracy, optimum_slope))\n\nparams = {\"pivot\": 10, \"slope\": optimum_slope}\nmodel_accuracy, doc_scores = get_tfidf_scores(params)\nprint(model_accuracy)\n\nprint(\n \"With pivoted normalisation top {} docs have mean length of {:.3f} \"\n \"which is much closer to the corpus mean doc length of {:.3f}\"\n .format(\n k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(), \n sort_length_by_score(doc_scores, X_test)[1].mean()\n )\n)",
"Visualizing the pivoted normalization\nSince cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased.",
"%matplotlib inline\nimport matplotlib.pyplot as py\n\nbest_model_accuracy = 0\noptimum_slope = 0\n\nw = 2\nh = 2\nf, axarr = py.subplots(h, w, figsize=(15, 7))\n\nit = 0\nfor slope in [1, 0.2]:\n params = {\"pivot\": 10, \"slope\": slope}\n\n model_accuracy, doc_scores = get_tfidf_scores(params)\n\n if model_accuracy > best_model_accuracy:\n best_model_accuracy = model_accuracy\n optimum_slope = slope\n\n doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test)\n\n y = abs(doc_scores[:k, np.newaxis])\n x = doc_leng[:k, np.newaxis]\n\n py.subplot(1, 2, it+1).bar(x, y, width=20, linewidth=0)\n py.title(\"slope = \" + str(slope) + \" Model accuracy = \" + str(model_accuracy))\n py.ylim([0, 4.5])\n py.xlim([0, 3200])\n py.xlabel(\"document length\")\n py.ylabel(\"confidence score\")\n \n it += 1\n\npy.tight_layout()\npy.show()",
"The above histogram plot helps us visualize the effect of slope. For top k documents we have document length on the x axis and their respective scores of belonging to a specific class on y axis.\nAs we decrease the slope the density of bins is shifted from low document length (around ~250-500) to over ~500 document length. This suggests that the positive biasness which was seen at slope=1 (or when regular tfidf was used) for short documents is now reduced. We get the optimum slope or the max model accuracy when slope is 0.2.\nConclusion\nUsing pivoted document normalization improved the classification accuracy significantly:\n\nBefore (slope=1, identical to default cosine normalization): 0.9682\nAfter (slope=0.2): 0.9771"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
shareactorIO/pipeline
|
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 1 - Hello Spark/Lab 1 - Hello Spark - Student.ipynb
|
apache-2.0
|
[
"Lab 1 - Hello Spark\nThis Lab will show you how to work with Apache Spark using Python\nStep 1 - Working with Spark Context\nCheck what version of Apache Spark is setup within this lab notebook.\nIn step 1 - Invoke the spark context and extract what version of the spark driver application is running.\nType\nsc.version",
"#Step 1 - Check spark version\n#Type:\n#sc.version\n\n",
"Step 2 - Working with Resilient Distributed Datasets\nCreate multiple RDDs and return results\nIn Step 2 - Create RDD with numbers 1 to 10,\nExtract first line,\nExtract first 5 lines,\nCreate RDD with string \"Hello Spark\",\nExtract first line.",
"#Step 2 - Create RDD of Numbers 1-10\n\n#Type: \n#x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n#x_nbr_rdd = sc.parallelize(x)\n\n\n\n#Step 2 - Extract first line\n\n#Type:\n#x_nbr_rdd.first()\n\n\n\n#Step 2 - Extract first 5 lines\n\n#Type:\n#x_nbr_rdd.take(5)\n\n\n\n#Step 2 - Create RDD String, Extract first line\n\n#Type:\n#y = [\"Hello Spark!\"]\n#y_str_rdd = sc.parallelize(y)\n#y_str_rdd.first()\n\n",
"Step 3 - Working with Strings\nIn Step 3 - Create a larger string of words that include \"Hello\" and \"Spark\",\nMap the string into an RDD as a collection of words,\nextract the count of words \"Hello\" and \"Spark\" found in your RDD.",
"#Step 3 - Create RDD String, Extract first line\n\n#type:\n#z = [\"Hello World!, Hello Universe!, I love Spark\"]\n#z_str_rdd = sc.parallelize(z)\n#z_str_rdd.first()\n\n\n\n#Step 3 - Create RDD with object for each word, Extract first 7 words\n\n#type:\n#z_str2_rdd = z_str_rdd.flatMap(lambda line: line.split(\" \"))\n#z_str2_rdd.take(7)\n\n\n\n#Step 3 - Count of \"Hello\" words\n\n#type:\n#z_str3_rdd = z_str2_rdd.filter(lambda line: \"Hello\" in line) \n#print \"The count of words 'Hello' in: \" + repr(z_str_rdd.first())\n#print \"Is: \" + repr(z_str3_rdd.count())\n\n\n\n#Step 3 - Count of \"Spark\" words\n#type\n#z_str4_rdd = z_str2_rdd.filter(lambda line: \"Spark\" in line) \n#print \"The count of words 'Spark' in: \" + repr(z_str_rdd.first())\n#print \"Is: \" + repr(z_str4_rdd.count())\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
feststelltaste/software-analytics
|
demos/20180314_JavaLand_Bruehl/Strategic Redesign Minimal.ipynb
|
gpl-3.0
|
[
"Analyse der Webanwendung \"PetClinic\"\nPriorisierung von Umbauarbeiten nach Nutzungsgrad\nTechnische Vorbereitung: Laden der Analysewerkzeuge",
"import pandas as pd\nimport py2neo\ngraph = py2neo.Graph(password=\"password\")",
"Aggregation der Messwerte nach Subdomänen",
"query = \"\"\"\nMATCH \n (t:Type)-[:BELONGS_TO]->(s:Subdomain),\n (t)-[:HAS_CHANGE]->(ch:Change),\n (t)-[:HAS_MEASURE]->(co:Coverage),\n (t)-[:DECLARES]->(m:Method)\nOPTIONAL MATCH\n (t)-[:HAS_BUG]->(b:BugInstance) \nRETURN \n s.name as ASubdomain,\n COUNT(DISTINCT t) as Types,\n COUNT(DISTINCT ch) as Changes,\n AVG(co.ratio) as Coverage,\n COUNT(DISTINCT b) as Bugs,\n SUM(DISTINCT m.lastLineNumber) as Lines\nORDER BY Coverage ASC, Bugs DESC\n\"\"\"",
"Ergebnisse nach Subdomänen",
"result = pd.DataFrame(graph.data(query))\nresult",
"Umbenennung nach geläufigen Begriffen",
"plot_data = result.copy()\nplot_data = plot_data.rename(\n columns= {\n \"Changes\" : \"Investment\",\n \"Coverage\" : \"Utilization\",\n \"Lines\" : \"Size\"})\nplot_data\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef plot_portfolio_diagramm(plot_data, name):\n x = \"Investment\"\n y = \"Utilization\"\n \n ax = plot_data.plot.scatter(\n x,\n y,\n s=plot_data.Size,\n alpha=0.7,\n title=\"Return on Investment ({})\".format(name),\n figsize=[10,7],\n fontsize=14\n )\n\n ax.title.set_size(24)\n ax.title\n plt.xlabel(x, fontsize=18)\n plt.ylabel(y, fontsize=18)\n \n ax.plot(\n [plot_data[x].max()/2, plot_data[x].max()/2],\n [0, plot_data[y].max()], color='k', linestyle='--', linewidth=0.6)\n ax.plot(\n [0, plot_data[x].max()],\n [plot_data[y].max()/2,plot_data[y].max()/2], color='k', linestyle='--', linewidth=0.6)\n ax.text(plot_data[x].max()*1/4, plot_data[y].max()*3/4, \"Success\", ha=\"center\", fontsize=24)\n ax.text(plot_data[x].max()*3/4, plot_data[y].max()*3/4, \"Beware\", ha=\"center\", fontsize=24)\n ax.text(plot_data[x].max()*1/4, plot_data[y].max()*1/4, \"Watch\", ha=\"center\", fontsize=24)\n ax.text(plot_data[x].max()*3/4, plot_data[y].max()*1/4, \"Failure\", ha=\"center\", fontsize=24)",
"Vier-Felder-Matrix zur Priorisierung nach Subdomänen",
"plot_portfolio_diagramm(plot_data, \"Subdomains\")",
"Aggregation der Messwerte nach technischen Aspekten",
"query = \"\"\"\nMATCH \n (t:Type)-[:IS_A]->(ta:TechnicalAspect),\n (t)-[:HAS_CHANGE]->(ch:Change),\n (t)-[:HAS_MEASURE]->(co:Coverage),\n (t)-[:DECLARES]->(m:Method)\nOPTIONAL MATCH\n (t)-[:HAS_BUG]->(b:BugInstance) \nRETURN \n ta.name as ATechnicalAspect,\n COUNT(DISTINCT t) as Types,\n COUNT(DISTINCT ch) as Investment,\n AVG(co.ratio) as Utilization,\n COUNT(DISTINCT b) as Bugs,\n SUM(DISTINCT m.lastLineNumber) as Size\nORDER BY Utilization ASC, Bugs DESC\n\"\"\"",
"Ergebnisse nach technischen Aspekten",
"result = pd.DataFrame(graph.data(query))\nresult",
"Vier-Felder-Matrix zur Priorisierung nach technischen Aspekten",
"plot_portfolio_diagramm(result, \"Technical Aspects\")",
"Ende Demo"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
KshitijT/fundamentals_of_interferometry
|
2_Mathematical_Groundwork/2_3_fourier_series.ipynb
|
gpl-2.0
|
[
"<span style=\"background-color:red\">BVH:MC:Author needs to add figure labels</span>\n\n\nOutline\nGlossary\n2. Mathematical Groundwork\nPrevious: 2.2 Important functions\nNext: 2.4 The Fourier Transform\n\n\n\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"from IPython.display import HTML\nfrom ipywidgets import interact\nHTML('../style/code_toggle.html')",
"2.3. Fourier Series<a id='math:sec:fourier_series'></a>\nWhile Fourier series are not immediately required to understand the required calculus for this book, they are closely connected to the Fourier transform, which is an essential tool. Moreover, we noticed a few times that the principle of the harmonic analysis or harmonic decomposition is essential- despite its simplicity - often not fully understood. We hence give a very brief summary, not caring about existence questions.\n2.3.1 Definition <a id='math:sec:fourier_series_definition'></a>\nThe Fourier series of a function $f: \\mathbb{R} \\rightarrow \\mathbb{R}$ with real coefficients is defined as\n<a id='math:eq:3_001'></a><!--\\label{math:eq:3_001}-->$$\n f_{\\rm F}(x) \\,=\\, \\frac{1}{2}c_0+\\sum_{m = 1}^{\\infty}c_m \\,\\cos(mx)+\\sum_{m = 1}^{\\infty}s_m \\,\\sin(mx),\n$$\nwith the Fourier coefficients $c_m$ and $s_m$\n<a id='math:eq:3_002'></a><!--\\label{math:eq:3_002}-->$$\n \\left( c_0 \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(x)\\,dx \\right)\\\n c_m \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(x)\\,\\cos(mx)\\,dx \\qquad m \\in \\mathbb{N_0}\\\n s_m \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(x)\\,\\sin(mx)\\,dx \\qquad m \\in \\mathbb{N_0}.\\{\\rm }\n$$\nIf $f_{\\rm F}$ exists, it is identical to $f$ in all points of continuity. For functions which are periodic with a period of $2\\pi$ the Fourier series converges. 
Hence, for continuous periodic function with a period of $2\\pi$ the Fourier series converges and $f_{\\rm F}=f$.\nThe Fourier series of a function $f: \\mathbb{R} \\rightarrow \\mathbb{R}$ with imaginary coefficients is defined as\n<a id='math:eq:3_003'></a><!--\\label{math:eq:3_003}-->$$\n f_{\\rm IF}(x) \\,=\\, \\sum_{m = -\\infty}^{\\infty}a_m \\,e^{\\imath mx},$$\nwith the Fourier coefficients $a_m$\n<a id='math:eq:3_004'></a><!--\\label{math:eq:3_004}-->$$\n a_m \\,=\\, \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}f(x)e^{-\\imath mx}\\,dx\\qquad \\forall m\\in\\mathbb{Z}.$$\nThe same convergence criteria apply and one realisation can be transformed to the other. Making use of Euler's formula ➞ <!--\\ref{math:sec:eulers_formula}-->, one gets\n<a id='math:eq:3_005'></a><!--\\label{math:eq:3_005}-->$$\n\\begin{split}\na_m \\,&=\\, \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}f(x)\\,[\\cos(mx)-\\imath \\,\\sin(mx)]\\,dx\n&=\\,\\left {\n \\begin{array}{lll}\n \\frac{1}{2} (c_m+\\imath s_m) & {\\rm for} & m < 0\\\n \\frac{1}{2} c_m & {\\rm for} & m = 0\\\n \\frac{1}{2} (c_m-\\imath\\,s_m) & {\\rm for} & m > 0\\\n\\end{array} \\right. \n\\end{split},\n$$\nand accordingly, $\\forall m \\in \\mathbb{N_0}$, \n<a id='math:eq:2_006'></a><!--\\label{math:eq:2_006}-->$$\n\\\n\\begin{split}\nc_m \\,&=\\, a_m+a_{-m}\\\ns_m \\,&=\\, \\imath\\,(a_m-a_{-m})\\\n\\end{split}.\n$$\nThe concept Fourier series can be expanded to a base interval of a period T instead of $2\\pi$ by substituting $x$ with $x = \\frac{2\\pi}{T}t$.\n<a id='math:eq:3_007'></a><!--\\label{math:eq:3_007}-->$$\n g_{\\rm F}(t) = f_{\\rm F}(\\frac{2\\pi}{T}t) \\,=\\, \\frac{1}{2}c_0+\\sum_{m = 1}^{\\infty}c_m \\,\\cos(m\\frac{2\\pi}{T}t)+\\sum_{m = 1}^{\\infty}s_m \\,\\sin(m\\frac{2\\pi}{T}t)\n$$\nwhere \n<a id='math:eq:3_008'></a><!--\\label{math:eq:3_008}-->$$\n c_0 \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(\\frac{2\\pi}{T}t)\\,dx \\,=\\, \\frac{2}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}g(t)\\,dt. 
\\ \nc_m \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(\\frac{2\\pi}{T}t)\\,\\cos(m\\frac{2\\pi}{T}t)\\,dx \\,=\\, \\frac{2}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}g(t)\\,\\cos(m\\frac{2\\pi}{T}t)\\,dt \\qquad m \\in \\mathbb{N_0}\\\ns_m \\,=\\,\\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(\\frac{2\\pi}{T}t)\\,\\sin(m\\frac{2\\pi}{T}t)\\,dx \\,=\\, \\frac{2}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}g(t)\\,\\sin(m\\frac{2\\pi}{T}t)\\,dt \\qquad m \\in \\mathbb{N_0}\\\n$$\nor\n<a id='math:eq:3_009'></a><!--\\label{math:eq:3_010}-->$$\n g_{\\rm IF}(t) = f_{\\rm IF}(\\frac{2\\pi}{T}t) \\,=\\, \\sum_{m = -\\infty}^{\\infty}a_m \\,e^{\\imath m\\frac{2\\pi}{T}t}\n$$\n<a id='math:eq:3_011'></a><!--\\label{math:eq:3_011}-->$$\n a_m \\,=\\, \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}f(\\frac{2\\pi}{T}t)e^{-\\imath m\\frac{2\\pi}{T}t}\\,dx\\,=\\,\\frac{1}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}g(t)e^{-\\imath m\\frac{2\\pi}{T}t}\\,dt \\qquad\\forall m\\in\\mathbb{Z}.$$\nThe series again converges under the same criteria as before and the relations between the coefficients of the complex or real Fourier coefficients from equation equation <!--\\ref{math:eq:3_005}-->stay valid.\nOne nice example is the complex, scaled Fourier series of the scaled shah function ➞ <!--\\ref{math:sec:shah_function}--> $III_{T^{-1}}(x)\\,=III\\left(\\frac{x}{T}\\right)\\,=\\sum_{l=-\\infty}^{+\\infty} T \\delta\\left(x-l T\\right)$. Obviously, the period of this function is $T$. 
The Fourier coefficients (matched to a period of $T$) is calculated as\n<a id='math:eq:3_012'></a><!--\\label{math:eq:3_012}-->$$\n\\begin{split}\n a_m \\,&= \\,\\frac{1}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}\\left(\\sum_{l=-\\infty}^{+\\infty}T\\delta\\left(x-l T\\right)\\right)e^{-\\imath m \\frac{2\\pi}{T} x}\\,dx\\\n &=\\,\\frac{1}{T} \\int_{-\\frac{T}{2}}^{\\frac{T}{2}} T \\delta\\left(x\\right)e^{-\\imath m \\frac{2\\pi}{T} x}\\,dx\\\n &=\\,1\n \\end{split}\n \\forall m\\in\\mathbb{Z}.$$\nIt follows that\n<a id='math:eq:3_013'></a><!--\\label{math:eq:3_013}-->$$\n\\begin{split}\n III_{T^{-1}}(x)\\,=III\\left(\\frac{x}{T}\\right)\\,=\\,\\sum_{m=-\\infty}^{+\\infty} e^{\\imath m\\frac{2\\pi}{T} x t}\n \\end{split}\n .$$\n2.3.2 Example <a id='math:sec:fourier_series_example'></a>\nNext we will quickly demonstrate how to decompose a signal into its Fourier series. An easy way to implement this numerically is to use the trapezoidal rule to approximate the integral. Thus we start by defining a function that computes the coefficients of the Fourier series using the complex definition <!--\\ref{math:eq:3_011}-->",
"def FS_coeffs(x, m, func, T=2.0*np.pi):\n \"\"\"\n Computes Fourier series (FS) coeffs of func\n Input: \n x = input vector at which to evaluate func\n m = the order of the coefficient\n func = the function to find the FS of\n T = the period of func (defaults to 2 pi)\n \"\"\"\n # Evaluate the integrand\n am_int = func(x)*np.exp(-1j*2.0*m*np.pi*x/T)\n # Use trapezoidal integration to get the coefficient\n am = np.trapz(am_int,x)\n return am/T",
"That should be good enough for our purposes here. Next we create a function to sum the Fourier series.",
"def FS_sum(x, m, func, period=None):\n # If no period is specified use entire domain\n if period is None:\n period = np.abs(x.max() - x.min())\n \n # Evaluate the coefficients and sum the series\n f_F = np.zeros(x.size, dtype=np.complex128)\n for i in xrange(-m,m+1):\n am = FS_coeffs(x, i, func, T=period)\n f_F += am*np.exp(2.0j*np.pi*i*x/period) \n return f_F ",
"Let's see what happens if we decompose a square wave.",
"# define square wave function\ndef square_wave(x):\n I = np.argwhere(np.abs(x) <= 0.5)\n tmp = np.zeros(x.size)\n tmp[I] = 1.0\n return tmp\n\n# Set domain and compute square wave\nN = 250\nx = np.linspace(-1.0,1.0,N)\n\n# Compute the FS up to order m\nm = 10\nsw_F = FS_sum(x, m, square_wave, period=2.0)\n\n# Plot result\nplt.figure(figsize=(15,5))\nplt.plot(x, sw_F.real, 'g', label=r'$ Fourier \\ series $')\nplt.plot(x, square_wave(x), 'b', label=r'$ Square \\ wave $')\nplt.title(r\"$FS \\ decomp \\ of \\ square \\ wave$\",fontsize=20)\nplt.xlabel(r'$x$',fontsize=18)\nplt.ylim(-0.05,1.5)\nplt.legend()",
"Figure 2.8.1:\nAs can be seen from the figure, the Fourier series approximates the square wave. However at such a low order (i.e. $m = 10$) it doesn't do a very good job. Actually an infinite number of Fourier series coefficients are required to fully capture a square wave. Below is an interactive demonstration that allows you to vary the parameters on the Fourier series decomposition. Note in particular what happens if we make the period too small. Also feel free to apply it to functions other than the square wave (but make sure to adjust the domain accordingly.",
"def inter_FS(x,m,func,T):\n f_F = FS_sum(x, m, func, period=T)\n plt.plot(x,f_F.real,'b')\n plt.plot(x,func(x),'g')\n \n\ninteract(lambda m,T:inter_FS(x=np.linspace(-1.0,1.0,N),m=m,func=square_wave,T=T),\n m=(5,100,1),T=(0,2*np.pi,0.5)) and None",
"Figure 2.8.2:\n\n\nNext: 2.4 The Fourier Transform"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
evanmiltenburg/python-for-text-analysis
|
Chapters/Chapter 15 - Off to analyzing text.ipynb
|
apache-2.0
|
[
"Chapter 15: Off to analyzing text\nWay to go! You have already learned a lot of essential components of the Python language. Being able to deal with data structures, import packages, build your own functions and operate with files is not only essential for most tasks in Python, but also a prerequisite for text analysis. We have applied some common preprocessing steps like casefolding/lowercasing, punctuation removal, and stemming/lemmatization. Did you know that there are some very useful NLP packages and modules that do some of these steps? One that is often used in text analysis is the Python package NLTK (the Natural Language Toolkit).\nAt the end of this chapter, you will be able to:\n\nhave an idea of the NLP tasks that constitute an NLP pipeline\nuse the functions of the NLTK module to manipulate the content of files for NLP purposes (e.g. sentence splitting, tokenization, POS-tagging, and lemmatization);\ndo nesting of multiple for-loops or files\n\nMore NLP software for Python:\n\nNLTK\nSpaCy\nStanford CoreNLP\nAbout Python NLP libraries\n\nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1 A short intro to text processing\nThere are many aspects of text we can (try to) analyze. Commonly used analyses conducted in Natural Language Processing (NLP) are for instance:\n\ndetermining the part of speech of words in a text (verb, noun, etc.)\nanalyzing the syntactic relations between words and phrases in a sentence (i.e., syntactic parsing)\nanalyzing which entities (people, organizations, locations) are mentioned in a text\n\n...and many more. Each of these aspects is addressed within its own NLP task. \nThe NLP pipeline\nUsually, these tasks are carried out sequentially because they depend on each other. For instance, we need to first tokenize the text (split it into words) in order to be able to assign part-of-speech tags to each word. This sequence is often called an NLP pipeline. 
For example, a general pipeline could consist of the components shown below (taken from here) You can see the NLP pipeline of the NewsReader project here. (you can ignore the middle part of the picture, and focus on the blue and green boxes in the outer row).\n<img src='images/nlp-pipeline.jpg'>\nIn this chapter we will look into four simple NLP modules that are nevertheless very common in NLP: tokenization, sentence splitting, lemmatization and POS tagging. \nThere are also more advanced processing modules out there - feel free to do some research yourself :-) \n2 The NLTK package\nNLTK (Natural Language Processing Toolkit) is a module we can use for most fundamental aspects of natural language processing. There are many more advanced approaches out there, but it is a good way of getting started. \nHere we will show you how to use it for tokenization, sentence splitting, POS tagging, and lemmatization. These steps are necessary processing steps for most NLP tasks. \nWe will first give you an overview of all tasks and then delve into each of them in more detail. \nBefore we can use NLTK for the first time, we have to make sure it is downloaded and installed on our computer (some of you may have already done this). \nTo install NLTK, please try to run the following two cells. If this does not work, please try and follow the documentation. If you don't manage to get this to work, please ask for help.",
"%%bash\npip install nltk",
"Once you have downloaded the NLTK book, you do not need to run the download again. The next time you use NLTK, it is sufficient to import it.",
"# downloading nltk\n\nimport nltk\nnltk.download('book')",
"Now that we have installed and downloaded NLTK, let's look at an example of a simple NLP pipeline. In the following cell, you can observe how we tokenize raw text into tokens and sentences, perform part of speech tagging and lemmatize some of the tokens. Don't worry about the details just yet - we will go through them step by step.",
"text = \"This example sentence is used for illustrating some basic NLP tasks. Language is awesome!\"\n\n# Tokenization\ntokens = nltk.word_tokenize(text)\n\n# Sentence splitting\nsentences = nltk.sent_tokenize(text)\n\n# POS tagging\ntagged_tokens = nltk.pos_tag(tokens)\n\n# Lemmatization\nlmtzr = nltk.stem.wordnet.WordNetLemmatizer()\nlemma=lmtzr.lemmatize(tokens[4], 'v')\n\n# Printing all information\nprint(tokens)\nprint(sentences)\nprint(tagged_tokens)\nprint(lemma)",
"2.1 Tokenization and sentence splitting with NLTK\n2.1.1 word_tokenize()\nNow, let's try tokenizing our Charlie story! First, we will open and read the file again and assign the file contents to the variable content. Then, we can call the word_tokenize() function from the nltk module as follows:",
"with open(\"../Data/Charlie/charlie.txt\") as infile:\n content = infile.read()\n\ntokens = nltk.word_tokenize(content)\nprint(type(tokens), len(tokens))\nprint(tokens)",
"As you can see, we now have a list of all words in the text. The punctuation marks are also in the list, but as separate tokens.\n2.1.2 sent_tokenize()\nAnother thing that NLTK can do for you is to split a text into sentences by using the sent_tokenize() function. We use it on the entire text (as a string):",
"with open(\"../Data/Charlie/charlie.txt\") as infile:\n content = infile.read()\n\nsentences = nltk.sent_tokenize(content)\n\nprint(type(sentences), len(sentences))\nprint(sentences)",
"We can now do all sorts of cool things with these lists. For example, we can search for all words that have certain letters in them and add them to a list. Let's say we want to find all present participles in the text. We know that present participles end with -ing, so we can do something like this:",
"# Open and read in file as a string, assign it to the variable `content`\nwith open(\"../Data/Charlie/charlie.txt\") as infile:\n    content = infile.read()\n    \n# Split up entire text into tokens using word_tokenize():\ntokens = nltk.word_tokenize(content)\n\n# create an empty list to collect all words ending in the present participle suffix -ing:\npresent_participles = []\n\n# looking through all tokens\nfor token in tokens:\n    # checking if a token ends with the present participle suffix -ing\n    if token.endswith(\"ing\"):\n        # if the condition is met, add it to the list we created above (present_participles)\n        present_participles.append(token)\n    \n# Print the list to inspect it\nprint(present_participles)",
"This looks good! We now have a list of words like boiling, sizzling, etc. However, we can see that there is one word in the list that actually is not a present participle (ceiling). Of course, other words can also end in -ing. So if we want to find all present participles, we have to come up with a smarter solution.\n2.2. Part-of-speech (POS) tagging\nOnce again, NLTK comes to the rescue. Using the function pos_tag(), we can label each word in the text with its part of speech. \nTo do pos-tagging, you first need to tokenize the text. We have already done this above, but we will repeat the steps here, so you get a sense of what an NLP pipeline may look like.\n2.2.1 pos_tag()\nTo see how pos_tag() can be used, we can (as always) look at the documentation by using the help() function. As we can see, pos_tag() takes a tokenized text as input and returns a list of tuples in which the first element corresponds to the token and the second to the assigned pos-tag.",
"# As always, we can start by reading the documentation:\nhelp(nltk.pos_tag)\n\n# Open and read in file as a string, assign it to the variable `content`\nwith open(\"../Data/Charlie/charlie.txt\") as infile:\n content = infile.read()\n \n# Split up entire text into tokens using word_tokenize():\ntokens = nltk.word_tokenize(content)\n\n# Apply pos tagging to the tokenized text\ntagged_tokens = nltk.pos_tag(tokens)\n\n# Inspect pos tags\nprint(tagged_tokens)",
"2.2.2 Working with POS tags\nAs we saw above, pos_tag() returns a list of tuples: The first element is the token, the second element indicates the part of speech (POS) of the token. \nThis POS tagger uses the POS tag set of the Penn Treebank Project, which can be found here. For example, all tags starting with a V are used for verbs. \nWe can now use this, for example, to identify all the verbs in a text:",
"# Open and read in file as a string, assign it to the variable `content`\nwith open(\"../Data/Charlie/charlie.txt\") as infile:\n content = infile.read()\n \n# Apply tokenization and POS tagging\ntokens = nltk.word_tokenize(content)\ntagged_tokens = nltk.pos_tag(tokens)\n\n# List of verb tags (i.e. tags we are interested in)\nverb_tags = [\"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\"]\n\n# Create an empty list to collect all verbs:\nverbs = []\n\n# Iterating over all tagged tokens\nfor token, tag in tagged_tokens:\n \n # Checking if the tag is any of the verb tags\n if tag in verb_tags:\n # if the condition is met, add it to the list we created above \n verbs.append(token)\n \n# Print the list to inspect it\nprint(verbs)",
"2.3. Lemmatization\nWe can also use NLTK to lemmatize words.\nThe lemma of a word is the form of the word which is usually used in dictionary entries. This is useful for many NLP tasks, as it gives a better generalization than the surface form in which a word appears in the text. To a computer, cat and cats are two completely different tokens, even though we know they are both forms of the same lemma. \n2.3.1 The WordNet lemmatizer\nWe will use the WordNetLemmatizer for this, calling its lemmatize() function. In the code below, we loop through the list of verbs, lemmatize each of the verbs, and add them to a new list called verb_lemmas. Again, we show all the processing steps (consider the comments in the code below):",
"#################################################################################\n#### Process text as explained above ###\n\nwith open(\"../Data/Charlie/charlie.txt\") as infile:\n    content = infile.read()\n    \ntokens = nltk.word_tokenize(content)\ntagged_tokens = nltk.pos_tag(tokens)\n\nverb_tags = [\"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\"]\nverbs = []\n\nfor token, tag in tagged_tokens:\n    if tag in verb_tags:\n        verbs.append(token)\n\nprint(verbs)\n\n#############################################################################\n#### Use the list of verbs collected above to lemmatize all the verbs ###\n\n \n# Instantiate a lemmatizer object\nlmtzr = nltk.stem.wordnet.WordNetLemmatizer()\n\n# Create list to collect all the verb lemmas:\nverb_lemmas = []\n \nfor participle in verbs:\n    # For this lemmatizer, we need to indicate the POS of the word (in this case, v = verb)\n    lemma = lmtzr.lemmatize(participle, \"v\") \n    verb_lemmas.append(lemma)\nprint(verb_lemmas)",
"Note about the wordnet lemmatizer: \nWe need to specify a POS tag to the WordNet lemmatizer, in a WordNet format (\"n\" for noun, \"v\" for verb, \"a\" for adjective). If we do not indicate the Part-of-Speech tag, the WordNet lemmatizer thinks it is a noun (this is the default value for its part-of-speech). See the examples below:",
"test_nouns = ('building', 'applications', 'leafs')\nfor n in test_nouns:\n    print(f\"Noun in conjugated form: {n}\")\n    default_lemma=lmtzr.lemmatize(n) # default lemmatization, without specifying POS, n is interpreted as a noun!\n    print(f\"Default lemmatization: {default_lemma}\")\n    verb_lemma=lmtzr.lemmatize(n, 'v')\n    print(f\"Lemmatization as a verb: {verb_lemma}\")\n    noun_lemma=lmtzr.lemmatize(n, 'n')\n    print(f\"Lemmatization as a noun: {noun_lemma}\")\n    print()\n\ntest_verbs=('grew', 'standing', 'plays')\nfor v in test_verbs:\n    print(f\"Verb in conjugated form: {v}\")\n    default_lemma=lmtzr.lemmatize(v) # default lemmatization, without specifying POS, v is interpreted as a noun!\n    print(f\"Default lemmatization: {default_lemma}\")\n    verb_lemma=lmtzr.lemmatize(v, 'v')\n    print(f\"Lemmatization as a verb: {verb_lemma}\")\n    noun_lemma=lmtzr.lemmatize(v, 'n')\n    print(f\"Lemmatization as a noun: {noun_lemma}\")\n    print()",
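To recap why lemmatization helps downstream tasks such as counting, here is a toy illustration with a hand-made lemma dictionary (the words and the mapping are made up for the example; a real lemmatizer such as the WordNetLemmatizer above replaces the dictionary):

```python
# A tiny hand-made lemma dictionary (illustration only, not a real lemmatizer)
toy_lemmas = {"cat": "cat", "cats": "cat", "run": "run", "ran": "run", "running": "run"}

tokens = ["cat", "cats", "running", "ran", "run"]

# Counting raw tokens: every surface form gets its own entry
raw_counts = {}
for t in tokens:
    raw_counts[t] = raw_counts.get(t, 0) + 1

# Counting lemmas: forms of the same word collapse into one entry
lemma_counts = {}
for t in tokens:
    lemma = toy_lemmas[t]
    lemma_counts[lemma] = lemma_counts.get(lemma, 0) + 1

print(raw_counts)    # five separate entries, one per surface form
print(lemma_counts)  # {'cat': 2, 'run': 3}
```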
"3 Nesting\nSo far, we typically used a single for-loop, or we were opening a single file at a time. In Python (and most programming languages), one can nest multiple loops or files in one another. For instance, we can use one (outer) for-loop to iterate through files, and then for each file iterate through all its sentences (internal for-loop). As we have learned above, glob is a convenient way of creating a list of files. \nYou might think: can we stretch this to more levels? Iterate through files, then iterate through the sentences in these files, then iterate through each word in these sentences, then iterate through each letter in these words, etc. This is possible. Python (and most programming languages) allow you to perform nesting with (in theory) as many loops as you want. Keep in mind that nesting too much will eventually cause computational problems, but this also depends on the size of your data. \nFor the tasks we are treating here, a couple of levels of nesting are fine. \nIn the code below, we want to get an idea of the number and length of the sentences in the texts stored in the ../Data/dreams directory. We do this by creating two for loops: We iterate over all the files in the directory (loop 1), apply sentence tokenization and iterate over all the sentences in the file (loop 2).\nLook at the code and comments below to figure out what is going on:",
"import glob\n\n### Loop 1 ####\n# Loop1: iterate over all the files in the dreams directory\nfor filename in glob.glob(\"../Data/dreams/*.txt\"): \n    # read in the file and assign the content to a variable\n    with open(filename, \"r\") as infile:\n        content = infile.read()\n    # split the content into sentences\n    sentences = nltk.sent_tokenize(content) \n    # Print the number of sentences in the file\n    print(f\"INFO: File {filename} has {len(sentences)} sentences\") \n\n    # For each file, assign a number to each sentence. Start with 0:\n    counter=0\n\n    #### Loop 2 ####\n    # Loop 2: loop over all the sentences in a file:\n    for sentence in sentences:\n        # add 1 to the counter\n        counter+=1 \n        # tokenize the sentence\n        tokens=nltk.word_tokenize(sentence) \n        # print the number of tokens per sentence\n        print(f\"Sentence {counter} has {len(tokens)} tokens\") \n    \n    # print an empty line after each file (this belongs to loop 1)\n    print()",
"4 Putting it all together\nIn this section, we will use what we have learned above to write a small NLP program. We will go through all the steps and show how they can be put together. In the last chapters, we have already learned how to write functions. We will make use of this skill here. \nOur goal is to collect all the nouns from Vickie's dream reports. \nBefore we write actual code, it is always good to consider which steps we need to carry out to reach the goal. \nImportant steps to remember:\n\ncreate a list of all the files we want to process\nopen and read the files\ntokenize the texts\nperform pos-tagging\ncollect all the tokens analyzed as nouns\n\nRemember, we first needed to import nltk to use it. \n4.1 Writing a processing function for a single file\nSince we want to carry out the same task for each of the files, it is very useful (and good practice!) to write a single function which can do the processing. The following function reads the specified file and returns the tokens with their POS tags:",
"import nltk\n\ndef tag_tokens_file(filepath):\n \"\"\"Read the contents of the file found at the location specified in \n FILEPATH and return a list of its tokens with their POS tags.\"\"\"\n with open(filepath, \"r\") as infile:\n content = infile.read()\n tokens = nltk.word_tokenize(content)\n tagged_tokens = nltk.pos_tag(tokens)\n return tagged_tokens",
"Now, instead of having to open a file, read the contents and close the file, we can just call the function tag_tokens_file to do this. We can test it on a single file:",
"filename = \"../Data/dreams/vickie1.txt\"\ntagged_tokens = tag_tokens_file(filename)\nprint(tagged_tokens)",
"4.2 Iterating over all the files and applying the processing function\nWe can also do this for each of the files in the ../Data/dreams directory by using a for-loop:",
"import glob\n\n# Iterate over the `.txt` files in the directory and perform POS tagging on each of them\nfor filename in glob.glob(\"../Data/dreams/*.txt\"): \n tagged_tokens = tag_tokens_file(filename)\n print(filename, \"\\n\", tagged_tokens, \"\\n\")",
"4.3 Collecting all the nouns\nNow, we extend this code a bit so that we don't print all POS-tagged tokens of each file, but we get all (proper) nouns from the texts and add them to a list called nouns_in_dreams. Then, we print the set of nouns:",
"# Create a list that will contain all nouns\nnouns_in_dreams = []\n\n# Iterate over the `.txt` files in the directory and perform POS tagging on each of them\nfor filename in glob.glob(\"../Data/dreams/*.txt\"): \n tagged_tokens = tag_tokens_file(filename)\n \n # Get all (proper) nouns in the text (\"NN\" and \"NNP\") and add them to the list\n for token, pos in tagged_tokens:\n if pos in [\"NN\", \"NNP\"]:\n nouns_in_dreams.append(token)\n\n# Print the set of nouns in all dreams\nprint(set(nouns_in_dreams))\n",
"Now we have an idea what Vickie dreams about!\nExercises\nExercise 1: \nTry to collect all the present participles in the text stored in ../Data/Charlie/charlie.txt using the NLTK tokenizer and POS-tagger.",
"# your code here",
"You should get the following list: \n['boiling', 'bubbling', 'hissing', 'sizzling', 'clanking', 'running', 'hopping', 'knowing', 'rubbing', 'cackling', 'going']",
"# we can test our code using the assert statement (don't worry about this now, \n# but if you want to use it, you can probably figure out how it works yourself :-) \n# If our code is correct, we should get a compliment :-)\nassert len(present_participles) == 11 and type(present_participles[0]) == str\nprint(\"Well done!\")",
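If you are curious how assert works in general, here is a generic illustration (unrelated to the exercise itself): assert does nothing when its condition is True, and raises an AssertionError when it is False.

```python
x = 2 + 2

# A passing assert is silent
assert x == 4

# A failing assert raises AssertionError (with an optional message)
try:
    assert x == 5, "x is not 5"
except AssertionError as error:
    print(error)  # prints: x is not 5
```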
"Exercise 2: \nThe resulting list verb_lemmas above contains a lot of duplicates. Do you remember how you can get rid of these duplicates? Create a set in which each verb occurs only once and name it unique_verbs. Then print it.",
"## the list is stored under the variable 'verb_lemmas'\n\n# your code here\n\n# Test your code here! If your code is correct, you should get a compliment :-)\nassert len(unique_verbs) == 28 \nprint(\"Well done!\")",
"Exercise 3: \nNow use a for-loop to count the number of times that each of these verb lemmas occurs in the text! For each verb in the list you just created, get the count of this verb in charlie.txt using the count() method. Create a dictionary that contains the lemmas of the verbs as keys, and the counts of these verbs as values. Refer to the notebook about Topic 1 if you forgot how to use the count() method or how to create dictionary entries!\nTip: you don't need to read in the file again, you can just use the list called verb_lemmas.",
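As a reminder of the counting pattern (shown here with a made-up word list, so it does not give away the answer for charlie.txt):

```python
words = ["spam", "egg", "spam", "spam", "ham"]
unique_words = set(words)

word_counts = {}
for w in unique_words:
    # count() returns how often w occurs in the list
    word_counts[w] = words.count(w)

print(word_counts)  # {'spam': 3, 'egg': 1, 'ham': 1} (key order may differ)
```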
"verb_counts = {}\n\n# Finish this for-loop\nfor verb in unique_verbs:\n # your code here\n\nprint(verb_counts) \n\n# Test your code here! If your code is correct, you should get a compliment :-)\nassert len(verb_counts) == 28 and verb_counts[\"bubble\"] == 1 and verb_counts[\"be\"] == 9\nprint(\"Well done!\")",
"Exercise 4:\nWrite your counts to a file called charlie_verb_counts.txt and write it to ../Data/Charlie/charlie_verb_counts.txt in the following format:\nverb, count\nverb, count \n...\nDon't forget to use newline characters at the end of each line."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yingchi/fastai-notes
|
deeplearning1/nbs/mnist_yingchi.ipynb
|
apache-2.0
|
[
"Model Building for MNIST",
"from theano.sandbox import cuda\ncuda.use('gpu1')\n\n%matplotlib inline\nfrom importlib import reload\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function",
"Setup",
"batch_size = 64\nfrom keras.datasets import mnist\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n(X_train.shape, y_train.shape, X_test.shape, y_test.shape)\n\n# Because MNIST is grey-scale images, it does not have the color column,\n# Let's add one empty dim to the X data\nX_test = np.expand_dims(X_test, 1)\nX_train = np.expand_dims(X_train, 1)\nX_train.shape\n\ny_train[:5]\n\ny_train = onehot(y_train)\ny_test = onehot(y_test)\ny_train[:5]",
"Now, let's normalize the inputs",
"mean_px = X_train.mean().astype(np.float32)\nstd_px = X_train.std().astype(np.float32)\n\ndef norm_input(x): return (x-mean_px)/std_px",
"Linear model\nWhy don't we just fine-tune an ImageNet model?\nBecause ImageNet models expect 224 x 224 full-color inputs. Here we have 28 x 28 greyscale images.\nSo we need to start from scratch.",
"def get_lin_model():\n model = Sequential([\n Lambda(norm_input, input_shape=(1,28,28)),\n Flatten(),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nlm = get_lin_model()\n\ngen = image.ImageDataGenerator()\nbatches = gen.flow(X_train, y_train, batch_size=64)\ntest_batches = gen.flow(X_test, y_test, batch_size=64)\n\nlm.fit_generator(batches, batches.N, nb_epoch=1, \n validation_data=test_batches, nb_val_samples=test_batches.N)",
"It's always recommended to start with one epoch and a low learning rate (the Keras Adam default is 0.001).",
"lm.optimizer.lr = 0.1\nlm.fit_generator(batches, batches.N, nb_epoch=3,\n validation_data=test_batches, nb_val_samples=test_batches.N)",
"Single Dense Layer",
"def get_fc_model():\n model = Sequential([\n Lambda(norm_input, input_shape=(1,28,28)),\n Flatten(),\n Dense(512, activation='softmax'),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nfc = get_fc_model()",
"As before, let's start with 1 epoch and a default low learning rate.",
"fc.fit_generator(batches, batches.N, nb_epoch=1, \n validation_data=test_batches, nb_val_samples=test_batches.N)\n\nfc.optimizer.lr=0.01\nfc.fit_generator(batches, batches.N, nb_epoch=4, \n validation_data=test_batches, nb_val_samples=test_batches.N)",
"Basic 'VGG-style' CNN",
"def get_model():\n model = Sequential([\n Lambda(norm_input, input_shape=(1,28, 28)),\n Convolution2D(32,3,3, activation='relu'),\n Convolution2D(32,3,3, activation='relu'),\n MaxPooling2D(),\n Convolution2D(64,3,3, activation='relu'),\n Convolution2D(64,3,3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n Dense(512, activation='relu'),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nmodel = get_model()\nmodel.fit_generator(batches, batches.N, nb_epoch=1,\n validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.1\nmodel.fit_generator(batches, batches.N, nb_epoch=1, \n validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.01\nmodel.fit_generator(batches, batches.N, nb_epoch=8, \n validation_data=test_batches, nb_val_samples=test_batches.N)",
"Data Augmentation",
"model = get_model()\n\n# Now, we don't use the default settings for ImageDataGenerator\ngen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,\n                               height_shift_range=0.08, zoom_range=0.08)\nbatches = gen.flow(X_train, y_train, batch_size=64)\ntest_batches = gen.flow(X_test, y_test, batch_size=64)\n\nmodel.fit_generator(batches, batches.N, nb_epoch=1,\n                    validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.1\nmodel.fit_generator(batches, batches.N, nb_epoch=4,\n                    validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.01\nmodel.fit_generator(batches, batches.N, nb_epoch=8, \n                    validation_data=test_batches, nb_val_samples=test_batches.N)",
"Batchnorm + data augmentation",
"def get_model_bn():\n model = Sequential([\n Lambda(norm_input, input_shape=(1,28,28)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(32,3,3, activation='relu'),\n MaxPooling2D(),\n BatchNormalization(axis=1),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(64,3,3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n BatchNormalization(),\n Dense(512, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nmodel = get_model_bn()\nmodel.fit_generator(batches, batches.N, nb_epoch=1,\n validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.1\nmodel.fit_generator(batches, batches.N, nb_epoch=4, \n validation_data=test_batches, nb_val_samples=test_batches.N)\n\nmodel.optimizer.lr=0.001\nmodel.fit_generator(batches, batches.N, nb_epoch=12, \n validation_data=test_batches, nb_val_samples=test_batches.N)",
"Batchnorm + dropout + data augmentation",
"def get_model_bn_do():\n model = Sequential([\n Lambda(norm_input, input_shape=(1,28,28)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(32,3,3, activation='relu'),\n MaxPooling2D(),\n BatchNormalization(axis=1),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(64,3,3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n BatchNormalization(),\n Dense(512, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nmodel = get_model_bn_do()\n\nmodel.optimizer.lr=0.01\nmodel.fit_generator(batches, batches.N, nb_epoch=12, \n validation_data=test_batches, nb_val_samples=test_batches.N)",
"Ensembling\nEnsembling is a technique that can often improve accuracy: we train several models and combine (here: average) their predictions.",
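The combination step itself is just an average of the per-model predicted probabilities. A minimal pure-Python sketch with made-up numbers (the real code below does the same with Keras predictions, via np.stack(...).mean(axis=0)):

```python
# Predicted probabilities for 3 samples from 3 hypothetical models
preds_model_a = [0.9, 0.2, 0.6]
preds_model_b = [0.8, 0.1, 0.4]
preds_model_c = [0.7, 0.3, 0.5]

all_preds = [preds_model_a, preds_model_b, preds_model_c]

# Average across models, per sample
avg_preds = [sum(sample) / len(all_preds) for sample in zip(*all_preds)]

print([round(p, 2) for p in avg_preds])  # [0.8, 0.2, 0.5]
```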
"def fit_model():\n model = get_model_bn_do()\n model.fit_generator(batches, batches.N, nb_epoch=1, verbose=0,\n validation_data=test_batches, nb_val_samples=test_batches.N)\n model.optimizer.lr=0.1\n model.fit_generator(batches, batches.N, nb_epoch=4, verbose=0,\n validation_data=test_batches, nb_val_samples=test_batches.N)\n model.optimizer.lr=0.01\n model.fit_generator(batches, batches.N, nb_epoch=12, verbose=0,\n validation_data=test_batches, nb_val_samples=test_batches.N)\n # model.optimizer.lr=0.001\n # model.fit_generator(batches, batches.N, nb_epoch=18, verbose=0,\n # validation_data=test_batches, nb_val_samples=test_batches.N)\n return model\n\n# Return a list of models\nmodels = [fit_model() for i in range(6)]\n\npath = 'data/mnist/'\nmodel_path = path + 'models/'\n\nfor i, m in enumerate(models):\n m.save_weights(model_path+'cnn-mnist23-'+str(i)+'.pkl')\n\nevals = np.array([m.evaluate(X_test, y_test, batch_size=256) for m in models])\n\nevals.mean(axis=0)\n\nall_preds = np.stack([m.predict(X_test, batch_size=256) for m in models])\nall_preds.shape\n\navg_preds = all_preds.mean(axis=0)\n\nkeras.metrics.categorical_accuracy(y_test, avg_preds).eval()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UWashington-Astro300/Astro300-A17
|
FirstLast_Sympy.ipynb
|
mit
|
[
"First Last - SymPy",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sp",
"$$ \\Large {\\displaystyle f(x)=3e^{-{\\frac {x^{2}}{8}}}} \\sin(x/3)$$\n\nFind the first four terms of the Taylor expansion of the above equation\nMake a plot of the function\nPlot size 10 in x 4 in\nX limits -5, 5\nY limits -2, 2\nOver-plot the 1-term Taylor expansion using a different color\nOver-plot the 2-term Taylor expansion using a different color\nOver-plot the 3-term Taylor expansion using a different color\nOver-plot the 4-term Taylor expansion using a different color",
"sp.init_printing()\n\nx = sp.symbols('x')\n\nmy_x = np.linspace(-10,10,100)",
"Due Wed Nov 29 - Noon\n\nMake sure to change the filename to your name!\nMake sure to change the Title to your name!\nFile -> Download as -> HTML (.html)\nupload your .html and .ipynb file to the class Canvas page"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/fast-and-lean-data-science/TPU-GPU optimized Jigsaw Multilingual BERT.ipynb
|
apache-2.0
|
[
"To run this sample on Google Cloud Platform with various accelerator setups:\n 1. Download this notebook\n 1. Create a Cloud AI Platform Notebook VM with your choice of accelerator.\n    * V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x1)\n    * 4x V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x 4)\n    * 8x V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x 8)\n    * TPU v3-8 (use create-tpu-deep-learning-vm.sh script from this page with --tpu-type v3-8)\n    * TPU v3-32 pod (use create-tpu-deep-learning-vm.sh script from this page with --tpu-type v3-32)\n 1. Get the data from Kaggle. The easiest way is to run the cell below on Kaggle and copy the name of the GCS bucket where the dataset is cached. This bucket is a cache and will expire after a couple of days but it should be enough to run the notebook. Optionally, for best performance, copy the data to your own bucket located in the same region as your TPU.\n 1. Adjust the import and the GCS_PATH in the cell below.",
"# When not running on Kaggle, comment out this import\nfrom kaggle_datasets import KaggleDatasets\n# When not running on Kaggle, set a fixed GCS path here\nGCS_PATH = KaggleDatasets().get_gcs_path('jigsaw-multilingual-toxic-comment-classification')\nprint(GCS_PATH)",
"Overview\nThis notebook is a fork of the Getting started notebook for the Jigsaw Multilingual Toxic Comment classification competition by Ian Kivlichan.\nIt only takes one toxic comment to sour an online discussion. The Conversation AI team, a research initiative founded by Jigsaw and Google, builds technology to protect voices in conversation. A main area of focus is machine learning models that can identify toxicity in online conversations, where toxicity is defined as anything rude, disrespectful or otherwise likely to make someone leave a discussion. Our API, Perspective, serves these models and others in a growing set of languages (see our documentation for the full list). If these toxic contributions can be identified, we could have a safer, more collaborative internet.\nIn this competition, we'll explore how models for recognizing toxicity in online conversations might generalize across different languages. Specifically, in this notebook, we'll demonstrate this with a multilingual BERT (m-BERT) model. Multilingual BERT is pretrained on monolingual data in a variety of languages, and through this learns multilingual representations of text. These multilingual representations enable zero-shot cross-lingual transfer, that is, by fine-tuning on a task in one language, m-BERT can learn to perform that same task in another language (for some examples, see e.g. How multilingual is Multilingual BERT?).\nWe'll study this zero-shot transfer in the context of toxicity in online conversations, similar to past competitions we've hosted ([1], [2]). But rather than analyzing toxicity in English as in those competitions, here we'll ask you to do it in several different languages. For training, we're including the (English) datasets from our earlier competitions, as well as a small amount of new toxicity data in other languages.",
"import os, time, logging\nimport tensorflow as tf\nimport tensorflow_hub as hub\nfrom matplotlib import pyplot as plt\nprint(tf.version.VERSION)\ntf.get_logger().setLevel(logging.ERROR)",
"TPU or GPU detection",
"try: # detect TPU\n tpu = None\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\nexcept ValueError: # detect GPU(s) and enable mixed precision\n strategy = tf.distribute.MirroredStrategy() # works on GPU and multi-GPU\n policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')\n tf.config.optimizer.set_jit(True) # XLA compilation\n tf.keras.mixed_precision.experimental.set_policy(policy)\n print('Mixed precision enabled')\n\nprint(\"REPLICAS: \", strategy.num_replicas_in_sync)\n\n# mixed precision\n# On TPU, bfloat16/float32 mixed precision is automatically used in TPU computations.\n# Enabling it in Keras also stores relevant variables in bfloat16 format (memory optimization).\n# This additional optimization was not used for TPUs in this sample.\n# On GPU, specifically V100, mixed precision must be enabled for hardware TensorCores to be used.\n# XLA compilation must be enabled for this to work. (On TPU, XLA compilation is the default and cannot be turned off)",
"Configuration\nSet maximum sequence length and path variables.",
"SEQUENCE_LENGTH = 128\n\n# Copy of the TF Hub model at https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2\nBERT_GCS_PATH = 'gs://bert_multilingual_public/bert_multi_cased_L-12_H-768_A-12_2/'\nEPOCHS = 6\n\nif tpu:\n BATCH_SIZE = 128 * strategy.num_replicas_in_sync\nelse:\n BATCH_SIZE = 64 * strategy.num_replicas_in_sync\n\nTRAIN_DATA = GCS_PATH + \"/jigsaw-toxic-comment-train-processed-seqlen{}.csv\".format(SEQUENCE_LENGTH)\nTRAIN_DATA_LENGTH = 223549 # rows\nVALID_DATA = GCS_PATH + \"/validation-processed-seqlen{}.csv\".format(SEQUENCE_LENGTH)\nSTEPS_PER_EPOCH = TRAIN_DATA_LENGTH // BATCH_SIZE\n\nLR_MAX = 0.001 * strategy.num_replicas_in_sync\nLR_EXP_DECAY = .9\nLR_MIN = 0.0001\n\n@tf.function\ndef lr_fn(epoch):\n lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch) + LR_MIN\n return lr\n\nprint(\"Learning rate schedule:\")\nrng = [i for i in range(EPOCHS)]\ny = [lr_fn(x) for x in rng]\nplt.plot(rng, [lr_fn(x) for x in rng])\nplt.show()",
"Model\nDefine the model. We convert m-BERT's output to a final probability estimate. We're using an m-BERT model from TensorFlow Hub.",
"def multilingual_bert_model(max_seq_length=SEQUENCE_LENGTH):\n \"\"\"Build and return a multilingual BERT model and tokenizer.\"\"\"\n input_word_ids = tf.keras.layers.Input(\n shape=(max_seq_length,), dtype=tf.int32, name=\"input_word_ids\")\n input_mask = tf.keras.layers.Input(\n shape=(max_seq_length,), dtype=tf.int32, name=\"input_mask\")\n segment_ids = tf.keras.layers.Input(\n shape=(max_seq_length,), dtype=tf.int32, name=\"all_segment_id\")\n \n bert_layer = tf.saved_model.load(BERT_GCS_PATH) # copy of TF Hub model 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2'\n bert_layer = hub.KerasLayer(bert_layer, trainable=True)\n\n pooled_output, _ = bert_layer([input_word_ids, input_mask, segment_ids])\n output = tf.keras.layers.Dense(32, activation='relu')(pooled_output)\n output = tf.keras.layers.Dense(1, activation='sigmoid', name='labels', dtype=tf.float32)(output)\n\n return tf.keras.Model(inputs={'input_word_ids': input_word_ids,\n 'input_mask': input_mask,\n 'all_segment_id': segment_ids},\n outputs=output)",
"Dataset\nLoad the preprocessed dataset. See the demo notebook for sample code for performing this preprocessing.",
"def parse_string_list_into_ints(strlist):\n s = tf.strings.strip(strlist)\n s = tf.strings.substr(\n strlist, 1, tf.strings.length(s) - 2) # Remove parentheses around list\n s = tf.strings.split(s, ',', maxsplit=SEQUENCE_LENGTH)\n s = tf.strings.to_number(s, tf.int32)\n s = tf.reshape(s, [SEQUENCE_LENGTH]) # Force shape here needed for XLA compilation (TPU)\n return s\n\ndef format_sentences(data, label='toxic', remove_language=False):\n labels = {'labels': data.pop(label)}\n if remove_language:\n languages = {'language': data.pop('lang')}\n # The remaining three items in the dict parsed from the CSV are lists of integers\n for k,v in data.items(): # \"input_word_ids\", \"input_mask\", \"all_segment_id\"\n data[k] = parse_string_list_into_ints(v)\n return data, labels\n\ndef make_sentence_dataset_from_csv(filename, label='toxic', language_to_filter=None):\n # This assumes the column order label, input_word_ids, input_mask, segment_ids\n SELECTED_COLUMNS = [label, \"input_word_ids\", \"input_mask\", \"all_segment_id\"]\n label_default = tf.int32 if label == 'id' else tf.float32\n COLUMN_DEFAULTS = [label_default, tf.string, tf.string, tf.string]\n\n if language_to_filter:\n insert_pos = 0 if label != 'id' else 1\n SELECTED_COLUMNS.insert(insert_pos, 'lang')\n COLUMN_DEFAULTS.insert(insert_pos, tf.string)\n\n preprocessed_sentences_dataset = tf.data.experimental.make_csv_dataset(\n filename, column_defaults=COLUMN_DEFAULTS, select_columns=SELECTED_COLUMNS,\n batch_size=1, num_epochs=1, shuffle=False) # We'll do repeating and shuffling ourselves\n # make_csv_dataset required a batch size, but we want to batch later\n preprocessed_sentences_dataset = preprocessed_sentences_dataset.unbatch()\n \n if language_to_filter:\n preprocessed_sentences_dataset = preprocessed_sentences_dataset.filter(\n lambda data: tf.math.equal(data['lang'], tf.constant(language_to_filter)))\n #preprocessed_sentences.pop('lang')\n preprocessed_sentences_dataset = 
preprocessed_sentences_dataset.map(\n lambda data: format_sentences(data, label=label,\n remove_language=language_to_filter))\n\n return preprocessed_sentences_dataset",
"Set up our data pipelines for training and evaluation.",
"def make_dataset_pipeline(dataset, repeat_and_shuffle=True):\n \"\"\"Set up the pipeline for the given dataset.\n \n Caches, repeats, shuffles, and sets the pipeline up to prefetch batches.\"\"\"\n cached_dataset = dataset.cache()\n if repeat_and_shuffle:\n cached_dataset = cached_dataset.repeat().shuffle(2048)\n cached_dataset = cached_dataset.batch(BATCH_SIZE, drop_remainder=True) # no remainder on repeated dataset\n else:\n cached_dataset = cached_dataset.batch(BATCH_SIZE)\n cached_dataset = cached_dataset.prefetch(tf.data.experimental.AUTOTUNE)\n return cached_dataset\n\n# Load the preprocessed English dataframe.\npreprocessed_en_filename = TRAIN_DATA\n\n# Set up the dataset and pipeline.\nenglish_train_dataset = make_dataset_pipeline(\n make_sentence_dataset_from_csv(preprocessed_en_filename))\n\n# Process the new datasets by language.\npreprocessed_val_filename = VALID_DATA\n\nnonenglish_val_datasets = {}\nfor language_name, language_label in [('Spanish', 'es'), ('Italian', 'it'),\n ('Turkish', 'tr')]:\n nonenglish_val_datasets[language_name] = make_sentence_dataset_from_csv(\n preprocessed_val_filename, language_to_filter=language_label)\n nonenglish_val_datasets[language_name] = make_dataset_pipeline(\n nonenglish_val_datasets[language_name], repeat_and_shuffle=False)\n\nnonenglish_val_datasets['Combined'] = make_sentence_dataset_from_csv(preprocessed_val_filename)\nnonenglish_val_datasets['Combined'] = make_dataset_pipeline(nonenglish_val_datasets['Combined'], repeat_and_shuffle=False)",
"Instantiate the model\nCompile our model. We will fine-tune the multilingual model on one of our English datasets, and then evaluate its performance on the new multilingual toxicity data. As our metric, we'll use the AUC.",
"with strategy.scope():\n multilingual_bert = multilingual_bert_model()\n\n # Compile the model. Optimize using stochastic gradient descent.\n multilingual_bert.compile(\n loss=tf.keras.losses.BinaryCrossentropy(),\n optimizer=tf.keras.optimizers.SGD(learning_rate=0.001*strategy.num_replicas_in_sync),\n metrics=[tf.keras.metrics.AUC()])\n\nmultilingual_bert.summary()\n\n%%time\n# Train on English Wikipedia comment data.\nlr_callback = tf.keras.callbacks.LearningRateScheduler(lr_fn)\nhistory = multilingual_bert.fit(\n english_train_dataset, steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS,\n #validation_data=nonenglish_val_datasets['Combined'],\n callbacks=[lr_callback])\n\n# Performance on non-English comments after training.\nfor language in nonenglish_val_datasets:\n results = multilingual_bert.evaluate(nonenglish_val_datasets[language], verbose=0)\n print('{} loss, AUC after training:'.format(language), results)",
"License\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nThis is not an official Google product but sample code provided for an educational purpose"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jinntrance/MOOC
|
coursera/deep-neural-network/quiz and assignments/week 5/Initialization.ipynb
|
cc0-1.0
|
[
"Initialization\nWelcome to the first assignment of \"Improving Deep Neural Networks\". \nTraining your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. \nIf you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. \nA well chosen initialization can:\n- Speed up the convergence of gradient descent\n- Increase the odds of gradient descent converging to a lower training (and generalization) error \nTo get started, run the following cell to load the packages and the planar dataset you will try to classify.",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn\nimport sklearn.datasets\nfrom init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation\nfrom init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# load image dataset: blue/red dots in circles\ntrain_X, train_Y, test_X, test_Y = load_dataset()",
"You would like a classifier to separate the blue dots from the red dots.\n1 - Neural Network model\nYou will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:\n- Zeros initialization -- setting initialization = \"zeros\" in the input argument.\n- Random initialization -- setting initialization = \"random\" in the input argument. This initializes the weights to large random values.\n- He initialization -- setting initialization = \"he\" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. \nInstructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls.",
"def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = \"he\"):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (2, number of examples)\n Y -- true \"label\" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)\n learning_rate -- learning rate for gradient descent \n num_iterations -- number of iterations to run gradient descent\n print_cost -- if True, print the cost every 1000 iterations\n initialization -- flag to choose which initialization to use (\"zeros\",\"random\" or \"he\")\n \n Returns:\n parameters -- parameters learnt by the model\n \"\"\"\n \n grads = {}\n costs = [] # to keep track of the loss\n m = X.shape[1] # number of examples\n layers_dims = [X.shape[0], 10, 5, 1]\n \n # Initialize parameters dictionary.\n if initialization == \"zeros\":\n parameters = initialize_parameters_zeros(layers_dims)\n elif initialization == \"random\":\n parameters = initialize_parameters_random(layers_dims)\n elif initialization == \"he\":\n parameters = initialize_parameters_he(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n a3, cache = forward_propagation(X, parameters)\n \n # Loss\n cost = compute_loss(a3, Y)\n\n # Backward propagation.\n grads = backward_propagation(X, Y, cache)\n \n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n \n # Print the loss every 1000 iterations\n if print_cost and i % 1000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n costs.append(cost)\n \n # plot the loss\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (per hundreds)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters",
"2 - Zero initialization\nThere are two types of parameters to initialize in a neural network:\n- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$\n- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$\nExercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to \"break symmetry\", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.",
"# GRADED FUNCTION: initialize_parameters_zeros \n\ndef initialize_parameters_zeros(layers_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the size of each layer.\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])\n b1 -- bias vector of shape (layers_dims[1], 1)\n ...\n WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])\n bL -- bias vector of shape (layers_dims[L], 1)\n \"\"\"\n \n parameters = {}\n L = len(layers_dims) # number of layers in the network\n \n for l in range(1, L):\n ### START CODE HERE ### (≈ 2 lines of code)\n parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))\n parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))\n ### END CODE HERE ###\n return parameters\n\nparameters = initialize_parameters_zeros([3,2,1])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table> \n <tr>\n <td>\n **W1**\n </td>\n <td>\n [[ 0. 0. 0.]\n [ 0. 0. 0.]]\n </td>\n </tr>\n <tr>\n <td>\n **b1**\n </td>\n <td>\n [[ 0.]\n [ 0.]]\n </td>\n </tr>\n <tr>\n <td>\n **W2**\n </td>\n <td>\n [[ 0. 0.]]\n </td>\n </tr>\n <tr>\n <td>\n **b2**\n </td>\n <td>\n [[ 0.]]\n </td>\n </tr>\n\n</table>\n\nRun the following code to train your model on 15,000 iterations using zeros initialization.",
"parameters = model(train_X, train_Y, initialization = \"zeros\")\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Lets look at the details of the predictions and the decision boundary:",
"print (\"predictions_train = \" + str(predictions_train))\nprint (\"predictions_test = \" + str(predictions_test))\n\nplt.title(\"Model with Zeros initialization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,1.5])\naxes.set_ylim([-1.5,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"The model is predicting 0 for every example. \nIn general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. \n<font color='blue'>\nWhat you should remember:\n- The weights $W^{[l]}$ should be initialized randomly to break symmetry. \n- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. \n3 - Random initialization\nTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. \nExercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your \"random\" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.",
"# GRADED FUNCTION: initialize_parameters_random\n\ndef initialize_parameters_random(layers_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the size of each layer.\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])\n b1 -- bias vector of shape (layers_dims[1], 1)\n ...\n WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])\n bL -- bias vector of shape (layers_dims[L], 1)\n \"\"\"\n \n np.random.seed(3) # This seed makes sure your \"random\" numbers will be the as ours\n parameters = {}\n L = len(layers_dims) # integer representing the number of layers\n \n for l in range(1, L):\n ### START CODE HERE ### (≈ 2 lines of code)\n parameters['W' + str(l)] = 10 * np.random.randn(layers_dims[l], layers_dims[l-1])\n parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))\n ### END CODE HERE ###\n\n return parameters\n\nparameters = initialize_parameters_random([3, 2, 1])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table> \n <tr>\n <td>\n **W1**\n </td>\n <td>\n [[ 17.88628473 4.36509851 0.96497468]\n [-18.63492703 -2.77388203 -3.54758979]]\n </td>\n </tr>\n <tr>\n <td>\n **b1**\n </td>\n <td>\n [[ 0.]\n [ 0.]]\n </td>\n </tr>\n <tr>\n <td>\n **W2**\n </td>\n <td>\n [[-0.82741481 -6.27000677]]\n </td>\n </tr>\n <tr>\n <td>\n **b2**\n </td>\n <td>\n [[ 0.]]\n </td>\n </tr>\n\n</table>\n\nRun the following code to train your model on 15,000 iterations using random initialization.",
"parameters = model(train_X, train_Y, initialization = \"random\")\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"If you see \"inf\" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. \nAnyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.",
"print (predictions_train)\nprint (predictions_test)\n\nplt.title(\"Model with large random initialization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,1.5])\naxes.set_ylim([-1.5,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"Observations:\n- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\\log(a^{[3]}) = \\log(0)$, the loss goes to infinity.\n- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. \n- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.\n<font color='blue'>\nIn summary:\n- Initializing weights to very large random values does not work well. \n- Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! \n4 - He initialization\nFinally, try \"He Initialization\"; this is named for the first author of He et al., 2015. (If you have heard of \"Xavier initialization\", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)\nExercise: Implement the following function to initialize your parameters with He initialization.\nHint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\\sqrt{\\frac{2}{\\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.",
"# GRADED FUNCTION: initialize_parameters_he\n\ndef initialize_parameters_he(layers_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the size of each layer.\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])\n b1 -- bias vector of shape (layers_dims[1], 1)\n ...\n WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])\n bL -- bias vector of shape (layers_dims[L], 1)\n \"\"\"\n \n np.random.seed(3)\n parameters = {}\n L = len(layers_dims) - 1 # integer representing the number of layers\n \n for l in range(1, L + 1):\n ### START CODE HERE ### (≈ 2 lines of code)\n parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2.0/layers_dims[l-1])\n parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))\n ### END CODE HERE ###\n \n return parameters\n\nparameters = initialize_parameters_he([2, 4, 1])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table> \n <tr>\n <td>\n **W1**\n </td>\n <td>\n [[ 1.78862847 0.43650985]\n [ 0.09649747 -1.8634927 ]\n [-0.2773882 -0.35475898]\n [-0.08274148 -0.62700068]]\n </td>\n </tr>\n <tr>\n <td>\n **b1**\n </td>\n <td>\n [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]\n </td>\n </tr>\n <tr>\n <td>\n **W2**\n </td>\n <td>\n [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]\n </td>\n </tr>\n <tr>\n <td>\n **b2**\n </td>\n <td>\n [[ 0.]]\n </td>\n </tr>\n\n</table>\n\nRun the following code to train your model on 15,000 iterations using He initialization.",
"parameters = model(train_X, train_Y, initialization = \"he\")\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)\n\nplt.title(\"Model with He initialization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,1.5])\naxes.set_ylim([-1.5,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"Observations:\n- The model with He initialization separates the blue and the red dots very well in a small number of iterations.\n5 - Conclusions\nYou have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is:\n<table> \n <tr>\n <td>\n **Model**\n </td>\n <td>\n **Train accuracy**\n </td>\n <td>\n **Problem/Comment**\n </td>\n\n </tr>\n <td>\n 3-layer NN with zeros initialization\n </td>\n <td>\n 50%\n </td>\n <td>\n fails to break symmetry\n </td>\n <tr>\n <td>\n 3-layer NN with large random initialization\n </td>\n <td>\n 83%\n </td>\n <td>\n too large weights \n </td>\n </tr>\n <tr>\n <td>\n 3-layer NN with He initialization\n </td>\n <td>\n 99%\n </td>\n <td>\n recommended method\n </td>\n </tr>\n</table>\n\n<font color='blue'>\nWhat you should remember from this notebook:\n- Different initializations lead to different results\n- Random initialization is used to break symmetry and make sure different hidden units can learn different things\n- Don't intialize to values that are too large\n- He initialization works well for networks with ReLU activations."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Kaggle/learntools
|
notebooks/data_cleaning/raw/tut4.ipynb
|
apache-2.0
|
[
"In this notebook, we're going to be working with different character encodings. \nLet's get started!\nGet our environment set up\nThe first thing we'll need to do is load in the libraries we'll be using. Not our dataset, though: we'll get to it later!",
"# modules we'll use\nimport pandas as pd\nimport numpy as np\n\n# helpful character encoding module\nimport chardet\n\n# set seed for reproducibility\nnp.random.seed(0)",
"What are encodings?\nCharacter encodings are specific sets of rules for mapping from raw binary byte strings (that look like this: 0110100001101001) to characters that make up human-readable text (like \"hi\"). There are many different encodings, and if you tried to read in text with a different encoding than the one it was originally written in, you ended up with scrambled text called \"mojibake\" (said like mo-gee-bah-kay). Here's an example of mojibake:\næ–‡å—化ã??\nYou might also end up with a \"unknown\" characters. There are what gets printed when there's no mapping between a particular byte and a character in the encoding you're using to read your byte string in and they look like this:\n����������\nCharacter encoding mismatches are less common today than they used to be, but it's definitely still a problem. There are lots of different character encodings, but the main one you need to know is UTF-8.\n\nUTF-8 is the standard text encoding. All Python code is in UTF-8 and, ideally, all your data should be as well. It's when things aren't in UTF-8 that you run into trouble.\n\nIt was pretty hard to deal with encodings in Python 2, but thankfully in Python 3 it's a lot simpler. (Kaggle Notebooks only use Python 3.) There are two main data types you'll encounter when working with text in Python 3. One is is the string, which is what text is by default.",
"# start with a string\nbefore = \"This is the euro symbol: €\"\n\n# check to see what datatype it is\ntype(before)",
"The other data is the bytes data type, which is a sequence of integers. You can convert a string into bytes by specifying which encoding it's in:",
"# encode it to a different encoding, replacing characters that raise errors\nafter = before.encode(\"utf-8\", errors=\"replace\")\n\n# check the type\ntype(after)",
"If you look at a bytes object, you'll see that it has a b in front of it, and then maybe some text after. That's because bytes are printed out as if they were characters encoded in ASCII. (ASCII is an older character encoding that doesn't really work for writing any language other than English.) Here you can see that our euro symbol has been replaced with some mojibake that looks like \"\\xe2\\x82\\xac\" when it's printed as if it were an ASCII string.",
"# take a look at what the bytes look like\nafter",
"When we convert our bytes back to a string with the correct encoding, we can see that our text is all there correctly, which is great! :)",
"# convert it back to utf-8\nprint(after.decode(\"utf-8\"))",
"However, when we try to use a different encoding to map our bytes into a string, we get an error. This is because the encoding we're trying to use doesn't know what to do with the bytes we're trying to pass it. You need to tell Python the encoding that the byte string is actually supposed to be in.\n\nYou can think of different encodings as different ways of recording music. You can record the same music on a CD, cassette tape or 8-track. While the music may sound more-or-less the same, you need to use the right equipment to play the music from each recording format. The correct decoder is like a cassette player or a CD player. If you try to play a cassette in a CD player, it just won't work.",
"# try to decode our bytes with the ascii encoding\nprint(after.decode(\"ascii\"))",
"We can also run into trouble if we try to use the wrong encoding to map from a string to bytes. Like I said earlier, strings are UTF-8 by default in Python 3, so if we try to treat them like they were in another encoding we'll create problems. \nFor example, if we try to convert a string to bytes for ASCII using encode(), we can ask for the bytes to be what they would be if the text was in ASCII. Since our text isn't in ASCII, though, there will be some characters it can't handle. We can automatically replace the characters that ASCII can't handle. If we do that, however, any characters not in ASCII will just be replaced with the unknown character. Then, when we convert the bytes back to a string, the character will be replaced with the unknown character. The dangerous part about this is that there's not way to tell which character it should have been. That means we may have just made our data unusable!",
"# start with a string\nbefore = \"This is the euro symbol: €\"\n\n# encode it to a different encoding, replacing characters that raise errors\nafter = before.encode(\"ascii\", errors = \"replace\")\n\n# convert it back to utf-8\nprint(after.decode(\"ascii\"))\n\n# We've lost the original underlying byte string! It's been \n# replaced with the underlying byte string for the unknown character :(",
"This is bad and we want to avoid doing it! It's far better to convert all our text to UTF-8 as soon as we can and keep it in that encoding. The best time to convert non UTF-8 input into UTF-8 is when you read in files, which we'll talk about next.\nReading in files with encoding problems\nMost files you'll encounter will probably be encoded with UTF-8. This is what Python expects by default, so most of the time you won't run into problems. However, sometimes you'll get an error like this:",
"# try to read in a file not in UTF-8\nkickstarter_2016 = pd.read_csv(\"../input/kickstarter-projects/ks-projects-201612.csv\")",
"Notice that we get the same UnicodeDecodeError we got when we tried to decode UTF-8 bytes as if they were ASCII! This tells us that this file isn't actually UTF-8. We don't know what encoding it actually is though. One way to figure it out is to try and test a bunch of different character encodings and see if any of them work. A better way, though, is to use the chardet module to try and automatically guess what the right encoding is. It's not 100% guaranteed to be right, but it's usually faster than just trying to guess.\nI'm going to just look at the first ten thousand bytes of this file. This is usually enough for a good guess about what the encoding is and is much faster than trying to look at the whole file. (Especially with a large file this can be very slow.) Another reason to just look at the first part of the file is that we can see by looking at the error message that the first problem is the 11th character. So we probably only need to look at the first little bit of the file to figure out what's going on.",
"# look at the first ten thousand bytes to guess the character encoding\nwith open(\"../input/kickstarter-projects/ks-projects-201801.csv\", 'rb') as rawdata:\n result = chardet.detect(rawdata.read(10000))\n\n# check what the character encoding might be\nprint(result)",
"So chardet is 73% confidence that the right encoding is \"Windows-1252\". Let's see if that's correct:",
"# read in the file with the encoding detected by chardet\nkickstarter_2016 = pd.read_csv(\"../input/kickstarter-projects/ks-projects-201612.csv\", encoding='Windows-1252')\n\n# look at the first few lines\nkickstarter_2016.head()",
"Yep, looks like chardet was right! The file reads in with no problem (although we do get a warning about datatypes) and when we look at the first few rows it seems to be fine. \n\nWhat if the encoding chardet guesses isn't right? Since chardet is basically just a fancy guesser, sometimes it will guess the wrong encoding. One thing you can try is looking at more or less of the file and seeing if you get a different result and then try that.\n\nSaving your files with UTF-8 encoding\nFinally, once you've gone through all the trouble of getting your file into UTF-8, you'll probably want to keep it that way. The easiest way to do that is to save your files with UTF-8 encoding. The good news is, since UTF-8 is the standard encoding in Python, when you save a file it will be saved as UTF-8 by default:",
"# save our file (will be saved as UTF-8 by default!)\nkickstarter_2016.to_csv(\"ks-projects-201801-utf8.csv\")",
"Pretty easy, huh? :)\nYour turn!\nDeepen your understanding with a dataset of fatal police shootings in the US."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch5-Problem_5-07.ipynb
|
unlicense
|
[
"Excercises Electric Machinery Fundamentals\nChapter 5\nProblem 5-7",
"%pylab notebook",
"Description\nA 208-V Y-connected synchronous motor is drawing 50 A at unity power factor from a 208-V power\nsystem. The field current flowing under these conditions is 2.7 A. Its synchronous reactance is $1.6\\,\\Omega$.\nAssume a linear open-circuit characteristic.",
"Vt = 208 # [V]\nIa = 50 # [A]\nXs = 1.6 # [Ohm]\nRa = 0 # [Ohm]\nPF2 = 0.8\nIf_1 = 2.7 # [A]",
"(a)\n\nFind $\\vec{V}_\\phi$ and $\\vec{E}_A$ for these conditions.\n\n(b)\n\nFind the torque angle $\\delta$ .\n\n(c)\n\nWhat is the static stability power limit under these conditions?\n\n(d)\n\nHow much field current would be required to make the motor operate at 0.80 PF leading?\n\n(e)\n\nWhat is the new torque angle in part (d)?\n\nSOLUTION\n(a)\nThe phase voltage of this motor is $V_\\phi = 120 V$, and the armature current is $\\vec{I}_A = 50\\,A \\angle 0°$ .\nTherefore, the internal generated voltage is:\n$$\\vec{E}A = \\vec{V}\\phi - R_A \\vec{I}_A - jX_S \\vec{I}_A$$",
"Vphi = Vt / sqrt(3)\nEA = Vphi - Ra*Ia - Xs*1j*Ia\nEA_angle = arctan(EA.imag/EA.real)\nprint('''\nEA = {:.0f} V ∠{:.1f}°\n=================='''.format(abs(EA), EA_angle/pi*180))",
"(b)\nThe torque angle $\\delta$ of this machine is",
"delta = EA_angle\nprint('''\nδ = {:.1f}°\n=========='''.format(delta/pi*180))",
"(c)\nThe static stability power limit is given by\n$$P_\\text{max} = \\frac{3V_\\phi E_A}{X_S}$$",
"Pmax = (3*Vphi*abs(EA)) / Xs\nprint('''\nPmax = {:.1f} kW\n=============='''.format(Pmax/1000))",
"(d)\nA phasor diagram of the motor operating at a power factor of 0.80 leading is shown below.\n<img src=\"figs/Problem_5-07.jpg\" width=\"70%\">\nSince the power supplied by the motor is constant, the quantity $I_A \\cos \\theta$ , which is directly proportional\nto power, must be constant. Therefore,",
"theta1 = 0 # [rad]\ntheta2 = arccos(PF2)\nIa2 = Ia*cos(theta1) / cos(theta2)\nIA2 = Ia2 * (cos(theta2)+sin(theta2)*1j)\nIA2_angle = theta2\nprint('IA2 = {:.1f} A ∠{:.2f}°'.format(abs(IA2), IA2_angle/pi*180))",
"The internal generated voltage required to produce this current would be:\n$$\\vec{E}_{A2} = \\vec{V}_\\phi - R_A \\vec{I}_{A2} - jX_S \\vec{I}_{A2}$$",
"EA2 = Vphi - Ra*IA2 - Xs*1j*IA2\nEA2_angle = arctan(EA2.imag/EA2.real)\nprint('EA2 = {:.0f} V ∠{:.1f}°'.format(abs(EA2), EA2_angle/pi*180))",
"The internal generated voltage $E_A$ is directly proportional to the field flux, and we have assumed in this\nproblem that the flux is directly proportional to the field current. Therefore, the required field current is:\n$$I_{F2} = \\frac{E_{A2}}{E_{A1}}I_{F1}$$",
"If_2 = abs(EA2)/abs(EA) * If_1\nprint('''\nIf_2 = {:.2f} A\n============='''.format(If_2))",
"(e)\nThe new torque angle $\\delta$ of this machine is",
"delta2 = EA2_angle\nprint('''\nδ_2 = {:.1f}°\n============'''.format(delta2/pi*180))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tuanavu/coursera-university-of-washington
|
machine_learning/2_regression/lecture/week5/Overfitting_Demo_Ridge_Lasso.ipynb
|
mit
|
[
"Overfitting demo\nCreate a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:",
"import sys\nsys.path.append('C:\\Anaconda2\\envs\\dato-env\\Lib\\site-packages')\n\nimport graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"Create random values for x in interval [0,1)",
"random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()",
"Compute y",
"y = x.apply(lambda x: math.sin(4*x))",
"Add random Gaussian noise to y",
"random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e",
"Put data into an SFrame to manipulate later",
"data = graphlab.SFrame({'X1':x,'Y':y})\ndata",
"Create a function to plot the data, since we'll do it many times",
"def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)",
"Define some useful polynomial regression functions\nDefine a function to create our features for a polynomial regression model of any degree:",
"def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy",
"Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":",
"def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model",
"Define function to plot data and predictions made, since we are going to use it many times.",
"def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])",
"Create a function that prints the polynomial coefficients in a pretty way :)",
"def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)",
"Fit a degree-2 polynomial\nFit our degree-2 polynomial to the data generated above:",
"model = polynomial_regression(data, deg=2)",
"Inspect learned parameters",
"print_coefficients(model)",
"Form and plot our predictions along a grid of x values:",
"plot_poly_predictions(data,model)",
"Fit a degree-4 polynomial",
"model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)",
"Fit a degree-16 polynomial",
"model = polynomial_regression(data, deg=16)\nprint_coefficients(model)",
"Woah!!!! Those coefficients are crazy! On the order of 10^6.",
"plot_poly_predictions(data,model)",
"Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.\n\nRidge Regression\nRidge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called \"L2_penalty\").\nDefine our function to solve the ridge objective for a polynomial regression model of any degree:",
"def polynomial_ridge_regression(data, deg, l2_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n return model",
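Since GraphLab Create is proprietary, here is how the same degree-16 ridge experiment could be sketched with scikit-learn; `alpha` plays the role of `l2_penalty`, and the synthetic data below mimics (but is not identical to) the sinusoid above.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# Synthetic sinusoid-plus-noise data, like the demo above
rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30, 1), axis=0)
y = np.sin(4 * x).ravel() + rng.normal(0, 1.0 / 3.0, size=30)

# Degree-16 polynomial features, fit with small vs. large penalty
X16 = PolynomialFeatures(degree=16).fit_transform(x)
coef_size = {}
for alpha in (1e-9, 1e2):
    model = Ridge(alpha=alpha).fit(X16, y)
    coef_size[alpha] = np.abs(model.coef_).max()
    print(alpha, coef_size[alpha])
```

As with the GraphLab version, the large penalty shrinks the coefficient magnitudes dramatically.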
"Perform a ridge fit of a degree-16 polynomial using a very small penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Perform a ridge fit of a degree-16 polynomial using a very large penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Let's look at fits for a sequence of increasing lambda values",
"for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:\n model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n print 'lambda = %.2e' % l2_penalty\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)",
"Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength\nWe will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.",
"# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features (the function returns a copy, so reassign)\n data = polynomial_features(data, deg)\n \n # Create as many folds for cross validation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty",
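For comparison, the same LOO search can be sketched with scikit-learn's `LeaveOneOut` splitter and `Ridge`; the data and variable names here are illustrative stand-ins for the GraphLab version above, not the notebook's own.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30, 1), axis=0)
y = np.sin(4 * x).ravel() + rng.normal(0, 1.0 / 3.0, size=30)
X16 = PolynomialFeatures(degree=16).fit_transform(x)

def loo_mse(alpha):
    # Fit n models, each leaving one observation out, and average the errors
    errs = []
    for train_idx, test_idx in LeaveOneOut().split(X16):
        model = Ridge(alpha=alpha).fit(X16[train_idx], y[train_idx])
        errs.append((model.predict(X16[test_idx])[0] - y[test_idx][0]) ** 2)
    return np.mean(errs)

alphas = np.logspace(-4, 10, num=10)
mses = [loo_mse(a) for a in alphas]
best_alpha = alphas[int(np.argmin(mses))]
print(best_alpha, min(mses))
```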
"Run LOO cross validation for \"num\" values of lambda, on a log scale",
"l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)",
"Plot results of estimating LOO for each value of lambda",
"plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\lambda$ (L2 penalty)')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')",
"Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit",
"best_l2_penalty\n\nmodel = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Lasso Regression\nLasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.\nDefine our function to solve the lasso objective for a polynomial regression model of any degree:",
"def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model",
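The sparsity behavior described above can also be sketched with scikit-learn's coordinate-descent `Lasso` (again a hedged stand-in for the GraphLab code, on mimic data rather than the notebook's SFrame):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30, 1), axis=0)
y = np.sin(4 * x).ravel() + rng.normal(0, 1.0 / 3.0, size=30)
X16 = PolynomialFeatures(degree=16, include_bias=False).fit_transform(x)

# Count nonzero coefficients as the penalty strength grows
nonzeros = []
for alpha in (1e-4, 1e-2, 1e-1):
    model = Lasso(alpha=alpha, max_iter=100000).fit(X16, y)
    nonzeros.append(int(np.sum(model.coef_ != 0)))
print(nonzeros)  # typically fewer nonzeros as alpha grows
```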
"Explore the lasso solution as a function of a few different penalty strengths\nWe refer to lambda in the lasso case below as \"l1_penalty\"",
"for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))",
"Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
texib/deeplearning_homework
|
mnist-logistic_regression.ipynb
|
mit
|
[
"import tensorflow as tf\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport matplotlib\n%matplotlib inline\nimport numpy\n\nimport input_data\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)",
"Define the Input and Output placeholder variables\nInput is the 28x28 bitmap of pixel values\nOutput is an array of 10 labels, representing the predicted values for the digits 0-9",
"x = tf.placeholder(tf.float32,shape=[None,28*28])\ny = tf.placeholder(tf.float32,shape=[None,10])\n\n# Create model\n\n# Set model weights\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\n\nxw = tf.matmul(x, W)\nr = xw + b\na = tf.nn.softmax(r)",
"For the cost function, see: http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/",
"cost = -tf.reduce_sum(y*tf.log(a))\n\nop = tf.train.GradientDescentOptimizer(0.01).minimize(cost)",
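The same softmax cross-entropy can be written out in plain NumPy; the logits and one-hot labels below are a made-up 2-example batch for illustration, not data from the notebook.

```python
import numpy as np

r = np.array([[2.0, 1.0, 0.1],
              [0.5, 2.5, 0.0]])                       # logits, i.e. x.W + b
a = np.exp(r) / np.exp(r).sum(axis=1, keepdims=True)  # softmax over each row
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])                  # one-hot labels
cost = -np.sum(y_true * np.log(a))                    # same as -tf.reduce_sum(y*tf.log(a))
print(cost)
```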
"Now we run the actual computation",
"init = tf.initialize_all_variables()\n\nsess = tf.Session()\n\nsess.run(init)\n\nepochs = 100\nbatch_size = 200\nfor _ in range(epochs):\n avg_cost = 0\n input_x , output_y = mnist.train.next_batch(batch_size)\n sess.run(op,feed_dict={x:input_x,\n y:output_y })\n avg_cost += sess.run(cost,feed_dict={x:input_x,\n y:output_y })\n \n print \"avg_cost:\" ,avg_cost/batch_size\n\npredict = tf.argmax(a, 1)\n# sess.run(predict,feed_dict={x:mnist.test.images})\n\nans = tf.argmax(y,1)\n# sess.run(ans, feed_dict= {y:mnist.test.labels})\n\nprecision = sess.run(tf.reduce_mean(tf.cast(tf.equal(predict,ans),\"float\")),feed_dict= {x:mnist.test.images,y:mnist.test.labels} )\n\nprint precision\n\nimport random\n\nfor img in list(map(lambda _: random.choice(mnist.train.images), range(5))): #mnist.train.images[50:55]:\n tmp = img\n tmp2 = tmp.reshape((28,28))\n\n plt.imshow(tmp2, cmap = cm.Greys)\n plt.show()\n print sess.run(predict,feed_dict={x:[tmp]})[0]\n"
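The accuracy computation at the end of the cell above (argmax of predictions vs. argmax of labels, then a mean over matches) reduces to this plain NumPy sketch; the probabilities and labels below are made up for illustration.

```python
import numpy as np

probs = np.array([[0.1, 0.8, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])   # model output a (softmax rows)
labels = np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 0, 1]])        # one-hot ground truth y
predict = probs.argmax(axis=1)        # tf.argmax(a, 1)
ans = labels.argmax(axis=1)           # tf.argmax(y, 1)
precision = np.mean(predict == ans)   # tf.reduce_mean(tf.cast(tf.equal(...), "float"))
print(precision)
```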
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
albahnsen/ML_RiskManagement
|
notebooks/07_decision_trees.ipynb
|
mit
|
[
"07 - Decision Trees\nby Alejandro Correa Bahnsen & Iván Torroledo\nversion 1.2, Feb 2018\nPart of the class Machine Learning for Risk Management\nThis notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham\nAdapted from Chapter 8 of An Introduction to Statistical Learning\nWhy are we learning about decision trees?\n\nCan be applied to both regression and classification problems\nMany useful properties\nVery popular\nBasis for more sophisticated models\nHave a different way of \"thinking\" than the other models we have studied\n\nLesson objectives\nStudents will be able to:\n\nExplain how a decision tree is created\nBuild a decision tree model in scikit-learn\nTune a decision tree model and explain how tuning impacts the model\nInterpret a tree diagram\nDescribe the key differences between regression and classification trees\nDecide whether a decision tree is an appropriate model for a given problem\n\nPart 1: Regression trees\nMajor League Baseball player data from 1986-87:\n\nYears (x-axis): number of years playing in the major leagues\nHits (y-axis): number of hits in the previous year\nSalary (color): low salary is blue/green, high salary is red/yellow\n\n\nGroup exercise:\n\nThe data above is our training data.\nWe want to build a model that predicts the Salary of future players based on Years and Hits.\nWe are going to \"segment\" the feature space into regions, and then use the mean Salary in each region as the predicted Salary for future players.\nIntuitively, you want to maximize the similarity (or \"homogeneity\") within a given region, and minimize the similarity between different regions.\n\nRules for segmenting:\n\nYou can only use straight lines, drawn one at a time.\nYour line must either be vertical or horizontal.\nYour line stops when it hits an existing line.\n\n\nAbove are the regions created by a computer:\n\n$R_1$: players with less than 5 years of experience, mean Salary 
of \\$166,000 \n$R_2$: players with 5 or more years of experience and less than 118 hits, mean Salary of \\$403,000 \n$R_3$: players with 5 or more years of experience and 118 hits or more, mean Salary of \\$846,000 \n\nNote: Years and Hits are both integers, but the convention is to use the midpoint between adjacent values to label a split.\nThese regions are used to make predictions on out-of-sample data. Thus, there are only three possible predictions! (Is this different from how linear regression makes predictions?)\nBelow is the equivalent regression tree:\n\nThe first split is Years < 4.5, thus that split goes at the top of the tree. When a splitting rule is True, you follow the left branch. When a splitting rule is False, you follow the right branch.\nFor players in the left branch, the mean Salary is \\$166,000, thus you label it with that value. (Salary has been divided by 1000 and log-transformed to 5.11.)\nFor players in the right branch, there is a further split on Hits < 117.5, dividing players into two more Salary regions: \\$403,000 (transformed to 6.00), and \\$846,000 (transformed to 6.74).\n\nWhat does this tree tell you about your data?\n\nYears is the most important factor determining Salary, with a lower number of Years corresponding to a lower Salary.\nFor a player with a lower number of Years, Hits is not an important factor determining Salary.\nFor a player with a higher number of Years, Hits is an important factor determining Salary, with a greater number of Hits corresponding to a higher Salary.\n\nQuestion: What do you like and dislike about decision trees so far?\nBuilding a regression tree by hand\nYour training data is a tiny dataset of used vehicle sale prices. 
Your goal is to predict price for testing data.\n\nRead the data into a Pandas DataFrame.\nExplore the data by sorting, plotting, or split-apply-combine (aka group_by).\nDecide which feature is the most important predictor, and use that to create your first splitting rule.\nOnly binary splits are allowed.\n\n\nAfter making your first split, split your DataFrame into two parts, and then explore each part to figure out what other splits to make.\nStop making splits once you are convinced that it strikes a good balance between underfitting and overfitting.\nYour goal is to build a model that generalizes well.\nYou are allowed to split on the same variable multiple times!\n\n\nDraw your tree, labeling the leaves with the mean price for the observations in that region.\nMake sure nothing is backwards: You follow the left branch if the rule is true, and the right branch if the rule is false.\n\n\n\nHow does a computer build a regression tree?\nIdeal approach: Consider every possible partition of the feature space (computationally infeasible)\n\"Good enough\" approach: recursive binary splitting\n\nBegin at the top of the tree.\nFor every feature, examine every possible cutpoint, and choose the feature and cutpoint such that the resulting tree has the lowest possible mean squared error (MSE). Make that split.\nExamine the two resulting regions, and again make a single split (in one of the regions) to minimize the MSE.\nKeep repeating step 3 until a stopping criterion is met:\nmaximum tree depth (maximum number of splits required to arrive at a leaf)\nminimum number of observations in a leaf\n\n\n\nDemo: Choosing the ideal cutpoint for a given feature",
"# vehicle data\nimport pandas as pd\nimport zipfile\nwith zipfile.ZipFile('../datasets/vehicles_train.csv.zip', 'r') as z:\n f = z.open('vehicles_train.csv')\n train = pd.io.parsers.read_table(f, index_col=False, sep=',')\n\n# before splitting anything, just predict the mean of the entire dataset\ntrain['prediction'] = train.price.mean()\ntrain\n\nyear = 0\ntrain['pred'] = train.loc[train.year<year, 'price'].mean()\ntrain.loc[train.year>=year, 'pred'] = train.loc[train.year>=year, 'price'].mean()\n\n(((train['price'] - train['pred'])**2).mean()) ** 0.5\n\ntrain_izq = train.loc[train.year<0].copy()\n\ntrain_izq.year.unique()\n\ndef error_año(train, year):\n train['pred'] = train.loc[train.year<year, 'price'].mean()\n train.loc[train.year>=year, 'pred'] = train.loc[train.year>=year, 'price'].mean()\n return round(((((train['price'] - train['pred'])**2).mean()) ** 0.5), 2)\n\ndef error_miles(train, miles):\n train['pred'] = train.loc[train.miles<miles, 'price'].mean()\n train.loc[train.miles>=miles, 'pred'] = train.loc[train.miles>=miles, 'price'].mean()\n return round(((((train['price'] - train['pred'])**2).mean()) ** 0.5), 2)",
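The cutpoint search in the demo above boils down to: for every candidate midpoint, predict each side's mean and score the split by RMSE. A self-contained sketch on made-up toy data (the real demo uses the vehicles dataset, and `best_cutpoint` is my own helper name):

```python
import numpy as np

def best_cutpoint(feature, target):
    """Return the (cutpoint, RMSE) pair minimizing RMSE over all
    midpoints between adjacent unique feature values."""
    values = np.unique(feature)
    best = (None, np.inf)
    for cut in (values[:-1] + values[1:]) / 2.0:
        left, right = target[feature < cut], target[feature >= cut]
        # predict each side's mean, as a regression tree leaf would
        pred = np.where(feature < cut, left.mean(), right.mean())
        rmse = np.sqrt(np.mean((target - pred) ** 2))
        if rmse < best[1]:
            best = (cut, rmse)
    return best

years = np.array([1998, 2000, 2004, 2006, 2010, 2012], dtype=float)
price = np.array([1500, 1800, 2000, 7000, 8000, 9000], dtype=float)
print(best_cutpoint(years, price))  # splits between 2004 and 2006
```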
"Recap: Before every split, this process is repeated for every feature, and the feature and cutpoint that produces the lowest MSE is chosen.\nBuilding a regression tree in scikit-learn",
"# encode car as 0 and truck as 1\ntrain['vtype'] = train.vtype.map({'car':0, 'truck':1})\n\n# define X and y\nfeature_cols = ['year', 'miles', 'doors', 'vtype']\nX = train[feature_cols]\ny = train.price\n\n# instantiate a DecisionTreeRegressor (with random_state=1)\nfrom sklearn.tree import DecisionTreeRegressor\ntreereg = DecisionTreeRegressor(random_state=1)\ntreereg\n\n# use leave-one-out cross-validation (LOOCV) to estimate the RMSE for this model\nimport numpy as np\nfrom sklearn.model_selection import cross_val_score\nscores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\nnp.mean(np.sqrt(-scores))",
"What happens when we grow a tree too deep?\n\nLeft: Regression tree for Salary grown deeper\nRight: Comparison of the training, testing, and cross-validation errors for trees with different numbers of leaves\n\n\nThe training error continues to go down as the tree size increases (due to overfitting), but the lowest cross-validation error occurs for a tree with 3 leaves.\nTuning a regression tree\nLet's try to reduce the RMSE by tuning the max_depth parameter:",
"# try different values one-by-one\ntreereg = DecisionTreeRegressor(max_depth=1, random_state=1)\nscores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\nnp.mean(np.sqrt(-scores))",
"Or, we could write a loop to try a range of values:",
"# list of values to try\nmax_depth_range = range(1, 8)\n\n# list to store the average RMSE for each value of max_depth\nRMSE_scores = []\n\n# use LOOCV with each value of max_depth\nfor depth in max_depth_range:\n treereg = DecisionTreeRegressor(max_depth=depth, random_state=1)\n MSE_scores = cross_val_score(treereg, X, y, cv=14, scoring='neg_mean_squared_error')\n RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n# plot max_depth (x-axis) versus RMSE (y-axis)\nplt.plot(max_depth_range, RMSE_scores)\nplt.xlabel('max_depth')\nplt.ylabel('RMSE (lower is better)')\n\n# max_depth=3 was best, so fit a tree using that parameter\ntreereg = DecisionTreeRegressor(max_depth=3, random_state=1)\ntreereg.fit(X, y)\n\n# \"Gini importance\" of each feature: the (normalized) total reduction of error brought by that feature\npd.DataFrame({'feature':feature_cols, 'importance':treereg.feature_importances_})",
"Creating a tree diagram",
"# create a Graphviz file\nfrom sklearn.tree import export_graphviz\nexport_graphviz(treereg, out_file='tree_vehicles.dot', feature_names=feature_cols)\n\n# At the command line, run this to convert to PNG:\n# dot -Tpng tree_vehicles.dot -o tree_vehicles.png",
"Reading the internal nodes:\n\nsamples: number of observations in that node before splitting\nmse: MSE calculated by comparing the actual response values in that node against the mean response value in that node\nrule: rule used to split that node (go left if true, go right if false)\n\nReading the leaves:\n\nsamples: number of observations in that node\nvalue: mean response value in that node\nmse: MSE calculated by comparing the actual response values in that node against \"value\"\n\nMaking predictions for the testing data",
"# read the testing data\nwith zipfile.ZipFile('../datasets/vehicles_test.csv.zip', 'r') as z:\n f = z.open('vehicles_test.csv')\n test = pd.io.parsers.read_table(f, index_col=False, sep=',')\n\ntest['vtype'] = test.vtype.map({'car':0, 'truck':1})\ntest",
"Question: Using the tree diagram above, what predictions will the model make for each observation?",
"# use fitted model to make predictions on testing data\nX_test = test[feature_cols]\ny_test = test.price\ny_pred = treereg.predict(X_test)\ny_pred\n\n# calculate RMSE\nfrom sklearn.metrics import mean_squared_error\nnp.sqrt(mean_squared_error(y_test, y_pred))",
"Part 2: Classification trees\nExample: Predict whether Barack Obama or Hillary Clinton will win the Democratic primary in a particular county in 2008:\n\nQuestions:\n\nWhat are the observations? How many observations are there?\nWhat is the response variable?\nWhat are the features?\nWhat is the most predictive feature?\nWhy does the tree split on high school graduation rate twice in a row?\nWhat is the class prediction for the following county: 15% African-American, 90% high school graduation rate, located in the South, high poverty, high population density?\nWhat is the predicted probability for that same county?\n\nComparing regression trees and classification trees\n|regression trees|classification trees|\n|---|---|\n|predict a continuous response|predict a categorical response|\n|predict using mean response of each leaf|predict using most commonly occurring class of each leaf|\n|splits are chosen to minimize MSE|splits are chosen to minimize Gini index (discussed below)|\nSplitting criteria for classification trees\nCommon options for the splitting criteria:\n\nclassification error rate: fraction of training observations in a region that don't belong to the most common class\nGini index: measure of total variance across classes in a region\n\nExample of classification error rate\nPretend we are predicting whether someone buys an iPhone or an Android:\n\nAt a particular node, there are 25 observations (phone buyers), of whom 10 bought iPhones and 15 bought Androids.\nSince the majority class is Android, that's our prediction for all 25 observations, and thus the classification error rate is 10/25 = 40%.\n\nOur goal in making splits is to reduce the classification error rate. 
Let's try splitting on gender:\n\nMales: 2 iPhones and 12 Androids, thus the predicted class is Android\nFemales: 8 iPhones and 3 Androids, thus the predicted class is iPhone\nClassification error rate after this split would be 5/25 = 20%\n\nCompare that with a split on age:\n\n30 or younger: 4 iPhones and 8 Androids, thus the predicted class is Android\n31 or older: 6 iPhones and 7 Androids, thus the predicted class is Android\nClassification error rate after this split would be 10/25 = 40%\n\nThe decision tree algorithm will try every possible split across all features, and choose the split that reduces the error rate the most.\nExample of Gini index\nCalculate the Gini index before making a split:\n$$1 - \\left(\\frac {iPhone} {Total}\\right)^2 - \\left(\\frac {Android} {Total}\\right)^2 = 1 - \\left(\\frac {10} {25}\\right)^2 - \\left(\\frac {15} {25}\\right)^2 = 0.48$$\n\nThe maximum value of the Gini index is 0.5, and occurs when the classes are perfectly balanced in a node.\nThe minimum value of the Gini index is 0, and occurs when there is only one class represented in a node.\nA node with a lower Gini index is said to be more \"pure\".\n\nEvaluating the split on gender using Gini index:\n$$\\text{Males: } 1 - \\left(\\frac {2} {14}\\right)^2 - \\left(\\frac {12} {14}\\right)^2 = 0.24$$\n$$\\text{Females: } 1 - \\left(\\frac {8} {11}\\right)^2 - \\left(\\frac {3} {11}\\right)^2 = 0.40$$\n$$\\text{Weighted Average: } 0.24 \\left(\\frac {14} {25}\\right) + 0.40 \\left(\\frac {11} {25}\\right) = 0.31$$\nEvaluating the split on age using Gini index:\n$$\\text{30 or younger: } 1 - \\left(\\frac {4} {12}\\right)^2 - \\left(\\frac {8} {12}\\right)^2 = 0.44$$\n$$\\text{31 or older: } 1 - \\left(\\frac {6} {13}\\right)^2 - \\left(\\frac {7} {13}\\right)^2 = 0.50$$\n$$\\text{Weighted Average: } 0.44 \\left(\\frac {12} {25}\\right) + 0.50 \\left(\\frac {13} {25}\\right) = 0.47$$\nAgain, the decision tree algorithm will try every possible split, and will choose the 
split that reduces the Gini index (and thus increases the \"node purity\") the most.\nComparing classification error rate and Gini index\n\nGini index is generally preferred because it will make splits that increase node purity, even if that split does not change the classification error rate.\nNode purity is important because we're interested in the class proportions in each region, since that's how we calculate the predicted probability of each class.\nscikit-learn's default splitting criterion for classification trees is the Gini index.\n\nNote: There is another common splitting criterion called cross-entropy. It's numerically similar to Gini index, but slower to compute, thus it's not as popular as Gini index.\nBuilding a classification tree in scikit-learn\nWe'll build a classification tree using the Titanic data:",
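First, though, a quick numeric check of the Gini arithmetic worked through above (this small aside is my own, not part of the original notebook):

```python
def gini(counts):
    # Gini index of a node given per-class counts
    total = float(sum(counts))
    return 1 - sum((c / total) ** 2 for c in counts)

# before any split: 10 iPhones, 15 Androids
before = gini([10, 15])

# candidate splits, weighted by node size (25 observations total)
gender = (14 * gini([2, 12]) + 11 * gini([8, 3])) / 25.0
age = (12 * gini([4, 8]) + 13 * gini([6, 7])) / 25.0
print(round(before, 2), round(gender, 2), round(age, 2))  # 0.48 0.31 0.47
```

The gender split lowers the weighted Gini index the most, matching the hand calculation. Now, on to the Titanic data.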
"# read in the data\nwith zipfile.ZipFile('../datasets/titanic.csv.zip', 'r') as z:\n f = z.open('titanic.csv')\n titanic = pd.read_csv(f, sep=',', index_col=0)\n\n# encode female as 0 and male as 1\ntitanic['Sex'] = titanic.Sex.map({'female':0, 'male':1})\n\n# fill in the missing values for age with the median age\ntitanic.Age.fillna(titanic.Age.median(), inplace=True)\n\n# create a DataFrame of dummy variables for Embarked\nembarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked')\nembarked_dummies.drop(embarked_dummies.columns[0], axis=1, inplace=True)\n\n# concatenate the original DataFrame and the dummy DataFrame\ntitanic = pd.concat([titanic, embarked_dummies], axis=1)\n\n# print the updated DataFrame\ntitanic.head()",
"Survived: 0=died, 1=survived (response variable)\nPclass: 1=first class, 2=second class, 3=third class\nWhat will happen if the tree splits on this feature?\n\n\nSex: 0=female, 1=male\nAge: numeric value\nEmbarked: C or Q or S",
"# define X and y\nfeature_cols = ['Pclass', 'Sex', 'Age', 'Embarked_Q', 'Embarked_S']\nX = titanic[feature_cols]\ny = titanic.Survived\n\n# fit a classification tree with max_depth=3 on all data\nfrom sklearn.tree import DecisionTreeClassifier\ntreeclf = DecisionTreeClassifier(max_depth=3, random_state=1)\ntreeclf.fit(X, y)\n\n# create a Graphviz file\nexport_graphviz(treeclf, out_file='tree_titanic.dot', feature_names=feature_cols)\n\n# At the command line, run this to convert to PNG:\n# dot -Tpng tree_titanic.dot -o tree_titanic.png",
"Notice the split in the bottom right: the same class is predicted in both of its leaves. That split didn't affect the classification error rate, though it did increase the node purity, which is important because it increases the accuracy of our predicted probabilities.",
"# compute the feature importances\npd.DataFrame({'feature':feature_cols, 'importance':treeclf.feature_importances_})",
"Part 3: Comparing decision trees with other models\nAdvantages of decision trees:\n\nCan be used for regression or classification\nCan be displayed graphically\nHighly interpretable\nCan be specified as a series of rules, and more closely approximate human decision-making than other models\nPrediction is fast\nFeatures don't need scaling\nAutomatically learns feature interactions\nTends to ignore irrelevant features\nNon-parametric (will outperform linear models if relationship between features and response is highly non-linear)\n\n\nDisadvantages of decision trees:\n\nPerformance is (generally) not competitive with the best supervised learning methods\nCan easily overfit the training data (tuning is required)\nSmall variations in the data can result in a completely different tree (high variance)\nRecursive binary splitting makes \"locally optimal\" decisions that may not result in a globally optimal tree\nDoesn't tend to work well if the classes are highly unbalanced\nDoesn't tend to work well with very small datasets"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NekuSakuraba/my_capstone_research
|
subjects/diffusion maps/Diffusion Maps 00.ipynb
|
mit
|
[
"from numpy.linalg import inv\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import make_blobs\nfrom scipy.linalg import eig\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom diffmaps_util import k, diag\n\nX = np.array([.9,1.1,1.2,1.3]).reshape(2,2)\nX = np.array([.9,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]).reshape(3,3)\n\n%matplotlib inline\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.scatter(X[:,0], X[:,1], X[:,2])\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\nplt.show()",
"$\nM = D^{-1}L\n$",
"L = k(X, .7)\nD = diag(L)\nM = inv(D).dot(L)\n# Mi,j denotes the transition probability\n# from the point xi to the point xj in one time step\nprint M",
"<p>Equivalent to</p>\n$\nM_{i,j}=\\frac{k_\\epsilon(x_i, x_j)}{p_\\epsilon(x_j)}\n$\n<p>where</p>\n$\np_\\epsilon(x_j) = \\sum_i k_\\epsilon(x_i, x_j)\n$",
"L/L.sum(axis=1).reshape(-1,1)",
"$\nMs = D^{1/2}LD^{-1/2}\n$",
"Ms = (diag(D,.5)).dot(M).dot(diag(D,-.5))\nMs",
"Equivalent to\n$\nMs = \\frac{L_{i,j}}{(d(x_i) \\times d(x_j))^{1/2}}\n$",
"p = L.sum(axis=1)\nfor i in range(0,3):\n a = []\n for j in range(0,3):\n a.append(L[i,j]/(p[i]*p[j])**.5)\n print a",
"",
"w, v0, v1 = eig(Ms, left=True)\nw = w.real\nprint '%s\\n%s' % (w, v0)",
"$\nP \\times \\psi_l = \\lambda_l \\times \\psi_l\n$",
"Ms.dot(v0)\n\nw * v0",
"Implementation\n* https://github.com/petermuehlbacher/diffusion-maps-algorithm/blob/b4e91352459b2c4e6b0d3358b5b3e4040762d9c5/diffusion%20maps.py",
"w = w[::-1]\n\nphi = v0.T\nphi = phi[::-1]\n#print w, '\\n', phi\nprint w, '\\n', phi\n\npsi = []\nfor i in range(3):\n psi.append([])\n for j in range(2):\n psi[i].append(phi[j+1,i]/M[i])\npsi\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nfor p in psi:\n ax.scatter(p[0][0],p[0][1],p[0][2])\n ax.scatter(p[1][0],p[1][1],p[1][2])\n #print p[0][0], p[0][1], p[0][2]",
"$$\nD_t(x,y) = (\\sum_{l \\geq 1} \\lambda_l^{2t} (\\psi_l(x) - \\psi_l(y))^2)^{1/2}\n$$",
"from sklearn.metrics import pairwise_distances\n\nl = w.real[::-1]\nprint l\npsi = v0.T[::-1]\nprint psi\n\nphi = []\nfor i in range(3):\n phi.append(l[i] * psi[i])\nphi\n\npairwise_distances(phi[1:])**2",
"",
"from sklearn.preprocessing import normalize\n\nX = np.array([[ 1., -1., 2.]])\nnormalize(X, norm='l2')\n\nnp.sqrt((X * X).sum())"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dschick/udkm1Dsimpy
|
docs/source/examples/dynamical_xray.ipynb
|
gpl-3.0
|
[
"Dynamical X-ray Scattering\nIn this example static and transient X-ray simulations are carried out employing the dynamical X-ray scattering formalism.\nSetup\nDo all necessary imports and settings.",
"import udkm1Dsim as ud\nu = ud.u # import the pint unit registry from udkm1Dsim\nimport scipy.constants as constants\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nu.setup_matplotlib() # use matplotlib with pint units",
"Structure\nRefer to the structure-example for more details.",
"O = ud.Atom('O')\nTi = ud.Atom('Ti')\nSr = ud.Atom('Sr')\nRu = ud.Atom('Ru')\nPb = ud.Atom('Pb')\nZr = ud.Atom('Zr')\n\n# c-axis lattice constants of the two layers\nc_STO_sub = 3.905*u.angstrom\nc_SRO = 3.94897*u.angstrom\n# sound velocities [nm/ps] of the two layers\nsv_SRO = 6.312*u.nm/u.ps\nsv_STO = 7.800*u.nm/u.ps\n\n# SRO layer\nprop_SRO = {}\nprop_SRO['a_axis'] = c_STO_sub # aAxis\nprop_SRO['b_axis'] = c_STO_sub # bAxis\nprop_SRO['deb_Wal_Fac'] = 0 # Debye-Waller factor\nprop_SRO['sound_vel'] = sv_SRO # sound velocity\nprop_SRO['opt_ref_index'] = 2.44+4.32j\nprop_SRO['therm_cond'] = 5.72*u.W/(u.m*u.K) # heat conductivity\nprop_SRO['lin_therm_exp'] = 1.03e-5 # linear thermal expansion\nprop_SRO['heat_capacity'] = '455.2 + 0.112*T - 2.1935e6/T**2' # heat capacity [J/kg K]\n\nSRO = ud.UnitCell('SRO', 'Strontium Ruthenate', c_SRO, **prop_SRO)\nSRO.add_atom(O, 0)\nSRO.add_atom(Sr, 0)\nSRO.add_atom(O, 0.5)\nSRO.add_atom(O, 0.5)\nSRO.add_atom(Ru, 0.5)\n\n# STO substrate\nprop_STO_sub = {}\nprop_STO_sub['a_axis'] = c_STO_sub # aAxis\nprop_STO_sub['b_axis'] = c_STO_sub # bAxis\nprop_STO_sub['deb_Wal_Fac'] = 0 # Debye-Waller factor\nprop_STO_sub['sound_vel'] = sv_STO # sound velocity\nprop_STO_sub['opt_ref_index'] = 2.1+0j\nprop_STO_sub['therm_cond'] = 12*u.W/(u.m*u.K) # heat conductivity\nprop_STO_sub['lin_therm_exp'] = 1e-5 # linear thermal expansion\nprop_STO_sub['heat_capacity'] = '733.73 + 0.0248*T - 6.531e6/T**2' # heat capacity [J/kg K]\n\nSTO_sub = ud.UnitCell('STOsub', 'Strontium Titanate Substrate', c_STO_sub, **prop_STO_sub)\nSTO_sub.add_atom(O, 0)\nSTO_sub.add_atom(Sr, 0)\nSTO_sub.add_atom(O, 0.5)\nSTO_sub.add_atom(O, 0.5)\nSTO_sub.add_atom(Ti, 0.5)\n\nS = ud.Structure('Single Layer')\nS.add_sub_structure(SRO, 200) # add 200 layers of SRO to sample\nS.add_sub_structure(STO_sub, 1000) # add 1000 layers of dynamic STO substrate\n\nsubstrate = ud.Structure('Static Substrate')\nsubstrate.add_sub_structure(STO_sub, 1000000) # add 1000000 layers of static STO substrate\nS.add_substrate(substrate)",
"Heat\nRefer to the heat-example for more details.",
"h = ud.Heat(S, True)\n\nh.save_data = False\nh.disp_messages = True\n\nh.excitation = {'fluence': [35]*u.mJ/u.cm**2,\n 'delay_pump': [0]*u.ps,\n 'pulse_width': [0]*u.ps,\n 'multilayer_absorption': True,\n 'wavelength': 800*u.nm,\n 'theta': 45*u.deg}\n\n# temporal and spatial grid\ndelays = np.r_[-5:40:0.1]*u.ps\n_, _, distances = S.get_distances_of_layers()\n\ntemp_map, delta_temp_map = h.get_temp_map(delays, 300*u.K)\n\nplt.figure(figsize=[6, 8])\nplt.subplot(2, 1, 1)\nplt.plot(distances.to('nm').magnitude, temp_map[101, :])\nplt.xlim([0, distances.to('nm').magnitude[-1]])\nplt.xlabel('Distance [nm]')\nplt.ylabel('Temperature [K]')\nplt.title('Temperature Profile')\n\nplt.subplot(2, 1, 2)\nplt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')\nplt.colorbar()\nplt.xlabel('Distance [nm]')\nplt.ylabel('Delay [ps]')\nplt.title('Temperature Map')\n\nplt.tight_layout()\nplt.show()",
"Numerical Phonons\nRefer to the phonons-example for more details.",
"p = ud.PhononNum(S, True)\np.save_data = False\np.disp_messages = True\n\nstrain_map = p.get_strain_map(delays, temp_map, delta_temp_map)\n\nplt.figure(figsize=[6, 8])\nplt.subplot(2, 1, 1)\nplt.plot(distances.to('nm').magnitude, strain_map[130, :],\n label=np.round(delays[130]))\nplt.plot(distances.to('nm').magnitude, strain_map[350, :],\n label=np.round(delays[350]))\nplt.xlim([0, distances.to('nm').magnitude[-1]])\nplt.xlabel('Distance [nm]')\nplt.ylabel('Strain')\nplt.legend()\nplt.title('Strain Profile')\n\nplt.subplot(2, 1, 2)\nplt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude,\n strain_map, cmap='RdBu',\n vmin=-np.max(strain_map), vmax=np.max(strain_map), shading='auto')\nplt.colorbar()\nplt.xlabel('Distance [nm]')\nplt.ylabel('Delay [ps]')\nplt.title('Strain Map')\n\nplt.tight_layout()\nplt.show()",
"Initialize dynamical X-ray simulation\nThe XrayDyn class requires a Structure object and a boolean force_recalc in order to overwrite previous simulation results.\nThese results are saved in the cache_dir when save_data is enabled.\nPrinting simulation messages can be en-/disabled using disp_messages, and progress bars can be toggled using the boolean switch progress_bar.",
"dyn = ud.XrayDyn(S, True)\ndyn.disp_messages = True\ndyn.save_data = False",
"Homogeneous X-ray scattering\nFor the case of homogeneously strained samples, the dynamical X-ray scattering simulations can be greatly simplified, which saves a lot of computational time.\n$q_z$-scan\nThe XrayDyn object requires an energy and scattering vector qz to run the simulations.\nBoth parameters can be arrays, and the resulting reflectivity has a first dimension for the photon energy and a second for the scattering vector.",
"dyn.energy = np.r_[5000, 8047]*u.eV # set two photon energies\ndyn.qz = np.r_[3.1:3.3:0.00001]/u.angstrom # qz range\n\nR_hom, A = dyn.homogeneous_reflectivity() # this is the actual calculation\n\nplt.figure()\nplt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]), alpha=0.5)\nplt.semilogy(dyn.qz[1, :], R_hom[1, :], label='{}'.format(dyn.energy[1]), alpha=0.5)\nplt.ylabel('Reflectivity')\nplt.xlabel('$q_z$ [nm$^{-1}$]')\nplt.legend()\nplt.show()",
"Due to the very thick static substrate in the structure and the very small step width in qz, even the Darwin width of the substrate Bragg peak is nicely resolved.",
"plt.figure()\nplt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]), alpha=0.5)\nplt.semilogy(dyn.qz[1, :], R_hom[1, :], label='{}'.format(dyn.energy[1]), alpha=0.5)\nplt.ylabel('Reflectivity')\nplt.xlabel('$q_z$ [nm$^{-1}$]')\nplt.xlim(32.17, 32.195)\nplt.ylim(1e-3, 1)\nplt.legend()\nplt.title('Darwin Width')\nplt.show()",
"Post-Processing\nAll results can be convolved with an arbitrary function handle, which, e.g., mimics the instrumental resolution.",
"FWHM = 0.004/1e-10 # Angstrom\nsigma = FWHM/2.3548\n\nhandle = lambda x: np.exp(-((x)/sigma)**2/2)\ny_conv = dyn.conv_with_function(R_hom[0, :], dyn._qz[0, :], handle)\n\nplt.figure()\nplt.semilogy(dyn.qz[0, :], R_hom[0, :], label='{}'.format(dyn.energy[0]))\nplt.semilogy(dyn.qz[0, :], y_conv, label='{} convoluted'.format(dyn.energy[0]))\nplt.ylabel('Reflectivity')\nplt.xlabel('$q_z$ [nm$^{-1}$]')\nplt.legend()\nplt.show()",
"Energy-scan\nEnergy scans rely on experimental atomic scattering factors that also include energy ranges around relevant resonances.\nThe warning message can be safely ignored, as it results from the former q_z range, which cannot be accessed with the new energy range.",
"dyn.energy = np.r_[2000:4000]*u.eV # set the energy range\ndyn.qz = np.r_[2]/u.angstrom # qz range\n\nR_hom, A = dyn.homogeneous_reflectivity() # this is the actual calculation\n\nplt.figure()\nplt.plot(dyn.energy, R_hom[:, 0])\nplt.ylabel('Reflectivity')\nplt.xlabel('Energy [eV]')\nplt.show()",
"Inhomogeneous X-ray scattering\nThe inhomogeneous_reflectivity() method calculates the transient X-ray reflectivity according to a strain_map.\nThe actual strains per layer are discretized and limited using the strain_vectors in order to save computational time.",
"dyn.energy = np.r_[8047]*u.eV # set a single photon energy\ndyn.qz = np.r_[3.1:3.3:0.001]/u.angstrom # qz range\n\nstrain_vectors = p.get_reduced_strains_per_unique_layer(strain_map)\nR_seq = dyn.inhomogeneous_reflectivity(strain_map, strain_vectors, calc_type='sequential')\n\nplt.figure()\nplt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude, np.log10(R_seq[:, 0, :]), shading='auto')\nplt.title('Dynamical X-ray')\nplt.ylabel('Delay [ps]')\nplt.xlabel('$q_z$ [nm$^{-1}$]')\nplt.show()",
"The results can again be convolved to mimic real experimental resolution:",
"R_seq_conv = np.zeros_like(R_seq)\nfor i, delay in enumerate(delays):\n R_seq_conv[i, 0, :] = dyn.conv_with_function(R_seq[i, 0, :], dyn._qz[0, :], handle)\n\nplt.figure(figsize=[6, 8])\nplt.subplot(2, 1, 1)\nplt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[0, 0, :], label=np.round(delays[0]))\nplt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[100, 0, :], label=np.round(delays[100]))\nplt.semilogy(dyn.qz[0, :].to('1/nm'), R_seq_conv[-1, 0, :], label=np.round(delays[-1]))\n\nplt.xlabel('$q_z$ [nm$^{-1}$]')\nplt.ylabel('Reflectivity')\nplt.legend()\nplt.title('Dynamical X-ray Convoluted')\n\nplt.subplot(2, 1, 2)\nplt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude, np.log10(R_seq_conv[:, 0, :]), shading='auto')\nplt.ylabel('Delay [ps]')\nplt.xlabel('$q_z$ [nm$^{-1}$]')\n\nplt.tight_layout()\nplt.show()",
"Parallel inhomogeneous X-ray scattering\nYou need to install udkm1Dsim with the parallel option, which essentially adds the Dask package to the requirements:\n```\n\npip install udkm1Dsim[parallel]\n```\n\nYou can also install/add Dask manually, e.g. via pip:\n```\n\npip install dask\n```\n\nPlease refer to the Dask documentation for more details on parallel computing in Python.",
"try:\n from dask.distributed import Client\n client = Client()\n R_par = dyn.inhomogeneous_reflectivity(strain_map, strain_vectors, calc_type='parallel', dask_client=client)\n client.close()\nexcept Exception:\n R_par = None # Dask is not available or the client failed to start\n\nif R_par is not None:\n plt.figure()\n plt.pcolormesh(dyn.qz[0, :].to('1/nm').magnitude, delays.to('ps').magnitude, np.log10(R_par[:, 0, :]), shading='auto')\n plt.title('Parallel Dynamical X-ray')\n plt.ylabel('Delay [ps]')\n plt.xlabel('$q_z$ [nm$^{-1}$]')\n plt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SBRG/ssbio
|
docs/notebooks/Protein - Structure Mapping, Alignments, and Visualization.ipynb
|
mit
|
[
"Protein - Structure Mapping, Alignments, and Visualization\nThis notebook gives an example of how to map a single protein sequence to its structure, along with conducting sequence alignments and visualizing the mutations.\n<div class=\"alert alert-info\">\n\n**Input:** Protein ID + amino acid sequence + mutated sequence(s)\n\n</div>\n\n<div class=\"alert alert-info\">\n\n**Output:** Representative protein structure, sequence alignments, and visualization of mutations\n\n</div>\n\nImports",
"import sys\nimport logging\n\n# Import the Protein class\nfrom ssbio.core.protein import Protein\n\n# Printing multiple outputs per cell\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"Logging\nSet the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.\n\nCRITICAL\nOnly really important messages shown\n\n\nERROR\nMajor errors\n\n\nWARNING\nWarnings that don't affect running of the pipeline\n\n\nINFO (default)\nInfo such as the number of structures mapped per gene\n\n\nDEBUG\nReally detailed information that will print out a lot of stuff\n\n\n\n<p><div class=\"alert alert-warning\">**Warning:** `DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!</div></p>",
"# Create logger\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #\n\n# Other logger stuff for Jupyter notebooks\nhandler = logging.StreamHandler(sys.stderr)\nformatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt=\"%Y-%m-%d %H:%M\")\nhandler.setFormatter(formatter)\nlogger.handlers = [handler]",
"Initialization of the project\nSet these three things:\n\nROOT_DIR\nThe directory where a folder named after your PROTEIN_ID will be created\n\n\nPROTEIN_ID\nYour protein ID\n\n\nPROTEIN_SEQ\nYour protein sequence\n\n\n\nA directory will be created in ROOT_DIR with your PROTEIN_ID name. The folders are organized like so:\n```\n ROOT_DIR\n └── PROTEIN_ID\n ├── sequences # Protein sequence files, alignments, etc.\n └── structures # Protein structure files, calculations, etc.\n```",
"# SET FOLDERS AND DATA HERE\nimport tempfile\nROOT_DIR = tempfile.gettempdir()\n\nPROTEIN_ID = 'SRR1753782_00918'\nPROTEIN_SEQ = 'MSKQQIGVVGMAVMGRNLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'\n\n# Create the Protein object\nmy_protein = Protein(ident=PROTEIN_ID, root_dir=ROOT_DIR, pdb_file_type='mmtf')\n\n# Load the protein sequence\n# This sets the loaded sequence as the representative one\nmy_protein.load_manual_sequence(seq=PROTEIN_SEQ, ident='WT', write_fasta_file=True, set_as_representative=True)",
"Mapping sequence --> structure\nSince the sequence has been provided, we just need to BLAST it to the PDB.\n<p><div class=\"alert alert-info\">**Note:** These methods do not download any 3D structure files.</div></p>\n\nMethods",
"# Mapping using BLAST\nmy_protein.blast_representative_sequence_to_pdb(seq_ident_cutoff=0.9, evalue=0.00001)\nmy_protein.df_pdb_blast.head()",
"Downloading and ranking structures\nMethods\n<div class=\"alert alert-warning\">\n\n**Warning:** \nDownloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.\n\n</div>",
"# Download all mapped PDBs and gather the metadata\nmy_protein.download_all_pdbs()\nmy_protein.df_pdb_metadata.head(2)\n\n# Set representative structures\nmy_protein.set_representative_structure()",
"Loading and aligning new sequences\nYou can load additional sequences into this protein object and align them to the representative sequence.",
"my_protein.__dict__",
"Methods",
"# Input your mutated sequence and load it\nmutated_protein1_id = 'N17P_SNP'\nmutated_protein1_seq = 'MSKQQIGVVGMAVMGRPLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'\n\nmy_protein.load_manual_sequence(ident=mutated_protein1_id, seq=mutated_protein1_seq)\n\n# Input another mutated sequence and load it\nmutated_protein2_id = 'Q4S_N17P_SNP'\nmutated_protein2_seq = 'MSKSQIGVVGMAVMGRPLALNIESRGYTVSVFNRSREKTEEVIAENPGKKLVPYYTVKEFVESLETPRRILLMVKAGAGTDAAIDSLKPYLEKGDIIIDGGNTFFQDTIRRNRELSAEGFNFIGTGVSGGEEGALKGPSIMPGGQKDAYELVAPILTKIAAVAEDGEPCVTYIGADGAGHYVKMVHNGIEYGDMQLIAEAYSLLKGGLNLSNEELANTFTEWNNGELSSYLIDITKDIFTKKDEDGNYLVDVILDEAANKGTGKWTSQSALDLGEPLSLITESVFARYISSLKAQRVAASKVLSGPKAQPAGDKAEFIEKVRRALYLGKIVSYAQGFSQLRAASDEYHWDLNYGEIAKIFRAGCIIRAQFLQKITDAYAENADIANLLLAPYFKKIADEYQQALRDVVAYAVQNGIPVPTFSAAVAYYDSYRAAVLPANLIQAQRDYFGAHTYKRTDKEGIFHTEWLE'\n\nmy_protein.load_manual_sequence(ident=mutated_protein2_id, seq=mutated_protein2_seq)\n\n# Conduct pairwise sequence alignments\nmy_protein.pairwise_align_sequences_to_representative()\n\n# View IDs of all sequence alignments\n[x.id for x in my_protein.sequence_alignments]\n\n# View the stored information for one of the alignments\nmy_alignment = my_protein.sequence_alignments.get_by_id('WT_Q4S_N17P_SNP')\nmy_alignment.annotations\nstr(my_alignment[0].seq)\nstr(my_alignment[1].seq)\n\n# Summarize all the mutations in all sequence alignments\ns,f = my_protein.sequence_mutation_summary(alignment_type='seqalign')\nprint('Single mutations:')\ns\nprint('---------------------')\nprint('Mutation fingerprints')\nf",
"Some additional methods\nGetting binding site/other information from UniProt",
"import ssbio.databases.uniprot\n\nthis_examples_uniprot = 'P14062'\nsites = ssbio.databases.uniprot.uniprot_sites(this_examples_uniprot)\nmy_protein.representative_sequence.features = sites\nmy_protein.representative_sequence.features",
"Mapping sequence residue numbers to structure residue numbers\nMethods",
"# Returns a dictionary mapping sequence residue numbers to structure residue identifiers\n# Will warn you if residues are not present in the structure\nstructure_sites = my_protein.map_seqprop_resnums_to_structprop_resnums(resnums=[1,3,45], \n use_representatives=True)\nstructure_sites",
"Viewing structures\nThe awesome package nglview is utilized as a backend for viewing structures within a Jupyter notebook. ssbio view functions will either return a NGLWidget object, which is the same as using nglview like the below example, or act upon the widget object itself.\n```python\nThis is how NGLview usually works - it will load a structure file and return a NGLWidget \"view\" object.\nimport nglview\nview = nglview.show_structure_file(my_protein.representative_structure.structure_path)\nview\n```\nMethods",
"# View just the structure\nview = my_protein.representative_structure.view_structure(recolor=True)\nview\n\nview.add_spacefill(selection='( :A ) and not hydrogen and 17', label_type='res', color='orange')\n\n# Map the mutations on the visualization (scale increased) - will show up on the above view\nmy_protein.add_mutations_to_nglview(view=view, alignment_type='seqalign', scale_range=(4,7), \n use_representatives=True)\n\n# Add sites as shown above in the table to the view\nmy_protein.add_features_to_nglview(view=view, use_representatives=True)",
"Saving",
"import os.path as op\nmy_protein.save_json(op.join(my_protein.protein_dir, '{}.json'.format(my_protein.id)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gvasold/gdp17
|
objekte/objekte_3.ipynb
|
apache-2.0
|
[
"Object-Oriented Programming: Advanced Topics\nThis notebook deepens some concepts of object-oriented programming, especially with regard to Python.\nProtected Variables and Methods (Encapsulation)\nProtected variables and methods\nWe have learned that one of the key advantages of object orientation is data encapsulation. This means that access to properties and methods can be restricted. Some programming languages, e.g. Java, mark these access rights explicitly and enforce them strictly. This variable declaration in Java restricts access to a variable to the class itself:\n~~~\nprivate int score = 0;\n~~~\nAs a result, the value of score can only be read or changed from within the class.\n~~~\npublic String username;\n~~~\nIn contrast, this allows unrestricted access to the property username.\nThis mechanism also exists in Python, but things are handled in a more relaxed way here: an underscore placed in front of a variable or method name means that this part of the object should not be used, and above all not modified, from outside the object.",
"class MyClass:\n \n def __init__(self, val):\n self.set_val(val)\n \n def get_val(self):\n return self._val\n \n def set_val(self, val):\n if val > 0:\n self._val = val\n else:\n raise ValueError('val must be greater 0')\n \nmyclass = MyClass(27) \nmyclass._val",
"As we can see, the property _val is quite accessible from outside. However, the underscore signals that the author of the class did not intend for this value to be used directly (but only, e.g., via the methods get_val() and set_val()). If another programmer decides that they need direct access to the property _val, that is their own responsibility (and is not prevented by Python). This is called protection by convention. Python programmers generally stick to this convention, which is why this kind of \"protection\" is widespread.\nInvisible properties and methods\nFor paranoid programmers, Python offers a way to completely prevent access from outside the object by putting two underscores instead of one in front of the name.",
"class MyClass:\n \n def __init__(self, val):\n self.__val = val\n \nmyclass = MyClass(42) \nmyclass.__val",
"Here we see that the property __val is not visible at all from outside the class, and therefore cannot be changed either. Within the class, however, it is available as usual. This can lead to problems:",
"class MySpecialClass(MyClass):\n \n def get_val(self):\n return self.__val\n \nmsc = MySpecialClass(42) \nmsc.get_val()",
"Since __val was only created inside the base class, the derived class has no access to it.\nData encapsulation with properties\nAs we have seen, dedicated getter and setter methods are written for access to protected properties, through which the value of a property can be changed in a controlled way. Let's write a Student class in which a grade is to be stored. To control access to this property, we write a setter and a getter method.",
"class GradingError(Exception): pass\n\n\nclass Student:\n \n def __init__(self, matrikelnr):\n self.matrikelnr = matrikelnr\n self._grade = 0\n \n def set_grade(self, grade):\n if grade > 0 and grade < 6:\n self._grade = grade\n else:\n raise ValueError('Grade must be between 1 and 5!')\n \n def get_grade(self):\n if self._grade > 0:\n return self._grade\n raise GradingError('Noch nicht benotet!')",
"We can now set and read the grade:",
"anna = Student('01754645')\nanna.set_grade(6)\n\nanna.set_grade(2)\nanna.get_grade()",
"However, direct access to grade is still possible:",
"anna._grade\n\nanna._grade = 6",
"As we have already seen, we can prevent this by renaming the property grade to __grade.\nSetting properties via getters and setters\nPython offers a way to route the setting and reading of object properties automatically through methods. To do this, the getter and setter are passed to the property function (last line of the class).",
"class Student:\n \n def __init__(self, matrikelnr):\n self.matrikelnr = matrikelnr\n self.__grade = 0\n \n def set_grade(self, grade):\n if grade > 0 and grade < 6:\n self.__grade = grade\n else:\n raise ValueError('Grade must be between 1 and 5!')\n \n def get_grade(self):\n if self.__grade > 0:\n return self.__grade\n raise GradingError('Noch nicht benotet!')\n \n grade = property(get_grade, set_grade)\n \notto = Student('01745646465') \notto.grade = 6",
"As we can see, we can set and read the object's property directly, but Python routes each access through the setter and getter, respectively.\nIf we pass only one method (the getter) as an argument to the property() function, we get a property that can be read but not changed.",
"class Student:\n \n def __init__(self, matrikelnr, grade):\n self.matrikelnr = matrikelnr\n self.__grade = grade\n \n def get_grade(self):\n if self.__grade > 0:\n return self.__grade\n raise GradingError('Noch nicht benotet!')\n \n grade = property(get_grade)\n \nalbert = Student('0157897846546', 5) \nalbert.grade",
"So we can access our properties defined via property(). However, we cannot use grade to change the property:",
"albert.grade = 1",
"The @property decorator\nDecorators dynamically extend the functionality of functions by wrapping them (behind the scenes) in another function. Applying a decorator is simple: you just write it in front of the function definition.\nPython ships with a number of decorators, but you can also write your own decorators, which is not covered here.\nThe @property decorator built into Python is an alternative to the property() function presented above:",
"class Student:\n \n def __init__(self, matrikelnr):\n self.matrikelnr = matrikelnr\n self.__grade = 0\n \n @property\n def grade(self):\n if self.__grade > 0:\n return self.__grade\n raise GradingError('Noch nicht benotet!')\n \n @grade.setter\n def grade(self, grade):\n if grade > 0 and grade < 6:\n self.__grade = grade\n else:\n raise ValueError('Grade must be between 1 and 5!')\n \n\nhugo = Student('0176464645454') \n\nhugo.grade = 6\n\nhugo.grade = 2\n\nhugo.grade",
"Class variables (static members)\nWe have learned that classes define the properties and methods of objects. However (and this can be a bit confusing at first), classes themselves are also objects that have properties and methods. Here is an example:",
"class MyClass:\n \n the_answer = 42\n \n def __init__(self, val):\n self.the_answer = val\n \nMyClass.the_answer \n\nmc = MyClass(17)\nprint('Objekteigenschaft:', mc.the_answer)\nprint('Klasseneigenschaft:', MyClass.the_answer)",
"So one property is attached to the class object, the other to the object created from the class. Such class attributes can be useful because they are available in all objects created from the class (even via self, as long as the object does not itself have a property of the same name):",
"class MyClass:\n instance_counter = 0\n \n def __init__(self):\n MyClass.instance_counter += 1\n print('Ich bin das {}. Objekt'.format(MyClass.instance_counter))\n \na = MyClass()\nb = MyClass()\n\nclass MyOtherClass(MyClass):\n instance_counter = 0\n\na = MyOtherClass()\nb = MyOtherClass()",
"You can also write it like this, which makes the counter work for subclasses as well:",
"class MyClass:\n instance_counter = 0\n \n def __init__(self):\n self.__class__.instance_counter += 1\n print('Ich bin das {}. Objekt'.format(self.__class__.instance_counter))\n \na = MyClass()\nb = MyClass()\n\nclass MyOtherClass(MyClass):\n instance_counter = 0\n\na = MyOtherClass()\nb = MyOtherClass()",
"Exercise\nWrite a class Student that uses a class variable to ensure that no matriculation number occurs more than once."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
vlad17/vlad17.github.io
|
assets/2020-08-12-rct2-solution/rct2.ipynb
|
apache-2.0
|
[
"RCT2 Problem Solution\nLast time, we discussed the RCT2 problem, which we won't delve into in great detail, but at a high level, we have an inductively defined Markov chain, parameterized by $n$, with special start and end states and the following outgoing arrows, such that for $k\\in[n]$, we have the following transition dynamics:",
"from IPython.display import Image\nImage(filename='transitions.png') ",
"We already went over how to solve the expected hitting time for the end state for a given, known $n$. We now focus on how to solve for a parameter $n$.\nWe'll go about solving this \"by hand\" as we would in a class, but then think about deeper implications.\nIf we use our intuition from the video in the previous post, we'll notice that there are two modalities of transitioning between states. You're either moving backwards or forwards, and you have some slight momentum in both directions (but the momentum is stronger going backwards).\nIn particular, let's introduce two random variables, which are well-defined by the Markov property. Namely, the expected time to reach the end from a given state is a function of the state you're in and not any other history of your maze exploration. \nSo, meet $F_k$, the expected time to reach the end when you're facing forward (towards the exit) in the $k$-th tile.\nAnd then there's $B_k$, the expected time to reach the end when you're facing backwards (towards the entrance) in the $k$-th tile.\nBy exploring all our local transitions described above, we can do one \"value iteration.\" In particular, the following must hold:\n$$\n\\begin{align}\nF_0&=B_0\\\nB_0&=1+F_1\\\nF_k&=\\frac{1}{4}(1+F_{k+1})+\\frac{3}{8}(3+B_{k-1})+\\frac{3}{8}(3+F_{k+1})\\\nB_k&=\\frac{3}{4}(1+B_{k-1})+\\frac{1}{8}(3+B_{k-1})+\\frac{1}{8}(3+F_{k+1})\\\nF_{n+1}&=0\\,\\,.\n\\end{align}\n$$\nThe middle equations are the juicy ones, but they just internalize the transition into the $(k, 2)$ state. In other words, for $F_k$, wp $\\frac{1}{4}$ we keep moving forward (costing us a single time step), but wp $\\frac{3}{4}$ we go into the inlet $(k, 2)$, after which we go to $(k, 3)$ and then split our probability mass between going back up or down.\n$B_k$ is similar, but note that this equation only holds for $k\\in[n-1]$, whereas the $F_k$ equation holds for $k\\in[n]$ (from the diagram, you can see that $B_n$ never gets visited).\nSimplifying a little, and cleaning up the edge cases, we're left with\n$$\n\\begin{align}\nF_0&=B_0\\\nB_0&=1+F_1\\\nF_k&=\\frac{5}{2}+\\frac{5}{8}F_{k+1}+\\frac{3}{8}B_{k-1}\\\nB_k&=\\frac{3}{2}+\\frac{7}{8}B_{k-1}+\\frac{1}{8}F_{k+1}\\\nF_n&=\\frac{5}{2}+\\frac{3}{8}B_{n-1}\\\n\\end{align}\n$$\nNow the above equations hold for all $k\\in[n-1]$.\nIt may seem like we have no base case, but it's hiding in there as conservation of mass. By inspecting the final $n$ state, it's clear we'll need some kind of message passing in terms of $(B_{k-1},F_k)$ pairs, and rearranging the equations that's just what we get (i.e., if we had to canonically order our terms $F_0,B_0,F_1,B_1\\cdots$, this would correspond to finding a reduced row-echelon form in the large linear system described above). We rearrange the $B_k$ equation in terms of $B_{k-1}$, then we use that value to plug into the $B_{k-1}$ term of $F_k$, which indeed puts $B_{k-1},F_k$ in terms of $B_{k},F_{k+1}$. It's at this point that we should switch to sympy.",
"from sympy import *\ninit_printing()\nbkm1, fkm1, bk, fk, fkp1 = symbols('B_{k-1} F_{k-1} B_k F_k F_{k+1}')\n\neq1 = Eq(fk, S('5/2') + S('5/8') * fkp1 + S('3/8') * bkm1)\neq2 = Eq(bk, S('3/2') + S('7/8') * bkm1 + S('1/8') * fkp1)\nsol = solve((eq1, eq2), (bkm1, fk))\nsol",
"Excellent, this confirms what we had written above, and sets us up for a vector recurrence over the vector $(B_k, F_{k+1}, 1)$. Remember, the above equations hold for $k\\in[n-1]$.",
"lhs = (bkm1, fk, 1)\nrhs = (bk, fkp1, 1)\nsol[1] = S('1')\n\ncoeffs = [\n sol[v].as_coefficients_dict()\n for v in lhs\n]\n\nT = Matrix([[c[v] for v in rhs] for c in coeffs])\nT\n\nEq(Matrix(lhs), T * Matrix(rhs))",
"So now that we have an explicit transition matrix, we can repeat this down to $k-1=0$ (since $k=1\\in[n-1]$ is one of the equations this holds for). The trick is that we can unroll the equation by matrix exponentiation, which has closed form for our simple $3\\times 3$ matrix. If we were doing this by hand, then we'd need to write out the full eigensystem.",
"n = symbols('n', positive=True, integer=True)\nb0, f1, bnm1, fn = symbols('B_0 F_1 B_{n-1} F_n')\nlhs = (b0, f1, 1)\nrhs = (bnm1, fn, 1)\nT ** (n-1)",
"Excellent, 2 (effective) equations and 4 unknowns ($B_0,F_1,B_{n-1},F_n$). Let's re-introduce our original boundary conditions. Then we have our final linear system.",
"eq1 = Eq(b0, 1+f1)\neq2 = Eq(fn, S('5/2') + S('3/8') * bnm1)\neq3 = Eq(Matrix(lhs), T ** (n-1) * Matrix(rhs))\npowsimp(solve((eq1, eq2, eq3), (b0, f1, bnm1, fn))[b0])",
"And since that is $B_0=F_0$, we have our expected absorption time!\nIn Review\nWhat's curious here is that we effectively solved the original formulation of the problem, namely the system $(I-P)\\mathbf{x}=\\mathbf{1}$, where $P$ is our full transition matrix, using various linear transformations of our equalities. One implicit move was reducing our state space from four states $(k,0)\\cdots(k,3)$ to two $F_k,B_k$, but this can just be seen as another substitution of the linear equation relating the expected time to finish the maze from $(k, 2)$ to $(k, 3)$ (which is a simple deterministic equation with the former exactly 1 larger than the latter).\nZooming out a bit, what we ended up doing by \"solving out\" the $(k, 2)$ and $(k, 3)$ states is to simplify the original chain into a weighted chain that has the following transitions.",
"Image(filename='weighted.png') ",
"What's interesting to me is that this transition can be described as some set of elementary transformations $E$ (transforming into $(B_{k-1},F_k)$ space), which simplify the problem $(I-P)\\mathbf{x}=\\mathbf{1}$ into another one $E(I-P)\\mathbf{x}=E\\mathbf{1}$ which then happens to be easily reducible, in the sense that $E(I-P)$ becomes a block diagonal matrix with upper-triangular blocks, which is then solved by matrix exponentiation (\"backsolving\").\nThis suggests that there's probably an automated method for solving absorption times of such inductively-defined Markov chains analytically, but naive analysis of the original $(I-P)$ matrix did not get me very far. Perhaps I'll take a crack at the more generic question another time..."
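As a numerical sanity check on the reduced $(B_k, F_k)$ system derived above (not part of the original post), one can also solve it directly with plain NumPy for a concrete $n$. The function name and unknown ordering below are my own. For $n=1$ the boundary equations alone give $B_0 = 1 + F_1$ and $F_1 = \frac{5}{2} + \frac{3}{8}B_0$, hence $B_0 = 28/5$, which the solver should reproduce.

```python
import numpy as np

def expected_absorption_time(n):
    """Solve the reduced (B_k, F_k) system directly for a concrete n.

    Unknowns ordered as x = [B_0, ..., B_{n-1}, F_1, ..., F_n];
    the boundary conditions F_0 = B_0 and F_{n+1} = 0 are folded in.
    """
    B = lambda k: k          # column of B_k, for 0 <= k <= n-1
    F = lambda k: n + k - 1  # column of F_k, for 1 <= k <= n
    A = np.zeros((2 * n, 2 * n))
    b = np.zeros(2 * n)
    # B_0 = 1 + F_1
    A[0, B(0)], A[0, F(1)], b[0] = 1.0, -1.0, 1.0
    row = 1
    for k in range(1, n):
        # F_k = 5/2 + 5/8 F_{k+1} + 3/8 B_{k-1}
        A[row, F(k)], A[row, F(k + 1)], A[row, B(k - 1)] = 1.0, -5 / 8, -3 / 8
        b[row] = 5 / 2
        row += 1
        # B_k = 3/2 + 7/8 B_{k-1} + 1/8 F_{k+1}
        A[row, B(k)], A[row, B(k - 1)], A[row, F(k + 1)] = 1.0, -7 / 8, -1 / 8
        b[row] = 3 / 2
        row += 1
    # F_n = 5/2 + 3/8 B_{n-1}
    A[row, F(n)], A[row, B(n - 1)], b[row] = 1.0, -3 / 8, 5 / 2
    return np.linalg.solve(A, b)[B(0)]  # = F_0 = B_0
```

The expected time also grows with $n$, as it must for a longer maze.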
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/162648d33d7b9ea4f5ce1e8bb494a02d/plot_mne_inverse_label_connectivity.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute source space connectivity and visualize it using a circular graph\nThis example computes the all-to-all connectivity between 68 regions in\nsource space based on dSPM inverse solutions and a FreeSurfer cortical\nparcellation. The connectivity is visualized using a circular graph which\nis ordered based on the locations of the regions in the axial plane.",
"# Authors: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import apply_inverse_epochs, read_inverse_operator\nfrom mne.connectivity import spectral_connectivity\nfrom mne.viz import circular_layout, plot_connectivity_circle\n\nprint(__doc__)",
"Load our data\nFirst we'll load the data we'll use in connectivity estimation. We'll use\nthe sample MEG data provided with MNE.",
"data_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Load data\ninverse_operator = read_inverse_operator(fname_inv)\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\n\n# Pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Define epochs for left-auditory condition\nevent_id, tmin, tmax = 1, -0.2, 0.5\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,\n eog=150e-6))",
"Compute inverse solutions and their connectivity\nNext, we need to compute the inverse solution for this data. This will return\nthe sources / source activity that we'll use in computing connectivity. We'll\ncompute the connectivity in the alpha band of these sources. We can specify\nparticular frequencies to include in the connectivity with the fmin and\nfmax flags. Notice from the status messages how mne-python:\n\nreads an epoch from the raw file\napplies SSP and baseline correction\ncomputes the inverse to obtain a source estimate\naverages the source estimate to obtain a time series for each label\nincludes the label time series in the connectivity computation\nmoves to the next epoch.\n\nThis behaviour is because we are using generators. Since we only need to\noperate on the data one epoch at a time, using a generator allows us to\ncompute connectivity in a computationally efficient manner where the amount\nof memory (RAM) needed is independent from the number of epochs.",
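The memory point made above can be illustrated with a toy Python generator. This is only a sketch of the language feature, not MNE code: each item is materialized lazily, one at a time, so memory usage does not scale with the number of epochs.

```python
# Toy illustration (not MNE code): a generator yields one "epoch" at a
# time, so memory use is independent of the number of epochs.
def epochs_gen(n_epochs):
    for i in range(n_epochs):
        yield [i] * 3  # stand-in for one epoch's data array

total = 0
for epoch in epochs_gen(5):
    total += sum(epoch)  # only the current epoch is held in memory
print(total)  # -> 30
```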
"# Compute inverse solution and for each epoch. By using \"return_generator=True\"\n# stcs will be a generator object instead of a list.\nsnr = 1.0 # use lower SNR for single epochs\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\nstcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,\n pick_ori=\"normal\", return_generator=True)\n\n# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi\nlabels = mne.read_labels_from_annot('sample', parc='aparc',\n subjects_dir=subjects_dir)\nlabel_colors = [label.color for label in labels]\n\n# Average the source estimates within each label using sign-flips to reduce\n# signal cancellations, also here we return a generator\nsrc = inverse_operator['src']\nlabel_ts = mne.extract_label_time_course(stcs, labels, src, mode='mean_flip',\n return_generator=True)\n\nfmin = 8.\nfmax = 13.\nsfreq = raw.info['sfreq'] # the sampling frequency\ncon_methods = ['pli', 'wpli2_debiased']\ncon, freqs, times, n_epochs, n_tapers = spectral_connectivity(\n label_ts, method=con_methods, mode='multitaper', sfreq=sfreq, fmin=fmin,\n fmax=fmax, faverage=True, mt_adaptive=True, n_jobs=1)\n\n# con is a 3D array, get the connectivity for the first (and only) freq. band\n# for each method\ncon_res = dict()\nfor method, c in zip(con_methods, con):\n con_res[method] = c[:, :, 0]",
"Make a connectivity plot\nNow, we visualize this connectivity using a circular graph layout.",
"# First, we reorder the labels based on their location in the left hemi\nlabel_names = [label.name for label in labels]\n\nlh_labels = [name for name in label_names if name.endswith('lh')]\n\n# Get the y-location of the label\nlabel_ypos = list()\nfor name in lh_labels:\n idx = label_names.index(name)\n ypos = np.mean(labels[idx].pos[:, 1])\n label_ypos.append(ypos)\n\n# Reorder the labels based on their location\nlh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]\n\n# For the right hemi\nrh_labels = [label[:-2] + 'rh' for label in lh_labels]\n\n# Save the plot order and create a circular layout\nnode_order = list()\nnode_order.extend(lh_labels[::-1]) # reverse the order\nnode_order.extend(rh_labels)\n\nnode_angles = circular_layout(label_names, node_order, start_pos=90,\n group_boundaries=[0, len(label_names) / 2])\n\n# Plot the graph using node colors from the FreeSurfer parcellation. We only\n# show the 300 strongest connections.\nplot_connectivity_circle(con_res['pli'], label_names, n_lines=300,\n node_angles=node_angles, node_colors=label_colors,\n title='All-to-All Connectivity left-Auditory '\n 'Condition (PLI)')",
"Make two connectivity plots in the same figure\nWe can also assign these connectivity plots to axes in a figure. Below we'll\nshow the connectivity plot using two different connectivity methods.",
"fig = plt.figure(num=None, figsize=(8, 4), facecolor='black')\nno_names = [''] * len(label_names)\nfor ii, method in enumerate(con_methods):\n plot_connectivity_circle(con_res[method], no_names, n_lines=300,\n node_angles=node_angles, node_colors=label_colors,\n title=method, padding=0, fontsize_colorbar=6,\n fig=fig, subplot=(1, 2, ii + 1))\n\nplt.show()",
"Save the figure (optional)\nBy default matplotlib does not save using the facecolor, even though this was\nset when the figure was generated. If not set via savefig, the labels, title,\nand legend will be cut off from the output png file.",
"# fname_fig = data_path + '/MEG/sample/plot_inverse_connect.png'\n# fig.savefig(fname_fig, facecolor='black')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AlphaGit/deep-learning
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
mit
|
[
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform best with $tanh$ for the output. 
This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('generator', reuse=reuse): # finish this\n # Hidden layer\n h1 = tf.layers.dense(z, n_units)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim)\n out = tf.tanh(logits)\n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('discriminator', reuse=reuse): # finish this\n # Hidden layer\n h1 = tf.layers.dense(x, n_units)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1)\n out = tf.sigmoid(logits)\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Generator network here\ng_model = generator(input_z, input_size, g_hidden_size, False, alpha)\n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. 
Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"# Calculate losses\nreal_labels = tf.ones_like(d_logits_real)\nd_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels * (1 - smooth)))\n\nfake_labels = tf.zeros_like(d_logits_fake)\nd_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=fake_labels))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=real_labels))",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [v for v in t_vars if v.name.startswith('generator')]\nd_vars = [v for v in t_vars if v.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)",
"Training",
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_real: batch_images, input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kfollette/ASTR200-Spring2017
|
Labs/Lab13/Lab 13.ipynb
|
mit
|
[
"Names: [Insert Your Names Here]\nLab 13 Data Investigation 3 (Week 1) - Variable Star Database\nLab 13 Contents\n\nIntroduction to the Variable Star Database\nExercises\nData Investigation 3 - Week 2 Instructions\n\nThere are no new concepts introduced in this lab - it just provides another opportunity to practice concepts introduced in Labs 9 and 11. It is also a shorter lab, to allow some in-class time this week to review your final project proposals.\n1. Introduction and Preliminaries\nIn this lab, you will be exploring a table containing information about stars whose brightness changes as a function of time (so-called \"variable stars\"). There are many types of variable stars, and it is not critical that you understand the details of how or why a star's brightness varies. This particular set of variable stars is all in the field of view of the Kepler spacecraft mission. \nA description of the table is here",
"##Load packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport scipy.stats as st\nimport scipy.optimize as optimization\n\n# these set the pandas defaults so that it will print ALL values, even for very long lists and large dataframes\npd.set_option('display.max_columns', None)\npd.set_option('display.max_rows', None)\n\n##Much like the QuaRCS dataset, there are some values in the table that mean \"NaN\" or \"N/A\" or \"not measured\"\n##replace all of these with actual python-recognized NaNs\ndata = pd.read_csv('Kepler_variables.csv')\ndata = data.replace(99.999,np.nan)\ndata = data.replace(99.99,np.nan)\ndata = data.replace(\"N/A\",np.nan)\ndata",
"Check out the link at the top of this page, which gives descriptions of all of the columns in the table. Generally, the first 7 columns contain identifying information for the objects (name, location in the sky, etc.), which are not as interesting as they are unrelated to physical properties, and the last 10 columns contain all of the interesting measured quantities for the stars.\n2. Exercises\n<div class=hw>\n## Exercise 1\n-----------------\n\nUsing the code from Lab 11 as a reference, write code that will: \n1) isolate only those variable star types (from the column \"Types\") with more than 50 entries in the table - you should be left with five types\n2) make a scatterplot of two quantities (columns) in the table where the different types each have a different color and symbol, as you did in Lab 11 with planet discovery methods. You can write a function that takes a data frame and column names as input and generates this scatterplot, or you can write code that gives columns generic names (like x and y) and just swap out the names of the columns assigned to those variables as needed. \n3) To demonstrate that everything is working, make an example plot where the x axis is period and y axis is the H-K \"Color\" of the star. You should choose appropriate axis labels, axis limits, axis scalings (linear or log), and legend location to best highlight the data.\n\nOnce you are satisfied with the plot, save it and in a separate markdown cell, display the saved plot and note three interesting things that you notice about it/questions that it generates for you.",
"## Code for truncating the data to only those types with > 50 entries\n\n#list of symbols and colors for plot\n\n#code to loop through methods and make plots",
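One possible shape for the Exercise 1 code is sketched below. This is only a hedged illustration, not the lab's answer key: the column name "Types" follows the lab text and should be checked against the actual CSV headers, and the marker list is arbitrary.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import pandas as pd

def scatter_by_type(df, xcol, ycol, type_col="Types", min_count=50):
    """Scatter xcol vs. ycol, one color/marker per type with > min_count rows.

    The type_col default follows the lab text; adjust to the real headers.
    """
    counts = df[type_col].value_counts()
    keep = counts[counts > min_count].index
    markers = ["o", "s", "^", "v", "D", "*"]
    fig, ax = plt.subplots()
    for i, t in enumerate(keep):
        sub = df[df[type_col] == t]
        ax.scatter(sub[xcol], sub[ycol], marker=markers[i % len(markers)],
                   label=str(t), alpha=0.6)
    ax.set_xlabel(xcol)
    ax.set_ylabel(ycol)
    ax.legend(loc="best")
    return fig, ax
```

Axis limits and log scaling can then be set on the returned `ax` (`ax.set_xscale("log")`, etc.) to best highlight the data.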
"plot and questions/observations go here\n<div class=hw>\n## Exercise 2\n----------------------\n\nUse the code you wrote for Exercise 1 to explore the dataset. Change the \"x\" and \"y\" quantities in the scatterplot until you find one pair that ***for one particular \"type\" of variable star appears to show a nice linear relationship between the two quantities*** (make sure any log scales you used to generate your plot for Exercise 1 are turned off if you want to see true linear relationships). Once you've found a linear relationship to investigate:\n\n1) make a dataframe with only this type of star and generate a scatterplot with the appropriate axis labels and ranges. \n2) Use the \"modeling\" notes from several weeks ago as a model to generate a least-squares linear (slope-intercept) fit to the data and overplot it on this same figure and save it. \n3) Calculate the chi-squared statistic for goodness of fit of this model. \n4) In a separate markdown cell, insert your figure with the data and model, your chi-squared calculation, a description of whether or not you think the fit is \"good\", and what, if any, additional information would help you to determine this. If that information is something you know how to find or calculate, do it. \n\nOnce you're done, spend a little time thinking about what it might mean that the two quantities you've plotted are linearly related. What do you think it might tell us about the universe? What can you find out about the two quantities and what do you still need to understand in order to judge the relationship? Add a reflection on these questions to the end of your markdown cell",
"# code to truncate the dataframe to only the sample you want to look at\n\n#code to create basic plot\n\n#code to define function for line fit\n\n#code to calculate fit\n\n#code to make plot with data + model fit\n\n#code to calculate chi-squared statistic",
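A hedged sketch of the fit and chi-squared steps on synthetic data (not the variable-star table; the measurement uncertainty `sigma` is assumed known, and the linear relation is made up for illustration):

```python
import numpy as np

# Synthetic data standing in for one variable-star type.
rng = np.random.default_rng(0)
x = np.linspace(0, 2, 40)
sigma = 0.1
y = 1.5 * x + 0.3 + rng.normal(0, sigma, x.size)

# Least-squares slope-intercept fit.
slope, intercept = np.polyfit(x, y, 1)
model = slope * x + intercept

# Chi-squared: sum of squared, uncertainty-weighted residuals.
chi2 = np.sum(((y - model) / sigma) ** 2)

# For a good fit, chi2 should be comparable to the number of degrees
# of freedom (N data points minus 2 fitted parameters).
dof = x.size - 2
print(slope, intercept, chi2 / dof)
```

A reduced chi-squared (`chi2 / dof`) near 1 suggests the model and the assumed uncertainties are mutually consistent; values much larger or smaller hint at a bad model or misestimated errors.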
"plot and questions/observations go here\n3. Data Investigation 3 - Week 2 Instructions\nNow that you are familiar with the variable star database, you and your partner must come up with a statistical investigation that you would like to complete using this data. It's completely up to you what you choose to investigate, but here are a few broad ideas to guide your thinking:\n\nYou might choose to isolate a population of variable stars that you noticed in one of the plots and attempt to understand it (descriptive statistics, correlations, etc.) and/or compare it to another population\nYou might make a quantitative comparison of the TYPES of variable stars and connect this to what you can find out about their physical properties\nYou might isolate a region of a plot or a subset of stars with apparent correlations between variables and attempt to fit a model to the relationship between them.\nYou might consider adding a fourth variable to one of the plots you made by sizing the points to represent that variable. \n\nIn all cases, I can provide suggestions and guidance, and would be happy to discuss at office hours or by appointment. \nBefore 5pm next Monday evening (4/24), you must send me a brief e-mail (that you write together, one e-mail per group) describing a plan for how you will approach a question that you have developed. What do you need to know that you don't know already? What kind of plots will you make and what kinds of statistics will you compute? What is your first thought for what your final data representations will look like?",
"from IPython.core.display import HTML\ndef css_styling():\n with open(\"../custom.css\", \"r\") as f:\n styles = f.read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
neuro-ml/reskit
|
tutorials/3. Transformers Guide.ipynb
|
bsd-3-clause
|
[
"This tutorial helps you to understand how you can transform your data using DataTransformer and MatrixTransformer classes and how to make your own classes for data transformation.\n1. MatrixTransformer",
"import numpy as np\n\nfrom reskit.normalizations import mean_norm\nfrom reskit.core import MatrixTransformer\n\nmatrix_0 = np.random.rand(5, 5)\nmatrix_1 = np.random.rand(5, 5)\nmatrix_2 = np.random.rand(5, 5)\ny = np.array([0, 0, 1])\n\nX = np.array([matrix_0,\n matrix_1,\n matrix_2])\n\noutput = np.array([mean_norm(matrix_0),\n mean_norm(matrix_1),\n mean_norm(matrix_2)])\n\nresult = MatrixTransformer(\n func=mean_norm).fit_transform(X)\n\n(output == result).all()",
"This is a simple example of MatrixTransformer usage. The input X for transformation with MatrixTransformer should be a 3-dimensional array (an array of matrices), so MatrixTransformer just transforms each matrix in X.\nIf you have data with a specific structure, it is useful and convenient to write your own function for data processing.\n2. DataTransformer\nTo make writing new transformers simple, we provide DataTransformer. The main idea is to write functions that take some X and output a transformed X; this way, you don't have to write a transformation class for compatibility with sklearn pipelines. Here is an example of DataTransformer usage:",
"from reskit.core import DataTransformer\n\n\ndef mean_norm_trans(X):\n X = X.copy()\n N = len(X)\n for i in range(N):\n X[i] = mean_norm(X[i])\n return X\n\nresult = DataTransformer(\n func=mean_norm_trans).fit_transform(X)\n\n(output == result).all()",
"As you can see, we wrote the same transformation, but with DataTransformer instead of MatrixTransformer.\n3. Your own transformer\nIf you need more flexibility in transformation, you can implement your own transformer. Here is the simplest template:",
"from sklearn.base import TransformerMixin\nfrom sklearn.base import BaseEstimator\n\nclass MyTransformer(BaseEstimator, TransformerMixin):\n \n def __init__(self):\n pass\n \n def fit(self, X, y=None, **fit_params):\n #\n # Write code here if the transformer needs\n # to learn anything from the data.\n #\n # Usually nothing should be here;\n # just return self.\n #\n return self\n \n def transform(self, X):\n #\n # Write your transformation here.\n #\n return X"
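As a concrete illustration of the template above, here is a hypothetical transformer (not part of reskit) that binarizes each matrix at a threshold; because it follows the same fit/transform protocol, it can be dropped into an sklearn Pipeline:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class BinarizeTransformer(BaseEstimator, TransformerMixin):
    """Set entries above `threshold` to 1 and the rest to 0, matrix by matrix.
    Hypothetical example for illustration only."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def fit(self, X, y=None, **fit_params):
        # Nothing to learn from the data for this transformer.
        return self

    def transform(self, X):
        return (np.asarray(X) > self.threshold).astype(float)

# One 2x2 matrix in a 3-dimensional array, as in the examples above.
X = np.array([[[0.2, 0.9], [0.7, 0.1]]])
result = BinarizeTransformer(threshold=0.5).fit_transform(X)
print(result[0])
```

Because `__init__` only stores its parameters, the transformer also works with sklearn's `get_params`/`set_params` and grid search.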
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jinntrance/MOOC
|
coursera/ml-regression/assignments/week-2-multiple-regression-assignment-1-blank.ipynb
|
cc0-1.0
|
[
"Regression Week 2: Multiple Regression (Interpretation)\nThe goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.\nIn this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:\n* Use SFrames to do some feature engineering\n* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)\n* Given the regression weights, predictors, and outcome, write a function to compute the Residual Sum of Squares\n* Look at coefficients and interpret their meanings\n* Evaluate multiple models via RSS\nFire up graphlab create",
"import graphlab",
"Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.",
"sales = graphlab.SFrame('kc_house_data.gl/')",
"Split data into training and testing.\nWe use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).",
"train_data,test_data = sales.random_split(.8,seed=0)\n\nfrom math import *\n\ndef fit_features(data):\n data['bedrooms_squared']= data['bedrooms'] * data['bedrooms']\n data['bed_bath_rooms'] = data['bedrooms'] * data['bathrooms']\n data['log_sqft_living'] = [log(i) for i in data['sqft_living']]\n data['lat_plus_long'] = data['lat'] + data['long']\n print data['bedrooms_squared'].mean()\n print data['bed_bath_rooms'].mean()\n print data['log_sqft_living'].mean()\n print data['lat_plus_long'].mean()\n \nfit_features(train_data)\nfit_features(test_data)\n",
"Learning a multiple regression model\nRecall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:\nexample_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:\n(Aside: We set validation_set = None to ensure that the results are always the same)",
"example_features = ['sqft_living', 'bedrooms', 'bathrooms']\nexample_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features, \n validation_set = None)"
"Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:",
"example_weight_summary = example_model.get(\"coefficients\")\nprint example_weight_summary",
"Making Predictions\nIn the gradient descent notebook we used numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions. \nRecall that once a model is built we can use the .predict() function to find the predicted values for data we pass in. For example, using the example model above:",
"example_predictions = example_model.predict(train_data)\nprint example_predictions[0] # should be 271789.505878",
"Compute RSS\nNow that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.",
"def get_residual_sum_of_squares(model, data, outcome):\n # First get the predictions\n pred = model.predict(data)\n # Then compute the residuals/errors\n df = pred - outcome\n # Then square and add them up\n RSS = (df * df).sum()\n return RSS",
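The same computation can be sanity-checked in plain numpy, independent of graphlab (a minimal sketch with made-up numbers):

```python
import numpy as np

def rss(predictions, outcome):
    """Residual sum of squares: sum of squared prediction errors."""
    errors = np.asarray(predictions) - np.asarray(outcome)
    return float(np.sum(errors ** 2))

# Errors are 1, 0, and -2, so RSS = 1 + 0 + 4 = 5.
print(rss([2.0, 4.0, 6.0], [1.0, 4.0, 8.0]))
```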
"Test your function by computing the RSS on TEST data for the example model:",
"rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])\nprint rss_example_train # should be 2.7376153833e+14",
"Create some new features\nAlthough we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even \"interaction\" features such as the product of bedrooms and bathrooms.\nYou will use the logarithm function to create a new feature, so first you should import it from the math library.",
"from math import log",
"Next create the following 4 new features as column in both TEST and TRAIN data:\n* bedrooms_squared = bedrooms*bedrooms\n* bed_bath_rooms = bedrooms*bathrooms\n* log_sqft_living = log(sqft_living)\n* lat_plus_long = lat + long \nAs an example here's the first one:",
"train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)\ntest_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)\n\n# create the remaining 3 features in both TEST and TRAIN data\n\n",
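For reference, the same four features can be built with pandas-style element-wise operations (SFrame supports the equivalent); this is a sketch on a toy frame with made-up numbers, not the King County data:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for train/test data (values are invented).
toy = pd.DataFrame({"bedrooms": [3, 4], "bathrooms": [2.0, 2.5],
                    "sqft_living": [1180.0, 2570.0],
                    "lat": [47.51, 47.72], "long": [-122.26, -122.32]})

toy["bedrooms_squared"] = toy["bedrooms"] ** 2
toy["bed_bath_rooms"] = toy["bedrooms"] * toy["bathrooms"]
toy["log_sqft_living"] = np.log(toy["sqft_living"])
toy["lat_plus_long"] = toy["lat"] + toy["long"]

# The quiz asks for the means of these columns on the TEST data.
print(round(toy["bedrooms_squared"].mean(), 2))
```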
"Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.\nbedrooms times bathrooms gives what's called an \"interaction\" feature. It is large when both of them are large.\nTaking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.\nAdding latitude to longitude is totally nonsensical but we will do it anyway (you'll see why)\n\nQuiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)\nLearning Multiple Models\nNow we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:\n* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude\n* Model 2: add bedrooms*bathrooms\n* Model 3: add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude",
"model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']\nmodel_2_features = model_1_features + ['bed_bath_rooms']\nmodel_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']",
"Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:",
"# Learn the three models: (don't forget to set validation_set = None)\nfor features in [model_1_features, model_2_features, model_3_features]:\n model = graphlab.linear_regression.create(train_data, target = 'price', features = features, \n validation_set = None)\n rss = get_residual_sum_of_squares(model, test_data, test_data['price'])\n print rss\n \n\n# Examine/extract each model's coefficients:\n",
"Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?\nQuiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?\nThink about what this means.\nComparing multiple models\nNow that you've learned three models and extracted the model weights we want to evaluate which model is best.\nFirst use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.",
"# Compute the RSS on TRAINING data for each of the three models and record the values:\n",
"Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?\nNow compute the RSS on TEST data for each of the three models.",
"# Compute the RSS on TESTING data for each of the three models and record the values:\n",
"Quiz Question: Which model (1, 2 or 3) has lowest RSS on TESTING Data? Is this what you expected? Think about the features that were added to each model from the previous."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lukas/scikit-class
|
examples/notebooks/Lesson-2-Feature-Extraction.ipynb
|
gpl-2.0
|
[
"Feature Extraction\nGoals\n\nIntroduction to Feature Extraction\nDo Feature Extraction on Text - Introduction to Bag of Words\n\nIntroduction\nMachine Learning algorithms all take the same basic form of input: a fixed-length list of numbers. Very few real world problems come as a fixed-length list of numbers, so a crucial step in machine learning is converting the data into this format. Sometimes the individual numbers are called \"features\", so this process is sometimes called \"feature extraction\". A fixed-length list of numbers is also known as a vector. A list of vectors all of the same length is known as a matrix.\nRight now our input is a list of tweets that looks like this:\n<img src=\"images/tweets.png\" width=\"400\"/>\nWe need to convert it into a list of feature vectors that look like this:\n<img src=\"images/features.png\" width=\"400\"/>\nYou might want to stop and think about how you might do this.\nBag of Words\nOne way to convert our input into a vector is to make each column correspond to a different word and each cell correspond to the number of times that word occurred in a particular tweet.\n<img src=\"images/tweet-transform.png\" width=\"600\"/>\nThis creates a lot of columns! This is the most basic feature extraction method used on text in natural language processing.\nScikit has methods to make this transformation really easy. In sklearn.feature_extraction.text there is a class called CountVectorizer we will use.\nCountVectorizer has two important methods\n1. fit sets things up, associating each word with a column\n2. transform converts a list of strings into feature vectors",
"import pandas as pd\nimport numpy as np\n\ndf = pd.read_csv('../scikit/tweets.csv')\ntarget = df['is_there_an_emotion_directed_at_a_brand_or_product']\ntext = df['tweet_text']\n\n# We need to remove the empty rows from the text before we pass into CountVectorizer\nfixed_text = text[pd.notnull(text)]\nfixed_target = target[pd.notnull(text)]\n\n# Do the feature extraction\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vect = CountVectorizer() # initialize the count vectorizer\ncount_vect.fit(fixed_text) # set up the columns",
"Now our count_vect object is able to transform text into feature vectors.\nWe can try it out:",
"count_vect.transform([\"My iphone is awesome\"])",
"A sparse matrix is a matrix with mostly zeros, and we are definitely dealing with a sparse matrix since most of the counts here are zero.",
"print(count_vect.transform([\"My iphone is awesome\"]))",
"This notation says that the cells in columns 876, 4573, 4596, and 5699 are one and all other cells are zero. We have one row here because we passed in a list of length one - just the tweet \"My iphone is awesome\".\nSome questions to ask yourself now:\n- which words correspond to which columns?\n- is our transformation case sensitive?\n- how many columns do we have?\nLet's do the transformation on all of our tweets to build our big feature matrix (you can think of a matrix as a list of fixed-size vectors).",
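To answer the first question: CountVectorizer exposes a `vocabulary_` dict mapping each word to its column index. The idea behind it can be sketched in plain Python (a minimal lowercasing bag-of-words on whitespace, ignoring punctuation; the two example tweets are invented):

```python
# Build a word -> column index vocabulary, then count words per document.
docs = ["My iphone is awesome", "my android is ok"]
vocab = {}
for doc in docs:
    for word in doc.lower().split():
        # Each new word gets the next free column index.
        vocab.setdefault(word, len(vocab))

def to_vector(doc):
    """Count occurrences of each vocabulary word in one document."""
    counts = [0] * len(vocab)
    for word in doc.lower().split():
        if word in vocab:
            counts[vocab[word]] += 1
    return counts

print(vocab)
print(to_vector("My iphone is awesome"))
```

The lowercasing step also answers the second question: like this sketch, CountVectorizer lowercases by default, so the transformation is not case sensitive.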
"counts = count_vect.transform(fixed_text)\nprint(counts.shape)",
"Great! Now we have a feature matrix that we can feed in to our machine learning algorithm. It has 9092 rows corresponding to 9092 tweets and 9706 columns corresponding to 9706 words.\nTakeaways\n\nAll machine learning algorithms have the same API - a list of fixed-length vectors of numbers - also known as a Feature Matrix\nData almost never comes in a list of fixed-length vectors, so this transformation is critical, and highly application-dependent.\nWhen dealing with text data, \"bag of words\" is a common way to do feature extraction.\n\nQuestions\n\nWhat would be another way to transform text?\nWhat information is lost in the \"bag-of-words\" transformation?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cdt15/lingam
|
examples/DirectLiNGAM(Kernel).ipynb
|
mit
|
[
"DirectLiNGAM by Kernel Method\nImport and settings\nIn this example, we need to import numpy, pandas, and graphviz in addition to lingam.",
"import numpy as np\nimport pandas as pd\nimport graphviz\nimport lingam\nfrom lingam.utils import make_dot\n\nprint([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])\n\nnp.set_printoptions(precision=3, suppress=True)\nnp.random.seed(0)",
"Test data\nWe create test data consisting of 5 variables.",
"n = 1000\ne = lambda n: np.random.laplace(0, 1, n)\nx3 = e(n)\nx2 = 0.3*x3 + e(n)\nx1 = 0.3*x3 + 0.3*x2 + e(n)\nx0 = 0.3*x2 + 0.3*x1 + e(n)\nx4 = 0.3*x1 + 0.3*x0 + e(n)\nX = pd.DataFrame(np.array([x0, x1, x2, x3, x4]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4'])\nX.head()\n\nm = np.array([[0.0, 0.3, 0.3, 0.0, 0.0],\n [0.0, 0.0, 0.3, 0.3, 0.0],\n [0.0, 0.0, 0.0, 0.3, 0.0],\n [0.0, 0.0, 0.0, 0.0, 0.0],\n [0.3, 0.3, 0.0, 0.0, 0.0]])\n\nmake_dot(m)",
"Causal Discovery\nTo run causal discovery, we create a DirectLiNGAM object by specifying 'kernel' in the measure parameter. Then, we call the fit method.",
"model = lingam.DirectLiNGAM(measure='kernel')\nmodel.fit(X)",
"Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.",
"model.causal_order_",
"Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery.",
"model.adjacency_matrix_",
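One useful consistency check (a numpy sketch, not part of the lingam API shown above): permuting the true adjacency matrix by a valid causal order should make it strictly lower-triangular, since each variable only receives edges from variables earlier in the order.

```python
import numpy as np

# True adjacency matrix from the test-data section (entry [i, j] is the
# coefficient on the edge x_j -> x_i).
m = np.array([[0.0, 0.3, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.3, 0.3, 0.0, 0.0, 0.0]])

# A causal order consistent with how x0..x4 were generated: x3 first, x4 last.
order = [3, 2, 1, 0, 4]
permuted = m[np.ix_(order, order)]

# Strictly lower-triangular: nothing on or above the diagonal.
is_lower = np.allclose(np.triu(permuted), 0.0)
print(is_lower)
```

The same check applied to `model.causal_order_` and `model.adjacency_matrix_` (after thresholding small estimated coefficients) is a quick way to verify that the discovered order and matrix agree.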
"We can draw a causal graph with the make_dot utility function.",
"make_dot(model.adjacency_matrix_)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Olsthoorn/IHE-python-course-2017
|
exercises/Feb21/py_exploratory_comp_1_20170221.ipynb
|
gpl-2.0
|
[
"<figure>\n <IMG SRC=\"https://raw.githubusercontent.com/mbakker7/exploratory_computing_with_python/master/tudelft_logo.png\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\nExploratory Computing with Python\nBorrowed from Mark Bakker's extra-curricular Python course at UNESCO-IHE\nOn Feb 21, we started working with this first Jupyter notebook developed by Prof. Mark Bakker of TU Delft.\nWe didn't have time to go through all of it, so, for your memory and inspiration, this is a quick wrap-up of what we did and did not completely finish.\nNotebook 1: Basics and Plotting\nFirst Python steps\nPortable, powerful, and a breeze to use, Python is a popular, open-source programming language used for both scripting applications and standalone programs. Python can be used to do pretty much anything.\n<a name=\"ex1\"></a> Exercise 1, First Python code\nCompute the value of the polynomial\n$y_1 = ax^2 + bx + c$ at a large number of $x$ values between $-6$ and $6$, using\n$a=-6$, $b=-5$, $c=-2$, $d=1$, $e=4$, $f=6$.\nWe also add a 6th-degree polynomial:\n$y_2 = (x - a)(x - b)(x - c)(x - d)(x - e)(x - f)$",
"import numpy as np\nimport matplotlib.pyplot as plt\n\naList = ['a', 'quark', 'flies', 'in', 'this', 'room', 'at', 3, 'oclock']\n\n# print(\"{} {} {} {} {} {} {} {} {}\".format(1,2,'three',4,5,6,6, 7, 8, 9)) # (aList))\nprint(\"{} {} {} {} {} {} {} {} {}\".format(*aList))\n\nplt.legend?\n\nimport numpy as np # functionality to use numeric arrays\nimport matplotlib.pyplot as plt # functionality to plot\n\na = -6 \nb = -5\nc = -2\nd = 1\ne = 4\nf = 6\n\nx = np.linspace(-6, 6, 1000) # 1000 x-values between -6 and 6\n\n# Compute the polynomials for 1000 points at once\ny1 = a * x**2 + b * x + c\ny2 = (x - a) * (x - b )* (x - c ) * (x - d) * (x - e) * (x -f)\n\n# to put the equations in the graph, put them in strings between $ $\neq1 = '$a x^2 + b x + c$'\neq2 = '$(x - a) (x - b ) (x - c ) (x - d) (x - e) (x -f)$'\n\nplt.plot(x, y1, label=eq1) # use these equations as label\nplt.plot(x, y2, label=eq2)\n\nplt.title('Title of the graph, just two equations')\nplt.xlabel('x [m]')\nplt.ylabel('y [whatever]')\nplt.grid(True)\nplt.legend(loc='best', fontsize='small') # this plots the legend with the equation labels\n\nplt.show() # need this to actually show the plot",
"Simultaneously plot three graphs\nThis shows a way to read data from the current directory and then plot it in a single figure.\nThe data files need to be in your directory, so first copy them from the corresponding directory of the notebooks by Mark Bakker (notebook 1)",
"holland = np.loadtxt('holland_temperature.dat')\nnewyork = np.loadtxt('newyork_temperature.dat')\nbeijing = np.loadtxt('beijing_temperature.dat')\n\nplt.plot(np.linspace(1, 12, 12), holland)\nplt.plot(np.linspace(1, 12, 12), newyork)\nplt.plot(np.linspace(1, 12, 12), beijing)\n\nplt.xlabel('Number of the month')\nplt.ylabel('Mean monthly temperature (Celsius)')\n\nplt.xlim(1, 12)\n\n# the labels are given in legend, instead of with each plot like we did before\nplt.legend(['Holland','New York','Beijing'], loc='best');\n\nplt.show()",
"Use more than one axis, i.e., use several subplots",
"# read the data from the current directory\nair = np.loadtxt('holland_temperature.dat') \nsea = np.loadtxt('holland_seawater.dat')\n\n# specify two plots stacked vertically (2 rows, 1 column)\n# and select the first of them\nplt.subplot(211) # plt.subplot(2, 1, 1) is the same\n\n# plot the actual two lines and use a label for each of them\nplt.plot(air, 'b', label='air temp')\nplt.plot(sea, 'r', label='sea temp')\n\nplt.legend(loc='best') # show legend\n\nplt.ylabel('temp (Celsius)')\n\nplt.xlim(0, 11) # set the limits of the x-axis of the graph\nplt.xticks([]) # don't plot ticks along the x-axis\n\nplt.subplot(212) # generate second subplot\nplt.plot(air-sea, 'ko')\n\n# generate the tick labels explicitly\nplt.xticks(np.linspace(0, 11, 12),\n ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'])\n\nplt.xlim(0, 11)\n\nplt.ylabel('air - sea temp (Celsius)');\n\nplt.show()",
"Gallery of graphs\nThe plotting package matplotlib allows you to make very fancy graphs. Check out the <A href=\"http://matplotlib.org/gallery.html\" target=_blank>matplotlib gallery</A> to get an overview of many of the options. The following exercises use several of the matplotlib options.\n<a name=\"ex5\"></a> Exercise 5, Pie Chart\nAt the 2012 London Olympics, the top ten countries (plus the rest) receiving gold medals were ['USA', 'CHN', 'GBR', 'RUS', 'KOR', 'GER', 'FRA', 'ITA', 'HUN', 'AUS', 'OTHER']. They received [46, 38, 29, 24, 13, 11, 11, 8, 8, 7, 107] gold medals, respectively. Make a pie chart (type plt.pie? or go to the pie charts in the matplotlib gallery) of the top 10 gold medal winners plus the others at the London Olympics. Try some of the keyword arguments to make the plot look nice. You may want to give the command plt.axis('equal') to make the scales along the horizontal and vertical axes equal so that the pie actually looks like a circle rather than an ellipse. There are four different ways to specify colors in matplotlib plotting; you may read about it here. The coolest way is to use the html color names. Use the colors keyword in your pie chart to specify a sequence of colors. The sequence must be between square brackets, each color must be between quotes preserving upper and lower cases, and they must be separated by commas like ['MediumBlue','SpringGreen','BlueViolet']; the sequence is repeated if it is not long enough. The html names for the colors may be found, for example, here.",
"gold = [46, 38, 29, 24, 13, 11, 11, 8, 8, 7, 107]\ncountries = ['USA', 'CHN', 'GBR', 'RUS', 'KOR', 'GER', 'FRA', 'ITA', 'HUN', 'AUS', 'OTHER']\n\n# use pie graph this time\nplt.pie(gold, labels = countries, colors = ['Gold', 'MediumBlue', 'SpringGreen', 'BlueViolet'])\nplt.axis('equal');\nplt.show()",
"<a name=\"ex6\"></a> Exercise 6, Fill between\nLoad the air and sea temperature, as used in Exercise 4, but this time make one plot of temperature vs the number of the month and use the plt.fill_between command to fill the space between the curve and the $x$-axis. Specify the alpha keyword, which defines the transparancy. Some experimentation will give you a good value for alpha (stay between 0 and 1). Note that you need to specify the color using the color keyword argument.",
"air = np.loadtxt('holland_temperature.dat') \nsea = np.loadtxt('holland_seawater.dat')\n\n# use fill_between graph this time\n# range(12) generates values 0, 1, 2, 3, ... 11 (used for months, 0=jan)\nplt.fill_between(range(12), air, color='b', alpha=0.3, label='air') # alpha is degree of transparency\nplt.fill_between(range(12), sea, color='r', alpha=0.3, label='sea')\n\n# the \\ after 'apr' is a line continuation\n\nplt.xticks(np.linspace(0, 11, 12), ['jan', 'feb', 'mar', 'apr',\\\n 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'])\n\nplt.xlim(0, 11)\nplt.ylim(0, 20)\n\nplt.xlabel('Month')\nplt.ylabel('Temperature (Celsius)')\n\nplt.legend(loc='best', fontsize='x-small')\n\nplt.show()\n\n\"\"\"\nDemo of spines using custom bounds to limit the extent of the spine.\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nx = np.linspace(0, 2*np.pi, 50)\ny = np.sin(x)\ny2 = y + 0.1 * np.random.normal(size=x.shape)\n\nfig, ax = plt.subplots()\nax.plot(x, y, 'k--')\nax.plot(x, y2, 'ro')\n\n# set ticks and tick labels\nax.set_xlim((0, 2*np.pi))\nax.set_xticks([0, np.pi, 2*np.pi])\nax.set_xticklabels(['0', '$\\pi$', '2$\\pi$'])\nax.set_ylim((-1.5, 1.5))\nax.set_yticks([-1, 0, 1])\n\n# Only draw spine between the y-ticks\nax.spines['left'].set_bounds(-1, 1)\n# Hide the right and top spines\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\n# Only show ticks on the left and bottom spines\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
IBMDecisionOptimization/docplex-examples
|
examples/mp/jupyter/ucp_pandas.ipynb
|
apache-2.0
|
[
"The Unit Commitment Problem (UCP)\nThis tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Import the library\nStep 2: Model the Data\nStep 3: Prepare the data\nStep 4: Set up the prescriptive model\nDefine the decision variables\nExpress the business constraints\nExpress the objective\nSolve with Decision Optimization\n\n\nStep 5: Investigate the solution and run an example analysis\n\n\nSummary\n\n\nDescribe the business problem\n\n\nThe Model estimates the lower cost of generating electricity within a given plan. \nDepending on the demand for electricity, we turn on or off units that generate power and which have operational properties and costs.\n\n\nThe Unit Commitment Problem answers the question \"Which power generators should I run at which times and at what level in order to satisfy the demand for electricity?\". 
This model helps users to find not only a feasible answer to the question, but one that also optimizes its solution to meet as many of the electricity company's overall goals as possible. \n\n\nHow decision optimization can help\n\n\nPrescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\n<u>With prescriptive analytics, you can:</u> \n\nAutomate the complex decisions and trade-offs to better manage your limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\nChecking minimum requirements\nThis notebook uses some features of pandas that are available in version 0.17.1 or above.",
"REQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'\ntry:\n import pandas as pd\n assert pd.__version__ >= REQUIRED_MINIMUM_PANDAS_VERSION\nexcept:\n raise Exception(\"Version \" + REQUIRED_MINIMUM_PANDAS_VERSION + \" or above of Pandas is required to run this notebook\")",
"Use decision optimization\nStep 1: Import the library\nRun the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp).",
"import sys\ntry:\n import docplex.mp\nexcept:\n raise Exception('Please install docplex. See https://pypi.org/project/docplex/')",
"Step 2: Model the data\nLoad data from a pandas DataFrame\nData for the Unit Commitment Problem is provided as a pandas DataFrame.\nFor a standalone notebook, we provide the raw data as Python collections,\nbut real data could be loaded\nfrom an Excel sheet, also using pandas.",
"import pandas as pd\nfrom pandas import DataFrame, Series\n\n# make matplotlib plots appear inside the notebook\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 11, 5 # use this to change the plot size",
"Update the configuration of notebook so that display matches browser window width.",
"from IPython.core.display import HTML\nHTML(\"<style>.container { width:100%; }</style>\")",
"Available energy technologies\nThe following df_energy DataFrame stores CO<sub>2</sub> cost information, indexed by energy type.",
"energies = [\"coal\", \"gas\", \"diesel\", \"wind\"]\ndf_energy = DataFrame({\"co2_cost\": [30, 5, 15, 0]}, index=energies)\n\n# Display the 'df_energy' Data Frame\ndf_energy",
"The following df_units DataFrame stores common elements for units of a given technology.",
"all_units = [\"coal1\", \"coal2\", \n \"gas1\", \"gas2\", \"gas3\", \"gas4\", \n \"diesel1\", \"diesel2\", \"diesel3\", \"diesel4\"]\n \nucp_raw_unit_data = {\n \"energy\": [\"coal\", \"coal\", \"gas\", \"gas\", \"gas\", \"gas\", \"diesel\", \"diesel\", \"diesel\", \"diesel\"],\n \"initial\" : [400, 350, 205, 52, 155, 150, 78, 76, 0, 0],\n \"min_gen\": [100, 140, 78, 52, 54.25, 39, 17.4, 15.2, 4, 2.4],\n \"max_gen\": [425, 365, 220, 210, 165, 158, 90, 87, 20, 12],\n \"operating_max_gen\": [400, 350, 205, 197, 155, 150, 78, 76, 20, 12],\n \"min_uptime\": [15, 15, 6, 5, 5, 4, 3, 3, 1, 1],\n \"min_downtime\":[9, 8, 7, 4, 3, 2, 2, 2, 1, 1],\n \"ramp_up\": [212, 150, 101.2, 94.8, 58, 50, 40, 60, 20, 12],\n \"ramp_down\": [183, 198, 95.6, 101.7, 77.5, 60, 24, 45, 20, 12],\n \"start_cost\": [5000, 4550, 1320, 1291, 1280, 1105, 560, 554, 300, 250],\n \"fixed_cost\": [208.61, 117.37, 174.12, 172.75, 95.353, 144.52, 54.417, 54.551, 79.638, 16.259],\n \"variable_cost\": [22.536, 31.985, 70.5, 69, 32.146, 54.84, 40.222, 40.522, 116.33, 76.642],\n }\n\ndf_units = DataFrame(ucp_raw_unit_data, index=all_units)\n\n# Display the 'df_units' Data Frame\ndf_units",
"Step 3: Prepare the data\nThe pandas merge operation is used to create a join between the df_units and df_energy DataFrames. Here, the join is performed based on the 'energy' column of df_units and index column of df_energy.\nBy default, merge performs an inner join. That is, the resulting DataFrame is based on the intersection of keys from both input DataFrames.",
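"To see exactly what this inner join does, here is a tiny standalone sketch (the mini frames below are made up for illustration; they are not the model data):\n\n```python\nimport pandas as pd\n\n# three units, each tagged with its energy type; 'solar' has no co2 data\nleft = pd.DataFrame({\"energy\": [\"coal\", \"gas\", \"solar\"]}, index=[\"u1\", \"u2\", \"u3\"])\nright = pd.DataFrame({\"co2_cost\": [30, 5]}, index=[\"coal\", \"gas\"])\n\n# inner join on the left 'energy' column vs the right index: 'u3' is dropped\njoined = pd.merge(left, right, left_on=\"energy\", right_index=True)\nprint(joined)\n```\n\nOnly rows whose 'energy' key appears in both frames survive, which is what happens below when df_units is merged with df_energy.",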
"# Add a derived co2-cost column by merging with df_energies\n# Use energy key from units and index from energy dataframe\ndf_up = pd.merge(df_units, df_energy, left_on=\"energy\", right_index=True)\ndf_up.index.names=['units']\n\n# Display first rows of new 'df_up' Data Frame\ndf_up.head()",
"The demand is stored as a pandas Series indexed from 1 to the number of periods.",
"raw_demand = [1259.0, 1439.0, 1289.0, 1211.0, 1433.0, 1287.0, 1285.0, 1227.0, 1269.0, 1158.0, 1277.0, 1417.0, 1294.0, 1396.0, 1414.0, 1386.0,\n 1302.0, 1215.0, 1433.0, 1354.0, 1436.0, 1285.0, 1332.0, 1172.0, 1446.0, 1367.0, 1243.0, 1275.0, 1363.0, 1208.0, 1394.0, 1345.0, \n 1217.0, 1432.0, 1431.0, 1356.0, 1360.0, 1364.0, 1286.0, 1440.0, 1440.0, 1313.0, 1389.0, 1385.0, 1265.0, 1442.0, 1435.0, 1432.0, \n 1280.0, 1411.0, 1440.0, 1258.0, 1333.0, 1293.0, 1193.0, 1440.0, 1306.0, 1264.0, 1244.0, 1368.0, 1437.0, 1236.0, 1354.0, 1356.0, \n 1383.0, 1350.0, 1354.0, 1329.0, 1427.0, 1163.0, 1339.0, 1351.0, 1174.0, 1235.0, 1439.0, 1235.0, 1245.0, 1262.0, 1362.0, 1184.0, \n 1207.0, 1359.0, 1443.0, 1205.0, 1192.0, 1364.0, 1233.0, 1281.0, 1295.0, 1357.0, 1191.0, 1329.0, 1294.0, 1334.0, 1265.0, 1207.0, \n 1365.0, 1432.0, 1199.0, 1191.0, 1411.0, 1294.0, 1244.0, 1256.0, 1257.0, 1224.0, 1277.0, 1246.0, 1243.0, 1194.0, 1389.0, 1366.0, \n 1282.0, 1221.0, 1255.0, 1417.0, 1358.0, 1264.0, 1205.0, 1254.0, 1276.0, 1435.0, 1335.0, 1355.0, 1337.0, 1197.0, 1423.0, 1194.0, \n 1310.0, 1255.0, 1300.0, 1388.0, 1385.0, 1255.0, 1434.0, 1232.0, 1402.0, 1435.0, 1160.0, 1193.0, 1422.0, 1235.0, 1219.0, 1410.0, \n 1363.0, 1361.0, 1437.0, 1407.0, 1164.0, 1392.0, 1408.0, 1196.0, 1430.0, 1264.0, 1289.0, 1434.0, 1216.0, 1340.0, 1327.0, 1230.0, \n 1362.0, 1360.0, 1448.0, 1220.0, 1435.0, 1425.0, 1413.0, 1279.0, 1269.0, 1162.0, 1437.0, 1441.0, 1433.0, 1307.0, 1436.0, 1357.0, \n 1437.0, 1308.0, 1207.0, 1420.0, 1338.0, 1311.0, 1328.0, 1417.0, 1394.0, 1336.0, 1160.0, 1231.0, 1422.0, 1294.0, 1434.0, 1289.0]\nnb_periods = len(raw_demand)\nprint(\"nb periods = {}\".format(nb_periods))\n\ndemand = Series(raw_demand, index = range(1, nb_periods+1))\n\n# plot demand\ndemand.plot(title=\"Demand\")",
"Step 4: Set up the prescriptive model\nCreate the DOcplex model\nThe model contains all the business constraints and defines the objective.",
"from docplex.mp.model import Model\n\nucpm = Model(\"ucp\")",
"Define the decision variables\nDecision variables are:\n\nThe variable in_use[u,t] is 1 if and only if unit u is in production at period t.\nThe variable turn_on[u,t] is 1 if and only if unit u is turned on at period t.\nThe variable turn_off[u,t] is 1 if unit u is switched off at period t.\nThe variable production[u,t] is a continuous variable representing the production of energy for unit u at period t.",
"units = all_units\n# periods range from 1 to nb_periods included\nperiods = range(1, nb_periods+1)\n\n# in use[u,t] is true iff unit u is in production at period t\nin_use = ucpm.binary_var_matrix(keys1=units, keys2=periods, name=\"in_use\")\n\n# true if unit u is turned on at period t\nturn_on = ucpm.binary_var_matrix(keys1=units, keys2=periods, name=\"turn_on\")\n\n# true if unit u is switched off at period t\n# modeled as a continuous 0-1 variable, more on this later\nturn_off = ucpm.continuous_var_matrix(keys1=units, keys2=periods, lb=0, ub=1, name=\"turn_off\")\n\n# production of energy for unit u at period t\nproduction = ucpm.continuous_var_matrix(keys1=units, keys2=periods, name=\"p\")\n\n# at this stage we have defined the decision variables.\nucpm.print_information()\n\n# Organize all decision variables in a DataFrame indexed by 'units' and 'periods'\ndf_decision_vars = DataFrame({'in_use': in_use, 'turn_on': turn_on, 'turn_off': turn_off, 'production': production})\n# Set index names\ndf_decision_vars.index.names=['units', 'periods']\n\n# Display first few rows of 'df_decision_vars' DataFrame\ndf_decision_vars.head()",
"Express the business constraints\nLinking in-use status to production\nWhenever the unit is in use, the production must be within the minimum and maximum generation.",
"# Create a join between 'df_decision_vars' and 'df_up' Data Frames based on common index id (ie: 'units')\n# In 'df_up', one keeps only relevant columns: 'min_gen' and 'max_gen'\ndf_join_decision_vars_up = df_decision_vars.join(df_up[['min_gen', 'max_gen']], how='inner')\n\n# Display first few rows of joined Data Frames\ndf_join_decision_vars_up.head()\n\n# When in use, the production level is constrained to be between min and max generation.\nfor item in df_join_decision_vars_up.itertuples(index=False):\n    ucpm += (item.production <= item.max_gen * item.in_use)\n    ucpm += (item.production >= item.min_gen * item.in_use)",
"Initial state\nThe solution must take into account the initial state. The initial state of use of the unit is determined by its initial production level.",
"# Initial state\n# If initial production is nonzero, then period #1 is not a turn_on\n# else turn_on equals in_use\n# Dual logic is implemented for turn_off\nfor u in units:\n if df_up.initial[u] > 0:\n # if u is already running, not starting up\n ucpm.add_constraint(turn_on[u, 1] == 0)\n # turnoff iff not in use\n ucpm.add_constraint(turn_off[u, 1] + in_use[u, 1] == 1)\n else:\n # turn on at 1 iff in use at 1\n ucpm.add_constraint(turn_on[u, 1] == in_use[u, 1])\n # already off, not switched off at t==1\n ucpm.add_constraint(turn_off[u, 1] == 0)\nucpm.print_information()",
"Ramp-up / ramp-down constraint\nVariations of the production level over time in a unit is constrained by a ramp-up / ramp-down process.\nWe use the pandas groupby operation to collect all decision variables for each unit in separate series. Then, we iterate over units to post constraints enforcing the ramp-up / ramp-down process by setting upper bounds on the variation of the production level for consecutive periods.",
"# Use groupby operation to process each unit\nfor unit, r in df_decision_vars.groupby(level='units'):\n u_ramp_up = df_up.ramp_up[unit]\n u_ramp_down = df_up.ramp_down[unit]\n u_initial = df_up.initial[unit]\n # Initial ramp up/down\n # Note that r.production is a Series that can be indexed as an array (ie: first item index = 0)\n ucpm.add_constraint(r.production[0] - u_initial <= u_ramp_up)\n ucpm.add_constraint(u_initial - r.production[0] <= u_ramp_down)\n for (p_curr, p_next) in zip(r.production, r.production[1:]):\n ucpm.add_constraint(p_next - p_curr <= u_ramp_up)\n ucpm.add_constraint(p_curr - p_next <= u_ramp_down)\n\nucpm.print_information()",
"Turn on / turn off\nThe following constraints determine when a unit is turned on or off.\nWe use the same pandas groupby operation as in the previous constraint to iterate over the sequence of decision variables for each unit.",
"# Turn_on, turn_off\n# Use groupby operation to process each unit\nfor unit, r in df_decision_vars.groupby(level='units'):\n    for (in_use_curr, in_use_next, turn_on_next, turn_off_next) in zip(r.in_use, r.in_use[1:], r.turn_on[1:], r.turn_off[1:]):\n        # if unit is off at time t and on at time t+1, then it was turned on at time t+1\n        ucpm.add_constraint(in_use_next - in_use_curr <= turn_on_next)\n\n        # if unit is on at time t and time t+1, then it was not turned on at time t+1\n        # ucpm.add_constraint(in_use_next + in_use_curr + turn_on_next <= 2)\n\n        # if unit is on at time t and off at time t+1, then it was turned off at time t+1\n        ucpm.add_constraint(in_use_curr - in_use_next + turn_on_next == turn_off_next)\nucpm.print_information()",
"Minimum uptime and downtime\nWhen a unit is turned on, it cannot be turned off before a minimum uptime. Conversely, when a unit is turned off, it cannot be turned on again before a minimum downtime.\nAgain, let's use the same pandas groupby operation to implement this constraint for each unit.",
"# Minimum uptime, downtime\nfor unit, r in df_decision_vars.groupby(level='units'):\n min_uptime = df_up.min_uptime[unit]\n min_downtime = df_up.min_downtime[unit]\n # Note that r.turn_on and r.in_use are Series that can be indexed as arrays (ie: first item index = 0)\n for t in range(min_uptime, nb_periods):\n ctname = \"min_up_{0!s}_{1}\".format(*r.index[t])\n ucpm.add_constraint(ucpm.sum(r.turn_on[(t - min_uptime) + 1:t + 1]) <= r.in_use[t], ctname)\n\n for t in range(min_downtime, nb_periods):\n ctname = \"min_down_{0!s}_{1}\".format(*r.index[t])\n ucpm.add_constraint(ucpm.sum(r.turn_off[(t - min_downtime) + 1:t + 1]) <= 1 - r.in_use[t], ctname)\n",
"Demand constraint\nTotal production level must be equal or higher than demand on any period.\nThis time, the pandas operation groupby is performed on \"periods\" since we have to iterate over the list of all units for each period.",
"# Enforcing demand\n# we use a >= here to be more robust, \n# objective will ensure we produce efficiently\nfor period, r in df_decision_vars.groupby(level='periods'):\n total_demand = demand[period]\n ctname = \"ct_meet_demand_%d\" % period\n ucpm.add_constraint(ucpm.sum(r.production) >= total_demand, ctname) ",
"Express the objective\nOperating the different units incur different costs: fixed cost, variable cost, startup cost, co2 cost.\nIn a first step, we define the objective as a non-weighted sum of all these costs.\nThe following pandas join operation groups all the data to calculate the objective in a single DataFrame.",
"# Create a join between 'df_decision_vars' and 'df_up' Data Frames based on common index ids (ie: 'units')\n# In 'df_up', one keeps only relevant columns: 'fixed_cost', 'variable_cost', 'start_cost' and 'co2_cost'\ndf_join_obj = df_decision_vars.join(\n df_up[['fixed_cost', 'variable_cost', 'start_cost', 'co2_cost']], how='inner')\n\n# Display first few rows of joined Data Frame\ndf_join_obj.head()\n\n# objective\ntotal_fixed_cost = ucpm.sum(df_join_obj.in_use * df_join_obj.fixed_cost)\ntotal_variable_cost = ucpm.sum(df_join_obj.production * df_join_obj.variable_cost)\ntotal_startup_cost = ucpm.sum(df_join_obj.turn_on * df_join_obj.start_cost)\ntotal_co2_cost = ucpm.sum(df_join_obj.production * df_join_obj.co2_cost)\ntotal_economic_cost = total_fixed_cost + total_variable_cost + total_startup_cost\n\ntotal_nb_used = ucpm.sum(df_decision_vars.in_use)\ntotal_nb_starts = ucpm.sum(df_decision_vars.turn_on)\n\n# store expression kpis to retrieve them later.\nucpm.add_kpi(total_fixed_cost , \"Total Fixed Cost\")\nucpm.add_kpi(total_variable_cost, \"Total Variable Cost\")\nucpm.add_kpi(total_startup_cost , \"Total Startup Cost\")\nucpm.add_kpi(total_economic_cost, \"Total Economic Cost\")\nucpm.add_kpi(total_co2_cost , \"Total CO2 Cost\")\nucpm.add_kpi(total_nb_used, \"Total #used\")\nucpm.add_kpi(total_nb_starts, \"Total #starts\")\n\n# minimize sum of all costs\nucpm.minimize(total_fixed_cost + total_variable_cost + total_startup_cost + total_co2_cost)",
"Solve with Decision Optimization\nIf you are using the Community Edition of the CPLEX runtimes, the solve stage may fail on a problem of this size and require a paid subscription or a product installation.",
"ucpm.print_information()\n\nassert ucpm.solve(), \"!!! Solve of the model fails\"\n\nucpm.report()",
"Step 5: Investigate the solution and then run an example analysis\nNow let's store the results in a new pandas DataFrame.\nFor convenience, the different figures are organized into pivot tables with periods as row index and units as columns. The pandas unstack operation does this for us.",
"df_prods = df_decision_vars.production.apply(lambda v: v.solution_value).unstack(level='units')\ndf_used = df_decision_vars.in_use.apply(lambda v: v.solution_value).unstack(level='units')\ndf_started = df_decision_vars.turn_on.apply(lambda v: v.solution_value).unstack(level='units')\n\n# Display the first few rows of the pivoted 'production' data\ndf_prods.head()",
"From these raw DataFrame results, we can compute derived results.\nFor example, for a given unit and period, the reserve r(u,t) is defined as\nthe unit's maximum generation minus the current production.",
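"As a quick numeric check of this definition, here is a toy computation with made-up capacities and production values (not the solved model):\n\n```python\nfrom pandas import DataFrame\n\nmax_gen = {\"coal1\": 425.0, \"gas1\": 220.0}  # hypothetical capacities\nprods = DataFrame({\"coal1\": [400.0, 300.0], \"gas1\": [200.0, 150.0]}, index=[1, 2])\n\n# broadcast the scalar capacities over the period index, then subtract production\nspins = DataFrame(max_gen, index=prods.index) - prods\nprint(spins)\n```\n\nEach entry of spins is the unused capacity (reserve) of that unit in that period, e.g. 425 - 400 = 25 for coal1 in period 1.",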
"df_spins = DataFrame(df_up.max_gen.to_dict(), index=periods) - df_prods\n\n# Display the first few rows of the 'df_spins' Data Frame, representing the reserve for each unit, over time\ndf_spins.head()",
"Let's plot the evolution of the reserves for the \"coal2\" unit:",
"df_spins.coal2.plot(style='o-', ylim=[0,200])",
"Now we want to sum all unit reserves to compute the global spinning reserve.\nWe need to sum all columns of the DataFrame to get an aggregated time series. We use the pandas sum method\nwith axis=1, which sums across columns and yields one value per row (that is, per period).",
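"The axis convention is easy to get backwards, so here is a minimal illustration with toy numbers:\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({\"u1\": [10, 20], \"u2\": [1, 2]}, index=[1, 2])\nrow_totals = df.sum(axis=1)  # sums across columns: one value per period (row)\ncol_totals = df.sum(axis=0)  # sums down rows: one value per unit (column)\nprint(row_totals)\nprint(col_totals)\n```",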
"global_spin = df_spins.sum(axis=1)\nglobal_spin.plot(title=\"Global spinning reserve\")",
"Number of plants online by period\nThe total number of plants online at each period t is the sum of in_use variables for all units at this period.\nAgain, we use the pandas sum with axis=1 to sum over all units, yielding one value per period.",
"df_used.sum(axis=1).plot(title=\"Number of plants online\", kind='line', style=\"r-\", ylim=[0, len(units)])",
"Costs by period",
"# extract unit cost data\nall_costs = [\"fixed_cost\", \"variable_cost\", \"start_cost\", \"co2_cost\"]\ndf_costs = df_up[all_costs]\n\nrunning_cost = df_used * df_costs.fixed_cost\nstartup_cost = df_started * df_costs.start_cost\nvariable_cost = df_prods * df_costs.variable_cost\nco2_cost = df_prods * df_costs.co2_cost\ntotal_cost = running_cost + startup_cost + variable_cost + co2_cost\n\nrunning_cost.sum(axis=1).plot(style='g')\nstartup_cost.sum(axis=1).plot(style='r')\nvariable_cost.sum(axis=1).plot(style='b',logy=True)\nco2_cost.sum(axis=1).plot(style='k')",
"Cost breakdown by unit and by energy",
"# Calculate sum by column (by default, axis = 0) to get total cost for each unit\ncost_by_unit = total_cost.sum()\n\n# Create a dictionary storing energy type for each unit, from the corresponding pandas Series\nunit_energies = df_up.energy.to_dict()\n\n# Group cost by unit type and plot total cost by energy type in a pie chart\ngb = cost_by_unit.groupby(unit_energies)\n# gb.sum().plot(kind='pie')\ngb.sum().plot.pie(figsize=(6, 6),autopct='%.2f',fontsize=15)\n\nplt.title('total cost by energy type', bbox={'facecolor':'0.8', 'pad':5})",
"Arbitration between CO<sub>2</sub> cost and economic cost\nEconomic cost and CO<sub>2</sub> cost usually push in opposite directions.\nIn the above discussion, we have minimized the raw sum of economic cost and CO<sub>2</sub> cost, without weights.\nBut how good could we be on CO<sub>2</sub>, regardless of economic constraints? \nTo know this, let's solve again with CO<sub>2</sub> cost as the only objective.",
"# first retrieve the co2 and economic kpis\nco2_kpi = ucpm.kpi_by_name(\"co2\") # does a name matching\neco_kpi = ucpm.kpi_by_name(\"eco\")\nprev_co2_cost = co2_kpi.compute()\nprev_eco_cost = eco_kpi.compute()\nprint(\"* current CO2 cost is: {}\".format(prev_co2_cost))\nprint(\"* current $$$ cost is: {}\".format(prev_eco_cost))\n# now set the objective\nold_objective = ucpm.objective_expr # save it\nucpm.minimize(co2_kpi.as_expression())\n\nassert ucpm.solve(), \"Solve failed\"\n\nmin_co2_cost = ucpm.objective_value\nmin_co2_eco_cost = eco_kpi.compute()\nprint(\"* absolute minimum for CO2 cost is {}\".format(min_co2_cost))\nprint(\"* at this point $$$ cost is {}\".format(min_co2_eco_cost))",
"As expected, we get a significantly lower CO<sub>2</sub> cost when minimized alone, at the price of a higher economic cost.\nWe could do a similar analysis for economic cost to estimate the absolute minimum of\nthe economic cost, regardless of CO<sub>2</sub> cost.",
"# minimize only economic cost\nucpm.minimize(eco_kpi.as_expression())\n\nassert ucpm.solve(), \"Solve failed\"\n\nmin_eco_cost = ucpm.objective_value\nmin_eco_co2_cost = co2_kpi.compute()\nprint(\"* absolute minimum for $$$ cost is {}\".format(min_eco_cost))\nprint(\"* at this point CO2 cost is {}\".format(min_eco_co2_cost))",
"Again, the absolute minimum for the economic cost is lower than the figure we obtained in the original model, where we minimized the sum of economic and CO<sub>2</sub> costs, but here the CO<sub>2</sub> cost increases significantly.\nBut what happens in between these two extreme points?\nTo investigate, we will divide the interval of CO<sub>2</sub> cost values into smaller intervals, add an upper limit on CO<sub>2</sub>,\nand minimize economic cost under this constraint. This gives us a Pareto-optimal point with at most this CO<sub>2</sub> value.\nTo avoid adding many constraints, we add only one constraint with an extra variable, and we change only the upper bound\nof this CO<sub>2</sub> limit variable between successive solves.\nThen we iterate (with a fixed number of iterations) and collect the cost values.",
"# add extra variable\nco2_limit = ucpm.continuous_var(lb=0)\n# add a named constraint which limits total co2 cost to this variable:\nmax_co2_ctname = \"ct_max_co2\"\nco2_ct = ucpm.add_constraint(co2_kpi.as_expression() <= co2_limit, max_co2_ctname) \n\nco2min = min_co2_cost\nco2max = min_eco_co2_cost\ndef explore_ucp(nb_iters, eps=1e-5):\n    step = (co2max-co2min)/float(nb_iters)\n\n    # ensure we minimize eco\n    ucpm.minimize(eco_kpi.as_expression())\n    all_co2s = []\n    all_ecos = []\n    for k in range(nb_iters+1):\n        co2_ub = co2min + k * step\n        print(\" iteration #{0} co2_ub={1}\".format(k, co2_ub))\n        co2_limit.ub = co2_ub + eps\n        assert ucpm.solve() is not None, \"Solve failed\"\n        cur_co2 = co2_kpi.compute()\n        cur_eco = eco_kpi.compute()\n        all_co2s.append(cur_co2)\n        all_ecos.append(cur_eco)\n    return all_co2s, all_ecos\n\n# explore the co2/eco frontier in 50 points\nco2s, ecos = explore_ucp(nb_iters=50)\n\n# normalize all values by dividing by their maximum\neco_max = min_co2_eco_cost\nnxs = [c / co2max for c in co2s]\nnys = [e / eco_max for e in ecos]\n# plot a scatter chart of x=co2, y=costs\nplt.scatter(nxs, nys)\n# plot the initial (non-weighted) solution as one highlighted point\nplt.plot(prev_co2_cost/co2max, prev_eco_cost/eco_max, \"rH\", markersize=16)\nplt.xlabel(\"co2 cost\")\nplt.ylabel(\"economic cost\")\nplt.show()",
"This figure demonstrates that the result obtained in the initial model clearly favored\neconomic cost over CO<sub>2</sub> cost: CO<sub>2</sub> cost is well above 95% of its maximum value.\nSummary\nYou learned how to set up and use IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization on Cloud.\nReferences\n\nCPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here.\nContact us at dofeedback@wwpdl.vnet.ibm.com.\n\nCopyright © 2017-2021 IBM. IPLA licensed Sample Materials."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
usantamaria/iwi131
|
ipynb/26-EjerciciosTipoCertamen/Ejercicios.ipynb
|
cc0-1.0
|
[
"\"\"\"\nIPython Notebook v4.0 for Python 2.7\nAdditional libraries: none.\nContent under CC-BY 4.0 license. Code under MIT license. (c) Sebastian Flores.\n\"\"\"\n\n# Configuration to automatically reload modules and libraries\n%reload_ext autoreload\n%autoreload 2\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/iwi131.css\", \"r\").read())",
"<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" align=\"left\"/>\n<img src=\"images/inf.png\" alt=\"\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nIWI131\nProgramación de Computadores\nSebastián Flores\nhttp://progra.usm.cl/ \nhttps://www.github.com/usantamaria/iwi131",
"m = 'Edkhy\u001f`°n\u001fmtdun '\ns = \"\"\nfor c in m:\n s += chr(ord(c)+1)\nprint s\n\nm = 'Mensaje Secreto'\ns = \"\"\nfor c in m:\n s += chr(ord(c)-1)\nprint s",
"Dates\n\nWednesday January 6, 8:00. Activity 5.\nFriday January 8, 15:40. Exam 3.\nMonday January 18, 8:00. Make-up exam.\n\nProblems requested by e-mail\n\nTelegraph.\n\n1. Telegraph\nGiven a message, compute the cost of sending it by telegraph. Each letter costs \\$10, special characters that are not letters cost \\$30, and each digit costs \\$20. Spaces are free.\nThe message must be a string, and the Spanish letters (ñ, á, é, í, ó, ú) count as special characters.\nMensaje: Feliz Aniversario!\nSu mensaje cuesta $190",
"def costo_mensaje(msg):\n return 0\n\nmsg = raw_input(\"Mensaje: \")\nwhile len(msg)!=0:\n costo = costo_mensaje(msg)\n print \"Su mensaje cuesta ${0}\".format(costo)\n msg = raw_input(\"Mensaje: \")\n\ndef costo_mensaje(msg):\n letras_sin_valor = \" \"\n letras_normales = \"abcdefghijklmnopqrstuvwxyz\"\n digitos = \"0123456789\"\n costo = 0\n for letra in msg.lower():\n if letra in letras_sin_valor:\n costo += 0\n elif letra in letras_normales:\n costo += 10\n elif letra in digitos:\n costo += 20\n else:\n costo += 30\n return costo\n\nmsg = raw_input(\"Mensaje: \")\nwhile len(msg)!=0:\n costo = costo_mensaje(msg)\n print \"Su mensaje cuesta ${0}\".format(costo)\n msg = raw_input(\"Mensaje: \")",
"Question 2, Exam 3, First Semester, 2014.\nThe records of an earthquake are stored in a file with the following structure (always in the same order):\n{mag:float,place:string,dept:float,\ntsunami:integer,date:string,time:string}\nConsider the file registro.geojson as an example:\n{mag:7.0,place:Iquique,dept:10.0,\n tsunami:1,date:2014-03-15,time:15:44:13}\n{mag:5.8,place:Salvador,dept:23.0,\n tsunami:0,date:2014-03-17,time:06:11:08}\n{mag:3.1,place:California,dept:22.0,\n tsunami:0,date:2014-03-17,time:17:55:33}\n{mag:2.5,place:Quilpue,dept:10.0,\n tsunami:0,date:2014-03-23,time:02:41:09}\n{mag:4.6,place:Iquique,dept:98.0,\n tsunami:0,date:2014-03-28,time:20:34:22}\n(a) Read one line\nWrite the function interpretar_geojson(linea) that takes as parameter a string\nwith the same structure as the lines of the geojson file and returns a dictionary with the\ndata formatted as shown in the example.\n```Python\n\n\n\ninterpretar_geojson('{mag:5.8,place:Salvador,dept:23.0,tsunami:0, date:2014-03-17,time:06:11:08}')\n{'mag': 5.8, \n 'place': 'Salvador', \n 'dept': 23.0, \n 'tsunami': 0, \n 'date': (2014, 3, 17), \n 'time': '06:11:08'}\n ```",
"def interpretar_geojson(linea):\n    d = {}\n    return d\n    \n    \nprint interpretar_geojson('{mag:5.8,place:Salvador,dept:23.0,tsunami:0, date:2014-03-17,time:06:11:08}')\n\ndef interpretar_geojson(linea):\n    d = {}\n    # remove unnecessary characters: {, } and \\n\n    # split the fields\n    # process the fields\n    # return the dictionary\n    return d    \n    \nprint interpretar_geojson('{mag:5.8,place:Salvador,dept:23.0,tsunami:0, date:2014-03-17,time:06:11:08}')\n\ndef interpretar_geojson(linea):\n    d = {}\n    # remove unnecessary characters: { and }. Note: no trailing \\n here\n    linea = linea[1:-1]\n    # split the fields\n    datos = linea.split(\",\")\n    # process the fields\n    mag = datos[0].split(\":\")[-1]\n    d[\"mag\"] = float(mag)\n    place = datos[1].split(\":\")[-1]\n    d[\"place\"] = place\n    dept = datos[2].split(\":\")[-1]\n    d[\"dept\"] = float(dept)\n    tsunami = datos[3].split(\":\")[-1]\n    d[\"tsunami\"] = int(tsunami)\n    date = datos[4].split(\":\")[-1]\n    yyyy,mm,dd = date.split(\"-\")\n    d[\"date\"] = (int(yyyy),int(mm),int(dd))\n    time = datos[5].replace(\"time:\",\"\") # careful: the time value itself contains ':'\n    d[\"time\"] = time\n    # return the dictionary\n    return d    \n    \nprint interpretar_geojson('{mag:5.8,place:Salvador,dept:23.0,tsunami:0, date:2014-03-17,time:06:11:08}')",
"(b) Largest earthquake\nWrite the function mayor_sismo(nombre_archivo) that takes as parameter the file name\nand returns a tuple with: the magnitude, place, and date (also as a tuple) of the earthquake with the largest magnitude.\n```Python\n\n\n\nmayor_sismo(\"registro.geojson\")\n(7.0, 'Iquique', (2014, 3, 15))\n ```",
"def mayor_sismo(nombre_archivo):\n    return ()\n\nprint mayor_sismo(\"data/registro.geojson\")\n\ndef mayor_sismo(nombre_archivo):\n    # Open the file\n    # Initialize the largest magnitude\n    # Iterate over the lines of the file\n        # Process each line; watch out for \\n\n        # Update the maximum\n    # Close the file\n    return ()\n\nprint mayor_sismo(\"data/registro.geojson\")\n\ndef mayor_sismo(nombre_archivo):\n    # Open the file\n    archivo = open(nombre_archivo)\n    # Initialize the largest magnitude\n    mayor_mag = -float(\"inf\")\n    mayor_tupla = ()\n    # Iterate over the lines of the file\n    for linea in archivo:\n        # Process each line; watch out for \\n\n        d = interpretar_geojson(linea.strip())\n        # Update the maximum\n        if d[\"mag\"] > mayor_mag:\n            mayor_mag = d[\"mag\"]\n            mayor_tupla = (d[\"mag\"], d[\"place\"], d[\"date\"])\n    # Close the file\n    archivo.close()\n    return mayor_tupla\n\nprint mayor_sismo(\"data/registro.geojson\")",
"(c) Display records\nWrite the function mostrar_registro(nombre_archivo, mag) that takes as parameters the file name and a real number. The function must print the records whose magnitude is greater than or equal to the second parameter, in the format:\nPLACE <-> mag <-> dept <-> date-time\nFollow the example. Also note that the epicenter place is printed in upper case.\nThe function returns nothing.\n```\n\n\n\nmostrar_registro(\"registro.geojson\",4.4)\nIQUIQUE <-> 7.0 <-> 10.0 <-> 2014-03-15-15:44:13\nSALVADOR <-> 5.8 <-> 23.0 <-> 2014-03-17-06:11:08\nIQUIQUE <-> 4.6 <-> 98.0 <-> 2014-03-28-20:34:22\n ```",
"def mostrar_registro(nombre_archivo, mag):\n    return None\n\nmostrar_registro(\"data/registro.geojson\", 4.4)\n\ndef mostrar_registro(nombre_archivo, mag):\n    # Open the file\n    # Iterate over the lines of the file\n        # Process each line\n        # If the magnitude is >= the given one, print in the correct format\n    # Close the file\n    return None\n\nmostrar_registro(\"data/registro.geojson\", 4.4)\n\ndef mostrar_registro(nombre_archivo, mag):\n    # Open the file\n    archivo = open(nombre_archivo)\n    # Iterate over the lines of the file\n    for linea in archivo:\n        # Process each line\n        d = interpretar_geojson(linea.strip())\n        # If the magnitude is >= the given one, print in the correct format\n        if d[\"mag\"]>=mag:\n            date = \"-\".join(map(str,d[\"date\"]))\n            date_time = date + \"-\" + d[\"time\"]\n            datos = (d[\"place\"].upper(), d[\"mag\"], d[\"dept\"], date_time)\n            linea_imprimir = \" <-> \".join( map(str, datos) )  \n            print linea_imprimir\n    # Close the file\n    archivo.close()\n    return None\n\nmostrar_registro(\"data/registro.geojson\", 4.4)",
"Question 3, Exam 3, First Semester, 2014.\nThe results of all the group-stage matches of the 2014 soccer World Cup are stored in text files such as the ones shown below.\nGrupo1.txt\nBrasil;3-Croacia;1\nMexico;1-Camerun;0\nBrasil;0-Mexico;0\nCamerun;0-Croacia;4\nCamerun;1-Brasil;4\nCroacia;1-Mexico;3\nGrupo2.txt\nEspania;1-Holanda;5\nChile;3-Australia;1\nAustralia;2-Holanda;3\nEspania;0-Chile;3\nHolanda;2-Chile;0\nAustralia;0-Espania;3\nNote that the files Grupo1.txt through Grupo8.txt are available.\n(a) Get the teams\nWrite the function obtener_equipos(archivo), which takes a file name as parameter and returns a list with all the teams found in the file.\n```Python\n\n\n\nprint obtener_equipos('Grupo2.txt')\n['Chile', 'Australia', 'Espania', 'Holanda']\n```",
"def obtener_equipos(archivo):\n    return []\n\nprint obtener_equipos('data/Grupo2.txt')\n\ndef obtener_equipos(archivo):\n    # Open the file\n    # Initialize a set\n    # Read each line of the file\n        # Add the teams to the set\n    # Close the file\n    # Convert to a list and return\n    return []\n\nprint obtener_equipos('data/Grupo2.txt')\n\ndef obtener_equipos(archivo):\n    # Open the file\n    arch = open(archivo)\n    # Initialize a set\n    paises = set()\n    # Read each line of the file\n    for linea in arch:\n        # Add the teams to the set\n        p1,p2 = linea.split(\"-\")\n        pais,_ = p1.split(\";\")\n        paises.add(pais)\n        pais = p2.split(\";\")[0]\n        paises.add(pais)\n    # Close the file\n    arch.close()\n    # Convert to a list and return\n    return list(paises)\n\nfor i in range(1,9):\n    print obtener_equipos('data/Grupo{0}.txt'.format(i))",
"(b) Get the qualified teams\nWrite the function obtener_clasificados(archivo), which takes a file name as parameter and returns a tuple with the names of the teams that finished first and second in the group (in that order). If two teams have the same number of points, use the number of goals scored as a tie-breaker; if the tie persists, return either one. A win is worth 3 points, a draw 1 point, and a loss 0 points.\n```Python\n\n\n\nprint obtener_clasificados('Grupo2.txt')\n('Holanda', 'Chile')\n```",
"def obtener_clasificados(archivo):\n    return ()\n\nprint obtener_clasificados('data/Grupo2.txt')\n\ndef obtener_clasificados(archivo):\n    # Compute the points\n    # Find the first qualified team\n    # Find the second qualified team\n    return ()\n\nprint obtener_clasificados('data/Grupo2.txt')\n\ndef obtener_clasificados(archivo):\n    # Open the file\n    arch = open(archivo)\n    # Initialize the points dictionary\n    puntajes = {}\n    # Read each line of the file\n    for linea in arch:\n        pg1,pg2 = linea.strip().split(\"-\")\n        pais1,goles1 = pg1.split(\";\")\n        pais2,goles2 = pg2.split(\";\")\n        # convert the goals to int, otherwise they would be compared as strings\n        goles1 = int(goles1)\n        goles2 = int(goles2)\n        if pais1 not in puntajes:\n            puntajes[pais1] = 0\n        if pais2 not in puntajes:\n            puntajes[pais2] = 0\n        if goles1>goles2:\n            puntajes[pais1] += 3\n        elif goles1<goles2:\n            puntajes[pais2] += 3\n        else:\n            puntajes[pais1] += 1\n            puntajes[pais2] += 1\n    # Close the file\n    arch.close()\n    # Find the first qualified team\n    mayor_puntaje = -1\n    primer_clasificado = \"\"\n    for pais in puntajes:\n        if puntajes[pais]>mayor_puntaje:\n            primer_clasificado = pais\n            mayor_puntaje = puntajes[pais]\n    # Find the second qualified team\n    del puntajes[primer_clasificado]\n    mayor_puntaje = -1\n    segundo_clasificado = \"\"\n    for pais in puntajes:\n        if puntajes[pais]>mayor_puntaje:\n            segundo_clasificado = pais\n            mayor_puntaje = puntajes[pais]\n    \n    return (primer_clasificado, segundo_clasificado)\n\nprint obtener_clasificados('data/Grupo2.txt')",
"Of course, the method above is rather inelegant. We can do better.",
"def obtener_puntajes(archivo):\n    # Open the file\n    arch = open(archivo)\n    # Initialize the points dictionary\n    puntajes = {}\n    # Read each line of the file\n    for linea in arch:\n        pg1,pg2 = linea.strip().split(\"-\")\n        pais1,goles1 = pg1.split(\";\")\n        pais2,goles2 = pg2.split(\";\")\n        # convert the goals to int, otherwise they would be compared as strings\n        goles1 = int(goles1)\n        goles2 = int(goles2)\n        if pais1 not in puntajes:\n            puntajes[pais1] = 0\n        if pais2 not in puntajes:\n            puntajes[pais2] = 0\n        if goles1>goles2:\n            puntajes[pais1] += 3\n        elif goles1<goles2:\n            puntajes[pais2] += 3\n        else:\n            puntajes[pais1] += 1\n            puntajes[pais2] += 1\n    # Close the file\n    arch.close()\n    return puntajes",
"How can we now define function (a), obtener_paises?",
"# Question (a)\ndef obtener_paises(archivo):\n    return obtener_puntajes(archivo).keys()\n\nprint obtener_paises('data/Grupo2.txt')",
"How can we now define function (b), obtener_clasificados?",
"def obtener_clasificados(archivo, k=2):\n    # Get the points dictionary\n    puntajes = obtener_puntajes(archivo)\n    # Build a list with the points first, so sorting compares points\n    lista = list()\n    for pais in puntajes:\n        lista.append((puntajes[pais], pais))\n    # Sort the list in decreasing order\n    lista.sort()\n    lista.reverse()\n    # Return the first k\n    primeros = list()\n    for i in range(k):\n        puntaje, pais = lista[i]\n        primeros.append(pais)\n    return tuple(primeros)\n\nprint obtener_clasificados('data/Grupo2.txt')",
"O, aún mejor (pero más riesgoso en un certamen)...",
"def obtener_clasificados(archivo, k=2):\n # Obtener el diccionario de puntajes\n puntajes = obtener_puntajes(archivo)\n # Ordenar el dict\n primeros = sorted(puntajes, key=puntajes.get, reverse=True)\n return tuple(primeros[:k])\n\nfor i in range(1,9):\n print obtener_clasificados('data/Grupo{0}.txt'.format(i), 2)",
"(c) Partidos de Octavos de Final\nDesarrolle la función partidos_octavos(), la cual no recibe parámetros. Esta función debe crear el archivo Partidos_octavos.txt el cual debe poner en cada línea del archivo los equipos que se enfrentarán en octavos de final. Los partidos se forman de la siguiente forma:\nel primero del grupo1 se enfrenta al segundo del grupo2, el primero del grupo2 se enfrenta al\nsegundo del grupo1, lo mismo sucede para los grupos 3-4, grupos 5-6 y grupo 7-8.\n```Python\n\n\n\npartidos_octavos()\nDebería generar el archivo `Partidos_octavos.txt` con el siguiente contenido:\nBrasil v/s Chile\nHolanda v/s Mexico\nColombia v/s Uruguay\nCosta Rica v/s Grecia\nFrancia v/s Nigeria\nArgentina v/s Suiza\nAlemania v/s Argelia\nBelgica v/s EEUU\n```",
"def partidos_octavos():\n # Abrir archivo para escribir lineas\n # Obtener clasificados para grupos 1 y 2\n # Escribibir pares cruzados\n # Obtener clasificados para grupos 3 y 4\n # Escribibir pares cruzados\n # Obtener clasificados para grupos 5 y 6\n # Escribibir pares cruzados\n # Obtener clasificados para grupos 7 y 8\n # Escribibir pares cruzados\n # Cerrar archivo\n return None\n\npartidos_octavos()\n\ndef partidos_octavos():\n grupos = ((1,2),(3,4),(5,6),(7,8))\n archivo = open(\"data/Partidos_octavos.txt\", \"w\")\n template = \"{0} v/s {1}\\n\"\n pa1, pa2 = obtener_clasificados('data/Grupo1.txt')\n pb1, pb2 = obtener_clasificados('data/Grupo2.txt')\n archivo.write( template.format(pa1, pb2) )\n archivo.write( template.format(pb1, pa2) )\n pa1, pa2 = obtener_clasificados('data/Grupo3.txt')\n pb1, pb2 = obtener_clasificados('data/Grupo4.txt')\n archivo.write( template.format(pa1, pb2) )\n archivo.write( template.format(pb1, pa2) )\n pa1, pa2 = obtener_clasificados('data/Grupo5.txt')\n pb1, pb2 = obtener_clasificados('data/Grupo6.txt')\n archivo.write( template.format(pa1, pb2) )\n archivo.write( template.format(pb1, pa2) )\n pa1, pa2 = obtener_clasificados('data/Grupo7.txt')\n pb1, pb2 = obtener_clasificados('data/Grupo8.txt')\n archivo.write( template.format(pa1, pb2) )\n archivo.write( template.format(pb1, pa2) )\n archivo.close()\n return None\n\npartidos_octavos()\n\ndef partidos_octavos():\n grupos = ((1,2),(3,4),(5,6),(7,8))\n archivo = open(\"data/Partidos_octavos.txt\", \"w\")\n template = \"{0} v/s {1}\\n\"\n for a,b in grupos:\n pa1, pa2 = obtener_clasificados('data/Grupo{0}.txt'.format(a))\n pb1, pb2 = obtener_clasificados('data/Grupo{0}.txt'.format(b))\n archivo.write( template.format(pa1, pb2) )\n archivo.write( template.format(pb1, pa2) )\n archivo.close()\n return None\n\npartidos_octavos()",
"Sobre el certamen\nConsejos para el certamen\n<img src=\"images/advice.png\" alt=\"\" align=\"middle\"/>"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/prog-edu-assistant
|
exercises/dataframe-pre3-master.ipynb
|
apache-2.0
|
[
"import io\n\nimport numpy as np\nimport pandas as pd\nimport plotly_express as px\n\n# MASTER ONLY\nimport ast\n# imports %%solution, %%submission, %%template etc.\n%load_ext prog_edu_assistant_tools.magics\nfrom prog_edu_assistant_tools.magics import report, autotest",
"Data frames 3: 簡単なデータの変換 (Simple data manipulation)\n```\nASSIGNMENT METADATA\nassignment_id: \"DataFrame3\"\n```\nlang:en\nIn this unit, we will get acquainted with a couple of simple techniques to change the data:\n\nFilter rows based on a condition\nCreate new columns as a transformation of other columns\nDrop columns that are no longer needed\n\nLet's start with reading the data.\nlang:ja\nこの講義では、簡単なデータの変換を紹介します。\n\n行を条件によりフィルター(抽出)します\nデータ変換によって新しい列を作ります\n必要だけの列を抽出します\n\nまずはデータを読み込みましょう。",
"# データをCVSファイルから読み込みます。 Read the data from CSV file.\ndf = pd.read_csv('data/15-July-2019-Tokyo-hourly.csv')\n\nprint(\"データフレームの行数は %d\" % len(df))\nprint(df.dtypes)\ndf.head()",
"lang:en Let's consider the question of how one should hold an umbrella when it rains.\nDepending on the wind direction, it's better to slant the umbrella towards the direction\nthe rain is coming from. Therefore, one needs to know the wind direction when it rains.\nFirst step is to limit the data to the hours when there was rain. To accomplish that,\nwe filter the data set by using a condition. The condition is placed in square brackets\nafter the dataframe.\nTechnical details:\n* The inner df['Precipitation_mm'] extracts a single column as a pandas Series object.\n* The comparison df['Precipitation_mm'] > 0' is evaluated as a vector expression, that computes\nthe condition element-wise, resulting in a Series object of the same length with boolean elements\n(true or false).\n* Finally, the indexing of a data frame by the boolean series performs the filtering of the rows\nin the dataframe only to rows which had the corresponding element as True. Note that the original\ndata frame is left unmodified. Instead, a new copy of a data \nlang:ja雨の中の傘の持ち方について考えましょう。風の向きによって、適切な持ち方が変わります。風が来ている方向に傾けると傘の効率がよくなります。\nしたがって、雨のときの風の向きを調べなければいけません。\nまずは雨のなかったデータを除きましょう。そのために条件をつけてデータをフィルターします。\n条件はデータフレームの参照の後に角括弧に入ります。\n詳しく述べると:\n\n角括弧に入っているdf['Precipitation_mm']は一つの列を抽出します。それはpandasのSeriesオブジェクトになります。\n比較表現 df['Precipitation_mm'] > 0' は各行ごとに評価されます、真理値のベクターになります。それもSeriesです。長さはデータフレームの行数です。\nデータフレームの後に角括弧に真理値ベクターを入れるとFalseの行が除かれます。\n\n結果のデータフレームは新しいデータフレームです。既存のデータフレームは変わらないままで、フィルターされたデータフレームを新しい変数に保存します。",
"# This is an example of filtering rows by a condition\n# that is computed over variables in the dataframe.\n# 条件によってデータフレームをフィルターします。\ndf2 = df[df['Precipitation_mm'] > 0]\nlen(df2)",
"lang:en So it was 11 hours out of 24 in a day that the rain was falling. Let's see what the distribution of wind directions was.\nlang:ja 一日の24時間の中に雨が降っていたは11時間がありました。 風の向きを可視化しましょう。 px.histogramはxの値を数えて、個数を棒グラフとして可視化します。",
"px.histogram(df2, x='WindDirection_16compasspoints')",
"lang:en Now we can clearly see that NE was the prevailing wind direction while it rained.\nNote that the result may have been different if we did not filter for the hours with rain:\nlang:ja雨が降ったときに風はNEの方向に吹いたことがわかります。雨だけのデータにフィルターしなければ、グラフは異なる結果がえられます。\n以下はdfは元のデータフレームで、フィルターされたデータフレームはdf2です。",
"px.histogram(df, x='WindDirection_16compasspoints')",
"lang:en We can plot the whole data and use the color dimension to distinguish between hours when it rained or not by using a different technique: instead of filtering rows by some condition, we can introduce the condition \nas a new boolean variable. This is done by assigning to a new column in the data frame:\nlang:jaフィルターに変わりに、可視化によって同じデータを確認ができます。たとえば、雨が降ったかどうかを色で表現します。\nそのために新しい真理値の列を作らなければなりません。以下の例はdfのデータフレームに新しい列を追加します。",
"# This creates a new column named \"rained\" that is a boolean variable \n# indicating whether it was raining in that hour.\n# 新しい真理値の列'rained'を追加します。\ndf['rained'] = df['Precipitation_mm'] > 0\npx.histogram(df, x='WindDirection_16compasspoints', color='rained')",
"lang:en Now let's consider how could we present the same data in a tabular form. If we do not do anything,\nall existing columns in the data frame would be shown, which may make it hard for the reader\nto see the point of the author. To make reading the data easier, we can limit the data output\njust to columns we are interested in.\nlang:ja 今まで解析してきたデータを表の形に表示について考えましょう。 dfのデータフレームをそのまま表示するとたくさんの列が出て、\nどのデータを見せたかったのはとてもわかりにくくなります。 それを解決するために、見せたい列だけを抽出しましょう。",
"# そのままだとデータが多すぎて混乱しやすい。\n# その表を見せてなにがいいたいのか分かりづらい。\ndf\n\n# 列の名前の一覧を見ましょう。\ndf.dtypes\n\n# Indexing by list of column names returns a copy of the data frame just with the named\n# columns.\n# 列の名前を二重角括弧に入れると、列の抽出ができます。 列の名前は以上の`dtypes`の一覧によって確認できます。\ndf[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]",
"予習課題: データの変換 (Data manipulation)\n```\nEXERCISE METADATA\nexercise_id: \"DataManipulation\"\n```\nlang:en\nStarting with the weather data frame df defined above, filter out the data set consisting only of the day hours when sun was shining (i.e. variable SunshineDuration_h > 0), and containing only the following columns:\n* Time_Hour -- extracted from the original data frame.\n* WindDirection_16compasspoints -- extracted from the original data frame.\n* rained -- the boolean indicator of whether it was raining or not (Precipitation_mm > 0). This is a new column that is not present in the original data, so it should be added.\nlang:ja\n以上に定義したdfのデータフレームを使って、以下のデータの表を抽出しましょう。\n* 日が出ていた時間帯のみ (すなわち、SunshineDuration_h > 0)\n以下の列だけを抽出しましょう。\n* Time_Hour -- 元のデータフレームから抽出しましょう。\n* WindDirection_16compasspoints -- 元のデータフレームから抽出しましょう。\n* rained -- 雨があったかどうかの真理値列 (すなわち、Precipitation_mm > 0)。こちらの列は元のデータに入ってないため、追加しなければなりません。",
"%%solution\n\"\"\" # BEGIN PROMPT\n# Note: you can do multiple steps to get the data frame you need.\n# 複数の段階に分けてデータ処理してもよい。\ndf['rained'] = df[...]\nsunny_df = df[...]\nsunny_df = sunny_df[...]\n\"\"\" # END PROMPT\n# BEGIN SOLUTION\ndf['rained'] = df['Precipitation_mm'] > 0\nsunny_df = df[df['SunshineDuration_h'] > 0]\nsunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]\n# END SOLUTION",
"lang:enNote: if you see a warning SettingWithCopyWarning, it means that you are trying to apply transformation\nto a data frame that is a copy or a slice of a different data frame. This is an optimization that Pandas\nlibrary may do on filtering steps to reduce memory use. To avoid this warning, you can either move the new column computation before the filtering step, or add a .copy() call to the filtered data frame to force\ncreating of a full data frame object.\nlang:jaもしSettingWithCopyWarningのエラーが出たら、データフレームのコピーに変更を行うという意味なのです。pandasは、データ抽出のときに\n 自動的にコピーしないような最適化の副作用です。解決のために、データ変更は先にするか、抽出の後に.copy()を呼び出すことができます。",
"# Inspect the data frame\nsunny_df\n\n%%studenttest StudentTest\n# Test your solution\nassert len(sunny_df) == 2, \"The result data frame should only have 2 rows, yours has %d\" % len(sunny_df)\nassert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], \"Sunshine was during 13h,14h, but you got %s\" % sunny_df['Time_Hour']\nassert np.all(sunny_df['rained'] == False), \"It was not raining during sunshine hours!\"\n\n%%inlinetest AutograderTest\n\n# This cell will not be present in the students notebook.\n\nassert 'sunny_df' in globals(), \"Did you define the data frame named 'sunny_df' in the solution cell?\"\nassert sunny_df.__class__ == pd.core.frame.DataFrame, \"Did you define a data frame named 'sunny_df'? 'sunny_df' was a %s instead\" % sunny_df.__class__\nassert len(sunny_df) == 2, \"The data frame should have 2 rows, but you have %d\" % len(sunny_df)\nassert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], \"Sunshine was during 13h,14h, but you got %s\" % sunny_df['Time_Hour']\nassert np.all(sunny_df['rained'] == False), \"It was not raining during sunshine hours!\"\nassert np.all(np.sort(np.unique(sunny_df.columns)) == ['Time_Hour', 'WindDirection_16compasspoints', 'rained']), (\"Expected to see 3 columns: rained, Time_Hour, WindDirection_16compasspoints, but got %d: %s\" % (len(np.unique(sunny_df.columns)), np.sort(np.unique(sunny_df.columns))) )\n\n%%submission\ndf['rained'] = df['Precipitation_mm'] > 0\nsunny_df = df[df['SunshineDuration_h'] > 0]\n#sunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]\n\nimport re\nresult, logs = %autotest AutograderTest\nassert re.match(r'Expected to see 3 columns.*', str(result.results['error']))\nreport(AutograderTest, results=result.results, source=submission_source.source)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
blue-yonder/tsfresh
|
notebooks/examples/03 Feature Extraction Settings.ipynb
|
mit
|
[
"Feature Calculator Settings\nBy default, all feature calculators are used when you call extract_features.\nThere could be multiple reasons why you do not want that:\n* you are only interested on a certain feature (or features)\n* you want to save time during extraction\n* you have ran the feature selection before and already know, which features are relevant\nFor more information on these settings, please have a look into the documentation.",
"from tsfresh.feature_extraction import extract_features\nfrom tsfresh.feature_extraction import settings\n\nimport numpy as np\nimport pandas as pd",
"Construct a time series container\nFor testing, we construct the time series container that includes two sensor time series, \"temperature\" and \"pressure\", for two devices \"a\" and \"b\".",
"df = pd.DataFrame({\"id\": [\"a\", \"a\", \"b\", \"b\"], \"temperature\": [1,2,3,1], \"pressure\": [-1, 2, -1, 7]})\ndf",
"The default_fc_parameters\nWhich features are calculated by tsfresh is controlled by a dictionary that contains a mapping from feature calculator names to their parameters. \nThis dictionary is called fc_parameters. \nIt maps feature calculator names (= keys) to parameters (= values). \nEvery key in the dictionary will be looked up as a function in tsfresh.feature_extraction.feature_calculators and be used to extract features.\ntsfresh comes with some predefined sets of fc_parameters dictionaries:",
"settings.ComprehensiveFCParameters, settings.EfficientFCParameters, settings.MinimalFCParameters",
"For example, to only calculate a very minimal set of features:",
"settings_minimal = settings.MinimalFCParameters() \nsettings_minimal",
"Each key stands for one of the feature calculators. \nThe value are the parameters. If a feature calculator has no parameters, None is used as a value (and as these feature calculators are very simple, they all have no parameters).\nThis dictionary can passed to the extract method, resulting in a few basic time series beeing calculated:",
"X_tsfresh = extract_features(df, column_id=\"id\", default_fc_parameters=settings_minimal)\nX_tsfresh.head()",
"By using the settings_minimal as value of the default_fc_parameters parameter, those settings are used for all type of time series. \nIn this case, the settings_minimal dictionary is used for both \"temperature\" and \"pressure\" time series.\nPlease note how the columns in the resulting dataframe depend both on the settings as well as the kinds of the data.\nNow, lets say we want to remove the length feature and prevent it from beeing calculated. We just delete it from the dictionary.",
"del settings_minimal[\"length\"]\nsettings_minimal",
"Now, if we extract features for this reduced dictionary, the length feature will not be calculated",
"X_tsfresh = extract_features(df, column_id=\"id\", default_fc_parameters=settings_minimal)\nX_tsfresh.head()",
"The kind_to_fc_parameters\nNow, lets say we do not want to calculate the same features for both type of time series. Instead there should be different sets of features for each kind.\nTo do that, we can use the kind_to_fc_parameters parameter, which lets us specifiy which fc_parameters we want to use for which kind of time series:",
"fc_parameters_pressure = {\"length\": None, \n \"sum_values\": None}\n\nfc_parameters_temperature = {\"maximum\": None, \n \"minimum\": None}\n\nkind_to_fc_parameters = {\n \"temperature\": fc_parameters_temperature,\n \"pressure\": fc_parameters_pressure\n}\n\nprint(kind_to_fc_parameters)",
"So, in this case, for sensor \"pressure\" both \"max\" and \"min\" are calculated. \nFor the \"temperature\" signal, the length and sum_values features are extracted instead.",
"X_tsfresh = extract_features(df, column_id=\"id\", kind_to_fc_parameters=kind_to_fc_parameters)\nX_tsfresh.head()",
"Extracting from data\nAfter applying a feature selection algorithm to drop irrelevant feature columns you know which features are relevant and which are not.\nYou can also use this information to only extract these relevant features in the first place.\nThe provided from_columns method can be used to infer a settings dictionary from the dataframe containing the features.\nThis dictionary can then for example be stored and be used in the next feature extraction.",
"# Assuming `X_tsfresh` contains only our relevant features\nrelevant_settings = settings.from_columns(X_tsfresh)\nrelevant_settings",
"More complex dictionaries\nWe provide fc_parameters dictionaries with larger sets of features.\nThe EfficientFCParameters contain features and parameters that should be calculated quite fast:",
"settings_efficient = settings.EfficientFCParameters()\nsettings_efficient",
"The ComprehensiveFCParameters are the biggest set of features. It will take the longest to calculate",
"settings_comprehensive = settings.ComprehensiveFCParameters()\nsettings_comprehensive",
"Feature Calculator Parameters\nMore complex feature calculators have parameters that you can use to tune the extracted features.\nThe predefined settings (such as ComprehensiveFCParameters) already contain default values of these features.\nHowever for your own projects, you might want/need to tune them.\nIn detail, the values in a fc_parameters dictionary contain a list of parameter dictionaries. \nWhen calculating the feature, each entry in the list of parameters will be used to calculate one feature.\nFor example, lets have a look into the feature large_standard_deviation, which depends on a single parameter called r (it basically defines how large \"large\" is).\nThe ComprehensiveFCParameters contains several default values for r. \nEach of them will be used to calculate a single feature:",
"settings_comprehensive['large_standard_deviation']",
"If you use these settings in feature extraction, that would trigger the calculation of 20 different large_standard_deviation features, one for r=0.05 up to r=0.95.",
"settings_tmp = {'large_standard_deviation': settings_comprehensive['large_standard_deviation']}\n\nX_tsfresh = extract_features(df, column_id=\"id\", default_fc_parameters=settings_tmp)\nX_tsfresh.columns",
"If you now want to change the parameters for a specific feature calculator, all you need to do is to change the dictionary values."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ledeprogram/algorithms
|
class9/homework/radhikapc_class9_1.ipynb
|
gpl-3.0
|
[
"Use the pseudocode you came up with in class to write your own 5-fold cross-validation function that splits the data set into\nDon't forget to shuffle the input before assigning to the splits\nYou can use the fit\nTest the results with the sklearn cross_val_score\nIn your PR, discuss what challenges you had creating this function and if it helped you better understand cross validation",
"import pandas as pd\n%matplotlib inline\nfrom sklearn import datasets\nfrom sklearn import tree\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom random import shuffle\nfrom sklearn.metrics import accuracy_score\n\niris = datasets.load_iris() # load iris data set\n\niris.keys()\n\niris['target_names']\n\niris['target']\n\niris['data']\n\nx = iris.data[:,2:] # the attributes\ny = iris.target # the target variable\n\nfor a, b in zip(x, y):\n print(a, b)\n\ny\n\n# shuffling data (which is X), and target (which is Y) and adding into two seperate lists\nshuf_x = []\nshuf_y = []\nshuf_index = list(range(len(x)))\nshuffle(shuf_index)\nfor i in shuf_index:\n shuf_x.append(x[i])\n shuf_y.append(y[i])\n\nchunk_length = int(len(shuf_x)/ 5)\nchunk_length\n\nchunk_length = int(len(shuf_y)/ 5)\nchunk_length\n\ndef chunks(l, num):\n num = max(1, num)\n return [l[i:i + num] for i in range(0, len(l), num)]\n\nchunk_y = chunks(shuf_y, chunk_length)\n\nchunk_x = chunks(shuf_x, chunk_length)\n\ndt = tree.DecisionTreeClassifier()\n\n\nAverage_list = []\n\nfor x, y in zip(chunk_x, chunk_y):\n \n #Popping first item off the list\n x_test = chunk_x.pop(0)\n x_train = sum(chunk_x, [])\n \n #Adding it back on again\n chunk_x.append(x_test)\n \n #Popping first item off the list\n y_test = chunk_y.pop(0)\n y_train = sum(chunk_y, [])\n \n #Popping it back on again\n chunk_y.append(y_test)\n \n #fitting training\n dt = dt.fit(x_train,y_train)\n \n #Predicting\n y_pred=dt.predict(x_test)\n \n #Getting the accurancy score\nAccuracy_score = accuracy_score(y_test, y_pred)\n \n #Creating a list of averages:\nAverage_list.append(Accuracy_score)\n\nprint(Average_list)",
"Now we create our cross validation scores",
"from sklearn.cross_validation import cross_val_score\n\niris = datasets.load_iris() \n\nx = iris.data[:,2:] \ny = iris.target \n\ndt = tree.DecisionTreeClassifier()\n\ndt = dt.fit(x,y)\n\n# http://scikit-learn.org/stable/modules/cross_validation.html#computing-cross-validated-metrics\nscores = cross_val_score(dt,x,y,cv=5) #We're passing in our values and getting an array of values back",
"and dt is pass the decision tree classifier",
"scores\n\nimport numpy as np\n\nnp.mean(scores) #here we get our average result"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/quantum/tutorials/noise.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Noise\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/noise\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/noise.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/noise.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/noise.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNoise is present in modern day quantum computers. Qubits are susceptible to interference from the surrounding environment, imperfect fabrication, TLS and sometimes even gamma rays. Until large scale error correction is reached, the algorithms of today must be able to remain functional in the presence of noise. This makes testing algorithms under noise an important step for validating quantum algorithms / models will function on the quantum computers of today.\nIn this tutorial you will explore the basics of noisy circuit simulation in TFQ via the high level tfq.layers API.\nSetup",
"!pip install tensorflow==2.7.0 tensorflow-quantum\n\n!pip install -q git+https://github.com/tensorflow/docs\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)\n\nimport random\nimport cirq\nimport sympy\nimport tensorflow_quantum as tfq\nimport tensorflow as tf\nimport numpy as np\n# Plotting\nimport matplotlib.pyplot as plt\nimport tensorflow_docs as tfdocs\nimport tensorflow_docs.plots",
"1. Understanding quantum noise\n1.1 Basic circuit noise\nNoise on a quantum computer impacts the bitstring samples you are able to measure from it. One intuitive way you can start to think about this is that a noisy quantum computer will \"insert\", \"delete\" or \"replace\" gates in random places like the diagram below:\n<img src=\"./images/noise_1.png\" width=700>\nBuilding off of this intuition, when dealing with noise, you are no longer using a single pure state $|\\psi \\rangle$ but instead dealing with an ensemble of all possible noisy realizations of your desired circuit: $\\rho = \\sum_j p_j |\\psi_j \\rangle \\langle \\psi_j |$ . Where $p_j$ gives the probability that the system is in $|\\psi_j \\rangle$ .\nRevisiting the above picture, if we knew beforehand that 90% of the time our system executed perfectly, or errored 10% of the time with just this one mode of failure, then our ensemble would be: \n$\\rho = 0.9 |\\psi_\\text{desired} \\rangle \\langle \\psi_\\text{desired}| + 0.1 |\\psi_\\text{noisy} \\rangle \\langle \\psi_\\text{noisy}| $\nIf there was more than just one way that our circuit could error, then the ensemble $\\rho$ would contain more than just two terms (one for each new noisy realization that could happen). $\\rho$ is referred to as the density matrix describing your noisy system.\n1.2 Using channels to model circuit noise\nUnfortunately in practice it's nearly impossible to know all the ways your circuit might error and their exact probabilities. A simplifying assumption you can make is that after each operation in your circuit there is some kind of channel that roughly captures how that operation might error. You can quickly create a circuit with some noise:",
"def x_circuit(qubits):\n \"\"\"Produces an X wall circuit on `qubits`.\"\"\"\n return cirq.Circuit(cirq.X.on_each(*qubits))\n\ndef make_noisy(circuit, p):\n \"\"\"Add a depolarization channel to all qubits in `circuit` before measurement.\"\"\"\n return circuit + cirq.Circuit(cirq.depolarize(p).on_each(*circuit.all_qubits()))\n\nmy_qubits = cirq.GridQubit.rect(1, 2)\nmy_circuit = x_circuit(my_qubits)\nmy_noisy_circuit = make_noisy(my_circuit, 0.5)\nmy_circuit\n\nmy_noisy_circuit",
"You can examine the noiseless density matrix $\\rho$ with:",
"rho = cirq.final_density_matrix(my_circuit)\nnp.round(rho, 3)",
"And the noisy density matrix $\\rho$ with:",
"rho = cirq.final_density_matrix(my_noisy_circuit)\nnp.round(rho, 3)",
"Comparing the two different $ \\rho $ 's you can see that the noise has impacted the amplitudes of the state (and consequently sampling probabilities). In the noiseless case you would always expect to sample the $ |11\\rangle $ state. But in the noisy state there is now a nonzero probability of sampling $ |00\\rangle $ or $ |01\\rangle $ or $ |10\\rangle $ as well:",
"\"\"\"Sample from my_noisy_circuit.\"\"\"\ndef plot_samples(circuit):\n samples = cirq.sample(circuit + cirq.measure(*circuit.all_qubits(), key='bits'), repetitions=1000)\n freqs, _ = np.histogram(samples.data['bits'], bins=[i+0.01 for i in range(-1,2** len(my_qubits))])\n plt.figure(figsize=(10,5))\n plt.title('Noisy Circuit Sampling')\n plt.xlabel('Bitstring')\n plt.ylabel('Frequency')\n plt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])\n\nplot_samples(my_noisy_circuit)",
"Without any noise you will always get $|11\\rangle$:",
"\"\"\"Sample from my_circuit.\"\"\"\nplot_samples(my_circuit)",
"If you increase the noise a little further it will become harder and harder to distinguish the desired behavior (sampling $|11\\rangle$ ) from the noise:",
"my_really_noisy_circuit = make_noisy(my_circuit, 0.75)\nplot_samples(my_really_noisy_circuit)",
"Note: Try experimenting with different channels in your circuit to generate noise. Common channels supported in both Cirq and TFQ can be found here\n2. Basic noise in TFQ\nWith this understanding of how noise can impact circuit execution, you can explore how noise works in TFQ. TensorFlow Quantum uses monte-carlo / trajectory based simulation as an alternative to density matrix simulation. This is because the memory complexity of density matrix simulation limits large simulations to being <= 20 qubits with traditional full density matrix simulation methods. Monte-carlo / trajectory trades this cost in memory for additional cost in time. The backend='noisy' option available to all tfq.layers.Sample, tfq.layers.SampledExpectation and tfq.layers.Expectation (In the case of Expectation this does add a required repetitions parameter).\n2.1 Noisy sampling in TFQ\nTo recreate the above plots using TFQ and trajectory simulation you can use tfq.layers.Sample",
"\"\"\"Draw bitstring samples from `my_noisy_circuit`\"\"\"\nbitstrings = tfq.layers.Sample(backend='noisy')(my_noisy_circuit, repetitions=1000)\n\nnumeric_values = np.einsum('ijk,k->ij', bitstrings.to_tensor().numpy(), [1, 2])[0]\nfreqs, _ = np.histogram(numeric_values, bins=[i+0.01 for i in range(-1,2** len(my_qubits))])\nplt.figure(figsize=(10,5))\nplt.title('Noisy Circuit Sampling')\nplt.xlabel('Bitstring')\nplt.ylabel('Frequency')\nplt.bar([i for i in range(2** len(my_qubits))], freqs, tick_label=['00','01','10','11'])",
"2.2 Noisy sample based expectation\nTo do noisy sample based expectation calculation you can use tfq.layers.SampleExpectation:",
"some_observables = [cirq.X(my_qubits[0]), cirq.Z(my_qubits[0]), 3.0 * cirq.Y(my_qubits[1]) + 1]\nsome_observables",
"Compute the noiseless expectation estimates via sampling from the circuit:",
"noiseless_sampled_expectation = tfq.layers.SampledExpectation(backend='noiseless')(\n my_circuit, operators=some_observables, repetitions=10000\n)\nnoiseless_sampled_expectation.numpy()",
"Compare those with the noisy versions:",
"noisy_sampled_expectation = tfq.layers.SampledExpectation(backend='noisy')(\n [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000\n)\nnoisy_sampled_expectation.numpy()",
"You can see that the noise has particularly impacted the $\\langle \\psi | Z | \\psi \\rangle$ accuracy, with my_really_noisy_circuit concentrating very quickly towards 0.\n2.3 Noisy analytic expectation calculation\nDoing noisy analytic expectation calculations is nearly identical to above:",
"noiseless_analytic_expectation = tfq.layers.Expectation(backend='noiseless')(\n my_circuit, operators=some_observables\n)\nnoiseless_analytic_expectation.numpy()\n\nnoisy_analytic_expectation = tfq.layers.Expectation(backend='noisy')(\n [my_noisy_circuit, my_really_noisy_circuit], operators=some_observables, repetitions=10000\n)\nnoisy_analytic_expectation.numpy()",
"3. Hybrid models and quantum data noise\nNow that you have implemented some noisy circuit simulations in TFQ, you can experiment with how noise impacts quantum and hybrid quantum classical models, by comparing and contrasting their noisy vs noiseless performance. A good first check to see if a model or algorithm is robust to noise is to test under a circuit wide depolarizing model which looks something like this:\n<img src=\"./images/noise_2.png\" width=500>\nWhere each time slice of the circuit (sometimes referred to as moment) has a depolarizing channel appended after each gate operation in that time slice. The depolarizing channel with apply one of ${X, Y, Z }$ with probability $p$ or apply nothing (keep the original operation) with probability $1-p$.\n3.1 Data\nFor this example you can use some prepared circuits in the tfq.datasets module as training data:",
"qubits = cirq.GridQubit.rect(1, 8)\ncircuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')\ncircuits[0]",
"Writing a small helper function will help to generate the data for the noisy vs noiseless case:",
"def get_data(qubits, depolarize_p=0.):\n \"\"\"Return quantum data circuits and labels in `tf.Tensor` form.\"\"\"\n circuits, labels, pauli_sums, _ = tfq.datasets.xxz_chain(qubits, 'closed')\n if depolarize_p >= 1e-5:\n circuits = [circuit.with_noise(cirq.depolarize(depolarize_p)) for circuit in circuits]\n tmp = list(zip(circuits, labels))\n random.shuffle(tmp)\n circuits_tensor = tfq.convert_to_tensor([x[0] for x in tmp])\n labels_tensor = tf.convert_to_tensor([x[1] for x in tmp])\n\n return circuits_tensor, labels_tensor",
"3.2 Define a model circuit\nNow that you have quantum data in the form of circuits, you will need a circuit to model this data, like with the data you can write a helper function to generate this circuit optionally containing noise:",
"def modelling_circuit(qubits, depth, depolarize_p=0.):\n \"\"\"A simple classifier circuit.\"\"\"\n dim = len(qubits)\n ret = cirq.Circuit(cirq.H.on_each(*qubits))\n\n for i in range(depth):\n # Entangle layer.\n ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[::2], qubits[1::2]))\n ret += cirq.Circuit(cirq.CX(q1, q2) for (q1, q2) in zip(qubits[1::2], qubits[2::2]))\n # Learnable rotation layer.\n # i_params = sympy.symbols(f'layer-{i}-0:{dim}')\n param = sympy.Symbol(f'layer-{i}')\n single_qb = cirq.X\n if i % 2 == 1:\n single_qb = cirq.Y\n ret += cirq.Circuit(single_qb(q) ** param for q in qubits)\n \n if depolarize_p >= 1e-5:\n ret = ret.with_noise(cirq.depolarize(depolarize_p))\n\n return ret, [op(q) for q in qubits for op in [cirq.X, cirq.Y, cirq.Z]]\n\nmodelling_circuit(qubits, 3)[0]",
"3.3 Model building and training\nWith your data and model circuit built, the final helper function you will need is one that can assemble both a noisy or a noiseless hybrid quantum tf.keras.Model:",
"def build_keras_model(qubits, depolarize_p=0.):\n \"\"\"Prepare a noisy hybrid quantum classical Keras model.\"\"\"\n spin_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\n\n circuit_and_readout = modelling_circuit(qubits, 4, depolarize_p)\n if depolarize_p >= 1e-5:\n quantum_model = tfq.layers.NoisyPQC(*circuit_and_readout, sample_based=False, repetitions=10)(spin_input)\n else:\n quantum_model = tfq.layers.PQC(*circuit_and_readout)(spin_input)\n\n intermediate = tf.keras.layers.Dense(4, activation='sigmoid')(quantum_model)\n post_process = tf.keras.layers.Dense(1)(intermediate)\n\n return tf.keras.Model(inputs=[spin_input], outputs=[post_process])",
"4. Compare performance\n4.1 Noiseless baseline\nWith your data generation and model building code, you can now compare and contrast model performance in the noiseless and noisy settings, first you can run a reference noiseless training:",
"training_histories = dict()\ndepolarize_p = 0.\nn_epochs = 50\nphase_classifier = build_keras_model(qubits, depolarize_p)\n\nphase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\n\n# Show the keras plot of the model\ntf.keras.utils.plot_model(phase_classifier, show_shapes=True, dpi=70)\n\nnoiseless_data, noiseless_labels = get_data(qubits, depolarize_p)\ntraining_histories['noiseless'] = phase_classifier.fit(x=noiseless_data,\n y=noiseless_labels,\n batch_size=16,\n epochs=n_epochs,\n validation_split=0.15,\n verbose=1)",
"And explore the results and accuracy:",
"loss_plotter = tfdocs.plots.HistoryPlotter(metric = 'loss', smoothing_std=10)\nloss_plotter.plot(training_histories)\n\nacc_plotter = tfdocs.plots.HistoryPlotter(metric = 'accuracy', smoothing_std=10)\nacc_plotter.plot(training_histories)",
"4.2 Noisy comparison\nNow you can build a new model with noisy structure and compare to the above, the code is nearly identical:",
"depolarize_p = 0.001\nn_epochs = 50\nnoisy_phase_classifier = build_keras_model(qubits, depolarize_p)\n\nnoisy_phase_classifier.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\n\n# Show the keras plot of the model\ntf.keras.utils.plot_model(noisy_phase_classifier, show_shapes=True, dpi=70)",
"Note: in the model diagram there is now a tfq.layers.NoisyPQC instead of a tfq.layers.PQC since the depolarization probability is no longer zero. Training will take significantly longer since noisy simulation is far more expensive than noiseless.",
"noisy_data, noisy_labels = get_data(qubits, depolarize_p)\ntraining_histories['noisy'] = noisy_phase_classifier.fit(x=noisy_data,\n y=noisy_labels,\n batch_size=16,\n epochs=n_epochs,\n validation_split=0.15,\n verbose=1)\n\nloss_plotter.plot(training_histories)\n\nacc_plotter.plot(training_histories)",
"Success: The noisy model still managed to train under some mild depolarization noise. Try experimenting with different noise models to see how and when training might fail. Also look out for noisy functionality under tfq.layers and tfq.noise."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
huggingface/pytorch-transformers
|
notebooks/03-pipelines.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nHow can I leverage State-of-the-Art Natural Language Models with only one line of code ?\nNewly introduced in transformers v2.3.0, pipelines provides a high-level, easy to use,\nAPI for doing inference over a variety of downstream-tasks, including: \n\nSentence Classification (Sentiment Analysis): Indicate if the overall sentence is either positive or negative, i.e. binary classification task or logitic regression task.\nToken Classification (Named Entity Recognition, Part-of-Speech tagging): For each sub-entities (tokens) in the input, assign them a label, i.e. classification task.\nQuestion-Answering: Provided a tuple (question, context) the model should find the span of text in content answering the question.\nMask-Filling: Suggests possible word(s) to fill the masked input with respect to the provided context.\nSummarization: Summarizes the input article to a shorter article.\nTranslation: Translates the input from a language to another language.\nFeature Extraction: Maps the input to a higher, multi-dimensional space learned from the data.\n\nPipelines encapsulate the overall process of every NLP process:\n\nTokenization: Split the initial input into multiple sub-entities with ... properties (i.e. tokens).\nInference: Maps every tokens into a more meaningful representation. 
\nDecoding: Use the above representation to generate and/or extract the final output for the underlying task.\n\nThe overall API is exposed to the end-user through the pipeline() method with the following \nstructure:\n```python\nfrom transformers import pipeline\nUsing default model and tokenizer for the task\npipeline(\"<task-name>\")\nUsing a user-specified model\npipeline(\"<task-name>\", model=\"<model_name>\")\nUsing custom model/tokenizer as str\npipeline('<task-name>', model='<model name>', tokenizer='<tokenizer_name>')\n```",
"!pip install -q transformers\n\nfrom __future__ import print_function\nimport ipywidgets as widgets\nfrom transformers import pipeline",
"1. Sentence Classification - Sentiment Analysis",
"nlp_sentence_classif = pipeline('sentiment-analysis')\nnlp_sentence_classif('Such a nice weather outside !')",
"2. Token Classification - Named Entity Recognition",
"nlp_token_class = pipeline('ner')\nnlp_token_class('Hugging Face is a French company based in New-York.')",
"3. Question Answering",
"nlp_qa = pipeline('question-answering')\nnlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')",
"4. Text Generation - Mask Filling",
"nlp_fill = pipeline('fill-mask')\nnlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)",
"5. Summarization\nSummarization is currently supported by Bart and T5.",
"TEXT_TO_SUMMARIZE = \"\"\" \nNew York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York. \nA year later, she got married again in Westchester County, but to a different man and without divorcing her first husband. \nOnly 18 days after that marriage, she got hitched yet again. Then, Barrientos declared \"I do\" five more times, sometimes only within two weeks of each other. \nIn 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her \"first and only\" marriage. \nBarrientos, now 39, is facing two criminal counts of \"offering a false instrument for filing in the first degree,\" referring to her false statements on the \n2010 marriage license application, according to court documents. \nProsecutors said the marriages were part of an immigration scam. \nOn Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further. \nAfter leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective \nAnnette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. \nAll occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say. \nProsecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages. \nAny divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted. 
\nThe case was referred to the Bronx District Attorney\\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\\'s \nInvestigation Division. Seven of the men are from so-called \"red-flagged\" countries, including Egypt, Turkey, Georgia, Pakistan and Mali. \nHer eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force. \nIf convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.\n\"\"\"\n\nsummarizer = pipeline('summarization')\nsummarizer(TEXT_TO_SUMMARIZE)",
"6. Translation\nTranslation is currently supported by T5 for the language mappings English-to-French (translation_en_to_fr), English-to-German (translation_en_to_de) and English-to-Romanian (translation_en_to_ro).",
"# English to French\ntranslator = pipeline('translation_en_to_fr')\ntranslator(\"HuggingFace is a French company that is based in New York City. HuggingFace's mission is to solve NLP one commit at a time\")\n\n# English to German\ntranslator = pipeline('translation_en_to_de')\ntranslator(\"The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods.\")",
"7. Text Generation\nText generation is currently supported by GPT-2, OpenAi-GPT, TransfoXL, XLNet, CTRL and Reformer.",
"text_generator = pipeline(\"text-generation\")\ntext_generator(\"Today is a beautiful day and I will\")",
"8. Projection - Features Extraction",
"import numpy as np\nnlp_features = pipeline('feature-extraction')\noutput = nlp_features('Hugging Face is a French company based in Paris')\nnp.array(output).shape # (Samples, Tokens, Vector Size)\n",
"Alright ! Now you have a nice picture of what is possible through transformers' pipelines, and there is more\nto come in future releases. \nIn the meantime, you can try the different pipelines with your own inputs",
"task = widgets.Dropdown(\n options=['sentiment-analysis', 'ner', 'fill_mask'],\n value='ner',\n description='Task:',\n disabled=False\n)\n\ninput = widgets.Text(\n value='',\n placeholder='Enter something',\n description='Your input:',\n disabled=False\n)\n\ndef forward(_):\n if len(input.value) > 0: \n if task.value == 'ner':\n output = nlp_token_class(input.value)\n elif task.value == 'sentiment-analysis':\n output = nlp_sentence_classif(input.value)\n else:\n if input.value.find('<mask>') == -1:\n output = nlp_fill(input.value + ' <mask>')\n else:\n output = nlp_fill(input.value) \n print(output)\n\ninput.on_submit(forward)\ndisplay(task, input)\n\ncontext = widgets.Textarea(\n value='Einstein is famous for the general theory of relativity',\n placeholder='Enter something',\n description='Context:',\n disabled=False\n)\n\nquery = widgets.Text(\n value='Why is Einstein famous for ?',\n placeholder='Enter something',\n description='Question:',\n disabled=False\n)\n\ndef forward(_):\n if len(context.value) > 0 and len(query.value) > 0: \n output = nlp_qa(question=query.value, context=context.value) \n print(output)\n\nquery.on_submit(forward)\ndisplay(context, query)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
paris-saclay-cds/python-workshop
|
Day_1_Scientific_Python/03-matplotib_seaborn.ipynb
|
bsd-3-clause
|
[
"<p style=\"margin-top: 3em; margin-bottom: 3em;\"><font size=\"7\"><b>Matplotlib & Seaborn: Introduction </b></font></p>",
"%matplotlib inline",
"Matplotlib\nMatplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (graphical user interface) toolkits. It is a great package with lots of options.\nHowever, matplotlib is...\n\nThe 800-pound gorilla — and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a custom plot or produce a publication-ready graphic.\n(As we’ll see, when it comes to statistical visualization, the preferred tack might be: “do as much as you easily can in your convenience layer of choice [nvdr e.g. directly from Pandas, or with seaborn], and then use matplotlib for the rest.”)\n\n(quote used from this blogpost)\nAnd that's we mostly did, just use the .plot function of Pandas. So, why do we learn matplotlib? Well, for the ...then use matplotlib for the rest.; at some point, somehow!\nMatplotlib comes with a convenience sub-package called pyplot which, for consistency with the wider matplotlib community, should always be imported as plt:",
"import numpy as np\nimport matplotlib.pyplot as plt",
"- dry stuff - The matplotlib Figure, axes and axis\nAt the heart of every plot is the figure object. The \"Figure\" object is the top level concept which can be drawn to one of the many output formats, or simply just to screen. Any object which can be drawn in this way is known as an \"Artist\" in matplotlib.\nLets create our first artist using pyplot, and then show it:",
"fig = plt.figure()\nplt.show()",
"On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).\nBy far the most useful artist in matplotlib is the \"Axes\" artist. The Axes artist represents the \"data space\" of a typical plot, a rectangular axes (the most common, but not always the case, e.g. polar plots) will have 2 (confusingly named) Axis artists with tick labels and tick marks.\nThere is no limit on the number of Axes artists which can exist on a Figure artist. Let's go ahead and create a figure with a single Axes artist, and show it using pyplot:",
"ax = plt.axes()",
"Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.\nUnder the hood matplotlib still had to create a Figure artist, its just we didn't need to capture it into a variable. We can access the created object with the \"state\" functions found in pyplot called gcf and gca.\n- essential stuff - pyplot versus Object based\nSome example data:",
"x = np.linspace(0, 5, 10)\ny = x ** 2",
"Observe the following difference:\n1. pyplot style: plt... (you will see this a lot for code online!)",
"plt.plot(x, y, '-')",
"2. creating objects",
"fig, ax = plt.subplots()\nax.plot(x, y, '-')",
"Although a little bit more code is involved, the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure:",
"fig, ax1 = plt.subplots()\nax.plot(x, y, '-')\nax2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes\nax1.plot(x, y, '-')\nax1.set_ylabel('y')\nax2.set_xlabel('x')\nax2.plot(x, y*2, 'r-')",
"<div class=\"alert alert-info\" style=\"font-size:18px\">\n\n<b>REMEMBER</b>:\n\n <ul>\n <li>Use the **object oriented** power of Matplotlib!</li>\n <li>Get yourself used to writing `fig, ax = plt.subplots()`</li>\n</ul>\n</div>",
"fig, ax = plt.subplots()\nax.plot(x, y, '-')\n# ...",
"An small cheat-sheet reference for some common elements",
"x = np.linspace(-1, 0, 100)\n\nfig, ax = plt.subplots()\n\n# Adjust the created axes so that its topmost extent is 0.8 of the figure.\nfig.subplots_adjust(top=0.8)\n\nax.plot(x, x**2, color='0.4', label=\"power 2\")\nax.plot(x, x**3, color='0.8', linestyle='--', label=\"power 3\")\n\nfig.suptitle('Figure title', fontsize=18, \n fontweight='bold')\nax.set_title('Axes title', fontsize=16)\n\nax.set_xlabel('The X axis')\nax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)\n\nax.set_xlim(-1.0, 1.1)\nax.set_ylim(-0.1, 1.)\n\nax.text(0.5, 0.2, 'Text centered at (0.5, 0.2)\\nin data coordinates.',\n horizontalalignment='center', fontsize=14)\n\nax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\\nin Figure coordinates.',\n horizontalalignment='center', fontsize=14, \n transform=ax.transAxes, color='grey')\n\nax.legend(loc='upper right', frameon=True, ncol=2)",
"For more information on legend positioning, check this post on stackoverflow!\nAnother nice blogpost about customizing matplotlib figures: http://pbpython.com/effective-matplotlib.html\nI do not like the style...\nThe power of the object-oriented way of working makes it possible to change everything. However, mostly we just want quickly a good-looking plot. Matplotlib provides a number of styles that can be used to quickly change a number of settings:",
"plt.style.available\n\nx = np.linspace(0, 10)\n\nwith plt.style.context('seaborn-muted'): # 'ggplot', 'bmh', 'grayscale', 'seaborn-whitegrid'\n fig, ax = plt.subplots()\n ax.plot(x, np.sin(x) + x + np.random.randn(50))\n ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))\n ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))",
"We should not start discussing about colors and styles, just pick your favorite style!\nInteraction with Pandas\nWhat we have been doing while plotting with Pandas:",
"import pandas as pd\n\naqdata = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'],\n index_col=0, parse_dates=True)\naqdata = aqdata[\"2014\":].resample('D').mean()\n\naqdata.plot()",
"The pandas versus matplotlib\nComparison 1: single plot",
"aqdata.plot(figsize=(16, 6)) # shift tab this!",
"Making this with matplotlib...",
"fig, ax = plt.subplots(figsize=(16, 6))\nax.plot(aqdata.index, aqdata[\"BASCH\"],\n aqdata.index, aqdata[\"BONAP\"], \n aqdata.index, aqdata[\"PA18\"],\n aqdata.index, aqdata[\"VERS\"])\nax.legend([\"BASCH\", \"BONAP\", \"PA18\", \"VERS\"])",
"or...",
"fig, ax = plt.subplots(figsize=(16, 6))\nfor station in aqdata.columns:\n ax.plot(aqdata.index, aqdata[station], label=station)\nax.legend()",
"Comparison 2: with subplots",
"axs = aqdata.plot(subplots=True, sharex=True,\n figsize=(16, 8), colormap='viridis', # Dark2\n fontsize=15)",
"Mimicking this in matplotlib (just as a reference):",
"from matplotlib import cm\nimport matplotlib.dates as mdates\n\ncolors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(aqdata.columns))] # list comprehension to set up the colors\n\nfig, axs = plt.subplots(4, 1, figsize=(16, 8))\n\nfor ax, col, station in zip(axs, colors, aqdata.columns):\n ax.plot(aqdata.index, aqdata[station], label=station, color=col)\n ax.legend()\n if not ax.is_last_row():\n ax.xaxis.set_ticklabels([])\n ax.xaxis.set_major_locator(mdates.YearLocator())\n else:\n ax.xaxis.set_major_locator(mdates.YearLocator())\n ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))\n ax.set_xlabel('Time')\n ax.tick_params(labelsize=15)\nfig.autofmt_xdate()",
"Best of both worlds...",
"aqdata.columns\n\nfig, ax = plt.subplots() #prepare a matplotlib figure\n\naqdata.plot(ax=ax) # use pandas for the plotting\n\n# Provide further adaptations with matplotlib:\nax.set_xlabel(\"\")\nax.tick_params(labelsize=15, pad=8, which='both')\nfig.suptitle('Air quality station time series', fontsize=15)\n\nfig, (ax1, ax2) = plt.subplots(2, 1) #provide with matplotlib 2 axis\n\naqdata[[\"BASCH\", \"BONAP\"]].plot(ax=ax1) # plot the two timeseries of the same location on the first plot\naqdata[\"PA18\"].plot(ax=ax2) # plot the other station on the second plot\n\n# further adapt with matplotlib\nax1.set_ylabel(\"BASCH\")\nax2.set_ylabel(\"PA18\")\nax2.legend()",
"<div class=\"alert alert-info\">\n\n <b>Remember</b>: \n\n <ul>\n <li>You can do anything with matplotlib, but at a cost... [stackoverflow!!](http://stackoverflow.com/questions/tagged/matplotlib)</li>\n <li>The preformatting of Pandas provides mostly enough flexibility for quick analysis and draft reporting. It is not for paper-proof figures or customization</li>\n</ul>\n<br>\n\n\n</div>\n\n<div class=\"alert alert-danger\">\n\n <b>NOTE</b>: \n\nIf you take the time to make you're perfect/spot-on/greatest-ever matplotlib-figure: Make it a **reusable function**! (see tomorrow!)\n\n<ul>\n <li>Let your hard work pay off, write your own custom functions!</li>\n</ul>\n\n</div>\n\n<div class=\"alert alert-info\" style=\"font-size:18px\">\n\n <b>Remember</b>: \n\n`fig.savefig()` to save your Figure object!\n\n</div>\n\nSeaborn",
"import seaborn as sns",
"Built on top of Matplotlib, but providing\nHigh level functions\nMuch cleaner default figures\n\n\nWorks well with Pandas\n\nFirst example: pairplot\nA scatterplot comparing the three stations with a color variation on the months:",
"aqdata[\"month\"] = aqdata.index.month\n\nsns.pairplot(aqdata[\"2014\"].dropna(), \n vars=['BASCH', 'BONAP', 'PA18', 'VERS'],\n diag_kind='kde', hue=\"month\")",
"Seaborn works well with Pandas & is built on top of Matplotlib\nWe will use the Titanic example again:",
"titanic = pd.read_csv('data/titanic.csv')\n\ntitanic.head()",
"Histogram: Getting the univariaite distribution of the Age",
"fig, ax = plt.subplots()\nsns.distplot(titanic[\"Age\"].dropna(), ax=ax) # Seaborn does not like Nan values...\nsns.rugplot(titanic[\"Age\"].dropna(), color=\"g\", ax=ax) # rugplot provides lines at the individual data point locations\nax.set_ylabel(\"Frequency\")",
"<div class=\"alert alert-info\">\n\n <b>Remember</b>: \n\nSimilar to Pandas handling above, we can set up a `figure` and `axes` and add the seaborn output to it; adapt it afterwards\n\n</div>\n\nCompare two variables (scatter-plot):",
"g = sns.jointplot(x=\"Fare\", y=\"Age\", \n data=titanic, \n kind=\"scatter\") #kde, hex\n\ng = sns.jointplot(x=\"Fare\", y=\"Age\", \n data=titanic, \n kind=\"scatter\") #kde, hex\n# Adapt the properties with matplotlib by changing the available axes objects\ng.ax_marg_x.set_ylabel(\"Frequency\")\ng.ax_joint.set_facecolor('0.1')\ng.ax_marg_y.set_xlabel(\"Frequency\")",
"<div class=\"alert alert-info\">\n\n <b>Remember</b>: \n\nAdapting the output of a Seaborn `grid` of different axes can be done as well to adapt it with matplotlib\n\n</div>\n\nWho likes regressions?",
"fig, ax = plt.subplots()\nsns.regplot(x=\"Fare\", y=\"Age\", data=titanic, ax=ax, lowess=False)\n# adding the small lines to indicate individual data points\nsns.rugplot(titanic[\"Fare\"].dropna(), axis='x', \n color=\"#6699cc\", height=0.02, ax=ax)\nsns.rugplot(titanic[\"Age\"].dropna(), axis='y', \n color=\"#6699cc\", height=0.02, ax=ax)",
"Section especially for R ggplot lovers\nRegressions with factors/categories: lmplot\nWhen you want to take into account a category as well to do regressions, use lmplot (which is a special case of Facetgrid):",
"sns.lmplot(x=\"Fare\", y=\"Age\", hue=\"Sex\", \n data=titanic)\n\nsns.lmplot(x=\"Fare\", y=\"Age\", hue=\"Sex\", \n col=\"Survived\", data=titanic)",
"Other plots with factors/categories: factorplot\nAnother method to create thes category based split of columns, colors,... based on specific category columns is the factorplot",
"titanic.head()\n\nsns.factorplot(x=\"Sex\", \n y=\"Fare\", \n col=\"Pclass\", \n data=titanic) #kind='strip' # violin,...\n\nsns.factorplot(x=\"Sex\", y=\"Fare\", col=\"Pclass\", row=\"Embarked\", \n data=titanic, kind='bar')\n\ng = sns.factorplot(x=\"Survived\", y=\"Fare\", hue=\"Sex\",\n col=\"Embarked\", data=titanic, \n kind=\"box\", size=4, aspect=.5);\ng.fig.set_figwidth(15)\ng.fig.set_figheight(6)",
"<div class=\"alert alert-info\">\n\n <b>Remember</b>: \n\n <ul>\n <li>`lmplot` and `factorplot` are shortcuts for a more advanced `FacetGrid` functionality</li>\n <li>If you want to dig deeper into this `FacetGrid`-based plotting, check the [online manual](http://seaborn.pydata.org/tutorial/axis_grids.html)!</li>\n</ul>\n\n</div>\n\nNeed more matplotlib/seaborn inspiration?\nFor more in-depth material:\n* http://www.labri.fr/perso/nrougier/teaching/matplotlib/\n* notebooks in matplotlib section: http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb#4.-Visualization-with-Matplotlib\n* main reference: matplotlib homepage\n* very nice blogpost about customizing figures with matplotlib: http://pbpython.com/effective-matplotlib.html\n<div class=\"alert alert-info\" style=\"font-size:18px\">\n\n <b>Remember</b>(!)\n\n<ul>\n <li>[matplotlib Gallery](http://matplotlib.org/gallery.html)</li>\n <li>[seaborn gallery ](http://seaborn.pydata.org/examples/index.html)</li>\n</ul>\n<br>\nImportant resources to start from!\n\n</div>\n\nAlternatives for matplotlib\nWe only use matplotlib (or matplotlib-based plotting) in this workshop, and it is still the main plotting library for many scientists, but it is not the only existing plotting library. \nA nice overview of the landscape of visualisation tools in python was recently given by Jake VanderPlas: (or matplotlib-based plotting): https://speakerdeck.com/jakevdp/pythons-visualization-landscape-pycon-2017\nBokeh (http://bokeh.pydata.org/en/latest/): interactive, web-based visualisation",
"from bokeh.io import output_notebook\noutput_notebook()\n\nfrom bokeh.plotting import figure, show\nfrom bokeh.sampledata.iris import flowers\n\n\ncolormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}\ncolors = [colormap[x] for x in flowers['species']]\n\np = figure(title = \"Iris Morphology\")\np.xaxis.axis_label = 'Petal Length'\np.yaxis.axis_label = 'Petal Width'\n\np.circle(flowers[\"petal_length\"], flowers[\"petal_width\"],\n color=colors, fill_alpha=0.2, size=10)\nshow(p)",
"Altair (https://altair-viz.github.io/index.html): declarative statistical visualization library for Python, based on Vega.",
"from altair import Chart, load_dataset\n\n# load built-in dataset as a pandas DataFrame\niris = load_dataset('iris')\n\nChart(iris).mark_circle().encode(\n x='petalLength',\n y='petalWidth',\n color='species',\n)",
"Acknowledgement\n\nThis notebook is partly based on material of © 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com, licensed under CC BY 4.0 Creative Commons and partly on material of the Met Office (Copyright (C) 2013 SciTools, GPL licensed): https://github.com/SciTools/courses"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
andrewosh/notebooks
|
worker/notebooks/thunder/tutorials/image_registration.ipynb
|
mit
|
[
"Image registration\nA common problem when working with large collections of related images is registrating them together, relative to a reference. Thunder implements a generic image registration API that supports a variety of registration algorithms, and exposes to the user both standard approaches and the ability to implement custom solutions. Here, we generate data for registration, apply a registration algorithm, and show the results.\nIf you are interested in contributing a new registration algorithm to Thunder, let us know in the chatroom!\nSetup plotting",
"%matplotlib inline\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom thunder import Colorize\nimage = Colorize.image\ntile = Colorize.tile\nsns.set_style('darkgrid')\nsns.set_context('notebook')",
"Generating data\nWe will use the included toy example image data to test registration algorithms. These data do not actually have any motion, so to test the algorithms, we will induce fake motion. First we'll load and inspect the data.",
"data = tsc.loadExample('mouse-images')\ndata",
"There are 500 images (corresponding to 500 time points), and the data are two-dimensional, so we'll want to generate 500 random shifts in x and y. We'll use smoothing functions from scipy to make sure the drift varies slowly over time, which will be easier to look at.",
"from numpy import random\nfrom scipy.ndimage.filters import gaussian_filter\nt = 500\ndx = gaussian_filter(random.randn(t), 50) * 25\ndy = gaussian_filter(random.randn(t), 50) * 25\n\nplt.plot(dx);\nplt.plot(dy);",
"Now let's use these drifts to shift the data. We'll use the apply method on our data, which applies an arbitrary function to each record; in this case, the function is to shift by an amount given by the corresponding entry in our list of shifts.",
"from scipy.ndimage import shift\nshifted = data.apply(lambda (k, v): (k, shift(v, (dx[k], dy[k]), mode='nearest', order=0)))",
"Look at the first entry of both the original images and the shifted images, and their difference",
"im1 = data[0]\nim2 = shifted[0]\ntile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14)",
"It's also useful to look at the mean of the raw images and the shifted images, the mean of the shifted images should be much more blurry!",
"tile([data.mean(), shifted.mean()], size=14)",
"Registration\nTo run registration, first we create a registration method by specifying its name (current options include 'crosscorr' and 'planarcrosscorr')",
"from thunder import Registration\nreg = Registration('crosscorr')",
"This method computes a cross-correlation in parallel between every image and a reference. To compute that reference, we can use the prepare method, and either give it a reference, or have it compute one for us. For this method, the default prepare is to compute a mean, over some specified range. We call:",
"reg.prepare(shifted, startIdx=0, stopIdx=500);",
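In plain numpy terms, this default `prepare` is just a mean over a slice of the time axis. A minimal stand-in on hypothetical random frames:

```python
import numpy as np

# Hypothetical stack: 500 tiny frames standing in for the image series
frames = np.random.default_rng(0).random((500, 4, 4))

# Equivalent of prepare(startIdx=0, stopIdx=500): mean over the frame range
start, stop = 0, 500
reference = frames[start:stop].mean(axis=0)
print(reference.shape)  # → (4, 4)
```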
"This adds a reference attribute to the reg object, which we can look at",
"image(reg.reference)",
"We could have equivalently computed the reference ourselves (using the mean, or any other calculation) and passed it as an argument",
"ref = shifted.filterOnKeys(lambda k: k >= 0 and k < 500).mean()\nreg.prepare(ref)\nimage(reg.reference)",
"Now we use the registration method reg and fit it to the shifted data, returning a fitted RegistrationModel",
"model = reg.fit(shifted)",
"Inspect the model",
"model",
"The model represents a list of transformations. You can inspect them:",
"model[0]",
"You can also convert the full collection of transformations into an array, which is useful for plotting. Here we'll plot the estimated transformations relative to the ground truth; they should be fairly similar.",
"clrs = sns.color_palette('deep')\nplt.plot(model.toArray()[:,0], color=clrs[0])\nplt.plot(dx, color=clrs[0])\nplt.plot(model.toArray()[:,1], color=clrs[1])\nplt.plot(dy, color=clrs[1]);",
"Note that, while following a similar pattern to the ground truth, the estimates are not perfect. That's because we didn't use the true reference to estimate the displacements, but rather the mean of the displaced data. To see that we get the exact displacements back, let's compute a reference from the original, unshifted data.",
"reg.prepare(data, startIdx=0, stopIdx=500)\nmodel = reg.fit(shifted)",
"Now the estimates should be exact (up to rounding error)! But note that this is sort of cheating, because in general we don't know the ground truth.",
"plt.plot(model.toArray()[:,0], color=clrs[0])\nplt.plot(dx, color=clrs[0])\nplt.plot(model.toArray()[:,1], color=clrs[1])\nplt.plot(dy, color=clrs[1]);",
"We can now use our model to transform a set of images, which applies the estimated transformations. The API design makes it easy to apply the transformations to the dataset we used to estimate them, or to a different one. We'll use the model we just estimated, which used the true reference, because it will be easy to see that it did the right thing.",
"corrected = model.transform(shifted)",
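Conceptually, the transform step applies each estimated displacement in reverse. A self-contained illustration with one frame and a known integer drift (again using scipy, not Thunder's API), showing that the interior recovers exactly while the borders may differ because of the nearest-neighbor fill:

```python
import numpy as np
from scipy.ndimage import shift

frame = np.random.default_rng(1).random((16, 16))
dx, dy = 3, -2
drifted = shift(frame, (dx, dy), mode='nearest', order=0)
# Applying the negated displacement undoes the drift, except near borders
# where the 'nearest' padding overwrote the original pixels
corrected = shift(drifted, (-dx, -dy), mode='nearest', order=0)
print(np.allclose(corrected[4:12, 4:12], frame[4:12, 4:12]))  # → True
```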
"Let's again look at the first image from the original and the corrected data, and their difference. Whereas before they were different, now they should be the same, except for minor differences near the boundaries (where the image has been filled in with its nearest neighbors).",
"im1 = data[0]\nim2 = corrected[0]\ntile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14)",
"As a final check on the registration, we can compare the mean of the shifted data and the mean of the registered data. The latter should be much sharper.",
"tile([shifted.mean(), corrected.mean()], size=14)",
"We can easily save the model to a JSON file and load it back in:\n\n    model.save('model.json')\n    modelreloaded = Registration.load('model.json')"
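Since the model boils down to a list of per-frame displacements, the JSON round trip is straightforward. A hypothetical stand-in using plain `json` rather than Thunder's `save`/`load` (the key name `'transformations'` is illustrative):

```python
import json

# Hypothetical model contents: one (dy, dx) displacement per frame
shifts = [(3, -2), (1, 0), (0, 4)]
blob = json.dumps({'transformations': shifts})

# Reload; JSON turns tuples into lists, so convert back
reloaded = [tuple(t) for t in json.loads(blob)['transformations']]
print(reloaded == shifts)  # → True
```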
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |