# --- # jupyter: # jupytext: # text_representation: # extension: .sos # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SoS # language: sos # name: sos # --- # + [markdown] kernel="SoS" tags=[] # # Input option `group_by` # + [markdown] kernel="SoS" tags=[] # * **Difficulty level**: intermediate # * **Time needed to learn**: 20 minutes or less # * **Key points**: # * Option `group_by` creates groups (subsets) of input targets # * Groups are persistent and can be passed from step to step # + [markdown] kernel="SoS" tags=[] # ## Parameter `group_by` and substeps # + [markdown] kernel="SoS" tags=[] # By default, all input targets are processed at once by the step. If you need to process input files one by one or in pairs, you can define **substeps**, which apply the step to subgroups of input targets, represented by the variable `_input`. # # In the trivial case when all input targets are processed together, `_input` is the same as `step_input`. # + kernel="SoS" tags=[] input: 'a.txt', 'b.txt' print(f'step input is {step_input}') print(f'substep input is {_input}') # + [markdown] kernel="SoS" tags=[] # Using option `group_by`, you can group the input targets in a number of ways, the easiest being grouping by `1`: # + kernel="SoS" tags=[] input: 'a.txt', 'b.txt', group_by=1 print(f'input of step is {step_input}') print(f'input of substep {_index} is {_input}') # + [markdown] kernel="SoS" tags=[] # As you can see, the step process is now executed twice. Whereas `step_input` is the same for both substeps, `_input` is `a.txt` for the first substep and `b.txt` for the second. Here we used the internal variable `_index` to show the index of each substep.
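The substep mechanism can be pictured in plain Python (this is an illustration of the semantics, not SoS itself): `group_by=N` splits `step_input` into chunks of size `N`, and the step body runs once per chunk.

```python
# Plain-Python picture of `group_by=N`: split the input list into chunks of
# size N and run the step body once per chunk, with `_index` and `_input`
# taking a different value in each substep.
step_input = ['a.txt', 'b.txt']
N = 1
groups = [step_input[i:i + N] for i in range(0, len(step_input), N)]
for _index, _input in enumerate(groups):
    print(f'input of substep {_index} is {_input}')
```

With `N = 1` this produces two substeps, one per file, matching the output of the SoS cell above.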
# + [markdown] kernel="SoS" tags=[] # SoS allows you to group input in a number of ways: # # | option | group by | # | --- | --- | # | `all` | all in a single group, the default | # | `single` | individual target | # | `pairs` | match first half of files with the second half, take one from each half each time | # | `combinations` | all unordered combinations of 2-sets | # | `pairwise` | all adjacent 2-sets | # | `label` | by labels of input | # | `pairlabel` | pair input files by their labels and take one from each label each time | # | `N` = `1`, `2`, ... | chunks of size `N` | # | `pairsN`, `N`=`2`, `3`, ... | match first half of files with the second half, take `N` from each half each time | # | `pairlabelN`, `N`=`2`, `3`, ... | pair input files by their labels and take `N` from each label (if equal size) each time | # | `pairwiseN`, `N`=`2`, `3`, ... | all adjacent 2-sets, but each set has `N` items | # | `combinationsN`, `N`=`2`, `3`, ... | all unordered combinations of `N` items | # | function (e.g. `lambda x: ...`) | a function that returns groups of inputs | # + [markdown] kernel="SoS" tags=[] # ### Group by order of input targets # + [markdown] kernel="SoS" tags=[] # You can group input targets in many different combinations based on their order in the input list. For example, with the following SoS script, the input targets are grouped pairwise: # + kernel="SoS" tags=[] !touch file1 file2 file3 file4 input: 'file1', 'file2', 'file3', 'file4', group_by='pairwise' print(f"{_input}") # + [markdown] kernel="SoS" tags=[] # To demonstrate more of the acceptable values, the following example uses the `sos_run` action to execute a step with different grouping methods.
# + kernel="SoS" tags=[] !touch file1 file2 file3 file4 # %run -v1 [group] parameter: group = str print(f"\ngroup_by={group}") input: 'file1', 'file2', 'file3', 'file4', group_by=group print(f"{_index}: {_input}") [default] sos_run('group', group=1) sos_run('group', group=2) sos_run('group', group='single') sos_run('group', group='pairs') sos_run('group', group='pairwise') sos_run('group', group='combinations') sos_run('group', group='combinations3') # + [markdown] kernel="SoS" tags=[] # We did not include options `pairsN` and `pairwiseN` in the example because we need more input files to see what is going on. As you can see from the following example, the `N` suffix first groups input targets into small groups of size `N` before `pairs` and `pairwise` are applied. # + kernel="SoS" tags=[] !touch A1 B1 A2 B2 A3 B3 A4 B4 # %run -v1 [group] parameter: group = str print(f"\ngroup_by={group}") input: 'A1', 'B1', 'A2', 'B2', 'A3', 'B3', 'A4', 'B4', group_by=group print(f"{_index}: {_input}") [default] sos_run('group', group='pairs2') sos_run('group', group='pairwise2') # + [markdown] kernel="SoS" tags=[] # ### Group by label of input # + [markdown] kernel="SoS" tags=[] # As we recall from the `labels` attribute of `sos_targets`, input targets carry the label of the present step (if specified directly) or of the previously executed steps that produced them. Option `group_by` allows you to group input by label (`group_by='label'`) or pair labels (`group_by='pairlabel'` and `group_by='pairlabelN'`). # + [markdown] kernel="SoS" tags=[] # Labeled input is useful, for example, when you have input data of different natures.
For example: # + kernel="SoS" tags=[] !touch sample1.txt sample2.txt reference.txt input: data=['sample1.txt', 'sample2.txt'], reference='reference.txt', group_by='pairlabel' print(f'Process data {_input["data"]} with reference {_input["reference"]}') # + [markdown] kernel="SoS" tags=[] # Here we would like to apply `group_by=1` only to `_input["data"]`, so we pair `_input["data"]` and `_input["reference"]` and group them together with `pairlabel`. # + [markdown] kernel="SoS" tags=[] # As a more complete example, # + kernel="SoS" tags=[] !touch c1 c2 c3 c4 # %run -v1 [step_10] output: 'a1' _output.touch() [step_20] output: 'b1', 'b2' _output.touch() [group_step] parameter: group = str print(f"\ngroup_by={group}") input: 'c1', 'c2', 'c3', 'c4', output_from(['step_10', 'step_20']), group_by=group print(f"{_index}: {_input} from {_input.labels}") [default] sos_run('group_step', group='label') sos_run('group_step', group='pairlabel') sos_run('group_step', group='pairlabel2') # + [markdown] kernel="SoS" tags=[] # The options `pairlabel` and `pairlabel2` need some explanation here because our groups do not have the same size. What these options do is: # # 1. Determine the number of groups `m` from `N` and the size of the largest label group. # 2. Either split or repeat items in each label group to create `m` groups. # # For example, with `pairlabel2`, we create two groups because the largest label group has 4 targets (`m=4/2=2`). Then, `a1` is repeated twice, `b1` and `b2` go to separate groups, and `c1`, `c2` and `c3`, `c4` form two groups. # + [markdown] kernel="SoS" tags=[] # ### Group by user-defined function # + [markdown] kernel="SoS" tags=[] # Finally, if none of the predefined grouping mechanisms works, it may be easier to specify a function that takes `step_input` and returns a list of `sos_targets` to be used as `_input`.
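As an illustration of such a function, here is a plain-Python emulation of the `pairlabel2` behaviour described above. This is not the SoS implementation: `pair_labels` is a hypothetical helper that takes a mapping of label to targets instead of a real `sos_targets` object, and its repeat-vs-split rule is inferred from the example.

```python
# Plain-Python emulation (an assumption-laden sketch, not SoS itself) of the
# `pairlabelN` rule: determine the group count m from N and the largest label
# group, then split evenly divisible label groups into m chunks and repeat the
# items of label groups that are too small.
def pair_labels(sources, N):
    m = max(len(v) for v in sources.values()) // N   # number of groups
    groups = [[] for _ in range(m)]
    for targets in sources.values():
        if len(targets) % m == 0:
            k = len(targets) // m                    # split evenly into m chunks
            for i in range(m):
                groups[i].extend(targets[i * k:(i + 1) * k])
        else:
            for g in groups:                         # too few items: repeat them
                g.extend(targets)
    return groups

print(pair_labels({'step_10': ['a1'],
                   'step_20': ['b1', 'b2'],
                   'step':    ['c1', 'c2', 'c3', 'c4']}, 2))
# [['a1', 'b1', 'c1', 'c2'], ['a1', 'b2', 'c3', 'c4']]
```

The output reproduces the two groups described in the text: `a1` repeated, `b1`/`b2` separated, and the `c` files chunked in pairs.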
# + kernel="SoS" tags=[] !touch c1 c2 c3 c4 c5 c6 input: 'c1', 'c2', 'c3', 'c4', 'c5', 'c6', group_by=lambda x: [x[0], x[1:3], x[3:]] print(f"{_index}: {_input}") # + [markdown] kernel="SoS" tags=[] # ## Parameter `group_by` of `output_from` and `named_output` # + [markdown] kernel="SoS" tags=[] # Pairing input from multiple sources is complicated when we apply `group_by` to a list of targets with different sources. It is a lot easier to apply `group_by` to each source separately. Fortunately, the function `output_from` accepts `group_by`, so you can regroup the targets before merging them with other sources. # # In the following example, `step_10` has 2 output files and `step_20` has 4. By applying `group_by=1` to `output_from('step_10')` and `group_by=2` to `output_from('step_20')`, we create two `sos_targets`, each with two subgroups. The two `sos_targets` are joined to create a single `_input` for each substep. # + kernel="SoS" tags=[] # %run group -v1 [step_10] output: 'a1', 'a2' _output.touch() [step_20] output: 'c1', 'c2', 'c3', 'c4' _output.touch() [group] input: output_from('step_10', group_by=1), output_from('step_20', group_by=2) print(f"{_index}: {_input} from {_input.labels}") # + [markdown] kernel="SoS" tags=[] # As explained in [named input](named_input.html), keyword arguments override the labels of targets, so you can assign names to groups with keyword arguments: # + kernel="SoS" tags=[] # %run group -v1 [step_10] output: 'a1', 'a2' _output.touch() [step_20] output: 'c1', 'c2', 'c3', 'c4' _output.touch() [group] input: output_from('step_10', group_by=1), s20=output_from('step_20', group_by=2) print(f"{_index}: {_input} from {_input.labels}") # + [markdown] kernel="SoS" tags=[] # Things can become tricky if you specify both "regular" input and grouped targets from `output_from`. In this case, the regular input is treated as a `sos_targets` with a single group and is merged into every group of the other `sos_targets`.
# + kernel="SoS" tags=[] !touch e1 e2 # %run group -v1 [step_10] output: 'a1', 'a2' _output.touch() [step_20] output: 'c1', 'c2', 'c3', 'c4' _output.touch() [group] input: output_from('step_10', group_by=1), output_from('step_20', group_by=2), my=('e1', 'e2') print(f'\nSubstep {_index}') print(f"substep input is {_input} from {_input.labels}") # + [markdown] kernel="SoS" tags=[] # However, if option `group_by` is specified outside of `output_from`, it groups all targets regardless of their original grouping. In the following example, the output from `step_10` and the targets `e1` and `e2` are all regrouped by 2. # + kernel="SoS" tags=[] !touch e1 e2 # %run group -v1 [step_10] output: 'c1', 'c2', 'c3', 'c4' _output.touch() [group] input: output_from('step_10', group_by=1), my=('e1', 'e2'), group_by=2 print(f'\nSubstep {_index}') print(f"substep input is {_input} from {_input.labels}")
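The way grouped sources are joined can be sketched in plain Python (an illustration of the semantics described above, not SoS internals): groups are matched by index and concatenated, and a single-group "regular" input is merged into every group.

```python
# Plain-Python sketch of joining grouped `sos_targets`: zip groups by index,
# then append the regular (single-group) input to every substep's `_input`.
a = [['a1'], ['a2']]               # output_from('step_10', group_by=1)
c = [['c1', 'c2'], ['c3', 'c4']]   # output_from('step_20', group_by=2)
regular = ['e1', 'e2']             # plain input: a single group

joined = [ga + gc + regular for ga, gc in zip(a, c)]
print(joined)
# [['a1', 'c1', 'c2', 'e1', 'e2'], ['a2', 'c3', 'c4', 'e1', 'e2']]
```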
src/user_guide/group_by.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # URL: http://bokeh.pydata.org/en/latest/docs/gallery/histogram.html # # Most examples work across multiple plotting backends, this example is also available for: # # * [Matplotlib - histogram_example](../matplotlib/histogram_example.ipynb) import numpy as np import scipy import scipy.special import holoviews as hv hv.extension('bokeh') # ## Declaring data # + def get_overlay(hist, x, pdf, cdf, label): pdf = hv.Curve((x, pdf), label='PDF') cdf = hv.Curve((x, cdf), label='CDF') return (hv.Histogram(hist, vdims='P(r)') * pdf * cdf).relabel(label) np.seterr(divide='ignore', invalid='ignore') label = "Normal Distribution (μ=0, σ=0.5)" mu, sigma = 0, 0.5 measured = np.random.normal(mu, sigma, 1000) hist = np.histogram(measured, density=True, bins=50) x = np.linspace(-2, 2, 1000) pdf = 1/(sigma * np.sqrt(2*np.pi)) * np.exp(-(x-mu)**2 / (2*sigma**2)) cdf = (1+scipy.special.erf((x-mu)/np.sqrt(2*sigma**2)))/2 norm = get_overlay(hist, x, pdf, cdf, label) label = "Log Normal Distribution (μ=0, σ=0.5)" mu, sigma = 0, 0.5 measured = np.random.lognormal(mu, sigma, 1000) hist = np.histogram(measured, density=True, bins=50) x = np.linspace(0, 8.0, 1000) pdf = 1/(x* sigma * np.sqrt(2*np.pi)) * np.exp(-(np.log(x)-mu)**2 / (2*sigma**2)) cdf = (1+scipy.special.erf((np.log(x)-mu)/(np.sqrt(2)*sigma)))/2 lognorm = get_overlay(hist, x, pdf, cdf, label) label = "Gamma Distribution (k=1, θ=2)" k, theta = 1.0, 2.0 measured = np.random.gamma(k, theta, 1000) hist = np.histogram(measured, density=True, bins=50) x = np.linspace(0, 20.0, 1000) pdf = x**(k-1) * np.exp(-x/theta) / (theta**k * scipy.special.gamma(k)) cdf = scipy.special.gammainc(k, x/theta) / scipy.special.gamma(k) gamma = get_overlay(hist, x, pdf, cdf, label) label = "Beta Distribution (α=2, β=2)" alpha, beta = 2.0, 2.0 measured = np.random.beta(alpha, beta, 1000) hist = 
np.histogram(measured, density=True, bins=50) x = np.linspace(0, 1, 1000) pdf = x**(alpha-1) * (1-x)**(beta-1) / scipy.special.beta(alpha, beta) cdf = scipy.special.btdtr(alpha, beta, x) beta = get_overlay(hist, x, pdf, cdf, label) label = "Weibull Distribution (λ=1, k=1.25)" lam, k = 1, 1.25 measured = lam*(-np.log(np.random.uniform(0, 1, 1000)))**(1/k) hist = np.histogram(measured, density=True, bins=50) x = np.linspace(0, 8, 1000) pdf = (k/lam)*(x/lam)**(k-1) * np.exp(-(x/lam)**k) cdf = 1 - np.exp(-(x/lam)**k) weibull = get_overlay(hist, x, pdf, cdf, label) # - # ## Plot # + no_norm = dict(axiswise=True) opts = {'Histogram': {'style': dict(fill_color="#036564"), 'norm': no_norm, 'plot': dict(height=350, width=350, bgcolor="#E8DDCB")}, 'Curve': {'norm': no_norm}, 'Layout': {'plot': dict(shared_axes=False)}} (norm + lognorm + gamma + beta + weibull).opts(opts).cols(2)
examples/gallery/demos/bokeh/histogram_example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # In this example, we will load the residual map of a fit from a .fits file and plot it using the function autolens.data.plotters.array_plotters.plot_array. # # We will use the residuals of a fit to slacs1430+4105, which come from running the example pipeline '_workspace/pipelines/examples/lens_light_and_x1_source_parametric.py_'. # # We have included the .fits data required for this example in the directory '_workspace/output/example/slacs1430+4105/pipeline_light_and_x1_source_parametric/phase_3_both/image/fits_'. # # However, the complete set of optimizer results for the pipeline is not included, as the large file sizes prohibit distribution. Therefore, you may wish to run this pipeline now on slacs1430+4105 to generate your own results. # # We will customize the appearance of this figure to highlight the features of the residual map. # from autolens.data.array import scaled_array from autolens.data.array.plotters import array_plotters # First, let's set up the path to the .fits file of the residual map. # + # If you are using Docker, the path to the workspace is as follows (i.e. uncomment this line) # path = '/home/user/workspace/' # If you aren't using Docker, you need to change the path below to the workspace directory and uncomment it # path = '/path/to/workspace/' path = '/home/jammy/PyCharm/Projects/AutoLens/workspace/' lens_name = 'slacs1430+4105' pipeline_name = 'pipeline_lens_light_and_x1_source_parametric' phase_name = 'phase_3_both' residual_map_path = path+'/output/example/'+lens_name+'/'+pipeline_name+'/'+phase_name+'/image/fits/fit_residual_map.fits' # - # Now, let's load this image as a scaled array.
A scaled array is an ordinary NumPy array, but it also includes a pixel scale which allows us to convert the axes of the image to arc-second coordinates. residual_map = scaled_array.ScaledSquarePixelArray.from_fits_with_pixel_scale(file_path=residual_map_path, hdu=0, pixel_scale=0.04) # We can now use an array plotter to plot the residual map. array_plotters.plot_array(array=residual_map, title='SLACS1430+4105 Residual Map') # A useful way to really dig into the residuals is to set upper and lower limits on the normalization of the colorbar. array_plotters.plot_array(array=residual_map, title='SLACS1430+4105 Residual Map', norm_min=-0.02, norm_max=0.02) # Or, alternatively, use a symmetric logarithmic colormap array_plotters.plot_array(array=residual_map, title='SLACS1430+4105 Residual Map', norm='symmetric_log', linthresh=0.01, linscale=0.02) # These tools are equally powerful ways to inspect the chi-squared map of a fit. chi_squared_map_path = \ path+'/output/example/'+lens_name+'/'+pipeline_name+'/'+phase_name+'/image/fits/fit_chi_squared_map.fits' chi_squared_map = scaled_array.ScaledSquarePixelArray.from_fits_with_pixel_scale(file_path=chi_squared_map_path, hdu=0, pixel_scale=0.04) array_plotters.plot_array(array=chi_squared_map, title='SLACS1430+4105 Chi-Squared Map') array_plotters.plot_array(array=chi_squared_map, title='SLACS1430+4105 Chi-Squared Map', norm_min=-10.0, norm_max=10.0) array_plotters.plot_array(array=chi_squared_map, title='SLACS1430+4105 Chi-Squared Map', norm='symmetric_log', linthresh=0.01, linscale=0.02) # We can also plot the results of a fit using the fit itself. To do this, we have to make the pipeline and run it so as to load up all the results of the pipeline. We can then access the results of every phase. 
# + from autolens.data import ccd from workspace.pipelines.examples.lens_light_and_x1_source_parametric import make_lens_light_and_x1_source_parametric_pipeline image_path = path + '/data/example/' + lens_name + '/image.fits' psf_path = path + '/data/example/' + lens_name + '/psf.fits' noise_map_path = path + '/data/example/' + lens_name + '/noise_map.fits' ccd_data = ccd.load_ccd_data_from_fits(image_path=image_path, psf_path=psf_path, noise_map_path=noise_map_path, pixel_scale=0.03) pipeline = make_lens_light_and_x1_source_parametric_pipeline(pipeline_path='example/' + lens_name) # - # Now we run the pipeline on the data to get the result. If a mask was supplied to the pipeline when it was run, it is important the same mask is supplied in this run statement. # # The skip_optimizer boolean flag means that the non-linear searches will not run, and visualization will be skipped. This ensures the running of the pipeline is fast. result = pipeline.run(data=ccd_data, skip_optimizer=True)
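The pixel scale used throughout (0.04 or 0.03 arc-seconds per pixel) is what lets a scaled array label its axes in arc-seconds. The exact centring convention of `ScaledSquarePixelArray` may differ, but the idea can be illustrated with a hypothetical helper:

```python
import numpy as np

# Illustration only: map pixel indices of an image to arc-second coordinates
# centred on the image. `pixel_centres_arcsec` is a hypothetical helper, not
# part of the autolens API.
def pixel_centres_arcsec(n_pixels, pixel_scale):
    # offset by half a pixel so coordinates refer to pixel centres
    return (np.arange(n_pixels) - n_pixels / 2 + 0.5) * pixel_scale

x = pixel_centres_arcsec(100, pixel_scale=0.04)
print(x.min(), x.max())  # approximately -1.98 and 1.98 arc-seconds
```

A 100-pixel axis at 0.04 "/pixel therefore spans about 4 arc-seconds, which is why the plots above show a few-arcsecond field of view around the lens.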
workspace/plotting/examples/fit.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="ZrwVQsM9TiUw" # ##### Copyright 2020 The TensorFlow Probability Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # + colab_type="code" id="CpDUTVKYTowI" colab={} #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="ltPJCG6pAUoc" # # A Tour of Oryx # # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/probability/oryx/examples/a_tour_of_oryx"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/spinoffs/oryx/examples/notebooks/a_tour_of_oryx.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/spinoffs/oryx/examples/notebooks/a_tour_of_oryx.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/probability/spinoffs/oryx/examples/notebooks/a_tour_of_oryx.ipynb"><img 
src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> # </table> # + [markdown] colab_type="text" id="Cvrh7Ppuwlbb" # ## What is Oryx? # + [markdown] colab_type="text" id="F_n9c7K3xdKQ" # Oryx is an experimental library that extends [JAX](https://github.com/google/jax) to applications ranging from building and training complex neural networks to approximate Bayesian inference in deep generative models. Just as JAX provides `jit`, `vmap`, and `grad`, Oryx provides a set of **composable function transformations** that enable writing simple code and transforming it to build complexity while staying completely interoperable with JAX. # # JAX can only safely transform pure, functional code (i.e. code without side-effects). While pure code can be easier to write and reason about, "impure" code can often be more concise and expressive. # # At its core, Oryx is a library that enables "augmenting" pure functional code to accomplish tasks like defining state or pulling out intermediate values. Its goal is to be as thin a layer on top of JAX as possible, leveraging JAX's minimalist approach to numerical computing. Oryx is conceptually divided into several "layers", each building on the one below it. # # The source code for Oryx can be found [on GitHub](https://github.com/tensorflow/probability/tree/main/spinoffs/oryx).
# + [markdown] colab_type="text" id="8cloSFmOiJqn" # ## Setup # + colab_type="code" id="cdhNEzj6iJCc" colab={} # !pip install oryx 1>/dev/null # + colab_type="code" id="Ve8yVrLbiOXv" colab={} import matplotlib.pyplot as plt import seaborn as sns sns.set(style='whitegrid') import jax import jax.numpy as jnp from jax import random from jax import vmap from jax import jit from jax import grad import oryx tfd = oryx.distributions state = oryx.core.state ppl = oryx.core.ppl inverse = oryx.core.inverse ildj = oryx.core.ildj plant = oryx.core.plant reap = oryx.core.reap sow = oryx.core.sow unzip = oryx.core.unzip nn = oryx.experimental.nn mcmc = oryx.experimental.mcmc optimizers = oryx.experimental.optimizers # + [markdown] colab_type="text" id="AF05PEzd8QFI" # ## Layer 0: Base function transformations # # # + [markdown] colab_type="text" id="8WVTh54ZBJvq" # At its base, Oryx defines several new function transformations. These transformations are implemented using JAX's tracing machinery and are interoperable with existing JAX transformations like `jit`, `grad`, `vmap`, etc. # # ### Automatic function inversion # `oryx.core.inverse` and `oryx.core.ildj` are function transformations that can programmatically invert a function and compute its inverse log-det Jacobian (ILDJ), respectively. These transformations are useful in probabilistic modeling for computing log-probabilities using the change-of-variable formula. There are limitations on the types of functions they are compatible with, however (see [the documentation](https://tensorflow.org/probability/oryx/api_docs/python/oryx/core/interpreters/inverse) for more details). # + colab_type="code" id="YxbReBYs5OpM" colab={} def f(x): return jnp.exp(x) + 2.
print(inverse(f)(4.)) # ln(2) print(ildj(f)(4.)) # -ln(2) # + [markdown] colab_type="text" id="-U08JAgs5w5p" # ### Harvest # `oryx.core.harvest` enables tagging values in functions along with the ability to collect them, or "reap" them, and the ability to inject values in their place, or "planting" them. We tag values using the `sow` function. # + colab_type="code" id="pFJNr4SR5_vl" colab={} def f(x): y = sow(x + 1., name='y', tag='intermediate') return y ** 2 print('Reap:', reap(f, tag='intermediate')(1.)) # Pulls out 'y' print('Plant:', plant(f, tag='intermediate')(dict(y=5.), 1.)) # Injects 5. for 'y' # + [markdown] colab_type="text" id="ffR6Emmm5OVI" # ### Unzip # `oryx.core.unzip` splits a function in two along a set of values tagged as intermediates, then returning the functions `init_f` and `apply_f`. `init_f` takes in a key argument and returns the intermediates. `apply_f` returns a function that takes in the intermediates and returns the original function's output. # + colab_type="code" id="ojFVr_ZKm0UX" colab={} def f(key, x): w = sow(random.normal(key), tag='variable', name='w') return w * x init_f, apply_f = unzip(f, tag='variable')(random.PRNGKey(0), 1.) # + [markdown] id="jUJ5isbLjGy8" colab_type="text" # The `init_f` function runs `f` but only returns its variables. # + id="26VUK0nTjLcO" colab_type="code" colab={} init_f(random.PRNGKey(0)) # + [markdown] id="0KWemKR2jOn6" colab_type="text" # `apply_f` takes a set of variables as its first input and executes `f` with the given set of variables. # + id="SpKFfQZqiDAR" colab_type="code" colab={} apply_f(dict(w=2.), 2.) # Runs f with `w = 2`. # + [markdown] colab_type="text" id="Q0EtM2bj64fc" # ## Layer 1: Higher level transformations # + [markdown] colab_type="text" id="DexZ_6Ds69J4" # Oryx builds off the low-level inverse, harvest, and unzip function transformations to offer several higher-level transformations for writing stateful computations and for probabilistic programming. 
# + [markdown] colab_type="text" id="3zEucvAN7WJX" # ### Stateful functions (`core.state`) # We're often interested in expressing stateful computations where we initialize a set of parameters and express a computation in terms of the parameters. In `oryx.core.state`, Oryx provides an `init` transformation that converts a function into one that initializes a `Module`, a container for state. # # `Module`s resemble Pytorch and TensorFlow `Module`s except that they are immutable. # + colab_type="code" id="cmV2jLSr62Le" colab={} def make_dense(dim_out): def forward(x, init_key=None): w_key, b_key = random.split(init_key) dim_in = x.shape[0] w = state.variable(random.normal(w_key, (dim_in, dim_out)), name='w') b = state.variable(random.normal(w_key, (dim_out,)), name='b') return jnp.dot(x, w) + b return forward layer = state.init(make_dense(5))(random.PRNGKey(0), jnp.zeros(2)) print('layer:', layer) print('layer.w:', layer.w) print('layer.b:', layer.b) # + [markdown] id="dM02YafPiyVR" colab_type="text" # `Module`s are registered as JAX pytrees and can be used as inputs to JAX transformed functions. Oryx provides a convenient `call` function that executes a `Module`. # + id="DRYp96JFizoU" colab_type="code" colab={} vmap(state.call, in_axes=(None, 0))(layer, jnp.ones((5, 2))) # + [markdown] colab_type="text" id="p_ZPPibD-NI4" # The `state` API also enables writing stateful updates (like running averages) using the `assign` function. The resulting `Module` has an `update` function with an input signature that is the same as the `Module`'s `__call__` but creates a new copy of the `Module` with an updated state. # + colab_type="code" id="fXnL3ZvD-UKx" colab={} def counter(x, init_key=None): count = state.variable(0., key=init_key, name='count') count = state.assign(count + 1., name='count') return x + count layer = state.init(counter)(random.PRNGKey(0), 0.) print(layer.count) updated_layer = layer.update(0.) print(updated_layer.count) # Count has advanced! 
print(updated_layer.call(1.)) # + [markdown] colab_type="text" id="VO_VdtAA70EN" # # ### Probabilistic programming # + [markdown] colab_type="text" id="-bYaYxDA-5yz" # In `oryx.core.ppl`, Oryx provides a set of tools built on top of `harvest` and `inverse` which aim to make writing and transforming probabilistic programs intuitive and easy. # # In Oryx, a probabilistic program is a JAX function that takes a source of randomness as its first argument and returns a sample from a distribution, i.e, `f :: Key -> Sample`. In order to write these programs, Oryx wraps [TensorFlow Probability](https://www.tensorflow.org/probability) distributions and provides a simple function `random_variable` that converts a distribution into a probabilistic program. # + colab_type="code" id="fh8AFQq771VJ" colab={} def sample(key): return ppl.random_variable(tfd.Normal(0., 1.))(key) sample(random.PRNGKey(0)) # + [markdown] colab_type="text" id="JWnPjFxx_i5I" # What can we do with probabilistic programs? The simplest thing would be to take a probabilistic program (i.e. a sampling function) and convert it into one that provides the log-density of a sample. # + colab_type="code" id="h6U4_pAp_huX" colab={} ppl.log_prob(sample)(1.) # + [markdown] id="51yfR5Sm2ZuD" colab_type="text" # The new log-probability function is compatible with other JAX transformations like `vmap` and `grad`. # + id="je3wggIi2Ytm" colab_type="code" colab={} grad(lambda s: vmap(ppl.log_prob(sample))(s).sum())(jnp.arange(10.)) # + [markdown] colab_type="text" id="wEqAS9AfAPCh" # Using the `ildj` transformation, we can compute `log_prob` of programs that invertibly transform samples. # + colab_type="code" id="2SGe1YZ5AUP1" colab={} def sample(key): x = ppl.random_variable(tfd.Normal(0., 1.))(key) return jnp.exp(x / 2.) + 2. 
_, ax = plt.subplots(2) ax[0].hist(jit(vmap(sample))(random.split(random.PRNGKey(0), 1000)), bins='auto') x = jnp.linspace(0, 8, 100) ax[1].plot(x, jnp.exp(jit(vmap(ppl.log_prob(sample)))(x))) plt.show() # + [markdown] colab_type="text" id="AEvnv1-__8jd" # We can tag intermediate values in a probabilistic program with names and obtain joint sampling and joint log-prob functions. # + colab_type="code" id="yDttqgL7_umZ" colab={} def sample(key): z_key, x_key = random.split(key) z = ppl.random_variable(tfd.Normal(0., 1.), name='z')(z_key) x = ppl.random_variable(tfd.Normal(z, 1.), name='x')(x_key) return x ppl.joint_sample(sample)(random.PRNGKey(0)) # + [markdown] id="Q45YW73E2uVK" colab_type="text" # Oryx also has a `joint_log_prob` function that composes `log_prob` with `joint_sample`. # + id="FjZIhP7n2uwm" colab_type="code" colab={} ppl.joint_log_prob(sample)(dict(x=0., z=0.)) # + [markdown] colab_type="text" id="OP8boCwYA50n" # To learn more, see the [documentation](https://tensorflow.org/probability/oryx/api_docs/python/oryx/core/ppl/transformations). # + [markdown] colab_type="text" id="eglTKzL6A72r" # ## Layer 2: Mini-libraries # + [markdown] colab_type="text" id="9LdSK3XzBMuV" # Building further on top of the layers that handle state and probabilistic programming, Oryx provides experimental mini-libraries tailored for specific applications like deep learning and Bayesian inference. # + [markdown] colab_type="text" id="iGXK3SHGBTqe" # ### Neural networks # + [markdown] colab_type="text" id="0l7OEJM2BYJu" # In `oryx.experimental.nn`, Oryx provides a set of common neural network `Layer`s that fit neatly into the `state` API. These layers are built for single examples (not batches) but override batch behaviors to handle patterns like running averages in batch normalization. They also enable passing keyword arguments like `training=True/False` into modules. # # `Layer`s are initialized from a `Template` like `nn.Dense(200)` using `state.init`. 
# + colab_type="code" id="a6c2IjijA7Sn" colab={} layer = state.init(nn.Dense(200))(random.PRNGKey(0), jnp.zeros(50)) print(layer, layer.params.kernel.shape, layer.params.bias.shape) # + [markdown] id="1XKSMZyuiD6v" colab_type="text" # A `Layer` has a `call` method that runs its forward pass. # + id="z0n7l3DZiNre" colab_type="code" colab={} layer.call(jnp.ones(50)).shape # + [markdown] colab_type="text" id="J73S0GXjCLQ2" # Oryx also provides a `Serial` combinator. # + colab_type="code" id="xQhmJAHVB5iN" colab={} mlp_template = nn.Serial([ nn.Dense(200), nn.Relu(), nn.Dense(200), nn.Relu(), nn.Dense(10), nn.Softmax() ]) # OR mlp_template = ( nn.Dense(200) >> nn.Relu() >> nn.Dense(200) >> nn.Relu() >> nn.Dense(10) >> nn.Softmax()) mlp = state.init(mlp_template)(random.PRNGKey(0), jnp.ones(784)) mlp(jnp.ones(784)) # + [markdown] colab_type="text" id="g8h2nzyICpVd" # We can interleave functions and combinators to create a flexible neural network "meta language". # + colab_type="code" id="NvLB8zxXChyr" colab={} def resnet(template): def forward(x, init_key=None): layer = state.init(template, name='layer')(init_key, x) return x + layer(x) return forward big_resnet_template = nn.Serial([ nn.Dense(50) >> resnet(nn.Dense(50) >> nn.Relu()) >> resnet(nn.Dense(50) >> nn.Relu()) >> nn.Dense(10) ]) network = state.init(big_resnet_template)(random.PRNGKey(0), jnp.ones(784)) network(jnp.ones(784)) # + [markdown] colab_type="text" id="7-qBbDe_D8oV" # ### Optimizers # + [markdown] colab_type="text" id="a3c3GW1LEGKm" # In `oryx.experimental.optimizers`, Oryx provides a set of first-order optimizers, built using the `state` API. Their design is based off of JAX's [`optix` library](https://jax.readthedocs.io/en/latest/jax.experimental.optix.html), where optimizers maintain state about a set of gradient updates. Oryx's version manages state using the `state` API. 
# + colab_type="code" id="b7Gfm0d2EBC6" colab={} network_key, opt_key = random.split(random.PRNGKey(0)) def autoencoder_loss(network, x): return jnp.square(network.call(x) - x).mean() network = state.init(nn.Dense(200) >> nn.Relu() >> nn.Dense(2))(network_key, jnp.zeros(2)) opt = state.init(optimizers.adam(1e-4))(opt_key, network, network) g = grad(autoencoder_loss)(network, jnp.zeros(2)) g, opt = opt.call_and_update(network, g) network = optimizers.optix.apply_updates(network, g) # + [markdown] colab_type="text" id="EGDs47TEFKXB" # ### Markov chain Monte Carlo # + [markdown] colab_type="text" id="T7b6IdRwFP-k" # In `oryx.experimental.mcmc`, Oryx provides a set of Markov Chain Monte Carlo (MCMC) kernels. MCMC is an approach to approximate Bayesian inference where we draw samples from a Markov chain whose stationary distribution is the posterior distribution of interest. # # Oryx's MCMC library builds on both the `state` and `ppl` API. # + colab_type="code" id="wWTHfPWmGrAl" colab={} def model(key): return jnp.exp(ppl.random_variable(tfd.MultivariateNormalDiag( jnp.zeros(2), jnp.ones(2)))(key)) # + [markdown] colab_type="text" id="hQB7rhQ5GmN8" # #### Random walk Metropolis # + colab_type="code" id="O27O2oTJE1Nu" colab={} samples = jit(mcmc.sample_chain(mcmc.metropolis( ppl.log_prob(model), mcmc.random_walk()), 1000))(random.PRNGKey(0), jnp.ones(2)) plt.scatter(samples[:, 0], samples[:, 1], alpha=0.5) plt.show() # + [markdown] colab_type="text" id="0vTY-MiTGuQa" # #### Hamiltonian Monte Carlo # + colab_type="code" id="2CWSqdO7F3Ix" colab={} samples = jit(mcmc.sample_chain(mcmc.hmc( ppl.log_prob(model)), 1000))(random.PRNGKey(0), jnp.ones(2)) plt.scatter(samples[:, 0], samples[:, 1], alpha=0.5) plt.show()
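# The kernels above can be demystified with a plain-Python sketch of random-walk
# Metropolis on a 1D standard normal. `metropolis_sample_chain`, its step size, and the
# seed are illustrative choices for this sketch, not part of the Oryx API.

```python
import math
import random

def metropolis_sample_chain(log_prob, init, num_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + eps with Gaussian eps and
    accept with probability min(1, exp(log_prob(x') - log_prob(x)))."""
    rng = random.Random(seed)
    x = init
    chain = []
    for _ in range(num_steps):
        proposal = x + rng.gauss(0.0, step_size)
        log_accept = log_prob(proposal) - log_prob(x)
        if log_accept >= 0 or rng.random() < math.exp(log_accept):
            x = proposal  # accept the move; otherwise keep the current state
        chain.append(x)
    return chain

# Target: a standard normal, log density up to an additive constant.
std_normal_log_prob = lambda x: -0.5 * x * x

chain = metropolis_sample_chain(std_normal_log_prob, init=1.0, num_steps=5000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

# The stationary distribution of this chain is the standard normal, so the sample
# mean and variance should settle near 0 and 1.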
spinoffs/oryx/examples/notebooks/a_tour_of_oryx.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from config import username, password, database

import matplotlib
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import numpy as np

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

from sqlalchemy import Column, Integer, String, Float, Date
from sqlalchemy import create_engine, inspect
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
# -

engine = create_engine(f'postgresql://{username}:{password}@localhost:5432/SQL_Challenge')
connection = engine.connect()

# +
# IMPORT DATA FROM SQL
# emp_no           INTEGER
# salary           INTEGER
# salary_start_dt  DATE
# salary_end_dt    DATE
# -

base = automap_base()
base.prepare(engine, reflect=True)
Salaries = base.classes.salaries

session = Session(engine)

# Create a histogram to visualize the most common salary ranges for employees.

# Column names follow the schema comment above.
salaries = pd.read_sql('SELECT salary FROM salaries', connection)

x = np.array(salaries['salary'])

plt.style.use("ggplot")
plt.hist(x)
plt.show()

# Create a bar chart of average salary by title.
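# The last task above is left unimplemented. A minimal sketch of the aggregation behind
# "average salary by title", using a small in-memory sample instead of the real joined
# `titles`/`salaries` tables (the rows below are made up for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows, as they might come back from joining titles to salaries.
rows = [
    ('Engineer', 60000),
    ('Engineer', 70000),
    ('Senior Engineer', 80000),
    ('Manager', 90000),
]

# Group salaries by title, then average each group.
by_title = defaultdict(list)
for title, salary in rows:
    by_title[title].append(salary)

avg_salary = {title: mean(values) for title, values in by_title.items()}
print(avg_salary)
```

# With the real data, this dict would feed directly into
# `plt.bar(avg_salary.keys(), avg_salary.values())`.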
BONUS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="Qd-j_0ysLbk9" colab_type="code" outputId="218debea-e616-4da4-c7e2-e647e372c93d" executionInfo={"status": "ok", "timestamp": 1568827400023, "user_tz": 420, "elapsed": 33617, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount('/content/gdrive') # + id="hR02IOlGvTcI" colab_type="code" colab={} # !cp -r gdrive/My\ Drive/data/ /content # + id="5zddDa9837De" colab_type="code" outputId="b6c38763-3b80-4650-f4e8-81c5e4f57543" executionInfo={"status": "ok", "timestamp": 1568827407497, "user_tz": 420, "elapsed": 2693, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # !ls # + id="M0D2e59AwWZm" colab_type="code" outputId="83ee5e28-cb40-4a91-8e5e-2f85f58679ea" executionInfo={"status": "ok", "timestamp": 1568827415661, "user_tz": 420, "elapsed": 3668, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} colab={"base_uri": "https://localhost:8080/", "height": 34} import tensorflow as tf tf.test.gpu_device_name() # + id="gtGBAe4a0UBT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="94a095fd-e9ac-4372-84c0-5a2f497baf62" executionInfo={"status": "ok", "timestamp": 1568827418956, "user_tz": 420, "elapsed": 1009, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} from __future__ import print_function import keras from keras.layers import Dense, Conv2D, BatchNormalization, Activation from keras.layers import AveragePooling2D, Input, Flatten from keras.optimizers import Adam from keras.callbacks import ModelCheckpoint, LearningRateScheduler from 
keras.callbacks import ReduceLROnPlateau from keras.preprocessing.image import ImageDataGenerator from keras.regularizers import l2 from keras import backend as K from keras.models import Model import numpy as np import os # Training parameters batch_size = 32 # orig paper trained all networks with batch_size=128 epochs = 200 data_augmentation = True num_classes = 5 input_shape = (64,64,3) # Subtracting pixel mean improves accuracy subtract_pixel_mean = True # Model parameter # ---------------------------------------------------------------------------- # | | 200-epoch | Orig Paper| 200-epoch | Orig Paper| sec/epoch # Model | n | ResNet v1 | ResNet v1 | ResNet v2 | ResNet v2 | GTX1080Ti # |v1(v2)| %Accuracy | %Accuracy | %Accuracy | %Accuracy | v1 (v2) # ---------------------------------------------------------------------------- # ResNet20 | 3 (2)| 92.16 | 91.25 | ----- | ----- | 35 (---) # ResNet32 | 5(NA)| 92.46 | 92.49 | NA | NA | 50 ( NA) # ResNet44 | 7(NA)| 92.50 | 92.83 | NA | NA | 70 ( NA) # ResNet56 | 9 (6)| 92.71 | 93.03 | 93.01 | NA | 90 (100) # ResNet110 |18(12)| 92.65 | 93.39+-.16| 93.15 | 93.63 | 165(180) # ResNet164 |27(18)| ----- | 94.07 | ----- | 94.54 | ---(---) # ResNet1001| (111)| ----- | 92.39 | ----- | 95.08+-.14| ---(---) # --------------------------------------------------------------------------- n = 3 # Model version # Orig paper: version = 1 (ResNet v1), Improved ResNet: version = 2 (ResNet v2) version = 1 # Computed depth from supplied model parameter n if version == 1: depth = n * 6 + 2 elif version == 2: depth = n * 9 + 2 # Model name, depth and version model_type = 'ResNet%dv%d' % (depth, version) # If subtract pixel mean is enabled # if subtract_pixel_mean: # x_train_mean = np.mean(x_train, axis=0) # x_train -= x_train_mean # x_test -= x_train_mean # + id="DMJBma--0ldK" colab_type="code" colab={} def lr_schedule(epoch): """Learning Rate Schedule Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs. 
Called automatically every epoch as part of callbacks during training.

    # Arguments
        epoch (int): The number of epochs

    # Returns
        lr (float32): learning rate
    """
    lr = 1e-3
    if epoch > 180:
        lr *= 0.5e-3
    elif epoch > 160:
        lr *= 1e-3
    elif epoch > 120:
        lr *= 1e-2
    elif epoch > 80:
        lr *= 1e-1
    print('Learning rate: ', lr)
    return lr


def resnet_layer(inputs,
                 num_filters=16,
                 kernel_size=3,
                 strides=1,
                 activation='relu',
                 batch_normalization=True,
                 conv_first=True):
    """2D Convolution-Batch Normalization-Activation stack builder

    # Arguments
        inputs (tensor): input tensor from input image or previous layer
        num_filters (int): Conv2D number of filters
        kernel_size (int): Conv2D square kernel dimensions
        strides (int): Conv2D square stride dimensions
        activation (string): activation name
        batch_normalization (bool): whether to include batch normalization
        conv_first (bool): conv-bn-activation (True) or
            bn-activation-conv (False)

    # Returns
        x (tensor): tensor as input to the next layer
    """
    conv = Conv2D(num_filters,
                  kernel_size=kernel_size,
                  strides=strides,
                  padding='same',
                  kernel_initializer='he_normal',
                  kernel_regularizer=l2(1e-4))

    x = inputs
    if conv_first:
        x = conv(x)
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
    else:
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
        x = conv(x)
    return x

# + id="4o1V5Mpf04tB" colab_type="code" colab={}
def resnet_v1(input_shape, depth, num_classes=10):
    """ResNet Version 1 Model builder [a]

    Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
    Last ReLU is after the shortcut connection.
    At the beginning of each stage, the feature map size is halved
    (downsampled) by a convolutional layer with strides=2, while the
    number of filters is doubled. Within each stage, the layers have
    the same number of filters and the same feature map sizes.
Features maps sizes: stage 0: 32x32, 16 stage 1: 16x16, 32 stage 2: 8x8, 64 The Number of parameters is approx the same as Table 6 of [a]: ResNet20 0.27M ResNet32 0.46M ResNet44 0.66M ResNet56 0.85M ResNet110 1.7M # Arguments input_shape (tensor): shape of input image tensor depth (int): number of core convolutional layers num_classes (int): number of classes (CIFAR10 has 10) # Returns model (Model): Keras model instance """ if (depth - 2) % 6 != 0: raise ValueError('depth should be 6n+2 (eg 20, 32, 44 in [a])') # Start model definition. num_filters = 16 num_res_blocks = int((depth - 2) / 6) inputs = Input(shape=input_shape) x = resnet_layer(inputs=inputs) # Instantiate the stack of residual units for stack in range(3): for res_block in range(num_res_blocks): strides = 1 if stack > 0 and res_block == 0: # first layer but not first stack strides = 2 # downsample y = resnet_layer(inputs=x, num_filters=num_filters, strides=strides) y = resnet_layer(inputs=y, num_filters=num_filters, activation=None) if stack > 0 and res_block == 0: # first layer but not first stack # linear projection residual shortcut connection to match # changed dims x = resnet_layer(inputs=x, num_filters=num_filters, kernel_size=1, strides=strides, activation=None, batch_normalization=False) x = keras.layers.add([x, y]) x = Activation('relu')(x) num_filters *= 2 # Add classifier on top. # v1 does not use BN after last shortcut connection-ReLU x = AveragePooling2D(pool_size=8)(x) y = Flatten()(x) outputs = Dense(num_classes, activation='softmax', kernel_initializer='he_normal')(y) # Instantiate model. model = Model(inputs=inputs, outputs=outputs) return model # + id="-jVIPplz06gR" colab_type="code" colab={} def resnet_v2(input_shape, depth, num_classes=10): """ResNet Version 2 Model builder [b] Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D or also known as bottleneck layer First shortcut connection per layer is 1 x 1 Conv2D. Second and onwards shortcut connection is identity. 
At the beginning of each stage, the feature map size is halved (downsampled) by a convolutional layer with strides=2, while the number of filter maps is doubled. Within each stage, the layers have the same number filters and the same filter map sizes. Features maps sizes: conv1 : 32x32, 16 stage 0: 32x32, 64 stage 1: 16x16, 128 stage 2: 8x8, 256 # Arguments input_shape (tensor): shape of input image tensor depth (int): number of core convolutional layers num_classes (int): number of classes (CIFAR10 has 10) # Returns model (Model): Keras model instance """ if (depth - 2) % 9 != 0: raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])') # Start model definition. num_filters_in = 16 num_res_blocks = int((depth - 2) / 9) inputs = Input(shape=input_shape) # v2 performs Conv2D with BN-ReLU on input before splitting into 2 paths x = resnet_layer(inputs=inputs, num_filters=num_filters_in, conv_first=True) # Instantiate the stack of residual units for stage in range(3): for res_block in range(num_res_blocks): activation = 'relu' batch_normalization = True strides = 1 if stage == 0: num_filters_out = num_filters_in * 4 if res_block == 0: # first layer and first stage activation = None batch_normalization = False else: num_filters_out = num_filters_in * 2 if res_block == 0: # first layer but not first stage strides = 2 # downsample # bottleneck residual unit y = resnet_layer(inputs=x, num_filters=num_filters_in, kernel_size=1, strides=strides, activation=activation, batch_normalization=batch_normalization, conv_first=False) y = resnet_layer(inputs=y, num_filters=num_filters_in, conv_first=False) y = resnet_layer(inputs=y, num_filters=num_filters_out, kernel_size=1, conv_first=False) if res_block == 0: # linear projection residual shortcut connection to match # changed dims x = resnet_layer(inputs=x, num_filters=num_filters_out, kernel_size=1, strides=strides, activation=None, batch_normalization=False) x = keras.layers.add([x, y]) num_filters_in = num_filters_out # 
Add classifier on top. # v2 has BN-ReLU before Pooling x = BatchNormalization()(x) x = Activation('relu')(x) x = AveragePooling2D(pool_size=8)(x) y = Flatten()(x) outputs = Dense(num_classes, activation='softmax', kernel_initializer='he_normal')(y) # Instantiate model. model = Model(inputs=inputs, outputs=outputs) return model # + id="qfa-4edALx8S" colab_type="code" outputId="11f5214c-2089-4a89-e7ef-33cc6adbb8d4" executionInfo={"status": "error", "timestamp": 1568797842292, "user_tz": 420, "elapsed": 10224088, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} # Prepare model saving directory. # save_dir = os.path.join(os.getcwd(), 'saved_models') # model_name = 'GED_classification_1_%s_model.{epoch:03d}.h5' % model_type # if not os.path.isdir(save_dir): # os.makedirs(save_dir) # filepath = os.path.join(save_dir, model_name) save_dir = os.path.join(os.getcwd(), 'gdrive/My Drive/saved_models') model_name = 'GED_classification_2_64_%s_model.{epoch:03d}.h5' % model_type if not os.path.isdir(save_dir): os.makedirs(save_dir) filepath = os.path.join(save_dir, model_name) # Prepare callbacks for model saving and for learning rate adjustment. checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_acc', verbose=1, save_best_only=True) lr_scheduler = LearningRateScheduler(lr_schedule) lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6) callbacks = [checkpoint, lr_reducer, lr_scheduler] # Run training, with data augmentation. 
train_datagen = ImageDataGenerator(
    rescale=1./255,
    # set input mean to 0 over the dataset
    featurewise_center=False,
    # set each sample mean to 0
    samplewise_center=False,
    # divide inputs by std of dataset
    featurewise_std_normalization=False,
    # divide each input by its std
    samplewise_std_normalization=False,
    # apply ZCA whitening
    zca_whitening=False,
    # epsilon for ZCA whitening
    zca_epsilon=1e-06,
    # randomly rotate images in the range (deg 0 to 180)
    rotation_range=0,
    # randomly shift images horizontally
    width_shift_range=0.1,
    # randomly shift images vertically
    height_shift_range=0.1,
    # set range for random shear
    shear_range=0.,
    # set range for random zoom
    zoom_range=0.,
    # set range for random channel shifts
    channel_shift_range=0.,
    # set mode for filling points outside the input boundaries
    fill_mode='nearest',
    # value used for fill_mode = "constant"
    cval=0.,
    # randomly flip images
    horizontal_flip=True,
    # randomly flip images
    vertical_flip=False,
    # # set rescaling factor (applied before any other transformation)
    # rescale=None,
    # set function that will be applied on each input
    preprocessing_function=None,
    # image data format, either "channels_first" or "channels_last"
    data_format=None,
    # fraction of images reserved for validation (strictly between 0 and 1)
    validation_split=0.0)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    './data/training_set',
    target_size=(64, 64),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    './data/test_set',
    target_size=(64, 64),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    verbose=1,
    workers=4,
    epochs=epochs,
    validation_data=validation_generator,
    callbacks=callbacks)

# Score trained model.
# There is no in-memory x_test/y_test here; the data comes from generators.
scores = model.evaluate_generator(validation_generator)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

# + id="afUB5m1n0_zi" colab_type="code" outputId="340e3efc-dbca-4809-b45a-88878f31948e" executionInfo={"status": "ok", "timestamp": 1568827497665, "user_tz": 420, "elapsed": 2994, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09622378695568340769"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
if version == 2:
    model = resnet_v2(input_shape=input_shape, depth=depth, num_classes=num_classes)
else:
    model = resnet_v1(input_shape=input_shape, depth=depth, num_classes=num_classes)

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=lr_schedule(0)),
              metrics=['accuracy'])
model.summary()

# Save the model architecture
with open('model_architecture.json', 'w') as f:
    f.write(model.to_json())

print(model_type)

# + id="CAUH7yO-3gQA" colab_type="code" colab={}
# !mv model_architecture.json gdrive/My\ Drive/

# + id="ZQlNBtIr0t_w" colab_type="code" colab={}


# + id="UWtEAeeP0Uy6" colab_type="code" colab={}


# + id="dcCh_YPG0U-x" colab_type="code" colab={}
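# The depth formula and the stepwise learning-rate schedule defined earlier can be
# restated standalone and checked. This mirrors the notebook's `lr_schedule` and its
# `depth = n * 6 + 2` (v1) / `n * 9 + 2` (v2) rules; `resnet_depth` is a helper name
# introduced here for illustration.

```python
def lr_schedule(epoch, base_lr=1e-3):
    """Stepwise decay: the rate drops after epochs 80, 120, 160 and 180."""
    if epoch > 180:
        return base_lr * 0.5e-3
    if epoch > 160:
        return base_lr * 1e-3
    if epoch > 120:
        return base_lr * 1e-2
    if epoch > 80:
        return base_lr * 1e-1
    return base_lr

def resnet_depth(n, version):
    """Depth of a CIFAR-style ResNet: 6n+2 for v1, 9n+2 for v2."""
    return n * 6 + 2 if version == 1 else n * 9 + 2

# n=3, version=1 gives depth 20, i.e. the ResNet20v1 used above.
print(resnet_depth(3, 1), lr_schedule(100))
```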
Pytorch_ResNet50_Colab_Notebooks/computer_vision_v_0.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Motorization Rate in France

# ## Goals
#
# **Predict the car equipment rate at the municipality level using the second dataset**

# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, VotingRegressor
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, KFold
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
import shap

# %matplotlib inline
# -

df_TM = pd.read_excel('data/Taux_de_motorisation_a_lIRIS-Global_Map_Solution.xlsx')
df_TM.head()

# We will just keep our target and the code of each Commune

df_TM[['Code Commune', 'Taux de motorisation Commune']]

# We now access the dataset with a lot of features concerning Communes but also whole Departments

df_mdb = pd.read_excel('data/MDB-INSEE-V2.xls')
df_mdb.head()

#df_mdb.columns.tolist()
print('\n'.join(map(str, df_mdb.columns)))

# ### Merge

df = df_TM[['Code Commune', 'Taux de motorisation Commune']].merge(right=df_mdb, left_on='Code Commune', right_on='CODGEO')
df.head()

len(df_TM), len(df), df.duplicated().sum()

df_mdb.duplicated().sum(), df_TM.duplicated().sum()

df_mdb['CODGEO'].nunique()

df['Taux de motorisation Commune'].hist(bins=50)

# The overall motorization rate is pretty high, with the majority of Communes being around 90%

# The **Urbanité Ruralité** feature looks interesting, let's explore it

df['Urbanité Ruralité'].value_counts().plot(kind='barh', figsize=(20,10))

# Needs transformation: the last category probably contains the other ones, so we surely want to separate them

plt.figure(figsize=(16,8))
ax = sns.boxplot(x='Urbanité Ruralité', y='Taux de motorisation Commune', data=df)
#ax = sns.swarmplot(x='Urbanité Ruralité', y='Taux de motorisation Commune', data=df, color="grey")

# Indeed, Com rurale < 2000 m habitants has a lot of outliers, because those should belong exclusively to other categories.

# We can't just look at each feature and try to see if it's worth keeping; we'd better try to select a set of them which will provide a good prediction

g = sns.heatmap(df[['Taux de motorisation Commune', 'Dynamique Entrepreneuriale', 'Score PIB', 'Nb Actifs Salariés', 'Population', 'Moyenne Revnus fiscaux', 'Nb de Commerce', 'Nb Camping', 'Taux Propriété']].corr(),annot=True,cmap="RdYlGn")

# ### Lots of features, which ones actually matter?
#
# #### We are going to try several feature selection methods and then compare which features each method would keep

num_cols = df._get_numeric_data().columns.tolist()
num_cols

cols = df.columns.tolist()
cat_cols = [col for col in cols if col not in num_cols]
cat_cols

len(df.columns), len(num_cols), len(cat_cols)

df = df.drop(columns="CODGEO")

df.columns[df.isna().any()].tolist() # List columns which have at least one NA value

# ### Pearson Correlation
#
# We can use the correlation between a feature and our target to see if a link exists

# +
df_ = df.copy()
df_ = df_.dropna()

X = df_[num_cols]
X = X.drop(columns=['Taux de motorisation Commune']) #features
y = df_['Taux de motorisation Commune'] #target

def cor_selector(X, y, num_feats):
    cor_list = []
    feature_name = X.columns.tolist()
    for i in feature_name:
        cor = np.corrcoef(X[i], y)[0, 1]
        cor_list.append(cor)
    cor_list = [0 if np.isnan(i) else i for i in cor_list] # replace NaN with 0 just in case
    cor_feature = X.iloc[:,np.argsort(np.abs(cor_list))[-num_feats:]].columns.tolist() # take the most correlated features by absolute value
    cor_support = [True if i in cor_feature else False for i in feature_name]
    return
cor_support, cor_feature cor_support, cor_feature = cor_selector(X, y, num_feats = 20) print(str(len(cor_feature)), 'selected features') # - # ### Recursive Feature Elimination X_norm = MinMaxScaler().fit_transform(X) # + rfe_selector = RFE(estimator=LinearRegression(), n_features_to_select=20, step=10, verbose=5) rfe_selector.fit(X_norm, y) rfe_support = rfe_selector.get_support() rfe_feature = X.loc[:,rfe_support].columns.tolist() print(str(len(rfe_feature)), 'selected features') # - # ### Random Forest # + embeded_rf_reg = SelectFromModel(RandomForestRegressor(n_estimators=100), max_features=20) embeded_rf_reg.fit(X, y) embeded_rf_support = embeded_rf_reg.get_support() embeded_rf_feature = X.loc[:,embeded_rf_support].columns.tolist() print(str(len(embeded_rf_feature)), 'selected features') # - # ### King XGBOOST # + xgb = XGBRegressor() embeded_xgb_selector = SelectFromModel(xgb, max_features=20) embeded_xgb_selector.fit(X, y) embeded_xgb_support = embeded_xgb_selector.get_support() embeded_xgb_feature = X.loc[:,embeded_xgb_support].columns.tolist() print(str(len(embeded_xgb_feature)), 'selected features') # - # ### Recap Numerical Data # + feature_name = X.columns.tolist() feature_selection_df = pd.DataFrame({'Feature':feature_name, 'Pearson':cor_support, 'RFE':rfe_support ,'Random Forest':embeded_rf_support, 'XGBOOST':embeded_xgb_support}) feature_selection_df['Total'] = np.sum(feature_selection_df, axis=1) feature_selection_df = feature_selection_df.sort_values(['Total','Feature'] , ascending=False) feature_selection_df.index = range(1, len(feature_selection_df)+1) feature_selection_df.head(20) # - # 4 features were chosen 3 times, we could decide to continue without the others. 
But let us check if they are not too much correlated # + print('Taux Propriété') print('\n') print(df[['Taux Propriété', 'Score Urbanité']].corr()) print(df[['Taux Propriété', 'Moyenne Revnus fiscaux']].corr()) print(df[['Taux Propriété', 'Evolution Population']].corr()) print('\n') print('Score Urbanité') print('\n') print(df[['<NAME>', 'Score Urbanité']].corr()) print(df[['Evolution Population', 'Score Urbanité']].corr()) print('\n') print('Evolution Population') print('\n') print(df[['<NAME>', 'Evolution Population']].corr()) # - # There is not strong correlation between those features, max is 0.5 # ### Now lets deal with the categorical data cat_cols cat_cols.remove('CODGEO') cat_cols.remove('DEP') cat_cols.remove('LIBGEO') # + df_ = df.copy() df_ = df_.dropna() categorical_data = df_[cat_cols] # - categorical_data.head() categorical_data_encoded = categorical_data.apply(lambda x: pd.factorize(x)[0]) categorical_data_encoded.head(5) len(y), len(categorical_data_encoded) # + X = categorical_data_encoded #RF embeded_rf_reg = SelectFromModel(RandomForestRegressor(n_estimators=100), max_features=5) embeded_rf_reg.fit(X, y) embeded_rf_support = embeded_rf_reg.get_support() embeded_rf_feature = X.loc[:,embeded_rf_support].columns.tolist() #XGB xgb = XGBRegressor() embeded_xgb_selector = SelectFromModel(xgb, max_features=5) embeded_xgb_selector.fit(X, y) embeded_xgb_support = embeded_xgb_selector.get_support() embeded_xgb_feature = X.loc[:,embeded_xgb_support].columns.tolist() # - # ## Recap cat # + feature_name = X.columns.tolist() feature_selection_df = pd.DataFrame({'Feature':feature_name,'Random Forest':embeded_rf_support, 'XGBOOST':embeded_xgb_support}) feature_selection_df['Total'] = np.sum(feature_selection_df, axis=1) feature_selection_df = feature_selection_df.sort_values(['Total','Feature'] , ascending=False) feature_selection_df.index = range(1, len(feature_selection_df)+1) feature_selection_df.head(5) # - # Looks like the first feature convinced everyone, 
we could also keep the second one. # # Modelling # ### Preparing the data df_final = df[['Taux Propriété', 'Score Urbanité', 'Moyenne Revnus fiscaux', 'Evolution Population', 'DYN SetC', 'Taux de motorisation Commune']] df_final.head() df_final = pd.get_dummies(df_final, columns = ["DYN SetC"]) df_final.isna().sum() df_final[df_final['Taux de motorisation Commune'].isna()] df_final = df_final.fillna(method='ffill') # + X = df_final.drop(labels = 'Taux de motorisation Commune',axis = 1) y = df_final['Taux de motorisation Commune'] X = X.rename(columns={"Taux Propriété": "TP", "Score Urbanité": "SU"}) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) # - kfold = KFold(n_splits=5) # ### RANDOM FOREST # + param_grid = {'n_estimators': [10, 100, 150, 200], 'max_depth': [5,10,20,50]} grid = GridSearchCV(RandomForestRegressor(), param_grid=param_grid, cv=kfold, scoring='neg_mean_squared_error',verbose=3, n_jobs=2, return_train_score=True) # - grid.fit(X_train, y_train) grid.predict(X_test) grid.best_score_, grid.best_params_ grid.score(X_test, y_test) best_RFR = grid.best_estimator_ # ### Gradient Boosting # + param_grid = {'n_estimators': [100, 200, 350, 500]} gridGB = GridSearchCV(GradientBoostingRegressor(), param_grid=param_grid, cv=kfold, scoring='neg_mean_squared_error', verbose=3, n_jobs=2, return_train_score=True) gridGB.fit(X_train, y_train) gridGB.predict(X_test) # - gridGB.best_score_, gridGB.best_params_ gridGB.score(X_test, y_test) best_GB = gridGB.best_estimator_ # ### XGBOOST # + #param_gridXGB = {'n_estimators': [200, 250, 300], 'reg_alpha': [0, 0.5, 1], 'reg_lambda' : [0,0.5,1], 'booster' : ['gbtree', 'gblinear', 'dart']} #gridXGB = GridSearchCV(XGBRegressor(), param_grid=param_gridXGB, cv=kfold, scoring='neg_mean_squared_error', verbose=3, n_jobs=2, return_train_score=True) #gridXGB.fit(X_train, y_train) #grid.predict(X_test) # + #gridXGB.best_score_, gridXGB.best_params_ # + #gridXGB.score(X_test, y_test) # + #best_XGB = 
gridXGB.best_estimator_ # - best_XGB = XGBRegressor(**{'booster': 'gbtree', 'n_estimators': 300, 'reg_alpha': 0, 'reg_lambda': 0.5}) best_XGB.fit(X_train, y_train) best_XGB.predict(X_test) XGB_scores = cross_val_score(best_XGB, X_test, y_test, scoring = 'neg_mean_squared_error', cv=5) # ### LightGBM # + #X_train = X_train.rename(columns={"Taux Propriété": "TP", "Score Urbanité": "SU"}) # + param_gridlgbm = {'n_estimators': [100,200,300]} gridlgbm = GridSearchCV(LGBMRegressor(), param_grid=param_gridlgbm, cv=kfold, scoring='neg_mean_squared_error', verbose=3, n_jobs=2, return_train_score=True) gridlgbm.fit(X_train, y_train) gridlgbm.predict(X_test) # - gridlgbm.best_score_, gridlgbm.best_params_ gridlgbm.score(X_test, y_test) best_lgbm = gridlgbm.best_estimator_ # ### Voting # + voting_reg = VotingRegressor(estimators=[('RFR', best_RFR),('GB', best_GB), ('XGB',best_XGB),('lgbm',best_lgbm)], n_jobs=2) voting_reg = voting_reg.fit(X_train, y_train) voting_reg.predict(X_test) # - voting_score = mean_squared_error(y_test, voting_reg.predict(X_test)) voting_score # + results_scores = [-grid.score(X_test, y_test), -gridGB.score(X_test, y_test), -XGB_scores.mean(), -gridlgbm.score(X_test, y_test), voting_score] models = ['Random_Forest', 'Gradient Boosting', 'XGBOOST', 'LGBM', 'Voting'] results = pd.DataFrame({'Models':models, 'Scores':results_scores}) # - results.sort_values(by='Scores') # + explainer = shap.TreeExplainer(best_lgbm) shap_values = explainer.shap_values(X_test) #shap.summary_plot(shap_values[1], X_test) # - shap.initjs() shap.force_plot(explainer.expected_value[1], shap_values[1], X_test) # + explainer = shap.TreeExplainer(best_RFR) shap_values = explainer.shap_values(X_test) #shap.summary_plot(shap_values[1], X_test) # -
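# The `VotingRegressor` above simply averages its base models' predictions. The idea
# can be sketched without sklearn; the two toy "models" below are stand-ins for the
# tuned estimators from the grid searches, chosen so their biases cancel:

```python
def mse(y_true, y_pred):
    """Mean squared error, the metric used throughout this notebook."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Two toy regressors: one biased high, one biased low.
model_a = lambda x: x + 1.0
model_b = lambda x: x - 1.0

def voting_predict(models, xs):
    # Average the predictions of all base models, like VotingRegressor.predict.
    return [sum(m(x) for m in models) / len(models) for x in xs]

xs = [0.0, 1.0, 2.0, 3.0]
y_true = xs  # the true relation is the identity

pred_vote = voting_predict([model_a, model_b], xs)
print(mse(y_true, [model_a(x) for x in xs]), mse(y_true, pred_vote))
```

# When base models make errors in different directions, the averaged prediction can
# beat every individual model, which is why the ensemble is worth comparing above.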
MotorizationRate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from typing import Tuple import re import json from collections import defaultdict, OrderedDict import numpy as np import pandas as pd # ## Read namuwiki dump with open('../data/namuwiki.json') as f: context_ = json.load(f) context = defaultdict(dict) for doc in context_: context[doc['title']][doc['namespace']] = doc # ## Parser # + regex_document = re.compile('\[\[(.[^:]+?)\]\]') regex_table = re.compile('(?<=\|\|)(.*)(?=\|\|)') regex_bracket = re.compile('\((.+?)\)') regex_redirect = re.compile('#redirect (.+?)') regex_tags = OrderedDict({ 'horizontal_line': ('', re.compile('\-{4,9}')), 'comment': ('', re.compile('##\s?(.*)')), 'header': (r'\1', re.compile('\={2,6}\#?(.+?)\#?\={2,6}')), 'bold': (r'\1', re.compile("\'\'\'(.+?)\'\'\'")), 'italic': (r'\1', re.compile("\'\'(.+?)\'\'")), 'strike1': (r'\1', re.compile('~~(.+?)~~')), 'strike2': (r'\1', re.compile('--(.+?)--')), 'underline': (r'\1', re.compile('__(.+?)__')), 'upper': (r'\1', re.compile('\^\^(.+?)\^\^')), 'under': (r'\1', re.compile(',,(.+?),,')), 'bigger': (r'\1', re.compile('\{\{\{\+[1-5] (.+?)\}\}\}')), 'smaller': (r'\1', re.compile('\{\{\{\-[1-5] (.+?)\}\}\}')), 'color': (r'\2', re.compile('\{\{\{\#(.+?) (.+?)\}\}\}')), 'without_markup': (r'\1', re.compile('\{\{\{(.*)\}\}\}')), 'macro_html': (r'\1', re.compile('\{\{\{\#\!html (.+?)\}\}\}')), 'macro_wiki': (r'\2', re.compile('\{\{\{\#\!wiki (.+?)\n(.*)\}\}\}')), 'macro_syntax': (r'\2', re.compile('\{\{\{\#\!syntax (.+?)\n(.*)\n\}\}\}', re.IGNORECASE)), 'macro_color': (r'\2', re.compile('\{\{\{\#(.+?) 
(.+?)\}\}\}', re.IGNORECASE)), 'macro_math': ('', re.compile('\[math\((.+?)\)\]', re.IGNORECASE)), 'macro_date': ('', re.compile('\[date(time)?\]', re.IGNORECASE)), 'macro_br': ('\n', re.compile('\[br\]', re.IGNORECASE)), 'macro_include': ('', re.compile('\[include(.+?)\]', re.IGNORECASE)), 'macro_index': ('', re.compile('\[목차\]')), 'macro_index_': ('', re.compile('\[tableofcontents\]')), 'macro_footnote': ('', re.compile('\[각주\]')), 'macro_footnote_': ('', re.compile('\[footnote\]')), 'macro_pagecount': ('', re.compile('\[pagecount(.+?)?\]', re.IGNORECASE)), 'macro_age': ('', re.compile('\[age\(\)\]', re.IGNORECASE)), 'macro_dday': ('', re.compile('\[dday\(\)\]', re.IGNORECASE)), 'macro_tag': ('', re.compile('\<(.+?)\>')), 'attach_': (r'\1', re.compile('\[\[파일:(.+?)\|(.+?)\]\]')), 'attach': (r'\1', re.compile('\[\[파일:(.+?)\]\]')), 'paragraph_': (r'\1', re.compile('\[\[#s-(.+?)\|(.+?)\]\]')), 'paragraph': (r'\1', re.compile('\[\[#s-(.+?)\]]')), 'link_paragraph_': (r'\1', re.compile('\[\[(.+?)#s-(.+?)\|(.+?)\]\]')), 'link_paragraph': (r'\1', re.compile('\[\[(.+?)#s-(.+?)\]\]')), 'link': (r'\1', re.compile('\[\[((?:(?!\|).)+?)\]\]')), 'link_': (r'\1', re.compile('\[\[(.+?)\|(.+?)\]\]')), 'list': (r'\1', re.compile('\|\*\|(.*)')), 'list_': (r'\1', re.compile('\|\*(.*)')), 'list__': (r'\1', re.compile('\|[1Aa]\.\|(.*)')), 'list___': (r'\1', re.compile('\|[1Aa]\.(.*)')), 'unordred_list': (r'\1', re.compile('[ ]+\*(.*)')), 'ordered_list': (r'\1', re.compile('[ ]+[1AaIi]\.(.*)')), 'quote': (r'\1', re.compile('\>+\s?(.*)')), 'footnote': ('', re.compile('\[\*[A-Za-z]? 
(.+?)\]')), }) # - def parse(text: str, verbose: bool = False) -> str: def _parse(text: str, target: str, tag: re.Pattern) \ -> Tuple[str, int]: return tag.subn(target, text) if verbose: print(f'Parsing Regex {len(regex_tags.keys())} rules\n\t{text}') while True: count = 0 for key, (target, tag) in regex_tags.items(): text, count = _parse(text, target, tag) if count: if verbose: print(f'Rule [{key}: {tag}]\n\t{text}') break if not count: break return text.strip() def parse_table(text): for row in regex_table.findall(text): values = row.split('||') yield values # ## Get drama titles from channles interested = ["JTBC 금토 드라마(2014~2017)", "JTBC 금토 드라마(2017~2020)", "JTBC 드라마", "JTBC 수목 드라마", "JTBC 월화 드라마(2011~2014)", "JTBC 월화 드라마(2017~2020)", "JTBC 주말 드라마", "KBS 수목 드라마(2001~2005)", "KBS 수목 드라마(2006~2010)", "KBS 수목 드라마(2011~2015)", "KBS 수목 드라마(2016~2020)", "KBS 월화 드라마(2001~2005)", "KBS 월화 드라마(2006~2010)", "KBS 월화 드라마(2011~2015)", "KBS 월화 드라마(2016~2020)", "KBS 학교 시리즈", "MBC 수목 미니시리즈(2006~2010)", "MBC 수목 미니시리즈(2011~2015)", "MBC 수목 미니시리즈(2016~2020)", "MBC 아침 드라마(2011~2015)", "MBC 아침 드라마(2016~2020)", "MBC 예능 드라마", "MBC 월화 미니시리즈(2006~2010)", "MBC 월화 미니시리즈(2016~2020)", "MBC 월화특별기획(2011~2015)", "MBC 일일 드라마(2016~2020)", "MBC 일일 연속극(2011~2015)", "MBC 주말 드라마(2011~2015)", "MBC 주말 드라마(2016~2020)", "MBC 주말 특별기획(2011~2015)", "MBC 주말 특별기획(2016~2020)", "MBC 하이킥 시리즈", "MBN 수목 드라마", "OCN 로맨스 드라마", "OCN 수목 오리지널", "OCN 오리지널 드라마(2010~2016)", "OCN 월화 오리지널", "OCN 토일 오리지널(2017~2020)", "SBS 금토 드라마(2019~현재)", "SBS 드라마 스페셜(1992~1995)", "SBS 드라마 스페셜(1996~2000)", "SBS 드라마 스페셜(2001~2005)", "SBS 드라마 스페셜(2006~2010)", "SBS 드라마 스페셜(2011~2015)", "SBS 드라마 스페셜(2016~2020)", "SBS 아침 연속극(2016~2020)", "SBS 월화 드라마(1991~1995)", "SBS 월화 드라마(1996~2000)", "SBS 월화 드라마(2001~2005)", "SBS 월화 드라마(2006~2010)", "SBS 월화 드라마(2011~2015)", "SBS 월화 드라마(2016~2020)", "TV CHOSUN 토일드라마", "tvN 금요 드라마(2007~2015)", "tvN 금토 드라마", "tvN 로맨스가 필요해 시리즈", "tvN 불금 시리즈(2017~)", "tvN 월화 드라마(2011~2015)", "tvN 월화 드라마(2016~2020)", "tvN 토일 
드라마(2017~2020)"] titles = set() for inter in interested: document = context[inter] matches = regex_document.findall(document['1']['text']) for match in matches: name, *_ = match.split('|') name, *_ = name.split('#') titles.add(name) # ## Get metadata from document columns = { '방송 기간': ['방송기간', '방송 기간'], '방송 횟수': ['횟수', '방송 횟수'], '장르': ['장르'], '채널': ['채널', '방송사'], '제작사': ['제작사', '제작자', '제작'], '극본': ['극본', '대본'], '출연자': ['출연자', '출연', '출연진'], } data = defaultdict(dict) notexists = [] notparser = [] for title in titles: try: if '(드라마)' not in title and f'{title}(드라마)' in context: document = context[f'{title}(드라마)']['0']['text'] else: document = context[title]['0']['text'] except KeyError: notexists.append(title) continue d = defaultdict(str) for row in parse_table(document): try: key, value = filter(len, map(parse, row)) except ValueError: continue key = next((ckey for ckey, cvalues in columns.items() if any(cvalue in key for cvalue in cvalues)), False) if key: key = key.strip() d[key] = f'{d[key]} {value}' if not d: notparser.append(title) else: data[title] = d print(f'{len(notparser)} pages are not parsable') print(f'{len(notexists)} pages do not exist') # ## Create table from data # + table_columns = list(columns.keys()) table = np.empty((0, len(table_columns) + 1)) for title, values in data.items(): table = np.vstack((table, np.array([ regex_bracket.sub('', title).replace(' ', '').strip(), *tuple(map(lambda c: values[c], table_columns)) ]))) # - df = pd.DataFrame(table) df.columns = ['제목', *table_columns] # ### Parse datetime regex_date = re.compile('(.+?)년(.*)월(.*)일') # + date_start = [] for index, date in enumerate(df['방송 기간']): ds, *de = map(regex_date.findall, map(str.strip, date.split('~' if '~' in date else '-'))) ds, *_ = ds or ['unknown'] try: date_start.append(pd.datetime(*tuple(map(int, ds)))) except (ValueError, TypeError): date_start.append('unknown') assert len(date_start) == np.size(df, 0) # - df['방송 시작'] = pd.Series(date_start) # ## Show Dataframe print(df.shape) 
df.head() df.to_csv('../results/namuwiki.csv', index=None)
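The core of `parse` above is a fixed-point rewrite loop: keep applying the first regex rule that fires, and stop only when a full pass over the rule table changes nothing, so nested markup is unwrapped layer by layer. A minimal, self-contained sketch of that loop, using a made-up three-rule table rather than the notebook's full `regex_tags`:

```python
import re
from collections import OrderedDict

# Illustrative mini rule set mirroring the regex_tags layout:
# each entry maps a rule name to (replacement, compiled pattern).
rules = OrderedDict({
    'bold':   (r'\1', re.compile(r"'''(.+?)'''")),
    'italic': (r'\1', re.compile(r"''(.+?)''")),
    'strike': (r'\1', re.compile(r'~~(.+?)~~')),
})

def strip_markup(text: str) -> str:
    # Substitute until a whole pass over the rules makes no change,
    # so '''~~x~~''' loses the bold layer first, then the strike layer.
    while True:
        count = 0
        for target, tag in rules.values():
            text, count = tag.subn(target, text)
            if count:
                break
        if not count:
            break
    return text.strip()

print(strip_markup("'''~~bold strike~~''' and ''italic''"))
# prints: bold strike and italic
```

Breaking out of the inner loop after the first hit and restarting is what makes the rule order in the `OrderedDict` matter, exactly as in the notebook's parser.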
labs/.ipynb_checkpoints/namuwiki-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![pymt](https://github.com/csdms/pymt/raw/master/docs/_static/pymt-logo-header-text.png) # ## Coastline Evolution Model # # * Link to this notebook: https://github.com/csdms/sedhyd-2019/blob/master/notebooks/cem.ipynb # * Download local copy of notebook: # # `$ curl -O https://raw.githubusercontent.com/csdms/sedhyd-2019/master/notebooks/cem.ipynb` # # This example explores how to use a BMI implementation using the CEM model as an example. # # ### Links # * [CEM source code](https://github.com/csdms/cem-old/tree/mcflugen/add-function-pointers): Look at the files that have *deltas* in their name. # * [CEM description on CSDMS](http://csdms.colorado.edu/wiki/Model_help:CEM): Detailed information on the CEM model. # ### Interacting with the Coastline Evolution Model BMI using Python # Some magic that allows us to view images within the notebook. # %matplotlib inline # Import the `Cem` class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later! import pymt.models cem = pymt.models.Cem() # Even though we can't run our waves model yet, we can still get some information about it. *Just don't try to run it.* Some things we can do with our model are get the names of the input variables. cem.output_var_names cem.input_var_names # We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use [CSDMS standard names](http://csdms.colorado.edu/wiki/CSDMS_Standard_Names). 
The CSDMS Standard Name for wave angle is, # # "sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity" # # Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not on one). # + angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity' print("Data type: %s" % cem.var_type(angle_name)) print("Units: %s" % cem.var_units(angle_name)) print("Grid id: %d" % cem.var_grid(angle_name)) print("Number of elements in grid: %d" % cem.grid_node_count(0)) print("Type of grid: %s" % cem.grid_type(0)) # - # OK. We're finally ready to run the model. Well, not quite. First we initialize the model with the BMI **initialize** method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass **None**, which tells Cem to use some defaults. args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.) cem.initialize(*args) # Before running the model, let's set a couple of input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline. # + import numpy as np cem.set_value("sea_surface_water_wave__height", 2.) cem.set_value("sea_surface_water_wave__period", 7.) cem.set_value( "sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180., ) # - # The main output variable for this model is *water depth*. In this case, the CSDMS Standard Name is much shorter: # # "sea_water__depth" # # First we find out which of Cem's grids contains water depth. grid_id = cem.var_grid('sea_water__depth') # With the *grid_id*, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be *uniform rectilinear*. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars. 
grid_type = cem.grid_type(grid_id) grid_rank = cem.grid_ndim(grid_id) print('Type of grid: %s (%dD)' % (grid_type, grid_rank)) # Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include: # * get_grid_shape # * get_grid_spacing # * get_grid_origin # + spacing = np.empty((grid_rank, ), dtype=float) shape = cem.grid_shape(grid_id) cem.grid_spacing(grid_id, out=spacing) print('The grid has %d rows and %d columns' % (shape[0], shape[1])) print('The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1])) # - # Allocate memory for the water depth grid and get the current values from `cem`. z = np.empty(shape, dtype=float) cem.get_value('sea_water__depth', out=z) # It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain. cem.quick_plot("sea_water__depth", cmap="ocean") # Right now we have waves coming in but no sediment entering the ocean. To add some discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean. # Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value. qs = np.zeros_like(z) qs[0, 100] = 1250 # The CSDMS Standard Name for this variable is: # # "land_surface_water_sediment~bedload__mass_flow_rate" # # You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function **get_var_units**. cem.var_units('land_surface_water_sediment~bedload__mass_flow_rate') cem.time_step, cem.time_units, cem.time # Set the bedload flux and run the model. 
# + for time in range(3000): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) # - cem.time cem.get_value('sea_water__depth', out=z) cem.quick_plot("sea_water__depth", cmap="ocean") # Let's add another sediment source with a different flux and update the model. # + qs[0, 150] = 1500 for time in range(3750): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) # - cem.quick_plot("sea_water__depth", cmap="ocean") # Here we shut off the sediment supply completely. # + qs.fill(0.) for time in range(4000): cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs) cem.update_until(time) cem.get_value('sea_water__depth', out=z) # - cem.quick_plot("sea_water__depth", cmap="ocean")
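The run pattern used with `cem` above (set inputs, advance the clock with `update_until`, read the depth grid back with `get_value(..., out=...)`) can be mimicked with a toy class. This is only a sketch of the calling convention, not the real pymt or BMI API; the `ToyBmi` name and its linear deposition rule are invented for illustration:

```python
import numpy as np

class ToyBmi:
    """Stand-in for a BMI-style component: set inputs by name, advance the
    model clock, and read output grids back into caller-owned arrays."""
    def __init__(self, shape=(3, 4)):
        self._depth = np.zeros(shape)   # "sea water depth" analogue
        self._qs = np.zeros(shape)      # sediment flux analogue
        self.time = 0.0

    def set_value(self, name, values):
        if name == 'sediment_flux':
            self._qs = np.asarray(values, dtype=float)

    def update_until(self, t):
        # Deposit sediment linearly while the clock catches up to t;
        # deposition makes the water shallower (depth decreases).
        dt = t - self.time
        self._depth -= self._qs * dt
        self.time = t

    def get_value(self, name, out=None):
        out[:] = self._depth
        return out

model = ToyBmi()
qs = np.zeros((3, 4)); qs[0, 1] = 1.0
z = np.empty((3, 4))
for t in range(1, 4):
    model.set_value('sediment_flux', qs)
    model.update_until(float(t))
model.get_value('depth', out=z)
print(z[0, 1])   # -3.0 after three unit steps of deposition
```

The point of the `out=` argument, here as in pymt, is that the caller allocates the array once and the component fills it in place on every query.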
notebooks/cem.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Django Shell-Plus # language: python # name: django_extensions # --- # + [markdown] toc=true # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Choose-a-Topic" data-toc-modified-id="Choose-a-Topic-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Choose a Topic</a></span></li><li><span><a href="#Analysis" data-toc-modified-id="Analysis-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Analysis</a></span><ul class="toc-item"><li><span><a href="#Compare-screen-time-across-the-entire-dataset" data-toc-modified-id="Compare-screen-time-across-the-entire-dataset-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Compare screen time across the entire dataset</a></span></li><li><span><a href="#Compare-screen-time-by-show" data-toc-modified-id="Compare-screen-time-by-show-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Compare screen time by show</a></span><ul class="toc-item"><li><span><a href="#Including-hosts" data-toc-modified-id="Including-hosts-2.2.1"><span class="toc-item-num">2.2.1&nbsp;&nbsp;</span>Including hosts</a></span></li><li><span><a href="#Excluding-hosts" data-toc-modified-id="Excluding-hosts-2.2.2"><span class="toc-item-num">2.2.2&nbsp;&nbsp;</span>Excluding hosts</a></span></li></ul></li></ul></li></ul></div> # + from esper.prelude import * from esper.widget import * from esper.topics import * from esper.spark_util import * from esper.plot_util import * from esper.major_canonical_shows import MAJOR_CANONICAL_SHOWS from datetime import timedelta from collections import defaultdict import _pickle as pickle # - # # Choose a Topic topic = 'vaccine' lexicon = mutual_info(topic) for word, _ in lexicon: print(word) selected_words = '\n'.join(x[0] for x in lexicon) selected_words_set = set() 
for line in selected_words.split('\n'): line = line.strip() if line == '' or line[0] == '#': continue selected_words_set.add(line) filtered_lexicon = [x for x in lexicon if x[0] in selected_words_set] segments = find_segments(filtered_lexicon, window_size=100, threshold=50, merge_overlaps=True) show_segments(segments[:100]) # # Analysis # + face_genders = get_face_genders() face_genders = face_genders.where( (face_genders.in_commercial == False) & (face_genders.size_percentile >= 25) & (face_genders.gender_id != Gender.objects.get(name='U').id) ) intervals_by_video = defaultdict(list) for video_id, _, interval, _, _ in segments: intervals_by_video[video_id].append(interval) face_genders_with_topic_overlap = annotate_interval_overlap(face_genders, intervals_by_video) face_genders_with_topic_overlap = face_genders_with_topic_overlap.where( face_genders_with_topic_overlap.overlap_seconds > 0) # - # ## Compare screen time across the entire dataset # + distinct_columns = ['face_id'] overlap_field = 'overlap_seconds' z_score = 1.96 topic_screentime_with_woman = sum_distinct_over_column( face_genders_with_topic_overlap, overlap_field, distinct_columns, probability_column='female_probability' ) print('Woman on screen: {:0.2f}h +/- {:0.02f}'.format( topic_screentime_with_woman[0] / 3600, z_score * math.sqrt(topic_screentime_with_woman[1]) / 3600)) topic_screentime_with_man = sum_distinct_over_column( face_genders_with_topic_overlap, overlap_field, distinct_columns, probability_column='male_probability' ) print('Man on screen: {:0.2f}h +/- {:0.02f}'.format( topic_screentime_with_man[0] / 3600, z_score * math.sqrt(topic_screentime_with_man[1]) / 3600)) topic_screentime_with_nh_woman = sum_distinct_over_column( face_genders_with_topic_overlap.where((face_genders_with_topic_overlap.host_probability <= 0.5)), overlap_field, distinct_columns, probability_column='female_probability' ) print('Woman (non-host) on screen: {:0.2f}h +/- {:0.02f}'.format( 
topic_screentime_with_nh_woman[0] / 3600, z_score * math.sqrt(topic_screentime_with_nh_woman[1]) / 3600)) topic_screentime_with_nh_man = sum_distinct_over_column( face_genders_with_topic_overlap.where((face_genders_with_topic_overlap.host_probability <= 0.5)), overlap_field, distinct_columns, probability_column='male_probability' ) print('Man (non-host) on screen: {:0.2f}h +/- {:0.02f}'.format( topic_screentime_with_nh_man[0] / 3600, z_score * math.sqrt(topic_screentime_with_nh_man[1]) / 3600)) # - # ## Compare screen time by show # + canoncal_show_map = { c.id : c.name for c in CanonicalShow.objects.all() } distinct_columns = ['face_id'] group_by_columns = ['canonical_show_id'] overlap_field = 'overlap_seconds' channel_name_cmap = { 'CNN': 'DarkBlue', 'FOXNEWS': 'DarkRed', 'MSNBC': 'DarkGreen' } canoncal_show_cmap = { v['show__canonical_show__name'] : channel_name_cmap[v['channel__name']] for v in Video.objects.distinct( 'show__canonical_show' ).values('show__canonical_show__name', 'channel__name') } # - # ### Including hosts # + CACHE_BASELINE_INCL_HOST_FILE = '/tmp/base_screentime_gender_incl_host_by_show.pkl' try: with open(CACHE_BASELINE_INCL_HOST_FILE, 'rb') as f: base_screentime_with_man_by_show, base_screentime_with_woman_by_show = pickle.load(f) print('[Base] loaded from cache') except: base_screentime_with_woman_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders, 'duration', distinct_columns, group_by_columns, probability_column='female_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Base] Woman on screen: done') base_screentime_with_man_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders, 'duration', distinct_columns, group_by_columns, probability_column='male_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Base] Man on screen: done') with 
open(CACHE_BASELINE_INCL_HOST_FILE, 'wb') as f: pickle.dump([base_screentime_with_man_by_show, base_screentime_with_woman_by_show], f) topic_screentime_with_woman_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders_with_topic_overlap, overlap_field, distinct_columns, group_by_columns, probability_column='female_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Topic] Woman on screen: done') topic_screentime_with_man_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders_with_topic_overlap, overlap_field, distinct_columns, group_by_columns, probability_column='male_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Topic] Man on screen: done') # - plot_binary_screentime_proportion_comparison( ['Male (incl-host)', 'Female (incl-host)'], [topic_screentime_with_man_by_show, topic_screentime_with_woman_by_show], 'Proportion of gendered screen time by show for topic "{}"'.format(topic), 'Show name', 'Proportion of screen time', secondary_series_names=['Baseline Male (incl-host)', 'Baseline Female (incl-host)'], secondary_data=[base_screentime_with_man_by_show, base_screentime_with_woman_by_show], subcategory_color_map=canoncal_show_cmap ) # ### Excluding hosts # + CACHE_BASELINE_NO_HOST_FILE = '/tmp/base_screentime_gender_no_host_by_show.pkl' try: with open(CACHE_BASELINE_NO_HOST_FILE, 'rb') as f: base_screentime_with_nh_man_by_show, base_screentime_with_nh_woman_by_show = pickle.load(f) print('[Base] loaded from cache') except: base_screentime_with_nh_woman_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders.where(face_genders.host_probability <= 0.25), 'duration', distinct_columns, group_by_columns, probability_column='female_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } 
print('[Base] Woman (non-host) on screen: done') base_screentime_with_nh_man_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders.where(face_genders.host_probability <= 0.25), 'duration', distinct_columns, group_by_columns, probability_column='male_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Base] Man (non-host) on screen: done') with open(CACHE_BASELINE_NO_HOST_FILE, 'wb') as f: pickle.dump([base_screentime_with_nh_man_by_show, base_screentime_with_nh_woman_by_show], f) topic_screentime_with_nh_woman_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders_with_topic_overlap.where(face_genders_with_topic_overlap.host_probability <= 0.25), overlap_field, distinct_columns, group_by_columns, probability_column='female_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Topic] Woman (non-host) on screen: done') topic_screentime_with_nh_man_by_show = { canoncal_show_map[k[0]] : (timedelta(seconds=v[0]), v[1]) for k, v in sum_distinct_over_column( face_genders_with_topic_overlap.where(face_genders_with_topic_overlap.host_probability <= 0.25), overlap_field, distinct_columns, group_by_columns, probability_column='male_probability' ).items() if canoncal_show_map[k[0]] in MAJOR_CANONICAL_SHOWS } print('[Topic] Man (non-host) on screen: done') # - plot_binary_screentime_proportion_comparison( ['Male (non-host)', 'Female (non-host)'], [topic_screentime_with_nh_man_by_show, topic_screentime_with_nh_woman_by_show], 'Proportion of gendered screen time by show for topic "{}"'.format(topic), 'Show name', 'Proportion of screen time', secondary_series_names=['Baseline Male (non-host)', 'Baseline Female (non-host)'], secondary_data=[base_screentime_with_nh_man_by_show, base_screentime_with_nh_woman_by_show], tertiary_series_names=['Male (incl-host)', 'Female (incl-host)'], 
tertiary_data=[topic_screentime_with_man_by_show, topic_screentime_with_woman_by_show], subcategory_color_map=canoncal_show_cmap )
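The intervals printed throughout this notebook come from the usual normal approximation: mean plus or minus z times the square root of the variance, with z = 1.96 for roughly 95% coverage, everything converted from seconds to hours. A small helper showing just that arithmetic; the function name is illustrative, and only the formula matches the `sum_distinct_over_column` result tuples used above:

```python
import math

def hours_with_ci(total_seconds: float, variance: float, z: float = 1.96):
    """Turn a (sum, variance) pair in seconds into (hours, half-width in
    hours) for a ~95% interval, mirroring the print statements above."""
    mean_h = total_seconds / 3600
    half_width_h = z * math.sqrt(variance) / 3600
    return mean_h, half_width_h

mean_h, hw = hours_with_ci(7_200_000, 12_960_000)
print(f'{mean_h:0.2f}h +/- {hw:0.2f}')
# prints: 2000.00h +/- 1.96
```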
app/notebooks/topics/gender_vaccine.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import networkx as nx import matplotlib import scipy import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [17, 8] # %matplotlib notebook pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) dataset_unidades = pd.read_json("../dataset/departments.json") # # Verify idUnidade uniqueness unidade_ids = dataset_unidades["idUnidade"].unique() print("idUnidade is unique: {0}".format(len(unidade_ids) == len(dataset_unidades["idUnidade"]))) # # Verify whether graph is connected parents = list() for i in dataset_unidades["hierarquiaUnidade"]: parents.append(i.split(".")[1]) unique_parents_ids = pd.Series(parents).unique() """ For each department, check if its ancestor exists. """ nOrphans = 0 for value in list(unique_parents_ids): if int(value) not in list(unidade_ids): nOrphans += 1 print("Graph has {0} orphans".format(nOrphans)) # # Get edges """ For each hierarchy string apply: ("x.y.z.w") -> [(x,y), (y,z), (z,w)] """ edges = list() for hierarquia in dataset_unidades["hierarquiaUnidade"]: nos = hierarquia.split(".")[1:-1] for index_no in range(len(nos)): try: edge = (int(nos[index_no]), int(nos[index_no+1])) if edge not in edges: edges.append(edge) except IndexError: pass # # Graph # G = nx.DiGraph() """ Get node metadata from datasource columns. Build DiGraph from nodes and edges """ for unidade in dataset_unidades.iterrows(): idUnidade = unidade[1]["idUnidade"] attrs = dict() for key in unidade[1].index: attrs[key] = unidade[1][key] G.add_node(idUnidade, **attrs) G.add_edges_from(edges) G.remove_edge(605,605) dag = nx.algorithms.dag print("G is Directed Acyclic: {0}".format(dag.is_directed_acyclic_graph(G))) nx.write_graphml(G, "../output/departments.graphml")
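The edge-building loop above implements the mapping documented in its docstring, `("x.y.z.w") -> [(x,y), (y,z), (z,w)]`. A compact equivalent using `zip`, assuming the same leading/trailing-dot layout as the `hierarquiaUnidade` strings:

```python
def hierarchy_to_edges(hierarchy: str):
    """Turn a dotted hierarchy like '.1.2.3.4.' into consecutive
    parent-child pairs [(1, 2), (2, 3), (3, 4)]. Pairing a list with
    itself shifted by one replaces the index-and-IndexError dance above."""
    nodes = [int(n) for n in hierarchy.split('.')[1:-1]]
    return list(zip(nodes, nodes[1:]))

print(hierarchy_to_edges('.1.2.3.4.'))
# prints: [(1, 2), (2, 3), (3, 4)]
```

Deduplication across rows would still be needed; a `set` of edges is cheaper than the `if edge not in edges` list scan used in the notebook.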
notebooks/build_graph.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="ZMFICkf3J6bm" # !git clone https://github.com/ArseniyBolotin/asr_project.git # + id="4nR-bTybK5RL" # !pip install -r asr_project/requirements.txt # !git clone --recursive https://github.com/parlance/ctcdecode.git # !cd ctcdecode && pip install . # + id="jPkrTcqdNsSm" # datasphere only # # %pip install gdown # + colab={"base_uri": "https://localhost:8080/"} id="BXk3AFntIqgU" outputId="a86031a1-3df8-4d60-c04d-c492c427c622" # !gdown --id 1pz9M9PGxlMgrWTyamwlR0049ofpUez0O # !gdown --id 1tFvww-3TeTjJzmSXPpqHuw8nvgAYV1BR # + id="ZbzaugznMG9a" # !mv best_model_config.json config.json # + colab={"base_uri": "https://localhost:8080/"} id="gMTjJP2xJMbd" outputId="8941651f-b953-45e8-8d0e-9ec29f560a50" # !python asr_project/test.py -r best_model -t asr_project/test_data -o test_result.json # + id="frmsbqN3Pk-j"
test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import rigol import rigol.usbtmc import rigol.rigol import os assert os.path.exists("/dev/usbtmc0") stat = os.stat("/dev/usbtmc0") device = rigol.usbtmc.Usbtmc() device device.name() device.reset() scope = rigol.rigol.RigolScope(device=device) scope.device assert scope.vendor=="Rigol Technologies" assert scope.model=="DS1052D" assert scope.version=="00.04.02.01.00" # # Keys with open("../Documentation/DS1000DE_Command_QuickReference.txt", "r") as file: DS1000DE_Command_QuickReference = file.read() import re subsystem_command_re = re.compile("(:[\w<>]+:[\w]+\??)") for match in subsystem_command_re.findall(DS1000DE_Command_QuickReference): if match.startswith(":KEY"): print(match) # + class CommandsFactory: def __init__(self, *args, **kwargs): if len(args)>0: raise Exception("No.") for key, value in kwargs.items(): setattr(self, key, value) def __str__(self): return f"{self.__class__.__name__.upper()}" Subsystem=type("Subsystem", (CommandsFactory, ), {}) Key=type("Key", (Subsystem, ), {}) # - key_subsystem_commands = list() for match in subsystem_command_re.findall(DS1000DE_Command_QuickReference): if match.startswith(":KEY"): key_subsystem_commands.append(match) print(match) # + def key_factory(key_str:str, wait: int=0): def press_key(self): command_string = f":{self}:{key_str}" print(command_string) self.device.write(f":{self}:{key_str}") return press_key def key_factory2(key_str:str, wait: int=0): key_str = key_str.strip().strip(":").split(":")[-1].upper() key_function = key_factory(key_str, wait) return key_str.lower(), key_function # - result = match result key_factory(result) key=Key(device=device) key_factory(result)(key) key_factory2(result) key_str, key_function = key_factory2(result) key_str key_function(key) setattr(Key, key_str, key_function) 
command_re = re.compile("(:KEY:[\w]+)") subsystem_commands=dict() for result in command_re.findall(DS1000DE_Command_QuickReference): if result.startswith(":KEY"): key_str, key_function = key_factory2(result) subsystem_commands[key_str] = key_function Key=type("Key", (Subsystem, ), subsystem_commands) key=Key(device=device) key.auto key.auto() key.auto() key.device.reset() key.channel1() key.measure()
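The final cell builds the `Key` class by handing a dict of generated bound methods straight to `type`. The same pattern in miniature, with an invented method body that records the SCPI-style command string instead of writing to an instrument, so it runs without hardware:

```python
def make_method(command: str):
    # Derive a method name from the tail of a SCPI-style command,
    # e.g. ':KEY:AUTO' -> 'auto', the same trick key_factory2 uses above.
    name = command.strip(':').split(':')[-1].lower()
    def method(self):
        # A real press_key would write to self.device; recording the
        # string keeps the pattern testable without a scope attached.
        self.sent.append(f':KEY:{name.upper()}')
    return name, method

commands = [':KEY:AUTO', ':KEY:RUN', ':KEY:FORCE']
methods = dict(make_method(c) for c in commands)
methods['__init__'] = lambda self: setattr(self, 'sent', [])
Key = type('Key', (object,), methods)   # dynamic class, as in the notebook

key = Key()
key.auto()
key.run()
print(key.sent)
# prints: [':KEY:AUTO', ':KEY:RUN']
```

Building the whole method dict before the single `type(...)` call avoids the per-attribute `setattr(Key, ...)` loop and keeps the generated class definition in one place.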
DevelopmentNotebooks/rigol_channels2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/RoshaniMallav/letUpgrade_Python_Essential/blob/master/Day2_String.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="vbA3LSbD_NVt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="7856e0ba-b39c-44dd-84c8-6643cce2792c" var1 = 'Roshani' var2 = "mallav" #1. Accessing values print ("var1[0]: ", var1[0]) print ("var2[1:5]: ", var2[1:5]) #2. Updating string print("Updated String :- ", var1[:6] + 'Python') #3. Print length print(len(var1)) #4 print(var2.strip()) #5 print(var1.lower()) #6 print(var1.replace("S", "K"))
Day2_String.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AndrewSLowe/AndrewSLowe.github.io/blob/master/Module3/2_1_3A_regression_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="7IXUfiQ2UKj6" # Lambda School Data Science, Unit 2: Predictive Modeling # # # Regression & Classification, Module 3 # # ## Assignment # # We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices. # # But not just for condos in Tribeca... # # Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`). # # Use a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.** # # The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal. # # - [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test. # - [ ] Do one-hot encoding of categorical features. # - [ ] Do feature selection with `SelectKBest`. # - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). # - [ ] Fit a ridge regression model with multiple features. # - [ ] Get mean absolute error for the test set. # - [ ] As always, commit your notebook to your fork of the GitHub repo. # # # ## Stretch Goals # - [ ] Add your own stretch goal(s) ! 
# - [ ] Instead of `RidgeRegression`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥 # - [ ] Instead of `RidgeRegression`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html). # - [ ] Learn more about feature selection: # - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance) # - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html) # - [mlxtend](http://rasbt.github.io/mlxtend/) library # - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection) # - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson. # - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients. # - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way. # - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). # + colab_type="code" id="o9eSnDYhUGD7" outputId="df221246-21f5-4399-aee3-3c984f5e1b34" colab={"base_uri": "https://localhost:8080/", "height": 1000} import os, sys in_colab = 'google.colab' in sys.modules # If you're in Colab... if in_colab: # Pull files from Github repo os.chdir('/content') # !git init . 
# !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git # !git pull origin master # Install required python packages # !pip install -r requirements.txt # Change into directory for module os.chdir('module3') # + colab_type="code" id="ipBYS77PUwNR" colab={} # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # + id="4rV0q43kK6G4" colab_type="code" colab={} # + colab_type="code" id="QJBD4ruICm1m" colab={} import pandas as pd import pandas_profiling # Read New York City property sales data df = pd.read_csv('../data/condos/NYC_Citywide_Rolling_Calendar_Sales.csv') # Change column names: replace spaces with underscores df.columns = [col.replace(' ', '_') for col in df] # SALE_PRICE was read as strings. # Remove symbols, convert to integer df['SALE_PRICE'] = ( df['SALE_PRICE'] .str.replace('$','') .str.replace('-','') .str.replace(',','') .astype(int) ) # + id="WffHZrjZPahr" colab_type="code" colab={} # BOROUGH is a numeric column, but arguably should be a categorical feature, # so convert it from a number to a string df['BOROUGH'] = df['BOROUGH'].astype(str) # + id="2__STWYQPahv" colab_type="code" colab={} # Reduce cardinality for NEIGHBORHOOD feature # Get a list of the top 10 neighborhoods top10 = df['NEIGHBORHOOD'].value_counts()[:10].index # At locations where the neighborhood is NOT in the top 10, # replace the neighborhood with 'OTHER' df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER' # + id="o-xDMuag01Y3" colab_type="code" outputId="5abfc647-dc89-4cc6-dea9-177be5eca65c" colab={"base_uri": "https://localhost:8080/", "height": 343} df = df[(df['SALE_PRICE'] > 100000) & (df['SALE_PRICE'] < 2000000) & (df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS')] print(df.shape) df.head() # + id="fHkv6u4QPaiA" 
colab_type="code" colab={} import pandas_profiling pandas_profiling.ProfileReport(df) # + id="C398r1sGTOyM" colab_type="code" outputId="d8df35ef-feb7-46c1-fe11-9c9fcad2f324" colab={"base_uri": "https://localhost:8080/", "height": 391} df.dtypes # + [markdown] id="hOk3NOFKdXIs" colab_type="text" # Dropping columns with lots of NaNs and high cardinality. # + id="Sq8_yIIpcL_n" colab_type="code" colab={} #High NaN values df = df.drop(columns=['EASE-MENT', 'APARTMENT_NUMBER']) # + id="Iv7_fxSmcgYy" colab_type="code" outputId="d2dbff7e-525f-458f-982e-1d2c6fdcdc72" colab={"base_uri": "https://localhost:8080/", "height": 326} print(df.shape) df.head() # + id="7ax_P9MJgpzk" colab_type="code" outputId="9b8a1d74-6e52-48ab-ddb5-1e1ce1f7e0b1" colab={"base_uri": "https://localhost:8080/", "height": 359} df.select_dtypes(include='number').describe().T # + id="ST59uX1Jgt34" colab_type="code" outputId="463ec6f1-397e-42d1-c2ee-90f6687b8e78" colab={"base_uri": "https://localhost:8080/", "height": 328} df.select_dtypes(exclude='number').describe().T # + id="37-D5dnab7pk" colab_type="code" colab={} df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True) cutoff = pd.to_datetime('2019-4-01') train = df[df.SALE_DATE < cutoff] test = df[df.SALE_DATE >= cutoff] # + id="4NksWgOpHqjW" colab_type="code" outputId="2cdf8ed4-c773-4f17-e2d9-08aace75ffb6" colab={"base_uri": "https://localhost:8080/", "height": 297} train.describe() # + id="2E3oRACWHv6e" colab_type="code" outputId="3ca65f17-a18e-4cef-932b-187769c659f9" colab={"base_uri": "https://localhost:8080/", "height": 328} df.select_dtypes(exclude='number').describe().T # + id="f7qWmNXJm878" colab_type="code" colab={} #Dropping categorical variables with over 50 unique values from the training set.
target = 'SALE_PRICE' high_cardinality = ['ADDRESS', 'SALE_DATE', 'LAND_SQUARE_FEET'] features = train.columns.drop([target] + high_cardinality) X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # + id="eFlkqvHHL1ng" colab_type="code" outputId="53cae6f0-a71c-43c8-b564-6e23c84a5c75" colab={"base_uri": "https://localhost:8080/", "height": 241} import category_encoders as ce encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_test_encoded = encoder.transform(X_test) X_train_encoded.head() #Seems like the issue I face later is because of onehotencoding... # + id="yZn2dEKX7kIQ" colab_type="code" colab={} warnings.filterwarnings(action='ignore', category=RuntimeWarning, module='sklearn') warnings.filterwarnings(action='ignore', category=RuntimeWarning, module='scipy') # + id="0jkiakMW3nDg" colab_type="code" outputId="33b3a813-5c2b-40eb-d0e1-f5c218c4f817" colab={"base_uri": "https://localhost:8080/", "height": 1000} from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import RidgeCV from sklearn.metrics import mean_absolute_error from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_encoded) X_test_scaled = scaler.transform(X_test_encoded) for k in range(1, len(X_train_encoded.columns)+1): print(f'{k} features') selector = SelectKBest(score_func=f_regression, k=k) X_train_selected = selector.fit_transform(X_train_scaled, y_train) X_test_selected = selector.transform(X_test_scaled) model = RidgeCV() model.fit(X_train_selected, y_train) y_pred = model.predict(X_test_selected) mae = mean_absolute_error(y_test, y_pred) print(f'Test MAE: ${mae:,.0f} \n') # + id="1WLd540pzVko" colab_type="code" outputId="389b5538-1578-48a7-a6cc-fb040cfbc960" colab={"base_uri": "https://localhost:8080/", "height": 867} k = 36 selector = SelectKBest(score_func=f_regression, k=k) X_train_selected = 
selector.fit_transform(X_train_scaled, y_train) all_names = X_train_encoded.columns selected_mask = selector.get_support() selected_names = all_names[selected_mask] unselected_names = all_names[~selected_mask] print('Features selected:') for name in selected_names: print(name) print('\nFeatures not selected:') for name in unselected_names: print(name) # + id="5e7fmsff62Pz" colab_type="code" colab={}
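The `k`-sweep above prints a test MAE for every value of `k` and leaves the choice to eyeballing; a natural follow-up is to pick the lowest-error `k` programmatically. A minimal sketch of that idea on synthetic data (so it runs without the NYC file — the feature counts and pipeline here are illustrative, not from this notebook):

```python
# Sketch: sweep SelectKBest's k and keep the k with the lowest validation MAE.
# Synthetic data stands in for the NYC sales table.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

scores = {}
for k in range(1, X.shape[1] + 1):
    # Pipeline keeps scaling + selection fit on training data only
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(score_func=f_regression, k=k),
                         RidgeCV())
    pipe.fit(X_tr, y_tr)
    scores[k] = mean_absolute_error(y_val, pipe.predict(X_val))

best_k = min(scores, key=scores.get)
print(f'best k = {best_k}, MAE = {scores[best_k]:.1f}')
```

The same loop dropped into the notebook (with its `X_train_encoded` columns) would replace the manual choice of `k = 36`.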
Module3/2_1_3A_regression_classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # For an explanation of the code in this notebook, see the post [Multiple evaluation metrics: cross_validate()](https://tensorflow.blog/2018/03/13/%EB%8B%A4%EC%A4%91-%ED%8F%89%EA%B0%80-%EC%A7%80%ED%91%9C-cross_validate/). import pandas as pd import numpy as np from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split, cross_val_score digits = load_digits() X_train, X_test, y_train, y_test = train_test_split( digits.data, digits.target == 9, random_state=42) from sklearn.svm import SVC cross_val_score(SVC(gamma='auto'), X_train, y_train, cv=3) cross_val_score(SVC(gamma='auto'), X_train, y_train, scoring='accuracy', cv=3) from sklearn.model_selection import cross_validate cross_validate(SVC(gamma='auto'), X_train, y_train, scoring=['accuracy', 'roc_auc'], return_train_score=True, cv=3) cross_validate(SVC(gamma='auto'), X_train, y_train, scoring=['accuracy'], cv=3, return_train_score=False)['test_accuracy'] cross_validate(SVC(gamma='auto'), X_train, y_train, scoring={'acc':'accuracy', 'ra':'roc_auc'}, return_train_score=False, cv=3) from sklearn.model_selection import GridSearchCV param_grid = {'gamma': [0.0001, 0.01, 0.1, 1, 10]} grid = GridSearchCV(SVC(), param_grid=param_grid, scoring=['accuracy'], refit='accuracy', return_train_score=True, cv=3) grid.fit(X_train, y_train) grid.best_params_ grid.best_score_ np.transpose(pd.DataFrame(grid.cv_results_)) grid = GridSearchCV(SVC(), param_grid=param_grid, scoring={'acc':'accuracy', 'ra':'roc_auc'}, refit='ra', return_train_score=True, cv=3) grid.fit(X_train, y_train) grid.best_params_ grid.best_score_ np.transpose(pd.DataFrame(grid.cv_results_)) grid.best_estimator_
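`cross_validate` returns a plain dict of per-fold arrays, so summarizing the scores takes only a comprehension. A small sketch (not from the original post) that condenses the dict-of-scorers call above into a per-metric mean and standard deviation:

```python
# Sketch: summarize cross_validate's per-fold scores as mean +/- std per metric.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

digits = load_digits()
X, y = digits.data, digits.target == 9

res = cross_validate(SVC(gamma='auto'), X, y,
                     scoring={'acc': 'accuracy', 'ra': 'roc_auc'}, cv=3)
# keys are 'fit_time', 'score_time', and one 'test_<name>' array per scorer
summary = {name: (np.mean(vals), np.std(vals))
           for name, vals in res.items() if name.startswith('test_')}
for name, (mean, std) in summary.items():
    print(f'{name}: {mean:.3f} +/- {std:.3f}')
```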
Python/scikit-learn/cross_validate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <a href="https://colab.research.google.com/github/weymouth/NumericalPython/blob/main/05SciPy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # # Scientific Python # # We've now had a few good examples of using NumPy for engineering computing and PyPlot for visualization. However, we haven't had much exposure to classic *numerical methods*. That's because this isn't a numerical methods class, it is a Python programming tutorial. However, there are some important aspects of programming that come up in using numerical methods. # # First and foremost is **don't reinvent the wheel**. When your focus is solving an engineering problem, you should not code your own numerical methods. Instead you should use methods which have been carefully implemented and tested already - letting you focus on your own work.
Luckily the *Scientific Python* or [SciPy library](https://www.scipy.org/scipylib/index.html) has hundreds of numerical methods for common mathematical and scientific problems such as: # # | Category | Sub module | Description | # |-------------------|-------------------|--------------------------------------------------------| # | Interpolation | scipy.interpolate | Numerical interpolation of 1D and multivariate data | # | Optimization | scipy.optimize | Function optimization, curve fitting, and root finding | # | Integration | scipy.integrate | Numerical integration quadratures and ODE integrators | # | Signal processing | scipy.signal | Signal processing methods | # | Special functions | scipy.special | Defines transcendental functions such as $J_n$ and $\Gamma$| # <span style="display:none"></span> # # In this notebook, we will illustrate the use of SciPy with a few engineering applications to demonstrate a few more important programming issues. We won't attempt to go through all of the important numerical methods in SciPy - for that you can read the [SciPy book](http://scipy-lectures.org/intro/scipy.html). # --- # # ## Ordinary Differential Equations # # Ordinary Differential Equations (ODEs) are ubiquitous in engineering and dynamics, and numerical methods are excellent at producing high-quality approximate solutions to ODEs that can't be solved analytically. # # As a warm up, the function $y=e^{t}$ is an exact solution of the initial value problem (IVP) # # $$ \frac{dy}{dt} = y \quad\text{with}\quad y(0) = 1 $$ # # SciPy has a few functions to solve IVPs, but I like `solve_ivp` the best. Let's check it out. # + import numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp # ?solve_ivp # - # So the first argument is the ODE function itself `func=dy/dt`, then the span over which we want to integrate, and then the initial condition. Let's try it.
fun = lambda t,y: y # lambda function syntax y0 = [1] t_span = [0,2] sol = solve_ivp(fun, t_span, y0) sol # So the function outputs a bunch of useful information about what happened. Also note the times are stored in a 1D array `sol.t` and the solution is stored in a 2D array (more on that later). Let's plot this up. t = np.linspace(0,2,21) plt.plot(t,np.exp(t),label='exact') # sol = solve_ivp(fun, t_span = [0,2] , y0 = y0, t_eval = t) # distributed points for plot plt.plot(sol.t,sol.y[0],'ko',label='solve_ivp') plt.xlabel('t') plt.ylabel('y',rotation=0) plt.legend(); # First off, the numerical method matches the exact solution extremely well. But this plot seems a little weird. The solver used a small time step at first (`t[1]-t[0]=0.1`) and then took bigger steps (`t[3]-t[2]=0.99`). This is because the solver uses an [adaptive 4th order Runge-Kutta method](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method) to integrate by default, which adjusts the time step to get the highest accuracy for the least number of function evaluations. # # That's great, but we want the results at more regular intervals for plotting, and the `t_eval` argument does exactly that - try it by uncommenting the second line above. The result is evenly distributed and the accuracy is still excellent - it just took a few more evaluations. # # --- # # That's nice, but most engineering systems are more complex than first order ODEs.
For example, even a forced spring-mass-damper system is second order: # # $$ m \frac{d^2 x}{dt^2} + c \frac{dx}{dt} + k x = f(t) $$ # # But it is actually very simple to deal with this additional derivative, we just define the position and velocity as two separate variables, the *states* of the oscillator: # # $$ y = \left[x,\ \frac{dx}{dt}\right] $$ # # And therefore # # $$ \frac{dy}{dt} = \left[ \frac{dx}{dt},\ \frac{d^2x}{dt^2}\right] = \left[y[1],\ \frac{f(t)-c y[1] - k y[0]}{m} \right] $$ # # This trick can reduce any ODE of order `m` down to a system of `m` states all governed by first order ODEs. `solve_ivp` assumes `y` is a 2D array of these states since it is the standard way to deal with dynamical systems. # # Let's try it on this example. # + # define forcing, mass-damping-stiffness, and ODE f = lambda t: np.sin(2*np.pi*t) m,c,k = 1,0.5,(2*np.pi)**2 linear = lambda t,y: [y[1],(f(t)-c*y[1]-k*y[0])/m] t = np.linspace(40,42) y = solve_ivp(linear,[0,t[-1]],[0,0], t_eval=t).y plt.plot(t,y[0],label='$x$') plt.plot(t,y[1],label='$\dot x$') plt.xlabel('t') plt.legend(); # - # This gives a sinusoid, as expected, but is it correct? Instead of using the exact solution (available in this case but not generally), let's *sanity check* the results based on physical understanding. **You should always do this when using numerical methods!** # # - If we could ignore dynamics, the expected deflection would simply be $x=f/k$. Since the magnitude of $f=1$ and $k=(2\pi)^2$ this would mean we would have an amplitude of $x\sim (2\pi)^{-2} \approx 0.025$. Instead we see an amplitude $x=0.4$! Is this reasonable?? # - The natural frequency given the parameters above is $\omega_n = \sqrt{k/m} = 2\pi$. The force is *also* being applied at a frequency of $2\pi$. This could explain the high amplitude - our spring-mass system is in resonance! # # Since we have an idea to explain our results - it is your turn to test it out: # 1. Lower the forcing frequency x10.
This should reduce the influence of dynamics and we should see amplitudes similar to our prediction. # 2. Reset the frequency and increase the mass x10. Predict what this should do physically before running the simulation. Do the results match your predictions? # # # Finally, one of the main advantages of the numerical approach to ODEs is that they extend trivially to nonlinear equations. For example, using a nonlinear damping $c\dot x \rightarrow d \dot x|\dot x|$ makes the dynamics difficult to solve analytically, but requires no change to our approach, only an updated ODE: # + # define nonlinear damped ODE d = 100 nonlinear = lambda t,y: [y[1],(f(t)-d*y[1]*abs(y[1])-k*y[0])/m] t = np.linspace(40,42) y = solve_ivp(nonlinear,[0,t[-1]],[0,0], t_eval=t).y plt.plot(t,y[0],label='$x$') plt.plot(t,y[1],label='$\dot x$') plt.xlabel('t') plt.legend(); # - # ## Root finding and implicit equations # # Another ubiquitous problem in engineering is *root finding*; determining the arguments which make a function zero. As before, there are a few SciPy routines for this, but `fsolve` is a good general purpose choice. Let's check it out. # + from scipy.optimize import fsolve # ?fsolve # - # So `fsolve` also takes a function as the first argument, and the second argument is the starting point `x0` of the search for the root. # # As before, let's start with a simple example, say $\text{func}=x\sin x$ which is zero at $x=n\pi$ for $n=0,1,2,\ldots$. # + func = lambda x: x*np.sin(x) for x0 in range(1,8,2): print('x0={}, root={:.2f}'.format(x0,fsolve(func,x0)[0])) # - # This example shows that a root finding method needs to be used with care when there is more than one root. Here we get different answers depending on `x0` and it's sometimes surprising; `x0=5` found the root at $5\pi$ instead of $2\pi$. Something to keep in mind. # # Root finding methods are especially useful for dealing with implicit equations. 
For example, the velocity of fluid through a pipe depends on the fluid friction, but this friction is itself a function of the flow velocity. The [semi-empirical equation](https://en.wikipedia.org/wiki/Darcy_friction_factor_formulae#Colebrook%E2%80%93White_equation) for the Darcy friction factor $f$ is # # $$ \frac 1 {\sqrt f} = -2\log_{10}\left(\frac \epsilon{3.7 D}+ \frac{2.51}{Re \sqrt f} \right)$$ # # where $\epsilon/D$ is the pipe wall roughness to diameter ratio, $Re=UD/\nu$ is the diameter-based Reynolds number, and the coefficients are determined from experimental tests. # # Directly solving this equation for $f$ is difficult, and engineers use charts like the [Moody Diagram](https://en.wikipedia.org/wiki/Moody_chart#/media/File:Moody_EN.svg) instead. But this is simple to solve with a root finding method; we just need to express this as a function which is zero at the solution and this is always possible by simply subtracting the right-hand-side from the left! # # $$ \text{func} = \frac 1 {\sqrt f} + 2\log_{10}\left(\frac \epsilon{3.7 D}+ \frac{2.51}{Re \sqrt f} \right)$$ # # which is zero when $f$ satisfies our original equation. # + # @np.vectorize def darcy(Re,eps_D,f0=0.03): func = lambda f: 1/np.sqrt(f)+2*np.log10(eps_D/3.7+2.51/(Re*np.sqrt(f))) return fsolve(func,f0)[0] darcy(1e6,1e-3) # - # Notice we have defined one function *inside* another. This lets us define $Re$ and $\epsilon/D$ as *arguments* of `darcy`, while being *constants* in `func`. There are other ways to parameterize root finding, but I like this approach because the result is a function (like `darcy`) which behaves exactly like an explicit function (in this case, for $f$).
# # This matches the results in the Moody Diagram, and in fact, we should be able to make our own version of the diagram to test it out fully: Re = np.logspace(3.5,8) for i,eps_D in enumerate(np.logspace(-3,-1.5,7)): f = darcy(Re,eps_D) plt.loglog(Re,f, label='{:.1g}'.format(eps_D), color=plt.cm.cool(i/7)) plt.xlabel('Re') plt.ylabel('f',rotation=0) plt.legend(title='$\epsilon/D$',loc='upper right'); # Uh oh - this didn't work. Remember how functions such as `np.sin` *broadcast* the function across an array of arguments by default. Well, `fsolve` doesn't broadcast by default, so we need to do it ourselves. # # Luckily, this is trivial using [decorators](https://docs.python.org/3/library/functools.html). Decorators are a neat python feature which lets you add capabilities to a function without coding them yourself. There are tons of useful examples (like adding a `@cache` to avoid repeating expensive calculations) but the one we need is `@np.vectorize`. Uncomment that line above the function definition and run that block again - you should see that the output is now an array. Now try running the second code cell and you should see our version of the Moody Diagram. # # Notice I've used `np.logspace` to get logarithmically spaced points, `plt.loglog` to make a plot with log axis in both x and y, and `plt.cm.cool` to use a [sequential color palette](https://medium.com/nightingale/how-to-choose-the-colors-for-your-data-visualizations-50b2557fa335) instead of the PyPlot default. Use the help features to look up these functions for details. # # Your turn: # 1. Write a function to solve the equation $r^{4}-2r^{2}\cos 2\theta = b^{4}-1$ for $r$. Test that your function gives $r=\sqrt{2}$ when $b=1$ and $\theta=0$. # 2. Reproduce a plot of the [Cassini ovals](https://en.wikipedia.org/wiki/Cassini_oval) using this function for $1\le b \le 2$. Why doesn't your function work for $b<1$? 
# # *Hint:* Define `theta=np.linspace(0,2*np.pi)`, use `@np.vectorize`, and use `plt.polar` or convert $r,\theta \rightarrow x,y$ using the method in [notebook 3](https://github.com/weymouth/NumericalPython/blob/main/03NumpyAndPlotting.ipynb). # ## Blasius boundary layer # # As a last example, I want to show how you can **combine** these two techniques to solve a truly hard engineering equation with just a couple lines of code. Dividing complex problems into pieces that you can solve with simple methods and then combining the pieces back together to obtain the solution is the secret sauce of programming and well worth learning. # # The governing equations for viscous fluids are very difficult to deal with, both [mathematically](https://www.claymath.org/millennium-problems/navier%E2%80%93stokes-equation) and [numerically](https://en.wikipedia.org/wiki/Turbulence_modeling). But these equations can be simplified in the case of a laminar flow along a flat plate. In this case we expect the velocity $u=0$ on the plate because of friction, but then to rapidly increase up to an asymptotic value $u\rightarrow U$. # # ![Blasius1.png](attachment:Blasius1.png) # # This thin region of slowed down flow is called the boundary layer and we want to predict the shape of the *velocity profile* in this region. The [Blasius equation](https://en.wikipedia.org/wiki/Blasius_boundary_layer) governs this shape: # # $$ A'''+\frac{1}{2} A A'' = 0 $$ # # where $A'(z) = u/U$ is the scaled velocity function and $z$ is the scaled distance from the wall. The function $A$ has the boundary conditions # # $$ A(0) = A'(0) = 0 \quad\text{and}\quad A'(\infty) = 1 $$ # # This equation is still too complex to solve analytically, and it might look too hard numerically as well. But we just need to take it one step at a time.
# # ### Step 1: # # We can reduce the Blasius equation to a first order system as before by defining # # $$ y = \left[A,\ A',\ A'' \right],\quad y' = \left[y[1],\ y[2],\ -\frac{1}{2} y[0]y[2] \right] $$ # # Notice `y[1]`=$u/U$ is our goal, the velocity profile. # # But to use `solve_ivp` we also need our initial conditions. We don't know $A''(0)=$`C0`, but *if we did* the initial condition would be `y0 = [0,0,C0]` and we could solve for the profile: # + def blasius(t,C0): return solve_ivp(lambda t,y: [y[1],y[2],-0.5*y[0]*y[2]], [0,t[-1]], [0,0,C0], t_eval = t).y[1] C0 = 1 # guess # C0 = fsolve(lambda C0: blasius([12],C0)[-1]-1,x0=1)[0] # solve! z = np.linspace(0,6,31) plt.plot(blasius(z,C0),z) plt.xlabel('u/U') plt.ylabel('z',rotation=0); # - # ### Step 2 # # We can determine `C0` using the additional boundary condition, $A'(\infty)=1$. It is hard to deal with infinity numerically, but we see in the plot above that the profile is pretty much constant for z>4 anyway, so we'll just apply this condition to the last point, i.e. `blasius(C0)[-1]=1`. This is an implicit equation for `C0`, and we can solve it using `fsolve` as we did above: we simply subtract the right-hand-side and define `func = blasius(C0)[-1]-1` which is zero when `C0` satisfies the boundary condition. Uncomment the line in the code block above to check that it works. # # The value of `C0` is actually physically important as well - it's related to the friction coefficient, and we have that value # as well: print("Blasius C_F sqrt(Re) = {:.3f}".format(4*C0)) # So $C_F = 1.328/\sqrt{Re}$ for a laminar boundary layer. # # And just like that, we're done. We've numerically solved the Blasius equation in around two lines of code; determining one of the very few exact solutions for nonlinear flows in engineering and coming up with a practical friction coefficient that we can use to determine the drag on immersed bodies. Not too shabby.
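The two steps above can be collapsed into one self-contained "shooting method" sketch - a stand-alone restatement of the commented-out `fsolve` line, with tightened integration tolerances so the root is accurate (the helper name and the `z_end=12` cutoff are choices made here, not from the notebook):

```python
# Shooting method for the Blasius equation: guess C0 = A''(0), integrate,
# and adjust C0 until the far-field condition A'(z_end) = 1 is met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def blasius_slope_at_end(C0, z_end=12.0):
    # integrate A''' = -A A''/2 from the wall with guessed A''(0) = C0
    sol = solve_ivp(lambda t, y: [y[1], y[2], -0.5 * y[0] * y[2]],
                    [0, z_end], [0, 0, C0], rtol=1e-8, atol=1e-8)
    return sol.y[1, -1]          # A' at the far edge of the domain

C0 = fsolve(lambda c: blasius_slope_at_end(c[0]) - 1.0, x0=1.0)[0]
print(round(C0, 3))  # ≈ 0.332, the classic Blasius wall-shear value
```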
05SciPy.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # ##### [sample solution, trained for a few hours (not converged)] # # # This tutorial will walk you through your first deep reinforcement learning model # # # * Seaquest game as an example # * Training a simple lasagne neural network for a Q-learning objective # # # ## About OpenAI Gym # # * It's a recently published platform that basically allows you to train agents in a wide variety of environments with a near-identical interface. # * This is twice as awesome since now we don't need to write a new wrapper for every game # * Go check it out! # * Blog post - https://openai.com/blog/openai-gym-beta/ # * Github - https://github.com/openai/gym # # # ## New to Lasagne and AgentNet? # * We only require surface level knowledge of theano and lasagne, so you can just learn them as you go. # * Alternatively, you can find Lasagne tutorials here: # * Official mnist example: http://lasagne.readthedocs.io/en/latest/user/tutorial.html # * From scratch: https://github.com/ddtm/dl-course/tree/master/Seminar4 # * From theano: https://github.com/craffel/Lasagne-tutorial/blob/master/examples/tutorial.ipynb # * This is pretty much the basic tutorial for AgentNet, so it's okay not to know it. # # %load_ext autoreload # %autoreload 2 # # Experiment setup # * Here we basically just load the game and check that it works from __future__ import print_function import matplotlib.pyplot as plt # %matplotlib inline import numpy as np # %env THEANO_FLAGS="floatX=float32" # + #global params.
GAME = "LunarLanderContinuous-v2" #number of parallel agents and batch sequence length (frames) N_AGENTS = 1 SEQ_LENGTH = 10 # - import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import gym env = gym.make(GAME) env.reset() obs = env.step(env.action_space.sample())[0] state_size = len(obs) print(obs.shape) env.action_space.low,env.action_space.high, # # Basic agent setup # Here we define a simple agent that maps game images into Qvalues using a shallow neural network. # # + import lasagne from lasagne.layers import InputLayer,DenseLayer,batch_norm,dropout,NonlinearityLayer,GaussianNoiseLayer,ElemwiseSumLayer import theano.tensor as T #image observation at current tick goes here, shape = (sample_i,x,y,color) observation_layer = InputLayer((None,state_size)) dense0 = DenseLayer(observation_layer,256,name='dense1') dense1 = DenseLayer(dense0,256,name='dense2',nonlinearity=T.tanh,) nn = dense1 # + #a layer that predicts Qvalues from agentnet.learning.qlearning_naf import LowerTriangularLayer,NAFLayer import theano epsilon = theano.shared(np.float32(0.0)) n_actions = env.action_space.shape[0] low = env.action_space.low high = env.action_space.high class naf: #predict mean mean = DenseLayer(nn,n_actions,nonlinearity=None,name='mu') action = NonlinearityLayer(mean,lambda a: a.clip(low,high)) #add exploration (noise) action = GaussianNoiseLayer(action,sigma=epsilon) #clip back to action range action = NonlinearityLayer(action,lambda a: a.clip(low,high)) #state value (for optimal action) V_layer = DenseLayer(nn,1,nonlinearity=None,name='V') #lower triangular matrix that describes "variance" term in NAF L_layer = LowerTriangularLayer(nn,n_actions,name='L') #advantage term [negative] A_layer = NAFLayer(action,mean,L_layer) #Q = V + A = optimal_value + negative_penalty_for_diverging_from_mean qvalues_layer = ElemwiseSumLayer([V_layer,A_layer]) # - # ##### Finally, agent # We declare that this network is an MDP agent with such and such inputs, states and
outputs from agentnet.agent import Agent #all together agent = Agent(observation_layers=observation_layer, policy_estimators=[naf.qvalues_layer,naf.V_layer,naf.L_layer,naf.mean], action_layers=naf.action) #Since it's a single lasagne network, one can get its weights, output, etc weights = lasagne.layers.get_all_params(naf.qvalues_layer,trainable=True) weights # # Create and manage a pool of atari sessions to play with # # * To make training more stable, we shall have an entire batch of game sessions each happening independently of the others # * Why several parallel agents help training: http://arxiv.org/pdf/1602.01783v1.pdf # * Alternative approach: store more sessions: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf # + from agentnet.experiments.openai_gym.pool import EnvPool pool = EnvPool(agent,GAME, N_AGENTS,max_size=10000) # + # %%time #interact for 10 ticks _,action_log,reward_log,_,_,_ = pool.interact(10) print(action_log) print(reward_log) # - #load first sessions (this function calls interact and remembers sessions) pool.update(SEQ_LENGTH) # # Q-learning # * An agent has a method that produces symbolic environment interaction sessions # * Such sessions are sequences of observations, agent memory, actions, q-values, etc # * one has to pre-define maximum session length.
# # * SessionPool also stores rewards (Q-learning objective) # + #get agent's Qvalues obtained via experience replay replay = pool.experience_replay.sample_session_batch(100,replace=True) _,_,_,_,(action_qvalues_seq,optimal_qvalues_seq,l_term,mu_term) = agent.get_sessions( replay, session_length=SEQ_LENGTH, experience_replay=True, ) # + #get reference Qvalues according to Qlearning algorithm from agentnet.learning.qlearning_naf import get_elementwise_objective #loss for Qlearning = (Q(s,a) - (r+gamma*Q(s',a_max)))^2 elwise_mse_loss = get_elementwise_objective(action_qvalues_seq[:,:,0], optimal_qvalues_seq[:,:,0], replay.rewards, replay.is_alive, gamma_or_gammas=0.99,) #compute mean over "alive" fragments loss = elwise_mse_loss.sum() / replay.is_alive.sum() #add l2 regularizer loss += lasagne.regularization.regularize_network_params(nn,lasagne.regularization.l2)*1e-5 # - # Compute weight updates updates = lasagne.updates.adam(loss,weights) #compile train function import theano train_step = theano.function([],loss,updates=updates) # # Demo run #for MountainCar-v0 evaluation session is cropped to 200 ticks untrained_reward = pool.evaluate(save_path="./records",record_video=False,use_monitor=False) # + from IPython.display import HTML video_path="<paste link from previous cell starting from records>" HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format(video_path)) # - # # Training loop # + #starting epoch epoch_counter = 1 loss=train_step() #full game rewards rewards = {epoch_counter:untrained_reward} # - #pre-fill pool from tqdm import tqdm for i in tqdm(range(1000)): pool.update(SEQ_LENGTH,append=True,) # + epsilon.set_value(0.1) #the loop may take eons to finish. #consider interrupting early. 
for i in tqdm(range(10000)): #train for i in range(10): pool.update(SEQ_LENGTH,append=True,) for i in range(10): loss = loss * 0.99 + train_step()*0.01 if epoch_counter%100==0: #average reward per game tick in current experience replay pool pool_mean_reward = np.average(pool.experience_replay.rewards.get_value()[:,:-1], weights=pool.experience_replay.is_alive.get_value()[:,:-1]) pool_size = pool.experience_replay.rewards.get_value().shape[0] print("iter=%i\tloss=%.3f\tepsilon=%.3f\treward/step=%.5f\tpool_size=%i"%(epoch_counter, loss, epsilon.get_value(), pool_mean_reward, pool_size)) ##record current learning progress and show learning curves if epoch_counter%500 ==0: n_games = 10 epsilon.set_value(0) rewards[epoch_counter] = pool.evaluate( record_video=False,n_games=n_games,verbose=False) print("Current score(mean over %i) = %.3f"%(n_games,np.mean(rewards[epoch_counter]))) epsilon.set_value(0.1) #if you see sudden slowdowns after few thousand iterations, it means box2d issue is still not fixed #use this line to reset it from time to time: #pool.envs[0] = gym.make(GAME) epoch_counter +=1 # Time to drink some coffee! # - import pandas as pd #tuple-unpacking lambdas and pd.ewma are Python-2/old-pandas only; use py3-safe forms ticks,r = zip(*sorted(rewards.items(),key=lambda kv: kv[0])) plt.plot(ticks,pd.Series([np.mean(x) for x in r]).ewm(alpha=0.1).mean()) pool.evaluate(10)
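The `get_elementwise_objective` call earlier hides the actual Q-learning arithmetic inside AgentNet. A plain-numpy illustration (not AgentNet code) of the objective `(Q(s,a) - (r + gamma*Q(s',a_max)))^2` noted in the comments, with the `is_alive` mask cutting the bootstrap at episode ends; the toy numbers are made up:

```python
# Toy one-step Q-learning loss on three hand-picked transitions.
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, 2.0])        # r_t
q_taken = np.array([0.5, 1.2, 0.3])        # Q(s_t, a_t)
q_next_max = np.array([1.0, 0.8, 0.0])     # max_a' Q(s_{t+1}, a')
is_alive = np.array([1.0, 1.0, 0.0])       # 0 marks a terminal transition

# target = r + gamma * max Q(s', a'), with no bootstrap past a terminal state
targets = rewards + gamma * q_next_max * is_alive
td_error = q_taken - targets
loss = np.mean(td_error ** 2)
print(round(loss, 4))  # → 1.7589
```

In the notebook, `replay.rewards` and `replay.is_alive` play the roles of `rewards` and `is_alive`, and the two Q-value sequences come from the NAF network instead of hand-picked numbers.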
examples/Continuous LunarLander using normalized advantage functions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="z00jAXE6vQER" # # Area Plots, Histograms and Bar Charts # <NAME> <br> # GitHub: <a href="https://github.com/mateusvictor">mateusvictor</a> # # + [markdown] id="26eainCmvsv0" # ## Setup # + id="NT0gjeXgv1NB" # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd import numpy as np # For ggplot style mpl.style.use('ggplot') # + id="-UhusNwfvuxp" df_can = pd.read_excel('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/Data%20Files/Canada.xlsx', sheet_name='Canada by Citizenship', skiprows=range(20), skipfooter=2 ) # + [markdown] id="mwSEXwBCwIdJ" # ## Basic data cleaning # # + id="KGD52RlUwJRI" outputId="3f647710-a01e-44d4-fbdd-30aaac42b072" colab={"base_uri": "https://localhost:8080/", "height": 301} # Drop columns df_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True) df_can.head() # + id="B4gIWlFFyd8q" # Rename columns df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent', 'RegName':'Region'}, inplace=True) # + id="THpEKJeGwOAy" outputId="812f1675-6935-4c53-d71c-7595d752c594" colab={"base_uri": "https://localhost:8080/"} # Ensure that all column labels are of type string all(isinstance(column, str) for column in df_can.columns) # + id="OcIpuXogxMLe" outputId="8ca2319b-1658-4042-e029-60825660fd5d" colab={"base_uri": "https://localhost:8080/"} # Convert column names to strings df_can.columns = list(map(str, df_can.columns)) all(isinstance(column, str) for column in df_can.columns) # + id="vm_5GPvzxkQA" outputId="e2a0c710-c3b0-4638-a5e9-864c4f3c07a5" colab={"base_uri": "https://localhost:8080/", "height": 331} # Set the index to Country df_can.set_index('Country', inplace=True) df_can.head() # + id="zr3yFMFjx74I"
outputId="ee98ac8c-cd2a-4744-f4b1-56fa0d591557" colab={"base_uri": "https://localhost:8080/", "height": 331} # Add total column df_can['Total'] = df_can.sum(axis=1) df_can.head() # + id="3jwefY0cy5t2" # Create a list of the years in the table to use in later visualizations years = list(map(str, range(1980, 2014))) # + [markdown] id="_1U1Z9uy-RPY" # ## Area Plots # + id="L9NEFAunzeH0" outputId="4e01e60e-3873-40b5-b8eb-6411b80d4516" colab={"base_uri": "https://localhost:8080/", "height": 197} # Sorting in descending order df_can.sort_values(['Total'], ascending=False, axis=0, inplace=True) #Get the top 5 df_top5 = df_can.head() # Transposing so that the years become the x axis values df_top5 = df_top5[years].transpose() df_top5.head() # + id="LqWlQ5Y70eaL" outputId="a5e69c7b-4458-4240-e851-64ffb3b5f758" colab={"base_uri": "https://localhost:8080/", "height": 625} # Plot the graph option 1 df_top5.index = list(map(int, df_top5.index)) df_top5.plot(kind='area', alpha=0.35, stacked=False, figsize=(20, 10)) plt.title('Immigration Trend of Top Countries') plt.ylabel('Number of Immigrants') plt.xlabel('Years') plt.show() # + id="42Tqq6QR3qW3" outputId="d8e2352c-c2dd-4df2-c7e0-497f1aa743b5" colab={"base_uri": "https://localhost:8080/", "height": 643} # Plot the graph option 2 ax = df_top5.plot(kind='area', alpha=0.35, figsize = (20, 10)) ax.set_title('Immigration Trend of Top 5 Countries') ax.set_ylabel('Number of Immigrants') ax.set_xlabel('Years') # + [markdown] id="29cZuzEE-Wyj" # ## Histograms # + [markdown] id="bO2KGRKtODeC" # A way of representing the frequency distribution of a numeric dataset. It works by partitioning the x-axis into bins, assigning each data point in the dataset to a bin, and then counting the number of data points assigned to each bin. So the y-axis is the number of data points in each bin. # + [markdown] id="cVwg_8yAOo23" # First we have to examine the data split into intervals.
# + id="PVL7296SOvP7" outputId="3a6e1c93-a9f0-448f-eb31-cb5542dabfea" colab={"base_uri": "https://localhost:8080/"} # np.histogram returns 2 values count, bin_edges = np.histogram(df_can['2013']) print(f"Frequency: {count}\nRanges: {bin_edges}") # + [markdown] id="ahqWvICsPj08" # - 178 countries contributed between 0 to 3412.9 immigrants # - 11 countries contributed between 3412.9 to 6825.8 immigrants # - 1 country contributed between 6825.8 to 10238.7 immigrants, and so on... # # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/labs/Images/Mod2Fig1-Histogram.JPG" align="center" width=800> # + id="EtQygnKqPkNw" outputId="a24f3732-b6cf-4377-b580-af245dbb53ea" colab={"base_uri": "https://localhost:8080/", "height": 354} # Plot the histogram, passing bin_edges as the x-ticks df_can['2013'].plot(kind='hist', figsize=(8, 5), xticks=bin_edges) plt.title('Histogram of Immigration from 195 Countries in 2013') plt.ylabel('Number of Countries') plt.xlabel('Number of Immigrants') plt.show() # + [markdown] id="yJuL-U0JRwEg" # We can plot multiple histograms on the same graph. # + id="CI1IhlxcRzbe" outputId="26f59ad6-dfcd-4c9a-9ae7-80fc8748d380" colab={"base_uri": "https://localhost:8080/", "height": 197} # Get the dataset and transpose it so that the index will be the years df_temp = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose() df_temp.head() # + id="Wgt0YFEpR6fc" outputId="a4aee01f-6877-46b1-e03d-e79725f35de8" colab={"base_uri": "https://localhost:8080/", "height": 407} df_temp.plot(kind='hist', figsize=(10, 6)) plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013') plt.ylabel('Number of Years') plt.xlabel('Number of Immigrants') plt.show() # + [markdown] id="46bfaxEaUIg3" # Making a few modifications to improve the impact and aesthetics of the previous plot # + id="vfe0w3DZUQ-j" outputId="11d8b900-4ba6-4bf3-cb29-79734d012e06" colab={"base_uri": "https://localhost:8080/", "height": 408}
# Get the x-tick values, passing 15 as the number of bins count, bin_edges = np.histogram(df_temp, 15) # Plot the un-stacked histogram df_temp.plot(kind='hist', figsize=(10, 6), bins=15, alpha=0.6, xticks=bin_edges, color=['coral', 'darkslateblue', 'mediumseagreen'] ) plt.title('Histogram of Immigration from Denmark, Norway, and Sweden') # + [markdown] id="DFq5YCW4ZL2c" # If we do not want the plots to overlap each other, we can stack them using the `stacked` parameter. Let's also adjust the min and max x-axis labels to remove the extra gap on the edges of the plot. We can pass a tuple (min, max) using the `xlim` parameter, as shown below. # # + id="P8Fo1MWiZXvB" outputId="c6655c25-bc67-4f93-ba64-928e4f7a087b" colab={"base_uri": "https://localhost:8080/", "height": 407} count, bin_edges = np.histogram(df_temp, 15) xmin = bin_edges[0] - 10 xmax = bin_edges[-1] + 10 # stacked histogram df_temp.plot(kind='hist', figsize=(10, 6), bins=15, xticks=bin_edges, color=['coral', 'darkslateblue', 'mediumseagreen'], stacked=True, xlim=(xmin, xmax) ) plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013') plt.ylabel('Number of Years') plt.xlabel('Number of Immigrants') plt.show() # + [markdown] id="MsZh-JJ9a3hX" # ## Bar Charts # + [markdown] id="i3pIpJysbDpP" # A bar plot is a way of representing data where the _length_ of the bars represents the magnitude/size of the feature/variable. Bar graphs usually represent numerical and categorical variables grouped in intervals. # **Let's start off by analyzing the effect of Iceland's Financial Crisis:** # # The 2008 - 2011 Icelandic Financial Crisis was a major economic and political event in Iceland. Relative to the size of its economy, Iceland's systemic banking collapse was the largest experienced by any country in economic history. The crisis led to a severe economic depression in 2008 - 2011 and significant political unrest.
# # **Question:** Let's compare the number of Icelandic immigrants (country = 'Iceland') to Canada from 1980 to 2013. # + id="GDbqyQKTbD-8" outputId="89b9f1b7-98fd-4e02-c19b-c647b116bd7a" colab={"base_uri": "https://localhost:8080/"} # Get the data df_iceland = df_can.loc['Iceland', years] df_iceland.head() # + [markdown] id="Ri1zpWaheEaQ" # ### Vertical bar plot # + id="frLkiSGkbvYQ" outputId="3c2c5251-580f-407b-82b3-e48247355bae" colab={"base_uri": "https://localhost:8080/", "height": 442} # Plot df_iceland.plot(kind='bar', figsize=(10, 6)) plt.xlabel('Year') plt.ylabel('Number of immigrants') plt.title('Icelandic immigrants to Canada from 1980 to 2013') # + [markdown] id="gGAQTTMPcnK4" # We can clearly see the impact of the financial crisis in the increased number of immigrants to Canada after 2008. Let's annotate this on the plot using the annotate method. # + id="5m9Ctajbc-IM" outputId="2ac796aa-29c3-4ff8-9e04-97ffcd151f55" colab={"base_uri": "https://localhost:8080/", "height": 424} df_iceland.plot(kind='bar', figsize=(10, 6), rot=90) # rotate the xticks (labelled points on the x-axis) by 90 degrees plt.xlabel('Year') plt.ylabel('Number of Immigrants') plt.title('Icelandic Immigrants to Canada from 1980 to 2013') # Annotate arrow plt.annotate('', # s: str. Will leave it blank for no text xy=(32, 70), # place head of the arrow at point (year 2012, pop 70) xytext=(28, 20), # place base of the arrow at point (year 2008, pop 20) xycoords='data', # will use the coordinate system of the object being annotated arrowprops=dict(arrowstyle='->', connectionstyle='arc3', color='blue', lw=2) ) # Annotate Text plt.annotate('2008 - 2011 Financial Crisis', # text to display xy=(28, 30), # start the text at point (year 2008, pop 30) rotation=72.5, # based on trial and error to match the arrow va='bottom', # want the text to be vertically 'bottom' aligned ha='left', # want the text to be horizontally 'left' aligned.
) plt.show() # + [markdown] id="0wx0BD05eNk3" # ### Horizontal bar plot # + [markdown] id="CX7_b31Efu0n" # **Question:** Using the scripting layer and the `df_can` dataset, create a _horizontal_ bar plot showing the _total_ number of immigrants to Canada from the top 15 countries, for the period 1980 - 2013. Label each country with the total immigrant count. # + id="rC6mILXEfzh6" outputId="01ca909d-68fe-4d73-8c3a-03a12c4a02bb" colab={"base_uri": "https://localhost:8080/", "height": 752} # sort dataframe on 'Total' column (descending) df_can.sort_values(by='Total', ascending=False, inplace=True) # get top 15 countries df_top15 = df_can['Total'].head(15) df_top15 df_top15.plot(kind='barh', figsize=(12, 12), color='steelblue') plt.xlabel('Number of Immigrants') plt.title('Top 15 Countries Contributing to the Immigration to Canada between 1980 - 2013')
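The question above also asks for each bar to be labelled with its count, which the cell does not do yet. One hedged way to add the labels is an `annotate` loop over the plotted values; the three-country Series below is a made-up stand-in for the real `df_top15` (the numbers are illustrative, not the actual totals):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative stand-in for df_top15 (a Series of totals indexed by country)
df_top15 = pd.Series({'India': 691904, 'China': 659962, 'United Kingdom': 551500})

ax = df_top15.plot(kind='barh', figsize=(12, 6), color='steelblue')

# Write each total at the end of its bar; barh places bar i at y = i
for index, value in enumerate(df_top15):
    ax.annotate(format(value, ','), xy=(value, index), va='center')

plt.xlabel('Number of Immigrants')
```

In the notebook itself the same loop can run directly on the real `df_top15` right after the `plot` call.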
Data_Vizualization/Area-Plots-Histograms-and-Bar-Charts.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="jNKaJz5j_ylj" # # Detecting the sentiment of tweets with BERT # + # If you are running this notebook on colab or kaggle, # execute the following lines to load the dlnlputils library: # # !git clone https://github.com/Samsung-IT-Academy/stepik-dl-nlp.git && pip install -r stepik-dl-nlp/requirements.txt # import sys; sys.path.append('./stepik-dl-nlp') # + [markdown] colab_type="text" id="RX_ZDhicpHkV" # ## Installing libraries # + colab={"base_uri": "https://localhost:8080/", "height": 382} colab_type="code" id="0NmMdkZO8R6q" outputId="1cc59bfa-1dbb-4540-cb22-196f399f62af" # # !pip install pytorch-transformers # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Ok002ceNB8E7" outputId="06ef90d2-7518-4209-da66-1dd45c357c78" import torch from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split from pytorch_transformers import BertTokenizer, BertConfig from pytorch_transformers import AdamW, BertForSequenceClassification from tqdm import tqdm, trange import pandas as pd import io import numpy as np from sklearn.metrics import accuracy_score import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="oYsV4H8fCpZ-" outputId="b8812c8e-3149-475f-b4c0-262160485c39" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if device == 'cpu': print('cpu') else: n_gpu = torch.cuda.device_count() print(torch.cuda.get_device_name(0)) # + [markdown] colab_type="text" id="guw6ZNtaswKc" # ## Loading the data # # - # We picked an unusual dataset with sentiment labels for Russian-language tweets (see the [paper](http://www.swsys.ru/index.php?page=article&id=3962&lang=) for details). The corpus we used contains 114,911 positive and 111,923 negative records. It can be downloaded [here](https://study.mokoron.com/). # + import pandas as pd # If you are running the notebook on colab or kaggle, prepend ./stepik-dl-nlp to the paths pos_texts = pd.read_csv('datasets/bert_sentiment_analysis/positive.csv', encoding='utf8', sep=';', header=None) neg_texts = pd.read_csv('datasets/bert_sentiment_analysis/negative.csv', encoding='utf8', sep=';', header=None) # - pos_texts.sample(5) # + sentences = np.concatenate([pos_texts[3].values, neg_texts[3].values]) sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences] labels = [[1] for _ in range(pos_texts.shape[0])] + [[0] for _ in range(neg_texts.shape[0])] # - assert len(sentences) == len(labels) == pos_texts.shape[0] + neg_texts.shape[0] print(sentences[1000]) # + from sklearn.model_selection import train_test_split train_sentences, test_sentences, train_gt, test_gt = train_test_split(sentences, labels, test_size=0.3) # - print(len(train_gt), len(test_gt)) # + [markdown] colab_type="text" id="ex5O1eV-Pfct" # ## Inputs # + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" id="Z474sSC6oe7A" outputId="fbaa8fd8-bccd-4feb-ce52-beba5d293cfa" from pytorch_transformers import BertTokenizer, BertConfig tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) tokenized_texts = [tokenizer.tokenize(sent) for sent in train_sentences] print(tokenized_texts[0]) # + [markdown] colab_type="text" id="87_kXUeT2-br" # BERT has to be given its input data in a special format. # # # - **input ids**: a sequence of integers that maps each token to its index in the vocabulary. # - **labels**: a vector of zeros and ones. In our case zeros denote negative sentiment and ones denote positive sentiment.
# - **segment mask**: (optional) a sequence of zeros and ones that indicates whether the input text consists of one or two sentences. For a single sentence it is a vector of all zeros; for two sentences it is <length_of_sent_1> zeros followed by <length_of_sent_2> ones. # - **attention mask**: (optional) a sequence of zeros and ones, where ones mark the tokens of the sentence and zeros mark the padding. # + colab={} colab_type="code" id="Cp9BPRd1tMIo" input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts] input_ids = pad_sequences( input_ids, maxlen=100, dtype="long", truncating="post", padding="post" ) attention_masks = [[float(i>0) for i in seq] for seq in input_ids] # + colab={} colab_type="code" id="aFbE-UHvsb7-" train_inputs, validation_inputs, train_labels, validation_labels = train_test_split( input_ids, train_gt, random_state=42, test_size=0.1 ) train_masks, validation_masks, _, _ = train_test_split( attention_masks, input_ids, random_state=42, test_size=0.1 ) # + colab={} colab_type="code" id="jw5K2A5Ko1RF" train_inputs = torch.tensor(train_inputs) train_labels = torch.tensor(train_labels) train_masks = torch.tensor(train_masks) # - validation_inputs = torch.tensor(validation_inputs) validation_labels = torch.tensor(validation_labels) validation_masks = torch.tensor(validation_masks) train_labels # + colab={} colab_type="code" id="GEgLpFVlo1Z-" train_data = TensorDataset(train_inputs, train_masks, train_labels) train_dataloader = DataLoader( train_data, sampler=RandomSampler(train_data), batch_size=32 ) # - validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels) validation_dataloader = DataLoader( validation_data, sampler=SequentialSampler(validation_data), batch_size=32 ) # + [markdown] colab_type="text" id="pNl8khAhPYju" # ## Training the model # - # Load [BertForSequenceClassification](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L1129):
from pytorch_transformers import AdamW, BertForSequenceClassification # Similar models exist for other tasks: from pytorch_transformers import BertForQuestionAnswering, BertForTokenClassification # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="gFsCTp_mporB" outputId="dd067229-1925-4b37-f517-0c14e25420d1" model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2) model.cuda() # + colab={} colab_type="code" id="QxSMw0FrptiL" param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5) # + colab={"base_uri": "https://localhost:8080/", "height": 172} colab_type="code" id="6J-FYdx6nFE_" outputId="8e388ad1-f9db-4c7b-d080-6c0a0e964610" from IPython.display import clear_output # We will record the loss during training # and plot it in real time train_loss_set = [] train_loss = 0 # Training # Put the model into training mode model.train() for step, batch in enumerate(train_dataloader): # move the batch to the GPU batch = tuple(t.to(device) for t in batch) # Unpack the data from the dataloader b_input_ids, b_input_mask, b_labels = batch # without .zero_grad() the gradients would accumulate optimizer.zero_grad() # Forward pass loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) train_loss_set.append(loss[0].item()) # Backward pass loss[0].backward() # Update the parameters, taking a step with the computed gradients optimizer.step() # Update the loss train_loss += loss[0].item() # Plot the graph clear_output(True) plt.plot(train_loss_set) plt.title("Training loss") plt.xlabel("Batch")
plt.ylabel("Loss") plt.show() print("Training loss: {0:.5f}".format(train_loss / len(train_dataloader))) # Validation # Put the model into evaluation mode model.eval() valid_preds, valid_labels = [], [] for batch in validation_dataloader: # move the batch to the GPU batch = tuple(t.to(device) for t in batch) # Unpack the data from the dataloader b_input_ids, b_input_mask, b_labels = batch # With .no_grad() the model will not compute or store gradients. # This speeds up predicting labels for the validation data. with torch.no_grad(): logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) # Move logits and class labels to the CPU for further processing logits = logits[0].detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() batch_preds = np.argmax(logits, axis=1) batch_labels = np.concatenate(label_ids) valid_preds.extend(batch_preds) valid_labels.extend(batch_labels) print("Validation accuracy: {0:.2f}%".format( accuracy_score(valid_labels, valid_preds) * 100 )) # - print("Validation accuracy: {0:.2f}%".format( accuracy_score(valid_labels, valid_preds) * 100 )) # + [markdown] colab_type="text" id="mkyubuJSOzg3" # # Evaluating quality on the held-out test set # + colab={} colab_type="code" id="mAN0LZBOOPVh" tokenized_texts = [tokenizer.tokenize(sent) for sent in test_sentences] input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts] input_ids = pad_sequences( input_ids, maxlen=100, dtype="long", truncating="post", padding="post" ) # + attention_masks = [[float(i>0) for i in seq] for seq in input_ids] prediction_inputs = torch.tensor(input_ids) prediction_masks = torch.tensor(attention_masks) prediction_labels = torch.tensor(test_gt) prediction_data = TensorDataset( prediction_inputs, prediction_masks, prediction_labels ) prediction_dataloader = DataLoader( prediction_data,
sampler=SequentialSampler(prediction_data), batch_size=32 ) # + colab={} colab_type="code" id="Hba10sXR7Xi6" model.eval() test_preds, test_labels = [], [] for batch in prediction_dataloader: # move the batch to the GPU batch = tuple(t.to(device) for t in batch) # Unpack the data from the dataloader b_input_ids, b_input_mask, b_labels = batch # With .no_grad() the model will not compute or store gradients. # This speeds up predicting labels for the test data. with torch.no_grad(): logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) # Move logits and class labels to the CPU for further processing logits = logits[0].detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Save the predicted classes and the ground truth batch_preds = np.argmax(logits, axis=1) batch_labels = np.concatenate(label_ids) test_preds.extend(batch_preds) test_labels.extend(batch_labels) # - acc_score = accuracy_score(test_labels, test_preds) print('Held-out test accuracy: {0:.2f}%'.format( acc_score*100 )) print('Incorrect predictions: {0}/{1}'.format( np.sum(np.array(test_labels) != np.array(test_preds)), len(test_labels) )) # ### Homework # Download a dataset of movie reviews, for example the [IMDB Dataset of 50K Movie Reviews](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). # + import pandas as pd dataset = pd.read_csv('datasets/bert_sentiment_analysis/homework/IMDB_Dataset.csv') # - dataset.head() # Fine-tune BERT on the IMDB dataset. # Answer the following questions: # 1. were you able to reach the same accuracy (98\%) with the IMDB dataset? # 2. were you able to get good classification quality in just one epoch? # 3. think about what might cause the differences when fine-tuning the same model on different datasets # - Examine the Russian tweet dataset carefully. What is peculiar about it? Are there obvious patterns or keywords that unambiguously determine the sentiment of a tweet? # - Try removing punctuation from the Russian tweet dataset and rerun the fine-tuning. Did the final quality of the model change? Why?
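For the punctuation experiment in the last homework item, a minimal sketch of one possible preprocessing step (it only strips ASCII punctuation via `str.translate`; how to treat emoticons, which are likely strong sentiment cues in this corpus, is left to the experiment):

```python
import string

def strip_punctuation(text):
    # Drop every ASCII punctuation character; everything else,
    # including Cyrillic letters, is kept unchanged
    return text.translate(str.maketrans('', '', string.punctuation))

print(strip_punctuation('RT @user: Отличный день!!!'))  # → RT user Отличный день
```

The cleaned strings can then be fed through the same `[CLS] ... [SEP]` wrapping and tokenization pipeline as above.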
task9_bert_sentiment_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Welcome to AI for Science Bootcamp # # The objective of this bootcamp is to provide an introduction to applications of artificial intelligence (AI) algorithms in scientific high performance computing. This bootcamp will introduce participants to fundamentals of AI and how AI can be applied to HPC simulation domains. # # The following contents will be covered during the bootcamp: # - [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb) # - [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb) # # ## Quick GPU Check # # Before moving forward let us verify that TensorFlow is able to see and use your GPU. # + # Import Necessary Libraries from __future__ import absolute_import, division, print_function, unicode_literals # TensorFlow and tf.keras import tensorflow as tf from tensorflow import keras # Helper libraries import numpy as np import matplotlib.pyplot as plt print(tf.__version__) tf.test.gpu_device_name() # - # The output of the cell above should show an available compatible GPU on the system (if there are multiple GPUs, only device 0 will be shown). If no GPU device is listed or you see an error, it means that there was no compatible GPU present on the system, and the future calls may run on the CPU, consuming more time. # ## [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb) # # In this notebook, participants will be introduced to convolutional neural networks (CNNs) and how to implement one using the Keras API in TensorFlow. This notebook would serve as a good starting point for absolute beginners to neural networks. 
# # **By the end of this notebook you will:** # # - Understand machine learning pipelines # - Understand how a convolutional neural network works # - Write your own deep learning classifier and train it # # For an in-depth understanding of deep learning concepts, visit the [NVIDIA Deep Learning Institute](https://www.nvidia.com/en-us/deep-learning-ai/education/). # ## [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb) # # In this notebook, participants will be introduced to how deep learning can be applied in the field of fluid dynamics. # # **Contents of this notebook:** # # - Understanding the problem statement # - Building a deep learning pipeline # - Understand the data and the task # - Discuss various models # - Define neural network parameters # - Fully connected networks # - Convolutional models # - Advanced networks # # **By the end of the notebook the participant will:** # # - Understand the process of applying deep learning to computational fluid dynamics # - Understand how residual blocks work # - Benchmark between different models and how they compare against one another # # ## Licensing # This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Start_Here.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ES Tutorial with Fiber # # <NAME> # # June 2020 # # [Introduction to Fiber](https://eng.uber.com/fiberdistributed/) # # ### Overview # # In this notebook, we will go through: # # * how to install Fiber # * how to implement parallel ES with Fiber # * how to run parallel ES locally # * how to run parallel ES on a remote cluster # # ### Installing dependencies # !pip install fiber numpy # ### Import libraries import fiber import functools import numpy as np # ### Parallelized Evolution Strategies # # ![Parallel ES](https://github.com/uber/fiber/raw/docs/gecco-2020/tutorials/imgs/parallel_es.png) # # [source](https://arxiv.org/abs/1703.03864) # ### Define target function # + solution = np.array([5.0, -5.0, 1.5]) def F(theta): return -np.sum(np.square(theta - solution)) # - # ### Define optimization worker function def worker(dim, sigma, theta): epsilon = np.random.rand(dim) return F(theta + sigma * epsilon), epsilon # ### Define ES main loop def es(theta0, worker, workers=40, sigma=0.1, alpha=0.05, iterations=200): dim = theta0.shape[0] theta = theta0 pool = fiber.Pool(workers) func = functools.partial(worker, dim, sigma) for t in range(iterations): returns = pool.map(func, [theta] * workers) rewards = [ret[0] for ret in returns] epsilons = [ret[1] for ret in returns] # normalize rewards normalized_rewards = (rewards - np.mean(rewards)) / np.std(rewards) theta = theta + alpha * 1.0 / (workers * sigma) * sum( [reward * epsilon for reward, epsilon in zip(normalized_rewards, epsilons)] ) if t % 10 == 0: print(theta) return theta theta0 = np.random.rand(3) print(theta0) result = es(theta0, worker) print("Result", result) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
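The same update rule can also be sanity-checked serially, without Fiber. Note that this sketch samples zero-mean Gaussian noise with `np.random.randn`, whereas the worker above uses the uniform `np.random.rand`; the ES gradient estimate assumes zero-mean perturbations, so `randn` is the safer choice here (the population size and iteration count below are illustrative):

```python
import numpy as np

def es_serial(theta0, F, pop=50, sigma=0.1, alpha=0.05, iters=300, seed=0):
    rng = np.random.RandomState(seed)
    theta = theta0.copy()
    for _ in range(iters):
        # One perturbation per "worker", evaluated serially
        epsilons = rng.randn(pop, theta.size)
        rewards = np.array([F(theta + sigma * e) for e in epsilons])
        # Same reward normalization as in es() above
        normalized = (rewards - rewards.mean()) / rewards.std()
        # Same update: theta += alpha / (pop * sigma) * sum_i r_i * eps_i
        theta = theta + alpha / (pop * sigma) * normalized @ epsilons
    return theta

solution = np.array([5.0, -5.0, 1.5])
theta = es_serial(np.zeros(3), lambda t: -np.sum(np.square(t - solution)))
print(np.round(theta, 1))
```

On this quadratic objective the serial loop should end up close to `solution`, with some residual jitter from the stochastic updates.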
examples/gecco-2020/Fiber_ES_tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Resonant Sequences # # (from http://journals.aps.org/prstab/abstract/10.1103/PhysRevSTAB.17.014001) # !date # %matplotlib inline # + from __future__ import division import math def fareySequence(N, k=1): """ Generate Farey sequence of order N, less than 1/k """ # assert type(N) == int, "Order (N) must be an integer" a, b = 0, 1 c, d = 1, N seq = [(a,b)] while c/d <= 1/k: seq.append((c,d)) tmp = int(math.floor((N+b)/d)) a, b, c, d = c, d, tmp*c-a, tmp*d-b return seq def resonanceSequence(N, k): """ Compute resonance sequence Arguments: - N (int): Order - k (int): denominator of the farey frequency resonances are attached to """ a, b = 0, 1 c, d = k, N-k seq = [(a,b)] while d >= 0: seq.append((c,d)) tmp = int(math.floor((N+b+a)/(d+c))) a, b, c, d = c, d, tmp*c-a, tmp*d-b return seq def plotResonanceDiagram(N, figsize=(10,10)): import matplotlib.pyplot as plt ALPHA = 0.5/N plt.figure(figsize=figsize) ticks = set([]) for h, k in fareySequence(N, 1): ticks.add((h,k)) for a, b in resonanceSequence(N, k): if b == 0: x = np.array([h/k, h/k]) y = np.array([0, 1]) elif a== 0: x = np.array([0, 1]) y = np.array([h/k, h/k]) else: m = a/b cp, cm = m*h/k, -m*h/k x = np.array([0, h/k, 1]) y = np.array([cp, 0, cm+m]) plt.plot( x, y, 'b', alpha=ALPHA) # seqs. attached to horizontal axis plt.plot( y, x, 'b', alpha=ALPHA) # seqs. 
attached to vertical axis # also draw symmetrical lines, to be fair (otherwise lines in the # lower left triangle will be duplicated, but not the others) plt.plot( x, 1-y, 'b', alpha=ALPHA) plt.plot(1-y, x, 'b', alpha=ALPHA) plt.xlim(0, 1) plt.ylim(0, 1) plt.xticks([h/k for h,k in ticks], [r"$\frac{{{:d}}}{{{:d}}}$".format(h,k) for h,k in ticks], fontsize=15) plt.yticks([h/k for h,k in ticks], [r"${:d}/{:d}$".format(h,k) for h,k in ticks]) plt.title("N = {:d}".format(N)) # - # ## Generating resonance sequences is fast # # Try it! # # **Note**: in the original paper there was a minor mistake. Eq. (8) read # # $$ # \Bigg( \Big\lfloor \frac{N+b+a}{d} \Big\rfloor c - a , \Big\lfloor \frac{N+b+a}{d} \Big\rfloor d - b \Bigg) # $$ # # but it should read # # $$ # \Bigg( \Big\lfloor \frac{N+b+a}{d+c} \Big\rfloor c - a , \Big\lfloor \frac{N+b+a}{d+c} \Big\rfloor d - b \Bigg) # $$ # # I've contacted <NAME> (the author) and he agreed with the correction (an erratum will be sent to the publication). N = 5 for k in set([k for _,k in fareySequence(N, 1)]): print "N={}, k={}:".format(N, k) print "\t", resonanceSequence(N, k) # ## ..., but plotting can be slow for large N (N > 10) # # Try it, but be patient ... (lots of lines to plot) # + # from matplotlib2tikz import save as save_tikz import numpy as np import matplotlib.pyplot as plt plotResonanceDiagram(10, figsize=(12,12)) # save_tikz('resonanceDiagram_N7.tikz') def plotSolution(a,b,c,color='r'): x = [c/a, 0, (c-b)/a, 1] y = [0, c/b, 1, (c-a)/b] plt.plot(x, y, color=color, alpha=0.5, linewidth=4) # plot some example solutions if True: # solutions for (x,y) = (0.5, 1) plotSolution( 4, -1, 1) plotSolution(-2, 2, 1) plotSolution(-2, 3, 2) plotSolution( 4, 1, 3) plotSolution( 2, 1, 2) # solutions for (x,y) = (0.5, 0.5) plotSolution( 3, -1, 1, 'g') plotSolution(-1, 3, 1, 'g') plotSolution( 3, 1, 2, 'g') plotSolution( 1, 3, 2, 'g') plotSolution( 1, 1, 1, 'g') # - plt.show()
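The next-term recurrence inside `fareySequence` can be sanity-checked against the well-known order-5 Farey sequence; the function body is repeated below, ported to Python 3 syntax (this notebook runs on a Python 2 kernel), so the check is self-contained:

```python
def farey_sequence(N, k=1):
    # Same recurrence as fareySequence above, Python 3 syntax
    a, b = 0, 1
    c, d = 1, N
    seq = [(a, b)]
    while c / d <= 1 / k:
        seq.append((c, d))
        tmp = (N + b) // d
        a, b, c, d = c, d, tmp * c - a, tmp * d - b
    return seq

# F5 = 0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1
expected_F5 = [(0, 1), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2),
               (3, 5), (2, 3), (3, 4), (4, 5), (1, 1)]
print(farey_sequence(5) == expected_F5)  # → True
```

The length of the order-N sequence is 1 plus the sum of Euler's totient over 1..N, which gives another quick consistency check.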
notebooks/utils/Resonance Sequences.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python3 # --- # # CASE STUDY - unsupervised learning # # !pip install joblib # !pip install -U imbalanced-learn # + import os import joblib import time import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt from sklearn.utils import shuffle from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.compose import ColumnTransformer from sklearn.base import BaseEstimator, TransformerMixin from sklearn.impute import SimpleImputer from sklearn.cluster import KMeans, SpectralClustering from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.metrics import classification_report, f1_score from sklearn.metrics import silhouette_score from sklearn.ensemble import RandomForestClassifier from sklearn.mixture import BayesianGaussianMixture from sklearn.svm import SVC from sklearn.linear_model import LogisticRegression import imblearn.pipeline as pl from imblearn.over_sampling import RandomOverSampler from imblearn.over_sampling import SMOTE, SVMSMOTE plt.style.use('seaborn') # %matplotlib inline # - # ## Make this notebook run in IBM Watson # + # The code was removed by Watson Studio for sharing. # + # START CODE BLOCK # cos2file - takes an object from Cloud Object Storage and writes it to file on container file system. # Uses the IBM project_lib library. 
# See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html # Arguments: # p: project object defined in project token # data_path: the directory to write the file # filename: name of the file in COS import os def cos2file(p,data_path,filename): data_dir = p.project_context.home + data_path if not os.path.exists(data_dir): os.makedirs(data_dir) open( data_dir + '/' + filename, 'wb').write(p.get_file(filename).read()) # file2cos - takes a file on the container file system and writes it to an object in Cloud Object Storage. # Uses the IBM project_lib library. # See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html # Arguments: # p: project object defined in project token # data_path: the directory to read the file from # filename: name of the file on the container file system import os def file2cos(p,data_path,filename): data_dir = p.project_context.home + data_path path_to_file = data_dir + '/' + filename if os.path.exists(path_to_file): file_object = open(path_to_file, 'rb') p.save_data(filename, file_object, set_project_asset=True, overwrite=True) else: print("file2cos error: File not found") # END CODE BLOCK # - cos2file(project, '/data', 'aavail-target.csv') # ## Synopsis # # > We are now going to predict customer retention. There are many models and many transforms to consider. Use your # knowledge of pipelines and functions to ensure that your code makes it easy to compare and iterate. # # > Marketing has asked you to make a report on customer retention. They would like you to come up with information that can be used to improve current marketing strategy efforts. The current plan is for marketing at AAVAIL to
Run an experiment to see if re-sampling techniques improve your model # # ## Data # # Here we load the data as we have already done. # # `aavail-target.csv` data_dir = os.path.join("..","data") df = pd.read_csv(os.path.join(data_dir, r"aavail-target.csv")) df.head() ## pull out the target and remove unneeded columns _y = df.pop('is_subscriber') y = np.zeros(_y.size) y[_y==0] = 1 df.drop(columns=['customer_id','customer_name'], inplace=True) df.head() # ### QUESTION 1 # # Create a stratified train test split of the data # + X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.25, stratify=y, random_state=1) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) # - # ### QUESTION 2 # # Create a baseline model. We are going to test whether clustering followed by a model improves the results. Then we will test whether re-sampling techniques provide improvements. Use a pipeline or another method, but create a baseline model given the data. Here is the ColumnTransformer we have used before.
# + ## preprocessing pipeline numeric_features = ['age', 'num_streams'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='mean')), ('scaler', StandardScaler())]) categorical_features = ['country', 'subscriber_type'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) # + best_params = {} # Logistic Regression pipe_log = Pipeline([("prep", preprocessor), ("log", LogisticRegression())]) param_grid_log = [{ 'log__C': [0.01,0.1,0.5,1.0,1.5,5.0,10.0], 'log__penalty': ["l1", "l2"] }] grid_search_log = GridSearchCV(pipe_log, param_grid=param_grid_log, cv=5, n_jobs=-1) grid_search_log.fit(X_train, y_train) y_pred = grid_search_log.predict(X_test) print("-->".join(pipe_log.named_steps.keys())) best_params = grid_search_log.best_params_ print("f1_score", round(f1_score(y_test, y_pred,average='binary'),3)) # SVM pipe_svm = Pipeline([("prep", preprocessor), ("svm", SVC(kernel='rbf', class_weight='balanced'))]) param_grid_svm = [{ 'svm__C': [0.01,0.1,0.5,1.0,1.5,5.0,10.0], 'svm__gamma': [0.001,0.01,0.1] }] grid_search_svm = GridSearchCV(pipe_svm, param_grid=param_grid_svm, cv=5, n_jobs=-1) grid_search_svm.fit(X_train, y_train) y_pred = grid_search_svm.predict(X_test) print("-->".join(pipe_svm.named_steps.keys())) best_params = dict(best_params, **grid_search_svm.best_params_) print("f1_score", round(f1_score(y_test, y_pred, average='binary'),3)) # Random Forest pipe_rf = Pipeline([("prep", preprocessor), ("rf", RandomForestClassifier())]) param_grid_rf = { 'rf__n_estimators': [20,50,100,150], 'rf__max_depth': [4, 5, 6, 7, 8], 'rf__criterion': ['gini', 'entropy'] } grid_search_rf = GridSearchCV(pipe_rf, param_grid=param_grid_rf, cv=5, n_jobs=-1) grid_search_rf.fit(X_train, y_train) y_pred = 
grid_search_rf.predict(X_test) print("-->".join(pipe_rf.named_steps.keys())) best_params = dict(best_params, **grid_search_rf.best_params_) print("f1_score",round(f1_score(y_test, y_pred,average='binary'),3)) ### best_params # - # ### QUESTION 3 # # The next part is to create a version of the classifier that uses identified clusters. Here is a class to get you started. It is a transformer like those that we have been working with. There is an example of how to use it just below. In this example 4 clusters were specified and their one-hot encoded versions were appended to the feature matrix. Now, using pipelines and/or functions, compare the performance with cluster profiling as part of your feature matrix against the baseline. You may compare multiple models and multiple clustering algorithms here. # + class KmeansTransformer(BaseEstimator, TransformerMixin): def __init__(self, k=4): self.km = KMeans(n_clusters=k, n_init=20) def transform(self, X, *_): labels = self.km.predict(X) enc = OneHotEncoder(categories='auto') oh_labels = enc.fit_transform(labels.reshape(-1,1)) oh_labels = oh_labels.todense() return(np.hstack((X,oh_labels))) def fit(self,X,y=None,*_): self.km.fit(X) labels = self.km.predict(X) self.silhouette_score = round(silhouette_score(X,labels,metric='mahalanobis'),3) return(self) class GmmTransformer(BaseEstimator, TransformerMixin): def __init__(self, k=4): self.gmm = BayesianGaussianMixture(n_components=k,covariance_type='full', max_iter=500, n_init=10, warm_start=True) def transform(self, X,*_): probs = self.gmm.predict_proba(X) + np.finfo(float).eps return(np.hstack((X,probs))) def fit(self,X,y=None,*_): self.gmm.fit(X) labels = self.gmm.predict(X) self.silhouette_score = round(silhouette_score(X,labels,metric='mahalanobis'),3) return(self) ## example for GMM preprocessor.fit(X_train) X_train_pre = preprocessor.transform(X_train) gt = GmmTransformer(4) gt.fit(X_train_pre) X_train_gmm = gt.transform(X_train_pre) print(X_train_pre.shape) print(X_train_gmm.shape) ## 
example for kmeans preprocessor.fit(X_train) X_train_pre = preprocessor.transform(X_train) kt = KmeansTransformer(4) kt.fit(X_train_pre) X_train_kmeans = kt.transform(X_train_pre) print(X_train_pre.shape) print(X_train_kmeans.shape) # - def run_clustering_pipeline(X_train, y_train, X_test, y_test, smodel, umodel, best_params, preprocessor): fscores,sscores = [],[] for n_clusters in np.arange(3, 8): if smodel=="rf": clf = RandomForestClassifier(n_estimators=best_params['rf__n_estimators'], criterion=best_params['rf__criterion'], max_depth=best_params['rf__max_depth']) elif smodel=="log": clf = LogisticRegression(C=best_params['log__C'], penalty=best_params["log__penalty"]) elif smodel=="svm": clf = SVC(C=best_params['svm__C'], gamma=best_params['svm__gamma']) else: raise Exception("invalid supervised learning model") if umodel=="kmeans": cluster = KmeansTransformer(k=n_clusters) elif umodel=="gmm": cluster = GmmTransformer(k=n_clusters) else: raise Exception("invalid unsupervised learning model") pipe = Pipeline(steps=[('pre', preprocessor), ('cluster', cluster), ('clf', clf)]) pipe.fit(X_train, y_train) y_pred = pipe.predict(X_test) fscore = round(f1_score(y_test, y_pred, average='binary'),3) sscore = pipe['cluster'].silhouette_score fscores.append(fscore) sscores.append(sscore) return fscores, sscores # + cp_results = {} smodels = ("svm","rf") umodels = ("kmeans","gmm") for pair in [(smodel, umodel) for smodel in smodels for umodel in umodels]: f, s = run_clustering_pipeline(X_train, y_train, X_test, y_test, smodel=pair[0], umodel=pair[1], best_params=best_params, preprocessor=preprocessor) cp_results[pair[0] + "-" + pair[1] + "-f"] = f cp_results[pair[0] + "-" + pair[1] + "-s"] = s cp_results # - ## display table of results df_cp = pd.DataFrame(cp_results) df_cp["n_clusters"] = [str(i) for i in np.arange(3, 8)] df_cp.set_index("n_clusters", inplace=True) df_cp.head(n=10) # `svm-kmeans` performs at baseline while `svm-gmm` performs below. 
The `random forests` model potentially sees a small improvement with the addition of clusters. This is a fairly small dataset with a small number of features. The utility of adding clustering to the pipeline is generally more apparent in higher dimensional data sets. # ## QUESTION 4 # # Run an experiment to see if you can improve on your workflow with the addition of re-sampling techniques. def run_clustering_and_resampling_pipeline(X_train, y_train, X_test, y_test, smodel, umodel, best_params, preprocessor): fscores,sscores = [],[] for n_clusters in np.arange(3, 8): if smodel=="rf": clf = RandomForestClassifier(n_estimators=best_params['rf__n_estimators'], criterion=best_params['rf__criterion'], max_depth=best_params['rf__max_depth']) elif smodel=="log": clf = LogisticRegression(C=best_params['log__C'], penalty=best_params["log__penalty"]) elif smodel=="svm": clf = SVC(C=best_params['svm__C'], gamma=best_params['svm__gamma']) else: raise Exception("invalid supervised learning model") if umodel=="kmeans": cluster = KmeansTransformer(k=n_clusters) elif umodel=="gmm": cluster = GmmTransformer(k=n_clusters) else: raise Exception("invalid unsupervised learning model") # pl is the imbalanced-learn pipeline module, which supports samplers like SMOTE pipe = pl.Pipeline(steps=[ ('pre', preprocessor), ('cluster', cluster), ('smote', SMOTE(random_state=42)), ('clf', clf)]) pipe.fit(X_train, y_train) y_pred = pipe.predict(X_test) fscore = round(f1_score(y_test, y_pred, average='binary'),3) sscore = pipe['cluster'].silhouette_score fscores.append(fscore) sscores.append(sscore) return fscores, sscores # + cp_results = {} smodels = ("svm","rf") umodels = ("kmeans","gmm") for pair in [(smodel, umodel) for smodel in smodels for umodel in umodels]: f, s = run_clustering_and_resampling_pipeline(X_train, y_train, X_test, y_test, smodel=pair[0], umodel=pair[1], best_params=best_params, preprocessor=preprocessor) cp_results[pair[0] + "-" + pair[1] + "-f"] = f cp_results[pair[0] + "-" + pair[1] + "-s"] = s cp_results # - ## display table of results df_cp = 
pd.DataFrame(cp_results) df_cp["n_clusters"] = [str(i) for i in np.arange(3, 8)] df_cp.set_index("n_clusters", inplace=True) df_cp.head(n=10) # ## Solution Note # The inclusion of customer profiles does not significantly improve overall model performance for either pipeline. There may be some minor improvement depending on the random seed, but since it does not degrade model performance either, it can be useful in the context of marketing: the clusters are customer profiles that are tied to predictive performance. The re-sampling does help the random forest classifier obtain performance similar to the SVM in this case.
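# The re-sampling step above uses `SMOTE` from imbalanced-learn. Its effect on class counts can be sketched with plain NumPy via naive random oversampling (a simplified stand-in: SMOTE interpolates between minority-class neighbours instead of duplicating rows as done here).

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([0] * 8 + [1] * 2)            # imbalanced: 8 majority vs 2 minority

# naive random oversampling: duplicate minority rows until classes balance
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=(y == 0).sum() - minority.size)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

print((y_bal == 0).sum(), (y_bal == 1).sum())  # -> 8 8
```

# With balanced classes, the classifier can no longer do well by favouring the majority label, which is one plausible reason the re-sampled random forest catches up to the SVM here.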
m3-feature-engineering-and-bias-detection/case-study-clustering.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Basic Concepts # * The stock market is an arena where bulls and bears contend, and chips (held shares) are the core of that contest. The academic name for chip distribution is "cost distribution of floating share holdings"; it reflects the number of shares held by investors at different price levels # * Chip distribution theory studies the reversible exchange between chips and cash through stock price and trading volume. It assumes that everything affecting a stock's intrinsic value and its supply and demand can be reduced to chips. Under this theory, investment returns come from converting cash into chips at low prices and converting the chips back into cash at high prices. The essence of a stock's movement is therefore the state of the chip movement behind the trading volume: a contest between capital and chips. # ### Movement Relationships # The relationship between chip distribution and price movement: # * When chips are concentrated at a certain level, the main force (institutional operator) is accumulating, and the probability of a price rise is high # * When chips at a certain level become dispersed, the main force is selling, and the probability of a price decline increases # # Changes in the number of shareholders of a listed company correlate with its secondary-market trend: # * The fewer the shareholders, the more concentrated the chips; the price trend tends to be independent and often moves against the broader market # * The more the shareholders, the more dispersed the chips; the price trend tends to be weak, lacks independence, and usually follows the broader market. # #### Shape and Movement of Chips # The shape of a chip distribution is mainly either dense or dispersed # The movement of a chip distribution is mainly either concentration or divergence # # Price regions with heavy trading form chip peaks, with valleys between two peaks; this is the visual shape of the chips. Density can occur at high price levels or at low ones. # # Every market cycle runs from turnover at low levels to turnover at high levels, and then from high levels back to low levels # ### Three Phases of Chip Distribution # A market cycle consists of three phases: accumulation, markup, and distribution. # ### Chips and the Market Maker # * The board is covered with the other side's markers, and few of the market maker's words are true. Should we stop playing, then? That would be timid. We can use our heads and go along with their path, against it, or around it, and find the fun in doing so. # * In fact, in the stock-market game, main-force capital plays a crucial role. Without the main force's "hard work", it is difficult to imagine a stock rising by more than 100%; even a blue chip needs institutions to discover it. # * Retail investors cannot visit a listed company to do research; call the board secretary of any company and he will never tell you anything of value. When an institution takes a position in a stock it deploys its forces, sometimes even planting a "bomb" and swallowing the whole float. And this is exactly what gives us our chance. If we can find a way to detect the main force's accumulation, and even estimate its position size and holding cost, it is as if we had X-ray eyes that can see where the opponent's headquarters sit and where the bombs lie, and the game starts to get interesting. 
# ### How Chip Data Is Computed # * Since the exchanges do not provide the public with investors' account information, the chip distributions shown in charting software are approximations computed from historical trades. Assuming that the probability of selling depends on floating profit and holding time, one can sample a population of investors to estimate this selling-probability function, and then use it to decide which old chips were cleared by each day's trading and replaced by new ones. # * To put it more simply: judging from the profit-taking habits of many investors, retail holders are most likely to sell between 10% and 20% profit, while a main force will rarely unload most of its position below 30% profit. A lot sitting at 15% profit therefore contributes more to a given day's volume than one sitting at 25% profit. This is the more precise way to compute the chip distribution. For computational cost, an equal selling probability across all price levels is sometimes used instead of the true one; this introduces an error, but a tolerable one, because in practical analysis a somewhat larger or smaller chip count at a given price does not change the final conclusion. # * The precise academic name of "chip distribution" is "cost distribution of floating share holdings"; it reflects investors' holdings at different price levels. # * The chip distribution stacks up the volume historically traded at each price level to estimate the holding cost of all floating shares. # * Of course, part of the historical volume is sold again in later sessions, so past volume cannot simply be accumulated to the present: it has to decay. The decay rate is each day's turnover rate. # > For example, take a float of 10 million shares. The day before yesterday the average price was 10 yuan on a volume of 2 million shares, i.e. 20% turnover; yesterday another 3 million shares traded at an average of 11 yuan, i.e. 30% turnover. What happened to the 2 million from the day before? Cost analysis assumes that 30% of them were also turned over yesterday at 11 yuan, so (in units of 10,000 shares) 200*(1-30%) = 140 remain at 10 yuan. If today another 4 million shares trade at an average of 12 yuan, the same logic gives the current distribution: 200*(1-30%)*(1-40%) = 84 at 10 yuan, 300*(1-40%) = 180 at 11 yuan, and 400 at 12 yuan. # Since nobody knows exactly which price levels the sold chips came from, how does charting software draw the chip distribution? # The method is brutally simple: because the source prices of the sold chips are unknown, every price level is forced to sell the same proportion. # ![image.png](attachment:862c236a-5fde-4d87-aff0-ea53009d1fb8.png) # Once you know how to handle the second trade, the third and fourth follow by analogy; as long as we have tick-by-tick data we can draw the chip distribution chart. # Some charting software uses a more refined algorithm in which not all price levels sell the same proportion. # # Some algorithms assume that the larger the profit at a price level, the more its holders tend to reduce. A 10-yuan lot carries more profit than a 10.3-yuan lot, so the 10-yuan level sells a higher proportion. # # Other algorithms assume that the longer a position has been held, the more it tends to be reduced. # # Of course, although these algorithms try to approach the real situation, a gap certainly remains. # > Some charting software is less careful and takes a further shortcut. # What does that mean? The algorithm described above approximates the distribution from tick-by-tick data, which is a very large amount of data. # To save effort, some software merges a whole day's trades into a single record, the day's average price and total volume, and computes the chip distribution from that one record alone. # The result is an approximation of an approximation; you can imagine the distortion yourself. # ### Algorithm # Initial cost distribution = original share capital, placed at the listing price # # Average cost distribution = original share capital / (day's high - day's low) * turnover rate * decay coefficient # # Decay coefficient = 1 # For example, stock 600xxx lists on day one at 7.8 yuan with a share capital of 30 million, trading between 7.8 and 8.6 at a turnover rate of 0.2; then for this stock # # Initial cost distribution = 30 million placed at 7.8; # # Average cost distribution at 8.4 = 3000/(8.6-7.8)*0.2*1. # # # Viewed from the present: # # Moving cost distribution = today's cost distribution * turnover rate * decay coefficient + historical cost distribution * (1 - turnover rate * decay coefficient) # # Likewise, for the stock above, the moving cost distribution at 7.8 = 3000/(8.6-7.8)*0.2*1 + 3000*(1-0.2*1). 
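# The uniform-decay bookkeeping in the worked example above can be written out directly (prices in yuan, volumes in units of 10,000 shares, all numbers taken from the text; the function name is ours, not from any charting package).

```python
# each trading day: scale every existing price level down by that day's
# turnover rate, then book the new volume at the day's average price
def update_chips(chips, price, volume, float_shares):
    turnover = volume / float_shares
    chips = {p: v * (1 - turnover) for p, v in chips.items()}
    chips[price] = chips.get(price, 0) + volume
    return chips

chips = {}                                   # price -> volume (x10,000 shares)
chips = update_chips(chips, 10, 200, 1000)   # day 1: 20% turnover
chips = update_chips(chips, 11, 300, 1000)   # day 2: 30% turnover
chips = update_chips(chips, 12, 400, 1000)   # day 3: 40% turnover
print(chips)  # approximately {10: 84, 11: 180, 12: 400}, matching the example
```

# This reproduces the figures 84 / 180 / 400 from the blockquote above.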
# ### Active Chips # * Active chips measure the percentage of all floating chips located near the current price. The value ranges from 0 to 100: the larger it is, the more chips sit near the current price; the smaller it is, the fewer. # * The value also describes how dense the chips are. If today's active-chips value is 50, the chips near the price are in a dense state; if it is 10, few chips sit near the price and most lie far from it, either deep in profit or deep in loss. # * A very small value deserves particular attention. For example, after a long decline, if the active-chips value is very small (below 10), most chips are deeply trapped and most holders are no longer willing to sell at a loss, so this often makes a good buying point. Likewise, after a period of rising prices, if the active-chips value is small (below 10), most chips hold substantial profits; if at the same time the control-strength value is large (above 20), the stock showed clear operator characteristics earlier, and the total gain so far is not large, this can also be a good buying point. So at different stages of the price's path, considering the amount of active chips provides a useful auxiliary signal. # # ### Whose Chips Are Not Dumped # When chips sit at high prices with no obvious sign of distribution: imagine a stock that has fallen more than 30% without ever trading heavy volume, with neither the high-level nor the low-level chips moving. That is abnormal. Retail investors are, above all, scattered; after a sufficient decline many of them will stop out and exit. A sustained slide without volume can only mean a main force is trapped inside, because a main force generally cannot stop out — the cost would be too high. Such stocks are best suited to "hunting the operator" trades. # ### Whose Chips Are Not Sold # # When chips sit at low prices and a stock has risen more than 20% without ever trading heavy volume, dense chips at the bottom staying put is also abnormal. Retail holders rarely resist such temptation so uniformly; it can only mean a main force is at work, and most main forces will not leave without at least a 30% profit, since after costs the net gain would otherwise be too small. Such stocks are best suited to "riding along" trades. # For quantitative investing, looking at this chart alone is useless; you must have the underlying raw data.
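# As an illustration of the definition above, here is a hypothetical helper (both the name and the ±5% price band are our assumptions, not taken from any charting software) that measures the share of chips near the current price.

```python
def active_chips(chips, price, band=0.05):
    """Percent of all chips sitting within +/- band of the current price.

    chips: dict mapping price level -> chip volume (assumed structure).
    """
    total = sum(chips.values())
    near = sum(v for p, v in chips.items() if abs(p - price) <= band * price)
    return 100.0 * near / total

chips = {10: 84, 11: 180, 12: 400}          # price -> volume
print(round(active_chips(chips, 12), 1))    # -> 60.2 (only the 12-yuan chips are "near")
```

# A low value after a long decline means most chips are trapped far below; a low value after a rise means most chips hold large profits, matching the two cases discussed above.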
选股/筹码理论.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pickle import numpy as np # ## General description of OpSeF # # The analysis pipeline consists of four principal sets of functions to import and reshape the data, to pre-process it, to segment it, and to analyze and classify results. # # <img src="./Figures_Demo/Fig_M1.jpg" alt = "Paper Fig M1" style = "width: 500px;"/> # # OpSeF is designed such that the parameters for pre-processing and the selection of the ideal segmentation model form one functional unit. # # We recommend an iterative optimization that starts with a large number of models and relatively few, but conceptually distinct, preprocessing pipelines, and then lowers the number of models to be explored while fine-tuning the most promising pre-processing pipelines, e.g. by optimizing the filter kernel or the way histograms are equalized. # ## Description of the parameter tuning process for the epithelial cell dataset # # The parameter tuning was performed as described below: # In run 1 weak and strong smoothing (Input_000, Input_004) were applied, and the additional effect of histogram equalization (Input_001, Input5), edge enhancement (Input 002, Input006), and image inversion (Input 003, Input007) was tested. # # All available models were tested; the Cellpose size range 0.6, 0.8, 1, 1.4, 1.8 was explored. # # ### Run 1: # # Cellpose nuclei with size = 1.4 & Input 0 gave a reasonable segmentation for big and small cells. # # <img src="./Figures_Demo/Fig_R4_A.jpg" alt = "Paper Fig R4 A" style = "width: 600px;"/> # # StarDist segmentations using the same input also segmented the nuclei reasonably well. # However, many false-positive detections were present. As these are in general much larger than nuclei, they could likely be filtered out easily. 
# # While the U-Net did not give useful segmentations with Input 000, it returned reasonable results with Input 5. # # #### => focus further optimization on Input 000 and Input 005 # #### => focus further development on Cellpose nuclei, StarDist and U-Net # # ### Run 2: # # Slightly stronger smoothing of objects is overall beneficial for: # Cellpose nuclei size = 1.5 & StarDist, but overall results are not much improved compared # to Run 1. Thus, retraining or smart post-processing will likely be required. # Both are better than the U-Net. Still, all three might be useful if combined in a "majority voting". # # <img src="./Figures_Demo/Fig_R4_BC.jpg" alt = "Paper Fig R4 BC" style = "width: 600px;"/> # # # #### => check how well StarDist & Cellpose nuclei size = 1.4 with Input 1 perform on other images # # ### Run3: # Cellpose nuclei size = 1.5 with ("Median",5,0,"Max",False,run_def(clahe_prm),"no",False) pre-processing # works similarly well on all images # # ### Run4: # StarDist with ("Median",5,0,"Max",False,run_def(clahe_prm),"no",False) pre-processing # works similarly well on all images # # ### Conclusion # Retraining of the models or smart post-processing will be required. Interestingly, the Cellpose model reproducibly misses round, very bright nuclei. This problem should be easy to fix with new training data. The same holds for the numerous false detections by the StarDist model. # # ## Load Core-Settings that shall not be changed # Please use OpSef_Configure_XXX to change these global settings. # Changes in this file are only necessary to integrate new models # or to change the auto-generated folder structure. # Changes to the folder structure might cause errors. 
file_path = "./my_runs/main_settings.pkl" infile = open(file_path,'rb') parameter = pickle.load(infile) print("Loading processing pipeline from",file_path) infile.close() model_dic,folder_structure = parameter # ## Define General Parameter # most parameters define the overall processing and likely do not change between runs # + # Define variables that determine the processing pipeline and (generally) do not change between runs pc = {} ################# ## Basic ######## ################# pc["sub_f"] = folder_structure # these folders will be auto-generated pc["batch_size"] = 2 # the number of images to be quantified must be a multiple of batch size (for segmentation) # extract the properties (below) from region_props pc["get_property"] = ["label","area","centroid", "eccentricity", "equivalent_diameter","mean_intensity","max_intensity", "min_intensity","perimeter"] pc["naming_scheme"] = "Simple" # "Simple" or "Export_ZSplit" to create substacks pc["toFiji"] = False # Shall images be prepared for import in Fiji ############################### # Define use of second channel ############################### pc["export_another_channel"] = False # export other channel (to create a mask or for quantification) ? if pc["export_another_channel"]: pc["create_filter_mask_from_channel"] = False # use second channel to make a mask? pc["Quantify_2ndCh"] = False # shall this channel be quantified? if pc["Quantify_2ndCh"]: pc["merge_results"] = True # shall the results of the two intensity quantifications be merged # (needed for advanced plotting) pc["plot_merged"] = True # plot head of dataframe in notebook ? ################################ # Define Analysis & Plotting ### ################################ pc["Export_to_CSV"] = False # shall results be exported to CSV (usually only true for the final run) if pc["Export_to_CSV"]: pc["Intensity_Ch"] = 999 # put 999 if data contains only one channel pc["Plot_Results"] = True # Do you want to plot results ? 
pc["Plot_xy"] = [["area","mean_intensity"],["area","circularity"]] # Define what you want to plot (x/y) pc["plot_head_main"] = True # plot head of dataframe in notebook ? pc["Do_ClusterAnalysis"] = False # shall cluster analysis be performed? if pc["Do_ClusterAnalysis"]: # Define (below) which values will be included in the TSNE: pc["include_in_tsne"] = ["area","eccentricity","equivalent_diameter", "mean_intensity","max_intensity","min_intensity","perimeter"] pc["cluster_expected"] = 4 # How many groups/classes do you expect? pc["tSNE_learning_rate"] = 100 # Define learning rate pc["link_method"] = "ward" # or "average", "complete", "single"; details see below else: pc["Plot_Results"] = False pc["Do_ClusterAnalysis"] = False pc["toFiji"] = False # + # Define input & basic processing that (generally) does not change between runs input_def = {} input_def["root"] = "/home/trasse/OpSefDemo/SDB2018_EpiCells" # define folder where images are located input_def["dataset"] = "SDB2018_EpiCells" # give the dataset a common name input_def["mydtype"] = np.uint8 # bit depth of input images input_def["input_type"] = ".tif" # or ".lif" if input_def["input_type"] == ".tif": input_def["is3D"] = False # is the data 3D or 2D ??? 
elif input_def["input_type"] == ".lif": input_def["rigth_size"] = (2048,2048) input_def["export_single_ch"] = 99 # which channel to extract from the lif file (if only one) input_def["split_z"] = False # choose here to split the z-stack into multiple substacks so that cells do not fuse after projection if input_def["split_z"]: # if chosen, define: input_def["z_step"] = 3 # size of substacks if input_def["input_type"] == ".lif": input_def["export_multiple_ch"] = [0,1] # channels to be exported ######################################################################### ## the following options are only implemented with Tiff files as input ## ######################################################################### input_def["toTiles"] = False if input_def["toTiles"]: input_def["patch_size"] = (15,512,512) input_def["bin"] = False if input_def["bin"]: input_def["bin_factor"] = 2 # same for x/y # coming soon... input_def["n2v"] = False input_def["CARE"] = False # + # Define parameters for export (if needed) if pc["export_another_channel"]: input_def["post_export_single_ch"] = 0 # which channel to extract from the lif file input_def["post_subset"] = ["09_Ch_000_CS985"] # analyse these intensity images if pc["Quantify_2ndCh"]: pc["Intensity_2ndCh"] = input_def["post_export_single_ch"] # + # Define model # in this dictionary all settings for the model are stored initModelSettings = {} # Variables U-Net Cellprofiler initModelSettings["UNet_model_file_CP01"] = "./model_Unet/UNet_CP001.h5" initModelSettings["UNetShape"] = (1024,1024) initModelSettings["UNetSettings"] = [{"activation": "relu", "padding": "same"},{ "momentum": 0.9}] # Variables StarDist initModelSettings["basedir_StarDist"] = "./model_stardist" # Variables Cellpose initModelSettings["Cell_Channels"] = [[0,0],[0,0]] # - # ## Define Runs # # The parameters listed below are likely to change between runs. # # Preprocessing is mainly based on scikit-image. 
# # Segmentation incorporates the pre-trained U-Net implementation used in CellProfiler 3.0, the StarDist 2D model, and Cellpose. # # Importantly, OpSeF is designed such that the parameters for pre-processing and the selection of the ideal segmentation model form one functional unit. # # <img src="./Figures_Demo/Fig_M4.jpg" alt = "Paper Fig M4" style = "width: 500px;"/> # # The figure above illustrates this concept with a processing pipeline in which three different models are applied to four different pre-processing pipelines each. Next, the resulting images are classified into results that are largely correct or that suffer from failure to detect objects, under-segmentation, or over-segmentation. In the given example, pre-processing pipeline three and model two seem to give overall the best result. # + # Define variables that might change in each run run_def = {} run_def["display_base"] = "000" # defines the image used as basis for the overlay. See documentation for details. run_def["run_ID"] = "004" # give each run a new ID (unless you want to overwrite the old data) run_def["clahe_prm"] = [(18,18),3] # Parameters for CLAHE # Run 1-2 input_def["subset"] = ["Train"] # filter by name # Run 3-4 input_def["subset"] = ["All"] # filter by name ######################### # Define preprocessing ## ######################### # Run1 run_def["pre_list"] = [["Median",3,0,"Max",False,run_def["clahe_prm"],"no",False], ["Median",3,0,"Max",True,run_def["clahe_prm"],"no",False], ["Median",3,0,"Max",False,run_def["clahe_prm"],"sobel",False], ["Median",3,0,"Max",False,run_def["clahe_prm"],"no",True], ["Mean",7,0,"Max",False,run_def["clahe_prm"],"no",False], ["Mean",7,0,"Max",True,run_def["clahe_prm"],"no",False], ["Mean",7,0,"Max",False,run_def["clahe_prm"],"sobel",False], ["Mean",7,0,"Max",False,run_def["clahe_prm"],"no",True]] # Run2 run_def["pre_list"] = [["Median",3,0,"Max",False,run_def["clahe_prm"],"no",False], ["Median",5,0,"Max",False,run_def["clahe_prm"],"no",False], 
["Mean",7,0,"Max",True,run_def["clahe_prm"],"no",False], ["Mean",9,0,"Max",True,run_def["clahe_prm"],"no",False]] # Run 3,4 run_def["pre_list"] = [["Median",5,0,"Max",False,run_def["clahe_prm"],"no",False]] # For Cellpose run_def["rescale_list"] = [0.6,0.8,1,1.4,1.8] # run1 run_def["rescale_list"] = [1.5,2,2.5,3] # run2 run_def["rescale_list"] = [1.5] # run3 # run4 = SD # Define model run_def["ModelType"] = ["CP_nuclei","CP_cyto","SD_2D_dsb2018","UNet_CP001"] # run1,2 run_def["ModelType"] = ["CP_nuclei"] # run3 run_def["ModelType"] = ["SD_2D_dsb2018"] # run4 ############################################################ # Define postprocessing & filtering # # keep only the objects within the defined ranges # ########################################################### # (same for all runs) run_def["filter_para"] = {} run_def["filter_para"]["area"] = [0,10000] run_def["filter_para"]["perimeter"] = [0,99999999] run_def["filter_para"]["circularity"] = [0,1] # (equivalent_diameter * math.pi) / perimeter run_def["filter_para"]["mean_intensity"] = [0,65535] run_def["filter_para"]["sum_intensity"] = [0,100000000000000] run_def["filter_para"]["eccentricity"] = [0,10] ############################################################################ # settings that are only needed if condition below is met which means you # # plan to use a mask from a second channel to filter results # ############################################################################ if pc["create_filter_mask_from_channel"]: run_def["binary_filter_mp"] = [["open",5,2,1,"Morphology"],["close",5,3,1,"Morphology"],["erode",5,1,1,"Morphology"]] run_def["para_mp"] = [["Mean",5],[0.6],run_def["binary_filter_mp"]] # - # ## How to reproduce these results? # # The notebook is set up to reproduce the results of the last run (Run 4). # # To execute previous runs, please delete or comment out the settings used for Run 4. # # Settings are saved in a .pkl file. # # The next cell prints the filepath & name of this file. 
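# The range filter defined in run_def["filter_para"] above keeps an object only if every measured property falls inside its [lo, hi] interval. A minimal sketch of that logic (the object dicts are hypothetical stand-ins for regionprops measurements, not OpSeF's actual implementation):

```python
filter_para = {"area": [0, 10000], "circularity": [0, 1]}

objects = [
    {"area": 250,   "circularity": 0.80},   # inside every range -> kept
    {"area": 25000, "circularity": 0.90},   # area out of range  -> dropped
]

# keep an object only if all of its properties pass their range check
kept = [obj for obj in objects
        if all(lo <= obj[key] <= hi for key, (lo, hi) in filter_para.items())]
print(len(kept))  # -> 1
```

# This is why the very permissive defaults above (e.g. perimeter up to 99999999) effectively disable filtering on those properties.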
# # OpSef_Run_XXX loads the settings specified above and processes all images. # The only change you have to make within OpSef_Run_XXX is specifying the location # of this setting file. # # ## Save Parameter # + # auto-create parameter set from input above run_def["run_now_list"] = [model_dic[x] for x in run_def["ModelType"]] parameter = [pc,input_def,run_def,initModelSettings] # save it file_name = "./my_runs/Parameter_{}_Run_{}.pkl".format(input_def["dataset"],run_def["run_ID"]) file_name_load = "./Demo_Notebooks/my_runs/Parameter_{}_Run_{}.pkl".format(input_def["dataset"],run_def["run_ID"]) print("Please execute this file with OPsef_Run_XXX",file_name_load) outfile = open(file_name,'wb') pickle.dump(parameter,outfile) outfile.close() # - # ## Documentation # + ########################## ## Folderstructure #### input_def["root"] = "/home/trasse/OpSefDemo/leaves" # defines the main folder # Put files in these subfolders # .lif # root/myimage_container.lif # root/tiff/myimage1.tif (in case this folder is the direct input to the pre-processing pipeline) # /myimage2.tif ... # or # root/tiff_raw_2D/myimage1.tif (if you want to make patches in 2D) # root/tiff_to_split/myimage1.tif (if you want ONLY to create substacks, but not bin or patch before) # root/tiff_raw/myimage1.tif (for all pipelines that start with patching or binning and use stacks) ###################################### ### What is a display base image ???? ###################################### run_def["display_base"] ''' display_base is ideally set to "same": in this case the visualization of the segmentation borders will be drawn on top of the input image to the segmentation. If this behavior is not desired, a three-digit number that refers to a position in run_def["pre_list"] has to be entered. 
example: run_def["pre_list"] = [["Median",3,8,"Sum",True,clahe_prm],["Mean",5,3,"Max",True,clahe_prm]] & the image resulting from: ["Mean",5,3,"Max",True,clahe_prm] shall be used as basis for display: then set: run_def["display_base"] = "001" ''' ########################## # Parameters for Cellpose: ########################## # Define: # initModelSettings["Cell_Channels"] # to run segmentation on grayscale=0, R=1, G=2, B=3 # initModelSettings["Cell_Channels"] = [cytoplasm, nucleus] # if NUCLEUS channel does not exist, set the second channel to 0 initModelSettings["Cell_Channels"] = [[0,0],[0,0]] # IF ALL YOUR IMAGES ARE THE SAME TYPE, you can give a list with 2 elements initModelSettings["Cell_Channels"] = [0,0] # IF YOU HAVE GRAYSCALE initModelSettings["Cell_Channels"] = [2,3] # IF YOU HAVE G=cytoplasm and B=nucleus initModelSettings["Cell_Channels"] = [2,1] # IF YOU HAVE G=cytoplasm and R=nucleus # if rescale is set to None, the size of the cells is estimated on a per-image basis # if you want to set the size yourself, set it to 30. / average_cell_diameter # - # Preprocessing is mainly based on scikit-image. It consists of a linear pipeline: # # <img src="./Figures_Demo/Fig_M3.jpg" alt = "Paper Fig M3" style = "width: 800px;"/> # + ##################################### ## Variables for Preprocessing ##################################### ## The list run_def["pre_list"] # is organized as such: # (1) Filter type # (2) Kernel # (3) subtract fixed value # (4) projection type # (5) CLAHE enhance (Yes/No) as defined above # (6) CLAHE parameters # (7) enhance edges (and how) (no, roberts, sobel) # (8) invert image # It is a list of lists, each entry defines one pre-processing pipeline: # e.g. 
# Run1 run_def["pre_list"] = [["Median",3,50,"Max",False,run_def["clahe_prm"],"no",False], ["Median",3,50,"Max",True,run_def["clahe_prm"],"no",False], ["Median",3,50,"Max",False,run_def["clahe_prm"],"sobel",False], ["Median",3,50,"Max",False,run_def["clahe_prm"],"no",True]] # - # ### Link analysis (settings for t-SNE) # # from https://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html # # linkage{“ward”, “complete”, “average”, “single”} # # Which linkage criterion to use: # # The linkage criterion determines which distance to use between sets of observations. The algorithm will merge the pairs of clusters that minimize this criterion. # # ward minimizes the variance of the clusters being merged. average uses the average of the distances of each observation of the two sets. complete or maximum linkage uses the maximum distance between all observations of the two sets. # # single uses the minimum of the distances between all observations of the two sets. #
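# The four criteria differ only in which inter-set distance they compare; for two tiny 1-D clusters this can be checked by hand with plain NumPy, independent of scikit-learn (ward has no such simple pairwise formula, as it works on cluster variances instead):

```python
import numpy as np

a = np.array([0.0, 1.0])              # cluster A
b = np.array([3.0, 5.0])              # cluster B

# all pairwise distances between the two sets: [[3, 5], [2, 4]]
d = np.abs(a[:, None] - b[None, :])

print(d.min(), d.max(), d.mean())     # single=2.0, complete=5.0, average=3.5
```

# single linkage therefore chains clusters through their closest members, while complete linkage keeps merged clusters compact.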
Demo_Notebooks/.ipynb_checkpoints/OpSeF_Setup_IV_0001-SDB_EpiCells-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # <a href="https://colab.research.google.com/github/arthurflor23/handwritten-text-recognition/blob/master/src/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="gP-v0E_S-mQP" # <img src="https://github.com/arthurflor23/handwritten-text-recognition/blob/master/doc/image/header.png?raw=true" /> # # # Handwritten Text Recognition using TensorFlow 2.0 # # This tutorial shows how you can use the project [Handwritten Text Recognition](https://github.com/arthurflor23/handwritten-text-recognition) in your Google Colab. # # # + [markdown] colab_type="text" id="oMty1YwuWHpN" # ## 1 Localhost Environment # # We'll make sure you have the project in your Google Drive with the datasets in HDF5. If you already have structured files in the cloud, skip this step. # + [markdown] colab_type="text" id="39blvPTPQJpt" # ### 1.1 Datasets # # The datasets that you can use: # # a. [Bentham](http://transcriptorium.eu/datasets/bentham-collection/) # # b. [IAM](http://www.fki.inf.unibe.ch/databases/iam-handwriting-database) # # c. [Rimes](http://www.a2ialab.com/doku.php?id=rimes_database:start) # # d. [Saint Gall](http://www.fki.inf.unibe.ch/databases/iam-historical-document-database/saint-gall-database) # # e. [Washington](http://www.fki.inf.unibe.ch/databases/iam-historical-document-database/washington-database) # + [markdown] colab_type="text" id="QVBGMLifWQwl" # ### 1.2 Raw folder # # On localhost, download the code project from GitHub and extract the chosen dataset (or all if you prefer) in the **raw** folder. 
Don't change the structure of the datasets, since the scripts were written for their **original structure**. Your project directory will look like this: # # ``` # . # ├── raw # │   ├── bentham # │   │   ├── BenthamDatasetR0-GT # │   │   └── BenthamDatasetR0-Images # │   ├── iam # │   │   ├── ascii # │   │   ├── forms # │   │   ├── largeWriterIndependentTextLineRecognitionTask # │   │   ├── lines # │   │   └── xml # │   ├── rimes # │   │   ├── eval_2011 # │   │   ├── eval_2011_annotated.xml # │   │   ├── training_2011 # │   │   └── training_2011.xml # │   ├── saintgall # │   │ ├── data # │   │ ├── ground_truth # │   │ ├── README.txt # │   │ └── sets # │   └── washington # │   ├── data # │   ├── ground_truth # │   ├── README.txt # │   └── sets # └── src # ├── data # │   ├── evaluation.py # │   ├── generator.py # │   ├── preproc.py # │   ├── reader.py # │   ├── similar_error_analysis.py # ├── main.py # ├── network # │   ├── architecture.py # │   ├── layers.py # │   ├── model.py # └── tutorial.ipynb # # ``` # # After that, create a virtual environment and install the dependencies with Python 3 and pip: # # > ```python -m venv .venv && source .venv/bin/activate``` # # > ```pip install -r requirements.txt``` # + [markdown] colab_type="text" id="WyLRbAwsWSYA" # ### 1.3 HDF5 files # # Now, you'll run the *transform* function from **main.py**. For this, execute in the **src** folder: # # > ```python main.py --source=<DATASET_NAME> --transform``` # # Your data will be preprocessed and encoded, and saved in the **data** folder. Now your project directory will look like this: # # # ``` # . 
# ├── data # │   ├── bentham.hdf5 # │   ├── iam.hdf5 # │   ├── rimes.hdf5 # │   ├── saintgall.hdf5 # │   └── washington.hdf5 # ├── raw # │   ├── bentham # │   │   ├── BenthamDatasetR0-GT # │   │   └── BenthamDatasetR0-Images # │   ├── iam # │   │   ├── ascii # │   │   ├── forms # │   │   ├── largeWriterIndependentTextLineRecognitionTask # │   │   ├── lines # │   │   └── xml # │   ├── rimes # │   │   ├── eval_2011 # │   │   ├── eval_2011_annotated.xml # │   │   ├── training_2011 # │   │   └── training_2011.xml # │   ├── saintgall # │   │ ├── data # │   │ ├── ground_truth # │   │ ├── README.txt # │   │ └── sets # │   └── washington # │   ├── data # │   ├── ground_truth # │   ├── README.txt # │   └── sets # └── src # ├── data # │   ├── evaluation.py # │   ├── generator.py # │   ├── preproc.py # │   ├── reader.py # │   ├── similar_error_analysis.py # ├── main.py # ├── network # │   ├── architecture.py # │   ├── layers.py # │   ├── model.py # └── tutorial.ipynb # # ``` # # Then upload the **data** and **src** folders in the same directory in your Google Drive. # + [markdown] colab_type="text" id="jydsAcWgWVth" # ## 2 Google Drive Environment # # + [markdown] colab_type="text" id="wk3e7YJiXzSl" # ### 2.1 TensorFlow 2.0 # + [markdown] colab_type="text" id="Z7twXyNGXtbJ" # Make sure the jupyter notebook is using GPU mode. # + colab_type="code" id="mHw4tODULT1Z" colab={} # !nvidia-smi # + [markdown] colab_type="text" id="UJECz8H8XVCY" # Now, we'll install TensorFlow 2.0 with GPU support. # + colab_type="code" id="FMg-B5PH9h3r" colab={} # !pip install -q tensorflow-gpu==2.1.0-rc2 # + colab_type="code" id="w5ukHtpZiz0g" colab={} import tensorflow as tf device_name = tf.test.gpu_device_name() if device_name != "/device:GPU:0": raise SystemError("GPU device not found") print(f"Found GPU at: {device_name}") # + [markdown] colab_type="text" id="FyMv5wyDXxqc" # ### 2.2 Google Drive # + [markdown] colab_type="text" id="P5gj6qwoX9W3" # Mount your Google Drive partition. 
# # **Note:** *\"Colab Notebooks/handwritten-text-recognition/src/\"* was the directory where you put the project folders, specifically the **src** folder. # + colab_type="code" id="ACQn1iBF9k9O" colab={} from google.colab import drive drive.mount("./gdrive", force_remount=True) # %cd "./gdrive/My Drive/Colab Notebooks/handwritten-text-recognition/src/" # !ls -l # + [markdown] colab_type="text" id="YwogUA8RZAyp" # After mount, you can see the list os files in the project folder. # + [markdown] colab_type="text" id="-fj7fSngY1IX" # ## 3 Set Python Classes # + [markdown] colab_type="text" id="p6Q4cOlWhNl3" # ### 3.1 Environment # + [markdown] colab_type="text" id="wvqL2Eq5ZUc7" # First, let's define our environment variables. # # Set the main configuration parameters, like input size, batch size, number of epochs and list of characters. This make compatible with **main.py** and jupyter notebook: # # * **dataset**: "bentham", "iam", "rimes", "saintgall", "washington" # # * **arch**: network to run: "bluche", "puigcerver", "flor" # # * **epochs**: number of epochs # # * **batch_size**: number size of the batch # + colab_type="code" id="_Qpr3drnGMWS" colab={} import os import datetime import string # define parameters source = "bentham" arch = "flor" epochs = 1000 batch_size = 16 # define paths source_path = os.path.join("..", "data", f"{source}.hdf5") output_path = os.path.join("..", "output", source, arch) target_path = os.path.join(output_path, "checkpoint_weights.hdf5") os.makedirs(output_path, exist_ok=True) # define input size, number max of chars per line and list of valid chars input_size = (1024, 128, 1) max_text_length = 128 charset_base = string.printable[:95] print("source:", source_path) print("output", output_path) print("target", target_path) print("charset:", charset_base) # + [markdown] colab_type="text" id="BFextshOhTKr" # ### 3.2 DataGenerator Class # + [markdown] colab_type="text" id="KfZ1mfvsanu1" # The second class is **DataGenerator()**, 
responsible for: # # * Load the dataset partitions (train, valid, test); # # * Manager batchs for train/validation/test process. # + colab_type="code" id="8k9vpNzMIAi2" colab={} from data.generator import DataGenerator dtgen = DataGenerator(source=source_path, batch_size=batch_size, charset=charset_base, max_text_length=max_text_length) print(f"Train images: {dtgen.size['train']}") print(f"Validation images: {dtgen.size['valid']}") print(f"Test images: {dtgen.size['test']}") # + [markdown] colab_type="text" id="-OdgNLK0hYAA" # ### 3.3 HTRModel Class # + [markdown] colab_type="text" id="jHktk8AFcnKy" # The third class is **HTRModel()**, was developed to be easy to use and to abstract the complicated flow of a HTR system. It's responsible for: # # * Create model with Handwritten Text Recognition flow, in which calculate the loss function by CTC and decode output to calculate the HTR metrics (CER, WER and SER); # # * Save and load model; # # * Load weights in the models (train/infer); # # * Make Train/Predict process using *generator*. # # To make a dynamic HTRModel, its parameters are the *architecture*, *input_size* and *vocab_size*. # + colab_type="code" id="nV0GreStISTR" colab={} from network.model import HTRModel # create and compile HTRModel # note: `learning_rate=None` will get architecture default value model = HTRModel(architecture=arch, input_size=input_size, vocab_size=dtgen.tokenizer.vocab_size) model.compile(learning_rate=0.001) # save network summary model.summary(output_path, "summary.txt") # get default callbacks and load checkpoint weights file (HDF5) if exists model.load_checkpoint(target=target_path) callbacks = model.get_callbacks(logdir=output_path, checkpoint=target_path, verbose=1) # + [markdown] colab_type="text" id="KASq6zqogG6Q" # ## 4 Tensorboard # + [markdown] colab_type="text" id="T8eBxuoogM-d" # To facilitate the visualization of the model's training, you can instantiate the Tensorboard. 
# # **Note**: All data is saved in the output folder # + colab_type="code" id="bPx4hRHuJGAd" colab={} # %load_ext tensorboard # %tensorboard --reload_interval=300 --logdir={output_path} # + [markdown] colab_type="text" id="T1fnz0Eugqru" # ## 5 Training # + [markdown] colab_type="text" id="w1mLOcqYgsO-" # The training process is similar to the *fit()* of the Keras. After training, the information (epochs and minimum loss) is save. # + colab_type="code" id="2P6MSoxCISlD" colab={} # to calculate total and average time per epoch start_time = datetime.datetime.now() h = model.fit(x=dtgen.next_train_batch(), epochs=epochs, steps_per_epoch=dtgen.steps['train'], validation_data=dtgen.next_valid_batch(), validation_steps=dtgen.steps['valid'], callbacks=callbacks, shuffle=True, verbose=1) total_time = datetime.datetime.now() - start_time loss = h.history['loss'] val_loss = h.history['val_loss'] min_val_loss = min(val_loss) min_val_loss_i = val_loss.index(min_val_loss) time_epoch = (total_time / len(loss)) total_item = (dtgen.size['train'] + dtgen.size['valid']) t_corpus = "\n".join([ f"Total train images: {dtgen.size['train']}", f"Total validation images: {dtgen.size['valid']}", f"Batch: {dtgen.batch_size}\n", f"Total time: {total_time}", f"Time per epoch: {time_epoch}", f"Time per item: {time_epoch / total_item}\n", f"Total epochs: {len(loss)}", f"Best epoch {min_val_loss_i + 1}\n", f"Training loss: {loss[min_val_loss_i]:.8f}", f"Validation loss: {min_val_loss:.8f}" ]) with open(os.path.join(output_path, "train.txt"), "w") as lg: lg.write(t_corpus) print(t_corpus) # + [markdown] colab_type="text" id="13g7tDjWgtXV" # ## 6 Predict # + [markdown] colab_type="text" id="ddO26OT-g_QK" # The predict process is similar to the *predict* of the Keras: # + colab_type="code" id="a9iHL6tmaL_j" colab={} from data import preproc as pp from google.colab.patches import cv2_imshow start_time = datetime.datetime.now() # predict() function will return the predicts with the probabilities 
predicts, _ = model.predict(x=dtgen.next_test_batch(), steps=dtgen.steps['test'], ctc_decode=True, verbose=1) # decode to string predicts = [dtgen.tokenizer.decode(x[0]) for x in predicts] total_time = datetime.datetime.now() - start_time # mount predict corpus file with open(os.path.join(output_path, "predict.txt"), "w") as lg: for pd, gt in zip(predicts, dtgen.dataset['test']['gt']): lg.write(f"TE_L {gt}\nTE_P {pd}\n") for i, item in enumerate(dtgen.dataset['test']['dt'][:10]): print("=" * 1024, "\n") cv2_imshow(pp.adjust_to_see(item)) print(dtgen.dataset['test']['gt'][i]) print(predicts[i], "\n") # + [markdown] colab_type="text" id="9JcAs3Q3WNJ-" # ## 7 Evaluate # + [markdown] colab_type="text" id="8LuZBRepWbom" # Evaluation process is more manual process. Here we have the `ocr_metrics`, but feel free to implement other metrics instead. In the function, we have three parameters: # # * predicts # * ground_truth # * norm_accentuation (calculation with/without accentuation) # * norm_punctuation (calculation with/without punctuation marks) # + colab_type="code" id="0gCwEYdKWOPK" colab={} from data import evaluation evaluate = evaluation.ocr_metrics(predicts=predicts, ground_truth=dtgen.dataset['test']['gt'], norm_accentuation=False, norm_punctuation=False) e_corpus = "\n".join([ f"Total test images: {dtgen.size['test']}", f"Total time: {total_time}", f"Time per item: {total_time / dtgen.size['test']}\n", f"Metrics:", f"Character Error Rate: {evaluate[0]:.8f}", f"Word Error Rate: {evaluate[1]:.8f}", f"Sequence Error Rate: {evaluate[2]:.8f}" ]) with open(os.path.join(output_path, "evaluate.txt"), "w") as lg: lg.write(e_corpus) print(e_corpus)
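For reference, the Character Error Rate reported by `ocr_metrics` is an edit-distance ratio: the number of character edits needed to turn a prediction into its ground truth, divided by the total number of reference characters. Below is a minimal self-contained sketch of that idea — my own illustration, not the repository's `data/evaluation.py` implementation, which is the authoritative version:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                    # deletion
                           cur[j - 1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))      # substitution
        prev = cur
    return prev[-1]


def cer(predicts, ground_truth):
    # character error rate over aligned lists of predictions and references
    edits = sum(levenshtein(p, g) for p, g in zip(predicts, ground_truth))
    chars = sum(len(g) for g in ground_truth)
    return edits / chars


# one substitution over 11 reference characters
print(cer(["hallo world"], ["hello world"]))  # 0.09090909090909091
```

WER and SER follow the same pattern, computed over word tokens and whole lines respectively.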
src/tutorial.ipynb
// ---
// jupyter:
//   jupytext:
//     text_representation:
//       extension: .scala
//       format_name: light
//       format_version: '1.5'
//       jupytext_version: 1.14.4
//   kernelspec:
//     display_name: spylon-kernel
//     language: scala
//     name: spylon-kernel
// ---

spark

// + language="python"
// from pyspark.ml.linalg import Vectors
// import numpy as np

// + language="python"
// # Load MNIST
// import tensorflow as tf
// mnist = tf.keras.datasets.mnist
//
// (x_train, y_train), (x_test, y_test) = mnist.load_data()
// x_train, x_test = x_train / 255.0, x_test / 255.0
// #print(x_train.shape, x_test.shape)
// x_train = x_train.flatten().reshape(60000, 28*28)
// x_test = x_test.flatten().reshape(10000, 28*28)
// y_train = y_train.flatten().reshape(60000, 1)
// y_test = y_test.flatten().reshape(10000, 1)

// + language="python"
// x_train.shape

// + language="python"
// y_x_tr = np.hstack([y_train, x_train])
// dff = map(lambda y_x: (
//     int(y_x[0]), Vectors.dense(y_x[1:])
//     ), y_x_tr
// )
//
// mnistdf = spark.createDataFrame(dff, schema=["label", "features"]).cache()
// #mnistdf = spark.createDataFrame(, schema=["label", "features"])

// + language="python"
// mnistdf.take(1)

// + language="python"
// print(x_train.shape)
// print(y_train.shape)
// # np.concatenate takes a sequence of arrays (equivalent to np.hstack above)
// x_y_tr = np.concatenate([x_train, y_train], axis=1)
// -

// + language="python"
// from sklearn.linear_model import LogisticRegression
// clf = LogisticRegression()
// # ravel() flattens the (60000, 1) label column to the 1-D shape sklearn expects
// clf.fit(x_train, y_train.ravel())

// + language="python"
// from sklearn import metrics
// from sklearn.preprocessing import LabelBinarizer
// enc = LabelBinarizer()
// enc.fit(y_test)
// pred = clf.predict_proba(x_test)
// pred_cond = enc.inverse_transform(pred)
//
// #y_te_exp = enc.transform(y_test)
//
// print(pred[:1])
// print(metrics.accuracy_score(y_test, pred_cond))
// -
examples/Untitled.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import numpy as np
import pandas as pd

pd.set_option('display.max_columns', 50)
# -

# ## National Statistics Postcode Lookup

postcodes = pd.read_csv('input/National_Statistics_Postcode_Lookup_UK.csv.gz')
postcodes.head()

# ### Pick One Postcode
#
# The three postcode fields differ in their spacing. It looks like `Postcode 3` matches the My EU definition of a 'clean' postcode.

postcodes['postcode'] = postcodes['Postcode 1'].\
    str.upper().\
    str.strip().\
    str.replace(r'[^A-Z0-9]', '').\
    str.replace(r'^(\S+)([0-9][A-Z]{2})$', r'\1 \2')
assert not np.any(postcodes['postcode'] != postcodes['Postcode 3'])

# ### Save Useful Fields

output_postcodes = postcodes[[
    'Postcode 3',
    'Parliamentary Constituency Code',
    'Parliamentary Constituency Name',
    'Latitude',
    'Longitude'
]].rename(columns={
    'Postcode 3': 'postcode',
    'Parliamentary Constituency Code': 'parliamentary_constituency_code',
    'Parliamentary Constituency Name': 'parliamentary_constituency_name',
    'Latitude': 'latitude',
    'Longitude': 'longitude'
})
output_postcodes.head()

output_postcodes.sort_values('postcode', inplace=True)

assert output_postcodes.shape[0] == output_postcodes.postcode.unique().shape[0]

output_postcodes.count()

# ## Postcode to NUTS

postcode_to_nuts = pd.read_csv('input/pc2018_uk_NUTS-2016_v3.0.csv.gz', delimiter=';', quotechar="'")
postcode_to_nuts.head()

postcode_to_nuts.count()

# The spacing rules for the postcodes are not quite the same in this dataset, so apply the same normalisation to the `CODE` column before joining.
postcode_to_nuts['postcode'] = postcode_to_nuts.CODE.\
    str.upper().\
    str.strip().\
    str.replace(r'[^A-Z0-9]', '').\
    str.replace(r'^(\S+)([0-9][A-Z]{2})$', r'\1 \2')
np.sum(postcode_to_nuts.postcode != postcode_to_nuts.CODE)

postcode_to_nuts.head()

output_postcodes_with_nuts = pd.merge(
    output_postcodes,
    postcode_to_nuts[['postcode', 'NUTS3']].rename(columns={'NUTS3': 'nuts3'}),
    'left'
)
output_postcodes_with_nuts.head()

output_postcodes_with_nuts.count()

# ## Save Output

# save the merged frame, so the nuts3 column computed above is not lost
output_postcodes_with_nuts.to_pickle('output/postcode_lookup.pkl.gz')
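The same normalisation chain can be written as a plain `re` function for single values — a sketch mirroring the pandas `str` chain above, handy for spot-checking individual postcodes:

```python
import re

def clean_postcode(raw):
    # mirror the pandas chain: uppercase, strip, drop non-alphanumerics,
    # then re-insert a single space before the final "digit + two letters"
    s = re.sub(r'[^A-Z0-9]', '', raw.upper().strip())
    return re.sub(r'^(\S+)([0-9][A-Z]{2})$', r'\1 \2', s)

print(clean_postcode(' bt1  1aa '))  # 'BT1 1AA'
print(clean_postcode('SW1A1AA'))     # 'SW1A 1AA'
```

Because the greedy first group backtracks to leave exactly one digit and two letters for the inward code, any original spacing is irrelevant after normalisation.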
data/postcodes/clean_postcode_lookup.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/juangross/cursoAM2021/blob/main/PDI_TP2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + [markdown] id="GqJjgWXpl2Qt"
# TP2)
# The goal of this activity is to manipulate the luminance and the saturation of an image independently. To do that, we convert each pixel of the image from RGB space to YIQ space, then alter the values of Y (to change the luminance) and/or of IQ (to change the saturation). With the new YIQ values, we convert back to RGB and obtain a new image.
#
# **Notes**: for each case, show the original image next to the result of the modification, for a range of test values of Y and IQ.
#
# The second part of TP2 consists of taking some 2D datasets (they can be altitude maps, temperature maps, etc.) and "visualizing" them with different palettes, including grayscale and rainbow.
#
# **Notes**: the goal is to see how different color palettes reveal certain details of an image, without modifying its content.
# I suggest not using images larger than 1000 x 1000 pixels.

# + [markdown] id="KCXTR_KZPV-M"
# **Part 1**

# + colab={"base_uri": "https://localhost:8080/"} id="gq3twaoolvXF" outputId="258eb7f2-4912-40c4-c98a-c80a99f83f2b"
# !git clone https://github.com/juangross/cAM

# + [markdown] id="laBKoZXAl1iQ"
# Load a test image

# + colab={"base_uri": "https://localhost:8080/"} id="uuXeToA4mYqY" outputId="c122456c-6e56-462d-b313-df8a6ddc7aa0"
import matplotlib.pyplot as plt
#import matplotlib.image as mpimg
import imageio as img
import numpy as np

# Import an image from directory:
path = "./cAM/imagenes/"
archi = "patron_RGBCMYWK"
archo = "output"
archo2 = "output2"
ext = "png"  # current format

print("reading file:", f"{path}{archi}.{ext}")
#input_image = mpimg.imread(f"{archi}.{ext}")  # read with matplotlib
input_image = img.imread(f"{path}{archi}.{ext}")  # read with imageio

#print("input image")
#plt.subplot(1,2,1)
#plt.imshow(input_image)

# + colab={"base_uri": "https://localhost:8080/"} id="atzv7NUZajcv" outputId="7d7af342-67dc-4c67-a274-9b7291c9c9b3"
input_tam = input_image.shape
print("Dimensions (X, Y, channels):", input_tam)
print("data type:", input_image.dtype)
print("Raw data:")
#input_image[:]

# + colab={"base_uri": "https://localhost:8080/", "height": 104} id="pEx0lOwXaMt9" outputId="caedb66e-2758-45c0-eb94-4a3695b789a0"
# create a new, empty array using numpy
input_image_norm = np.zeros(input_tam, dtype=float)

# normalize the values of each RGB channel: <pixel color value> / 255
input_image_norm = 1/255 * input_image
#print(input_image_norm)
#print(input_image_norm.shape)

# transform to YIQ
# the matrix-product approach for the change of basis is explained here:
# https://stackoverflow.com/questions/46990838/numpy-transforming-rgb-image-to-yiq-color-space
YIQ_image = np.zeros(input_tam, dtype=float)
YIQ_image_mod = np.zeros(input_tam, dtype=float)
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.595716, -0.274453, -0.321263],
                    [0.211456, -0.522591, 0.311135]], dtype=float)
YIQ_image = np.dot(input_image_norm, RGB2YIQ.T.copy())
#print("YIQ image")
#print(YIQ_image)

# Generate the parameters used to alter YIQ, as a matrix with one row per coordinate
#[[Y],
# [I],
# [Q]]
#K_YIQ=[-1,-1,1]  # [Y,I,Q] coefficients to alter the channels
#K_YIQ=np.array([[-1,-.75,-.5,-.25,0.,0.25,0.5,0.75,1,1.25,1.5,1.75,2.0],
#                [-1,-.75,-.5,-.25,0.,0.25,0.5,0.75,1,1.25,1.5,1.75,2.0],
#                [-1,-.75,-.5,-.25,0.,0.25,0.5,0.75,1,1.25,1.5,1.75,2.0]])
K_YIQ = np.array([[-1, -.5, 0., 0.5, 1, 1.5, 2.0],
                  [-1, -.5, 0., 0.5, 1, 1.5, 2.0],
                  [-1, -.5, 0., 0.5, 1, 1.5, 2.0]])

# copy the image (use .copy(): a plain assignment would only alias the array,
# so altering YIQ_image_mod would also alter YIQ_image)
YIQ_image_mod = YIQ_image.copy()
#YIQ_image_mod[:,:,0] = YIQ_image[:,:,0]*K_YIQ[0]
#YIQ_image_mod[:,:,1] = YIQ_image[:,:,1]*K_YIQ[1]
#YIQ_image_mod[:,:,2] = YIQ_image[:,:,2]*K_YIQ[2]
i = 2
YIQ_image_mod[:,:,0] = YIQ_image[:,:,0]*K_YIQ[0,i]
#YIQ_image_mod[:,:,1] = YIQ_image[:,:,1]*K_YIQ[1,i]
#YIQ_image_mod[:,:,2] = YIQ_image[:,:,2]*K_YIQ[2,i]
#print("altered YIQ image:")
#print(YIQ_image_mod)

# YIQ -> normalized RGB (convert the *modified* image back)
YIQ2RGB = np.array([[1, 0.9663, 0.6210],
                    [1, -0.2721, -0.6474],
                    [1, -1.1070, 1.7046]], dtype=float)
output_image_norm_RGB = np.dot(YIQ_image_mod, YIQ2RGB.T.copy())
#print("altered RGB image")
#output_image_norm_RGB

# denormalize RGB
output_image = 255 * output_image_norm_RGB
output_image = np.rint(output_image)          # round to integer values
output_image = np.clip(output_image, 0, 255)  # clip values outside [0, 255]
#output_image
#print(output_image)

# show images
fig, axes = plt.subplots(1, 2)

# input image
plt.subplot(1, 2, 1)
plt.imshow(input_image)

# output image
plt.subplot(1, 2, 2)
plt.imshow(output_image.astype('uint8'))
PDI_TP2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # IBM Streams Kafka sample application # # This sample demonstrates how to create a Streams Python application that connects to a Kafka cluster by using a consumer group and uses partitioned parallel processing of fetched messages. # # A Kafka cluster is typically setup and configured by an administrator, but it is also possible that you setup a single Kafka broker on a virtual or physical machine by yourself following the instructions on https://kafka.apache.org/quickstart. In this case you know the details, how to connect, and what topics can be used. Otherwise the administrator of the Kafka cluster must provide the required information. # # In this notebook, you'll see examples of how to: # 1. [Setup your data connections](#setup) # 1. [Create the consumer application](#create_1) # 1. [Create a simple producer application](#create_2) # 1. [Submit the applications](#launch) # 1. [Connect to the running consumer application to view data](#view) # 1. [Stop the applications](#cancel) # # # Overview # # **About the sample** # # The main goal of the sample is to show how to connect to a Kafka broker and how to create a Kafka consumer group with downstream parallel processing of the fetched messages. A consumer group is mostly used to consume partitioned topics with multiple consumers sharing the partitions. The messages are typically distributed to the partitions by using a *key*. When keyed messages are processed in parallel after reception by a consumer group it is desired to stick the message keys to one channel only. To achieve this, we route the tuples by a hash of the key to the parallel channel. 
# # Consuming a single-partition topic with more than one consumer has no advantage as failed consumers are restarted by Streams nearly as quickly as a failover of the single partition to another consumer would take. # # For completion of this sample there is also a data generator, which publishes data to the topic. # # **How it works** # # The Python application created in this notebook is submitted to the IBM Streams service for execution. Once the application is running in the service, you can connect to it from the notebook to retrieve the results. # # <img src="https://developer.ibm.com/streamsdev/wp-content/uploads/sites/15/2019/04/how-it-works.jpg" alt="How it works"> # # # ### Documentation # # - [Kafka consumer groups](https://kafka.apache.org/documentation/#intro_consumers) # - [Streams Python development guide](https://ibmstreams.github.io/streamsx.documentation/docs/latest/python/) # - [Streams Python API](https://streamsxtopology.readthedocs.io/) # - [streamsx.kafka Python package](https://streamsxkafka.readthedocs.io/) # # # # <a name="setup"></a> # # 1. Setup # ### 1.1 Add credentials for the IBM Streams service # # In order to submit a Streams application you need to provide the name of the Streams instance. # # 1. From the navigation menu, click **My instances**. # 2. Click the **Provisioned Instances** tab. # 3. Update the value of `streams_instance_name` in the cell below according to your Streams instance name. from icpd_core import icpd_util streams_instance_name = "my-instance" ## change this to Streams instance cfg=icpd_util.get_service_instance_details(name=streams_instance_name) # ### 1.2 Optional: Upgrade the `streamsx.kafka` Python package # # Uncomment and run the cell below to upgrade to the latest version of the `streamsx.kafka` package. 
# # + # #!pip install --user --upgrade streamsx.kafka # - # The python packages will be installed in the top of user path.<br/> # If you have problem to get the latest version of python packages you can set the order of python packages manually to user path.<br/> # you can find the user path with this command:<br/> # ` # import sys # for e in sys.path: # print(e) # ` # + #import sys #sys.path.insert(0, '/home/wsuser/.local/lib/python3.6/site-packages') # - import os import streamsx.kafka as kafka import streamsx.topology.context as context print ("INFO: streamsx package version: " + context.__version__) print ("INFO: streamsx.kafka package version: " + kafka.__version__) # ### 1.3 Configure the connection to the Kafka cluster # # Kafka consumers and producers are configured by properties. They are described in the Kafka documentation in separate sections for [producer configs](https://kafka.apache.org/22/documentation.html#producerconfigs) and [consumer configs](https://kafka.apache.org/22/documentation.html#consumerconfigs). In Python, you will be using a `dict` variable for the properties. # # The operators of the underlying SPL toolkit set defaults for some properties. You can review these operator provided defaults in the [toolkit documentation](https://ibmstreams.github.io/streamsx.kafka/doc/spldoc/html/) under **Operators**. # # The most important setting is the `bootstrap.servers` configuration, which is required for both consumers and producers. This config has the form # ``` # host1:port1,host2:port2,.... # ``` # Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). # # The required properties for consumers and producers depend on the configuration of the Kafka cluster. # - Connection over TLS, yes or no? 
# - If TLS is in place, is the server certificate trusted, e.g. signed by a public root CA? If not, you will need the server certificate to configure a truststore. # - Is client or user authentication used? # - If yes, what authentication mechanism is configured? # Dependent on the authentication mechanism you will need additional secrets, for example username and password, or client certificate and private key. # # For example, [AMQ Streams](https://access.redhat.com/products/red-hat-amq#streams), a Kafka cluster for the Openshift container platform, supports encryption with TLS, and authentication using TLS certificates or SCRAM-SHA-512. # # Before you begin, you must gather the required connection details. Once you have all information, it is quite comfortable to create the properties for consumers and producers. For the notebook, create a partitioned topic, for example with three partitions or ask the administrator to do this for you. # # # ### 1.3.1 Handling of certificates and keys in the notebook # # When you need certificates or keys, you must provide them in PEM format. The PEM format is a text format with base64 coded data enclosed in BEGIN and END anchors. You can add certificates or keys directly to the Python code, for example # # ca_server_cert = """ # -----BEGIN CERTIFICATE----- # ... # -----END CERTIFICATE----- # """ # # or you can upload certificate and key files as *Data Assets* to your project and use the file names of the local files. # # ca_server_cert = '/project_data/data_asset/<your dataset name>' # # In the Kafka cluster, create a partitioned topic, for example with three partitions. # # # <a id="create_1"></a> # # 2. Create the consumer application # # This application subscribes to a Kafka topic by using a consumer group (multiple consumers that share a group identifier). 
# # We assume that the messages we fetch from the topic, are JSON formatted with the content like # ``` # {"sensor_id": "sensor_4545", "value": 3567.87, "ts": 1559029421} # ``` # # All Streams applications start with a Topology object, so start by creating one: # + from streamsx.topology.topology import Topology from streamsx.topology.context import submit, ContextTypes from streamsx.topology.topology import Routing from streamsx.topology.schema import StreamSchema from streamsx.kafka.schema import Schema from streamsx.kafka import AuthMethod consumer_topology = Topology(name='KafkaParallelSample-Consumer') # - # ## 2.1 Create the consumer properties from your connection details # # Use the helper function [create_connection_properties(...)](https://streamsxkafka.readthedocs.io/en/latest/#streamsx.kafka.create_connection_properties) to create the properties. # # Dependent on the Kafka cluster configuration you may need # - A trusted server CA certificate # - Information about the authentication method. The function supports following authentication methods: # - No authentication # - SASL/PLAIN - you need a username and a password # - SASL/SCRAM-SHA-512 - you need a username and a password # - TLS - you need a client certificate and the private key of the certificate. # # You always need a topic name that can be accessed. Enter at least the bootstrap servers and the topic name into the below cell. 
# + topic = "my_topic" ## change this to an existing topic, it should have multiple partitions kafka_group_id = "group1" ## change the consumer group identifier if required bootstrap_servers = "host.domain:9092" ## change the bootstrap server(s) here # this template connects to an unsecured (no TLS) cluster without authentication connect_tls = False # set True when Kafka must be connected with TLS ca_server_cert = None # use PEM or filename if required, see section 1.3.1 auth = AuthMethod.NONE # chose one of NONE, TLS, PLAIN, SCRAM_SHA_512 client_cert = None # use PEM or filename if auth is TLS client_priv_key = None # use PEM or filename if auth is TLS username = None # required for PLAIN and SCRAM_SHA_512 password = None # required for PLAIN and <PASSWORD> consumer_configs = kafka.create_connection_properties( bootstrap_servers=bootstrap_servers, use_TLS=connect_tls, enable_hostname_verification=True, cluster_ca_cert=ca_server_cert, authentication=auth, client_cert=client_cert, client_private_key=client_priv_key, username=username, password=password, topology=consumer_topology) # print the consumer configs for reference. Note, that they can contain sensitive data print() for key, value in consumer_configs.items(): print(key + "=" + value) # - # <div class="alert alert-block alert-warning"> # <b>Warning:</b> # When a certificate or private key is used to create properties, the topology parameter must not be <tt>None</tt>. In this case, the function <tt>create_connection_properties</tt> creates a keystore and/or a truststore file, which are attached as file dependencies to the topology, whereas the filenames go into the created properties. # # These properties can therefore not be used within a different topology. # </div> # # The `group.id` config is not required here. The group identifier is specified on Python API level later. 
However, when you need other special [consumer configs](https://kafka.apache.org/22/documentation.html#consumerconfigs), you should add them to the `consumer_configs` dict variable here. # # ## 2.2 Create the consumer group # # From the Kafka broker we fetch keyed messages, where the message type is a string. In addition to it we want to fetch the message meta data, like partition number, message timestamp, and other. # # That's why we specify `Schema.StringMessageMeta` as the schema for the created Stream in the [kafka.subscribe](https://streamsxkafka.readthedocs.io/en/latest/index.html#streamsx.kafka.subscribe) function. # # This schema is a structured schema that defines following attributes: # # - message(str) - the message content # - key(str) - the key for partitioning # - topic(str) - the Kafka topic # - partition(int) - the topic partition number (32 bit) # - offset(int) - the offset of the message within the topic partition (64 bit) # - messageTimestamp(int) - the message timestamp in milliseconds since epoch (64 bit) # # Create the stream `received` by subscribing to the Kafka topic, parallelize the source with `set_parallel` and combine the parallel streams with `end_parallel`. The result is a stream created by a consumer group with three consumers. # consumerSchema = Schema.StringMessageMeta received = kafka.subscribe( consumer_topology, topic=topic, schema=consumerSchema, group=kafka_group_id, # when not specified it is the job name, concatenated with the topic kafka_properties=consumer_configs, name="SensorSubscribe" ).set_parallel(3).end_parallel() # ## 2.3 Parallelize message processing with schema transform # # Parallelize processing in four parallel channels, routing to the channels is hash based. # We need a consistent hash function that calculates a hash from a string. 
# # <div class="alert alert-block alert-warning"> # <b>Warning:</b> The Python <i>hash</i> function cannot be used as a <i>consistent</i> hash function as it adds random salt to the hash calculation. Its result is different after a process restarted. When a processing element in the Streams application is re-launched, the results of <i>hash</i> change, so that the routing to the parallel channel changes. # </div> # # + # calculate a hash code of a string in a consistent way # needed for partitioned parallel streams def string_hashcode(s): h = 0 for c in s: h = (31 * h + ord(c)) & 0xFFFFFFFF return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000 # start another parallel region partitioned by message key, # so that each key always goes into the same parallel channel receivedParallelPartitioned = received.parallel( 4, routing=Routing.HASH_PARTITIONED, func=lambda _tuple: string_hashcode(_tuple['key'])) # - # Define a new schema by extending `typing.NamedTuple` and a function that parses the JSON of the messages and maps a couple of attributes of `Schema.StringMessageMeta` to the new schema. 
# + import json import typing class SensorMessage(typing.NamedTuple): sensor_id: str reading: float ts: int # timestamp of the sensor measurement partition: int messageTimestamp: int # timestamp of the Kafka message # parses the JSON in the message and adds the attributes to a tuple def parse_json_to_tuple(tuple): # the tuple is passed in as dict, the output is a namedTuple messageAsDict = json.loads(tuple['message']) # as an example, in 'messageAsDict' we have # {"sensor_id": "sensor_4545", "value": 3567.87, "ts": 1559029421} # 5 required positional arguments: 'sensor_id', 'reading', 'ts', 'partition', and 'messageTimestamp' return SensorMessage( messageAsDict['sensor_id'], messageAsDict['value'], messageAsDict['ts'], tuple['partition'], tuple['messageTimestamp'] ) # map the parallelized stream to the new schema receivedParallelPartitionedParsed = receivedParallelPartitioned.map( func=parse_json_to_tuple, name='ParseMsgJson', schema=SensorMessage) # tuples are passed as the named tuple class SensorMessage # validate by removing negative and zero values from the streams, # pass only positive values and timestamps receivedValidated = receivedParallelPartitionedParsed.filter( # remember, the tuple _tuple is passed as a Python named tuple, not as a dict func=lambda _tuple: (_tuple.reading > 0) and (_tuple.ts > 0), name='Validate') # end parallel processing and (dummy) filter as it is not possible to # create a view on a combined stream directly parallelEnd = receivedValidated.end_parallel().filter(lambda x: x) # - # <a name="create_view"></a> # ## 2.4 Create a `View` to preview the tuples on the `Stream` # # A `View` is a connection to a `Stream` that becomes activated when the application is running. We examine the data from within the notebook in [section 5](#view), below. # streamView = parallelEnd.view(name="ValidatedSensorData", description="Validated Sensor data") # ## 2.5 Define output # # The `parallelEnd` stream is our final result.
We will use `Stream.publish()` to make this stream available to other Streams applications. # # If you want to send the stream to another database or system, you would use a sink function and invoke it using `Stream.for_each`. You can also use the functions of other Python packages to send the stream to other systems, for example the eventstore. # + import json # publish results as JSON parallelEnd.publish(topic="SensorData", schema=json, name="PublishSensors") # other options include: # invoke another sink function: # parallelEnd.for_each (func=send_to_db) # parallelEnd.print() # - # <a id="create_2"></a> # # 3. Create a simple producer application # # To make the consumer application work, we need to publish some data to the topic. Therefore we create another, less complicated application that publishes data to the Kafka broker. # # ## 3.1 Define a data generator for the messages and a source stream # + import random import time import json from datetime import datetime # define a callable source for data that we publish to Kafka class SensorReadingsSource(object): def __call__(self): # this is just an example of using generated data, here you could # - connect to db # - generate data # - connect to data set # - open a file i = 0 while(i < 500000): time.sleep(0.001) i = i + 1 sensor_id = random.randint(1, 100) reading = dict() reading["sensor_id"] = "sensor_" + str(sensor_id) reading["value"] = random.random() * 3000 reading["ts"] = int(datetime.now().timestamp()) yield reading producer_topology = Topology(name='KafkaParallelSample-Producer') # create the data and map them to the attributes 'message' and 'key' of the # 'Schema.StringMessage' schema for Kafka, so that we have messages with keys sensorStream = producer_topology.source( SensorReadingsSource(), "RawDataSource" ).map( func=lambda reading: {'message': json.dumps(reading), 'key': reading['sensor_id']}, name="ToKeyedMessage", schema=Schema.StringMessage) # - # ## 3.2 Publish the data to the Kafka topic #
# For advanced producer configurations please review the [producer configs section](https://kafka.apache.org/21/documentation.html#producerconfigs) of the Kafka documentation. Here we set up only the `bootstrap.servers` as used for the consumers. For a basic out-of-the-box Kafka install this is sufficient. When you have configured authentication or other security options for the consumer, you must configure the same options also for the producer. # # ### 3.2.1 Create the producer properties from your connection details # # We will be using the helper function [create_connection_properties(...)](https://streamsxkafka.readthedocs.io/en/latest/#streamsx.kafka.create_connection_properties), but will use the `producer_topology` variable as the `topology` parameter to get any created keystore or truststore files attached as file dependencies. # # <div class="alert alert-block alert-info"> # <b>Info:</b> # We intentionally do not re-use the consumer properties for the producer application. If certificates are used in any way, the consumer configurations we created include a keystore or truststore file, which was added as a file dependency to the consumer topology. If we re-used the consumer properties here, we would miss the file dependency in the producer topology. # </div> # # + producer_configs = kafka.create_connection_properties( bootstrap_servers=bootstrap_servers, use_TLS=connect_tls, enable_hostname_verification=True, cluster_ca_cert=ca_server_cert, authentication=auth, client_cert=client_cert, client_private_key=client_priv_key, username=username, password=password, topology=producer_topology) kafkaSink = kafka.publish( sensorStream, topic=topic, kafka_properties=producer_configs, name="SensorPublish") # print the producer configs for reference. Note that they can contain sensitive data print() for key, value in producer_configs.items(): print(key + "=" + value) # - # <a id="launch"></a> # # 4.
Submit both applications to the Streams instance # A running Streams application is called a *job*. By submitting the topologies we create two independent jobs. # + # disable SSL certificate verification if necessary cfg[context.ConfigParams.SSL_VERIFY] = False # submit consumer topology as a Streams job consumer_submission_result = submit(ContextTypes.DISTRIBUTED, consumer_topology, cfg) consumer_job = consumer_submission_result.job if consumer_job: print("JobId of consumer job: ", consumer_job.id , "\nJob name: ", consumer_job.name) # - # submit producer topology as a Streams job producer_submission_result = submit(ContextTypes.DISTRIBUTED, producer_topology, cfg) producer_job = producer_submission_result.job if producer_job: print("JobId of producer job: ", producer_job.id , "\nJob name: ", producer_job.name) # <a name="view"></a> # # 5. Use a `View` to access data from the job # Now that the job is started, use the `View` object you created in [step 2.4](#create_view) to start retrieving data from a `Stream`. # connect to the view and display 20 samples of the data queue = streamView.start_data_fetch() try: for val in range(20): print(queue.get()) finally: streamView.stop_data_fetch() # ## 5.1 Display the results in real time # Calling `View.display()` from the notebook displays the results of the view in a table that is updated in real-time. # display the results for 60 seconds streamView.display(duration=60) # # ## 5.2 See job status # # You can view job status and logs by going to **My Instances** > **Jobs**. Find your job based on the id printed above. # Retrieve job logs using the "Download logs" action from the job's context menu. # # To view other information about the job such as detailed metrics, access the graph. Go to **My Instances** > **Jobs**. Select the "View graph" action for the running job. # # <a name="cancel"></a> # # # 6. Cancel the jobs # # This cell generates widgets you can use to cancel the jobs.
# cancel the jobs in the IBM Streams service interactively producer_submission_result.cancel_job_button() consumer_submission_result.cancel_job_button() # You can also interact with the job through the [Job](https://streamsxtopology.readthedocs.io/en/stable/streamsx.rest_primitives.html#streamsx.rest_primitives.Job) object returned from `producer_submission_result.job` and `consumer_submission_result.job`. # # For example, use `producer_job.cancel()` to cancel the running producer job directly. # + # cancel the jobs directly using the Job objects #producer_job.cancel() #consumer_job.cancel() # - # # 7. Congratulations # # You created a non-trivial Streams application that connected to a Kafka cluster with a consumer group for load sharing. Then you parallelized the processing of the fetched messages so that the keys of the messages were pinned to the parallel channels. Finally, you sampled the processed data by using a view, and published the data within the Streams instance, so that other Streams applications in the instance can subscribe to it. # # Last but not least, to bring the application to life, you also created a simple producer application, which published artificial sensor data to the Kafka topic.
Streams-KafkaParallelSample.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # ## Trigger Word Detection # # Welcome to the final programming assignment of this specialization! # # In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wakeword detection). Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word. # # For this exercise, our trigger word will be "Activate." Every time it hears you say "activate," it will make a "chiming" sound. By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying "activate." # # After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say "activate" it starts up your favorite app, or turns on a network connected lamp in your house, or triggers some other event? # # <img src="images/sound.png" style="width:1000px;height:150px;"> # # In this assignment you will learn to: # - Structure a speech recognition project # - Synthesize and process audio recordings to create train/dev datasets # - Train a trigger word detection model and make predictions # # Let's get started! Run the following cell to load the packages you are going to use. # import numpy as np from pydub import AudioSegment import random import sys import io import os import glob import IPython from td_utils import * # %matplotlib inline # # 1 - Data synthesis: Creating a speech dataset # # Let's start by building a dataset for your trigger word detection algorithm. A speech dataset should ideally be as close as possible to the application you will want to run it on.
In this case, you'd like to detect the word "activate" in working environments (library, home, offices, open-spaces ...). You thus need to create recordings with a mix of positive words ("activate") and negative words (random words other than activate) on different background sounds. Let's see how you can create such a dataset. # # ## 1.1 - Listening to the data # # One of your friends is helping you out on this project, and they've gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents. # # In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model. The "activate" directory contains positive examples of people saying the word "activate". The "negatives" directory contains negative examples of people saying random words other than "activate". There is one word per audio recording. The "backgrounds" directory contains 10 second clips of background noise in different environments. # # Run the cells below to listen to some examples. IPython.display.Audio("./raw_data/activates/1.wav") IPython.display.Audio("./raw_data/negatives/4.wav") IPython.display.Audio("./raw_data/backgrounds/1.wav") # You will use these three types of recordings (positives/negatives/backgrounds) to create a labelled dataset. # ## 1.2 - From audio recordings to spectrograms # # What really is an audio recording? A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound. You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone. We will use audio sampled at 44100 Hz (or 44100 Hertz).
This means the microphone gives us 44100 numbers per second. Thus, a 10 second audio clip is represented by 441000 numbers (= $10 \times 44100$). # # It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said. In order to help your sequence model more easily learn to detect trigger words, we will compute a *spectrogram* of the audio. The spectrogram tells us how much different frequencies are present in an audio clip at a moment in time. # # (If you've ever taken an advanced class on signal processing or on Fourier transforms, a spectrogram is computed by sliding a window over the raw audio signal, and calculates the most active frequencies in each window using a Fourier transform. If you don't understand the previous sentence, don't worry about it.) # # Let's see an example. IPython.display.Audio("audio_examples/example_train.wav") x = graph_spectrogram("audio_examples/example_train.wav") # The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis). # # <img src="images/spectrogram.png" style="width:500px;height:200px;"> # <center> **Figure 1**: Spectrogram of an audio recording, where the color shows the degree to which different frequencies are present (loud) in the audio at different points in time. Green squares mean a certain frequency is more active or more present in the audio clip (louder); blue squares denote less active frequencies. </center> # # The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input. In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples. The number of timesteps of the spectrogram will be 5511. You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
# _, data = wavfile.read("audio_examples/example_train.wav") print("Time steps in audio recording before spectrogram", data[:,0].shape) print("Time steps in input after spectrogram", x.shape) # Now, you can define: Tx = 5511 # The number of time steps input to the model from the spectrogram n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram # Note that even with 10 seconds being our default training example length, 10 seconds of time can be discretized to different numbers of values. You've seen 441000 (raw audio) and 5511 (spectrogram). In the former case, each step represents $10/441000 \approx 0.000023$ seconds. In the second case, each step represents $10/5511 \approx 0.0018$ seconds. # # For the 10sec of audio, the key values you will see in this assignment are: # # - $441000$ (raw audio) # - $5511 = T_x$ (spectrogram output, and dimension of input to the neural network). # - $10000$ (used by the `pydub` module to synthesize audio) # - $1375 = T_y$ (the number of steps in the output of the GRU you'll build). # # Note that each of these representations corresponds to exactly 10 seconds of time. It's just that they are discretizing them to different degrees. All of these are hyperparameters and can be changed (except the 441000, which is a function of the microphone). We have chosen values that are within the standard ranges used for speech systems. # # Consider the $T_y = 1375$ number above. This means that for the output of the model, we discretize the 10s into 1375 time-intervals (each one of length $10/1375 \approx 0.0072$s) and try to predict for each of these intervals whether someone recently finished saying "activate." # # Consider also the 10000 number above. This corresponds to discretizing the 10sec clip into 10/10000 = 0.001 second intervals. 0.001 seconds is also called 1 millisecond, or 1ms. So when we say we are discretizing according to 1ms intervals, it means we are using 10,000 steps.
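# As a quick sanity check on these numbers (a small illustrative sketch, not part of the graded assignment), you can compute the step lengths directly:

```python
# Each representation covers the same 10 seconds at a different resolution.
CLIP_SECONDS = 10
steps = {
    "raw audio": 441000,        # microphone samples
    "spectrogram (Tx)": 5511,   # network input steps
    "pydub (ms)": 10000,        # synthesis discretization
    "model output (Ty)": 1375,  # GRU output steps
}

# seconds covered by one step of each representation
step_len = {name: CLIP_SECONDS / n for name, n in steps.items()}
```

# For example, `step_len["model output (Ty)"]` is $10/1375 \approx 0.0072$ seconds, matching the value quoted above.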
# Ty = 1375 # The number of time steps in the output of our model # ## 1.3 - Generating a single training example # # Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds. It is quite slow to record lots of 10 second audio clips with random "activates" in it. Instead, it is easier to record lots of positives and negative words, and record background noise separately (or download background noise from free online sources). # # To synthesize a single training example, you will: # # - Pick a random 10 second background audio clip # - Randomly insert 0-4 audio clips of "activate" into this 10sec clip # - Randomly insert 0-2 audio clips of negative words into this 10sec clip # # Because you had synthesized the word "activate" into the background clip, you know exactly when in the 10sec clip the "activate" makes its appearance. You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well. # # You will use the pydub package to manipulate audio. Pydub converts raw audio files into lists of Pydub data structures (it is not important to know the details here). Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds) which is why a 10sec clip is always represented using 10,000 steps. 
# + # Load audio segments using pydub activates, negatives, backgrounds = load_raw_audio() print("background len: " + str(len(backgrounds[0]))) # Should be 10,000, since it is a 10 sec clip print("activate[0] len: " + str(len(activates[0]))) # Maybe around 1000, since an "activate" audio clip is usually around 1 sec (but varies a lot) print("activate[1] len: " + str(len(activates[1]))) # Different "activate" clips can have different lengths # - # **Overlaying positive/negative words on the background**: # # Given a 10sec background clip and a short audio clip (positive or negative word), you need to be able to "add" or "insert" the word's short audio clip onto the background. To ensure audio segments inserted onto the background do not overlap, you will keep track of the times of previously inserted audio clips. You will be inserting multiple clips of positive/negative words onto the background, and you don't want to insert an "activate" or a random word somewhere that overlaps with another clip you had previously added. # # For clarity, when you insert a 1sec "activate" onto a 10sec clip of cafe noise, you end up with a 10sec clip that sounds like someone saying "activate" in a cafe, with "activate" superimposed on the background cafe noise. You do *not* end up with an 11 sec clip. You'll see later how pydub allows you to do this. # # **Creating the labels at the same time you overlay**: # # Recall also that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate." Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn't contain any "activates." # # When you insert or overlay an "activate" clip, you will also update labels for $y^{\langle t \rangle}$, so that 50 steps of the output now have target label 1. You will train a GRU to detect when someone has *finished* saying "activate".
For example, suppose the synthesized "activate" clip ends at the 5sec mark in the 10sec audio---exactly halfway into the clip. Recall that $T_y = 1375$, so timestep $687 = $ `int(1375*0.5)` corresponds to the moment at 5sec into the audio. So, you will set $y^{\langle 688 \rangle} = 1$. Further, you would be quite satisfied if the GRU detects "activate" anywhere within a short time-interval after this moment, so we actually set 50 consecutive values of the label $y^{\langle t \rangle}$ to 1. Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$. # # This is another reason for synthesizing the training data: It's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above. In contrast, if you have 10sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished. # # Here's a figure illustrating the labels $y^{\langle t \rangle}$, for a clip into which we have inserted "activate", "innocent", "activate", "baby." Note that the positive labels "1" are associated only with the positive words. # # <img src="images/label_diagram.png" style="width:500px;height:200px;"> # <center> **Figure 2** </center> # # To implement the training set synthesis process, you will use the following helper functions. All of these functions will use a 1ms discretization interval, so the 10sec of audio is always discretized into 10,000 steps. # # 1. `get_random_time_segment(segment_ms)` gets a random time segment in our background audio # 2. `is_overlapping(segment_time, existing_segments)` checks if a time segment overlaps with existing segments # 3. `insert_audio_clip(background, audio_clip, existing_times)` inserts an audio segment at a random time in our background audio using `get_random_time_segment` and `is_overlapping` # 4.
`insert_ones(y, segment_end_ms)` inserts 1's into our label vector y after the word "activate" # The function `get_random_time_segment(segment_ms)` returns a random time segment onto which we can insert an audio clip of duration `segment_ms`. Read through the code to make sure you understand what it is doing. # def get_random_time_segment(segment_ms): """ Gets a random time segment of duration segment_ms in a 10,000 ms audio clip. Arguments: segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds") Returns: segment_time -- a tuple of (segment_start, segment_end) in ms """ segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background segment_end = segment_start + segment_ms - 1 return (segment_start, segment_end) # Next, suppose you have inserted audio clips at segments (1000,1800) and (3400,4500). I.e., the first segment starts at step 1000, and ends at step 1800. Now, if we are considering inserting a new audio clip at (3000,3600) does this overlap with one of the previously inserted segments? In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here. # # For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200. However, (100,199) and (200,250) are non-overlapping. # # **Exercise**: Implement `is_overlapping(segment_time, existing_segments)` to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps: # # 1. Create a "False" flag, that you will later set to "True" if you find that there is an overlap. # 2. Loop over the previous_segments' start and end times. Compare these times to the segment's start and end times. If there is an overlap, set the flag defined in (1) as True. You can use: # ```python # for ....: # if ... <= ... and ... >= ...: # ... 
# ``` # Hint: There is overlap if the segment starts before the previous segment ends, and the segment ends after the previous segment starts. # + # GRADED FUNCTION: is_overlapping def is_overlapping(segment_time, previous_segments): """ Checks if the time of a segment overlaps with the times of existing segments. Arguments: segment_time -- a tuple of (segment_start, segment_end) for the new segment previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments Returns: True if the time segment overlaps with any of the existing segments, False otherwise """ segment_start, segment_end = segment_time ### START CODE HERE ### (≈ 4 lines) # Step 1: Initialize overlap as a "False" flag. (≈ 1 line) overlap = False # Step 2: loop over the previous_segments start and end times. # Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines) for previous_start, previous_end in previous_segments: if segment_start <= previous_end and segment_end >= previous_start: overlap = True ### END CODE HERE ### return overlap # - overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)]) overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)]) print("Overlap 1 = ", overlap1) print("Overlap 2 = ", overlap2) # **Expected Output**: # # <table> # <tr> # <td> # **Overlap 1** # </td> # <td> # False # </td> # </tr> # <tr> # <td> # **Overlap 2** # </td> # <td> # True # </td> # </tr> # </table> # Now, let's use the previous helper functions to insert a new audio clip onto the 10sec background at a random time, but making sure that any newly inserted segment doesn't overlap with the previous segments. # # **Exercise**: Implement `insert_audio_clip()` to overlay an audio clip onto the background 10sec clip. You will need to carry out 4 steps: # # 1. Get a random time segment of the right duration in ms. # 2. Make sure that the time segment does not overlap with any of the previous time segments.
If it is overlapping, then go back to step 1 and pick a new time segment. # 3. Add the new time segment to the list of existing time segments, so as to keep track of all the segments you've inserted. # 4. Overlay the audio clip over the background using pydub. We have implemented this for you. # + # GRADED FUNCTION: insert_audio_clip def insert_audio_clip(background, audio_clip, previous_segments): """ Insert a new audio segment over the background noise at a random time step, ensuring that the audio segment does not overlap with existing segments. Arguments: background -- a 10 second background audio recording. audio_clip -- the audio clip to be inserted/overlaid. previous_segments -- times where audio segments have already been placed Returns: new_background -- the updated background audio """ # Get the duration of the audio clip in ms segment_ms = len(audio_clip) ### START CODE HERE ### # Step 1: Use one of the helper functions to pick a random time segment onto which to insert # the new audio clip. (≈ 1 line) segment_time = get_random_time_segment(segment_ms) # Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep # picking new segment_time at random until it doesn't overlap. 
(≈ 2 lines) while is_overlapping(segment_time, previous_segments): segment_time = get_random_time_segment(segment_ms) # Step 3: Add the new segment_time to the list of previous_segments (≈ 1 line) previous_segments.append(segment_time) ### END CODE HERE ### # Step 4: Superpose audio segment and background new_background = background.overlay(audio_clip, position = segment_time[0]) return new_background, segment_time # - np.random.seed(5) audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)]) audio_clip.export("insert_test.wav", format="wav") print("Segment Time: ", segment_time) IPython.display.Audio("insert_test.wav") # **Expected Output** # # <table> # <tr> # <td> # **Segment Time** # </td> # <td> # (2254, 3169) # </td> # </tr> # </table> # Expected audio IPython.display.Audio("audio_examples/insert_reference.wav") # Finally, implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate." In the code below, `y` is a `(1,1375)` dimensional vector, since $T_y = 1375$. # # If the "activate" ended at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ as well as for up to 49 additional consecutive values. However, make sure you don't run off the end of the array and try to update `y[0][1375]`, since the valid indices are `y[0][0]` through `y[0][1374]` because $T_y = 1375$. So if "activate" ends at step 1370, you would get only `y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1` # # **Exercise**: Implement `insert_ones()`. You can use a for loop. (If you are an expert in python's slice operations, feel free also to use slicing to vectorize this.) 
If a segment ends at `segment_end_ms` (using a 10000 step discretization), to convert it to the indexing for the outputs $y$ (using a $1375$ step discretization), we will use this formula: # ``` # segment_end_y = int(segment_end_ms * Ty / 10000.0) # ``` # + # GRADED FUNCTION: insert_ones def insert_ones(y, segment_end_ms): """ Update the label vector y. The labels of the 50 output steps strictly after the end of the segment should be set to 1. By strictly we mean that the label of segment_end_y should be 0, while the 50 following labels should be ones. Arguments: y -- numpy array of shape (1, Ty), the labels of the training example segment_end_ms -- the end time of the segment in ms Returns: y -- updated labels """ # convert the segment end from the 10,000-step (ms) scale to the Ty-step scale segment_end_y = int(segment_end_ms * Ty / 10000.0) # Add 1 to the correct index in the background label (y) ### START CODE HERE ### (≈ 3 lines) for i in range(segment_end_y + 1, segment_end_y + 51): if i < Ty: y[0, i] = 1 ### END CODE HERE ### return y # - arr1 = insert_ones(np.zeros((1, Ty)), 9700) plt.plot(insert_ones(arr1, 4251)[0,:]) print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635]) # **Expected Output** # <table> # <tr> # <td> # **sanity checks**: # </td> # <td> # 0.0 1.0 0.0 # </td> # </tr> # </table> # <img src="images/ones_reference.png" style="width:320;height:240px;"> # Finally, you can use `insert_audio_clip` and `insert_ones` to create a new training example. # # **Exercise**: Implement `create_training_example()`. You will need to carry out the following steps: # # 1. Initialize the label vector $y$ as a numpy array of zeros and shape $(1, T_y)$. # 2. Initialize the set of existing segments to an empty list. # 3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10sec clip. Also insert labels at the correct position in the label vector $y$. # 4. Randomly select 0 to 2 negative audio clips, and insert them into the 10sec clip.
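# As an aside, the `insert_ones` exercise above mentioned that slicing can replace the loop. Here is a minimal vectorized sketch (illustrative only, not the graded solution; `insert_ones_sliced` is a hypothetical name):

```python
import numpy as np

Ty = 1375  # number of model output steps, as defined earlier

# Vectorized variant of insert_ones: label the 50 output steps strictly
# after the segment end, clipping the slice at the end of the array.
def insert_ones_sliced(y, segment_end_ms):
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    y[0, segment_end_y + 1 : min(segment_end_y + 51, Ty)] = 1
    return y

y = insert_ones_sliced(np.zeros((1, Ty)), 5000)  # "activate" ends at 5 sec
```

# The slice sets indices 688 through 737 to 1 for a segment ending at 5000 ms, the same result as the loop version, and the `min(..., Ty)` keeps a segment ending near 10 sec from running off the array.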
# # + # GRADED FUNCTION: create_training_example def create_training_example(background, activates, negatives): """ Creates a training example with a given background, activates, and negatives. Arguments: background -- a 10 second background audio recording activates -- a list of audio segments of the word "activate" negatives -- a list of audio segments of random words that are not "activate" Returns: x -- the spectrogram of the training example y -- the label at each time step of the spectrogram """ # Set the random seed np.random.seed(18) # Make background quieter background = background - 20 ### START CODE HERE ### # Step 1: Initialize y (label vector) of zeros (≈ 1 line) y = np.zeros((1, Ty)) # Step 2: Initialize segment times as empty list (≈ 1 line) previous_segments = [] ### END CODE HERE ### # Select 0-4 random "activate" audio clips from the entire list of "activates" recordings number_of_activates = np.random.randint(0, 5) random_indices = np.random.randint(len(activates), size=number_of_activates) random_activates = [activates[i] for i in random_indices] ### START CODE HERE ### (≈ 3 lines) # Step 3: Loop over randomly selected "activate" clips and insert in background for random_activate in random_activates: # Insert the audio clip on the background background, segment_time = insert_audio_clip(background, random_activate, previous_segments) # Retrieve segment_start and segment_end from segment_time segment_start, segment_end = segment_time # Insert labels in "y" y = insert_ones(y, segment_end_ms=segment_end) ### END CODE HERE ### # Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings number_of_negatives = np.random.randint(0, 3) random_indices = np.random.randint(len(negatives), size=number_of_negatives) random_negatives = [negatives[i] for i in random_indices] ### START CODE HERE ### (≈ 2 lines) # Step 4: Loop over randomly selected negative clips and insert in background for random_negative in random_negatives: # 
Insert the audio clip on the background background, _ = insert_audio_clip(background, random_negative, previous_segments) ### END CODE HERE ### # Standardize the volume of the audio clip background = match_target_amplitude(background, -20.0) # Export new training example file_handle = background.export("train" + ".wav", format="wav") print("File (train.wav) was saved in your directory.") # Get and plot spectrogram of the new recording (background with superposition of positive and negatives) x = graph_spectrogram("train.wav") return x, y # - x, y = create_training_example(backgrounds[0], activates, negatives) # **Expected Output** # <img src="images/train_reference.png" style="width:320;height:240px;"> # Now you can listen to the training example you created and compare it to the spectrogram generated above. IPython.display.Audio("train.wav") # **Expected Output** IPython.display.Audio("audio_examples/train_reference.wav") # Finally, you can plot the associated labels for the generated training example. plt.plot(y[0]) # **Expected Output** # <img src="images/train_label.png" style="width:320;height:240px;"> # ## 1.4 - Full training set # # You've now implemented the code needed to generate a single training example. We used this process to generate a large training set. To save time, we've already generated a set of training examples. # Load preprocessed training examples X = np.load("./XY_train/X.npy") Y = np.load("./XY_train/Y.npy") # ## 1.5 - Development set # # To test our model, we recorded a development set of 25 examples. While our training data is synthesized, we want to create a development set using the same distribution as the real inputs. Thus, we recorded 25 10-second audio clips of people saying "activate" and other random words, and labeled them by hand. 
This follows the principle described in Course 3 that we should create the dev set to be as similar as possible to the test set distribution; that's why our dev set uses real rather than synthesized audio.

# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")

# # 2 - Model
#
# Now that you've built a dataset, let's write and train a trigger word detection model!
#
# The model will use 1-D convolutional layers, GRU layers, and dense layers. Let's load the packages that will allow you to use these layers in Keras. This might take a minute to load.

from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam

# ## 2.1 - Build the model
#
# Here is the architecture we will use. Take some time to look over the model and see if it makes sense.
#
# <img src="images/model_trigger.png" style="width:600px;height:600px;">
# <center> **Figure 3** </center>
#
# One key step of this model is the 1D convolutional step (near the bottom of Figure 3). It inputs the 5511 step spectrogram, and outputs a 1375 step output, which is then further processed by multiple layers to get the final $T_y = 1375$ step output. This layer plays a role similar to the 2D convolutions you saw in Course 4: it extracts low-level features and can generate an output of a smaller dimension.
#
# Computationally, the 1-D conv layer also helps speed up the model because now the GRU has to process only 1375 timesteps rather than 5511 timesteps. The two GRU layers read the sequence of inputs from left to right, and a dense + sigmoid layer then makes a prediction for $y^{\langle t \rangle}$.
Because $y$ is binary valued (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate." # # Note that we use a uni-directional RNN rather than a bi-directional RNN. This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said. If we used a bi-directional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if "activate" was said in the first second of the audio clip. # # Implementing the model can be done in four steps: # # **Step 1**: CONV layer. Use `Conv1D()` to implement this, with 196 filters, # a filter size of 15 (`kernel_size=15`), and stride of 4. [[See documentation.](https://keras.io/layers/convolutional/#conv1d)] # # **Step 2**: First GRU layer. To generate the GRU layer, use: # ``` # X = GRU(units = 128, return_sequences = True)(X) # ``` # Setting `return_sequences=True` ensures that all the GRU's hidden states are fed to the next layer. Remember to follow this with Dropout and BatchNorm layers. # # **Step 3**: Second GRU layer. This is similar to the previous GRU layer (remember to use `return_sequences=True`), but has an extra dropout layer. # # **Step 4**: Create a time-distributed dense layer as follows: # ``` # X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # ``` # This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step. [[See documentation](https://keras.io/layers/wrappers/).] # # **Exercise**: Implement `model()`, the architecture is presented in Figure 3. # + # GRADED FUNCTION: model def model(input_shape): """ Function creating the model's graph in Keras. 
Argument: input_shape -- shape of the model's input data (using Keras conventions) Returns: model -- Keras model instance """ X_input = Input(shape = input_shape) ### START CODE HERE ### # Step 1: CONV layer (≈4 lines) X = Conv1D(196, kernel_size=15, strides=4)(X_input) # CONV1D X = BatchNormalization()(X) # Batch normalization X = Activation('relu')(X) # ReLu activation X = Dropout(0.8)(X) # dropout (use 0.8) # Step 2: First GRU Layer (≈4 lines) X = GRU(units = 128, return_sequences = True)(X) # GRU (use 128 units and return the sequences) X = Dropout(0.8)(X) # dropout (use 0.8) X = BatchNormalization()(X) # Batch normalization # Step 3: Second GRU Layer (≈4 lines) X = GRU(units = 128, return_sequences = True)(X) # GRU (use 128 units and return the sequences) X = Dropout(0.8)(X) # dropout (use 0.8) X = BatchNormalization()(X) # Batch normalization X = Dropout(0.8)(X) # dropout (use 0.8) # Step 4: Time-distributed dense layer (≈1 line) X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid) ### END CODE HERE ### model = Model(inputs = X_input, outputs = X) return model # - model = model(input_shape = (Tx, n_freq)) # Let's print the model summary to keep track of the shapes. model.summary() # **Expected Output**: # # <table> # <tr> # <td> # **Total params** # </td> # <td> # 522,561 # </td> # </tr> # <tr> # <td> # **Trainable params** # </td> # <td> # 521,657 # </td> # </tr> # <tr> # <td> # **Non-trainable params** # </td> # <td> # 904 # </td> # </tr> # </table> # The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 at spectrogram to 1375. # ## 2.2 - Fit the model # Trigger word detection takes a long time to train. To save time, we've already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples. Let's load the model. 
model = load_model('./models/tr_model.h5') # You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples. opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"]) model.fit(X, Y, batch_size = 5, epochs=1) # ## 2.3 - Test the model # # Finally, let's see how your model performs on the dev set. loss, acc = model.evaluate(X_dev, Y_dev) print("Dev set accuracy = ", acc) # This looks pretty good! However, accuracy isn't a great metric for this task, since the labels are heavily skewed to 0's, so a neural network that just outputs 0's would get slightly over 90% accuracy. We could define more useful metrics such as F1 score or Precision/Recall. But let's not bother with that here, and instead just empirically see how the model does. # # 3 - Making Predictions # # Now that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network. # # <!-- # can use your model to make predictions on new audio clips. # # You will first need to compute the predictions for an input audio clip. # # **Exercise**: Implement predict_activates(). You will need to do the following: # # 1. Compute the spectrogram for the audio file # 2. Use `np.swap` and `np.expand_dims` to reshape your input to size (1, Tx, n_freqs) # 5. 
Use forward propagation on your model to compute the prediction at each output step
# !-->

def detect_triggerword(filename):
    plt.subplot(2, 1, 1)

    x = graph_spectrogram(filename)
    # the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
    x = x.swapaxes(0, 1)
    x = np.expand_dims(x, axis=0)
    predictions = model.predict(x)

    plt.subplot(2, 1, 2)
    plt.plot(predictions[0, :, 0])
    plt.ylabel('probability')
    plt.show()
    return predictions

# Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold. Further, $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once. So we will insert a chime sound at most once every 75 output steps. This will help prevent us from inserting two chimes for a single instance of "activate". (This plays a role similar to non-max suppression from computer vision.)
#
# <!--
# **Exercise**: Implement chime_on_activate(). You will need to do the following:
#
# 1. Loop over the predicted probabilities at each output step
# 2.
When the prediction is larger than the threshold and more than 75 consecutive time steps have passed, insert a "chime" sound onto the original audio clip
#
# Use this code to convert from the 1,375 step discretization to the 10,000 step discretization and insert a "chime" using pydub:
#
# ` audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio.duration_seconds)*1000)
# `
# !-->

chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
    audio_clip = AudioSegment.from_wav(filename)
    chime = AudioSegment.from_wav(chime_file)
    Ty = predictions.shape[1]
    # Step 1: Initialize the number of consecutive output steps to 0
    consecutive_timesteps = 0
    # Step 2: Loop over the output steps in the y
    for i in range(Ty):
        # Step 3: Increment consecutive output steps
        consecutive_timesteps += 1
        # Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
        if predictions[0, i, 0] > threshold and consecutive_timesteps > 75:
            # Step 5: Superpose audio and background using pydub
            audio_clip = audio_clip.overlay(chime, position=((i / Ty) * audio_clip.duration_seconds) * 1000)
            # Step 6: Reset consecutive output steps to 0
            consecutive_timesteps = 0

    audio_clip.export("chime_output.wav", format='wav')

# ## 3.3 - Test on dev examples

# Let's explore how our model performs on two unseen audio clips from the development set. Let's first listen to the two dev set clips.

IPython.display.Audio("./raw_data/dev/1.wav")

IPython.display.Audio("./raw_data/dev/2.wav")

# Now let's run the model on these audio clips and see if it adds a chime after "activate"!
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

# # Congratulations
#
# You've come to the end of this assignment!
#
# Here's what you should remember:
# - Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection.
# - Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.
# - An end-to-end deep learning approach can be used to build a very effective trigger word detection system.
#
# *Congratulations* on finishing the final assignment!
#
# Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course!

# # 4 - Try your own example! (OPTIONAL/UNGRADED)
#
# In this optional and ungraded portion of this notebook, you can try your model on your own audio clips!
#
# Record a 10 second audio clip of yourself saying the word "activate" and other random words, and upload it to the Coursera hub as `myaudio.wav`. Be sure to upload the audio as a wav file. If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav. If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.
# # Preprocess the audio to the correct format def preprocess_audio(filename): # Trim or pad audio segment to 10000ms padding = AudioSegment.silent(duration=10000) segment = AudioSegment.from_wav(filename)[:10000] segment = padding.overlay(segment) # Set frame rate to 44100 segment = segment.set_frame_rate(44100) # Export as wav segment.export(filename, format='wav') # Once you've uploaded your audio file to Coursera, put the path to your file in the variable below. your_filename = "audio_examples/my_audio.wav" preprocess_audio(your_filename) IPython.display.Audio(your_filename) # listen to the audio you uploaded # Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime. If beeps are not being added appropriately, try to adjust the chime_threshold. chime_threshold = 0.5 prediction = detect_triggerword(your_filename) chime_on_activate(your_filename, prediction, chime_threshold) IPython.display.Audio("./chime_output.wav")
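The 75-step suppression rule used in `chime_on_activate` can be sanity-checked without any audio at all. Below is the same logic applied to a plain Python list of probabilities — toy data, purely illustrative:

```python
def chime_positions(probs, threshold=0.5, refractory=75):
    """Return the output steps where a chime would be overlaid."""
    positions, consecutive = [], 0
    for i, p in enumerate(probs):
        consecutive += 1  # count steps since the last chime
        if p > threshold and consecutive > refractory:
            positions.append(i)
            consecutive = 0  # suppress further chimes for the next 75 steps
    return positions

# a 10-step burst of high probabilities triggers only a single chime
probs = [0.0] * 100 + [0.9] * 10 + [0.0] * 100
print(chime_positions(probs))  # → [100]
```

Note that, like the pydub version above, the counter keeps incrementing through low-probability steps and only resets after a chime is placed.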
_sequential/Deep Learning Sequential/Week 3/Triggerword Detection/Trigger word detection - v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Use a RF to stack LSTM predictions with engineered features import os import logging dir_path = os.path.realpath('..') # ## Import data import numpy as np import pandas as pd # + path = 'data/processed/stacking.csv' full_path = os.path.join(dir_path, path) df = pd.read_csv(full_path, header=0, index_col=0) print("Dataset has {} rows, {} columns.".format(*df.shape)) # - # fill NaN with string "unknown" df.fillna('unknown',inplace=True) # ## Feature engineering df['processed'] = df['comment_text'].str.split() df['uppercase_count'] = df['processed'].apply(lambda x: sum(1 for t in x if t.isupper() and len(t)>2)) df = df.drop(['processed'], axis=1) df.head() # + from sklearn.model_selection import train_test_split seed = 42 np.random.seed(seed) test_size = 0.2 target = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate'] corpus = 'comment_text' X = df.drop(target + [corpus], axis=1) y = df[target] Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=test_size, random_state=seed) # - X.head() # ## Model fit from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV from sklearn.metrics import log_loss #Tuning the model param_grid = { "n_estimators" : [150, 200, 250], "max_depth" : [4, 8], "min_samples_split" : [4, 8] , "bootstrap": [True]} # + # %%time clf = RandomForestClassifier(random_state=seed) clf_cv = GridSearchCV(clf, param_grid, cv=5) clf_cv.fit(Xtrain, ytrain) # - hold_out_preds # + # concatenating features with lstm preds y_pred = clf_cv.predict_proba(Xtest) hold_out_preds = pd.DataFrame(index=ytest.index, columns=target) i = 0 for label in target: hold_out_preds[label] = y_pred[i][:,1] i += 1 losses = [] for label in target: loss = log_loss(ytest[label], 
hold_out_preds[label])
    losses.append(loss)
    print("{} log loss is {} .".format(label, loss))
print("Combined log loss: {} .".format(np.mean(losses)))
# +
# Comparing to original preds
losses = []  # reset so the combined loss covers only these predictions
for label in target:
    loss = log_loss(ytest[label], Xtest[label + '_pred'])
    losses.append(loss)
    print("{} log loss is {} .".format(label, loss))
print("Combined log loss: {} .".format(np.mean(losses)))
# -
clf_cv.best_params_

# ## RF only

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import log_loss

# Tuning the model
param_grid = {
    "n_estimators": [150, 200, 250],
    "max_depth": [4, 8],
    "min_samples_split": [4, 8],
    "bootstrap": [True]}

# +
# %%time
clf = RandomForestClassifier(random_state=seed)
clf_cv = GridSearchCV(clf, param_grid, cv=5)
# pandas Series has no reshape; go through .values
clf_cv.fit(Xtrain['uppercase_count'].values.reshape(-1, 1), ytrain)
# +
# features only
y_pred = clf_cv.predict_proba(Xtest['uppercase_count'].values.reshape(-1, 1))
hold_out_preds = pd.DataFrame(index=ytest.index, columns=target)
i = 0
for label in target:
    hold_out_preds[label] = y_pred[i][:, 1]
    i += 1

losses = []
for label in target:
    loss = log_loss(ytest[label], hold_out_preds[label])
    losses.append(loss)
    print("{} log loss is {} .".format(label, loss))
print("Combined log loss: {} .".format(np.mean(losses)))
# -
hold_out_preds

ytest.describe()

# +
# Comparing to original preds
losses = []  # reset so the combined loss covers only these predictions
for label in target:
    loss = log_loss(ytest[label], Xtest[label + '_pred'])
    losses.append(loss)
    print("{} log loss is {} .".format(label, loss))
print("Combined log loss: {} .".format(np.mean(losses)))
# -
clf_cv.best_params_
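The per-label numbers above come from `sklearn.metrics.log_loss`. For reference, a minimal pure-Python version of binary log loss (the quantity being averaged here) can be sketched as follows — `binary_log_loss` is a hypothetical helper name, not part of the notebook:

```python
import math

def binary_log_loss(y_true, y_prob, eps=1e-15):
    """Mean negative log-likelihood of binary labels under predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# confident, correct predictions give a small loss
print(round(binary_log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # → 0.1446
```

The clipping step mirrors what sklearn does internally, since a predicted probability of exactly 0 or 1 would otherwise make the loss infinite.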
notebooks/archives/17-jc-lstm-with-feats.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from transformers import (
    GPT2TokenizerFast,
    AdamW,
    get_scheduler
)
import torch
from model import GPT2PromptTuningLM
# -

# # Training

class Config:
    # Same default parameters as run_clm_no_trainer.py in transformers
    # https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm_no_trainer.py
    num_train_epochs = 3
    weight_decay = 0.01
    learning_rate = 0.01
    lr_scheduler_type = "linear"
    num_warmup_steps = 0
    max_train_steps = num_train_epochs
    # Prompt-tuning
    # number of prompt tokens
    n_prompt_tokens = 20
    # If True, soft prompt will be initialized from vocab
    # Otherwise, you can set `random_range` to initialize by randomization.
    init_from_vocab = True
    # random_range = 0.5

args = Config()

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Initialize GPT2LM with soft prompt
model = GPT2PromptTuningLM.from_pretrained(
    "gpt2",
    n_tokens=args.n_prompt_tokens,
    initialize_from_vocab=args.init_from_vocab
)

model.soft_prompt.weight

# Prepare dataset
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
print(inputs)

# Only update the soft prompt's weights for prompt-tuning, i.e., all weights in the LM are set as `requires_grad=False`.
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if n == "soft_prompt.weight"],
        "weight_decay": args.weight_decay,
    }
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
lr_scheduler = get_scheduler(
    name=args.lr_scheduler_type,
    optimizer=optimizer,
    num_warmup_steps=args.num_warmup_steps,
    num_training_steps=args.max_train_steps,
)

model.train()

outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
print(f"loss: {loss}")

loss.backward()
optimizer.step()

model.soft_prompt.weight
# Confirmed the weights were changed!
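The `optimizer_grouped_parameters` list above hands the optimizer only the parameters named `soft_prompt.weight`; everything else in the LM stays frozen. The selection itself is just a name filter, as this torch-free sketch over a toy parameter dict shows:

```python
# toy stand-in for model.named_parameters(); the real values are tensors
named_parameters = {
    "transformer.wte.weight": "frozen LM embedding",
    "transformer.h.0.attn.c_attn.weight": "frozen LM layer",
    "soft_prompt.weight": "trainable prompt embedding",
}

# keep only the soft prompt, exactly as the list comprehension above does
trainable = [name for name in named_parameters if name == "soft_prompt.weight"]
print(trainable)  # → ['soft_prompt.weight']
```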
# save the prompt model
save_dir_path = "."
model.save_soft_prompt(save_dir_path)
# Once it's done, `soft_prompt.model` is in the dir

# # Inference

# In the inference phase, you need to feed input ids to the model via `model.forward()`; you cannot use the `model.generate()` method. After you get `next_token_logits` as below, you will need additional code for your decoding method.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Load the model
model = GPT2PromptTuningLM.from_pretrained(
    "gpt2",
    soft_prompt_path="./soft_prompt.model"
)
model.eval()

input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
input_ids

outputs = model.forward(input_ids=input_ids)
next_token_logits = outputs[0][0, -1, :]
...
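Since `model.generate()` is unavailable here, decoding has to be written by hand on top of `next_token_logits`. The simplest choice, greedy decoding, just takes the argmax at each step — sketched below on a toy logits list, with no torch involved:

```python
def greedy_next(logits):
    """Index of the highest-scoring token — one step of greedy decoding."""
    return max(range(len(logits)), key=lambda i: logits[i])

# token 1 has the largest logit, so greedy decoding picks it
print(greedy_next([0.1, 2.5, -0.3, 1.7]))  # → 1
```

With tensors this is `next_token_logits.argmax(-1)`; in a full decoding loop the chosen id would be appended to `input_ids` and fed back through `model.forward()` for the next step.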
example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: bai # language: python # name: bai # --- # <html> # <body> # <center> # <h1><u>Assignment 1</u></h1> # <h3> Quick intro + checking code works on your system </h3> # </center> # </body> # </html> # ### Learning Outcomes: The goal of this assignment is two-fold: # # - This code-base is designed to be easily extended for different research projects. Running this notebook to the end will ensure that the code runs on your system, and that you are set-up to start playing with machine learning code. # - This notebook has one complete application: training a CNN classifier to predict the digit in MNIST Images. The code is written to familiarize you to a typical machine learning pipeline, and to the building blocks of code used to do ML. So, read on! # ### Please specify your Name, Email ID and forked repository url here: # - Name: Daniel # - Email: <EMAIL> # - Link to your forked github repository: https://github.com/dlee1111111/Harvard_BAI # + ### General libraries useful for python ### import os import sys from tqdm.notebook import tqdm import json import random import pickle import copy from IPython.display import display import ipywidgets as widgets # - ### Finding where you clone your repo, so that code upstream paths can be specified programmatically #### work_dir = os.getcwd() git_dir = '/'.join(work_dir.split('/')[:-1]) print('Your github directory is :%s'%git_dir) ### Libraries for visualizing our results and data ### from PIL import Image import matplotlib.pyplot as plt ### Import PyTorch and its components ### import torch import torchvision import torch.nn as nn import torch.optim as optim # #### Let's load our flexible code-base which you will build on for your research projects in future assignments. 
#
# Above we have imported modules (libraries, for those familiar with programming languages other than python). These modules are of two kinds - (1) inbuilt python modules like `os`, `sys`, `random`, or (2) ones which we installed using conda (ex. `torch`).
#
# Below we will be importing our own written code which resides in the `res` folder in your github directory. This is structured to be easily expandable for different machine learning projects. Suppose that you want to do a project on object detection. You can easily add a few files to the sub-folders within `res`, and this script will then flexibly do detection instead of classification (which is presented here). Expanding on this codebase will be the main subject matter of Assignment 2. For now, let's continue with importing.

### Making helper code under the folder res available. This includes loaders, models, etc. ###
sys.path.append('%s/res/'%git_dir)
from models.models import get_model
from loader.loader import get_loader

# #### See those paths printed above?
# `res/models` holds different model files. So, if you want to load ResNet architecture or a transformers architecture, they will reside there as separate files.
#
# Similarly, `res/loader` holds programs which are designed to load different types of data. For example, you may want to load data differently for object classification and detection. For classification each image will have only a numerical label corresponding to its category. For detection, the labels for the same image would contain bounding boxes for different objects and the type of the object in the box.
#
# So, to expand further you will be adding files to the folders above.

# ### Setting up Weights and Biases for tracking your experiments. ###
#
# We have Weights and Biases (wandb.ai) integrated into the code for easy visualization of results and for tracking performance.
`Please make an account at wandb.ai, and follow the steps to login to your account!`

import wandb
wandb.login()

# ### Specifying settings/hyperparameters for our code below ###

# +
wandb_config = {}
wandb_config['batch_size'] = 10
wandb_config['base_lr'] = 0.01
wandb_config['model_arch'] = 'CustomCNN'
wandb_config['num_classes'] = 10
wandb_config['run_name'] = 'assignment_1'
### If you are using a CPU, please set wandb_config['use_gpu'] = 0 below. However, if you are using a GPU, leave it unchanged ####
wandb_config['use_gpu'] = 1
wandb_config['num_epochs'] = 2
wandb_config['git_dir'] = git_dir
# -

# By changing the above, different experiments can be run. For example, you can specify which model architecture to load, which dataset you will be loading, and so on.

# ### Data Loading ###
# The most common task many of you will be doing in your projects will be running a script on a new dataset. In PyTorch this is done using data loaders, and it is extremely important to understand how this works. In the next assignment, you will be writing your own dataloader. For now, we only expose you to basic data loading for the MNIST dataset, for which PyTorch provides easy functions.

# ### Let's load MNIST. The first time you run it, the dataset gets downloaded.
#
# Data Transforms tell PyTorch how to pre-process your data. Recall that images are usually stored with values between 0-255. One very common pre-processing for images is to normalize to be 0 mean and 1 standard deviation. This pre-processing makes the task easier for neural networks. There are many, many kinds of normalization in deep learning, the most basic one being those imposed on the image data while loading it.
# + data_transforms = {} data_transforms['train'] = torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,))]) data_transforms['test'] = torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,))]) # - # `torchvision.datasets.MNIST` allows you to load MNIST data. In future, we will be using our own `get_loader` function from above to load custom data. Notice that data_transforms are passed as argument while loading the data below. mnist_dataset = {} mnist_dataset['train'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = True, download = True, transform = data_transforms['train']) mnist_dataset['test'] = torchvision.datasets.MNIST('%s/datasets'%wandb_config['git_dir'], train = False, download = True, transform = data_transforms['test']) # #### Dataset vs Dataloader # Most deep learning datasets are huge. Can be as large as million data points. We want to keep our GPUs free to store intermediate calculations for neural networks, like gradients. We would not be able to load a million samples into the GPU (or even CPU) and do forward or backward passes on the network. # # So, samples are loaded in batches, and this method of gradient descent is called mini-batch gradient descent. `torch.utils.data.DataLoader` allows you to specify a pytorch dataset, and makes it easy to loop over it in batches. So, we leverage this to create a data loader from our above loaded MNIST dataset. # # The dataset itself only contains lists of where to find the inputs and outputs i.e. paths. The data loader defines the logic on loading this information into the GPU/CPU and so it can be passed into the neural net. 
# + data_loaders = {} data_loaders['train'] = torch.utils.data.DataLoader(mnist_dataset['train'], batch_size = wandb_config['batch_size'], shuffle = True) data_loaders['test'] = torch.utils.data.DataLoader(mnist_dataset['test'], batch_size = wandb_config['batch_size'], shuffle = False) data_sizes = {} data_sizes['train'] = len(mnist_dataset['train']) data_sizes['test'] = len(mnist_dataset['test']) # - # ### We will use the `get_model` functionality to load a CNN architecture. model = get_model(wandb_config['model_arch'], wandb_config['num_classes']) # ### Curious what the model architecture looks like? # # `get_model` is just a function in the file `res/models/models.py`. Stop here, open this file, and see what the function does. # + layout = widgets.Layout(width='auto', height='90px') #set width and height button = widgets.Button(description="Read the function?\n Click me!", layout=layout) output = widgets.Output() display(button, output) def on_button_clicked(b): with output: print("As you can see, the function simply returns an object of the class CustomCNN, which is defined in res/models/CustomCNN.py") print("This is our neural network model.") button.on_click(on_button_clicked) # - # #### Below we have the function which trains, tests and returns the best model weights. def model_pipeline(model, criterion, optimizer, dset_loaders, dset_sizes, hyperparameters): with wandb.init(project="HARVAR_BAI", config=hyperparameters): if hyperparameters['run_name']: wandb.run.name = hyperparameters['run_name'] config = wandb.config best_model = model best_acc = 0.0 print(config) print(config.num_epochs) for epoch_num in range(config.num_epochs): wandb.log({"Current Epoch": epoch_num}) model = train_model(model, criterion, optimizer, dset_loaders, dset_sizes, config) best_acc, best_model = test_model(model, best_acc, best_model, dset_loaders, dset_sizes, config) return best_model # #### The different steps of the train model function are annotated below inside the function. 
Read them step by step

# +
def train_model(model, criterion, optimizer, dset_loaders, dset_sizes, configs):
    print('Starting training epoch...')
    best_model = model
    best_acc = 0.0

    ### model.train() puts the model in training mode (layers such as dropout and batch norm behave accordingly).
    ### During testing we don't update weights, so gradients don't need to be stored there.
    model.train()
    running_loss = 0.0
    running_corrects = 0
    iters = 0

    ### We loop over the data loader we created above, simply using a for loop.
    for data in tqdm(dset_loaders['train']):
        inputs, labels = data

        ### If you are using a GPU, the script will move the loaded data to the GPU.
        ### If you are not using a GPU, ensure that wandb_configs['use_gpu'] is set to False above.
        if configs.use_gpu:
            inputs = inputs.float().cuda()
            labels = labels.long().cuda()
        else:
            print('WARNING: NOT USING GPU!')
            inputs = inputs.float()
            labels = labels.long()

        ### We set the gradients to zero, then calculate the outputs and the loss function.
        ### Gradients for this process are automatically calculated by PyTorch.
        optimizer.zero_grad()
        outputs = model(inputs)
        _, preds = torch.max(outputs.data, 1)
        loss = criterion(outputs, labels)

        ### At this point, the program has calculated the gradient of the loss w.r.t. the weights of our NN model.
        loss.backward()
        optimizer.step()
        ### optimizer.step() updates the model's weights using the calculated gradients.

        ### Let's store these and log them using wandb. They will be displayed in a nice online
        ### dashboard for you to see.
        iters += 1
        running_loss += loss.item()
        running_corrects += torch.sum(preds == labels.data)
        wandb.log({"train_running_loss": running_loss/float(iters*len(labels.data))})
        wandb.log({"train_running_corrects": running_corrects/float(iters*len(labels.data))})

    epoch_loss = float(running_loss) / dset_sizes['train']
    epoch_acc = float(running_corrects) / float(dset_sizes['train'])
    wandb.log({"train_accuracy": epoch_acc})
    wandb.log({"train_loss": epoch_loss})
    return model
# -

def test_model(model, best_acc, best_model, dset_loaders, dset_sizes, configs):
    print('Starting testing epoch...')
    model.eval()  ### tells PyTorch we are evaluating; we won't be updating weights while testing.
    running_corrects = 0
    iters = 0
    for data in tqdm(dset_loaders['test']):
        inputs, labels = data
        if configs.use_gpu:
            inputs = inputs.float().cuda()
            labels = labels.long().cuda()
        else:
            print('WARNING: NOT USING GPU!')
            inputs = inputs.float()
            labels = labels.long()
        outputs = model(inputs)
        _, preds = torch.max(outputs.data, 1)
        iters += 1
        running_corrects += torch.sum(preds == labels.data)
        wandb.log({"test_running_corrects": running_corrects/float(iters*len(labels.data))})
    epoch_acc = float(running_corrects) / float(dset_sizes['test'])
    wandb.log({"test_accuracy": epoch_acc})

    ### The code is very similar to the training loop. One major difference: we don't update weights.
    ### We only check whether this performance is the best so far; if so, we save this model as the best model so far.
    if epoch_acc > best_acc:
        best_acc = epoch_acc
        best_model = copy.deepcopy(model)
        wandb.log({"best_accuracy": best_acc})
    return best_acc, best_model

# +
### The criterion simply specifies which loss to use. Here we choose cross-entropy loss.
criterion = nn.CrossEntropyLoss()

### This tells the pipeline which optimizer to use. There are many options; here we choose Adam.
### The main difference between optimizers is how they update weights based on the calculated gradients.
optimizer_ft = optim.Adam(model.parameters(), lr=wandb_config['base_lr'])

if wandb_config['use_gpu']:
    criterion.cuda()
    model.cuda()
# -

### Creating the folder where our models will be saved.
if not os.path.isdir("%s/saved_models/" % wandb_config['git_dir']):
    os.mkdir("%s/saved_models/" % wandb_config['git_dir'])

# +
### Let's run it all, and save the final best model.
best_final_model = model_pipeline(model, criterion, optimizer_ft, data_loaders, data_sizes, wandb_config)

save_path = '%s/saved_models/%s_final.pt' % (wandb_config['git_dir'], wandb_config['run_name'])
with open(save_path, 'wb') as F:
    torch.save(best_final_model, F)
# -

# ### Congratulations!
#
# You just completed your first deep learning program - image classification for MNIST. This wraps up assignment 1. In the next assignment, we will see how you can make changes to the above-mentioned folders/files to adapt this code-base to your own research project.
#
# Deliverables for Assignment 1:
#
# ### Please run this assignment through to the end, and then make two submissions:
#
# - Download this notebook as an HTML file. Click File ---> Download as ---> HTML. Submit this on Canvas.
# - Add, commit and push these changes to your GitHub repository.
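The zero-grad → forward → loss → backward → step pattern in `train_model` above is not specific to PyTorch. As a framework-free illustration (a sketch, not part of the assignment code; the data and variable names are invented), here is one gradient-descent step for a tiny linear regression written with plain numpy:

```python
import numpy as np

# Tiny synthetic dataset: targets follow y = 2*x + 1 (invented for this sketch)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0     # model weights, analogous to model.parameters()
lr = 0.1            # learning rate, analogous to base_lr above

def mse_and_grads(w, b, X, y):
    preds = X[:, 0] * w + b               # forward pass: outputs = model(inputs)
    err = preds - y
    loss = np.mean(err ** 2)              # loss = criterion(outputs, labels)
    grad_w = 2 * np.mean(err * X[:, 0])   # what loss.backward() would compute
    grad_b = 2 * np.mean(err)
    return loss, grad_w, grad_b

loss_before, gw, gb = mse_and_grads(w, b, X, y)
w -= lr * gw                              # optimizer.step(): update the weights
b -= lr * gb
loss_after, _, _ = mse_and_grads(w, b, X, y)
print(loss_before, loss_after)            # the loss drops after one step
```

Running many such steps over batches of data is exactly what the training loop above does, with PyTorch computing `grad_w`/`grad_b` automatically.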
assignment_1/assignment_1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # widgets.image_cleaner

# fastai offers several widgets to support the workflow of a deep learning practitioner. The purpose of the widgets is to help you organize, clean, and prepare your data for your model. Widgets are separated by data type.

# + hide_input=true
from fastai.vision import *
from fastai.widgets import DatasetFormatter, ImageCleaner, ImageDownloader, download_google_images
from fastai.gen_doc.nbdoc import show_doc

# + hide_input=true
# %reload_ext autoreload
# %autoreload 2
# -

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, metrics=error_rate)
learn.fit_one_cycle(2)
learn.save('stage-1')

# We create a databunch with all the data in the training set and no validation set (DatasetFormatter uses only the training set)
db = (ImageItemList.from_folder(path)
      .no_split()
      .label_from_folder()
      .databunch())

learn = create_cnn(db, models.resnet18, metrics=[accuracy])
learn.load('stage-1');

# + hide_input=true
show_doc(DatasetFormatter)
# -

# The [`DatasetFormatter`](/widgets.image_cleaner.html#DatasetFormatter) class prepares your image dataset for widgets by returning a formatted [`DatasetTfm`](/vision.data.html#DatasetTfm) based on the [`DatasetType`](/basic_data.html#DatasetType) specified. Use `from_toplosses` to grab the most problematic images directly from your learner. Optionally, you can restrict the formatted dataset returned to `n_imgs`.

# + hide_input=true
show_doc(DatasetFormatter.from_similars)

# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.widgets.image_cleaner import *

# + hide_input=true
show_doc(DatasetFormatter.from_toplosses)

# + hide_input=true
show_doc(ImageCleaner)
# -

# [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) is for cleaning up images that don't belong in your dataset. It renders images in a row and gives you the opportunity to delete the file from your file system. To use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) we must first use `DatasetFormatter().from_toplosses` to get the suggested indices for misclassified images.

ds, idxs = DatasetFormatter().from_toplosses(learn)
ImageCleaner(ds, idxs, path)

# [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) does not change anything on disk (neither labels nor the existence of images). Instead, it creates a 'cleaned.csv' file in your data path from which you need to load your new databunch for the changes to be applied.

df = pd.read_csv(path/'cleaned.csv', header='infer')

# We create a databunch from our csv. We include the data in the training set and we don't use a validation set (DatasetFormatter uses only the training set)
np.random.seed(42)
db = (ImageItemList.from_df(df, path)
      .no_split()
      .label_from_df()
      .databunch(bs=64))

learn = create_cnn(db, models.resnet18, metrics=error_rate)
learn = learn.load('stage-1')

# You can then use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) again to find duplicates in the dataset. To do this, you can specify `duplicates=True` while calling ImageCleaner after getting the indices and dataset from `.from_similars`. Note that if you are using a layer's output which has dimensions [n_batches, n_features, 1, 1] then you don't need any pooling (this is the case with the last layer). The suggested use of `.from_similars()` with resnets is using the last layer and no pooling, like in the following cell.

ds, idxs = DatasetFormatter().from_similars(learn, layer_ls=[0,7,1], pool=None)
ImageCleaner(ds, idxs, path, duplicates=True)

show_doc(ImageDownloader)

# The [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) widget gives you a way to quickly bootstrap your image dataset without leaving the notebook. It searches and downloads images that match the search criteria and resolution / quality requirements and stores them on your filesystem within the provided `path`.
#
# Images for each search query (or label) are stored in a separate folder within `path`. For example, if you populate `tiger` with a `path` set up to `./data`, you'll get a folder `./data/tiger/` with the tiger images in it.
#
# [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) will automatically clean up and verify the downloaded images with [`verify_images()`](/vision.data.html#verify_images) after downloading them.

path = Path('./image_downloader_data')
ImageDownloader(path)

# After populating images with [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader), you can get an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) by calling `ImageDataBunch.from_folder(path, size=size)`, or using the data block API.

path.ls()

src = (ImageItemList.from_folder(path)
       .random_split_by_pct()
       .label_from_folder()
       .transform(get_transforms(), size=224))
db = src.databunch(bs=16)

learn = create_cnn(db, models.resnet34, metrics=[accuracy])
learn.fit_one_cycle(3)

# #### Downloading more than a hundred images
#
# To fetch more than a hundred images, [`ImageDownloader`](/widgets.image_downloader.html#ImageDownloader) uses `selenium` and `chromedriver` to scroll through the Google Images search results page and scrape image URLs. They're not required as dependencies by default. If you don't have them installed on your system, the widget will show you an error message.
#
# To install `selenium`, just `pip install selenium` in your fastai environment.
#
# **On a Mac**, you can install `chromedriver` with `brew cask install chromedriver`.
#
# **On Ubuntu**
# Take a look at the latest Chromedriver version available, then something like:
#
# ```
# wget https://chromedriver.storage.googleapis.com/2.45/chromedriver_linux64.zip
# unzip chromedriver_linux64.zip
# ```

# #### Downloading images in python scripts outside Jupyter notebooks

path = Path('image_downloader_data')
download_google_images(path, 'aussie shepherd', size='>1024*768', n_images=150)

show_doc(download_google_images)

# Note that downloading under 100 images doesn't require any dependencies other than fastai itself; however, downloading more than a hundred images [uses `selenium` and `chromedriver`](/widgets.image_cleaner.html#Downloading-more-than-a-hundred-images).
#
# `size` can be one of:
#
# ```
# '>400*300'
# '>640*480'
# '>800*600'
# '>1024*768'
# '>2MP'
# '>4MP'
# '>6MP'
# '>8MP'
# '>10MP'
# '>12MP'
# '>15MP'
# '>20MP'
# '>40MP'
# '>70MP'
# ```

# ## Methods

# ## Undocumented Methods - Methods moved below this line will intentionally be hidden

# ## New Methods - Please document or move to the undocumented section

# + hide_input=true
show_doc(ImageCleaner.make_dropdown_widget)
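Because `ImageCleaner` records its decisions in `cleaned.csv` rather than touching files on disk, it can be useful to inspect that file directly with pandas before rebuilding a databunch. The sketch below assumes a two-column `name`/`label` layout as used with `label_from_df` above; the rows themselves are invented for illustration:

```python
import pandas as pd
from io import StringIO

# Invented stand-in for a cleaned.csv produced by ImageCleaner:
# one row per image kept, with its (possibly re-assigned) label.
csv_text = """name,label
train/3/12345.png,3
train/7/67890.png,7
"""
df = pd.read_csv(StringIO(csv_text), header='infer')

# Rows deleted in the widget are simply absent from the file,
# and re-labelled rows carry their new label.
print(df['label'].value_counts())
```

Loading a databunch from this dataframe (as in the `ImageItemList.from_df` cell above) is what actually applies the cleaning decisions.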
docs_src/widgets.image_cleaner.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from qiskit import *
import numpy as np

# ### Step 1: Building circuits

# Create a quantum register and a circuit
q = QuantumRegister(2,'q')
circ = QuantumCircuit(q)

# +
## Add operations to the 2-qubit system

# Add a Hadamard gate H to qubit 0 to put it into superposition
circ.h(q[0])

# Add a controlled-NOT operation between qubits 0 and 1
circ.cx(q[0],q[1])
# -

## Visualize the circuit
circ.draw()

# ### Step 2: Set up the quantum simulation

# +
from qiskit import BasicAer

## Run the above circuit on a statevector simulator
backend = BasicAer.get_backend('statevector_simulator')
# -

# ### Step 3: Running the job

## Create a job and execute it
job = execute(circ,backend)

## Print the job status
job.status()

## Get the result from the job
result = job.result()

## Display the result
output = result.get_statevector(circ, decimals=2)
print(output)

# +
## Visualize the result
from qiskit.visualization import plot_state_city
plot_state_city(output)
# -
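The statevector printed above can be checked by hand with plain numpy, independently of Qiskit. This sketch multiplies out the same two gates, using Qiskit's little-endian qubit ordering, and recovers the Bell state (|00⟩ + |11⟩)/√2:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

# Qiskit orders qubits little-endian, so H on qubit 0 is I ⊗ H
h_on_q0 = np.kron(I, H)

# CNOT with control qubit 0 and target qubit 1,
# in the basis order |q1 q0⟩: 00, 01, 10, 11
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])

state = CNOT @ h_on_q0 @ np.array([1, 0, 0, 0])  # start from |00⟩
print(state)  # amplitude 1/sqrt(2) ≈ 0.707 on |00⟩ and |11⟩, matching the notebook's output
```

Measuring this state would give the correlated outcomes 00 and 11 with equal probability, which is what makes it entangled.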
sample/Hello World Quantum.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !date # # All clusters DE # + import anndata import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.patches as mpatches import matplotlib.colors as mcolors import scanpy as sc from scipy.stats import ks_2samp, ttest_ind import ast from scipy.sparse import csr_matrix import warnings warnings.filterwarnings('ignore') import sys sys.path.append('../../../../BYVSTZP_2020/dexpress') from dexpress import dexpress, utils, plot #sys.path.append('../../../BYVSTZP_2020/trackfig') #from trackfig.utils import get_notebook_name #from trackfig.trackfig import trackfig #TRACKFIG = "../../../BYVSTZP_2020/trackfig.txt" #NB = get_notebook_name() def yex(ax): lims = [ np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes ] # now plot both limits against eachother ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0) ax.set_aspect('equal') ax.set_xlim(lims) ax.set_ylim(lims) return ax fsize=20 plt.rcParams.update({'font.size': fsize}) # %config InlineBackend.figure_format = 'retina' # - cluster_cmap = { "Astro": (0.38823529411764707, 0.4745098039215686, 0.2235294117647059 ), # 637939, "Endo" : (0.5490196078431373, 0.6352941176470588, 0.3215686274509804 ), # 8ca252, "SMC" : (0.7098039215686275, 0.8117647058823529, 0.4196078431372549 ), # b5cf6b, "VLMC" : (0.807843137254902, 0.8588235294117647, 0.611764705882353 ), # cedb9c, "Low Quality" : (0,0,0), "L2/3 IT" : (0.9921568627450981, 0.6823529411764706, 0.4196078431372549 ), # fdae6b "L5 PT" : (0.9921568627450981, 0.8156862745098039, 0.6352941176470588 ), # fdd0a2 "L5 IT" : (0.5176470588235295, 0.23529411764705882, 0.2235294117647059 ), # 843c39 "L5/6 NP": "#D43F3A", "L6 CT" : (0.8392156862745098, 
0.3803921568627451, 0.4196078431372549 ), # d6616b "L6 IT" : (0.9058823529411765, 0.5882352941176471, 0.611764705882353 ), # e7969c "L6b" : (1.0, 0.4980392156862745, 0.054901960784313725), # ff7f0e "L6 IT Car3" : (1.0, 0.7333333333333333, 0.47058823529411764 ), # ffbb78 "Lamp5" : (0.19215686274509805, 0.5098039215686274, 0.7411764705882353 ), # 3182bd # blues "Sncg" : (0.4196078431372549, 0.6823529411764706, 0.8392156862745098 ), # 6baed6 "Vip" : (0.6196078431372549, 0.792156862745098, 0.8823529411764706 ), # 9ecae1 "Sst" : (0.7764705882352941, 0.8588235294117647, 0.9372549019607843 ), # c6dbef "Pvalb":(0.7372549019607844, 0.7411764705882353, 0.8627450980392157 ), # bcbddc } cluster_cmap = pd.read_csv('../../metadata_files/CTX_Hip_anno_SSv4.csv', index_col='cluster_label',usecols=['cluster_label','cluster_color']) cluster_cmap = cluster_cmap.drop_duplicates() cluster_cmap = cluster_cmap.cluster_color.apply(lambda x: mcolors.to_rgb(x) ) cluster_cmap = cluster_cmap.to_dict() # + num_TSNE = 2 state = 42 metric = "euclidean" n_neighbors = 30 num_PCA = 25 num_NCA = 10 # Filtering criteria cell_threshold = 250 disp_threshold = 0.001 mito_criteria = 10 n_top_genes = 5000 n_bins = 20 flavor="seurat" scale_clip = 10 # - import ast gene = anndata.read_h5ad("../../../data/notebook/revision/gene.h5ad") isoform = anndata.read_h5ad("../../../data/notebook/revision/isoform.h5ad") isoform = isoform[isoform.obs.eval("subclass_label != 'L5 IT'").values] gene = gene[gene.obs.eval("subclass_label != 'L5 IT'").values] gene_id = gene.var["gene_id"].values gene_names = gene.var["gene_name"].values.astype(str) # # Restrict to genes with more than one isoform gene = gene[:,gene.var["num_isoforms"]>1] # %%time transcripts = [] l = gene.var.txn_list.values for sublist in l: sublist = ast.literal_eval(sublist) for item in sublist: transcripts.append(item) isoform = isoform[:,isoform.var["transcript_id"].isin(transcripts)] print(gene) print(isoform) isoform = 
isoform[isoform.obs.sort_values(["cluster_label", "cell_id"]).index] gene = gene[gene.obs.sort_values(["cluster_label", "cell_id"]).index] False in (gene.obs.cluster_label == isoform.obs.cluster_label) isoform.obs # # determine the isoforms def violinplot(data, ax, **kwd): xticklabels = kwd.get("xticklabels", []) xticks = kwd.get("xticks", []) color = kwd.get("color", "#D43F3A") if len(xticks)==0: xticks = np.arange(len(data))+1; if len(xticklabels)==0: xticklabels = np.arange(len(data))+1; assert(len(xticks) == len(xticklabels)) violins = ax.violinplot(data, positions=xticks, showmeans=False, showmedians=False, showextrema=False) for vidx, v in enumerate(violins['bodies']): v.set_facecolor(color) v.set_edgecolor('black') v.set_alpha(1) for didx, d in enumerate(data): x = xticks[didx] xx = np.random.normal(x, 0.04, size=len(d)) # actual points ax.scatter(xx, d, s = 2, color="grey") # mean and error bars mean = np.mean(d) stdev = np.sqrt(np.var(d)) ax.scatter(x, mean,color="black") ax.vlines(x, mean - stdev, mean+stdev, color='black', linestyle='-', lw=2) return ax # # do for all clusters with a certain number of cells per cluster # + subclasses = np.sort(isoform.obs.subclass_label.unique()) subclasses = np.setdiff1d(subclasses, ["L5 IT", "Low Quality"]) # + # %%time n_cells = 20 de_clusters = [] de_genes = [] de_isoforms = [] for cidx, c in enumerate(subclasses): print(f"{cidx+1} of {len(subclasses)}: {c}") tmp_isoform = isoform[isoform.obs.eval(f"subclass_label == '{c}'")].copy() tmp_gene = gene[gene.obs.eval(f"subclass_label == '{c}'")].copy() big_enough_clusters = tmp_gene.obs["cluster_label"].value_counts()[tmp_gene.obs["cluster_label"].value_counts()>n_cells].index.values if len(big_enough_clusters) > 1: tmp_isoform = tmp_isoform[tmp_isoform.obs["cluster_label"].isin(big_enough_clusters)].copy() tmp_gene = tmp_gene[tmp_gene.obs["cluster_label"].isin(big_enough_clusters)].copy() #if tmp_isoform.shape[0] >= n_cells: # cluster must have at least 20 cells #this is 
checking subclasses, not clusters! # if tmp_isoform.obs.cluster_label.nunique()>1: de_clusters.append(c) ####### Genes mat = tmp_gene.layers["log1p"].todense() components = tmp_gene.obs.cell_id.values features = tmp_gene.var.gene_name.values assignments = tmp_gene.obs.cluster_label.values # parameters unique = np.unique(assignments) nan_cutoff = 0.9 # of elements in cluster corr_method = "bonferroni" p_raw, stat, es, nfeat = dexpress.dexpress(mat, components, features, assignments, nan_cutoff=nan_cutoff) p_corr = dexpress.correct_pval(p_raw, nfeat, corr_method) s = stat markers_gene = dexpress.make_table(assignments, features, p_raw, p_corr, es) # convert the 0 pvalues to the smallest possible float markers_gene["p_corr"][markers_gene.eval("p_corr == 0").values] = sys.float_info.min markers_gene["n_isoforms"] = markers_gene.name.map(gene.var.num_isoforms) de_genes.append(markers_gene) ######### Isoforms mat = tmp_isoform.layers["log1p"].todense() components = tmp_isoform.obs.cell_id.values features = tmp_isoform.var.transcript_name.values assignments = tmp_isoform.obs.cluster_label.values # parameters unique = np.unique(assignments) nan_cutoff = 0.9 # of elements in cluster corr_method = "bonferroni" p_raw, stat, es, nfeat = dexpress.dexpress(mat, components, features, assignments, nan_cutoff=nan_cutoff) p_corr = dexpress.correct_pval(p_raw, nfeat, corr_method) s = stat markers_isoform = dexpress.make_table(assignments, features, p_raw, p_corr, es) markers_isoform["p_corr"][markers_isoform.eval("p_corr == 0").values] = sys.float_info.min de_isoforms.append(markers_isoform) # + markers_gene = pd.concat(de_genes) markers_isoform = pd.concat(de_isoforms) markers_isoform["index"].value_counts() # - markers_gene len(markers_isoform.index) markers_isoform = markers_isoform.query('es>0') markers_gene = markers_gene.query('es>0') len(markers_isoform.index) # # Make the two tables, hidden by gene and not hidden by gene # + alpha =0.01 fc = 2 relevant_genes = 
markers_gene.p_corr < alpha markers_gene["index_name"] = markers_gene["index"] + "_" + markers_gene.name.apply(lambda x: "".join(x.split("_")[:-1])) markers_isoform["index_name"] = markers_isoform["index"] + "_" + markers_isoform.name.apply(lambda x: "-".join(x.split("-")[:-1])) setdiff = np.setdiff1d(markers_isoform["index_name"].values, markers_gene[relevant_genes]["index_name"].values) # + markers_isoform_hidden = markers_isoform[markers_isoform["index_name"].isin(setdiff)].sort_values(["es", "p_corr"]) markers_isoform_hidden = markers_isoform_hidden.query(f"p_corr < {alpha}") # - len(markers_isoform_hidden.index) alpha = 0.01 markers_gene = markers_gene.query(f"p_corr < {alpha}") markers_isoform = markers_isoform.query(f"p_corr < {alpha}") # write isoform_only markers_isoform.to_csv("../../../tables/unordered/all_clusters_DE_isoform_only.csv") markers_isoform_hidden.to_csv("../../../tables/unordered/all_clusters_DE.csv") markers_isoform markers_isoform.groupby("index")["name"].nunique().sum() markers_isoform_hidden.groupby("index")["name"].nunique().sum() # + identified_isoforms = markers_isoform_hidden["name"].drop_duplicates(keep='first') identified_genes = identified_isoforms.apply(lambda x: x.split("-")[0]) print("{} isoforms from {} genes identified.".format(identified_isoforms.shape[0], identified_genes.nunique())) # + identified_isoforms = markers_isoform["name"].drop_duplicates(keep='first') identified_genes = identified_isoforms.apply(lambda x: x.split("-")[0]) print("{} isoforms from {} genes identified.".format(identified_isoforms.shape[0], identified_genes.nunique())) # - markers_isoform.groupby("index")["name"].nunique().shape # # Visualize a hidden one markers_isoform_hidden['index'].value_counts() markers_isoform_hidden_tmp = markers_isoform_hidden #markers_isoform_hidden_tmp[:,markers_isoform_hidden_tmp["p_corr"]<.0001]#.sort_values("es").head(10) markers_isoform_hidden_tmp.query(f"p_corr < .0001").sort_values("es",ascending=False).head(10) 
specific_cluster = "145_L2/3 IT PAR" specific_isoform = "Rps6-204_ENSMUST00000136174.8" specific_gene = "".join(specific_isoform.split("-")[:-1]) subclass = " ".join(specific_cluster.split(" ")[:-1]) specific_gene subclass = 'L2/3 IT PPP' isoform_f = isoform[isoform.obs.eval(f"subclass_label == '{subclass}'")] gene_f = gene[gene.obs.eval(f"subclass_label == '{subclass}'")] #need to filter out subclasses that are too small big_enough_clusters = gene_f.obs["cluster_label"].value_counts()[gene_f.obs["cluster_label"].value_counts()>n_cells].index.values isoform_f = isoform_f[isoform_f.obs["cluster_label"].isin(big_enough_clusters)].copy() gene_f = gene_f[gene_f.obs["cluster_label"].isin(big_enough_clusters)].copy() gene_f.var[gene_f.var.gene_name.str.contains(specific_gene+"_")].gene_name.values specific_gene = gene_f.var[gene_f.var.gene_name.str.contains(specific_gene+"_")].gene_name.values[0] specific_gene isoform_f.var[isoform_f.var.gene_name.str.contains(specific_gene)].transcript_name.values def violinplot(data, ax, **kwd): xticklabels = kwd.get("xticklabels", []) xticks = kwd.get("xticks", []) selected = kwd.get("selected", None) color = kwd.get("color", "grey") if len(xticks)==0: xticks = np.arange(len(data))+1; if len(xticklabels)==0: xticklabels = np.arange(len(data))+1; assert(len(xticks) == len(xticklabels)) violins = ax.violinplot(data, positions=xticks, showmeans=False, showmedians=False, showextrema=False) for vidx, v in enumerate(violins['bodies']): v.set_facecolor(color) v.set_edgecolor('black') v.set_alpha(1) if selected == vidx: v.set_facecolor("#D43F3A") for didx, d in enumerate(data): x = xticks[didx] xx = np.random.normal(x, 0.04, size=len(d)) # actual points ax.scatter(xx, d, s = 5, color="white", edgecolor="black", linewidth=1) # mean and error bars mean = np.mean(d) stdev = np.sqrt(np.var(d)) ax.scatter(x, mean, color="lightgrey", edgecolor="black", linewidth=1, zorder=10) ax.vlines(x, mean - stdev, mean+stdev, color='lightgrey', linestyle='-', 
lw=2, zorder=9) ax.set(**{"xticks": xticks, "xticklabels":xticklabels}) return ax gene_f.obs.cluster_label.unique() # + fig, ax = plt.subplots(figsize=(15,10), nrows=2, sharex=True) fig.subplots_adjust(hspace=0, wspace=0) # Declare unique = np.unique(gene_f.obs.cluster_label) unique = np.delete(unique, np.where(unique=="Low Quality")) labels = unique lidx = np.arange(1, len(labels)+1) # the label locations midx = np.where(unique==specific_cluster)[0][0] plt.xticks(rotation=270) ## Plot # Gene x = [] for c in unique: #x.append(np.asarray(isoform_f[isoform_f.obs.cluster_label==c][:,isoform_f.var.transcript_name==specific_isoform].layers["log1p"].todense()).reshape(-1).tolist()) x.append(np.asarray(gene_f[gene_f.obs.cluster_label==c][:,gene_f.var.gene_name==specific_gene].layers["log1p"].todense()).reshape(-1).tolist()) violinplot(x, ax[0], selected=midx) # Isoform x = [] for c in unique: x.append(np.asarray(isoform_f[isoform_f.obs.cluster_label==c][:,isoform_f.var.transcript_name==specific_isoform].layers["log1p"].todense()).reshape(-1).tolist()) violinplot(x, ax[1], selected=midx, xticks=lidx, xticklabels=labels) ## Style ax[0].set(**{ "title":"{} gene & {} isoform expression".format(specific_gene.split("_")[0], specific_isoform.split("_")[0]), "ylabel":"Gene $log(TPM + 1)$", "ylim": -0.5 }) ymin, ymax = ax[0].get_ylim() ax[1].set(**{ "ylabel":"Isoform $log(TPM + 1)$", "ylim": (ymin, ymax), }) plt.savefig("../../../figures/cluster_DE_violin_{}.png".format(specific_gene.split("_")[0]), bbox_inches='tight',dpi=300) plt.show() # - from sklearn.neighbors import NeighborhoodComponentsAnalysis from sklearn.decomposition import TruncatedSVD from sklearn.manifold import TSNE from matplotlib import cm num_NCA = 5 state = 42 num_PCA = 10 num_TSNE = 2 metric = "euclidean" # + X = gene_f.X tsvd = TruncatedSVD(n_components=num_PCA) Y = tsvd.fit_transform(X) # + # NCA X = Y y = gene_f.obs.cluster_id.values.astype(int) nca = 
NeighborhoodComponentsAnalysis(n_components=num_NCA,random_state=state) YY = nca.fit_transform(X, y) # - tsne = TSNE(n_components=num_TSNE, metric=metric, random_state=state) YYY = tsne.fit_transform(YY) # + fig, ax = plt.subplots(figsize=(10,10)) x = YYY[:,0] y = YYY[:,1] c = cm.get_cmap("tab20b") assignments = gene_f.obs.cluster_label.values unique = np.unique(assignments) for uidx, u in enumerate(unique): mask = assignments==u xx = x[mask] yy = y[mask] ax.scatter(xx, yy, color=c(uidx*3), cmap="tab20b", label=u) ax.legend(bbox_to_anchor=(1, 0.5)) ax.set_axis_off() plt.show() # + complement_color = (0.8, 0.8, 0.8, 1.0) fig, ax = plt.subplots(figsize=(30,10), ncols=3) x = YYY[:,0] y = YYY[:,1] c = np.asarray(gene_f[:, gene_f.var.gene_name==specific_gene].layers["log1p"].todense()).reshape(-1) cmap="Greys" alpha = 0.75 ax[0].set_title("Non-differential gene: {}".format(specific_gene.split("_")[0])) ax[0].scatter(x, y, c=c, cmap=cmap, alpha=alpha) ax[0].set_axis_off() x = YYY[:,0] y = YYY[:,1] c = np.asarray(isoform_f[:, isoform_f.var.transcript_name==specific_isoform].layers["log1p"].todense()).reshape(-1) cmap="Greys" alpha = 0.75 ax[1].set_title("Differential isoform: {}".format(specific_isoform.split("_")[0])) ax[1].scatter(x, y, c=c, cmap=cmap, alpha=alpha) ax[1].set_axis_off() x = YYY[:,0] y = YYY[:,1] c = gene_f.obs["cluster_id"].values.astype(int) c = gene_f.obs["cluster_label"]==specific_cluster alpha=0.75 cmap="nipy_spectral_r" ax[2].scatter(x, y, c=c, cmap=cmap, alpha=alpha) ax[2].set_axis_off() #ax[2].set_title("Cluster: {}".format(specific_cluster)) plt.savefig("../../../figures/cluster_DE_nca_{}.png".format(specific_gene.split("_")[0]), bbox_inches='tight',dpi=300) plt.show() # -
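The differential-expression steps above rely on `dexpress.correct_pval` with `corr_method = "bonferroni"`. The correction itself is simple: multiply each raw p-value by the number of tests and clip at 1. A minimal numpy sketch (not the dexpress implementation, whose exact signature may differ):

```python
import numpy as np

def bonferroni(p_raw, n_tests):
    """Multiply raw p-values by the number of tests performed, clipping at 1."""
    return np.minimum(np.asarray(p_raw, dtype=float) * n_tests, 1.0)

p_raw = np.array([0.001, 0.02, 0.4])
p_corr = bonferroni(p_raw, n_tests=3)
print(p_corr)  # [0.003 0.06  1.  ]
```

This is why the notebook replaces corrected p-values of exactly 0 with `sys.float_info.min`: after multiplying, an underflowed zero would otherwise be indistinguishable from a genuinely tiny p-value when taking logs downstream.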
analysis/notebooks/cluster/final-all_clusters_DE.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.5 64-bit # name: python3 # --- # + [markdown] id="ZeNH2z6OzfND" colab_type="text" # #**Trabalhando com Planilhas do Excel** # + id="Jwz3_uGfzmYp" colab_type="code" colab={} #Importando a biblioteca import pandas as pd # + id="pCTDd0YKzqkc" colab_type="code" colab={} #Leitura dos arquivos df1 = pd.read_excel("Aracaju.xlsx") df2 = pd.read_excel("Fortaleza.xlsx") df3 = pd.read_excel("Natal.xlsx") df4 = pd.read_excel("Recife.xlsx") df5 = pd.read_excel("Salvador.xlsx") # + id="bt3rZ7tWBPj7" colab_type="code" outputId="ffe2a67a-51ad-4749-d941-81cdd22dd29b" colab={"base_uri": "https://localhost:8080/", "height": 204} df5.head() # + id="7CUnX6220WVx" colab_type="code" colab={} #juntando todos os arquivos df = pd.concat([df1,df2,df3,df4,df5]) # + id="3ZFau-ii08Lr" colab_type="code" outputId="f1ed7fc3-ac01-4af0-cfae-392c343ffb03" colab={"base_uri": "https://localhost:8080/", "height": 204} #Exibindo as 5 primeiras linhas df.head() # + id="oURFLxhL09Uq" colab_type="code" outputId="15c819f5-a1c0-42ac-a1be-4727c424340a" colab={"base_uri": "https://localhost:8080/", "height": 204} #Exibindo as 5 últimas linhas df.tail() # + id="j8eDDblOBsRG" colab_type="code" outputId="efe39301-6ece-4446-a3b9-60cb39095e69" colab={"base_uri": "https://localhost:8080/", "height": 204} df.sample(5) # + id="kw0zQfVL0_-L" colab_type="code" outputId="3617ad27-63b0-483f-ea6e-1a8413272afc" colab={"base_uri": "https://localhost:8080/", "height": 119} #Verificando o tipo de dado de cada coluna df.dtypes # + id="JB2rkM0b1kKF" colab_type="code" colab={} #Alterando o tipo de dado da coluna LojaID df["LojaID"] = df["LojaID"].astype("object") # + id="3t1uir2H1w3x" colab_type="code" outputId="a8d766f2-504e-4fcb-dc09-842b4c5218b6" colab={"base_uri": "https://localhost:8080/", "height": 119} df.dtypes # + id="B0Z8PPuJ19dc" 
colab_type="code" outputId="10d2f11c-d421-4434-fe61-dfa537b7d6bf" colab={"base_uri": "https://localhost:8080/", "height": 204} df.head() # + [markdown] id="br1B_k4v2HVF" colab_type="text" # **Tratando valores faltantes** # + id="J5L9EehP2MQ_" colab_type="code" outputId="91e21217-879a-426d-8b2b-fb15e77b2f87" colab={"base_uri": "https://localhost:8080/", "height": 119} #Consultando linhas com valores faltantes df.isnull().sum() # + id="Pbq2ztpN3Qn8" colab_type="code" colab={} #Substituindo os valores nulos pela média df["Vendas"].fillna(df["Vendas"].mean(), inplace=True) # + id="mD0kfsgSC4Qm" colab_type="code" outputId="ed9d1efa-e3b7-479e-9119-bab340148876" colab={"base_uri": "https://localhost:8080/", "height": 34} df["Vendas"].mean() # + id="lA5QVn5N4C-A" colab_type="code" outputId="84874d37-3368-452f-bed6-43782dea6cf6" colab={"base_uri": "https://localhost:8080/", "height": 119} df.isnull().sum() # + id="ds7pcl-ZCzb_" colab_type="code" outputId="e37c7b13-8bd1-46e2-e3f3-104e7fce8f8a" colab={"base_uri": "https://localhost:8080/", "height": 514} df.sample(15) # + id="mMzEuPzg4N7U" colab_type="code" colab={} #Substituindo os valores nulos por zero df["Vendas"].fillna(0, inplace=True) # + id="pS7Hw6Df4Z7x" colab_type="code" colab={} #Apagando as linhas com valores nulos df.dropna(inplace=True) # + id="iCpMj9MD4mW4" colab_type="code" colab={} #Apagando as linhas com valores nulos com base apenas em 1 coluna df.dropna(subset=["Vendas"], inplace=True) # + id="LYGy2VqH8uaM" colab_type="code" colab={} #Removendo linhas que estejam com valores faltantes em todas as colunas df.dropna(how="all", inplace=True) # + [markdown] id="6qEyt17h9IwX" colab_type="text" # **Criando colunas novas** # + id="1HAAiPkh1yIN" colab_type="code" colab={} #Criando a coluna de receita df["Receita"] = df["Vendas"].mul(df["Qtde"]) # + id="_gMBlvMq5fPj" colab_type="code" outputId="a971500e-05cb-417a-f782-9a7737c2728f" colab={"base_uri": "https://localhost:8080/", "height": 204} df.head() # + 
id="DyU5SIhB9Q8w" colab_type="code" colab={} df["Receita/Vendas"] = df["Receita"] / df["Vendas"] # + id="YfMgO16q9m8F" colab_type="code" outputId="6b323566-e8aa-4a51-8623-cd0c7ec0cd69" colab={"base_uri": "https://localhost:8080/", "height": 204} df.head() # + id="8uy9S6JZ3DB4" colab_type="code" outputId="1e059579-152c-4151-8aaf-2f61e689644b" colab={"base_uri": "https://localhost:8080/", "height": 34} #Retornando a maior receita df["Receita"].max() # + id="y0eoDEcQ5cZC" colab_type="code" outputId="4a520c61-b418-4bdc-8196-f4ea5eee07c8" colab={"base_uri": "https://localhost:8080/", "height": 34} #Retornando a menor receita df["Receita"].min() # + id="gX87zZJ45p5e" colab_type="code" outputId="bf1cce13-9157-4752-cf40-32b31f0c6977" colab={"base_uri": "https://localhost:8080/", "height": 142} #nlargest df.nlargest(3, "Receita") # + id="gPK25dF_5w8q" colab_type="code" outputId="dc8bdffa-f584-4baa-a1d9-67bf4006048a" colab={"base_uri": "https://localhost:8080/", "height": 142} #nsamllest df.nsmallest(3, "Receita") # + id="VS5Bu2fQ53fG" colab_type="code" outputId="8d41e480-5db7-4175-cb4e-184f91a52a38" colab={"base_uri": "https://localhost:8080/", "height": 136} #Agrupamento por cidade df.groupby("Cidade")["Receita"].sum() # + id="wYZDthyQ6DMI" colab_type="code" outputId="524d93a9-0246-46fe-8bfe-9451bc52b65a" colab={"base_uri": "https://localhost:8080/", "height": 359} #Ordenando o conjunto de dados df.sort_values("Receita", ascending=False).head(10) # + [markdown] id="6cA7C78N6sV2" colab_type="text" # #**Trabalhando com datas** # + id="bRaEoWjR6deI" colab_type="code" colab={} #Trasnformando a coluna de data em tipo inteiro df["Data"] = df["Data"].astype("int64") # + id="dz5kfhncHi7Y" colab_type="code" outputId="275eb110-54a8-450e-b8c6-165961d670b7" colab={"base_uri": "https://localhost:8080/", "height": 153} #Verificando o tipo de dado de cada coluna df.dtypes # + id="oQhrdhlyHkED" colab_type="code" colab={} #Transformando coluna de data em data df["Data"] = 
pd.to_datetime(df["Data"]) # + id="F5zeaq6tH1P0" colab_type="code" outputId="5b8043b2-c63b-4322-df5a-dd41737591c6" colab={"base_uri": "https://localhost:8080/", "height": 153} df.dtypes # + id="c027o0jyH2qg" colab_type="code" outputId="4db1851a-4712-44b6-f297-ec2c16d61f67" colab={"base_uri": "https://localhost:8080/", "height": 85} #Agrupamento por ano df.groupby(df["Data"].dt.year)["Receita"].sum() # + id="kX_HYKgQIEPD" colab_type="code" colab={} #Criando uma nova coluna com o ano df["Ano_Venda"] = df["Data"].dt.year # + id="MJjiTggaISUi" colab_type="code" outputId="664952a0-57c6-4f6d-ad6f-03a5d846d557" colab={"base_uri": "https://localhost:8080/", "height": 204} df.sample(5) # + id="QPNcE_6rIT6F" colab_type="code" colab={} #Extraindo o mês e o dia df["mes_venda"], df["dia_venda"] = (df["Data"].dt.month, df["Data"].dt.day) # + id="9AOp3NNfIrah" colab_type="code" outputId="be23b592-06ec-4eab-88f0-46da74f85544" colab={"base_uri": "https://localhost:8080/", "height": 204} df.sample(5) # + id="r0la0X6aIuTR" colab_type="code" outputId="bd244f1b-6e69-4991-c290-f071af4e0d60" colab={"base_uri": "https://localhost:8080/", "height": 34} #Retornando a data mais antiga df["Data"].min() # + id="7fxtFDflI7L0" colab_type="code" colab={} #Calculando a diferença de dias df["diferenca_dias"] = df["Data"] - df["Data"].min() # + id="997DVEidJKNG" colab_type="code" outputId="3cdb03c2-cb8d-4891-b10b-27474beb1806" colab={"base_uri": "https://localhost:8080/", "height": 204} df.sample(5) # + id="KHAOU_EuJLkb" colab_type="code" colab={} #Criando a coluna de trimestre df["trimestre_venda"] = df["Data"].dt.quarter # + id="OWZos9y5JbDQ" colab_type="code" outputId="79806c1f-ed51-4705-d0b9-cbb2659f5844" colab={"base_uri": "https://localhost:8080/", "height": 204} df.sample(5) # + id="ie2WTtU5Jc-G" colab_type="code" colab={} #Filtrando as vendas de 2019 do mês de março vendas_marco_19 = df.loc[(df["Data"].dt.year == 2019) & (df["Data"].dt.month == 3)] # + id="4x6GgzC9KB_e" colab_type="code" 
# outputId="c65a0365-31d1-4876-96a9-803c355a4bed" colab={"base_uri": "https://localhost:8080/", "height": 669}
vendas_marco_19.sample(20)

# + [markdown] id="G2RavTidRF8A" colab_type="text"
# #**Data visualization**

# + id="JmZ6dy1xKEtC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="fcc0d3fd-32a1-4059-c89f-d14c033842ea"
df["LojaID"].value_counts(ascending=False)

# + id="LCh4ANjpRDiU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 468} outputId="4d5393d3-98ee-4067-8a2c-22a0b85975c0"
#Bar chart
df["LojaID"].value_counts(ascending=False).plot.bar()

# + id="hMiNsqBKR3K2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 447} outputId="f57f2b15-b9b2-436d-fbee-51cd724abe5f"
#Horizontal bar chart
df["LojaID"].value_counts().plot.barh()

# + id="rg7ehfpzSE2W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 430} outputId="8f9170af-f986-4188-da92-5d4cef66e860"
#Horizontal bar chart
df["LojaID"].value_counts(ascending=True).plot.barh();

# + id="pJ0gpi2_SKrh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="99f08166-8040-4e6b-8370-5e0853fb9b80"
#Pie chart
df.groupby(df["Data"].dt.year)["Receita"].sum().plot.pie()

# + id="2y-7DsTsTSMV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="416e4eee-3d9f-4470-cd20-5d4e131716b2"
#Total sales per city
df["Cidade"].value_counts()

# + id="6IWtDupKSmDn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 522} outputId="ee683895-a27a-4032-df49-224b7e338b3f"
#Adding a title and renaming the axes
import matplotlib.pyplot as plt
df["Cidade"].value_counts().plot.bar(title="Total vendas por Cidade")
plt.xlabel("Cidade")
plt.ylabel("Total Vendas");

# + id="Gtp8f-8wTK82" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 522}
# outputId="8cc99270-325b-4ee0-a0e1-2edbf11202a7"
#Changing the color
df["Cidade"].value_counts().plot.bar(title="Total vendas por Cidade", color="red")
plt.xlabel("Cidade")
plt.ylabel("Total Vendas");

# + id="7ee4w2uHVBHJ" colab_type="code" colab={}
#Changing the style
plt.style.use("ggplot")

# + id="QhimePNYVRnR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 477} outputId="b9c36ccc-d7a9-4510-f29f-95a981d88d24"
df.groupby(df["mes_venda"])["Qtde"].sum().plot(title = "Total Produtos vendidos x mês")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos")
plt.legend();

# + id="N8-WMDAZVj5P" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="5d0f1192-6ef7-4268-9a7a-ec8f295ea6f3"
df.groupby(df["mes_venda"])["Qtde"].sum()

# + id="FwhIPO6DVoRD" colab_type="code" colab={}
#Selecting only the 2019 sales
df_2019 = df[df["Ano_Venda"] == 2019]

# + id="Pd33t7PKj360" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="27497ff8-f68d-4278-b7a1-e6333d773283"
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum()

# + id="7wdwXD2RX9Qo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 455} outputId="8512105b-ae83-434b-89f4-da12f3e52a2d"
#Total products sold per month
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum().plot(marker = "o")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos")
plt.legend();

# + id="AHLzBwDpY4he" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 430} outputId="3a74b228-183a-4dde-dab1-f709419621be"
#Histogram
plt.hist(df["Qtde"], color="orangered");

# + id="bmET28xDacQb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 430} outputId="8e7cd981-3d01-44b5-895b-16b27f431b8a"
plt.scatter(x=df_2019["dia_venda"], y = df_2019["Receita"]);

# + id="1tFrsehWc7IN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 478} outputId="b185e26f-1aa4-4d09-f2aa-e5bc3ee748c6"
#Saving as PNG
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum().plot(marker = "v")
plt.title("Quantidade de produtos vendidos x mês")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos");
plt.legend()
plt.savefig("grafico QTDE x MES.png")

# + id="mIcmLx2iktxl" colab_type="code" colab={}
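The date pipeline in the cells above (parsing the `Data` column with `pd.to_datetime`, deriving `Ano_Venda`/`mes_venda` through the `.dt` accessor, and filtering March 2019 with a boolean mask) can be sketched on a tiny self-contained frame. The column names mirror the dataset, but the three rows below are invented for illustration:

```python
import pandas as pd

# Tiny synthetic frame standing in for the sales dataset (values are made up)
df = pd.DataFrame({
    "Data": ["2019-03-01", "2019-03-15", "2018-12-20"],
    "Receita": [100.0, 250.0, 80.0],
})

# Parse the text column into datetime64 values
df["Data"] = pd.to_datetime(df["Data"])

# Derive year and month columns through the .dt accessor
df["Ano_Venda"] = df["Data"].dt.year
df["mes_venda"] = df["Data"].dt.month

# Boolean mask: keep only March 2019, as in vendas_marco_19 above
vendas_marco_19 = df.loc[(df["Data"].dt.year == 2019) & (df["Data"].dt.month == 3)]
print(len(vendas_marco_19))  # 2

# Yearly revenue, as in the groupby-by-year cell
print(df.groupby(df["Data"].dt.year)["Receita"].sum().to_dict())  # {2018: 80.0, 2019: 350.0}
```

Note the parentheses around each comparison before combining them with `&`: the operator binds more tightly than `==`, so omitting them would not filter as intended.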
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# #!/usr/bin/python
# -*- coding: utf-8 -*-

"""This notebook creates the statistics of TAG in COUNTRY in YEAR"""

import inspect, os, sys

try:
    import pywikibot as pb
    from pywikibot import pagegenerators, textlib
    from pywikibot.specialbots import UploadRobot
except:
    current_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]))
    folder_parts = current_folder.split(os.sep)
    pywikibot_folder = os.sep.join(folder_parts[:-1])

    if current_folder not in sys.path:
        sys.path.insert(0, current_folder)
    if pywikibot_folder not in sys.path:
        sys.path.insert(0, pywikibot_folder)

    import pywikibot as pb
    from pywikibot import pagegenerators, textlib
    from pywikibot.specialbots import UploadRobot

import mwparserfromhell as mwh
# -

from modules.wmtools import flickr_ripper, \
    get_image_wikitext, \
    get_project_name, \
    get_registration_time, \
    heat_color, \
    upload_to_commons, \
    upload_to_commons2, \
    wrap_label

import pandas as pd
import numpy as np

from mako.template import Template
from io import StringIO
from datetime import datetime
from urllib.parse import urlencode

import requests
import json

from itertools import groupby
from operator import itemgetter
from functools import reduce

import math
import random

from geojson import Feature, Point, FeatureCollection
import geojson

# +
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.ticker import MaxNLocator
import seaborn as sns
import pylab

sns.set_style("darkgrid")
# #%matplotlib inline

# +
# Project parameters
YEAR = 2017
TAG = 'WLE'
TAG_EXT = "Wiki Loves Earth"
COUNTRY = "Spain"

BASE_NAME = "Commons:Wiki Loves in {2}/{1}/{0}".format(YEAR, TAG_EXT, COUNTRY)
LOG_PAGE = BASE_NAME + '/Log'
STATISTICS_PAGE = BASE_NAME \
+ '/Stats'
GALLERY_QI = BASE_NAME + '/QI'
GALLERY_PAGE = BASE_NAME + '/User Gallery'
MAP_WLE_PAGE = BASE_NAME + '/Map'

BASE_SITE_DB_NAME = "Commons:Wiki Loves in {1}/{0}".format(TAG_EXT, COUNTRY)
SITE_DB_PAGE = BASE_SITE_DB_NAME + "/Sites DB"

VALID_NAMESPACES = ['0', '4', '100', '102', '104']  # see https://es.wikipedia.org/wiki/Ayuda:Espacio_de_nombres

DAYS_BEFORE_REGISTRATION = 15

WLE_FINALIST_CATEGORY = {
    "2015": "Category:Finalists of {0} in {2} {1}".format(TAG_EXT, YEAR, COUNTRY),
    "2016": "Category:Evaluation of images from {0} {1} in {2} - Final".format(TAG_EXT, YEAR, COUNTRY),
    "2017": "Category:Evaluation of images from {0} {1} in {2} - Final".format(TAG_EXT, YEAR, COUNTRY)
}

commons_site = pb.Site('commons', 'commons')
# -

# Base URL for interacting with MediaWiki API
MW_API_BASE_URL = 'https://commons.wikimedia.org/w/api.php'

MW_API_QUERY_STRING = {"action": "query",
                       "format": "json",
                       "gulimit": "500",
                       "prop": "globalusage",
                       "guprop": "url|namespace",
                       "titles": None
                       }

# Different sizes for the images created
figsize=[15., 10.]
figsize_mid=[15., 15.]
figsize_half=[8., 10.]
figsize_high=[15., 30.]
figsize_low=[15., 6.]

# +
# Folder management (templates, images...)
cwd = os.getcwd() images_directory = os.path.join(cwd, 'images') if not os.path.exists(images_directory): os.makedirs(images_directory) templates_directory = os.path.join(cwd, 'templates') # - now = (datetime.now().strftime("%Y-%m-%d")) # Image description page template_file = os.path.join(templates_directory, 'file.wiki') fh = open(template_file, 'r', encoding = "utf-8") image_wikitext = fh.read() fh.close() annexes = { 'ES-AN': [u'Anexo:Lugares de importancia comunitaria de Andalucía', 'Andalusia'], 'ES-AR': [u'Anexo:Lugares de importancia comunitaria de Aragón', 'Aragon'], 'ES-AS': [u'Anexo:Lugares de importancia comunitaria de Asturias', 'Asturias'], 'ES-CB': [u'Anexo:Lugares de importancia comunitaria de Cantabria', 'Cantabria'], 'ES-CM': [u'Anexo:Lugares de importancia comunitaria de Castilla-La Mancha', 'Castile-La Mancha'], 'ES-CL': [u'Anexo:Lugares de importancia comunitaria de Castilla y León', u'Castile and León'], 'ES-CT': [u'Anexo:Lugares de importancia comunitaria de Cataluña', 'Catalonia'], 'ES-MD': [u'Anexo:Lugares de importancia comunitaria de la Comunidad de Madrid', 'Community of Madrid'], 'ES-VC': [u'Anexo:Lugares de importancia comunitaria de la Comunidad Valenciana', 'Valencian Community'], 'ES-EX': [u'Anexo:Lugares de importancia comunitaria de Extremadura', 'Extremadura'], 'ES-IB': [u'Anexo:Lugares de importancia comunitaria de las Islas Baleares', 'Balearic Islands'], 'ES-CN': [u'Anexo:Lugares de importancia comunitaria de las Islas Canarias', 'Canary Islands'], 'ES-GA': [u'Anexo:Lugares de importancia comunitaria de Galicia', 'Galicia'], 'ES-RI': [u'Anexo:Lugares de importancia comunitaria de La Rioja', 'La Rioja'], 'ES-NC': [u'Anexo:Lugares de importancia comunitaria de Navarra', 'Navarre'], 'ES-MC': [u'Anexo:Lugares de importancia comunitaria de la Región de Murcia', 'Region of Murcia'], 'ES-PV': [u'Anexo:Lugares de importancia comunitaria del País Vasco', 'Basque Country'], 'ES-CE': [u'Anexo:Lugares de importancia comunitaria de Ceuta y 
Melilla', 'Ceuta'], 'ES-ML': [u'Anexo:Lugares de importancia comunitaria de Ceuta y Melilla', 'Melilla'], 'ES-MAGRAMA': [u'Anexo:Lugares de importancia comunitaria del MAGRAMA', 'MAGRAMA'] } # Seaborn palette for autonomous communities autcom_palette = [i[1:] for i in sns.color_palette('hls', 20).as_hex()] autcoms = [annexes[key][1] for key in annexes] autcom_colors = {autcom: autcom_palette[i] for i, autcom in enumerate(autcoms)} autcom_colors # ## Auxiliary functions # + def expand_itemid (_list): new_list = [{"itemid": i, "name": site_df[site_df['code'] == i]['name'].values[0], "category": site_df[site_df['code'] == i]['commons_cat'].values[0]} if type(site_df[site_df['code'] == i]['commons_cat'].values[0]) is str else {"itemid": i, "name": site_df[site_df['code'] == i]['name'].values[0], "category": ''} for i in _list] if len(new_list) > 0: new_list = sorted(new_list, key=lambda k: k['name']) return new_list def decode_list (_list) : try: new_list = _list[:] except : new_list = [] return new_list # - def to_geojson (row) : """For each site of community importance, identified by row['code'], this function creates a proper GeoJSON Feature""" images_subset_df = images_df[(images_df['code'] == row['code']) & (images_df['width'] > images_df['height'])] if len (images_subset_df.index) == 0: images_subset_df = images_df[images_df['code'] == row['code']] if len(images_subset_df[images_subset_df['qi'] == 'qi']) > 0 : popup_image = images_subset_df[images_subset_df['qi'] == 'qi'].sample(1, random_state=0)['image_title'].values[0] elif len(images_subset_df[images_subset_df['finalist'] == 'finalist']) > 0 : popup_image = images_subset_df[images_subset_df['finalist'] == 'finalist'].sample(1, random_state=0)['image_title'].values[0] else : popup_image = images_subset_df.sample(1, random_state=0)['image_title'].values[0] properties = {"description": "[[File:{0}|150px]]".format(popup_image), "title": "[[:Category:Images of a site of community importance with code {0} from {2} 
{1} in {4}|{3}]]".format(row['code'], YEAR, TAG_EXT, row['name'], COUNTRY), "marker-size": "small", "marker-symbol": "circle", "marker-color": autcom_colors[row['aut_com']]} feature = Feature(geometry=Point((float(row['longitude']), float(row['latitude']))), properties=properties ) return feature # ## Retrieval of the sites of community importance # + # retrieval of the WLE SCI (site of community importance) log pb.output('Retrieving --> WLE site of community importance list') site_list_page = pb.Page(commons_site, SITE_DB_PAGE) site_list_text = StringIO(site_list_page.text[site_list_page.text.find('\n') + 1:site_list_page.text.rfind('\n')]) site_df = pd.read_csv(site_list_text, sep=";", index_col=False, names=["name", "code", "magrama_url", "community", "bio_region", "continent", "min_altitude", "max_altitude", "avg_altitude", "longitude", "latitude", "area", "marine_percentage", "marine_area", "image", "commons_cat", "wikidata_id"]) pb.output('Retrieved --> WLE site of community importance list') # - site_df["aut_com"] = site_df["community"].apply(lambda x: annexes[x][1]) site_df.head() site_length = len(site_df.index) site_length valid_sites = site_df['code'].values valid_sites # ## Retrieval of the images log # + pb.output('Retrieving --> {1} {0} in {2} images list from cache'.format(YEAR, TAG, COUNTRY)) list_page = pb.Page(commons_site, LOG_PAGE) list_page_text = StringIO(list_page.text[list_page.text.find('\n') + 1:list_page.text.rfind('\n')]) images_df = pd.read_csv(list_page_text, sep=";", index_col=False, names=['image_title', 'code', 'uploader', 'uploader_registration', 'timestamp', 'date', 'size', 'height', 'width', 'qi', 'finalist'] ).fillna('') pb.output('Retrieved --> {1} {0} in {2} images list from cache'.format(YEAR, TAG, COUNTRY)) images_df['timestamp'] = pd.to_datetime(images_df['timestamp'], format="%Y-%m-%d %H:%M:%S") images_df['days_from_user_reg'] = images_df.apply(lambda row: (row['timestamp'] - pd.to_datetime(row['uploader_registration'], 
format="%Y-%m-%d")).days, axis=1)
images_df['days_from_creation'] = images_df.apply(lambda row: (row['timestamp'] - pd.to_datetime(row['date'], format="%Y-%m-%d")).days, axis=1)
images_df.set_index(["timestamp"], inplace=True)
del images_df.index.name

total_images_length = len(images_df)
total_images_length
# -

images_df.head()

# ### Quality images

qi_list = images_df[images_df['qi'] == 'qi']['image_title']
qi_list

qi_length = len(qi_list)
qi_length

# ## Uploaders

# ### Uploaders age

uploaders = images_df.groupby(['uploader']).min()['days_from_user_reg']
uploaders

authors_length = len(uploaders.index)
authors_length

days_from_user_reg = uploaders[~uploaders.index.str.contains('flickr')].value_counts().sort_index(ascending=False)
days_from_user_reg

age = pd.cut(uploaders, bins=[0, 15, 365, 730, 3650, 5000], include_lowest=True).value_counts()

# +
padding = { "2017": 3, "2016": 3, "2015": 6 }
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize)
p = sns.barplot(ax=ax, y=age.index, x=age.values)
p.set_xlabel("# Contestants", fontsize=18)
p.set_ylabel("Age (time from registration)", fontsize=18)
p.set_title(label='{1} {0} in {2}: Contestant age'.format(YEAR, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
p.set_yticklabels(['Less than 15 days\n(new)', 'Between 15 days\nand one year', 'Between one\nand two years', 'Between two\nand ten years', 'More than ten years'])

for patch in ax.patches:
    ax.text(patch.get_width() + PADDING, patch.get_y() + patch.get_height()/2., '{:1.0f}'.format(patch.get_width()), ha="center", fontsize=14)

desc = get_image_wikitext(image_wikitext, '{1} {0} in {2}: Contestant age. 
Time from registration to first contribution to contest.'.format(YEAR, TAG_EXT, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Contestant age.png".format(YEAR, TAG, COUNTRY), desc) # - # ### New uploaders new_uploaders = uploaders[uploaders<DAYS_BEFORE_REGISTRATION].index new_uploaders new_uploaders_length = len(new_uploaders) new_uploaders_length # ### Images by uploader images_per_uploader = images_df['uploader'].value_counts() images_per_uploader = images_per_uploader.rename('images') images_per_uploader = images_per_uploader.iloc[np.lexsort([images_per_uploader.index, -images_per_uploader.values])] images_per_uploader new_uploaders_contributions = images_per_uploader[new_uploaders] new_uploaders_contributions # THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION top_uploaders ={ "2017": 16, "2016": 18, "2015": 19 } TOP_UPLOADERS = top_uploaders[str(YEAR)] remaining_images_per_uploader = images_per_uploader[:TOP_UPLOADERS] remaining_images_per_uploader.index = remaining_images_per_uploader.index.map(flickr_ripper).map(lambda x: wrap_label(x, 16)) remaining_images_per_uploader # + padding = { "2017": 2, "2016": 3, "2015": 4 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, x=remaining_images_per_uploader.index, y=remaining_images_per_uploader.values) p.set_xlabel("Contributors", fontsize=18) p.set_ylabel("# Photographs", fontsize=18) p.set_title(label='{1} {0} in {2}: Top uploaders'.format(YEAR, TAG, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) for patch in p.patches: height = patch.get_height() p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Top {2} contributors to {1} {0} in {3}'.format(YEAR, TAG_EXT, TOP_UPLOADERS, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Top 
authors.png".format(YEAR, TAG, COUNTRY), desc) # - images_df['uploader'].unique() valid_images_length = len(images_df[images_df['code'].isin(valid_sites)].index) valid_images_length images_df[images_df['code'].isin(valid_sites)]['code'].unique() # ### Sites by uploader sites_per_uploader_df = images_df[images_df['code'].isin(valid_sites)].\ groupby(['uploader']).\ agg({"code": pd.Series.nunique}).\ sort_values('code', ascending=False) sites_per_uploader = sites_per_uploader_df["code"] sites_per_uploader = sites_per_uploader.rename('sites') sites_per_uploader = sites_per_uploader.iloc[np.lexsort([sites_per_uploader.index, -sites_per_uploader.values])] sites_per_uploader # THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION top_uploaders_by_site ={ "2017": 16, "2016": 16, "2015": 16 } TOP_UPLOADERS_BY_SITE = top_uploaders_by_site[str(YEAR)] wle_sites_length = images_df[images_df['code'].isin(valid_sites)]['code'].unique().size wle_sites_length # + padding = { "2017": 0.5, "2016": 0.5, "2015": 0.5 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, x=sites_per_uploader[:TOP_UPLOADERS_BY_SITE].index.map(flickr_ripper).map(lambda x: wrap_label(x, 16)), y=sites_per_uploader[:TOP_UPLOADERS_BY_SITE].values ) p.set_xlabel("Contributors", fontsize=18) p.set_ylabel("# Sites", fontsize=18) p.set_title(label='{1} {0} in {2}: Top uploaders by site of community importance'.format(YEAR, TAG, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) p.yaxis.set_major_locator(MaxNLocator(integer=True)) for patch in p.patches: height = patch.get_height() p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Top {2} contributors to {1} {0} in {3}'.format(YEAR, TAG_EXT, TOP_UPLOADERS_BY_SITE, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Top authors by 
site of community importance.png".format(YEAR, TAG, COUNTRY), desc) # - uploaders_df = pd.concat([sites_per_uploader, images_per_uploader], axis=1).fillna(0) uploaders_df.columns=['Sites', 'Photographs'] uploaders_df['Sites'] = uploaders_df['Sites'].astype(int) uploaders_df = uploaders_df.iloc[np.lexsort([uploaders_df.index, -uploaders_df['Photographs']])] uploaders_df.index = uploaders_df.index.map(flickr_ripper).map(lambda x: wrap_label(x, 16)) uploaders_df # + padding = { "2017": 2, "2016": 3, "2015": 4 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, x=uploaders_df[:TOP_UPLOADERS].index, y=uploaders_df[:TOP_UPLOADERS]['Photographs'], hue=uploaders_df[:TOP_UPLOADERS]['Sites'], dodge=False) p.set_xlabel("Contributors", fontsize=18) p.set_ylabel("# Photographs", fontsize=18) p.set_title(label='{1} {0} in {2}: Top uploaders\nby number of photographs and sites'.format(YEAR, TAG, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) handles, labels = p.get_legend_handles_labels() handles.reverse() labels.reverse() legend = plt.legend(loc='upper right', title='Number of\nsites', fontsize=14, labels=labels, handles=handles) plt.setp(legend.get_title(), fontsize=16) for patch in p.patches: height = patch.get_height() if not math.isnan(height): p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Top {2} contributors to {1} {0} in {3} with contribution to sites'.format(YEAR, TAG_EXT, TOP_UPLOADERS, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Top authors (2).png".format(YEAR, TAG, COUNTRY), desc) # - # ## Contributions by day upload_ts = images_df['image_title'].resample('d').count() try: upload_ts[datetime(YEAR, 5, 31)] = upload_ts[datetime(YEAR, 5, 31)] + upload_ts[datetime(YEAR, 6, 1)] upload_ts.drop(datetime(YEAR, 6, 1), inplace=True) except 
: pass upload_ts = pd.Series([0]*31, index = pd.date_range(datetime(YEAR, 5, 1), periods=31, freq='D')) + upload_ts upload_ts = upload_ts.fillna(0).astype(int) upload_ts # + padding = { "2017": 4, "2016": 4, "2015": 5 } PADDING = padding[str(YEAR)] # THIS IS YEAR-DEPENDENT fig, ax = plt.subplots(figsize=figsize) p = ax.bar(upload_ts.index.to_pydatetime(), upload_ts.values, color=sns.color_palette("Blues_d", 30)) ax.xaxis.set_major_locator(mdates.AutoDateLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) ax.set_xlabel('Date', fontsize=18) ax.set_ylabel("# Photographs", fontsize=18) ax.set_title(label='Photographs uploaded to {1} {0} in {2}'.format(YEAR, TAG, COUNTRY), fontsize=20) ax.tick_params(labelsize=14) plt.xticks(rotation=90) for patch in ax.patches: height = patch.get_height() if height > 0 : ax.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Number of images uploaded to {1} {0} in {2} by day'.format(YEAR, TAG, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(plt, "{1} {0} in {2} - Uploads by day.png".format(YEAR, TAG, COUNTRY), desc) # - # ## MediaWiki tables creation # Dataframe merge (images and sites) images_extended_df = pd.merge(images_df, site_df, on='code', how='left') len(images_extended_df.index) # ### New uploaders new_uploaders_contributions = images_per_uploader[new_uploaders] new_uploaders_contributions # ### Authors dataframe valid_images_per_uploader = images_df[images_df['code'].isin(valid_sites)]['uploader'].value_counts() valid_images_per_uploader = valid_images_per_uploader.rename('valid_images') valid_images_per_uploader = valid_images_per_uploader.iloc[np.lexsort([valid_images_per_uploader.index, -valid_images_per_uploader.values])] valid_images_per_uploader site_list_per_uploader = images_extended_df[images_extended_df['code'].isin(valid_sites)]\ .groupby('uploader')['code']\ .apply(set)\ .apply(lambda 
x: filter(None, x))\ .apply(lambda x: expand_itemid(x))\ .rename('site_list', inplace=True) site_list_per_uploader # + authors_df = pd.concat([images_per_uploader, valid_images_per_uploader, sites_per_uploader, site_list_per_uploader], axis=1)\ .sort_values(by='images', ascending=False)\ .reset_index()\ .rename(columns = {'index': 'contestant'}) authors_df[['images', 'valid_images', 'sites']] = authors_df[['images', 'valid_images', 'sites']]\ .fillna(0)\ .astype('int') authors_df = authors_df.iloc[np.lexsort([authors_df['contestant'], -authors_df['images']])] authors_df['registration_string'] = authors_df['contestant'].map(lambda x: get_registration_time(x)) authors_df['site_list'] = authors_df['site_list'].map(lambda x: decode_list(x)) authors_df # - # ### Images by site dataframe images_per_site = images_extended_df[images_extended_df['code'].isin(valid_sites)]['code'].value_counts() images_per_site # THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION # May be set to the number of sites with more than X pictures top_sites ={ "2017": 20, "2016": 22, "2015": 18 } TOP_SITES = top_sites[str(YEAR)] images_per_site_df = pd.DataFrame(data=images_per_site).reset_index() images_per_site_df.rename(columns={'index': 'code', 'code': 'count'}, inplace=True) images_per_site_df = pd.merge(images_per_site_df, site_df, on='code')[['count', 'code', 'name', 'aut_com', 'latitude', 'longitude', 'commons_cat']].fillna('') images_per_site_df = images_per_site_df.iloc[np.lexsort([images_per_site_df['name'], -images_per_site_df['count']])] images_per_site_df['name'] = images_per_site_df['name'].map(lambda x: x.replace('_', ' ')) images_per_site_df.head() compact_images_per_site = images_per_site_df[:TOP_SITES][['count', 'name']] compact_images_per_site = compact_images_per_site.iloc[np.lexsort([compact_images_per_site['name'], -compact_images_per_site['count']])] compact_images_per_site['name'] = compact_images_per_site['name'].map(lambda x: wrap_label(x, 25)) 
compact_images_per_site # + padding = { "2017": 0.5, "2016": 2, "2015": 2 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, data=compact_images_per_site, x='name', y='count') p.set_xlabel("Sites", fontsize=18) p.set_ylabel("# Photographs", fontsize=18) p.set_title(label='{1} {0} in {3}: Top {2} sites of community importance'.format(YEAR, TAG, TOP_SITES, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) for patch in p.patches: height = patch.get_height() p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Top {2} sites of community importance in {1} {0} in {3}'.format(YEAR, TAG_EXT, TOP_SITES, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Top sites of community importance.png".format(YEAR, TAG, COUNTRY), desc) # - # ## Map creation images_per_site_df['geojson'] = images_per_site_df.apply(lambda row: to_geojson(row), axis=1) features = images_per_site_df['geojson'].tolist() feature_collection = FeatureCollection(features) dump = geojson.dumps(feature_collection, ensure_ascii=False, indent=2) #print(dump) map_template = """<mapframe text="Sites of community importance" latitude="39" longitude="-4" zoom="5" width="800" height="600" align="center"> ${map} </mapframe>""" vars = { "map": dump } t = Template(map_template) map_text = t.render(**vars) maps_page = pb.Page(commons_site, MAP_WLE_PAGE) if maps_page.text.strip() != map_text.strip() : maps_page.text = map_text pb.output('Publishing --> {0} in {1} map'.format(TAG, COUNTRY)) maps_page.save('{0} in {1} map'.format(TAG, COUNTRY)) # ## Autonomous communities sites_per_autcom = images_per_site_df.groupby(['aut_com']).\ count().\ sort_values(by='count', ascending=False).\ reset_index()[['aut_com', 'count']] sites_per_autcom['aut_com'] = sites_per_autcom['aut_com'].map(lambda x: wrap_label(x, 
14)) sites_per_autcom aut_coms = len(sites_per_autcom.index) aut_coms # + padding = { "2017": 0.3, "2016": 0.5, "2015": 0.5 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, data=sites_per_autcom, x='aut_com', y='count') p.set_xlabel("Autonomous community", fontsize=18) p.set_ylabel("# Sites", fontsize=18) p.set_title(label='{1} {0} in {2}:\nSites of community importance by autonomous community'.format(YEAR, TAG, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) for patch in p.patches: height = patch.get_height() p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Sites of community importance by autonomous community in {1} {0} in {2}'.format(YEAR, TAG_EXT, COUNTRY), YEAR, TAG_EXT, COUNTRY ) upload_to_commons2(p, "{1} {0} in {2} - Sites of community importance by autonomous community.png".format(YEAR, TAG, COUNTRY), desc) # - images_per_autcom = images_per_site_df.groupby(['aut_com']).\ sum().\ sort_values(by='count', ascending=False).\ reset_index() images_per_autcom['aut_com'] = images_per_autcom['aut_com'].map(lambda x: wrap_label(x, 14)) images_per_autcom # + padding = { "2017": 4, "2016": 5, "2015": 5 } PADDING = padding[str(YEAR)] fig, ax = plt.subplots(figsize=figsize) p = sns.barplot(ax=ax, data=images_per_autcom, x='aut_com', y='count') p.set_xlabel("Autonomous community", fontsize=18) p.set_ylabel("# Photographs", fontsize=18) p.set_title(label='{1} {0} in {2}: Photographs by autonomous community'.format(YEAR, TAG, COUNTRY), fontsize=20) p.tick_params(labelsize=14) p.set_xticklabels(p.get_xticklabels(), rotation=90) for patch in p.patches: height = patch.get_height() p.text(patch.get_x() + patch.get_width()/2., height + PADDING, '{:1.0f}'.format(height), ha="center", fontsize=13) desc = get_image_wikitext(image_wikitext, 'Photographs by autonomous 
community in {1} {0} in {2}'.format(YEAR, TAG_EXT, COUNTRY),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in {2} - Photographs by autonomous community.png".format(YEAR, TAG, COUNTRY), desc)
# -

# ## Usage management

# +
n = 5
list_df = [images_df[i:i+n] for i in range(0, images_df.shape[0], n)]

usage_dict = {}
counter = 0
for df in list_df:
    query_string_items = list()
    for _, row in df.iterrows():
        title = 'File:{0}'.format(row["image_title"])
        query_string_items.append(title)
    raw_api_query_string = '|'.join(query_string_items)
    MW_API_QUERY_STRING["titles"] = raw_api_query_string
    r = requests.post(MW_API_BASE_URL, data=urlencode(MW_API_QUERY_STRING))
    response = r.text
    try:
        response_dict = json.loads(response)
        for _, value in response_dict["query"]["pages"].items():
            uses_dict = value['globalusage']
            tuples = [(item['wiki'], 1) for item in uses_dict if (item['ns'] in VALID_NAMESPACES)]
            summary = [reduce(lambda x, y: (x[0], x[1] + y[1]), group)
                       for _, group in groupby(sorted(tuples), key=itemgetter(0))]
            if len(summary) > 0:
                counter += 1
                title = value['title'].replace('File:', '')
                # 'pair' instead of 'tuple' to avoid shadowing the built-in
                summary_dict = {pair[0]: pair[1] for pair in summary}
                usage_dict.update({title: summary_dict})
    except Exception as e:
        print('Error found ({})'.format(e))
        pass
# -

# unique images used
usage_df = pd.DataFrame(usage_dict).transpose()
total_unique = usage_df.count(axis=1).count()
total_unique

# summary table
usages_df = pd.concat([usage_df.sum(), usage_df.count()], axis=1)
usages_df.columns = ['usages', 'unique']
usages_df['usages'] = usages_df['usages'].astype(int)
usages_df.sort_values(by=['unique'], axis=0, ascending=False, inplace=True)
usages_df['name'] = usages_df.index
usages_df['name'] = usages_df['name'].map(get_project_name)
usages_df.set_index(['name'], inplace=True)
usages_df = usages_df.iloc[np.lexsort([usages_df.index, -usages_df['unique']])]
usages_df

# THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION
# May be set to the number of projects with more than 1 picture
top_projects = {
    "2017": 15,
    "2016": 18,
    "2015": 23
}
TOP_PROJECTS = top_projects[str(YEAR)]
remaining_df = pd.DataFrame(usages_df.iloc[TOP_PROJECTS:].sum()).transpose()
other_projects_num = len(usages_df.index) - TOP_PROJECTS
remaining_df.index = ['Other projects ({})'.format(other_projects_num)]
top_df = usages_df.iloc[:TOP_PROJECTS]
reduced_usages_df = top_df.append(remaining_df)
reduced_usages_df

# +
padding = {
    "2017": 0.5,
    "2016": 1,
    "2015": 3
}
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize)
p = sns.barplot(ax=ax, x='index', y='unique', data=reduced_usages_df.reset_index())
p.set_xlabel("Project", fontsize=18)
p.set_ylabel("# Photographs", fontsize=18)
p.set_title(label='Unique photographs from {2} {0} in {3} used in Wikimedia projects\n({1})'.format(YEAR, now, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
p.set_xticklabels(p.get_xticklabels(), rotation=90)
for patch in p.patches:
    height = patch.get_height()
    p.text(patch.get_x() + patch.get_width()/2., height + PADDING,
           '{:1.0f}'.format(height), ha="center", fontsize=13)

desc = get_image_wikitext(image_wikitext,
                          'Unique photographs from {1} {0} in {2} used in WMF projects: top {3} projects'.format(YEAR, TAG_EXT, COUNTRY, TOP_PROJECTS),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in {2} - Unique photographs used in WMF projects.png".format(YEAR, TAG, COUNTRY), desc)

# +
padding = {
    "2017": 0.6,
    "2016": 2,
    "2015": 4
}
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize)
p = sns.barplot(ax=ax, x='index', y='usages', data=reduced_usages_df.reset_index())
p.set_xlabel("Project", fontsize=18)
p.set_ylabel("# Uses", fontsize=18)
p.set_title(label='Uses of photographs from {2} {0} in {3} in Wikimedia projects\n({1})'.format(YEAR, now, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
p.set_xticklabels(p.get_xticklabels(), rotation=90)
for patch in p.patches:
    height = patch.get_height()
    p.text(patch.get_x() + patch.get_width()/2., height + PADDING,
           '{:1.0f}'.format(height), ha="center", fontsize=13)

desc = get_image_wikitext(image_wikitext,
                          'Uses of photographs from {1} {0} in {2} in WMF projects: top {3} projects'.format(YEAR, TAG_EXT, COUNTRY, TOP_PROJECTS),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in {2} - Uses of photographs in WMF projects.png".format(YEAR, TAG, COUNTRY), desc)

# +
vf = np.vectorize(lambda x: wrap_label(x, 10))
projects = vf(reduced_usages_df.index.values)[::-1]
unique_uses = reduced_usages_df['unique'].values[::-1]
article_uses = reduced_usages_df['usages'].values[::-1]

# +
padding = {
    "2015": {"left_offset": 116, "left_factor": 1.24, "right_offsest": 4},
    "2016": {"left_offset": 39, "left_factor": 1.2, "right_offsest": 3},
    "2017": {"left_offset": 16, "left_factor": 1.2, "right_offsest": 1}
}
LEFT_PADDING = padding[str(YEAR)]["left_offset"]
LEFT_FACTOR = padding[str(YEAR)]["left_factor"]
RIGHT_PADDING = padding[str(YEAR)]["right_offsest"]

# +
y = np.arange(article_uses.size)

fig, axes = plt.subplots(ncols=2, sharey=True, figsize=figsize_mid)
pylab.gcf().suptitle('Photographs from {2} {0} in Spain used in Wikimedia projects ({1})'.format(YEAR, now, TAG), fontsize=20, y=1.04)
axes[0].barh(y, unique_uses, align='center', color=sns.color_palette('hls', TOP_PROJECTS+1))
axes[0].set_title('# Unique photographs', fontsize=16)
axes[1].barh(y, article_uses, align='center', color=sns.color_palette('hls', TOP_PROJECTS+1))
axes[1].set_title('# Articles with photographs', fontsize=16)
axes[0].invert_xaxis()
axes[0].set_yticks(y)
axes[0].set_yticklabels(projects, horizontalalignment="center", fontsize=12)
axes[0].tick_params(axis='y', which='major', pad=44)
axes[0].yaxis.tick_right()
for ax in axes.flat:
    ax.margins(0.03)
    ax.grid(True)
for patch in axes[1].patches:
    ax.text(patch.get_width() + RIGHT_PADDING, patch.get_y() + patch.get_height()/2.,
            '{:1.0f}'.format(patch.get_width()), va="center", fontsize=14)
for patch in axes[0].patches:
    ax.text(-(LEFT_PADDING + patch.get_width()*LEFT_FACTOR), patch.get_y() + patch.get_height()/2.,
            '{:1.0f}'.format(patch.get_width()), va="center", fontsize=14)
fig.tight_layout()
fig.subplots_adjust(wspace=0.18)

desc = get_image_wikitext(image_wikitext,
                          'Photographs from {1} {0} in {3} used in WMF projects: top {2} projects'.format(YEAR, TAG_EXT, TOP_PROJECTS, COUNTRY),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(fig, "{1} {0} in {2} - Photographs used in WMF projects.png".format(YEAR, TAG, COUNTRY), desc)
# -

# ## Quality Images

# Quality images gallery
template = """This page lists the ${len(QI_list)} '''[[Commons:Quality Images|quality images]]''' uploaded as part of the [[Commons:${tag}|${tag}]] contest in ${year} in ${country}.

<gallery>
% for image in QI_list:
${image}
% endfor
</gallery>

'''Statistics generation date''': {{subst:CURRENTMONTHNAME}} {{subst:CURRENTDAY}}, {{subst:CURRENTYEAR}}

[[Category:Reports from ${tag} in Spain| Quality]]
[[Category:Reports from ${tag} ${year} in ${country}]]"""

vars = {
    "QI_list": qi_list.values,
    "tag": TAG_EXT,
    "year": YEAR,
    "country": COUNTRY
}

t = Template(template)
if qi_length > 0:
    qi_gallery_text = t.render(**vars)
    qi_page = pb.Page(commons_site, GALLERY_QI)
    if qi_page.text != qi_gallery_text:
        qi_page.text = qi_gallery_text
        pb.output('Publishing --> {1} {0} in {2} quality images gallery'.format(YEAR, TAG, COUNTRY))
        qi_page.save("{1} {0} in {2} quality images gallery".format(YEAR, TAG, COUNTRY))

# ## Coverage

# +
es_site = pb.Site('es', 'wikipedia')
threshold_date = datetime(YEAR, 9, 1)

sites_df = pd.DataFrame(columns=['aut_com', 'commons_cat', 'image'])
for annex in annexes:
    page = pb.Page(es_site, annexes[annex][0])
    rev_id = None
    for i, revision in enumerate(page.revisions()):
        if revision['timestamp'] < threshold_date:
            rev_id = i
            break
    for i, revision in enumerate(page.revisions(content=True)):
        if i == rev_id:
            text = revision['text']
            break
    wikicode = mwh.parse(text)
    templates = [template for template in wikicode.filter_templates()
                 if template.name.lower().strip() == "fila lic"]
    for template in templates:
        df_row = {
            "aut_com": annexes[annex][1],
            "commons_cat": None,
            "image": None
        }
        if annex == 'ES-CE' and '631' in template.get("código").value:
            df_row['aut_com'] = 'Ceuta'
        elif annex == 'ES-CE':
            df_row['aut_com'] = 'Melilla'
        elif annex == 'ES-ML':
            continue
        try:
            if template.get("categoría-Commons").value:
                if len(template.get("categoría-Commons").value.strip()) > 0:
                    df_row["commons_cat"] = template.get("categoría-Commons").value.strip()
            if template.get("imagen").value:
                if len(template.get("imagen").value.strip()) > 0:
                    df_row["image"] = template.get("imagen").value.strip()
        except Exception as e:
            print('Exception ({0}) in {1} ({2})'.format(e, template.get("código").value.strip(), annexes[annex][1]))
        sites_df = sites_df.append(df_row, ignore_index=True)
# -

# +
coverage_category = sites_df[['aut_com', 'commons_cat', 'image']].groupby(['aut_com'])['commons_cat'].count()
coverage_images = sites_df[['aut_com', 'commons_cat', 'image']].groupby(['aut_com'])['image'].count()

coverage_df = pd.concat([sites_df[['aut_com', 'commons_cat', 'image']].groupby(['aut_com'])['aut_com'].count(),
                         sites_df[['aut_com', 'commons_cat', 'image']].groupby(['aut_com'])['commons_cat'].count(),
                         sites_df[['aut_com', 'commons_cat', 'image']].groupby(['aut_com'])['image'].count()],
                        axis=1)
total_coverage = coverage_df.sum(numeric_only=True).rename('Total')
coverage_df = coverage_df.append(total_coverage)
coverage_df['aut_com'] = coverage_df['aut_com'].fillna(0).astype('int')
coverage_df['category_percentage'] = (100.*coverage_df['commons_cat']/coverage_df['aut_com']).round(2)
coverage_df['image_percentage'] = (100.*coverage_df['image']/coverage_df['aut_com']).round(2)
coverage_df['commons_cat'] = coverage_df['commons_cat'].fillna(0).astype('int')
coverage_df['image'] = coverage_df['image'].fillna(0).astype('int')
coverage_df['image_color'] = coverage_df['image_percentage'].apply(heat_color)
coverage_df['cat_color'] = coverage_df['category_percentage'].apply(heat_color)
# -

coverage_df

# ## Finalists

# +
cat_wle = pb.Category(commons_site, WLE_FINALIST_CATEGORY[str(YEAR)])
gen_wle = pagegenerators.CategorizedPageGenerator(cat_wle)
finalist_images_wle = [page.title(withNamespace=False) for page in gen_wle if page.is_filepage()]
finalist_images_count = len(finalist_images_wle)
finalist_images_count
# -

finalist_images_df = images_extended_df[images_extended_df['image_title'].isin(finalist_images_wle)]

finalist_authors = finalist_images_df['uploader'].value_counts()
finalist_authors = finalist_authors.iloc[np.lexsort([finalist_authors.index, -finalist_authors.values])]
finalist_authors

finalist_authors_count = len(finalist_authors)
finalist_authors_count

# THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION
# May be set to the number of contestants with more than 1 finalist picture
top_finalists = {
    "2017": 25,
    "2016": 19,
    "2015": 21
}
TOP_FINALISTS = top_finalists[str(YEAR)]
finalist_authors = finalist_authors.iloc[:TOP_FINALISTS]
finalist_authors

# +
padding = {
    "2017": 0.4,
    "2016": 0.2,
    "2015": 0.1
}
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize_mid)
p = sns.barplot(ax=ax, y=finalist_authors.index.map(flickr_ripper).map(lambda x: wrap_label(x, 18)), x=finalist_authors.values)
p.set_xlabel("# Photographs", fontsize=18)
p.set_ylabel("Contestants", fontsize=18)
p.set_title(label='{1} {0} in {2}: Top finalists'.format(YEAR, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
for patch in ax.patches:
    ax.text(patch.get_width() + PADDING, patch.get_y() + patch.get_height()/2.,
            '{:1.0f}'.format(patch.get_width()), ha="center", fontsize=14)

desc = get_image_wikitext(image_wikitext,
                          'Top contributors reaching the final round of {1} {0} in {2}.'.format(YEAR, TAG_EXT, COUNTRY),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in {2} - Finalists.png".format(YEAR, TAG, COUNTRY), desc)
# -

finalist_sites = finalist_images_df['commons_cat'].value_counts()
finalist_sites = finalist_sites.iloc[np.lexsort([finalist_sites.index, -finalist_sites.values])]
finalist_sites

# THIS PARAMETER IS YEAR-DEPENDENT AND COMES FROM MANUAL INSPECTION
# May be set to the number of sites with more than 1 picture
top_finalist_sites = {
    "2017": 24,
    "2016": 23,
    "2015": 22
}
TOP_FINALIST_SITES = top_finalist_sites[str(YEAR)]
finalist_sites = finalist_sites.iloc[:TOP_FINALIST_SITES]
finalist_sites

# +
padding = {
    "2017": 0.1,
    "2016": 0.1,
    "2015": 0.1
}
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize_mid)
p = sns.barplot(ax=ax, y=finalist_sites.index.map(lambda x: wrap_label(x.replace(' (site of community importance)', ''), 30)), x=finalist_sites.values)
p.set_xlabel("# Photographs", fontsize=18)
p.set_ylabel("Sites of community importance", fontsize=18)
p.set_title(label='{1} {0} in {2}: Sites of community importance in the final round'.format(YEAR, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
for patch in ax.patches:
    ax.text(patch.get_width() + PADDING, patch.get_y() + patch.get_height()/2.,
            '{:1.0f}'.format(patch.get_width()), ha="center", fontsize=14)

desc = get_image_wikitext(image_wikitext,
                          'Top sites of community importance in the final round of {1} {0} in {2}.'.format(YEAR, TAG_EXT, COUNTRY),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in Spain - Finalist sites of community importance.png".format(YEAR, TAG), desc)
# -

finalist_autcoms = finalist_images_df['aut_com'].value_counts()
finalist_autcoms = finalist_autcoms.iloc[np.lexsort([finalist_autcoms.index, -finalist_autcoms.values])]
finalist_autcoms

# +
padding = {
    "2017": 0.3,
    "2016": 0.4,
    "2015": 0.2
}
PADDING = padding[str(YEAR)]

fig, ax = plt.subplots(figsize=figsize)
p = sns.barplot(ax=ax, y=finalist_autcoms.index.map(lambda x: wrap_label(x, 14)), x=finalist_autcoms.values.astype(int))
p.set_xlabel("# Photographs", fontsize=18)
p.set_ylabel("Autonomous communities", fontsize=18)
p.set_title(label='{1} {0} in {2}: Autonomous communities in the final round'.format(YEAR, TAG, COUNTRY), fontsize=20)
p.tick_params(labelsize=14)
for patch in ax.patches:
    ax.text(patch.get_width() + PADDING, patch.get_y() + patch.get_height()/2.,
            '{:1.0f}'.format(patch.get_width()), ha="center", fontsize=14)
ax.xaxis.set_major_locator(MaxNLocator(integer=True))

desc = get_image_wikitext(image_wikitext,
                          'Spanish autonomous communities in the final round of {1} {0} in {2}.'.format(YEAR, TAG_EXT, COUNTRY),
                          YEAR, TAG_EXT, COUNTRY)
upload_to_commons2(p, "{1} {0} in {2} - Finalist autonomous communities.png".format(YEAR, TAG, COUNTRY), desc)
# -

# ## Page generation

template_file = os.path.join(templates_directory, 'wle.wiki')
fh = open(template_file, 'r', encoding="utf-8")
template = fh.read()
fh.close()

vars = {
    "images_length": total_images_length,
    "valid_images_length": valid_images_length,
    "site_images_length": 0,
    "qi_length": qi_length,
    "gallery_quality_images": GALLERY_QI,
    "wle_sites_length": wle_sites_length,
    "authors_length": authors_length,
    "top_authors": TOP_UPLOADERS,
    "top_authors_by_site": TOP_UPLOADERS_BY_SITE,
    "new_uploaders_length": new_uploaders_length,
    "site_length": site_length,
    "top_sites": TOP_SITES,
    "aut_coms": aut_coms,
    "authors_df": authors_df,
    "images_per_site_df": images_per_site_df,
    "usages_df": usages_df,
    "coverage_df": coverage_df,
    "total_unique": total_unique,
    "new_uploaders": new_uploaders_contributions,
    "new_uploaders_sum": new_uploaders_contributions.sum(),
    "top_projects": TOP_PROJECTS,
    "finalist_images_count": finalist_images_count,
    "finalist_authors_count": finalist_authors_count,
    "finalist_cat": WLE_FINALIST_CATEGORY[str(YEAR)],
    "year": YEAR,
    "tag": TAG,
    "full_tag": TAG_EXT,
    "base": BASE_NAME,
    "country": COUNTRY,
    "date": datetime.now().strftime("%B %-d, %Y")
}

t = Template(template)
statistics_text = t.render(**vars)

stats_page = pb.Page(commons_site, STATISTICS_PAGE)
if stats_page.text.strip() != statistics_text.strip():
    stats_page.text = statistics_text
    pb.output('Publishing --> {1} {0} in {2} Statistics'.format(YEAR, TAG, COUNTRY))
    stats_page.save("{1} {0} in {2} statistics".format(YEAR, TAG, COUNTRY))
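# The usage-management cell above collapses `(wiki, 1)` tuples into per-wiki totals
# with the `sorted`/`groupby`/`reduce` combination. The same pattern, isolated into a
# small stdlib-only helper (the name `summarize_usage` is mine, not from the notebook)
# so it can be sanity-checked outside the API loop:

```python
from functools import reduce
from itertools import groupby
from operator import itemgetter


def summarize_usage(pairs):
    """Collapse (wiki, 1) pairs into per-wiki totals, as the notebook's
    groupby/reduce expression does: sort so equal wikis are adjacent,
    group by wiki name, then sum the counts inside each group."""
    return [reduce(lambda x, y: (x[0], x[1] + y[1]), group)
            for _, group in groupby(sorted(pairs), key=itemgetter(0))]


pairs = [('en.wikipedia.org', 1), ('es.wikipedia.org', 1), ('en.wikipedia.org', 1)]
print(summarize_usage(pairs))  # [('en.wikipedia.org', 2), ('es.wikipedia.org', 1)]
```

# Sorting first matters: `itertools.groupby` only groups adjacent items, so unsorted
# input would yield several partial groups per wiki.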
WLE stats generator.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# +
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score

# +
# cd '/content'

# +
# ls

# +
# cd drive

# +
# cd "My Drive"

# +
# cd "Colab Notebooks/dw_matrix"

# +
# ls data

# +
df = pd.read_csv('data/men_shoes.csv', low_memory=False)

# +
df.shape

# +
df.columns

# +
mean_price = np.mean(df['prices_amountmin'])
mean_price

# +
[1]*5

# +
y_true = df['prices_amountmin']
y_pred = [mean_price] * y_true.shape[0]

# +
mean_absolute_error(y_true, y_pred)

# +
df['prices_amountmin'].hist(bins=100)

# +
np.log(df['prices_amountmin'] + 1).hist(bins=100)

# +
np.log1p(df['prices_amountmin']).hist(bins=100)

# +
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)

# +
df['prices_amountmin'].hist(bins=100)

# +
y_true = df['prices_amountmin']
price_log_mean = np.expm1(np.mean(np.log1p(y_true)))
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)

# +
np.mean(np.log1p(y_true))

# +
np.exp(np.mean(np.log1p(y_true))) - 1

# +
df.columns

# +
df.brand.value_counts()

# +
df['brand'].factorize()

# +
df['brand_cat'] = df['brand'].factorize()[0]

# +
feats = ['brand_cat']

# +
X = df[feats].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)

# +
def run_model(feats):
    X = df[feats].values
    y = df['prices_amountmin'].values
    model = DecisionTreeRegressor(max_depth=5)
    scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
    return np.mean(scores), np.std(scores)

# +
run_model(['brand_cat'])

# +
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]

# +
run_model(['manufacturer_cat'])

# +
run_model(['manufacturer_cat', 'brand_cat'])

# +
# !git add .
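# The `factorize()[0]` step above is what turns the string-valued `brand` and
# `manufacturer` columns into integer codes a tree model can consume. A pure-Python
# sketch of the behavior (the helper name `factorize` is mine; real pandas also maps
# missing values to the code -1, which this toy version ignores):

```python
def factorize(values):
    """Map each distinct value to the integer index of its first appearance,
    returning (codes, uniques) like pandas.Series.factorize."""
    codes, uniques, seen = [], [], {}
    for v in values:
        if v not in seen:
            seen[v] = len(uniques)   # next unused code
            uniques.append(v)
        codes.append(seen[v])
    return codes, uniques


print(factorize(['Nike', 'Adidas', 'Nike', 'Puma']))
# ([0, 1, 0, 2], ['Nike', 'Adidas', 'Puma'])
```

# Note the codes carry no order, so a depth-limited tree effectively memorizes
# per-brand splits rather than exploiting any numeric relationship between codes.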
# +
# !git push

# +
# !git commit

# +
# !git config --global user.email "<EMAIL>"
# !git config --global user.name "alichota"

# +
# !git status

# +
# !git add .

# +
# !git status

# +
pwd

# +
# !git add day4.ipynb
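# The baseline cells earlier in this notebook compare predicting the arithmetic mean
# against the log-mean baseline `np.expm1(np.mean(np.log1p(y)))`. A tiny stdlib-only
# illustration (toy prices of my own choosing) of why the log-mean can score a lower
# MAE on a skewed target: one outlier drags the arithmetic mean far from the bulk of
# the data, while the log-mean stays near the typical price.

```python
import math

y = [10.0, 12.0, 11.0, 9.0, 500.0]  # toy skewed prices with one outlier


def mae(pred):
    """Mean absolute error of a constant prediction over y."""
    return sum(abs(v - pred) for v in y) / len(y)


mean_pred = sum(y) / len(y)                                      # 108.4
log_mean_pred = math.expm1(sum(math.log1p(v) for v in y) / len(y))  # ~23.4
print(mae(log_mean_pred) < mae(mean_pred))  # True: the log-mean baseline wins here
```

# The same comparison on the shoe data is what the notebook's three
# mean_absolute_error cells carry out.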
day4.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Classifying characters in scikit-learn

# #### Classifying handwritten digits

# +
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# fetch_mldata was removed from scikit-learn; fetch_openml('mnist_784') is the usual
# replacement (note that its row order differs from the old fetch_mldata dump)
from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784', data_home='data/mnist', as_frame=False)

counter = 1
for i in range(1, 4):
    for j in range(1, 6):
        plt.subplot(3, 5, counter)
        plt.imshow(mnist.data[(i - 1) * 8000 + j].reshape((28, 28)), cmap=cm.Greys_r)
        plt.axis('off')
        counter += 1
plt.show()

# +
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
# GridSearchCV moved from sklearn.grid_search to sklearn.model_selection
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import classification_report

if __name__ == '__main__':
    X, y = mnist.data, mnist.target
    X = X / 255.0 * 2 - 1  # rescale pixel intensities from [0, 255] to [-1, 1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=11)
    pipeline = Pipeline([
        ('clf', SVC(kernel='rbf', gamma=0.01, C=100))
    ])
    parameters = {
        'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
        'clf__C': (0.1, 0.3, 1, 3, 10, 30),
    }
    grid_search = GridSearchCV(pipeline, parameters, n_jobs=2, verbose=1, scoring='accuracy')
    grid_search.fit(X_train[:10000], y_train[:10000])
    print('Best score: %0.3f' % grid_search.best_score_)
    print('Best parameters set:')
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print('\t%s: %r' % (param_name, best_parameters[param_name]))
    predictions = grid_search.predict(X_test)
    print(classification_report(y_test, predictions))
# -

# #### Classifying characters in natural images

# +
import os
import numpy as np
from PIL import Image
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
# GridSearchCV moved from sklearn.grid_search to sklearn.model_selection
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

X = []
y = []
for path, subdirs, files in os.walk('data/English/Img/GoodImg/Bmp/'):
    for filename in files:
        f = os.path.join(path, filename)
        target = filename[3:filename.index('-')]
        img = Image.open(f).convert('L').resize((30, 30), resample=Image.LANCZOS)
        X.append(np.array(img).reshape(900,))
        y.append(target)
X = np.array(X)

# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=11)
pipeline = Pipeline([
    ('clf', SVC(kernel='rbf', gamma=0.01, C=100))
])
parameters = {
    'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
    'clf__C': (0.1, 0.3, 1, 3, 10, 30),
}
if __name__ == '__main__':
    grid_search = GridSearchCV(pipeline, parameters, n_jobs=3, verbose=1, scoring='accuracy')
    grid_search.fit(X_train, y_train)
    print('Best score: %0.3f' % grid_search.best_score_)
    print('Best parameters set:')
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print('\t%s: %r' % (param_name, best_parameters[param_name]))
    predictions = grid_search.predict(X_test)
    print(classification_report(y_test, predictions))
# -
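# Both grids above feed the SVC pixel intensities rescaled from `[0, 255]` to
# `[-1, 1]` via `X / 255.0 * 2 - 1`. Isolated as a tiny helper (the name
# `scale_pixels` is mine), the mapping is easy to verify at its endpoints:

```python
def scale_pixels(values):
    """Map raw 8-bit pixel intensities in [0, 255] to [-1, 1],
    exactly as the notebook's X / 255.0 * 2 - 1 expression does."""
    return [v / 255.0 * 2 - 1 for v in values]


print(scale_pixels([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

# Centering the features around zero like this tends to help the RBF kernel, whose
# gamma grid was tuned on inputs of roughly unit scale.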
Section 11/Section 11.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Heart Failure Prediction and Classification using Machine Learning Techniques

# +
# import 'Pandas'
import pandas as pd

# import 'Numpy'
import numpy as np

# import subpackage of Matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# import 'Seaborn'
import seaborn as sns

# to suppress warnings
from warnings import filterwarnings
filterwarnings('ignore')

# display all columns of the dataframe
pd.options.display.max_columns = None

# display all rows of the dataframe
pd.options.display.max_rows = None

# display float values up to 6 decimal places
pd.options.display.float_format = '{:.6f}'.format

# import train-test split
from sklearn.model_selection import train_test_split

# import StandardScaler and MinMaxScaler to perform scaling
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler

# import various functions from sklearn
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import tree
from sklearn.feature_selection import RFE
from mlxtend.feature_selection import SequentialFeatureSelector as sfs

# for performance metrics
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score

# import the functions for visualizing the decision tree
import pydotplus
from IPython.display import Image
# -

plt.rcParams['figure.figsize'] = [15, 8]

df_heart = pd.read_csv('heart_failure_clinical_records_dataset.csv')

df_heart.shape

df_heart.columns

df_heart.head()

df_heart.dtypes

df_heart.apply(lambda x: len(x.unique()))

# Observed ranges of the continuous features:
# age 40-95
# creatinine_phosphokinase 23-7861
# ejection_fraction 14-80
# platelets 25100-850000
# serum_creatinine 0.5-9.4
# serum_sodium 113-148
# time 4-285

# Binary features: anaemia, diabetes, high_blood_pressure, sex, smoking
df_heart['anaemia'].value_counts()

df_heart['diabetes'].value_counts()

df_heart['high_blood_pressure'].value_counts()

df_heart['sex'].value_counts()

df_heart['smoking'].value_counts()

# +
df_heart.loc[:, ['age', 'creatinine_phosphokinase', 'ejection_fraction', 'platelets',
                 'serum_creatinine', 'serum_sodium', 'time']].describe().iloc[[3, 7], :]
# -

from pandas_profiling import ProfileReport
profile = ProfileReport(df_heart, title="Pandas Profiling Report")
profile

df_heart.plot(kind="box")

target_dis = df_heart.DEATH_EVENT.value_counts()
target_dis

target_perc = pd.Series((round(target_dis[0] / sum(target_dis), 2),
                         round(target_dis[1] / sum(target_dis), 2)))
target_perc

# ## Functions

def get_test_report(model, x_test, y_test):
    test_pred = model.predict(x_test)
    return classification_report(y_test, test_pred)

def plot_confusion_matrix(model, x_test, y_test):
    y_pred = model.predict(x_test)
    cm = confusion_matrix(y_test, y_pred)
    conf_matrix = pd.DataFrame(data=cm, columns=['Predicted:0', 'Predicted:1'],
                               index=['Actual:0', 'Actual:1'])
    sns.heatmap(conf_matrix, annot=True, fmt='d',
                cmap=ListedColormap(['lightskyblue']), cbar=False,
                linewidths=0.1, annot_kws={'size': 25})
    plt.xticks(fontsize=20)
    plt.yticks(fontsize=20)
    plt.show()

def plot_roc(model, x_test, y_test):
    y_pred_prob = model.predict_proba(x_test)[:, 1]
    fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
    plt.plot(fpr, tpr)
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.0])
    plt.plot([0, 1], [0, 1], 'r--')
    plt.title('ROC curve', fontsize=15)
    plt.xlabel('False positive rate (1-Specificity)', fontsize=15)
    plt.ylabel('True positive rate (Sensitivity)', fontsize=15)
    plt.text(x=0.02, y=0.9, s=('AUC Score:', round(roc_auc_score(y_test, y_pred_prob), 4)))
    plt.grid(True)

def calculate_auc_score(model, x_test, y_test):
    y_pred_prob = model.predict_proba(x_test)[:, 1]
    return round(roc_auc_score(y_test, y_pred_prob), 4)

models_summary = pd.DataFrame(columns=['Model_name', 'AUC_score', 'Accuracy',
                                       'Precision', 'Recall', 'F1_score'])

def update_model_summary(model, x_test, y_test, model_name):
    y_predict = model.predict(x_test)
    return models_summary.append(pd.Series({
        "Model_name": model_name,
        "AUC_score": calculate_auc_score(model, x_test, y_test),
        "Accuracy": model.score(x_test, y_test),
        "Precision": precision_score(y_test, y_predict),
        "Recall": recall_score(y_test, y_predict),
        "F1_score": f1_score(y_test, y_predict)
    }), ignore_index=True)

# ## Split

features = df_heart.drop('DEATH_EVENT', axis=1)
features.head()

scaler = StandardScaler()
X_scaled = scaler.fit_transform(features)
X = pd.DataFrame(X_scaled, columns=features.columns)
X.head()

# +
# scaler = MinMaxScaler()
# X_scaled = scaler.fit_transform(features)
# X = pd.DataFrame(X_scaled, columns=features.columns)
# X.head()
# -

target = df_heart.DEATH_EVENT
target.head()

y = target

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=10)
X_train.shape, X_test.shape, y_train.shape, y_test.shape

# # Models

# ## Logistic Regression

lr = LogisticRegression()
logistic_model = lr.fit(X_train, y_train)

logistic_model.score(X_test, y_test)

calculate_auc_score(logistic_model, X_test, y_test)

test_report = get_test_report(logistic_model, X_test, y_test)
print(test_report)

plot_roc(logistic_model, X_test, y_test)

plot_confusion_matrix(logistic_model, X_test, y_test)

models_summary = update_model_summary(logistic_model, X_test, y_test, "Logistic model")
models_summary

# ## Naive Bayes

gnb = GaussianNB()
gnb_model = gnb.fit(X_train, y_train)

test_report = get_test_report(gnb_model, X_test, y_test)
print(test_report)

plot_roc(gnb_model, X_test, y_test)

plot_confusion_matrix(gnb_model, X_test, y_test)

models_summary = update_model_summary(gnb_model, X_test, y_test, "GaussianNB model")
models_summary

scores = cross_val_score(gnb_model, X, y, cv=10)
print("10-fold cross validation scores:\n", scores)
print("Average score:", scores.mean())

# ## KNN

knn_classification = KNeighborsClassifier(n_neighbors=3)
knn_model = knn_classification.fit(X_train, y_train)

test_report = get_test_report(knn_model, X_test, y_test)
print(test_report)

plot_roc(knn_model, X_test, y_test)

plot_confusion_matrix(knn_model, X_test, y_test)

models_summary = update_model_summary(knn_model, X_test, y_test, "KNN model")
models_summary

# ### Finding the best KNN

# +
tuned_parameters = {'n_neighbors': np.arange(1, 25, 2),
                    'metric': ['hamming', 'euclidean', 'manhattan', 'chebyshev']}

# instantiate the 'KNeighborsClassifier'
knn_classification = KNeighborsClassifier()

knn_grid = GridSearchCV(estimator=knn_classification, param_grid=tuned_parameters,
                        cv=5, scoring='accuracy')
knn_grid.fit(X_train, y_train)

print('Best parameters for KNN Classifier: ', knn_grid.best_params_, '\n')
# -

best_knn_classification = KNeighborsClassifier(n_neighbors=5, metric='manhattan')
best_knn_model = best_knn_classification.fit(X_train, y_train)

test_report = get_test_report(best_knn_model, X_test, y_test)
print(test_report)

plot_roc(best_knn_model, X_test, y_test)

plot_confusion_matrix(best_knn_model, X_test, y_test)

models_summary = update_model_summary(best_knn_model, X_test, y_test,
                                      "KNN best model (manhattan - 5)")
models_summary

# ## Decision Tree

X_dec_train, X_dec_test, y_dec_train, y_dec_test = train_test_split(
    features, target, train_size=0.7, random_state=10)
X_dec_train.shape, X_dec_test.shape, y_dec_train.shape, y_dec_test.shape
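# The grid search above hard-codes the winning values (`n_neighbors=5`, `metric='manhattan'`)
# back into a fresh `KNeighborsClassifier` by hand. A minimal sketch on synthetic data
# (the array shapes and the small parameter grid are illustrative assumptions, not taken
# from the notebook) shows how `best_params_` can instead be unpacked straight into a
# new estimator, so retyping is never needed:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# tiny synthetic problem standing in for the heart-failure features
rng = np.random.RandomState(0)
X_demo = rng.rand(60, 3)
y_demo = (X_demo[:, 0] > 0.5).astype(int)

grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={'n_neighbors': [1, 3, 5],
                                'metric': ['euclidean', 'manhattan']},
                    cv=3, scoring='accuracy')
grid.fit(X_demo, y_demo)

# rebuild the tuned estimator directly from the search result
best_knn = KNeighborsClassifier(**grid.best_params_)
best_knn.fit(X_demo, y_demo)
print(grid.best_params_)
```

# The same `**grid.best_params_` unpacking works for the random forest search later in
# this notebook, where seven hyperparameters would otherwise be copied by hand.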
X_dec_train.head()

decision_tree_classification = DecisionTreeClassifier(criterion='entropy', random_state=10)
decision_tree = decision_tree_classification.fit(X_dec_train, y_dec_train)

# labels = X_dec_train.columns
# dot_data = tree.export_graphviz(decision_tree, feature_names=labels, class_names=["0", "1"])
# graph = pydotplus.graph_from_dot_data(dot_data)
# Image(graph.create_png())

print(get_test_report(decision_tree, X_dec_test, y_dec_test))

plot_confusion_matrix(decision_tree, X_dec_test, y_dec_test)

plot_roc(decision_tree, X_dec_test, y_dec_test)

models_summary = update_model_summary(decision_tree, X_dec_test, y_dec_test,
                                      "Decision Tree - Entropy")
models_summary

decision_tree_classification = DecisionTreeClassifier(criterion='gini', random_state=10)
decision_tree = decision_tree_classification.fit(X_dec_train, y_dec_train)

# labels = X_dec_train.columns
# dot_data = tree.export_graphviz(decision_tree, feature_names=labels, class_names=["0", "1"])
# graph = pydotplus.graph_from_dot_data(dot_data)
# Image(graph.create_png())

print(get_test_report(decision_tree, X_dec_test, y_dec_test))

plot_confusion_matrix(decision_tree, X_dec_test, y_dec_test)

plot_roc(decision_tree, X_dec_test, y_dec_test)

models_summary = update_model_summary(decision_tree, X_dec_test, y_dec_test,
                                      "Decision Tree - Gini")
models_summary

# ## Random Forest

rf_classification = RandomForestClassifier(n_estimators=10, random_state=10)
rf_model = rf_classification.fit(X_dec_train, y_dec_train)

print(get_test_report(rf_model, X_dec_test, y_dec_test))

plot_confusion_matrix(rf_model, X_dec_test, y_dec_test)

plot_roc(rf_model, X_dec_test, y_dec_test)

models_summary = update_model_summary(rf_model, X_dec_test, y_dec_test, "Random Forest")
models_summary

# ## Random Forest - Tuning Parameters

'''%%time
tuned_parameters = [{'criterion': ['entropy', 'gini'],
                     'n_estimators': [10, 30, 50, 70, 90],
                     'max_depth': [10, 15, 20],
                     'max_features': ['sqrt', 'log2'],
                     'min_samples_split': [2, 5, 8, 11],
                     'min_samples_leaf': [1, 5, 9],
                     'max_leaf_nodes': [2, 5, 8, 11]}]

random_forest_classification = RandomForestClassifier(random_state = 10)

rf_grid = GridSearchCV(estimator = random_forest_classification,
                       param_grid = tuned_parameters, cv = 5)
rf_grid_model = rf_grid.fit(X_dec_train, y_dec_train)

print('Best parameters for random forest classifier: ', rf_grid_model.best_params_, '\n')'''

'''Best parameters for random forest classifier:
{'criterion': 'entropy', 'max_depth': 10, 'max_features': 'sqrt', 'max_leaf_nodes': 5,
 'min_samples_leaf': 5, 'min_samples_split': 11, 'n_estimators': 10}
Wall time: 31min 20s'''

# +
# rf_model_tune = RandomForestClassifier(criterion = rf_grid_model.best_params_.get('criterion'),
#                                        n_estimators = rf_grid_model.best_params_.get('n_estimators'),
#                                        max_depth = rf_grid_model.best_params_.get('max_depth'),
#                                        max_features = rf_grid_model.best_params_.get('max_features'),
#                                        max_leaf_nodes = rf_grid_model.best_params_.get('max_leaf_nodes'),
#                                        min_samples_leaf = rf_grid_model.best_params_.get('min_samples_leaf'),
#                                        min_samples_split = rf_grid_model.best_params_.get('min_samples_split'),
#                                        random_state = 10)
rf_model_tune = RandomForestClassifier(criterion='entropy', n_estimators=10,
                                       max_depth=10, max_features='sqrt',
                                       max_leaf_nodes=5, min_samples_leaf=5,
                                       min_samples_split=11, random_state=10)
rf_model_tune = rf_model_tune.fit(X_dec_train, y_dec_train)

# print the performance measures on the test set for the model with the best parameters
print('Classification Report for test set:\n',
      get_test_report(rf_model_tune, X_dec_test, y_dec_test))
# -

X_dec_train.columns

X_dec_train.head()

print(get_test_report(rf_model_tune, X_dec_test, y_dec_test))

plot_confusion_matrix(rf_model_tune, X_dec_test, y_dec_test)

plot_roc(rf_model_tune, X_dec_test, y_dec_test)

models_summary = update_model_summary(rf_model_tune, X_dec_test, y_dec_test,
                                      "Random Forest - Tuned Hyperparameter")
models_summary

base_model_summary = models_summary.copy()
base_model_summary.sort_values("AUC_score", ascending=False)

# # Improvement

# ## Logistic Regression

# ## RFE

rfe_model = RFE(estimator=lr)
rfe_model = rfe_model.fit(X_train, y_train)

feat_index = pd.Series(data=rfe_model.ranking_, index=X_train.columns)
signi_feat_rfe = feat_index[feat_index == 1].index
print(signi_feat_rfe)

X_rfe = X[signi_feat_rfe]
X_rfe.head()

X_rfe_train, X_rfe_test, y_rfe_train, y_rfe_test = train_test_split(
    X_rfe, y, test_size=0.3, random_state=0)
X_rfe_train.shape, X_rfe_test.shape, y_rfe_train.shape, y_rfe_test.shape

logistic_model_rfe = lr.fit(X_rfe_train, y_rfe_train)

test_report = get_test_report(logistic_model_rfe, X_rfe_test, y_rfe_test)
print(test_report)

plot_roc(logistic_model_rfe, X_rfe_test, y_rfe_test)

plot_confusion_matrix(logistic_model_rfe, X_rfe_test, y_rfe_test)

models_summary = update_model_summary(logistic_model_rfe, X_rfe_test, y_rfe_test,
                                      "Logistic Model from RFE")
models_summary

# ## Forward Selection

lr_forward = sfs(estimator=lr, k_features='best', forward=True)
sfs_forward = lr_forward.fit(X_train, y_train)

print('Features selected using forward selection are: ')
sfs_forward_features = list(sfs_forward.k_feature_names_)
print(sfs_forward_features)
print('\nAccuracy: ', sfs_forward.k_score_)

X_lr_fr = X[sfs_forward_features]
X_lr_fr.head()

X_lr_fr_train, X_lr_fr_test, y_lr_fr_train, y_lr_fr_test = train_test_split(
    X_lr_fr, y, test_size=0.3, random_state=0)
X_lr_fr_train.shape, X_lr_fr_test.shape, y_lr_fr_train.shape, y_lr_fr_test.shape

logistic_model_lr_fr = lr.fit(X_lr_fr_train, y_lr_fr_train)

test_report = get_test_report(logistic_model_lr_fr, X_lr_fr_test, y_lr_fr_test)
print(test_report)

plot_roc(logistic_model_lr_fr, X_lr_fr_test, y_lr_fr_test)

plot_confusion_matrix(logistic_model_lr_fr, X_lr_fr_test, y_lr_fr_test)

models_summary = update_model_summary(logistic_model_lr_fr, X_lr_fr_test, y_lr_fr_test,
                                      "Logistic Model from Forward Selection")
models_summary

# ## Backward Elimination

lr_backward = sfs(estimator=lr, k_features='best', forward=False)
sfs_backward = lr_backward.fit(X_train, y_train)

print('Features selected using backward selection are: ')
sfs_backward_features = list(sfs_backward.k_feature_names_)
print(sfs_backward_features)
print('\nAccuracy: ', sfs_backward.k_score_)

X_lr_bk = X[sfs_backward_features]
X_lr_bk.head()

X_lr_bk_train, X_lr_bk_test, y_lr_bk_train, y_lr_bk_test = train_test_split(
    X_lr_bk, y, test_size=0.3, random_state=0)
X_lr_bk_train.shape, X_lr_bk_test.shape, y_lr_bk_train.shape, y_lr_bk_test.shape

logistic_model_lr_bk = lr.fit(X_lr_bk_train, y_lr_bk_train)

test_report = get_test_report(logistic_model_lr_bk, X_lr_bk_test, y_lr_bk_test)
print(test_report)

plot_roc(logistic_model_lr_bk, X_lr_bk_test, y_lr_bk_test)

plot_confusion_matrix(logistic_model_lr_bk, X_lr_bk_test, y_lr_bk_test)

models_summary = update_model_summary(logistic_model_lr_bk, X_lr_bk_test, y_lr_bk_test,
                                      "Logistic Model from Backward Selection")
models_summary

models_summary.sort_values("AUC_score", ascending=False)

# # SMOTE

from imblearn.over_sampling import SMOTE

y.value_counts()

sm = SMOTE(random_state=10)
X_sm, y_sm = sm.fit_resample(X, y)

y_sm.value_counts()

X_sm_train, X_sm_test, y_sm_train, y_sm_test = train_test_split(
    X_sm, y_sm, test_size=0.3, random_state=0)
X_sm_train.shape, X_sm_test.shape, y_sm_train.shape, y_sm_test.shape

logistic_model_sm = lr.fit(X_sm_train, y_sm_train)

logistic_model_sm.score(X_sm_test, y_sm_test)

calculate_auc_score(logistic_model_sm, X_sm_test, y_sm_test)

test_report = get_test_report(logistic_model_sm, X_sm_test, y_sm_test)
print(test_report)

plot_roc(logistic_model_sm, X_sm_test, y_sm_test)

plot_confusion_matrix(logistic_model_sm, X_sm_test, y_sm_test)

models_summary = update_model_summary(logistic_model_sm, X_sm_test, y_sm_test,
                                      "Logistic Model after SMOTE")
models_summary

models_summary.sort_values("AUC_score", ascending=False)

# +
rf_model_tune = RandomForestClassifier(criterion='entropy', n_estimators=10,
                                       max_depth=10, max_features='sqrt',
                                       max_leaf_nodes=5, min_samples_leaf=5,
                                       min_samples_split=11, random_state=10)
rf_model_tune = rf_model_tune.fit(X_dec_train, y_dec_train)

# print the performance measures on the test set for the model with the best parameters
print('Classification Report for test set:\n',
      get_test_report(rf_model_tune, X_dec_test, y_dec_test))
# -

y1_pred_prob = rf_model_tune.predict_proba(X_dec_test)[:, 1]
round(roc_auc_score(y_dec_test, y1_pred_prob), 4)

rf_model_tune.score(X_dec_test, y_dec_test)

rf_model_tune.predict(X_dec_test)[:5]

X_dec_test.head()

# # Export model

import joblib
joblib.dump(rf_model_tune, 'Heart_Stroke_prediction_model.ml')

import pickle
file = open('Heart_Stroke_prediction_model.pkl', 'wb')
pickle.dump(rf_model_tune, file)
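# The export cell above opens the pickle file without ever closing it, so the handle may
# not be flushed. A small sketch of the full save/load round trip with context managers
# (the dictionary merely stands in for the trained `rf_model_tune`, and the temp-dir path
# is an illustrative assumption; any picklable object behaves the same way):

```python
import os
import pickle
import tempfile

# stand-in for the trained rf_model_tune
model_stub = {"name": "rf_model_tune", "n_estimators": 10}

path = os.path.join(tempfile.gettempdir(), "Heart_Stroke_prediction_model.pkl")

# the context manager guarantees the file is flushed and closed
with open(path, "wb") as f:
    pickle.dump(model_stub, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == model_stub)  # -> True
```

# Loading back a joblib dump works the same way via `joblib.load(path)`; for scikit-learn
# estimators, the unpickling environment should use the same scikit-learn version that
# produced the dump.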
Heart Failure Prediction and Classification using Machine Learning Techniques.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# ### Absolute geolocation error (ALE) of Sentinel-1 IW SLC in CRs (Rosamond, CA)

# <B><I>Input image</I></B>
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;Sentinel-1 processed by ISCE2 (Sentinel-1B IPF version 003.31)
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;relative orbit: 71
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;satellite direction: descending
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;acquisition date: 2021 01 06
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;subswath: IW2 <b>(NOTE: this notebook is only for a single subswath CSLC)</b>
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;number of bursts: 2
# <br>
# &nbsp;&nbsp;&nbsp;&nbsp;Rosamond corner reflectors are located in burst No. 2

# <B><I>Accuracy requirement of the Sentinel-1 CSLC product (CSLC-S1)</I></B>
# <ul>
# <li>0.75 m in range</li>
# <li>1.5 m in azimuth</li>
# </ul>

# <div class="alert alert-warning">
# Corrections to be applied for estimating ALE<br>
# <ul>
# <li>Plate motion</li>
# <li>Solid Earth Tide (SET)</li>
# <li>Ionospheric effect in range</li>
# <li>Bistatic offsets in azimuth</li>
# <li>Doppler shift in range</li>
# <li>Topography-induced shift in azimuth due to Doppler FM-rate mismatch</li>
# <li>Tropospheric effect in range</li>
# </ul>
# </div>

# + tags=[]
import numpy as np
import datetime as dt
import pandas as pd
import os
import scipy
import pysolid
import re
import math

import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams.update({'font.size': 18})

from osgeo import gdal

import isce
import isceobj
from isceobj.Orbit.Orbit import Orbit, StateVector
# -

# #### Preparing input parameters

def loadProduct(xmlname):
    '''
    Load the product using Product Manager.
    '''
    from iscesys.Component.ProductManager import ProductManager as PM
    pm = PM()
    pm.configure()
    obj = pm.loadProduct(xmlname)
    return obj

# +
xmlfile = './datasets/IW2.xml'
info = loadProduct(xmlfile)  # loading xml file
nbursts = info.numberOfBursts  # number of bursts in CSLC file

# +
# defining parameters related to the S1 annotation xml file
xmlfile_S1 = './datasets/s1a-iw2-slc-vv-20210106t135212-20210106t135240-036018-043864-005.xml'

import xml.etree.ElementTree as ET
xmltree = ET.parse(xmlfile_S1)
xmlroot = xmltree.getroot()  # reading xml file

# +
# reading orbit info from xml
orb = Orbit()  # Orbit class

for sv in info.orbit.stateVectors.list:
    SV = StateVector()
    SV.setTime(sv.getTime())
    SV.setPosition(sv.getPosition())
    SV.setVelocity(sv.getVelocity())
    orb.addStateVector(SV)

# +
sensingStart = info.sensingStart
sensingMid = info.sensingMid
sensingStop = info.sensingStop
print(sensingStart, sensingMid, sensingStop)

# min, max time of data
orb.minTime = sensingStart
orb.maxTime = sensingStop

nearRange = info.startingRange
midRange = info.midRange
farRange = info.farRange
print('range (m) (near, mid, far)', nearRange, midRange, farRange)

# below parameters are identical in bursts of the same subswath (reading the first burst)
wvl = info.bursts.burst1.radarWavelength  # wavelength
print('wavelength (m): ', wvl)

rangePixelSize = info.bursts.burst1.rangePixelSize
print('rangepixelsize (m): ', rangePixelSize)

prf = info.bursts.burst1.prf  # pulse repetition frequency
pri = 1/prf  # pulse repetition interval
print('PRF (Hz): ', prf)
print('PRI (s): ', pri)

# +
# calculating azimuth pixel spacing given satellite geometry
azimuthTimeInterval = info.bursts.burst1.azimuthTimeInterval  # line time interval

Vs = np.linalg.norm(orb.interpolateOrbit(sensingMid, method='hermite').getVelocity())  # satellite velocity at center
Ps_vec = orb.interpolateOrbit(sensingMid, method='hermite').getPosition()
Ps = np.linalg.norm(Ps_vec)  # satellite position at center

# approximate terrain height
terrainHeight = info.bursts.burst1.terrainHeight

# latitude, longitude, elevation at image center
llh_cen = orb.rdr2geo(sensingMid, midRange, height=terrainHeight)

from isceobj.Planet.Planet import Planet
refElp = Planet(pname='Earth').ellipsoid
xyz_cen = refElp.llh_to_xyz(llh_cen)  # xyz coordinate at image center
Re = np.linalg.norm(xyz_cen)

cosb = (Ps**2 + Re**2 - midRange**2)/(2*Ps*Re)
Vg = (Re*cosb)*Vs/Ps
print('satellite velocity (m/s)', Vs)
print('satellite velocity over the ground (m/s)', Vg)

azimuthPixelSize = float(xmlroot.find('.//azimuthPixelSpacing').text)  # reading from S1 annotation xml
# azimuthPixelSize = 13.94096  # S-1 SAFE annotation xml <azimuthPixelSpacing>
# azimuthPixelSize = Vg*azimuthTimeInterval
print('azimuthPixelSize (m): ', azimuthPixelSize)
# -

# #### Calculating pixel location of CRs

# +
# reading info of CRs
# https://uavsar.jpl.nasa.gov/cgi-bin/calibration.pl
csvCR = '2021-01-06_1352_Rosamond-corner-reflectors.csv'
df = pd.read_csv(csvCR)
# df = pd.read_csv(csvCR, index_col=0)

# renaming header for convenience
# df.index.names = ['ID']
df.rename(columns={'Corner reflector ID': 'ID'}, inplace=True)
df.rename(columns={'Latitude (deg)': 'lat'}, inplace=True)
df.rename(columns={'Longitude (deg)': 'lon'}, inplace=True)
df.rename(columns={'Azimuth (deg)': 'azm'}, inplace=True)
df.rename(columns={'Height above ellipsoid (m)': 'hgt'}, inplace=True)
df.rename(columns={'Side length (m)': 'slen'}, inplace=True)
# -

df.head()

# ##### <I>Solid Earth Tide (SET) correction with PySolid</I>

# +
dt0 = sensingStart
dt1 = sensingStop
step_sec = 5  # sample spacing in time domain in seconds

for idx, row in df.iterrows():
    llh = [row['lat'], row['lon'], row['hgt']]
    refElp = Planet(pname='Earth').ellipsoid
    xyz = refElp.llh_to_xyz(llh)  # xyz coordinate of CR

    # compute SET via pysolid
    (dt_out, tide_e, tide_n, tide_u) = pysolid.calc_solid_earth_tides_point(
        llh[0], llh[1], dt0, dt1, step_sec=step_sec, display=False, verbose=False)
    tide_e = np.mean(tide_e[0:2])
    tide_n = np.mean(tide_n[0:2])
    tide_u = np.mean(tide_u[0:2])

    # updating lat, lon, hgt after SET correction
    xyz = [xyz[0]+tide_e, xyz[1]+tide_n, xyz[2]+tide_u]
    llh = refElp.xyz_to_llh(xyz)
    df.loc[idx, 'lat'] = llh[0]
    df.loc[idx, 'lon'] = llh[1]
    df.loc[idx, 'hgt'] = llh[2]
# -

# ##### <I>Ionospheric correction with vTEC from JPL</I>

# +
# functions for parsing ionex file
# ref: https://github.com/daniestevez/jupyter_notebooks/blob/master/IONEX.ipynb
def parse_map(tecmap, exponent=-1):
    tecmap = re.split('.*END OF TEC MAP', tecmap)[0]
    return np.stack([np.fromstring(l, sep=' ')
                     for l in re.split('.*LAT/LON1/LON2/DLON/H\\n', tecmap)[1:]])*10**exponent

def get_tecmaps(filename):
    with open(filename) as f:
        ionex = f.read()
        return [parse_map(t) for t in ionex.split('START OF TEC MAP')[1:]]

def get_tec(tecmap, lat, lon):
    i = round((87.5 - lat)*(tecmap.shape[0]-1)/(2*87.5))
    j = round((180 + lon)*(tecmap.shape[1]-1)/360)
    return tecmap[i, j]

# +
# functions for downloading ionex from NASA CDDIS
# NOTE: requires EARTHDATA login for download
def ionex_filename(year, day, center, zipped=True):
    return '{}g{:03d}0.{:02d}i{}'.format(center, day, year % 100, '.Z' if zipped else '')

def ionex_http_path(year, day, center):
    return 'https://cddis.nasa.gov/archive/gnss/products/ionex/{:04d}/{:03d}/{}'.format(
        year, day, ionex_filename(year, day, center))

# +
'''
showing how to download the ionex file from NASA CDDIS,
but not actually downloading because an EARTHDATA credential is required
'''
day = dt.datetime(year=sensingStart.year, month=sensingStart.month, day=sensingStart.day)
day_of_year = int(day.strftime('%j'))
center = 'jpl'

cmd = 'wget --auth-no-challenge --user=ID --password=PASSWORD ' + ionex_http_path(sensingStart.year, day_of_year, center)
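# The `get_tec` lookup above maps a latitude/longitude onto the indices of the global
# ionospheric map. A self-contained sanity check of that index arithmetic against a dummy
# map (the 71 x 73 shape assumes the standard 2.5-deg x 5-deg IONEX grid; the real map
# comes from `get_tecmaps`):

```python
import numpy as np

def get_tec(tecmap, lat, lon):
    # same index arithmetic as the notebook's get_tec:
    # rows run from lat +87.5 down to -87.5, columns from lon -180 to +180
    i = round((87.5 - lat) * (tecmap.shape[0] - 1) / (2 * 87.5))
    j = round((180 + lon) * (tecmap.shape[1] - 1) / 360)
    return tecmap[i, j]

# dummy map whose value encodes its own flat index
dummy = np.arange(71 * 73).reshape(71, 73)

print(get_tec(dummy, 87.5, -180.0))   # top-left cell -> 0
print(get_tec(dummy, -87.5, 180.0))   # bottom-right cell -> 5182
```

# The nearest-cell rounding means the lookup is accurate to half a grid cell; for CRs a
# few kilometres apart, all of them typically fall into the same TEC cell.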
print(cmd)
# os.system(cmd)

cmd = 'gzip -d ' + ionex_filename(sensingStart.year, day_of_year, center)
print(cmd)
# os.system(cmd)

# tecfile = ionex_filename(sensingStart.year, day_of_year, center).replace('.Z','')
# print(tecfile)

# +
## parameter setup for ionospheric correction

# JPL global ionospheric map (GIM) product
tecfile = ionex_filename(sensingStart.year, day_of_year, center).replace('.Z', '')
tecmap_array = get_tecmaps(tecfile)
tecmap_array = np.array(tecmap_array)

sensing_hour = sensingStart.hour

# daily TEC map has 2-hour interval
if (sensing_hour % 2) == 0:
    ind_hour = int(sensing_hour / 2)
else:
    ind_hour = sensing_hour // 2 + 1
tecmap = tecmap_array[ind_hour, :, :]

from isceobj.Constants import SPEED_OF_LIGHT
C = SPEED_OF_LIGHT  # speed of light (m/s)
freq = C / wvl  # carrier frequency (Hz)

# LOS vector
los = (np.array(Ps_vec)-np.array(xyz_cen))/np.linalg.norm(np.array(Ps_vec)-np.array(xyz_cen))
deg2rad = np.pi/180
n_vec = np.array([np.cos(llh_cen[0]*deg2rad)*np.cos(llh_cen[1]*deg2rad),
                  np.cos(llh_cen[0]*deg2rad)*np.sin(llh_cen[1]*deg2rad),
                  np.sin(llh_cen[0]*deg2rad)])
inc_ang = np.arccos(np.dot(los, n_vec))  # incidence angle at center
elv_ang = np.pi/2 - inc_ang  # elevation angle at center

hsp = 400000  # effective ionospheric height (m)
cosX = np.sqrt(1-(Re*np.cos(elv_ang)/(Re+hsp))**2)
MF = 1/cosX  # mapping function

# +
# pixel location of CRs
xloc = []  # expected location of CR in range (integer)
yloc = []  # expected location of CR in azimuth (integer)
xloc_float = []  # expected location of CR in range (float)
yloc_float = []  # expected location of CR in azimuth (float)
dIon = []  # range delay due to ionospheric effect

for lat, lon, hgt in zip(df.lat, df.lon, df.hgt):
    llh = [lat, lon, hgt]
    tguess, rng = orb.geo2rdr(llh)  # main calculation for conversion between llh and pixel location
    vTEC = get_tec(tecmap, lat, lon)
    _dIon = 40.3 * (10**16) / (freq**2) * vTEC * MF  # slant range path delay
    xloc.append(int(np.floor((rng-nearRange)/rangePixelSize)))
    yloc.append(int(np.floor((tguess - sensingStart).total_seconds()/azimuthTimeInterval)))
    xloc_float.append((rng-nearRange)/rangePixelSize)
    yloc_float.append((tguess - sensingStart).total_seconds()/azimuthTimeInterval)
    dIon.append(_dIon)

df['xloc'] = xloc
df['yloc'] = yloc
df['xloc_float'] = xloc_float
df['yloc_float'] = yloc_float
df['dIon'] = dIon
# -

df.head()

# #### Plotting CRs on SLC image

# +
# reading SLC file
SLCvrt = './datasets/20210106.slc.full.vrt'
ds = gdal.Open(SLCvrt, gdal.GA_ReadOnly)
slc = ds.GetRasterBand(1).ReadAsArray()
ds = None

# extent around CRs (for figure)
buffer = 20
xmin = np.min(xloc) - buffer
xmax = np.max(xloc) + buffer
ymin = np.min(yloc) - buffer
ymax = np.max(yloc) + buffer

# set all zero values to nan and do not plot nan
try:
    slc[slc == 0] = np.nan
except:
    pass

fig, ax = plt.subplots(figsize=(30, 20))
cax = ax.imshow(20*np.log10(np.abs(slc)), cmap='gray', interpolation=None, origin='upper')
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.axis('off')
# cbar = fig.colorbar(cax, orientation="horizontal")
ax.set_aspect(1)

for sl in pd.unique(df.slen):
    xx = df.loc[df['slen'] == sl]['xloc']
    yy = df.loc[df['slen'] == sl]['yloc']
    ID = df.loc[df['slen'] == sl]['ID']
    if sl == 2.4384:
        color = 'blue'
    elif sl == 4.8:
        color = 'red'
    elif sl == 2.8:
        color = 'yellow'
    else:
        color = 'green'
    ax.scatter(xx, yy, color=color, marker="+", lw=1)
    for _ID, _xx, _yy in zip(ID, xx, yy):
        ax.annotate(_ID, (_xx, _yy))

fig.savefig('Rosamond.png', dpi=300, bbox_inches='tight')
# -

if info.bursts.burst1.passDirection == 'DESCENDING':
    df_filter = df.loc[df['azm'] > 349].reset_index(drop=True)  # only east-looking CRs (for right-looking descending)
else:  # ASCENDING
    df_filter = df.loc[df['azm'] < 200].reset_index(drop=True)  # only west-looking CRs (for right-looking ascending)
df = None

# + tags=[]
df_filter
# -

# +
# start and stop time of bursts
bursts_start_time = []
bursts_stop_time = []
for ii in range(nbursts):
    burst_ind = ii + 1
    burstname = 'info.bursts.burst' + str(burst_ind)
    _ = eval(burstname + '.burstStartUTC')
    bursts_start_time.append(_)
    _ = eval(burstname + '.burstStopUTC')
    bursts_stop_time.append(_)

# +
# determining in which of the multiple bursts the CRs are located
loc_bursts = []  # location of CRs in multiple bursts
for idx, row in df_filter.iterrows():
    _aztime = sensingStart + dt.timedelta(seconds=azimuthTimeInterval * row['yloc_float'])  # azimuth time at CR
    for ii in range(nbursts):
        if (_aztime > bursts_start_time[ii]) and (_aztime < bursts_stop_time[ii]):
            loc_bursts.append(int(ii+1))
print('location of CRs in bursts: ', loc_bursts)
df_filter['burst_NO'] = loc_bursts

# +
# determining in which burst of the S1 annotation xml file the CRs are located
nburst_SAFE = len(xmltree.findall('.//burst'))
print("number of bursts in Sentinel-1 annotation xml file: ", nburst_SAFE)
allburst_aztime = xmlroot.findall('.//burst/azimuthTime')
dateformat = '%Y-%m-%dT%H:%M:%S.%f'

loc_bursts_SAFE = []  # location of CRs in multiple bursts
for idx, row in df_filter.iterrows():
    _aztime = sensingStart + dt.timedelta(seconds=azimuthTimeInterval * row['yloc_float'])  # azimuth time at CR
    cnt = 0
    for ii in range(nburst_SAFE):
        _burst_aztime = dt.datetime.strptime(allburst_aztime[ii].text, dateformat)
        if (_aztime > _burst_aztime):
            cnt += 1
    loc_bursts_SAFE.append(cnt)
print('location of CRs in bursts of S1 xml file: ', loc_bursts_SAFE)
df_filter['burst_NO_SAFE'] = loc_bursts_SAFE
# -

# #### Finding CRs (intensity peak) from image

def slc_ovs(slc, ovsFactor=1, y=None, x=None):
    '''
    oversampling SLC data
    ovsFactor: oversampling factor
    '''
    if y is None:
        y = np.arange(slc.shape[0])
    if x is None:
        x = np.arange(slc.shape[1])

    rows, cols = np.shape(slc)
    _slc = np.fft.fftshift(np.fft.fft2(slc))
    min_row = math.ceil(rows * ovsFactor / 2 - rows / 2)
    max_row = min_row + rows
    min_col = math.ceil(cols * ovsFactor / 2 - cols / 2)
    max_col = min_col + cols

    slc_padding = np.zeros((rows * ovsFactor, cols * ovsFactor), dtype=_slc.dtype)  # zero padding
    slc_padding[min_row:max_row, min_col:max_col] = _slc
    slc_ = np.fft.fftshift(slc_padding)
    slcovs = np.fft.ifft2(slc_) * ovsFactor * ovsFactor

    y_orign_step = y[1]-y[0]
    y_ovs_step = y_orign_step/ovsFactor
    x_orign_step = x[1]-x[0]
    x_ovs_step = x_orign_step/ovsFactor

    y = np.arange(y[0], y[-1]+y_orign_step, y_ovs_step)
    x = np.arange(x[0], x[-1]+x_orign_step, x_ovs_step)

    return slcovs, y, x

def findCR(data, y, x, x_bound=[-np.inf, np.inf], y_bound=[-np.inf, np.inf], method="sinc"):
    '''
    Find the location of CR with fitting
    '''
    max_ind = np.argmax(data)
    max_data = data[max_ind]

    def _sinc2D(x, x0, y0, a, b, c):
        return c*np.sinc(a*(x[0]-x0))*np.sinc(b*(x[1]-y0))

    def _para2D(x, x0, y0, a, b, c, d):
        return a*(x[0]-x0)**2+b*(x[1]-y0)**2+c*(x[0]-x0)*(x[1]-y0)+d

    if method == "sinc":
        # using a sinc function for fitting
        xdata = np.vstack((x, y))
        p0 = [x[max_ind], y[max_ind], 0.7, 0.7, max_data]
        bounds = ([x_bound[0], y_bound[0], 0, 0, 0],
                  [x_bound[1], y_bound[1], 1, 1, np.inf])
        popt = scipy.optimize.curve_fit(_sinc2D, xdata, data, p0=p0, bounds=bounds)[0]
        x_loc = popt[0]; y_loc = popt[1]
    elif method == "para":
        # using a paraboloid function for fitting
        xdata = np.vstack((x, y))
        p0 = [x[max_ind], y[max_ind], -1, -1, 1, 1]
        bounds = ([x_bound[0], y_bound[0], -np.inf, -np.inf, -np.inf, 0],
                  [x_bound[1], y_bound[1], 0, 0, np.inf, np.inf])
        popt = scipy.optimize.curve_fit(_para2D, xdata, data, p0=p0, bounds=bounds)[0]
        x_loc = popt[0]; y_loc = popt[1]

    return y_loc, x_loc

# +
import warnings  # needed for the patch-distance warning below

slc[np.isnan(slc)] = 0.0
xpeak = []
ypeak = []
for ID, xoff, yoff in zip(df_filter['ID'], df_filter['xloc'], df_filter['yloc']):
    # crop a 10*10 patch with center at the calculated CR position
    pxbuff = 5
    pybuff = 5
    slccrop = slc[(yoff-pybuff):(yoff+pybuff+1), (xoff-pxbuff):(xoff+pxbuff+1)]

    # find the peak amplitude in the 10*10 patch
    yind, xind = np.unravel_index(np.argmax(np.abs(slccrop), axis=None), slccrop.shape)

    # give a warning if the peak and the calculated position are too far apart
    dyind = yind-pybuff; dxind = xind-pxbuff
    dist = math.sqrt(dyind**2+dxind**2)
    if dist > 5.0:
        warnings.warn(f'the brightest pixel is too far from the predicted xloc for CR {ID}')

    # crop a 32*32 patch, but with its center at the peak
    xbuff = 16
    ybuff = 16
    ycrop = np.arange(yoff+dyind-ybuff, yoff+dyind+ybuff+1)
    xcrop = np.arange(xoff+dxind-xbuff, xoff+dxind+xbuff+1)
    slccrop = slc[ycrop, :][:, xcrop]

    # oversample this 32*32 patch by 32
    ovsFactor = 32
    slccrop_ovs, ycrop_ovs, xcrop_ovs = slc_ovs(slccrop, ovsFactor=ovsFactor, y=ycrop, x=xcrop)

    # find the peak amplitude again in a 2 x 2 patch; it corresponds to
    # (2*ovsFactor) x (2*ovsFactor) in the oversampled slc
    yoff2 = int(slccrop_ovs.shape[0]/2)
    xoff2 = int(slccrop_ovs.shape[1]/2)
    slccrop2 = slccrop_ovs[yoff2-ovsFactor:yoff2+ovsFactor+1, xoff2-ovsFactor:xoff2+ovsFactor+1]
    yind2, xind2 = np.unravel_index(np.argmax(abs(slccrop2), axis=None), slccrop2.shape)
    dyind2 = yind2-ovsFactor; dxind2 = xind2-ovsFactor

    # crop a 3x3 oversampled patch with center at the peak
    slccrop2 = slccrop_ovs[yoff2+dyind2-1:yoff2+dyind2+2, xoff2+dxind2-1:xoff2+dxind2+2]
    ycrop2 = ycrop_ovs[yoff2+dyind2-1:yoff2+dyind2+2]
    xcrop2 = xcrop_ovs[xoff2+dxind2-1:xoff2+dxind2+2]
    xxcrop2, yycrop2 = np.meshgrid(xcrop2, ycrop2)

    xxcrop2_f = xxcrop2.flatten()
    yycrop2_f = yycrop2.flatten()
    slccrop2_f = slccrop2.flatten()

    # fitting to find the location of the CR
    _ypeak, _xpeak = findCR(np.abs(slccrop2_f), yycrop2_f, xxcrop2_f,
                            x_bound=[xcrop2[0], xcrop2[-1]],
                            y_bound=[ycrop2[0], ycrop2[-1]], method="para")
    xpeak.append(_xpeak)
    ypeak.append(_ypeak)

df_filter['xloc_CR'] = xpeak
df_filter['yloc_CR'] = ypeak
# -

df_filter

# #### <I>Tropospheric correction</I>

# <I><B>Note:</B> <br>
# &emsp;&emsp;This step requires MintPy and PyAPS for downloading GRIB files and calculating a range delay</I>
# <br>
# <I>&emsp;&emsp;For ERA5, a CDS API key should exist in ~/.cdsapirc</I>

import tropo_utils as tu  # importing util functions in tropo_utils (from MintPy)

# +
# parameters to download the weather model
date_list = sensingStart.strftime('%Y%m%d')
hour = f'{sensingStart.hour:02}'
model = 'ERA5'  # weather model
grib_dir = '.'
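# Stepping back to the peak search above: the frequency-domain zero-padding that
# `slc_ovs` applies in 2-D can be illustrated with a 1-D toy signal (the signal length,
# the oversampling factor, and the 31.25-sample peak position are made-up test values).
# Band-limited interpolation of the spectrum localizes a peak that sits between integer
# samples, exactly what the 32x oversampling of the 32*32 CR patch relies on:

```python
import numpy as np

def oversample_1d(sig, ovs):
    # zero-pad the centered spectrum, as slc_ovs does in 2-D:
    # band-limited (sinc) interpolation of the input samples
    n = sig.size
    spec = np.fft.fftshift(np.fft.fft(sig))
    pad = np.zeros(n * ovs, dtype=spec.dtype)
    lo = (n * ovs) // 2 - n // 2
    pad[lo:lo + n] = spec
    return np.fft.ifft(np.fft.fftshift(pad)) * ovs

n, ovs = 64, 16
x = np.arange(n)
true_peak = 31.25                              # made-up sub-sample peak position
sig = np.sinc(x - true_peak).astype(complex)   # point-target-like response

coarse = int(np.argmax(np.abs(sig)))                      # integer-sample estimate
fine = np.argmax(np.abs(oversample_1d(sig, ovs))) / ovs   # sub-sample estimate
print(coarse, fine)
```

# The oversampled grid has a spacing of 1/ovs samples, so the achievable localization is
# limited by the oversampling factor; the notebook then refines it further with the
# paraboloid fit in `findCR`.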
#current folder #coverage of re-analysis model minlat = int(np.floor(np.min(df_filter['lat']))) maxlat = int(np.ceil(np.max(df_filter['lat']))) minlon = int(np.floor(np.min(df_filter['lon']))) maxlon = int(np.ceil(np.max(df_filter['lon']))) snwe = (minlat, maxlat, minlon, maxlon) #coverage grib_files = tu.get_grib_filenames(date_list, hour, model, grib_dir, snwe) #grib file name print('GRIB file name to be downloaded: ',grib_files) # - #downloading ERA5 GRIB file tu.dload_grib_files(grib_files, tropo_model='ERA5', snwe=snwe) # + tropo_delay = [] for idx, row in df_filter.iterrows(): lat = row['lat']; lon = row['lon']; hgt = row['hgt'] llh = [lat, lon, hgt] #lat, lon, hgt at CR refElp = Planet(pname='Earth'). ellipsoid xyz = refElp.llh_to_xyz(llh) #xyz coordinate at CR _aztime =sensingStart + dt.timedelta(seconds=azimuthTimeInterval * row['yloc_CR']) #azimuth time at CR xyz_pos_sat = orb.interpolateOrbit(_aztime,method='hermite').getPosition() #satellite position at azimuth time los = (np.array(xyz_pos_sat)-np.array(xyz))/np.linalg.norm(np.array(xyz_pos_sat)-np.array(xyz)) #LOS vector n_vec = np.array([np.cos(llh[0]*deg2rad)*np.cos(llh[1]*deg2rad), np.cos(llh[0]*deg2rad)*np.sin(llh[1]*deg2rad), np.sin(llh[0]*deg2rad)]) inc_ang = np.arccos(np.dot(los, n_vec))*180./np.pi #incidence angle (unit: deg) _hgt = np.zeros((1,1)); _lat = np.zeros((1,1)); _lon = np.zeros((1,1)) _hgt[0,0] = hgt; _lat[0,0] = lat; _lon[0,0] = lon #calculating range delay estimated from weather model delay = tu.get_delay(grib_files[0], tropo_model='ERA5', delay_type='comb', dem=_hgt, inc=inc_ang, lat=_lat, lon=_lon, verbose=True) tropo_delay.append(-delay[0][0]) df_filter['tropo'] = tropo_delay # - # #### <I>Correcting bistatic offset effects in azimuth</I> # + bistatic = [] rank = np.floor((nearRange*2/C)/pri) tau0 = rank * pri for idx, row in df_filter.iterrows(): midRngTime = midRange * 2 / C #two-way mid range time rngTime = (nearRange + row['xloc_CR']*rangePixelSize)*2/C 
bistatic.append((midRngTime/2 + rngTime/2 - tau0)*Vg) # - # #### <I>Correcting Doppler shift in range and topography induced FM-rate mismatch in azimuth</I> # + dopShift = [] fmMismatch = [] import copy for idx, row in df_filter.iterrows(): burst_no = int(row['burst_NO']) burstname = 'info.bursts.burst' + str(burst_no) dop = eval(burstname+'.doppler._coeffs') #doppler coefficient burst_no_safe = int(row['burst_NO_SAFE']) - 1 Kr = float(xmlroot.find('.//txPulseRampRate').text) #sweep rate from S-1 SAFE annotation xml <txPulseRampRate> (Hz/s) all_dop_t0 = xmlroot.findall('.//dcEstimate/t0') dop_t0 = float(all_dop_t0[burst_no_safe].text) #S-1 SAFE annotation xml <dcEstimate><t0> (s) Kst = eval(burstname+'.azimuthSteeringRate') #azimuth steering rate (radian/s) Ks = 2*Vs/C*freq*Kst #Doppler centroid rate azFmRateCoeffs = eval(burstname+'.azimuthFMRate._coeffs') all_azFmt0 = xmlroot.findall('.//azimuthFmRate/t0') azFmt0 = float(all_azFmt0[burst_no_safe].text) #S-1 SAFE annotation xml <azimuthFmRate><t0> (s) rngTime = (nearRange + row['xloc_CR']*rangePixelSize)*2/C #range time of CR reflector fdc_geom = dop[0]+dop[1]*(rngTime-dop_t0)+dop[2]*(rngTime-dop_t0)**2 azFmRate = azFmRateCoeffs[0] + azFmRateCoeffs[1]*(rngTime-azFmt0) + azFmRateCoeffs[2]*(rngTime-azFmt0)**2 Kt = azFmRate * Ks / (azFmRate - Ks) burstMid = eval(burstname+'.burstMidUTC') # azTime = (sensingStart - sensingMid).total_seconds() + azimuthTimeInterval * row['yloc_CR'] # azTime = (burstStart - burstMid).total_seconds() + azimuthTimeInterval * (row['yloc_CR']-burst1line) # azTime = (sensingStart - burstMid).total_seconds() + azimuthTimeInterval * (row['yloc_CR']-burst1line) azTime = (sensingStart - burstMid).total_seconds() + azimuthTimeInterval * (row['yloc_CR']) fdc = fdc_geom + Kt * azTime planet = Planet(pname='Earth') refelp = copy.copy(planet.ellipsoid) llh_CR = [row['lat'], row['lon'], row['hgt']] xyz_CR = refElp.llh_to_xyz(llh_CR) #xyz coordinate at corner reflector _aztime =sensingStart + 
dt.timedelta(seconds=azimuthTimeInterval * row['yloc_CR']) #azimuth time at CR xyz_pos_sat = orb.interpolateOrbit(_aztime,method='hermite').getPosition() #satellite position at azimuth time xyz_vel_sat = orb.interpolateOrbit(_aztime,method='hermite').getVelocity() #satellite velocity at azimuth time #computing acceleration dist = np.linalg.norm(xyz_pos_sat) r_spinvec = np.array([0., 0., planet.spin]) r_tempv = np.cross(r_spinvec, xyz_pos_sat) inert_acc = np.array([-planet.GM*x/(dist**3) for x in xyz_pos_sat]) r_tempa = np.cross(r_spinvec, xyz_vel_sat) r_tempvec = np.cross(r_spinvec, r_tempv) xyz_acc_sat = inert_acc - 2 * r_tempa - r_tempvec #satellite acceleration at azimuth time xyz_CR = np.array(xyz_CR) xyz_pos_sat = np.array(xyz_pos_sat) xyz_vel_sat = np.array(xyz_vel_sat) xyz_acc_sat = np.array(xyz_acc_sat) kgeo = -2/(wvl * np.linalg.norm(xyz_pos_sat-xyz_CR))*(np.dot((xyz_pos_sat-xyz_CR),xyz_acc_sat)+np.dot(xyz_vel_sat,xyz_vel_sat)) dopShift.append(fdc/Kr*C/2) fmMismatch.append(fdc*(-1/azFmRate+1/kgeo)*Vg) # - # #### Calculating and plotting final ALE in range and azimuth #absloute geolocation error in range and azimuth after corrections ALE_Rg = (df_filter['xloc_CR'] - df_filter['xloc_float'])*rangePixelSize - df_filter['dIon'] + dopShift - df_filter['tropo'] ALE_Az = (df_filter['yloc_CR'] - df_filter['yloc_float'])*azimuthPixelSize + bistatic - fmMismatch #plotting ALE fig, ax = plt.subplots(figsize=(15,10)) sc = ax.scatter(ALE_Rg, ALE_Az, s=200, c=df_filter['slen'], alpha=0.8, marker='d') ax.legend(*sc.legend_elements(),facecolor='lightgray') ax.get_legend().set_title('side length (m)') for ii, txt in enumerate(df_filter.iloc[:,0]): ax.annotate(txt, (ALE_Rg[ii],ALE_Az[ii])) #putting IDs in each CR ax.grid(True) ax.set_xlim(-1,1) ax.set_ylim(-4,4) ax.axhline(0, color='black') ax.axvline(0, color='black') ax.set_title('Absolute geolocation error') ax.set_xlabel('$\Delta$ Range (m)') ax.set_ylabel('$\Delta$ Azimuth (m)') 
fig.savefig('ALE.png',dpi=300,bbox_inches='tight') print('mean ALE in range: ',np.mean(ALE_Rg), 'std ALE in range: ',np.std(ALE_Rg)) print('mean ALE in azimuth: ',np.mean(ALE_Az), 'std ALE in azimuth: ',np.std(ALE_Az))
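The sub-pixel peak search above relies on a spectral zero-padding oversampler (`slc_ovs`, whose definition is truncated at the top of this excerpt). A minimal stand-alone sketch of the same idea — hypothetical helper name, NumPy only, not the notebook's exact implementation:

```python
import numpy as np

def oversample_2d(patch, factor):
    """Oversample a complex 2-D patch by zero-padding its spectrum.

    Same spirit as the notebook's slc_ovs: FFT the patch, fftshift,
    pad the spectrum with zeros, inverse-FFT, and rescale amplitude.
    """
    ny, nx = patch.shape
    spec = np.fft.fftshift(np.fft.fft2(patch))
    pad_y = (ny * factor - ny) // 2
    pad_x = (nx * factor - nx) // 2
    spec_pad = np.pad(spec, ((pad_y, pad_y), (pad_x, pad_x)))
    # ifft2 of the padded spectrum interpolates the band-limited signal;
    # multiply by factor**2 to preserve the original amplitude scale
    return np.fft.ifft2(np.fft.ifftshift(spec_pad)) * factor * factor

# locate a synthetic peak to sub-pixel precision
y, x = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
patch = np.exp(-((y - 3.6) ** 2 + (x - 4.2) ** 2))  # true peak at (3.6, 4.2)
ovs = oversample_2d(patch.astype(complex), 16)
iy, ix = np.unravel_index(np.argmax(np.abs(ovs)), ovs.shape)
print(iy / 16, ix / 16)  # close to (3.6, 4.2)
```

Fitting a sinc or paraboloid around this oversampled maximum, as `findCR` does, then refines the estimate below the oversampled grid spacing.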
ALE_Rosamond_tropo.S1A.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="fepLQAj-JshB"
# # Logistic Regression From Scratch
# <NAME><br>
# <NAME>

# + id="kf2IjDoxJo2F"
import pandas as pd
import numpy as np
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings('ignore')

# + [markdown] id="qe6IrDnXiuGx"
# ## 1. Implementation of Logistic Regression

# + [markdown] id="ZhivEz0sc3ya"
# ### 1.1 Math Functions

# + id="Tgtwi3Z1bNwH"
# define sigmoid function
def sigmoid(x):
    """Calculates Sigmoid value"""
    sig = 1./(1+np.exp(-x))
    return sig

# define cost function
def cost_fn(x, y, b):
    """Calculates cost"""
    z = np.dot(x, b)
    sig = sigmoid(z)
    # add a small constant inside each log to avoid log(0)
    J = - np.dot(y.T, np.log(sig+1e-6)) - np.dot((1-y).T, np.log(1-sig+1e-6))
    return J

# define function to compute gradient
def gradient(x, y, b):
    """Calculates gradient vector"""
    sig = sigmoid(np.dot(x, b))
    grad = np.dot(x.T, sig - y)
    return grad

# define function to compute Hessian matrix
def hessian(x, y, b):
    """Calculates Hessian matrix"""
    N, D = x.shape
    R = np.zeros((N, N))  # initialize diagonal matrix
    for i in range(N):
        sig = sigmoid(np.dot(x[i], b))
        R[i, i] = sig*(1-sig)  # fill diagonal entries
    H = x.T@R@x  # Hessian matrix
    return H

# + [markdown] id="6-ATXxdAiuGz"
# ### 1.2 Optimization Methods
#
# Note: the `gradient_descent` function is a 3-in-1 universal function for batch (ordinary) gradient descent, stochastic gradient descent, and mini-batch gradient descent. The user can choose among them by passing the appropriate `batch_size` and `epochs`.
# # For batch (ordinary) gradient descent: # - `batch_size` = the number of observations in the dataset # - `epochs` is the same as the maximum number of iterations # # For stochastic gradient descent: # - `batch_size` = 1 # - the maximum number of iterations = `epochs` $\times$ the number of observations # # For mini-batch gradient descent: # - `batch_size` = any valid value # - the maximum number of iterations = `epochs` $\times$ the number of batches (the number of observations / `batch_size`) # + id="ey3U6LSciuG0" # define generalized gradient descent function def gradient_descent(x, y, b, learning_rate, decay_rate, epsilon, epochs, batch_size, momentum): """Perform generalized gradient descent algorithm, including ordinary, stochastic, mini-batch versions Parameters ---------- x : array-like The input matrix with rows as observations and columns as features y : array-like The labels for observations b : array-like The initial model parameters learning_rate : float Determines the step size at each iteration, must be > 0, typically > 0 and < 1 decay_rate : float The decay rate for inversely decaying learning rate, must be >=0, typically >= 0 and < 1 epsilon : float The precision tolerance of cost function value as stopping criteria epochs : int The number of times that the entire dataset is traversed batch_size : int The the size of a batch of the input data momentum : float The coefficient of the momentum term Returns ------- J_new : float The final cost function value b : array-like The optimized model parameters iteration : int The total number of iterations run """ x = np.array(x, dtype=np.dtype("float64")) y = np.array(y, dtype=np.dtype("float64")).flatten() J = np.inf num_obs = len(x) num_batch = num_obs // batch_size iteration = 0 change =0 # arguement value check if not 0 < batch_size <= num_obs: raise ValueError( "'batch_size' must be greater than zero and less than " "or equal to the number of observations") if num_obs != y.shape[0]: raise 
ValueError("'x' and 'y' lengths do not match") if x.shape[1] != len(b): raise ValueError("incorrect number of parameters") # algorithem starts for epoch in range(epochs): x, y = shuffle(x, y, random_state=0) # shuffle x and y for a new epoch for start in range(0, num_obs, batch_size): end = start + batch_size x_sub = x[start:end,:] y_sub = y[start:end] J_new = cost_fn(x, y, b) # Compute cost grad = gradient(x_sub, y_sub, b) # compute gradient new_change = learning_rate*grad + momentum*change # calculate the update of parameters b = b - new_change # update parameters learning_rate = learning_rate/(1+decay_rate*iteration) # inverse decay of learning rate change = new_change iteration += 1 if np.abs(J_new - J) < epsilon: # terminate if the difference in cost function is less than the precision break else: J = J_new if num_obs % batch_size != 0: # if batch size is not divisible by the number of observations x_sub = x[num_obs-(num_obs%batch_size):num_obs,:] # use the rest of the data to perform one more iteration y_sub = y[num_obs-(num_obs%batch_size):num_obs] J_new = cost_fn(x, y, b) # Compute cost grad = gradient(x_sub, y_sub, b) # compute gradient new_change = learning_rate*grad + momentum*change # calculate the update of parameters b = b - new_change # update parameters learning_rate = learning_rate/(1+decay_rate*iteration) # inverse decay of learning rate change = new_change iteration += 1 if np.abs(J_new - J) < epsilon: # terminate if the difference in cost function is less than the precision break else: J = J_new return J_new, b, iteration # define function for Newton Method def newton(x, y, b, epsilon, max_iters): """Perform generalized gradient descent algorithm, including ordinary, stochastic, mini-batch stochastic versions Parameters ---------- x : array-like The input matrix with rows as observations and columns as features y : array-like The labels for observations b : array-like The initial model parameters epsilon : float The precision tolerance of cost 
function value as stopping criteria max_iters : int The maximum number of iterations to be run Returns ------- J_new : float The final cost function value b : array-like The optimized model parameters iteration : int The total number of iterations run """ x = np.array(x, dtype=np.dtype("float64")) y = np.array(y, dtype=np.dtype("float64")).flatten() J = 1e9 iteration = 0 for i in range(0, max_iters): J_new = cost_fn(x, y, b) # Compute cost grad = gradient(x, y, b) # compute gradient H = hessian(x, y, b) # compute Hessian matrix b = b - np.linalg.inv(H+np.identity(len(b))*1e-6)@grad # update parameters, add an indentity matrix with small constant to avoid numerical issue iteration += 1 if np.abs(J_new - J) < epsilon: # terminate if the difference in cost function is less than the precision break else: J = J_new return J_new, b, iteration # + [markdown] id="4ToFeMy8iuG2" # ### 1.3 Logistic Regression Class # + id="Lvt2NPrXiuG3" class Logistic_Regression: """ A class used to represent a logistic regression model Attributes ---------- order : int (either 1 or 2) Indicates the use of first-order (1) or second-order (2) optimization method, default value = 1 learning_rate : float Determines the step size at each iteration, must be > 0, typically > 0 and < 1, default value = 1e-3 decay_rate : float The decay rate for inversely decaying learning rate, must be >=0, typically >= 0 and < 1, default value = 0 epsilon : float The precision tolerance of cost function value as stopping criteria, default value = 1e-4 epochs : int The number of times that the entire dataset is traversed, default value = 32 batch_size : int The the size of a batch of the input data, default value = 1 momentum : float The coefficient of the momentum term, default value = 0 threshold : float The probability threshold for class assignment, default value = 0.5 verbose : boolean Indicates whether to show model output or not, default value = True b : array-like Initial model parameters param : array-like 
The optimized model parameters coef_ : array-like (2D) The optimized model coefficients intercept_ : array-like (2D) The optimized model intercept Methods ------- fit(x, y) Trains the model with input data x and true label y predict(x) Predicts class labels for samples in x predict_proba(x) Predicts probability estimates for samples in x score(x, y) Calculates classification accuracy """ def __init__(self, order=1, learning_rate=1e-3, decay_rate=0, epsilon=1e-4, epochs=32, batch_size=1, momentum=0.0, threshold=0.5, verbose=True): """Initialize an object of the class""" self.order = int(order) # the order of optimization method (1 for first order, 2 for second order) self.learning_rate = learning_rate # initial learning rate for gradient descent self.decay_rate = decay_rate # decay rate of inversely decaying learning rate self.epsilon = epsilon # the precision tolerance of cost function value as stopping criteria self.epochs = int(epochs) # the number of epochs to iterate self.batch_size = int(batch_size) # the size of batch for general stochastic gradient descent self.momentum = momentum # the momentum coefficient for general stochastic gradient descent self.threshold = threshold # the probability threshold for class assignment self.verbose = verbose # whether to show results or not # argument value check if self.order != 1 and self.order != 2: raise ValueError("only first or second order methods are supported, " "please type in 1 for first order, 2 for second order") if self.learning_rate <= 0: raise ValueError("'learning_rate' must be greater than zero") if self.decay_rate < 0: raise ValueError("'decay_rate' must be greater or equal to zero") if self.epsilon <= 0: raise ValueError("'tolerance' must be greater than zero") if self.epochs <= 0: raise ValueError("'epochs' must be greater than zero") if self.batch_size <= 0: raise ValueError("'batch_size' must be greater than zero") if self.momentum < 0 or self.momentum > 1: raise ValueError("'momentum' must be 
between zero and one") # fit function for training the model def fit(self, x, y): """Trains the model with input data x and true label y Parameters ---------- x : array-like The input matrix with rows as observations and columns as features y : array-like The true labels for observations Returns ------- self : object The object of the class """ if x.ndim == 1: # if input matrix has only one dimension x = x[:, None] N = x.shape[0] x = np.column_stack([np.ones(N), x]) # add a column of ones for intercept of the model N,D = x.shape self.b = np.zeros(D) # initialize parameters if self.order == 1: # use gradient descent cost, self.param, iters = gradient_descent(x, y, self.b, self.learning_rate, self.decay_rate, self.epsilon, self.epochs, self.batch_size, self.momentum) else: # use Newton method cost, self.param, iters = newton(x, y, self.b, self.epsilon, self.epochs) if self.verbose: print(f'terminated after {iters} iterations, with cost equal to {cost}') print(f'the coefficients found: {self.param}') self.coef_ = np.array([self.param[1:]]) self.intercept_ = np.array([self.param[0]]) return self # function for predicting class labels for samples in x def predict(self, x): """Predicts class labels for samples in x Parameters ---------- x : array-like The input matrix with rows as observations and columns as features Returns ------- y_pred : array The array of predicted class labels """ if x.ndim == 1: x = x[:, None] Nt = x.shape[0] x = np.column_stack([np.ones(Nt), x]) # add a column of ones for intercept of the model yh = sigmoid(np.dot(x, self.param)) # predict output probability y_pred = [1 if x>self.threshold else 0 for x in yh] # assign class labels with threshold return np.array(y_pred) # function for predicting probability estimates for samples in x def predict_proba(self, x): """Predicts probability estimates for samples in x Parameters ---------- x : array-like The input matrix with rows as observations and columns as features Returns ------- yh : array The 
array of predicted probability estimates """ if x.ndim == 1: x = x[:, None] Nt = x.shape[0] x = np.column_stack([np.ones(Nt), x]) # add a column of ones for intercept of the model yh = sigmoid(np.dot(x, self.param)) # predict output probability return np.array(yh) # function for calculating classification accuracy def score(self, x, y): """Calculates classification accuracy Parameters ---------- x : array-like The input matrix with rows as observations and columns as features y : array-like The true labels for observations Returns ------- accuracy : float The classification accuracy """ y_pred = self.predict(x) # predicted labels accuracy = 1 - np.mean(abs(y - y_pred)) # classification accuracy return accuracy # + [markdown] id="cSDFB6-wiuG6" # ## 2. Use Case Application # + [markdown] id="yBvVHxlcJydX" # ### 2.1 Import Data # + id="g-3xVkZdKWxt" url = 'https://raw.githubusercontent.com/alicekejialiu/datasets/main/Kickstarter.csv' df = pd.read_csv(url,index_col=0,encoding = 'unicode_escape') # + colab={"base_uri": "https://localhost:8080/", "height": 987} id="R_dMnfKEO2yy" outputId="8eb1d7b8-c224-47d5-b4b9-21d69eedf15e" df # + [markdown] id="mkepaX01OP80" # ### 2.2 Set predictors and outcome variable # + colab={"base_uri": "https://localhost:8080/"} id="0VPpsh3gaED-" outputId="989f002f-59a6-4a4e-c3c2-b2056593c9b6" df.groupby('state')['state'].count() # + id="GzTGLiNPafRL" df=df[~df['state'].isin(['canceled','live','suspended'])] # + id="l4UubhG0Kc_v" # df['state'] = np.where(df['state'] == 'successful', 1, df['state']) # df['state'] = np.where(df['state'] == 'failed', 0, df['state']) df['state'] = np.where(df['state'] == 'successful', 1, 0) # + colab={"base_uri": "https://localhost:8080/"} id="wkgO0GgdiuHA" outputId="3ac5485b-d1f4-451f-ae24-82d2c56bc8aa" df['state'].value_counts() # + id="VU2untgqLQWM" df=df.rename(columns = {'state':'success'}) # + id="7vf8BrH7Mc4t" df = df[['success','goal','name_len_clean','create_to_launch_days','launch_to_deadline_days']] # + 
id="QqtefyxEQ0cR" df.replace([np.inf, -np.inf], np.nan, inplace=True) df.dropna(inplace=True) # + colab={"base_uri": "https://localhost:8080/"} id="gGJ_Wt_WbAqr" outputId="c592fdaa-055e-44a1-ab95-d593ef573aef" df.info() # + colab={"base_uri": "https://localhost:8080/", "height": 455} id="4n5xS8qUiuHD" outputId="1c6e9f01-8abd-4323-a979-13fb0d67534e" df # + [markdown] id="JPySr7KwMpbM" # ### 2.3 Data Pre-processing # + id="3KW6kzKXOt8s" X = df.iloc[:,1:] y = df['success'] # + id="rDA6Sx1ZNjFe" scaler = StandardScaler() X_std = scaler.fit_transform(X) # + id="K0ljcrYlNjY8" X_train, X_test, y_train, y_test = train_test_split(X_std, np.array(y), test_size=0.33, random_state=42) # + [markdown] id="nLGRHPwrPZC5" # ### 2.4 Applying Logistic Regression with sklearn # + colab={"base_uri": "https://localhost:8080/"} id="epRVg1ZvPWXm" outputId="9eec8095-c307-4bd1-8528-82f296ced504" lr = LogisticRegression() model = lr.fit(X_train, y_train) scikit_learn = [] print('The results for logistic regression with sklearn: ') print('Intercept b0 =', round(model.intercept_[0],4)) scikit_learn.append(round(model.intercept_[0],4)) for i in range(len(model.coef_[0])): print('b'+str(i+1)+' =', round(model.coef_[0][i],4)) scikit_learn.append(round(model.coef_[0][i],4)) # + colab={"base_uri": "https://localhost:8080/"} id="aVCBrnMvcVRp" outputId="272fc89d-1ce3-43c2-97d2-13d1d71fc454" y_pred = model.predict(X_test) print('The accuracy score of sklearn logistic regression model: ', round(accuracy_score(y_test,y_pred),3)) scikit_learn.append(round(accuracy_score(y_test,y_pred),3)) # + colab={"base_uri": "https://localhost:8080/"} id="UjmPCUIgiuHF" outputId="feab2459-39a4-473d-bc10-6a29f303109c" from sklearn.metrics import confusion_matrix print('Confusion matrix:') print(confusion_matrix(y_test,y_pred)) print('F1 score:') print(f1_score(y_test,y_pred)) scikit_learn.append(f1_score(y_test,y_pred)) # + [markdown] id="qdYbnSQ0ufFM" # ### 2.5 Applying Home-Made Logistic Regression # + [markdown] 
id="K3Hks-w0iuHG" # #### 2.5.1 Batch (Ordinary) Gradient Descent # + colab={"base_uri": "https://localhost:8080/"} id="vWupZG2KiuHG" outputId="9cba96dc-97a5-40b6-bb11-e283b296940b" logit1 = Logistic_Regression(order=1, learning_rate=1e-4, decay_rate=0, epsilon=1e-6, epochs= 5000, batch_size=len(X_train), momentum=0, threshold=0.5, verbose=True) model1 = logit1.fit(X_train, y_train) ordinary_gradient_descent = [] print('The results for self-implemented logistic regression with ordinary gradient descent algorithm: ') print('Intercept b0 =', round(model1.intercept_[0],4)) ordinary_gradient_descent.append(round(model1.intercept_[0],4)) for i in range(len(model1.coef_[0])): print('b'+str(i+1)+' =', round(model1.coef_[0][i],4)) ordinary_gradient_descent.append(round(model1.coef_[0][i],4)) # + colab={"base_uri": "https://localhost:8080/"} id="4CHD8vciiuHG" outputId="0582d893-5594-4b0d-e69e-3fe1145f384e" y_pred1 = model1.predict(X_test) print('The accuracy score of self-implemented logistic regression with ordinary gradient descent algorithm: ', round(accuracy_score(y_test,y_pred1),3)) ordinary_gradient_descent.append(round(accuracy_score(y_test,y_pred1),3)) # + colab={"base_uri": "https://localhost:8080/"} id="u4cFpdW-iuHH" outputId="a438b712-a610-47f7-f598-cdbe9091b971" from sklearn.metrics import confusion_matrix print('Confusion matrix:') print(confusion_matrix(y_test,y_pred1)) print('F1 score:') print(f1_score(y_test,y_pred1)) ordinary_gradient_descent.append(f1_score(y_test,y_pred1)) # + [markdown] id="GpDlaaNQiuHH" # #### 2.5.2 Stochastic Gradient Descent # + colab={"base_uri": "https://localhost:8080/"} id="oKDEcJsWiuHH" outputId="626106c9-ea08-461c-b031-465792b8662e" logit2 = Logistic_Regression(order=1, learning_rate=1e-3, decay_rate=0, epsilon=1e-6, epochs = 32, batch_size=1, momentum=0.2, threshold=0.5, verbose=True) model2 = logit2.fit(X_train, y_train) stochastic_gradient_descent = [] print('The results for self-implemented logistic regression with stochastic 
gradient descent algorithm: ') print('Intercept b0 =', round(model2.intercept_[0],4)) stochastic_gradient_descent.append(round(model2.intercept_[0],4)) for i in range(len(model2.coef_[0])): print('b'+str(i+1)+' =', round(model2.coef_[0][i],4)) stochastic_gradient_descent.append(round(model2.coef_[0][i],4)) # + colab={"base_uri": "https://localhost:8080/"} id="LxuIu6griuHH" outputId="735b4f96-8c2c-4599-deba-9b78c2ddf6c5" y_pred2 = model2.predict(X_test) print('The accuracy score of self-implemented logistic regression with stochastic gradient descent algorithm: ', round(accuracy_score(y_test, y_pred2), 3)) stochastic_gradient_descent.append(round(accuracy_score(y_test, y_pred2), 3)) # + colab={"base_uri": "https://localhost:8080/"} id="G_oc64c9iuHI" outputId="24a3b06b-7496-470b-fbb6-b343b1eebd4d" from sklearn.metrics import confusion_matrix print('Confusion matrix:') print(confusion_matrix(y_test, y_pred2)) print('F1 score:') print(f1_score(y_test,y_pred2)) stochastic_gradient_descent.append(f1_score(y_test, y_pred2)) # + [markdown] id="SceEto4fiuHI" # #### 2.5.3 Mini-Batch Gradient Descent # + colab={"base_uri": "https://localhost:8080/"} id="y2v5McOwiuHI" outputId="f0237464-889d-4810-dad8-96921cafcd39" logit3 = Logistic_Regression(order=1, learning_rate=1e-4, decay_rate=0, epsilon=1e-6, epochs=5000, batch_size=64, momentum=0, threshold=0.5, verbose=True) model3 = logit3.fit(X_train, y_train) mini_batch_gradient_descent = [] print('The results for self-implemented logistic regression with mini-batch gradient descent algorithm: ') print('Intercept b0 =', round(model3.intercept_[0],4)) mini_batch_gradient_descent.append(round(model3.intercept_[0],4)) for i in range(len(model3.coef_[0])): print('b'+str(i+1)+' =', round(model3.coef_[0][i],4)) mini_batch_gradient_descent.append(round(model3.coef_[0][i],4)) # + colab={"base_uri": "https://localhost:8080/"} id="T1GxwH2GiuHI" outputId="79b2e48a-2004-49fe-b062-a77d793e64eb" y_pred3 = model3.predict(X_test) print('The 
accuracy score of self-implemented logistic regression with mini-batch gradient descent algorithm: ', round(accuracy_score(y_test,y_pred3),3)) mini_batch_gradient_descent.append(round(accuracy_score(y_test,y_pred3),3)) # + colab={"base_uri": "https://localhost:8080/"} id="WfEurEZJiuHI" outputId="03cbf773-3b8c-4ca0-beb7-d3d0aca1a338" from sklearn.metrics import confusion_matrix print('Confusion matrix:') print(confusion_matrix(y_test,y_pred3)) print('F1 score:') print(f1_score(y_test,y_pred3)) mini_batch_gradient_descent.append(f1_score(y_test,y_pred3)) # + [markdown] id="NWjFM4SgiuHJ" # #### 2.5.4 Newton Method # + colab={"base_uri": "https://localhost:8080/"} id="cwz7ic94iuHJ" outputId="94ef1718-5522-43bf-ea3f-90a0408b2f75" logit4 = Logistic_Regression(order=2, epsilon=1e-6, epochs= 5000, threshold=0.5, verbose=True) model4 = logit4.fit(X_train, y_train) newton_method = [] print('The results for self-implemented logistic regression with Newton method: ') print('Intercept b0 =', round(model4.intercept_[0],4)) newton_method.append(round(model4.intercept_[0],4)) for i in range(len(model4.coef_[0])): print('b'+str(i+1)+' =', round(model4.coef_[0][i],4)) newton_method.append(round(model4.coef_[0][i],4)) # + colab={"base_uri": "https://localhost:8080/"} id="q7gIcvfviuHJ" outputId="834f95b1-2b81-4bc4-c921-2cdcf71720f0" y_pred4 = model4.predict(X_test) print('The accuracy score of self-implemented logistic regression with Newton method: ', round(accuracy_score(y_test,y_pred4),3)) newton_method.append(round(accuracy_score(y_test,y_pred4),3)) # + colab={"base_uri": "https://localhost:8080/"} id="ea0cf1cniuHJ" outputId="508769cb-c40f-43d8-efd0-6aa39444e412" from sklearn.metrics import confusion_matrix print('Confusion matrix:') print(confusion_matrix(y_test,y_pred4)) print('F1 score:') print(f1_score(y_test,y_pred4)) newton_method.append(f1_score(y_test,y_pred4)) # + [markdown] id="2LdYVNW39hMU" # ### 2.6 Overall Test Result # + colab={"base_uri": "https://localhost:8080/", 
"height": 206} id="h_nL7sAdiuHJ" outputId="9a15558a-1976-4188-e5b7-d92de68a589d" df_result = pd.DataFrame([scikit_learn,ordinary_gradient_descent,stochastic_gradient_descent,mini_batch_gradient_descent,newton_method], index=['Scikit Learn','Ordinary Gradient Descent','Stochastic Gradient Descent','Mini-batch Gradient Descent',"Newton's Method"], columns = ['b0','b1','b2','b3','b4','Accuracy Score','F1 Score']) df_result
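The `cost_fn` above adds `1e-6` inside each logarithm to dodge `log(0)`. A common, more robust alternative — sketched here as a hypothetical drop-in, not part of the original notebook — computes the log-loss through `np.logaddexp`, using the identities `log(sigmoid(z)) = -logaddexp(0, -z)` and `log(1 - sigmoid(z)) = -logaddexp(0, z)`:

```python
import numpy as np

def cost_fn_stable(x, y, b):
    """Numerically stable logistic-regression cost (negative log-likelihood).

    No epsilon fudge term is needed: logaddexp never takes log of zero
    and does not overflow for large |z|.
    """
    z = np.dot(x, b)
    return np.sum(y * np.logaddexp(0, -z) + (1 - y) * np.logaddexp(0, z))

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = (rng.random(100) < 0.5).astype(float)

# at b = 0 every predicted probability is 0.5, so the cost is 100*log(2)
print(cost_fn_stable(x, y, np.zeros(3)))

# extreme parameters stay finite, where the naive log(sig + 1e-6) form degrades
print(cost_fn_stable(x, y, np.array([1e4, -1e4, 1e4])))
```

Because only the cost changes, the gradient and Hessian functions above can be reused unchanged.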
Logistic_Regression_From_Scratch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- library(devtools) devtools::install_local('/home/jovyan/gganimate-0.1.1.tar.gz') devtools::install_local('/home/jovyan/ggraph-d15fd149babe9ad32316474b9a178e019f376ba6.zip') library(gganimate) library(ggraph) library(igraph) library(RColorBrewer) # + # Originally for data from http://konect.uni-koblenz.de/networks/sociopatterns-infectious # as part of a museum exhibit on spread of infections http://www.sociopatterns.org/deployments/infectious-sociopatterns/ # and the paper http://www.sociopatterns.org/publications/whats-in-a-crowd-analysis-of-face-to-face-behavioral-networks/ # Code adapted from https://gist.github.com/thomasp85/eee48b065ff454e390e1 # and from https://gist.github.com/jalapic/612036977d9f9c773107681bc4a46d58 infect <- read.table('/home/jovyan/networkDynamicsLabels.txt', skip = 0, sep = ' ', stringsAsFactors = FALSE) infect$V3 <- NULL names(infect) <- c('from', 'to', 'time') infect$timebins <- as.numeric(cut(infect$time, breaks = 150)) # lower means more bursty, i.e. 
breaks = 10 # We want that nice fading effect so we need to add extra data for the trailing infectAnim <- lapply(1:10, function(i) {infect$timebins <- infect$timebins + i; infect$delay <- i; infect}) infect$delay <- 0 infectAnim <- rbind(infect, do.call(rbind, infectAnim)) infectGraph <- graph_from_data_frame(infectAnim, directed = F) # We use only original data for the layout subGr <- subgraph.edges(infectGraph, which(E(infectGraph)$delay == 0)) V(subGr)$degree <- degree(subGr) V(subGr)$group <- cluster_louvain(subGr)$membership lay <- createLayout(subGr, 'igraph', algorithm = 'fr') # change spatial layout of network # Then we reassign the full graph with edge trails attr(lay, 'graph') <- infectGraph # Now we create the graph with timebins as frame p <- ggraph(data = lay, layout = 'fr', aes(frame = timebins)) + geom_node_point(size = .025, col = "white") + # change size & color of inactive nodes geom_node_point(aes(alpha=0.6), size = .025, colour = factor(lay$group), show.legend = FALSE) + # change size & color of active nodes # geom_edge_link0(aes(frame = timebins, alpha = delay, width = delay), edge_colour = '#dccf9f') + geom_edge_link0(aes(frame = timebins, alpha = delay, width = delay, colour = factor(node1.group)), data = gEdges(nodePar = 'group'), show.legend = FALSE) + # geom_edge_link0(aes(frame = timebins, alpha = delay, width = delay, colour = node1.degree), data = gEdges(nodePar = 'degree'), show.legend = FALSE) + scale_edge_alpha(range = c(1, 0), guide = 'none') + scale_edge_width(range = c(0.25, 0.75), trans = 'exp', guide = 'none') + # change edge width scale_size(guide = 'none') + expand_limits(x = c(min(lay$x), max(lay$x)), y = c(min(lay$y), max(lay$y))) + ggforce::theme_no_axes() + theme(plot.background = element_rect(fill = '#000000'), # change background color panel.background = element_blank(), panel.border = element_blank(), plot.title = element_text(color = '#cecece')) infect # + # Note if the animation surpasses the memory usage of the 
notebook, it will crash. # reduce memory by changing image size, image resolution, making network smaller, # changing interval, and/or changing breaks # And then we animate animation::ani.options(interval=0.01) # change speed of frame transitions gganim <- gganimate(p, '/home/jovyan/sparkingCuriosity.gif', title_frame = FALSE, ani.width = 400, ani.height = 400, res=100) # change image size and resolution
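The R code above drives the animation by binning continuous contact times into frames (`cut(infect$time, breaks = 150)`) and then replicating each edge into later frames with a `delay` column to get the fading-trail effect. The same preprocessing can be sketched in Python with pandas (toy data, not the sociopatterns dataset):

```python
import numpy as np
import pandas as pd

# toy contact list: (from, to, time), like the sociopatterns edge data
rng = np.random.default_rng(1)
edges = pd.DataFrame({
    'from': rng.integers(0, 20, 500),
    'to': rng.integers(0, 20, 500),
    'time': rng.uniform(0, 8 * 3600, 500),  # seconds over an 8-hour day
})

# bin timestamps into animation frames; fewer bins -> burstier frames
edges['timebin'] = pd.cut(edges['time'], bins=150, labels=False)

# fading-trail trick: copy each edge into the next 10 frames with an
# increasing 'delay', later mapped to alpha/width in the plot
trail = pd.concat(
    [edges.assign(timebin=edges['timebin'] + d, delay=d) for d in range(0, 11)],
    ignore_index=True,
)
print(trail['delay'].value_counts().sort_index())
```

The grouped `trail` frame plays the role of `infectAnim` in the R code; rendering it would then fall to a plotting library of choice.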
animate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: geostats_env # language: python # name: geostats_env # --- # # Automatic Data Downloads # Satellite images and outputs from global earth systems models can be very large files. If we're dealing with time series, large spatial areas, or multivariate model outputs, we can quickly be moving into data volumes that exceed the memory and storage capacity of personal computers. To access these types of global data, we are interfacing with online databases. Today's lesson is intended to give you the tools to programmatically access online databases. These tools will enable you to use your personal computer to convert these large datasets into analysis-ready data for your research project. Specifically, today we'll learn to: # # 1. Interpret directory structure of ftp and http addresses. # 2. Create a project directory on your local machine. # 3. Configure a .gitignore file to ignore raw data. # 4. Use the command line to download files from the internet. # # If there's time, we'll break into groups based on research interest and start utilizing APIs to search datasets on public geospatial data repositories that match the location and time period of your study area. import pandas as pd from IPython.display import HTML import os import urllib.request import ssl if (not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None)): ssl._create_default_https_context = ssl._create_unverified_context # ## G is for *Generalizable* # When we're making measurements of an earth system process, we often care deeply about how well our experimental results apply to other times/places.
Since it is often too expensive or too difficult to collect in-situ samples of our earth systems process at all the times and locations that matter, environmental data science allows us to use statistical models to leverage globally available observations to improve the generalizability of our system. These models will generalize our inferences about our earth systems process in one of three ways: # # 1. *Prediction*: can our model allow us to generalize our observations to out-of-sample times and locations? For example: will my model linking air temperature to green-up time from my experimental forest accurately apply to a forest 200 miles away? # 2. *Interpolation*: can our model allow us to "fill in the gaps" in our spatial/temporal sampling scheme? For example: do my measurements of precipitation for my two precipitation gage locations accurately represent the total precipitation that fell in my watershed? # 3. *Diagnosis*: can our model help us to interpret what processes are either drivers of or covariates with our earth systems process, allowing us to improve our physical understanding of trends and variability in that system: for example: is air temperature or precipitation a more important driver of current cropping system productivity, and how might this impact cropping system function under climate change? # # ### These global observations are often publicly available to researchers on online geodatabases.
# For example: # - NASA: https://earthdata.nasa.gov/ # - USGS: https://earthexplorer.usgs.gov/ # - NOAA: https://psl.noaa.gov/data/gridded/ # - Google: https://developers.google.com/earth-engine/datasets # - NY State: https://cugir.library.cornell.edu/ # # ## R is for *Reproducible* # Since the raw data for our generalizable analysis is globally available, programmatically accessing our data gives us an important added benefit: we can design our version controlled, collaborative project repositories so they directly interface with these public geodatabases. That way, anyone who wants to can access the raw data required to reproduce our analytic workflow. # # A reminder on why reproducible science is so important: HTML('<iframe width="930" height="523" src="https://www.youtube.com/embed/NGFO0kdbZmk" frameborder="0" allowfullscreen></iframe>') # ### Project Repository # Your project repository is where you store all of the elements of your data science workflow. At its core, it should have folders for raw data, processed data, code, outputs, and images. A good project repository is: # # 1. Human readable: use directory names that are easy to understand, and include a highly detailed README file that explains what's in each folder, how to sequence inputs and outputs to code files, and how to cite the repository. # 2. Machine readable: avoid funky characters OR SPACES. # 3. Supportive of sorting: if you have a list of input files, it’s nice to be able to sort them to quickly see what’s there and find what you need. # # You should also take extra steps to preserve raw data so it’s not modified. More on this later. # # We're going to create a new repository for your class project. The os package (os stands for **O**perating **S**ystem) allows you to manipulate files on your computer.
Ask it what it does: # ?os #For example, this command is the equivalent of ls in terminal: os.getcwd() # + #this command is the equivalent of: # mkdir H:/EnvDatSci/project #os.mkdir('H:\\EnvDatSci\\project') #this command is the equivalent of: # # cd H:/EnvDatSci/project os.chdir('H:\\EnvDatSci\\project') # - # ### TASK 1: enter a command in the below cell to check and make sure you're in your project directory: #Task 1: os.getcwd() # ### TASK 2: populate your project directory with appropriate files # Read Chapter 4.1 of the textbook: https://www.earthdatascience.org/courses/earth-analytics/document-your-science/file-organization-101/ # # Using os commands, populate your project directory with subfolders. # # Print your directory to the screen (hint: see Task 1) #Task 2: os.mkdir("data_raw") os.mkdir("data_analysisReady") os.mkdir("code") os.mkdir("figures") os.listdir() # ### TASK 3: change the current working directory to the folder where you intend to store raw data. #Task 3: os.chdir("./data_raw") # ## Decoding the file structure of online geodatabases # Just like we can use code to find and access files on our local machine, we can use code to find and access files on public geodatabases. Since these geodatabases are version controlled, providing code that links to the online files helps prevent us from making redundant copies of data on the internet. Programmatically accessing public geodatabases requires that we understand how the database itself has been organized. # # - Click on the following link to the National Oceanic and Atmospheric Administration database website: https://psl.noaa.gov/data/gridded/ # # - Navigate to the "NCEP/NCAR Reanalysis dataset" # - Of the seven sections they've divided data into, click on "Surface" # - Under "Air Temperature: Daily", click "See list" # - Under "Surface", click "See list" # # ### TASK 4: Right click on the first link in the list, and select "copy link".
Paste that link address below: # https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.1948.nc # ##### Task 4: double click on this markdown cell to add text # ### Tasking your computer to download files # Our goal is to write a script that can download files, extract a relevant subset of information from the files, and then delete the files. The first part of this task is to learn the filenames that we want to download. # # In the link above, we can break the filepath down into substrings, using basic text commands: http_dir = "https://downloads.psl.noaa.gov/Datasets/" dataset = "ncep.reanalysis.dailyavgs" lev_type = "surface" variable = "air.sig995." time = "2010" file_type = ".nc" filepaths= http_dir + dataset + "/" + lev_type + "/" + variable + time + file_type print(filepaths) # What happens if you click on that link? You can also have Python download the file for you using the <urllib.request.urlretrieve> function: #what does this function do and how do we use it? # ?urllib.request.urlretrieve url = filepaths filename = variable + time + file_type urllib.request.urlretrieve(url, filename) print(url, filename) #what happens? os.listdir() # We can infer patterns from the database itself and generate the names of multiple files. For example, if we need five years of daily air temperature data: time =pd.Series(list(range(1965,1970))) time = time.apply(str) filepaths= http_dir + dataset + "/" + lev_type + "/" + variable + time + file_type print(filepaths) # ### TASK 5: Write a "for" loop that downloads all five years worth of air temperature data into your working directory. Print the contents of your directory to the screen. #Task 5 for i in range(len(filepaths)): filename = variable + time[i] + file_type url= filepaths[i] urllib.request.urlretrieve(url, filename) os.listdir()
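For longer download lists, it can help to wrap the download loop in a small helper that skips files already on disk and reports failures instead of stopping mid-loop. This is a sketch built on the same `urllib.request.urlretrieve` call used above; the helper names (`url_to_filename`, `download_files`) and the skip/error-handling behavior are additions for illustration, not part of the assignment:

```python
import os
import urllib.request

def url_to_filename(url):
    """Derive a local filename from the last path component of a URL."""
    return url.rsplit("/", 1)[-1]

def download_files(urls, out_dir="."):
    """Download each URL, skipping files that already exist locally and
    reporting (rather than crashing on) individual download errors."""
    for url in urls:
        target = os.path.join(out_dir, url_to_filename(url))
        if os.path.exists(target):
            print(f"skipping {target} (already downloaded)")
            continue
        try:
            urllib.request.urlretrieve(url, target)
            print(f"downloaded {target}")
        except OSError as err:  # urllib.error.URLError is a subclass of OSError
            print(f"failed on {url}: {err}")
```

Rerunning a cell that calls `download_files` then costs nothing once the files are present, which is convenient when a notebook is restarted.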
CodeSprints/Automatic Data Downloads_Solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- def show2imgs(im1, im2, title1='First image', title2='Second image', size=(10,10)): import matplotlib.pyplot as plt f, (ax1, ax2) = plt.subplots(1,2, figsize=size) ax1.imshow(im1, cmap='gray') ax1.axis('off') ax1.set_title(title1) ax2.imshow(im2, cmap='gray') ax2.axis('off') ax2.set_title(title2) plt.show() # ### Edge detection :: Canny # # A refinement of the Laplacian filter, developed in 1986 by J. Canny. In this algorithm the derivatives are computed in the $x$ and $y$ directions, and from their combinations four directional derivatives (*gradients*) are determined. Points where these derivatives reach their maximal values are potential edge elements. The most important part of the algorithm is the phase in which individual pixels flagged as edge elements are linked into contours. # # Contours are formed in a process known as *hysteresis thresholding*. The recommended ratios between the two thresholds are $2:1$ and $3:1$.
# # Computing the gradients: # # * $L2gradient = True$, the more accurate variant # $$|grad(x,y)|_{L2} = \sqrt{\frac{dI}{dx}^2 + \frac{dI}{dy}^2}$$ # # * $L2gradient = False$, the simplified variant # $$|grad(x,y)|_{L1} = \left|\frac{dI}{dx}\right| + \left|\frac{dI}{dy}\right|$$ # + import cv2 from skimage import data import numpy as np im = data.coins() th = 120 th, bim = cv2.threshold(im, thresh=th, maxval=255, type=cv2.THRESH_BINARY) print(th) element = np.ones((3,3),np.uint8) mbim = cv2.morphologyEx(bim, op=cv2.MORPH_CLOSE, kernel=element, iterations=3) cim = cv2.Canny(mbim, 200, 250, apertureSize = 5, L2gradient = True) show2imgs(im, cim, title1='Original image', title2='Image after edge detection', size=(20,20)) # + from skimage import io, color, img_as_ubyte import warnings warnings.filterwarnings('ignore') url = 'http://www.lenna.org/lena_std.tif' lena = io.imread(url) lena = color.rgb2gray(lena) lena = img_as_ubyte(lena) th = 150 th, blena = cv2.threshold(lena, thresh=th, maxval=255, type=cv2.THRESH_OTSU) clena = cv2.Canny(blena, threshold1=200, threshold2=250, apertureSize = 5, L2gradient = False) show2imgs(lena, clena, title1='Original image', title2='Image after edge detection', size=(20,20)) # + from skimage import io, color, img_as_ubyte, util import warnings warnings.filterwarnings('ignore') url = 'images/pattern1.png' p = io.imread(url) p = color.rgb2gray(p) p = img_as_ubyte(p) p = util.invert(p) th = 150 th, bp = cv2.threshold(p, thresh=th, maxval=255, type=cv2.THRESH_OTSU) print(th) cp = cv2.Canny(bp, threshold1=200, threshold2=250, apertureSize = 5, L2gradient = False) show2imgs(p, cp, title1='Original image', title2='Image after edge detection', size=(25,25)) # - # ### Distance transform # # The distance transform (ang. *distance transform*) is a process in which each pixel is set to a value equal to its distance to the nearest zero pixel in the input image.
A distance metric has to be chosen. A typical input image for this transform is an edge image. # # + icp = util.invert(cp) dt = cv2.distanceTransform(bp, distanceType=cv2.DIST_L2, maskSize=cv2.DIST_MASK_PRECISE) show2imgs(bp, dt, title1='Original image', title2='Image after distance transform', size=(25,25)) # + icim = util.invert(cim) dt = cv2.distanceTransform(bim, distanceType=cv2.DIST_L2, maskSize=cv2.DIST_MASK_PRECISE) show2imgs(bim, dt, title1='Original image', title2='Image after distance transform', size=(25,25)) # + from skimage import segmentation element = np.ones((3,3),np.uint8) mbim2 = cv2.erode(bim, kernel=element, iterations=0) show2imgs(bim, mbim2, title1='Original image', title2='Image after transformation', size=(10,10)) im_border = segmentation.clear_border(mbim2, buffer_size=1) show2imgs(bim, im_border, title1='Original image', title2='Image after transformation', size=(10,10)) element = np.ones((3,3),np.uint8) mbim2 = cv2.morphologyEx(im_border, op=cv2.MORPH_CLOSE, kernel=element, iterations=3) show2imgs(bim, im_border, title1='Original image', title2='Image after transformation', size=(10,10)) dt = cv2.distanceTransform(mbim2, distanceType=cv2.DIST_L2, maskSize=cv2.DIST_MASK_PRECISE) show2imgs(bim, dt, title1='Original image', title2='Image after transformation', size=(10,10)) # - print(dt.min(), dt.max())
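To make the definition concrete, here is a brute-force pure-Python illustration of what `cv2.distanceTransform` computes (far less efficiently than OpenCV's implementation; the helper name and the toy image are invented for this sketch):

```python
import math

def distance_transform(img):
    """Brute-force L2 distance transform: each nonzero cell receives the
    Euclidean distance to the nearest zero cell; zero cells stay 0."""
    rows, cols = len(img), len(img[0])
    zeros = [(r, c) for r in range(rows) for c in range(cols) if img[r][c] == 0]
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 0:
                out[r][c] = min(math.hypot(r - zr, c - zc) for zr, zc in zeros)
    return out

# A 3x3 blob of ones surrounded by zeros: the centre is deepest inside.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
dt = distance_transform(img)
```

The centre pixel gets the largest value, which is exactly why distance-transform peaks are used as seeds when segmenting touching objects.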
Lab6/01_segmentacja.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="O4fyFskoK4Ub" # # Homework 9 Backpropagation and introduction to Pytorch # Due on April 21st # + [markdown] id="I9xF7q68KE7j" # ## Problem 1: Backprop in a simple MLP # Here, we ask you to derive all the steps of the backpropagation algorithm for a simple classification network. Consider a fully-connected neural network, also known as a multi-layer perceptron (MLP), with a single hidden layer and a one-node output layer. The hidden and output nodes use an elementwise sigmoid activation function and the loss layer uses cross-entropy loss: # <p> # $f(z)=\frac{1}{1+exp(-z)}$ # <br> # $L(\hat{y},y)=-yln(\hat{y}) - (1-y)ln(1-\hat{y})$ # </p> # <p> # The computation graph for an example network is shown below. Note that it has an equal number of nodes in the input and hidden layer (3 each), but, in general, they need not be equal. Also, to make the application of backprop easier, we show the <i>computation graph</i> which shows the dot product and activation functions as their own nodes, rather than the usual graph showing a single node for both. # # </p> # # <img src="https://raw.githubusercontent.com/ruizhaoz/EC414/master/mlpgraph.png" style="height:200px;"> # # The backpropagation algorithm for an MLP is displayed below. For simplicity, we will assume no regularization on the weights, so you can ignore the terms involving $\Omega$. The forward step is: # # <img src="https://raw.githubusercontent.com/ruizhaoz/EC414/master/forward.png" style="width:200px;"> # # and the backward step is: # # <img src="https://raw.githubusercontent.com/ruizhaoz/EC414/master/backward.png" style="width:200px;"> # + [markdown] id="VENM1O51Lm3u" # Write down each step of the backward pass explicitly for all layers, i.e.
for the loss and $k=2,1$, compute all gradients above, expressing them as a function of variables $x, y, h, W, b$. <i>Hint: you should substitute the updated values for the gradient $g$ in each step and simplify as much as possible.</i> Specifically, compute the following (we have replaced the superscript notation $u^{(i)}$ with $u^i$): # # **Q1.1**: $\nabla_{\hat{y}}L(\hat{y},y)$ # + [markdown] id="BbpAW-DlLoPp" # **Solution:** # - $L(\hat{y},y) = -y\ln(\hat{y}) - (1-y)\ln(1-\hat{y})$ # - $\nabla_{\hat{y}}L = -\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}$ # + [markdown] id="j1fJZBj3dcYG" # **Q1.2**: $\nabla_{a^{(2)}}J$ # + [markdown] id="jXfklfb7diX1" # **Solution:** # # - $\nabla_{a^{(2)}}J = \nabla_{\hat{y}}L \odot \sigma'(a^{(2)}) = \left(-\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}\right)\hat{y}(1-\hat{y}) = \hat{y} - y$ # + [markdown] id="xYh1E8CTiN2C" # **Q1.3**: $\nabla_{b^{(2)}}J$ # + [markdown] id="o7pIlEcqiSjP" # **Solution:** # # - $\nabla_{b^{(2)}}J = g = \hat{y} - y$ # + [markdown] id="Gp6SkxGwio3n" # **Q1.4**: $\nabla_{W^{(2)}}J$ # + [markdown] id="zYEhB8SYix5-" # **Solution:** # # - $\nabla_{W^{(2)}}J = g\,(h^{(1)})^T = (\hat{y} - y)(h^{(1)})^T$ # # + [markdown] id="EGdFV93qkAni" # **Q1.5**: $\nabla_{h^{(1)}}J$ # + [markdown] id="IWCCJapRkTZk" # **Solution:** # # - $\nabla_{h^{(1)}}J = (W^{(2)})^T g = (\hat{y} - y)(W^{(2)})^T$ # + [markdown] id="IUcN5zqukgps" # **Q1.6**: $\nabla_{b^{(1)}}J$, $\nabla_{W^{(1)}}J$ # + [markdown] id="0q0REkiokwHg" # **Solution:** # # - First update $g \leftarrow \nabla_{h^{(1)}}J \odot \sigma'(a^{(1)}) = (\hat{y} - y)(W^{(2)})^T \odot h^{(1)} \odot (1 - h^{(1)})$ # - $\nabla_{b^{(1)}}J = g$ # - $\nabla_{W^{(1)}}J = g\,x^T$ # + [markdown] id="aQppboJql-s4" # **Q1.7** Briefly, explain how the computational speed of backpropagation would be affected if it did not save results in the forward pass.
# + [markdown] id="LD_17D3QmAqK" # **Solution:** # It would be roughly twice as slow, because the forward propagation step would need to be recomputed in order to do the backward pass. # + [markdown] id="MXTrAN89nkJd" # # Problem 2: Pytorch Intro # ## **Q2.0**: Pytorch tutorials # This homework will introduce you to [PyTorch](https://pytorch.org), currently the fastest growing deep learning library, and the one we will use in this course. # # Before starting the homework, please go over these introductory tutorials on the PyTorch webpage: # # * [60-minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) # + id="NrFPTKJYdaeY" import torch # + [markdown] id="i0Z5zMSRqPEG" # The `torch.Tensor` class is the basic building block in PyTorch and is used to hold data and parameters. The `autograd` package provides automatic differentiation for all operations on Tensors. After reading about Autograd in the tutorials above, we will implement a few simple examples of what Autograd can do. # + [markdown] id="MTYYQoSlqbb0" # ## **Q2.1**. Simple function # Use `autograd` to do backpropagation on the simple function, $f=(x+y)*z$. # # **Q2.1.1** Create the three input tensors with values $x=-2$, $y=5$ and $z=-4$ as tensors and set `requires_grad=True` to track computation on them. # # # + colab={"base_uri": "https://localhost:8080/"} id="bhABwVlxqNJp" outputId="c18cfc47-1db6-4135-be1e-6102d51f90ef" # solution here x = torch.tensor(-2.0,requires_grad=True) y = torch.tensor(5.0,requires_grad=True) z = torch.tensor(-4.0,requires_grad=True) print(x, y, z) # + [markdown] id="QVIMV4oUqoDD" # **Q2.1.2** Compute the $q=x+y$ and $f=q \times z$ functions, creating tensors for them in the process. Print out $q,f$, then run `f.backward(retain_graph=True)`, to compute the gradients w.r.t. $x,y,z$. The `retain_graph` attribute tells autograd to keep the computation graph around after the backward pass as opposed to deleting it (freeing some memory). Print the gradients. Note that the gradient for $q$ will be `None` since it is an intermediate node, even though `requires_grad` for it is automatically set to `True`.
To access gradients for intermediate nodes in PyTorch you can use hooks as mentioned in [this answer](https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/2). Compute the values by hand (or check the slides) to verify your solution. # + colab={"base_uri": "https://localhost:8080/"} id="PTON_Ch6q0zX" outputId="7e026092-bee6-431d-9ead-1fc6c4bf7001" # solution here q = x+y f = q*z print(q, f,"done qf") q.register_hook(print) f.backward(retain_graph=True) # compute the gradient print("starting grads",x.grad, y.grad, z.grad, -4) q.register_hook(print) f.backward(retain_graph=True) f.backward(retain_graph=True) f.backward(retain_graph=True) # compute the gradient print("starting grads",x.grad, y.grad, z.grad, -4) # + [markdown] id="2pfwkA3yq3Mz" # **Q2.1.3** If we now run `backward()` again, it will add the gradients to their previous values. Try it by running the above cell multiple times. This is useful in some cases, but if we just wanted to re-compute the gradients again, we need to zero them first, then run `backward()`. Add this step, then try running the backward function multiple times to make sure the answer is the same each time! # + colab={"base_uri": "https://localhost:8080/"} id="6FKH2O14q8uL" outputId="b8e9e78f-38ee-44fa-d64a-37077564e825" # solution here # zero the gradient x.grad = None y.grad =None z.grad = None q.grad = None f.backward(retain_graph=True) print(x.grad, y.grad, z.grad, q.grad) # compute the gradient again x.grad = None y.grad =None z.grad = None q.grad = None f.backward(retain_graph=True) print(x.grad, y.grad, z.grad, q.grad) # + [markdown] id="nX01mssrrJnk" # ## **Q2.2** Neuron # ### 2.2.1 # Implement the function corresponding to one neuron (logistic regression unit) that we saw in the lecture and compute the gradient w.r.t. $x$ and $w$. The function is $f=\sigma(w^Tx)$ where $\sigma()$ is the sigmoid function. Initialize $x=[-1, -2, 1]$ and the weights to $w=[2, -3, -3]$ where $w_3$ is the bias. 
Print out the gradients. # + colab={"base_uri": "https://localhost:8080/"} id="_qOeEIubrAn1" outputId="1bf9e5c7-12b2-43fa-bdb5-fdc513a3a083" # solution here import numpy as np X = [-1.0,-2.0,1.0] x = torch.tensor(X,requires_grad=True) W = [2.0,-3.0,-3.0] w = torch.tensor(W,requires_grad=True) wtx = torch.matmul(w, x) print (wtx) f = torch.sigmoid(wtx) x.grad = None w.grad =None f.grad = None print("\nx=", x, "\nw=", w, "\nf(x,w)=", f) # compute the gradient calling backward() f.backward(retain_graph=True) print("The gradient of f() w.r.t. x is", x.grad) print("The gradient of f() w.r.t. w is", w.grad) # + [markdown] id="b9Zdn-SmS-dW" # ### 2.2.2 # Derive the gradient $\nabla_x f$ and $\nabla_\omega f$ by hand to verify your results in 2.2.1. (Write out necessary steps i.e. chain rule, final computation results) # + [markdown] id="8mRG8qCDTtZA" # **Solution**: # # - $z = w^T x = 2\cdot(-1) + (-3)\cdot(-2) + (-3)\cdot 1 = 1$, so $f = \sigma(1) \approx 0.731$ # # - $\frac{\partial f}{\partial z} = \sigma(z)(1-\sigma(z)) \approx 0.731 \times 0.269 \approx 0.197$ # # - By the chain rule, $\nabla_x f = \frac{\partial f}{\partial z}\,w \approx 0.197 \times \left( \begin{array}{c} 2.0\\-3.0\\-3.0 \end{array} \right) = \left( \begin{array}{c} 0.393\\-0.590\\-0.590 \end{array} \right)$ # # - $\nabla_w f = \frac{\partial f}{\partial z}\,x \approx 0.197 \times \left( \begin{array}{c} -1.0\\-2.0\\1.0 \end{array} \right) = \left( \begin{array}{c} -0.197\\-0.393\\0.197 \end{array} \right)$ # + [markdown] id="9f6vmhMQr2EK" # ## **Q2.3**.
torch.nn # # Implement the function corresponding to one neuron (logistic regression unit) that we saw in the lecture and compute the gradient w.r.t. $x$ and $w$. The function is $f=\sigma(w^Tx+b)$ where $\sigma()$ is the sigmoid function. Initialize $x=[-1, -2]$, the weights to $w=[2, -3]$ and the bias to $b=-3$. This time we are using the `Linear` class from `torch.nn`, followed by the [Sigmoid](https://pytorch.org/docs/stable/nn.html#torch.nn.Sigmoid) class. # # In general, many useful functions are already implemented for us in this package. Compute the gradients $\partial f/\partial w$ by running `backward()` and print them out (they will be stored in the Linear variable, e.g. in `.weight.grad`.) # + colab={"base_uri": "https://localhost:8080/"} id="HTBMhenyr4uj" outputId="fd2f1b9c-4b2b-42fe-c42c-127608340d50" # solution here import torch.nn as nn x.grad = None w.grad =None w = torch.tensor([[2.], [-3.]]) b = torch.tensor( [[-3.0]],requires_grad=True) # initialize a linear layer linear_f = nn.Linear(2, 1) # initialize weight and bias linear_f.weight.data = torch.transpose(w, 0, 1) linear_f.bias.data = b print("\nweights:", linear_f.weight) print("\nbias:",linear_f.bias) # initialize x and compute f X = [[-1.0,-2.0]] x = torch.tensor([[-1.0,-2.0]],requires_grad=True) print("\nx:",x.shape) forSig = linear_f(x) print("\nlin aprox = ",forSig) m =nn.Sigmoid() f = m(forSig) print("\nf:", f) # do backprop f.backward(retain_graph=True) print("The gradient of f() w.r.t. w is", linear_f.weight.grad,"|b is:" ,linear_f.bias.grad) # + [markdown] id="YmTyL4tAtWZC" # ## **Q2.4** Module # Now let's put these two functions (Linear and Sigmoid) together into a "module". Read the [Neural Networks tutorial](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html) if you have not already. # # **Q2.4.1** Make a subclass of the `Module` class, called `Neuron`. Set variables to the same values as above.
You will need to fill out the `__init__`; the `forward` is already given. # + id="4Oy9NG-gr81q" # solution here import torch.nn as nn class Neuron(nn.Module): def __init__(self): super(Neuron, self).__init__() # an affine operation: y = weight*x + bias, with fixed parameters self.linear = nn.Linear(2, 1) self.linear.weight.data = torch.transpose(torch.tensor([[2.], [-3.]]),0,1); self.linear.bias.data = torch.tensor([[-3.0]]) # a sigmoid function self.sigmoid = nn.Sigmoid() def forward(self, x): x = self.linear(x) x = self.sigmoid(x) return x # + [markdown] id="xM13evB9tj0j" # **Q2.4.2** Now create a variable of your `Neuron` class called `my_neuron` and run backpropagation on it. Print out the gradients again. Make sure you zero out the gradients first, by calling the `.zero_grad()` function of the parent class. Even if you will not re-compute the backprop, it is good practice to do this every time to avoid accumulating gradient! # + colab={"base_uri": "https://localhost:8080/"} id="v-VKjnu7tfgP" outputId="b576db84-167d-4bbf-824b-3448d989b714" # solution here my_neuron = Neuron() print(my_neuron) params = list(my_neuron.parameters()) print("The weights are:", params[0]) # linear layer's .weight # initialize the input x same as in Q2.3 and compute the output of my_neuron x = torch.tensor([[-1.0,-2.0]],requires_grad=True) out = my_neuron.forward(x) print("\nf(x,w)=", out) # zero the gradient of my_neuron and compute the gradient by calling backward. my_neuron.zero_grad() out.backward(retain_graph=True) print("The gradient of f() w.r.t. w is", params[0].grad, params[1].grad) # + [markdown] id="1ywjySyatv0l" # ## **Q2.5**. Loss and SGD # Now, let's train our neuron on some data. The code below creates a toy dataset containing a few inputs $x$ and outputs $y$ (a binary 0/1 label), as well as a function that plots the data and current solution.
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="s7fQKy56txsR" outputId="d338a7ab-b78d-492f-e6cb-6aac083edd17" import matplotlib.pyplot as plt # create some toy 2-D datapoints with binary (0/1) labels x = torch.tensor([[1.2, 1], [0.2, 1.4], [0.5, 0.5], [-1.5, -1.3], [0.2, -1.4], [-0.7, -0.5]]) y = torch.tensor([0, 0, 0, 1, 1, 1 ]) def plot_soln(x, y, params): plt.plot(x[y==1,0], x[y==1,1], 'r+') plt.plot(x[y==0,0], x[y==0,1], 'b.') plt.grid(True) plt.axis([-2, 2, -2, 2]) # NOTE : This may depend on how you implement Neuron. # Change accordingly w0 = params[0][0][0].item() w1 = params[0][0][1].item() bias = params[1][0].item() print("w0 =", w0, "w1 =", w1, "bias =", bias) dbx = torch.tensor([-2, 2]) dby = -(1/w1)*(w0*dbx + bias) # plot the line corresponding to the weights and bias plt.plot(dbx, dby) params = list(my_neuron.parameters()) plot_soln(x, y, params) # + [markdown] id="QpxhBtl8t6Th" # **Q2.5.1** Declare an object `criterion` of type `nn.CrossEntropyLoss`. Note that this can be called as a function on two tensors, one representing the network outputs and the other, the targets that the network is being trained to predict, to return the loss. # + colab={"base_uri": "https://localhost:8080/"} id="tEZiH8lLtnrT" outputId="2522a110-5783-417b-f3f6-cbbeda57ea8d" # solution here criterion = nn.CrossEntropyLoss() # forward + backward + optimize outputs =my_neuron.forward((x)) input = torch.transpose(torch.tensor([[1.0,1.0,1.0,1.0,1.0,1.0]]),0,1) outputs = torch.cat((outputs,input), 1) loss = criterion(outputs,y) print("loss =", loss.item()) # + [markdown] id="eYv8hztvuLg1" # # **Q2.5.2** Print out the chain of `grad_fn` functions backwards starting from `loss.grad_fn` to demonstrate what backpropagation will be run on. This part is already given to you. 
# + colab={"base_uri": "https://localhost:8080/"} id="zM6X4OIYuG8O" outputId="69b2d6ec-6d94-4330-92c6-76bc50d74ed2" print(loss.grad_fn) print(loss.grad_fn.next_functions[0][0]) print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) print(loss.grad_fn.next_functions[0][0].next_functions[0][0].next_functions[0][0]) # + [markdown] id="8jQb5oh5uWG-" # **Q2.5.3** Run the Stochastic Gradient Descent (SGD) optimizer from the `torch.optim` package to train your classifier on the toy dataset. Use the entire dataset in each batch. Use a learning rate of $0.01$ (no other hyperparameters). You will need to write a training loop that uses the `.step()` function of the optimizer. Plot the solution and print the loss after 10000 iterations. # + colab={"base_uri": "https://localhost:8080/", "height": 303} id="vkm4xUHZuVQf" outputId="e427ba15-7fac-4b7c-932e-0de13a923e97" # solution here import torch.optim as optim # create your optimizer optimizer = optim.SGD(params, lr=0.01) # training loop for i in range(10000): # in your training loop: # 1. zero the gradient buffers optimizer.zero_grad() # 2. compute the output by given the input x out = my_neuron.forward(x) my_neuron.zero_grad() # 3. compute the loss outputs = torch.cat((out,input), 1) loss = criterion(outputs,y) # 4. computing the gradient loss.backward(retain_graph=True) # 5. update the parameter by calling step on the optimizer optimizer.step() print("loss =", loss.item()) params = list(my_neuron.parameters()) plot_soln(x, y, params) # + id="sXQQ5Vxjue5D"
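As a sanity check on the Problem 1 derivation, the simplified gradient of the sigmoid + cross-entropy pair, $\partial L/\partial z = \hat{y} - y$, can be verified against a finite-difference approximation in plain Python (an illustrative sketch, not part of the original assignment):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(y_hat, y):
    return -y * math.log(y_hat) - (1 - y) * math.log(1 - y_hat)

z, y = 0.7, 1.0          # an arbitrary pre-activation value and label
y_hat = sigmoid(z)

# Analytic result from Problem 1: dL/dz simplifies to (y_hat - y)
analytic = y_hat - y

# Central finite-difference approximation of the same derivative
eps = 1e-6
numeric = (cross_entropy(sigmoid(z + eps), y)
           - cross_entropy(sigmoid(z - eps), y)) / (2 * eps)
```

The two values agree to several decimal places, which is the same kind of gradient check autograd performs for us automatically.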
ec414_Intro_to_machine_learning/Backpropagation_and_introduction_to_Pytorch_Homework9.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Data Analysis with Pandas # ## Outline # # ### Part 1 # # * [Intro to Pandas](#Intro-to-Pandas) # * [Pandas Data Structures](#Pandas-Data-Structures) # * [DataFrame](#DataFrame) # * [Dealing with Columns](#Dealing-with-Columns) # # ### Part 2 # # * [Indexing & Selecting](#Indexing-&-Selecting) # * [Grouping](#Grouping) # * [Handling Missing Data](#Handling-Missing-Data) # # # ### Part 3 # # * [Plotting](#Plotting) # --- # ## Part 1 # ### Intro to Pandas from IPython.core.display import HTML HTML("<iframe src=http://pandas.pydata.org width=800 height=350></iframe>") import pandas as pd # ## DataFrame # ### Reading Data from File # UCI Machine Learning Repository: [Adult Data Set](https://archive.ics.uci.edu/ml/datasets/Adult) # To read a file, we do not need to have it on our machine; here we read it straight from a URL. The data must be in the format we are reading, in this case CSV. If it were JSON, we would use the `read_json` command instead. # **Note** If the internet is slow and you cannot upload the file to Colab, use this URL instead: https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data adult = pd.read_csv("data/adult.csv") adult.head(3) # Above, we can see that the first row of our data has become the header, because the loaded data has no header in its first row. We can fix this by passing `header=None` when loading the data. adult = pd.read_csv("data/adult.csv", header=None) adult.head() # We can assign column names as follows: columns = ["age", "Work Class", "fnlwgt", "education", "education-num", "marital-status", "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "Money Per Year"] adult.columns = columns adult.head(2) adult["age"][0:3] columns = ["age", "Work Class", "fnlwgt", "education", "education-num",
"marital-status", "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "Money Per Year"] adult = pd.read_csv("data/adult.csv", names=columns) adult.head() # เราสามารถเลือกข้อมูลเฉพาะ column นั้นๆ ได้ตามนี้ adult["age"] # คำสั่ง `value_count` เป็นคำสั่งเอาไว้นับจำนวนของค่าที่อยู่ใน column นั้นๆ เช่นถ้าเราต้องการนับจำนวนของคนที่มีอายุในแต่ละช่วง ทำได้ตามนี้ adult["age"].value_counts(ascending=True) # คำนวณค่าสถิติพื้นฐานได้ adult["age"].mean() adult["age"].std() adult["age"].value_counts(ascending=True)[0:5] adult["age"].value_counts().index[0] adult[adult["age"] > 50] adult[adult["age"] > 60] adult_age_36 = adult["age"] == adult["age"].value_counts().index[0] adult[adult_age_36]["sex"].value_counts() adult["age"].value_counts() # ## Dealing with Columns # ### Renaming Columns columns = ["age", "Work Class", "fnlwgt", "education", "education-num", "marital-status", "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss", "hours-per-week", "native-country", "Money Per Year"] adult = pd.read_csv("data/adult.csv", names=columns) adult.head() adult.rename(columns={"Work Class": "workclass"}) adult.head(1) adult_new = adult.rename(columns={"Work Class": "workclass"}) adult_new.head(1) adult_new.columns.str.lower() adult_new.columns = adult_new.columns.str.lower() \ .str.replace(" ", "-") \ .str.replace("-", "_") adult_new.columns adult_new["money_per_year"].value_counts() adult_new.info() # ### Adding New Columns adult["normalized-age"] = (adult["age"] - adult["age"].mean()) / adult["age"].std() adult.head() adult[adult["capital-gain"] > 5000] adult["normalized-age"] > 1 adult[adult["normalized-age"] > 1] adult["sex"][0] adult.head(2) " yyy".strip() adult[adult["sex"].str.strip() == "Male"] adult.info() # ### Removing Existing Columns adult.drop("normalized-age") # We need to specify the parameter called `axis` when we drop. 
adult.drop("normalized-age", axis="columns")
adult.head()

adult = adult.drop("normalized-age", axis="columns")
adult.head(2)

adult.drop([0, 1], axis="index")

# ---
# ## Part 2

# ## Indexing & Selecting

import pandas as pd

columns = ["age", "Work Class", "fnlwgt", "education", "education-num",
           "marital-status", "occupation", "relationship", "race", "sex",
           "capital-gain", "capital-loss", "hours-per-week", "native-country",
           "Money Per Year"]
adult = pd.read_csv("data/adult.csv", names=columns)
adult.head(2)

# We select a column like this:

adult["age"]

# For more than one column:

adult[["age", "education", "occupation"]].head()

# A column can also be referenced with dot notation, e.g. to get `age`:

adult.age

# Dot notation only works when the column name is a valid variable name — no spaces and no `-`.

# To filter for people older than 30:

adult[adult.age > 30].head(3)

# With more than one condition:

adult[(adult.age > 30) & (adult["capital-gain"] > 2000)].head()

adult.loc[11:12, ["age", "education"]]

adult[(adult["capital-gain"] > 30000) & (adult["capital-gain"] < 50000)][["age", "education"]]

adult.loc[(adult["capital-gain"] > 30000) & (adult["capital-gain"] < 50000), ["age", "education", "capital-gain"]]

adult[adult.education == "Masters"]

adult.education[0]

# The values contain stray whitespace; we can strip it off like this:

adult.education = adult.education.str.strip()
adult[adult.education == "Masters"].head()

adult[adult.education.isin(["Bachelors", "Masters"])].head()

adult[adult.education.str.contains("Mas")]

# ## Grouping

# Grouping always needs an aggregation to follow — for example, group and then take the mean, or group and then take the median.

#adult.groupby("education").agg("mean").head(2)
adult.groupby("education").mean().head(2)

adult.groupby("education").mean().reset_index().head(2)

# It does not all have to happen in a single statement; we can store the grouped object in a variable first.

# +
# Same result as above
adult_group = adult.groupby("education")
adult_group.mean().tail()
# -

# We can group by more than one column; the order of the columns affects the layout of the result.

adult.groupby(["education", "sex"]).mean().head()

adult.groupby(["sex", "education"]).mean()

adult.columns = adult.columns.str.lower().str.replace(" ", "-")
adult[["capital-gain", "capital-loss", "money-per-year"]].groupby("money-per-year").mean()

# ## Handling Missing Data

# There are several basic ways to handle missing data:
# 1. Replace the missing value with the mean.
# 2. Replace the missing value with the most frequent value.
# 3. Replace the missing value with a guess, or a value from a domain expert.
# 4. Drop the rows that contain missing data.
#
# _Note:_ when working with real data, the basic approaches above may not be appropriate; it is worth studying what better, more suitable methods exist.

titanic = pd.read_csv("data/titanic.csv")

# We can use `info` for a first look at whether there is missing data.

titanic.info()

# Because `info` counts the rows that actually contain values, we can spot missing data from it — for example, boat has only 486 actual values out of 1309 rows.

titanic.shape

titanic.isnull().head()

titanic.notnull().head()

titanic.isnull().sum()

df = titanic.drop("Cabin", axis="columns")
df.dropna().shape

# The code below drops rows where either Age or Cabin is missing.

titanic.dropna(subset=["Age", "Cabin"], how="any").shape

# The code below drops rows where both Age and Cabin are missing.

titanic.dropna(subset=["Age", "Cabin"], how='all').shape

# Replace the missing data in the Age column with the mean:

age_mean = titanic["Age"].mean()
titanic["Age"] = titanic["Age"].fillna(age_mean)
titanic.head()

titanic.info()

titanic["Cabin"].value_counts(dropna=False).head()

# Replace the missing data in the Cabin column with the most frequent cabin value, here "C23 C25 C27":

titanic["Cabin"].fillna("C23 C25 C27").value_counts().head()

# ---
# ## Part 3

# ## Plotting

# +
# %matplotlib inline
import pandas as pd
# -

# The `%matplotlib inline` command tells matplotlib to render the plots we create inside this notebook.

# ### Biometric statistics for a group of office workers

# Credit: https://people.sc.fsu.edu/~jburkardt/datasets/datasets.html

df = pd.read_csv("data/biostats.csv")
df.columns

df.columns = ["Name", "Sex", "Age", "Height (in)", "Weight (lbs)"]
df.head()

df["Sex"] = df["Sex"].str.replace('"', "")
df.head()

df.info()

df.hist()

df.groupby("Sex").hist()

df.boxplot(column=["Age"], by="Sex")

df.boxplot(column=["Height (in)"], by="Sex")

# To place several plots side by side, we can bring in the `subplots` capability of `matplotlib`.

import matplotlib.pyplot as plt

# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))
df.boxplot(column=["Age"], by="Sex", ax=ax1)
df.boxplot(column=["Height (in)"], by="Sex", ax=ax2)
df.boxplot(column=["Weight (lbs)"], by="Sex", ax=ax3)
# -

# ### Seaborn

# Seaborn is the package recommended as a starting point for data visualization in Python, because it is easier than building plots with Matplotlib or with Pandas directly.

import seaborn as sns

# ### Revisit: Total awards vs. population by state

# +
df = pd.read_csv("data/pop_vs_degrees.csv")
df["pop"] = df["pop"] / 1000000
df["degrees"] = df["degrees"] / 10000

g = sns.regplot(x="pop", y="degrees", data=df, color="g")
g.set_xlabel("Population [millions]")
g.set_ylabel("Total degrees/awards in 2013 [x10k]")
g.set_title("Total awards vs. population by state")
# -

sns.barplot(data=df)

# ### Revisit: Biometric statistics for a group of office workers

df = pd.read_csv("data/biostats.csv")
df.columns = ["Name", "Sex", "Age", "Height (in)", "Weight (lbs)"]
df["Sex"] = df["Sex"].str.replace('"', "")

_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))
sns.barplot(x="Sex", y="Age", data=df, ax=ax1)
sns.barplot(x="Sex", y="Height (in)", data=df, ax=ax2)
sns.barplot(x="Sex", y="Weight (lbs)", data=df, ax=ax3)

_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))
sns.boxplot(x="Sex", y="Age", data=df, ax=ax1)
sns.boxplot(x="Sex", y="Height (in)", data=df, ax=ax2)
sns.boxplot(x="Sex", y="Weight (lbs)", data=df, ax=ax3)

sns.distplot(df["Age"])

# ### Titanic

import pandas as pd
import matplotlib.pyplot as plt

titanic_df = pd.read_csv("data/titanic.csv")
titanic_df.head()

titanic_df = titanic_df.drop(["Name", "Ticket", "Cabin"], axis="columns").dropna(how="any")
titanic_df.info()

sns.barplot(x="Sex", y="Survived", hue="Pclass", data=titanic_df)

sns.barplot(x="Sex", y="Survived", hue="Pclass", data=titanic_df,
            palette=sns.cubehelix_palette(4, start=0.5, rot=-.75))

sns.countplot(y="Embarked", hue="Pclass", data=titanic_df, palette="Greens_d");

g = sns.FacetGrid(titanic_df, row="Sex", col="Survived")
g.map(plt.hist, "Age")

g = sns.FacetGrid(titanic_df, row="Sex", col="Pclass")
g.map(plt.hist, "Survived")

g = sns.FacetGrid(titanic_df, row="Sex", col="Survived")
g.map(sns.regplot, "Age", "Parch")
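The fill strategies listed in the missing-data section above can be condensed into a tiny, self-contained sketch. The frame `df` here is made up for illustration; it is not the Titanic data.

```python
import pandas as pd

# A toy frame with gaps in a numeric and a categorical column.
df = pd.DataFrame({"Age": [22.0, None, 30.0, None],
                   "Cabin": ["C1", None, "C1", "C2"]})

# Strategy 1: fill a numeric column with its mean.
age_mean = df["Age"].mean()
df["Age"] = df["Age"].fillna(age_mean)

# Strategy 2: fill a categorical column with its most frequent value.
top_cabin = df["Cabin"].value_counts().index[0]
df["Cabin"] = df["Cabin"].fillna(top_cabin)

# No missing values remain.
assert df.isnull().sum().sum() == 0
```

Dropping instead of filling is just `df.dropna(subset=[...], how="any")`, as shown with the Titanic frame above.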
02-pandas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # %config InlineBackend.figure_format='retina' from datacharm import * import matplotlib import matplotlib.dates as mpd import matplotlib.pyplot as plt from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas import matplotlib.patches as mp import datetime as dt from io import BytesIO import locale from enerpi.database import extract_log_file from enerpi.base import timeit from enerpi.database import init_catalog # Pandas to html! INIT_LOG_MARK = "Init ENERPI logging & broadcasting..." def _extract_log_file(log_file, extract_temps=True, verbose=True): rg_log_msg = re.compile('(?P<tipo>INFO|WARNING|DEBUG|ERROR) \[(?P<func>.+?)\] ' '- (?P<ts>\d{1,2}/\d\d/\d\d\d\d \d\d:\d\d:\d\d): (?P<msg>.*?)\n', re.DOTALL) with open(log_file, 'r') as log_f: df_log = pd.DataFrame(rg_log_msg.findall(log_f.read()), columns=['tipo', 'func', 'ts', 'msg']) df_log.drop('func', axis=1, inplace=True) df_log['tipo'] = df_log['tipo'].astype('category') df_log['ts'] = df_log['ts'].apply(lambda x: dt.datetime.strptime(x, '%d/%m/%Y %H:%M:%S')) df_log.loc[df_log.msg.str.startswith('Tªs --> '), 'temp'] = True df_log.loc[df_log.msg.str.startswith('SENDED: '), 'debug_send'] = True b_warn = df_log.tipo == 'WARNING' df_log.loc[b_warn, 'no_red'] = df_log[b_warn].msg.str.startswith('OSError: [Errno 101] La red es inaccesible') df_log['exec'] = df_log['msg'].str.contains(INIT_LOG_MARK).cumsum().astype(int) df_log = df_log.set_index('ts') if extract_temps: rg_temps = 'Tªs --> (?P<CPU>\d{1,2}\.\d) / (?P<GPU>\d{1,2}\.\d) ºC' df_log = df_log.join(df_log[df_log['temp'].notnull()].msg.str.extract(rg_temps, expand=True).astype(float)) if verbose: clasific = df_log.groupby(['exec', 'tipo']).count().dropna(how='all').astype(int) print_ok(clasific) 
conteo_tipos = df_log.groupby('tipo').count() if 'ERROR' in conteo_tipos.index: print_err(df_log[df_log.tipo == 'ERROR'].dropna(how='all', axis=1)) if 'INFO' in conteo_tipos.index: print_info(df_log[df_log.tipo == 'INFO'].dropna(how='all', axis=1)) return df_log #log = extract_log_file('/Users/uge/Dropbox/PYTHON/PYPROJECTS/enerpi/enerpi/DATA/enerpi.log', verbose=False) #ejecuciones = list(sorted(set(log['exec']))) #print_red(ejecuciones) #print_info(log.count()) #last = log[log['exec'] == ejecuciones[-1]] #penult = log[log['exec'] == ejecuciones[-2]] #last # - catalog = init_catalog(base_path='/Users/uge/ENERPIDATA/') catalog # + # TILES optimization IMG_BASEPATH = '/Users/uge/ENERPIDATA/PLOTS' DEFAULT_IMG_MASK = 'enerpi_power_consumption_ldr_{:%Y%m%d_%H%M}_{:%Y%m%d_%H%M}.png' def _gen_tableau20(): # # These are the "Tableau 20" colors as RGB. tableau = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] # Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts. for i in range(len(tableau)): r, g, b = tableau[i] tableau[i] = (r / 255., g / 255., b / 255.) return tableau # semaforo_4 = [sns.palettes.crayons[k] for k in ['Green', 'Sea Green', 'Mango Tango', 'Razzmatazz']] # These are the "Tableau 20" colors as RGB. 
tableau20 = _gen_tableau20() lang, codec = locale.getlocale() use_locale = '{}.{}'.format(lang, codec) locale.setlocale(locale.LC_ALL, use_locale) # sns.set_style('whitegrid') REGEXPR_SVG_HEIGHT = re.compile(r'<svg height="\d{1,4}pt"') REGEXPR_SVG_WIDTH = re.compile(r' width="(\d{1,4}pt")') GRIDSPEC_FULL = {'left': 0, 'right': 1, 'bottom': 0, 'top': 1, 'hspace': 0} # GRIDSPEC_NORMAL = {'left': 0.075, 'right': .925, 'bottom': 0.11, 'top': 0.91, 'hspace': 0} FONTSIZE = 10 FONTSIZE_TILE = 12 TICK_PARAMS_TILE = dict(direction='in', pad=-15, length=3, width=.5) font = {'family': 'sans-serif', 'size': FONTSIZE} # 'weight' : 'light', matplotlib.rc('font', **font) @timeit('_write_fig_to_svg') def _write_fig_to_svg(fig, name_img): #plt.close(fig) canvas = FigureCanvas(fig) output = BytesIO() imgformat = 'svg' canvas.print_figure(output, format=imgformat, transparent=True) svg_out = output.getvalue() # preserve_ratio=True svg_out = REGEXPR_SVG_WIDTH.sub(' width="100%" preserveAspectRatio="none"', REGEXPR_SVG_HEIGHT.sub('<svg height="100%"', svg_out.decode(), count=0), count=0).encode() try: with open(name_img, 'wb') as f: f.write(svg_out) except Exception as e: print('HA OCURRIDO UN ERROR GRABANDO SVG A DISCO: {}'.format(e)) return False return True def _tile_figsize(fraction=1.): dpi = 72 # height = 200 # width = 1.875 * height height = 200 width = 4.5 * height * fraction return (round(width / dpi, 2), round(height / dpi, 2)) @timeit('_prep_axis_tile') def _prep_axis_tile(color): matplotlib.rcParams['axes.linewidth'] = 0 fig, ax = plt.subplots(figsize=_tile_figsize(), dpi=72, gridspec_kw=GRIDSPEC_FULL, facecolor='none') fig.patch.set_alpha(0) ax.patch.set_alpha(0) ax.tick_params(direction='in', pad=-15, length=3, width=.5) ax.tick_params(axis='y', length=0, width=0, labelsize=FONTSIZE_TILE) ax.tick_params(axis='x', which='both', top='off', labelbottom='off') ax.xaxis.grid(True, color=color, linestyle=':', linewidth=1.5, alpha=.6) ax.yaxis.grid(True, color=color, 
linestyle=':', linewidth=1, alpha=.5) return fig, ax @timeit('_adjust_tile_limits') def _adjust_tile_limits(name, ylim, date_ini, date_fin, ax): ax.set_ylim(ylim) ax.set_xlim(left=date_ini, right=date_fin) yticks = list(ax.get_yticks())[1:-1] yticks_l = [v for v in yticks if (v - ylim[0] < (2 * (ylim[1] - ylim[0]) / 3)) and (v > ylim[0])] ax.set_yticks(yticks) if name == 'power': ax.set_yticklabels([str(round(float(y / 1000.), 1)) + 'kW' for y in yticks_l]) ax.tick_params(pad=-45) elif name == 'ldr': ax.set_yticklabels([str(round(float(y / 10.))) + '%' for y in yticks_l]) ax.tick_params(pad=-40) else: ax.tick_params(pad=-30) ax.set_yticklabels([str(round(y, 4)) for y in yticks_l]) return ax @timeit('plot_tile_last_24h') def plot_tile_last_24h(data_s, rs_data_s=None, rm_data_s=None, barplot=False, ax=None, fig=None): color = [1, 0, 1] if ax is None: fig, ax = _prep_axis_tile(color) if not barplot and rm_data_s is not None: data_s = data_s.rolling(rm_data_s).mean() elif not barplot and rs_data_s is not None: data_s = data_s.resample(rs_data_s, label='left').mean() rango_ts = data_s.index[0], data_s.index[-1] date_ini, date_fin = [t.to_pydatetime() for t in rango_ts] if data_s is not None and not data_s.empty: lw, alpha = 1.5, 1. 
ax.grid(b=True, which='major') data_s = data_s.fillna(0) if barplot: div = .5 ylim = (0, np.ceil((data_s.max() + div) // div) * div) ax.bar(data_s.index, data_s, width=1 / 28, edgecolor=color, color=[1, 1, 1, .5], linewidth=lw) ax.xaxis.set_major_locator(mpd.HourLocator((0, 12))) # ax.set_xticks([]) else: if data_s.name == 'power': div = 500 ylim = (0, np.ceil((data_s.max() + div / 5) / div) * div) else: # ldr div = 100 ylim = (0, np.ceil((data_s.max() + div / 2) // div) * div) data_s = data_s.fillna(0) ax.plot(data_s.index, data_s, color=color, linewidth=lw, alpha=alpha) ax.fill_between(data_s.index, data_s, color=color, alpha=alpha / 2) ax.xaxis.set_major_locator(mpd.HourLocator((0, 12))) ax.xaxis.set_minor_locator(mpd.HourLocator(interval=1)) else: ylim = 0, 100 ax.annotate('NO DATA!', xy=(.35, .3), xycoords='axes fraction', va='center', ha='center', color=(.9, .9, .9), fontsize=25) _adjust_tile_limits(data_s.name, ylim, date_ini, date_fin, ax) return fig, ax @timeit('gen_svg_tiles') def gen_svg_tiles(path_dest, catalog, last_hours=(72, 48, 24)): total_hours = last_hours[0] last_data, last_data_c = catalog.get(last_hours=total_hours, with_summary_data=True) if last_data is not None: ahora = dt.datetime.now().replace(second=0, microsecond=0) xlim = mpd.date2num(ahora - dt.timedelta(hours=total_hours)), mpd.date2num(ahora) delta = xlim[1] - xlim[0] fig, ax = None, None for data_s, plot_bar in zip([last_data.power, last_data.ldr, last_data_c.kWh], [False, False, True]): if ax is not None: plt.cla() fig.set_figwidth(_tile_figsize()[0]) fig, ax = plot_tile_last_24h(data_s, rs_data_s='5min', barplot=plot_bar, ax=ax, fig=fig) # , rm_data_s=300) for lh in last_hours: file = os.path.join(path_dest, 'tile_{}_{}_last_{}h.svg'.format('enerpi_data', data_s.name, lh)) ax.set_xlim((xlim[0] + delta * (1 - lh / total_hours), xlim[1])) fig.set_figwidth(_tile_figsize(lh / total_hours)[0]) _write_fig_to_svg(fig, name_img=file) if fig is not None: plt.close(fig) return True else: 
return False gen_svg_tiles('/Users/uge/ENERPIDATA/PLOTS', catalog, last_hours=(96, 84, 72, 60, 48, 36, 24, 12)) # - gen_svg_tiles('/Users/uge/ENERPIDATA/PLOTS', catalog, last_hours=(72, 48, 24)) # + # mpd.HourLocator? # #mpd.DayLocator? # - with pd.HDFStore('/Users/uge/Dropbox/PYTHON/PYPROJECTS/enerpi/enerpiweb/static/debug.h5', mode='r') as st: data = st['debug'] data['T'] = data.index.map(lambda x: x.time().strftime('%H:%M:%S')) data.head() # + from ipywidgets import HTML df = data.reset_index(drop=True)[['T', 'Power (W)', 'LDR (%)', 'nº samples']].head() HTML(df.to_html(justify='center', index=False, bold_rows=True, notebook=True, show_dimensions=True)) # - new_html = ( df.style.caption .format(percent) .applymap(color_negative_red, subset=['col1', 'col2']) .set_properties(**{'font-size': '9pt', 'font-family': 'Calibri'}) .bar(subset=['col4', 'col5'], color='lightblue') .render() ) # + #df.style.set_uuid('idunico-tablebuffer').render() #df.style.set_table_attributes('class="table-responsive"').render() template = Template(""" <style type="text/css" > {% for s in table_styles %} #T_{{uuid}} {{s.selector}} { {% for p,val in s.props %} {{p}}: {{val}}; {% endfor %} } {% endfor %} {% for s in cellstyle %} #T_{{uuid}}{{s.selector}} { {% for p,val in s.props %} {{p}}: {{val}}; {% endfor %} } {% endfor %} </style> <table id="Table_{{uuid}}" {{ table_attributes }}> {% if caption %} <caption>{{caption}}</caption> {% endif %} <thead> {% for r in head %} <tr> {% for c in r %} <{{c.type}} class="{{c.class}}">{{c.value}} {% endfor %} </tr> {% endfor %} </thead> <tbody> {% for r in body %} <tr> {% for c in r %} <{{c.type}} id="T_{{uuid}}_{{c.id}}" class="{{c.class}}"> {{ c.display_value }} {% endfor %} </tr> {% endfor %} </tbody> </table> """) # - pd.Timestamp('2016-08-20 20:31:32.854454').replace(microsecond=0) # + from enerpi import BASE_PATH paleta = pd.read_csv(os.path.join(BASE_PATH, 'rsc', 'paleta_power_w.csv') ).set_index('Unnamed: 0')['0'].str[1:-1].str.split(', 
').apply(lambda x: [float(i) for i in x]) def aplica_paleta(serie): return ['background-color: rgba({}, {}, {}, .7); color: #fff'.format( *map(lambda x: int(255 * x), paleta.loc[:v].iloc[-1])) for v in serie] #df.style #paleta #valores_w = [100, 150, 225, 300, 500, 1000, 1500, 2000, 4000, 6000] #colores = [paleta.loc[:v].iloc[-1] for v in valores_w] #sns.palplot(colores) #plt.show() html_styled = (df.style .apply(aplica_paleta, subset=['Power (W)']) .format({'LDR (%)': lambda x: "{:.1f} %".format(x), 'Power (W)': lambda x: "<strong>{}</strong> W".format(x)}) .bar(subset=['LDR (%)'], color='lightblue') .render() ) print_blue(html_styled) HTML(html_styled) # - (df.style .apply(aplica_paleta, subset=['Power (W)']) #.format({'LDR (%)': lambda x: "{:.1f} %".format(x), 'Power (W)': lambda x: "<strong>{}</strong> W".format(x)}) .bar(subset=['LDR (%)'], color='yellow') .set_properties(**{'font-size': '12pt', 'font-family': 'Calibri'}) ) # + print(pd.formats.style.Styler.template.render()) # pd.formats.style.Styler.use? 
# -

# +
# Reading nginx.conf
import re

raw = '''server {
    listen 80;
    server_name localhost;
    charset utf-8;
    client_max_body_size 75M;

    # EnerWeb
    location = /enerweb { rewrite ^ /enerweb/; }
    # location /enerweb { try_files $uri @enerweb; }
    # location @enerweb {
    location /enerweb/ {
        include uwsgi_params;
        uwsgi_param /home/pi/PYTHON/enerweb/enerwebapp/enerweb.wsgi.py /enerweb;
        uwsgi_pass unix:/tmp/enerweb.sock;
        uwsgi_read_timeout 300;
    }

    # EnerpiWeb
    location = /enerpi { rewrite ^ /enerpi/; }
    location /enerpi/ {
        include uwsgi_params;
        uwsgi_param /home/pi/PYTHON/enerpi/enerpiweb/__main__.py /enerpi;
        uwsgi_pass unix:/tmp/enerpiweb.sock;
        uwsgi_read_timeout 300;
    }

    # MotionEye
    location = /cams { rewrite ^ /cams/; }
    location /cams/ {
        proxy_pass http://127.0.0.1:8765/;
        proxy_read_timeout 120s;
        #access_log off;
    }

    # Home Assistant (HASS)
    location = /hass { rewrite ^ /hass/; }
    location /hass/ {
        proxy_pass http://127.0.0.1:8123/;
        proxy_read_timeout 120s;
        #access_log off;
    }
}'''

rg_servers = re.compile('server {(.*)}', flags=re.DOTALL | re.MULTILINE)
rg_lines = re.compile('(\n+?\s+)', flags=re.DOTALL | re.MULTILINE)
print(rg_servers.findall(raw)[0])
nginx = pd.DataFrame(rg_lines.split(rg_servers.findall(raw)[0])[::2], columns=['msg'])
nginx.tail()
# -

nginx['comment'] = nginx['msg'].str.startswith('#')
nginx['open'] = (~nginx['comment'] & nginx['msg'].str.contains('\{')).cumsum()
nginx['close'] = (~nginx['comment'] & nginx['msg'].str.contains('\}')).cumsum()
nginx

# +
import configparser

config = configparser.RawConfigParser()
config.read('/Users/uge/Dropbox/PYTHON/PYPROJECTS/enerpi/enerpi/config_enerpi.ini')
{s: dict(config[s]) for s in config.sections()}

# #parser.read_file(p_conf)
#print(open(p_conf, 'r').readlines())
# -

list(config.keys())

config.get('ENERPI_DATA', 'LOGGING_LEVEL', fallback=4)

pd.read_hdf('/Users/uge/ENERPIDATA/CURRENT_MONTH/DATA_2016_08_DAY_15.h5', '/hours')

s_b = config['BROADCAST']
dict(s_b.items())

config['BROADCAST']['udp_port']

s_b.getint('UDP_PORT', fallback=57775)

dict(config['RGBLED'])

# +
from ipywidgets import HTML

last = last.dropna(how='all', axis=1)
HTML(last.to_html(columns=['tipo', 'msg'], classes=['table', 'table-responsive', 'table-hover'], notebook=True))
# -

dict(config['ENERPI_SAMPLER'])

# +
# -
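The heart of `_extract_log_file` earlier in this notebook is a single named-group regular expression. A stripped-down, pandas-free sketch of the same parsing idea (the log format is assumed to match that regex):

```python
import re
import datetime as dt

# Named groups give each field of a log line a name.
RG_LOG = re.compile(
    r'(?P<tipo>INFO|WARNING|DEBUG|ERROR) \[(?P<func>.+?)\] '
    r'- (?P<ts>\d{1,2}/\d\d/\d\d\d\d \d\d:\d\d:\d\d): (?P<msg>.*)')

def parse_line(line):
    """Return a dict of fields for one log line, or None if it doesn't match."""
    m = RG_LOG.match(line)
    if m is None:
        return None
    rec = m.groupdict()
    rec['ts'] = dt.datetime.strptime(rec['ts'], '%d/%m/%Y %H:%M:%S')
    return rec

rec = parse_line('INFO [enerpi_daemon] - 20/08/2016 20:31:32: Init ENERPI logging & broadcasting...')
```

A list of such dicts can then be handed straight to `pd.DataFrame(...)`, which is essentially what the function above does in one shot with `findall`.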
notebooks/enerpi log & pandas to html.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from pynq import Overlay
import asyncio
import numpy as np
import pynq
import time

overlay = Overlay("/home/xilinx/IPBitFile/Unmixing.bit")
# overlay.download()
AE = overlay.Abundance_Estimator_0
MVHT = overlay.MVHT_0
# pynq.ps.Clocks.fclk0_mhz = 300

Fp_hsi = open("/home/xilinx/jupyter_notebooks/hsi.txt", 'r')
hsi_ori = np.zeros((65536, 8))
hsi_ds = np.zeros((4096, 8))
for i in range(65536):
    for j in range(8):
        line_hsi = Fp_hsi.readline()
        hsi_ori[i, j] = float(line_hsi[:-1])
for i in range(4096):
    hsi_ds[i, :] = hsi_ori[i*16, :]
for i in range(8):
    for j in range(4096):
        MVHT.write(0x4000 + 4*(i*4096+j), int(hsi_ds[j, i]*2**22))

# set interrupt for done signal
MVHT.write(0x0008, 1)
# enable global interrupt
MVHT.write(0x0004, 1)
# start
MVHT.write(0x0000, 1)
# t0 = time.time()
while True:
    if (MVHT.read(0x000c) == 1):
        vtx_idx = np.zeros(8, int)
        for idx in range(8):
            vtx_idx[idx] = int(MVHT.read(0x24000 + 4*idx))
            print(str(vtx_idx[idx]))
        break

# async def MVHT_intr_handler(MVHT):
#     while True:
#         await MVHT.interrupt.wait()
#         print('intr received from ' + str(MVHT.read(0x000c)))
#         if (MVHT.read(0x000c) == 1):
#             vtx_idx = np.zeros(8)
#             for idx in range(8):
#                 vtx_idx[idx] = MVHT.read(0x24000 + 4*idx)
#                 print(str(vtx_idx[idx]))
#         if (MVHT.read(0x000c) & 0x1):
#             MVHT.write(0x000c, 1)

async def AE_intr_handler(AE):
    while True:
        await AE.interrupt.wait()
        print('intr received from ' + str(AE.read(0x000c)))
        if (AE.read(0x000c) == 1):
            abundance = np.zeros((8, 64))
            for i in range(8):
                for j in range(64):
                    abundance[i, j] = AE.read(0x1800 + 4*(i*64+j)) / 2**22
                    print(str(abundance[i, j]))
            break

# # get EventLoop:
# loop = asyncio.get_event_loop()
# # run coroutine
# loop.run_until_complete(MVHT_intr_handler(MVHT))
# loop.close()

# vtx_idx = [242, 688, 183, 66, 3145, 2277, 532, 377]
endmember = np.zeros((7, 8))
hsi_batch = np.zeros((7, 64))
for i in range(8):
    endmember[:, i] = hsi_ori[vtx_idx[i]*16, :-1].T
for batch in range(1024):
    hsi_batch = hsi_ori[batch*64:(batch+1)*64, :-1].T
    for j in range(7):
        for k in range(8):
            AE.write(0x1000 + 4*(j*8+k), int(endmember[j, k]*2**22))
        for k in range(64):
            AE.write(0x0800 + 4*(j*64+k), int(hsi_batch[j, k]*2**22))
    # set interrupt for done signal
    AE.write(0x0008, 1)
    # enable global interrupt
    AE.write(0x0004, 1)
    # start
    AE.write(0x0000, 1)
    # get EventLoop:
    loop = asyncio.get_event_loop()
    # run coroutine
    loop.run_until_complete(AE_intr_handler(AE))
    # loop.close()
    # set interrupt for done signal
    AE.write(0x0008, 0)
    # enable global interrupt
    AE.write(0x0004, 0)
    # start
    AE.write(0x0000, 0)
# print('time:', time.time() - t0)
# -
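The `int(x * 2**22)` conversions above send samples to the accelerator as fixed-point integers with 22 fractional bits (a Q-format inferred from the scale factor; the total register width is an assumption here, not taken from the source). A small helper pair makes the round trip explicit:

```python
FRAC_BITS = 22          # fractional bits, matching the 2**22 scale used above
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # real number -> integer register value (truncating, like the int() casts above)
    return int(x * SCALE)

def from_fixed(n):
    # integer register value -> real number
    return n / SCALE

raw = to_fixed(0.5)     # 0.5 becomes 2**21
value = from_fixed(raw) # and converts back exactly
```

The quantization error of one truncated conversion is below `1 / 2**22`, which is why reading abundances back with `/ 2**22` recovers values close to the originals.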
final-project/repositories/Hyperspectral_Unmixing/Unmixing-main/python/Unmixing.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd
import numpy as np

export = pd.DataFrame({'International': ['cocoa butter', 'liquor', 'cake', ''],
                       'Locally': ['cocoa powder', '', '', '']
                       })
segments = pd.DataFrame({'Refreshment Beverages': ['Bournvita', 'Hot Chocolate', '', ''],
                         "Confectionery": ['Tom Tom', 'Buttermint', '', ''],
                         'Intermediate Cocoa Products': ['cocoa powder', 'cocoa butter', 'cocoa liquor', 'cocoa cake']
                         })
brands = pd.DataFrame({'Refreshment Beverages': ['Cadbury Bournvita', 'Cadbury 3-in-1 Hot Chocolate', '', ''],
                       "Confectionery": ['Tom Tom Classic', 'Tom Tom Buttermint', 'Tom Tom Strawberry', ''],
                       'Intermediate Cocoa Products': ['cocoa powder', 'cocoa butter', '', '']
                       })

# One key per concatenated frame, in the same order as the objects.
arr = pd.concat([export, brands, segments], keys=('export', 'brands', 'segments'))
arr
#segments
#brands
# -
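Passing `keys` to `pd.concat` labels each source frame with its own level of a MultiIndex, so the rows of any one source can be recovered later. A minimal illustration with made-up frames:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3, 4]})

# keys adds an outer index level naming each source frame
both = pd.concat([a, b], keys=("a", "b"))

# rows of one source can be pulled back out by key
b_rows = both.loc["b"]
```

Note that `keys` must line up one-to-one, in order, with the list of objects being concatenated.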
week 7/project_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # # Autograd in MinPy # # This tutorial is also available in step-by-step notebook version on [github](https://github.com/dmlc/minpy/blob/master/examples/tutorials/autograd_tutorial.ipynb). Please try it out! # # Writing backprop is often the most tedious and error prone part of a deep net implementation. In fact, the feature of autograd has wide applications and goes beyond the domain of deep learning. MinPy's autograd applies to any NumPy code that is imperatively programmed. Moreover, it is seemlessly integrated with MXNet's symbolic program (see [for example](../tutorial/complete_sol_opt_guide/complete.rst)). By using MXNet's execution engine, all operations can be executed in GPU if available. # # ## A Close Look at Autograd System # MinPy's implementation of autograd is insprired from the [Autograd project](https://github.com/HIPS/autograd). It computes a gradient function for any single-output function. For example, we define a simple function `foo`: # + def foo(x): return x**2 foo(4) # - # Now we want to get its derivative. To do so, simply import `grad` from `minpy.core`. # + import minpy.numpy as np # currently need import this at the same time from minpy.core import grad d_foo = grad(foo) # - d_foo(4) # You can also differentiate as many times as you want: d_2_foo = grad(d_foo) d_3_foo = grad(d_2_foo) # Now import `matplotlib` to visualize the derivatives. # + import matplotlib.pyplot as plt # %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' x = np.linspace(-10, 10, 200) # plt.plot only takes ndarray as input. Explicitly convert MinPy Array into ndarray. 
plt.plot(x.asnumpy(), foo(x).asnumpy(), x.asnumpy(), d_foo(x).asnumpy(), x.asnumpy(), d_2_foo(x).asnumpy(), x.asnumpy(), d_3_foo(x).asnumpy()) plt.show() # - # Just as you expected. # Autograd also differentiates vector inputs. For example: x = np.array([1, 2, 3, 4]) d_foo(x) # ## Gradient of Multivariate Functions # As for multivariate functions, you also need to specify arguments for derivative calculation. Only the specified argument will be calcualted. Just pass the position of the target argument (of a list of arguments) in `grad`. For example: def bar(a, b, c): return 3*a + b**2 - c # We get their gradients by specifying their argument position. gradient = grad(bar, [0, 1, 2]) grad_array = gradient(2, 3, 4) print grad_array # `grad_array[0]`, `grad_array[1]`, and `grad_array[2]` are gradients of argument `a`, `b`, and `c`. # # The following section will introduce a more comprehensive example on matrix calculus. # ## Autograd for Loss Function # # Since in world of machine learning we optimize a scalar loss, Autograd is particular useful to obtain the gradient of input parameters for next updates. For example, we define an affine layer, relu layer, and a softmax loss. Before dive into this section, please see [Logistic regression tutorial](../get-started/logistic_regression.rst) first for a simpler application of Autograd. # + def affine(x, w, b): """ Computes the forward pass for an affine (fully-connected) layer. The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N examples, where each example x[i] has shape (d_1, ..., d_k). We will reshape each input into a vector of dimension D = d_1 * ... * d_k, and then transform it to an output vector of dimension M. 
Inputs: - x: A numpy array containing input data, of shape (N, d_1, ..., d_k) - w: A numpy array of weights, of shape (D, M) - b: A numpy array of biases, of shape (M,) Returns a tuple of: - out: output, of shape (N, M) """ out = np.dot(x, w) + b return out def relu(x): """ Computes the forward pass for a layer of rectified linear units (ReLUs). Input: - x: Inputs, of any shape Returns a tuple of: - out: Output, of the same shape as x """ out = np.maximum(0, x) return out def softmax_loss(x, y): """ Computes the loss for softmax classification. Inputs: - x: Input data, of shape (N, C) where x[i, j] is the score for the jth class for the ith input. - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and 0 <= y[i] < C Returns a tuple of: - loss: Scalar giving the loss """ N = x.shape[0] probs = np.exp(x - np.max(x, axis=1, keepdims=True)) probs = probs / np.sum(probs, axis=1, keepdims=True) loss = -np.sum(np.log(probs) * y) / N return loss # - # Then we use these layers to define a single layer fully-connected network, with a softmax output. class SimpleNet(object): def __init__(self, input_size=100, num_class=3): # Define model parameters. self.params = {} self.params['w'] = np.random.randn(input_size, num_class) * 0.01 self.params['b'] = np.zeros((1, 1)) # don't use int(1) (int cannot track gradient info) def forward(self, X): # First affine layer (fully-connected layer). y1 = affine(X, self.params['w'], self.params['b']) # ReLU activation. y2 = relu(y1) return y2 def loss(self, X, y): # Compute softmax loss between the output and the label. return softmax_loss(self.forward(X), y) # We define some hyperparameters. batch_size = 100 input_size = 50 num_class = 3 # Here is the net and data. net = SimpleNet(input_size, num_class) x = np.random.randn(batch_size, hidden_size) idx = np.random.randint(0, 3, size=batch_size) y = np.zeros((batch_size, num_class)) y[np.arange(batch_size), idx] = 1 # Now get gradients. 
gradient = grad(net.loss) # Then we can get the gradient by simply calling `gradient(X, y)`. d_x = gradient(x, y) # Ok, Ok, I know you are not interested in `x`'s gradient. I will show you how to get the gradients of the parameters. First, you need to define a function with the parameters as the arguments for Autograd to process. Autograd can only track the gradients **in the parameter list**. def loss_func(w, b, X, y): net.params['w'] = w net.params['b'] = b return net.loss(X, y) # Yes, you just need to provide an entry in the new function's parameter list for `w` and `b` and that's it! Now let's try to derive its gradient. # 0, 1 are the positions of w, b in the parameter list. gradient = grad(loss_func, [0, 1]) # Note that you need to pass a list of the positions of the parameters whose gradients you want. # # Now we have d_w, d_b = gradient(net.params['w'], net.params['b'], x, y) # With `d_w` and `d_b` in hand, training `net` is just a piece of cake. # ## Less Calculation: Get Forward Pass and Backward Pass Simultaneously # Since gradient calculation in MinPy needs forward-pass information, if you need the forward result and the gradient at the same time, please use `grad_and_loss` to get them simultaneously. In fact, `grad` is just a wrapper of `grad_and_loss`. For example, we can get both with from minpy.core import grad_and_loss forward_backward = grad_and_loss(bar, [0, 1, 2]) grad_array, result = forward_backward(2, 3, 4) # `grad_array` and `result` hold the gradients and the forward-pass result, respectively.
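Since MinPy may not be installed where this notebook is read, the gradients above can be cross-checked with a plain-Python finite-difference sketch. `numeric_grad` is a hypothetical helper written for this check, not part of MinPy, and the `softmax_loss` below is a framework-free re-statement of the layer defined earlier:

```python
import math

# Central-difference gradient check for bar(a, b, c) = 3a + b^2 - c.
# Analytically: d/da = 3, d/db = 2b, d/dc = -1, so at (2, 3, 4) the
# gradients are (3, 6, -1) -- matching what grad(bar, [0, 1, 2]) reports.
def bar(a, b, c):
    return 3 * a + b ** 2 - c

def numeric_grad(f, args, eps=1e-6):
    """Approximate df/dx_i for every argument via central differences."""
    grads = []
    for i in range(len(args)):
        plus, minus = list(args), list(args)
        plus[i] += eps
        minus[i] -= eps
        grads.append((f(*plus) - f(*minus)) / (2 * eps))
    return grads

g = numeric_grad(bar, [2.0, 3.0, 4.0])
print(g)  # approximately [3.0, 6.0, -1.0]

# The softmax loss defined earlier can be sanity-checked the same way:
# with uniform scores every class has probability 1/C, so the loss is log(C).
def softmax_loss(scores, onehot):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return -sum(y * math.log(e / total) for y, e in zip(onehot, exps))

loss = softmax_loss([0.0, 0.0, 0.0], [0, 1, 0])
print(loss)  # log(3), about 1.0986
```

The same check can be applied to any scalar loss if an analytic gradient looks suspicious.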
examples/tutorials/autograd_tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Project # # 1. This is a project with minimal scaffolding. Expect to use the discussion forums to gain insights! It’s not cheating to ask others for opinions or perspectives! # 2. Be inquisitive, try out new things. # 3. Use the previous modules for insights into how to complete the functions! You'll have to combine Pillow, OpenCV, and Pytesseract # 4. There are hints provided in Coursera, feel free to explore the hints if needed. Each hint provides progressively more details on how to solve the issue. This project is intended to be comprehensive and difficult if you do it without the hints. # # ### The Assignment ### # Take a [ZIP file](https://en.wikipedia.org/wiki/Zip_(file_format)) of images and process them, using a [library built into Python](https://docs.python.org/3/library/zipfile.html) that you need to learn how to use. A ZIP file compresses several different files into one single file, thus saving space. The files in the ZIP file we provide are newspaper images (like you saw in week 3). Your task is to write Python code which allows one to search through the images looking for the occurrences of keywords and faces. E.g. if you search for "pizza" it will return a contact sheet of all of the faces which were located on the newspaper page which mentions "pizza". This will test your ability to learn a new [library](https://docs.python.org/3/library/zipfile.html), your ability to use OpenCV to detect faces, your ability to use tesseract to do optical character recognition, and your ability to use PIL to composite images together into contact sheets. # # Each page of the newspapers is saved as a single PNG image in a file called [images.zip](./readonly/images.zip).
These newspapers are in English, and contain a variety of stories, advertisements and images. Note: This file is fairly large (~200 MB) and may take some time to work with; I would encourage you to use [small_img.zip](./readonly/small_img.zip) for testing. # # Here's an example of the output expected. Using the [small_img.zip](./readonly/small_img.zip) file, if I search for the string "Christopher" I should see the following image: # ![Christopher Search](./readonly/small_project.png) # If I were to use the [images.zip](./readonly/images.zip) file and search for "Mark" I should see the following image (note that there are times when there are no faces on a page, but a word is found!): # ![Mark Search](./readonly/large_project.png) # # Note: That big file can take some time to process - for me it took nearly ten minutes! Use the small one for testing. # + import zipfile from PIL import Image import pytesseract import cv2 as cv import numpy as np # loading the face detection classifier face_cascade = cv.CascadeClassifier('readonly/haarcascade_frontalface_default.xml') # the rest is up to you!
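Before the solution cells below, here is a minimal, self-contained sketch of the `zipfile` API pattern they rely on. It builds a throwaway in-memory archive so it runs anywhere; the assignment itself opens `readonly/images.zip`:

```python
import io
import zipfile

# Build a small in-memory ZIP so the example runs anywhere.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('a-0.png', b'fake png bytes')
    zf.writestr('a-1.png', b'more fake bytes')

# Read it back: infolist() yields one ZipInfo per member, and open()
# returns a file-like object for each member.
with zipfile.ZipFile(buf, 'r') as zf:
    names = [info.filename for info in zf.infolist()]
    with zf.open('a-0.png') as member:
        data = member.read()

print(names)  # ['a-0.png', 'a-1.png']
```

The solution below uses exactly this `infolist()` / `open()` pattern, but passes each member to `PIL.Image.open` instead of reading raw bytes.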
# + # Process the zipped source files; use a dictionary here, indexed by file name ('a-0.png', etc.) # Each image is saved in a nested dictionary under the key 'raw_image' src_images = {} raw_files = zipfile.ZipFile('readonly/images.zip', 'r') for item in raw_files.infolist(): # print(item) shows the processing details with raw_files.open(item) as file: image = Image.open(file).convert('RGB') src_images[item.filename] = {'raw_image':image} print(raw_files.infolist()[0]) raw_files.close() # Quick check by printing some information print(src_images.keys()) print(type(src_images['a-0.png'])) # + # Extract text information from each image, assign it to the 'text' attribute # Grab a cup of coffee, it takes a while for image_item in src_images.keys(): text_info = pytesseract.image_to_string(src_images[image_item]['raw_image']) src_images[image_item]['text'] = text_info # Quick result check; make sure the text is reasonably recognized print(src_images['a-0.png']['text']) # + # Search for faces on each page, compute the bounding boxes, and extract the face crops for image_item in src_images.keys(): image_ocv = np.array(src_images[image_item]['raw_image']) # And we'll convert it to grayscale using cvtColor (the array is RGB, since it came from PIL) gray_image = cv.cvtColor(image_ocv, cv.COLOR_RGB2GRAY) faces_collection = face_cascade.detectMultiScale(gray_image, 1.3, 5) src_images[image_item]['faces'] = [] # set up an empty 'faces' attribute list for x, y, w, h in faces_collection: face = src_images[image_item]['raw_image'].crop((x, y, x + w, y + h)) src_images[image_item]['faces'].append(face) # + # create thumbnails for image_item in src_images.keys(): for face_item in src_images[image_item]['faces']: face_item.thumbnail((100,100)) # + # search for the keyword in every page's text and display the faces def search(keyword): for image_item in src_images.keys(): if (keyword in src_images[image_item]['text']): if(len(src_images[image_item]['faces']) != 0): print("Result found in file {}".format(image_item)) h = 
math.ceil(len(src_images[image_item]['faces'])/5) contact_sheet = Image.new('RGB',(500, 100 * h)) xc = 0 yc = 0 for img in src_images[image_item]['faces']: contact_sheet.paste(img, (xc, yc)) if xc + 100 == contact_sheet.width: xc = 0 yc += 100 else: xc += 100 display(contact_sheet) else: print("Result found in file {} \nBut there were no faces in that file\n\n".format(image_item)) return # - import math search('Christopher') search('Mark') search('pizza')
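The paste loop in `search` advances across a 500-pixel-wide sheet in 100-pixel steps and wraps to a new row when the edge is reached. The same layout can be expressed as a small standalone sketch (the helper names are hypothetical, shown only to make the geometry explicit):

```python
import math

# Top-left corner for each 100x100 thumbnail, five per row.
def contact_sheet_positions(n, thumb=100, per_row=5):
    return [((i % per_row) * thumb, (i // per_row) * thumb) for i in range(n)]

# Sheet size matching Image.new('RGB', (500, 100 * ceil(n / 5))) above.
def sheet_size(n, thumb=100, per_row=5):
    return (per_row * thumb, thumb * math.ceil(n / per_row))

positions = contact_sheet_positions(7)
print(positions)  # [(0, 0), (100, 0), (200, 0), (300, 0), (400, 0), (0, 100), (100, 100)]
print(sheet_size(7))  # (500, 200)
```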
LW_Course5_Week3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Programming assignment # ## Transfer learning # ### Instructions # # In this notebook, you will create a neural network model to classify images of cats and dogs, using transfer learning: you will use part of a pre-trained image classifier model (trained on ImageNet) as a feature extractor, and train additional new layers to perform the cats and dogs classification task. # # Some code cells are provided for you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line: # # `#### GRADED CELL ####` # # Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly. Inside these graded cells, you can use any functions or classes that are imported below, but make sure you don't use any variables that are outside the scope of the function. # # ### How to submit # # Complete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook. # # ### Let's get started! # # We'll start by running some imports and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here. # + #### PACKAGE IMPORTS #### # Run this cell first to import all required packages.
Do not make any imports elsewhere in the notebook import tensorflow as tf from tensorflow.keras.models import Sequential, Model import numpy as np import os import pandas as pd from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns # If you would like to make further imports from Tensorflow, add them here # - # <img src="data/Cats-Dogs-Rex.jpg" alt="Drawing" style="height: 450px;" align="center"/> # # #### The Dogs vs Cats dataset # # In this assignment, you will use the [Dogs vs Cats dataset](https://www.kaggle.com/c/dogs-vs-cats/data), which was used for a 2013 Kaggle competition. It consists of 25000 images containing either a cat or a dog. We will only use a subset of 600 images and labels. The dataset is a subset of a much larger dataset of 3 million photos that were originally used as a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), referred to as “Asirra” or Animal Species Image Recognition for Restricting Access. # # * <NAME>, <NAME>, <NAME>, and <NAME>. "Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization." Proceedings of 14th ACM Conference on Computer and Communications Security (CCS), October 2007. # # Your goal is to train a classifier model using part of a pre-trained image classifier, using the principle of transfer learning. # #### Load and preprocess the data # + images_train = np.load('data/images_train.npy') / 255. images_valid = np.load('data/images_valid.npy') / 255. images_test = np.load('data/images_test.npy') / 255. 
labels_train = np.load('data/labels_train.npy') labels_valid = np.load('data/labels_valid.npy') labels_test = np.load('data/labels_test.npy') # - print("{} training data examples".format(images_train.shape[0])) print("{} validation data examples".format(images_valid.shape[0])) print("{} test data examples".format(images_test.shape[0])) # #### Display sample images and labels from the training set # + # Display a few images and labels class_names = np.array(['Dog', 'Cat']) plt.figure(figsize=(15,10)) inx = np.random.choice(images_train.shape[0], 15, replace=False) for n, i in enumerate(inx): ax = plt.subplot(3,5,n+1) plt.imshow(images_train[i]) plt.title(class_names[labels_train[i]]) plt.axis('off') # - # #### Create a benchmark model # # We will first train a CNN classifier model as a benchmark model before implementing the transfer learning approach. Using the functional API, build the benchmark model according to the following specifications: # # * The model should use the `input_shape` in the function argument to set the shape in the Input layer. # * The first and second hidden layers should be Conv2D layers with 32 filters, 3x3 kernel size and ReLU activation. # * The third hidden layer should be a MaxPooling2D layer with a 2x2 window size. # * The fourth and fifth hidden layers should be Conv2D layers with 64 filters, 3x3 kernel size and ReLU activation. # * The sixth hidden layer should be a MaxPooling2D layer with a 2x2 window size. # * The seventh and eighth hidden layers should be Conv2D layers with 128 filters, 3x3 kernel size and ReLU activation. # * The ninth hidden layer should be a MaxPooling2D layer with a 2x2 window size. # * This should be followed by a Flatten layer, and a Dense layer with 128 units and ReLU activation # * The final layer should be a Dense layer with a single neuron and sigmoid activation. # * All of the Conv2D layers should use `'SAME'` padding. # # In total, the network should have 13 layers (including the `Input` layer). 
# # The model should then be compiled with the RMSProp optimiser with learning rate 0.001, binary cross entropy loss and a binary accuracy metric. # + #### GRADED CELL #### # Complete the following function. # Make sure to not change the function name or arguments. def get_benchmark_model(input_shape): """ This function should build and compile a CNN model according to the above specification, using the functional API. The function takes input_shape as an argument, which should be used to specify the shape in the Input layer. Your function should return the model. """ inputs = tf.keras.Input(shape=input_shape) x = tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same')(inputs) x = tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same')(x) x = tf.keras.layers.MaxPool2D(2)(x) x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(x) x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same')(x) x = tf.keras.layers.MaxPool2D(2)(x) x = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(x) x = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same')(x) x = tf.keras.layers.MaxPool2D(2)(x) x = tf.keras.layers.Flatten()(x) x = tf.keras.layers.Dense(128, activation='relu')(x) outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001), loss='binary_crossentropy', metrics=['accuracy']) return model # + # Build and compile the benchmark model, and display the model summary benchmark_model = get_benchmark_model(images_train[0].shape) benchmark_model.summary() # - # #### Train the CNN benchmark model # # We will train the benchmark CNN model using an `EarlyStopping` callback. Feel free to increase the training time if you wish.
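Before fitting, it can help to see what the specification above does to tensor shapes. The following framework-free sketch tracks shapes through the three conv/conv/pool stages; the 160x160 input size is only an illustrative assumption, not the dataset's actual image size:

```python
# 'SAME'-padded, stride-1 convolutions keep height and width; each 2x2
# max-pool halves them (floor division, matching Keras defaults).
def benchmark_shapes(h, w, channels=3):
    shapes = [(h, w, channels)]           # Input layer
    for filters in (32, 64, 128):
        shapes.append((h, w, filters))    # after the two SAME convs
        h, w = h // 2, w // 2             # after the 2x2 max-pool
        shapes.append((h, w, filters))
    return shapes

shapes = benchmark_shapes(160, 160)
print(shapes[-1])  # (20, 20, 128): 160 is halved three times before Flatten
```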
# + # Fit the benchmark model and save its training history earlystopping = tf.keras.callbacks.EarlyStopping(patience=2) history_benchmark = benchmark_model.fit(images_train, labels_train, epochs=1, batch_size=32, validation_data=(images_valid, labels_valid), callbacks=[earlystopping]) # - # #### Plot the learning curves # + # Run this cell to plot accuracy vs epoch and loss vs epoch plt.figure(figsize=(15,5)) plt.subplot(121) try: plt.plot(history_benchmark.history['accuracy']) plt.plot(history_benchmark.history['val_accuracy']) except KeyError: plt.plot(history_benchmark.history['acc']) plt.plot(history_benchmark.history['val_acc']) plt.title('Accuracy vs. epochs') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='lower right') plt.subplot(122) plt.plot(history_benchmark.history['loss']) plt.plot(history_benchmark.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() # - # #### Evaluate the benchmark model # + # Evaluate the benchmark model on the test set benchmark_test_loss, benchmark_test_acc = benchmark_model.evaluate(images_test, labels_test, verbose=0) print("Test loss: {}".format(benchmark_test_loss)) print("Test accuracy: {}".format(benchmark_test_acc)) # - # #### Load the pretrained image classifier model # # You will now begin to build our image classifier using transfer learning. # You will use the pre-trained MobileNet V2 model, available to download from [Keras Applications](https://keras.io/applications/#mobilenetv2). However, we have already downloaded the pretrained model for you, and it is available at the location `./models/MobileNetV2.h5`. # + #### GRADED CELL #### # Complete the following function. # Make sure to not change the function name or arguments. 
def load_pretrained_MobileNetV2(path): """ This function takes a path as an argument, and uses it to load the full MobileNetV2 pretrained model from the path. Your function should return the loaded model. """ return tf.keras.models.load_model(path) # + # Call the function loading the pretrained model and display its summary base_model = load_pretrained_MobileNetV2('models/MobileNetV2.h5') base_model.summary() # - # #### Use the pre-trained model as a feature extractor # # You will remove the final layer of the network and replace it with new, untrained classifier layers for our task. You will first create a new model that has the same input tensor as the MobileNetV2 model, and uses the output tensor from the layer with name `global_average_pooling2d_6` as the model output. # + #### GRADED CELL #### # Complete the following function. # Make sure to not change the function name or arguments. def remove_head(pretrained_model): """ This function should create and return a new model, using the input and output tensors as specified above. Use the 'get_layer' method to access the correct layer of the pre-trained model. """ output = pretrained_model.get_layer('global_average_pooling2d_6').output model = Model(inputs=pretrained_model.inputs, outputs=output) return model # + # Call the function removing the classification head and display the summary feature_extractor = remove_head(base_model) feature_extractor.summary() # - # You can now construct new final classifier layers for your model. Using the Sequential API, create a new model according to the following specifications: # # * The new model should begin with the feature extractor model. # * This should then be followed with a new dense layer with 32 units and ReLU activation function. # * This should be followed by a dropout layer with a rate of 0.5. # * Finally, this should be followed by a Dense layer with a single neuron and a sigmoid activation function. 
# # In total, the network should be composed of the pretrained base model plus 3 layers. # + #### GRADED CELL #### # Complete the following function. # Make sure to not change the function name or arguments. def add_new_classifier_head(feature_extractor_model): """ This function takes the feature extractor model as an argument, and should create and return a new model according to the above specification. """ model = Sequential([feature_extractor_model, tf.keras.layers.Dense(32, activation='relu'), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(1, activation='sigmoid')]) return model # + # Call the function adding a new classification head and display the summary new_model = add_new_classifier_head(feature_extractor) new_model.summary() # - # #### Freeze the weights of the pretrained model # You will now need to freeze the weights of the pre-trained feature extractor, so that only the weights of the new layers you have added will change during the training. # # You should then compile your model as before: use the RMSProp optimiser with learning rate 0.001, binary cross entropy loss and a binary accuracy metric. # + #### GRADED CELL #### # Complete the following function. # Make sure to not change the function name or arguments. def freeze_pretrained_weights(model): """ This function should freeze the weights of the pretrained base model. Your function should return the model with frozen weights. """ model.layers[0].trainable = False opt = tf.keras.optimizers.RMSprop(learning_rate=0.001) model.compile(optimizer=opt, loss='binary_crossentropy', metrics=[tf.keras.metrics.BinaryAccuracy()]) return model # + # Call the function freezing the pretrained weights and display the summary frozen_new_model = freeze_pretrained_weights(new_model) frozen_new_model.summary() # - # #### Train the model # # You are now ready to train the new model on the dogs vs cats data subset. We will use an `EarlyStopping` callback with patience set to 2 epochs, as before.
Feel free to increase the training time if you wish. # + # Train the model and save its training history earlystopping = tf.keras.callbacks.EarlyStopping(patience=2) history_frozen_new_model = frozen_new_model.fit(images_train, labels_train, epochs=1, batch_size=32, validation_data=(images_valid, labels_valid), callbacks=[earlystopping]) # - history_frozen_new_model.history # #### Plot the learning curves # + # Run this cell to plot accuracy vs epoch and loss vs epoch plt.figure(figsize=(15,5)) plt.subplot(121) try: plt.plot(history_frozen_new_model.history['binary_accuracy']) plt.plot(history_frozen_new_model.history['val_binary_accuracy']) except KeyError: plt.plot(history_frozen_new_model.history['acc']) plt.plot(history_frozen_new_model.history['val_acc']) plt.title('Accuracy vs. epochs') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='lower right') plt.subplot(122) plt.plot(history_frozen_new_model.history['loss']) plt.plot(history_frozen_new_model.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() # - # #### Evaluate the new model # + # Evaluate the new model on the test set new_model_test_loss, new_model_test_acc = frozen_new_model.evaluate(images_test, labels_test, verbose=0) print("Test loss: {}".format(new_model_test_loss)) print("Test accuracy: {}".format(new_model_test_acc)) # - # #### Compare both models # # Finally, we will look at the comparison of training, validation and test metrics between the benchmark and transfer learning model.
# + # Gather the benchmark and new model metrics benchmark_train_loss = history_benchmark.history['loss'][-1] benchmark_valid_loss = history_benchmark.history['val_loss'][-1] try: benchmark_train_acc = history_benchmark.history['acc'][-1] benchmark_valid_acc = history_benchmark.history['val_acc'][-1] except KeyError: benchmark_train_acc = history_benchmark.history['accuracy'][-1] benchmark_valid_acc = history_benchmark.history['val_accuracy'][-1] new_model_train_loss = history_frozen_new_model.history['loss'][-1] new_model_valid_loss = history_frozen_new_model.history['val_loss'][-1] try: new_model_train_acc = history_frozen_new_model.history['binary_accuracy'][-1] new_model_valid_acc = history_frozen_new_model.history['val_binary_accuracy'][-1] except KeyError: new_model_train_acc = history_frozen_new_model.history['accuracy'][-1] new_model_valid_acc = history_frozen_new_model.history['val_accuracy'][-1] # + # Compile the metrics into a pandas DataFrame and display the table comparison_table = pd.DataFrame([['Training loss', benchmark_train_loss, new_model_train_loss], ['Training accuracy', benchmark_train_acc, new_model_train_acc], ['Validation loss', benchmark_valid_loss, new_model_valid_loss], ['Validation accuracy', benchmark_valid_acc, new_model_valid_acc], ['Test loss', benchmark_test_loss, new_model_test_loss], ['Test accuracy', benchmark_test_acc, new_model_test_acc]], columns=['Metric', 'Benchmark CNN', 'Transfer learning CNN']) comparison_table.index=['']*6 comparison_table # + # Plot confusion matrices for benchmark and transfer learning models plt.figure(figsize=(15, 5)) preds = benchmark_model.predict(images_test) preds = (preds >= 0.5).astype(np.int32) cm = confusion_matrix(labels_test, preds) df_cm = pd.DataFrame(cm, index=['Dog', 'Cat'], columns=['Dog', 'Cat']) plt.subplot(121) plt.title("Confusion matrix for benchmark model\n") sns.heatmap(df_cm, annot=True, fmt="d", cmap="YlGnBu") # sklearn's confusion_matrix puts actual labels on rows, predictions on columns plt.ylabel("Actual") plt.xlabel("Predicted") preds = frozen_new_model.predict(images_test) preds = (preds >= 0.5).astype(np.int32) cm = confusion_matrix(labels_test, preds) df_cm = pd.DataFrame(cm, index=['Dog', 'Cat'], columns=['Dog', 'Cat']) plt.subplot(122) plt.title("Confusion matrix for transfer learning model\n") sns.heatmap(df_cm, annot=True, fmt="d", cmap="YlGnBu") plt.ylabel("Actual") plt.xlabel("Predicted") plt.show() # - # Congratulations for completing this programming assignment! In the next week of the course we will learn how to develop an effective data pipeline.
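As a footnote on reading the confusion-matrix heatmaps above: sklearn's `confusion_matrix` indexes rows by the actual class and columns by the predicted class. A minimal plain-Python sketch of that convention:

```python
# cm[a][p] counts examples whose actual class is a and predicted class is p.
def confusion(actual, predicted, num_classes=2):
    cm = [[0] * num_classes for _ in range(num_classes)]
    for a, p in zip(actual, predicted):
        cm[a][p] += 1
    return cm

cm = confusion([0, 0, 1, 1, 1], [0, 1, 1, 1, 0])
print(cm)  # [[1, 1], [1, 2]]
```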
Transfer Learning - Cats and Dogs Classifier/.ipynb_checkpoints/Transfer Learning - Cats and Dogs Classifier-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/forhow/github-slideshow/blob/main/h_prj01_news_category_classfication_02_preprocessing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="LH5nsL-G6JCX" # ## Project preprocessing # + id="taVueS2v58Q6" import pandas as pd import numpy as np from tensorflow.keras.utils import to_categorical from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from tensorflow.keras.preprocessing.text import * from tensorflow.keras.preprocessing.sequence import pad_sequences from konlpy.tag import Okt import pickle # + id="SD3smrGF7TZ4" pd.set_option('display.unicode.east_asian_width', True) # + colab={"base_uri": "https://localhost:8080/"} id="R4vveuLs7JV8" outputId="19d4c0a8-565b-476f-b94f-343fbc83f53c" df = pd.read_csv('/content/datasets/naver_news_titles_210616.csv', index_col=0) print(df) print(df.info()) # + colab={"base_uri": "https://localhost:8080/"} id="59oJNOzhjhpK" outputId="fcbe3340-99ff-4d30-89ad-7746318bfc93" # Check for duplicate titles col_dup = df['title'].duplicated() print(col_dup) sum_dup = df.title.duplicated().sum() print(sum_dup) # + colab={"base_uri": "https://localhost:8080/"} id="yW7AH_oHj_xN" outputId="99d8f77c-bbff-490e-8812-41b0b3d08ee0" # Remove duplicate titles df = df.drop_duplicates(subset=['title']) sum_dup = df.title.duplicated().sum() # recompute after dropping (was mistyped as sum_sup) print(sum_dup) # + id="puTGggQbkXH6" # Reset the index after removing rows # drop=True discards the old index; with False, the old index would be kept as a new column df.reset_index(drop=True, inplace=True) # + id="xJx_xxyq7hsJ" X = df['title'] Y = df['category'] # + colab={"base_uri": "https://localhost:8080/"} id="RbeQjvEi79Ip" outputId="494c6143-817e-40f8-ba65-d9f3c077ca19" # Convert Y (target) into integer labels encoder = LabelEncoder() labeled_Y = encoder.fit_transform(Y) label = encoder.classes_ print(label) # + id="4DK5gfcFbdpk" # Save the encoder mapping so it can be reused with open('/content/datasets/category_encoder.pickle', 'wb') as f: pickle.dump(encoder, f) # + colab={"base_uri": "https://localhost:8080/"} id="rVzmfPkf8O7Y" outputId="25908fe8-611a-4369-95cf-b360acac4705" print(labeled_Y) # + colab={"base_uri": "https://localhost:8080/"} id="0DNqFvkF8XMD" outputId="2834723e-e94d-48e4-c67c-295551c9cc24" # Convert the labels to one-hot encoding onehot_Y = to_categorical(labeled_Y) print(onehot_Y) # + colab={"base_uri": "https://localhost:8080/"} id="sUxWSXNT9GAn" outputId="e5fb13f3-3352-4a78-d1d6-d036a6f2c1db" # Preprocess the training data # - natural language processing; morphological analysis (kokoma, kotlan, Okt) # Install the morphological analysis module # # !pip install Okt - x # # !pip install konlp - for R # !pip install konlpy # + id="AOK_y8_9AMZb" from konlpy.tag import Okt # + colab={"base_uri": "https://localhost:8080/"} id="xwWeVO6ZA6o_" outputId="e3df8493-f28e-4f61-d792-178f7cc4189c" # Each morphological analyzer performs differently; use one suited to the task okt = Okt() print(type(X)) okt_X = okt.morphs(X[0]) print(X[0]) print(okt_X) # + colab={"base_uri": "https://localhost:8080/"} id="_HebFrRzBOej" outputId="8d9de8f1-0bba-4bdd-f38d-68cadb9be850" # Split each sentence into morphemes for i in range(len(X)): X[i] = okt.morphs(X[i]) print(X) # + id="7CcGtdkLCcaf" # Remove words that do not help classification, such as conjunctions, particles, and interjections # Stopword (불용어) removal # Upload the stopwords.csv file stopwords = pd.read_csv('/content/datasets/stopwords.csv') # + colab={"base_uri": "https://localhost:8080/"} id="1WpitW37T88W" outputId="00cd59cc-bb83-4813-82dd-1ccf2c366440" print(stopwords) # + colab={"base_uri": "https://localhost:8080/"} id="nHbhhKjrUfKO" outputId="af72d03a-633d-4766-d33c-4c515977bf7a" words = [] for word in okt_X: if word not in list(stopwords['stopword']): words.append(word) print(words) print(len(okt_X)) print(len(words)) # + colab={"base_uri": "https://localhost:8080/"} id="H_lacq4SVWtY"
outputId="7ceb9f2f-f48e-4185-bf3f-5a64376af380" # After removing stopwords, rejoin the remaining morphemes into sentences for i in range(len(X)) : result = [] for j in range(len(X[i])): if len(X[i][j]) > 1: if X[i][j] not in list(stopwords['stopword']): result.append(X[i][j]) X[i] = ' '.join(result) print(X) # + colab={"base_uri": "https://localhost:8080/"} id="4lC1aR8SWqru" outputId="87704e47-9306-4263-9dde-bf394161c6fa" # Tokenize the words: assign a number to each word token = Tokenizer() token.fit_on_texts(X) # decide which number to assign to each morpheme tokened_X = token.texts_to_sequences(X) # convert the sentences using the stored token mapping print(tokened_X[0]) # Save the tokenizer mapping so the same mapping can be applied to future data # + id="lcic9_zBZc7v" import pickle # pickle stores the object in its original form with open('/content/datasets/news_token.pickle', 'wb') as f: pickle.dump(token, f) # + colab={"base_uri": "https://localhost:8080/"} id="S4GIL25ibF17" outputId="0a2eb71d-ffc3-4452-af8c-15704b3a21c1" # Check the number of morphemes (vocabulary size) wordsize = len(token.word_index) + 1 print(token.word_index) print(wordsize) # index 0 will be reserved for padding # + colab={"base_uri": "https://localhost:8080/"} id="y8GVV5yfcRom" outputId="ab24c5c2-ac62-4a17-93b5-a9aaa390a6ce" # Make the input lengths uniform for the model # Pad to the length of the longest sentence # Prepend zeros so the padding has little influence on the start of each sentence # 1. Find the length of the longest sentence max_len = 0 # renamed from max to avoid shadowing the built-in for i in range(len(tokened_X)): if max_len < len(tokened_X[i]): max_len = len(tokened_X[i]) print(max_len) # 16 # + colab={"base_uri": "https://localhost:8080/"} id="iCBd2S0LeBGe" outputId="5353d351-9b70-464f-a815-322fa242238e" # Pad the front with zeros X_pad = pad_sequences(tokened_X, max_len) print(X_pad[:10]) # + colab={"base_uri": "https://localhost:8080/"} id="IGZoXk7QeWgB" outputId="f70a8600-b92b-4f80-a462-dea39a853482" # Train / test set split X_train, X_test, Y_train, Y_test = train_test_split(X_pad, onehot_Y, test_size=0.1) print(X_train.shape) print(X_test.shape) print(Y_train.shape) print(Y_test.shape) # + colab={"base_uri": "https://localhost:8080/"} id="pBIgv8cthlrK" outputId="a8fb5cca-a2fa-4bbc-c571-1c4b70f33a3d" # Save the train / test sets xy = X_train, X_test, Y_train, Y_test np.save('/content/datasets/news_data_max_{}_size_{}'.format(max_len, wordsize), xy) # + id="QBboD435iOAO"
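The pre-padding performed by `pad_sequences` above can be sketched in plain Python. This is a simplified stand-in covering only the default 'pre' padding and truncation used here, not the full Keras API:

```python
# Prepend zeros (index 0 is reserved for padding) so all sequences share
# one length; overly long sequences are truncated from the front.
def pad_front(sequences, maxlen):
    padded = []
    for seq in sequences:
        seq = seq[-maxlen:]
        padded.append([0] * (maxlen - len(seq)) + seq)
    return padded

padded = pad_front([[5, 2], [7, 1, 9, 4]], 3)
print(padded)  # [[0, 5, 2], [1, 9, 4]]
```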
h_prj01_news_category_classfication_02_preprocessing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Multitask GP Regression # # ## Introduction # # This notebook demonstrates how to perform standard (Kronecker) multitask regression. # # This differs from the [hadamard multitask example](./Hadamard_Multitask_GP_Regression.ipynb) in one key way: # - Here, we assume that we want to learn **all tasks per input**. (The kernel that we learn is expressed as a Kronecker product of an input kernel and a task kernel). # - In the other notebook, we assume that we want to learn one task per input. For each input, we specify the task of the input that we care about. (The kernel in that notebook is the Hadamard product of an input kernel and a task kernel). # # Multitask regression, first introduced in [this paper](https://papers.nips.cc/paper/3189-multi-task-gaussian-process-prediction.pdf), learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusoidal). # # Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by # # \begin{equation*} # k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j) # \end{equation*} # # where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs. # $k_\text{tasks}$ is a special kernel - the `IndexKernel` - which is a lookup table containing inter-task covariance. # + import math import torch import gpytorch from matplotlib import pyplot as plt # %matplotlib inline # %load_ext autoreload # %autoreload 2 # - # ### Set up training data # # In the next cell, we set up the training data for this example.
We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels. # # We'll have two functions - a sine function (y1) and a cosine function (y2). # # For MTGPs, our `train_targets` will actually have two dimensions: with the second dimension corresponding to the different tasks. # + train_x = torch.linspace(0, 1, 100) train_y = torch.stack([ torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2, torch.cos(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2, ], -1) # - # ## Set up the model # # The model should be somewhat similar to the `ExactGP` model in the [simple regression example](../01_Simple_GP_Regression/Simple_GP_Regression.ipynb). # # The differences: # # 1. We're going to wrap ConstantMean with a `MultitaskMean`. This makes sure we have a mean function for each task. # 2. Rather than just using a RBFKernel, we're using that in conjunction with a `MultitaskKernel`. This gives us the covariance function described in the introduction. # 3. We're using a `MultitaskMultivariateNormal` and `MultitaskGaussianLikelihood`. This allows us to deal with the predictions/outputs in a nice way. For example, when we call MultitaskMultivariateNormal.mean, we get a `n x num_tasks` matrix back. # # You may also notice that we don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.) 
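The shape of `train_y` is what distinguishes the MTGP setup: stacking the two noisy signals along the last dimension produces an `n x num_tasks` array. A NumPy analogue of the cell above (NumPy is used here only to show the shapes; the notebook itself works in torch):

```python
import math
import numpy as np

# 100 regularly spaced points on [0, 1], two tasks stacked column-wise.
x = np.linspace(0, 1, 100)
y = np.stack([np.sin(2 * math.pi * x), np.cos(2 * math.pi * x)], axis=-1)
print(y.shape)  # (100, 2): one column per task
```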
# + class MultitaskGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.MultitaskMean( gpytorch.means.ConstantMean(), num_tasks=2 ) self.covar_module = gpytorch.kernels.MultitaskKernel( gpytorch.kernels.RBFKernel(), num_tasks=2, rank=1 ) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x) likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2) model = MultitaskGPModel(train_x, train_y, likelihood) # - # ## Train the model hyperparameters # # + # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam([ {'params': model.parameters()}, # Includes GaussianLikelihood parameters ], lr=0.1) # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) n_iter = 50 for i in range(n_iter): optimizer.zero_grad() output = model(train_x) loss = -mll(output, train_y) loss.backward() print('Iter %d/%d - Loss: %.3f' % (i + 1, n_iter, loss.item())) optimizer.step() # - # ## Make predictions with the model # + # Set into eval mode model.eval() likelihood.eval() # Initialize plots f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3)) # Make predictions with torch.no_grad(), gpytorch.fast_pred_var(): test_x = torch.linspace(0, 1, 51) predictions = likelihood(model(test_x)) mean = predictions.mean lower, upper = predictions.confidence_region() # This contains predictions for both tasks, flattened out # The first half of the predictions is for the first task # The second half is for the second task # Plot training data as black stars y1_ax.plot(train_x.detach().numpy(), train_y[:, 0].detach().numpy(), 'k*') # Predictive mean as blue line y1_ax.plot(test_x.numpy(), mean[:, 0].numpy(), 'b') # Shade in confidence 
y1_ax.fill_between(test_x.numpy(), lower[:, 0].numpy(), upper[:, 0].numpy(), alpha=0.5) y1_ax.set_ylim([-3, 3]) y1_ax.legend(['Observed Data', 'Mean', 'Confidence']) y1_ax.set_title('Observed Values (Likelihood)') # Plot training data as black stars y2_ax.plot(train_x.detach().numpy(), train_y[:, 1].detach().numpy(), 'k*') # Predictive mean as blue line y2_ax.plot(test_x.numpy(), mean[:, 1].numpy(), 'b') # Shade in confidence y2_ax.fill_between(test_x.numpy(), lower[:, 1].numpy(), upper[:, 1].numpy(), alpha=0.5) y2_ax.set_ylim([-3, 3]) y2_ax.legend(['Observed Data', 'Mean', 'Confidence']) y2_ax.set_title('Observed Values (Likelihood)') None # -
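The Kronecker structure described in the introduction can be checked numerically: the full covariance over `n` inputs and `t` tasks is the Kronecker product of an `n x n` input kernel matrix and a `t x t` task kernel matrix. The toy matrices below are made-up values for illustration, not learned hyperparameters:

```python
import numpy as np

K_input = np.array([[1.0, 0.5],
                    [0.5, 1.0]])      # k_inputs evaluated on 2 points
K_task = np.array([[1.0, 0.7],
                   [0.7, 1.0]])       # k_tasks evaluated on 2 tasks
K_full = np.kron(K_input, K_task)     # shape (n*t, n*t) = (4, 4)

# The entry for (x_0, task 1) vs (x_1, task 0) is the product of the two kernels:
assert K_full[1, 2] == K_input[0, 1] * K_task[1, 0]
print(K_full.shape)  # (4, 4)
```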
examples/03_Multitask_GP_Regression/Multitask_GP_Regression.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:evol-ling]
#     language: python
#     name: conda-env-evol-ling-py
# ---

# # Google Ngrams Lexicon Preprocessing

# The purpose of this is to preprocess and normalize an entire Google Ngrams \*.gz file dataset once so that subsequent analyses can be done quickly. This needs to be optimized with [`pandas`](https://pandas.pydata.org/) and [`numpy`](https://numpy.org/) to run faster and use less memory.

# [Previous preprocessing notebook](https://github.com/wzkariampuzha/EvolutionaryLinguistics/blob/main/LexiconSize/Create%20English%20Language%20Complete%20Set%20(preprocessing%20with%20lemmatization).ipynb)

# +
import os
import gzip
import numpy as np
import pickle

#for progress bars
from tqdm import tqdm

from nltk import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

#For checking if the word has any non-English-alphabetical letters
from unidecode import unidecode

import re
#For the Google POS tagging mapping
underscore = re.compile('_{1}')
# -

# ### [NLTK POS Lemmatizer](https://www.nltk.org/_modules/nltk/stem/wordnet.html)
#
# The Part Of Speech tag. Valid options are `"n"` for nouns,
# `"v"` for verbs, `"a"` for adjectives, `"r"` for adverbs and `"s"`
# for [satellite adjectives](https://stackoverflow.com/questions/18817396/what-part-of-speech-does-s-stand-for-in-wordnet-synsets).
# # Syntax:
# `lemmatizer.lemmatize(word)`

# ### [Google Tags](https://books.google.com/ngrams/info)
# These tags can either stand alone (\_PRON\_) or can be appended to a word (she_PRON)
# - _NOUN_
# - _VERB_
# - _ADJ_ adjective
# - _ADV_ adverb
# - _PRON_ pronoun
# - _DET_ determiner or article
# - _ADP_ an adposition: either a preposition or a postposition
# - _NUM_ numeral
# - _CONJ_ conjunction
# - _PRT_ particle

# Define sets which are going to be used in the unigram tests

# +
import string
PUNCTUATION = set(char for char in string.punctuation).union({'“','”'})
#ALPHABET = set(string.ascii_letters)
DIGITS = set(string.digits)
VOWELS = set("aeiouyAEIOUY")
#Excluding '_' (underscore) from DASHES precludes the tagged 1grams "_NOUN", add it to also include the tagged 1grams
DASHES = {'—','–','—','―','‒','-','_'}
PUNCTUATION.difference_update(DASHES)
STOPS = PUNCTUATION.union(DIGITS)
#GOOGLE_TAGS = {'_NOUN','_VERB','_ADJ','_ADV','_PRON','_DET','_ADP','_NUM','_CONJ','_PRT'}
#maps Google pos_tag to Wordnet pos_tag ('r' is the WordNet tag for adverbs)
POS_mapper = {'NOUN':'n', 'VERB':'v', 'ADJ':'a', 'ADV':'r'}
# -

#Demo of unidecode to show how we will use it to filter out accents and non-English letters
unidecode('días', errors='replace')

unigram = 'kožušček'
test = unidecode(unigram, errors='replace')
if test == unigram:
    print('yes')
else:
    print("no")

# [How to open Gzip files](https://stackoverflow.com/questions/31028815/how-to-unzip-gz-file-using-python)

def open_gzip(directory,file_path):
    with gzip.open(directory+file_path,'r') as f_in:
        for line in f_in:
            yield line.decode('utf8').strip()

def save_pickle(ngram_dict,directory,file_path):
    output = file_path[:-3]+'-preprocessed.pickle'
    if len(ngram_dict)>0:
        with open(directory+output, 'wb') as f_out:
            pickle.dump(ngram_dict, f_out)
        print('SAVED: ',output,len(ngram_dict))
    else:
        print('unigram dict empty',output)

def csv2tuple(string):
    year,match_count,volume_count = tuple(string.split(','))
    #np.int8 cannot hold a 4-digit year (its max is 127), so use np.int16
    return np.int16(year),np.int32(match_count),np.int16(volume_count)

def unigram_tests(unigram):
    #Exclude words with more than one underscore, can make this != to only include tagged words
    if len(underscore.findall(unigram))!=1:
        return False
    #Checks each character in the unigram against the characters in the STOP set. (character level filtering) - no punctuation or digits allowed
    if set(unigram).intersection(STOPS):
        return False
    #Excluded all of the form _PRON_ (or anything that starts or ends with an underscore)
    if unigram[0] == '_' or unigram[-1] == '_':
        return False
    #must have a vowel (presupposes that it must also have a letter of the alphabet inside)
    if not set(unigram).intersection(VOWELS):
        return False
    #Words cannot start or end with dashes
    if unigram[0] in DASHES or unigram[-1] in DASHES:
        return False
    #must have 0 non-english letters
    test = unidecode(unigram, errors='replace')
    if test != unigram:
        return False
    #Can implement more tests here if you need to do more filtering
    else:
        return True

def preprocess_ngrams(directory,file_path):
    ngram_dict = dict() #This implementation uses {1gram:{year:match_count ...} ...}
    for row in tqdm(open_gzip(directory,file_path)):
        columns = row.split('\t')
        #unigram is the first entry, the rest of the entries are of the form year,match_count,volume_count\t n times, where n is variable each line
        unigram = columns[0]
        #If it passes the word tests continue parsing and lemmatizing the unigram
        if unigram_tests(unigram):
            word_tag = underscore.split(unigram) # list of [word,tag]
            pos = "n" #Default for wordnet lemmatizer
            if word_tag[1] in POS_mapper.keys():
                pos = POS_mapper[word_tag[1]]
            #word_tag[0] removes the tag before processing the unigram string
            #Lemmatize based on POS
            unigram = lemmatizer.lemmatize(word_tag[0].lower().strip(),pos)
            #Add the tag back onto the unigram
            unigram+='_'+word_tag[1]
            #Parse the new entry and create a dictionary of records in form {year:match_count}
            records = dict()
            for entry in columns[1:]:
                year,match_count,volume_count = csv2tuple(str(entry))
                #This is the crucial filtering by volume count because only words in >1 volume are reasonably assumed to be used by >1 person
                #Words only used by one person - which translates to the computational parameter 1 volume - are not considered part of the lexicon
                if volume_count>1:
                    records[year] = match_count
            #Modify the dictionary if the entry is already there, else just add it as a new unigram:records pair
            if unigram in ngram_dict.keys():
                #access the ngram dictionary and see if each year is present; if so add the match count, else add a new record entry to the dictionary
                for yr, match_ct in records.items():
                    #each record should be of the form {year:match_count}
                    if yr in ngram_dict[unigram].keys():
                        ngram_dict[unigram][yr] += match_ct
                    else:
                        #This just adds the record to the end, will need to sort later
                        ngram_dict[unigram][yr] = match_ct
            else:
                ngram_dict[unigram] = records
    #Save as Pickle
    save_pickle(ngram_dict,directory,file_path)

directory = '../Ngrams/brit_unigram_data/'
file_path = '1-00002-of-00004.gz'
preprocess_ngrams(directory,file_path)

directory = '../Ngrams/brit_unigram_data/'
file_path = '1-00003-of-00004.gz'
preprocess_ngrams(directory,file_path)

# %%time
directories = ['../Ngrams/amer_unigram_data/','../Ngrams/brit_unigram_data/']
for directory in directories:
    files = os.listdir(directory)
    for file_path in files:
        if '.gz' in file_path and not '.json' in file_path:
            preprocess_ngrams(directory,file_path)
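The per-year merge inside `preprocess_ngrams` boils down to accumulating `{year: match_count}` dicts per unigram. A self-contained sketch of just that step (helper name is made up):

```python
def merge_records(ngram_dict, unigram, records):
    # Fold a {year: match_count} record set into the accumulator,
    # summing counts when the same (unigram, year) pair repeats.
    years = ngram_dict.setdefault(unigram, {})
    for yr, ct in records.items():
        years[yr] = years.get(yr, 0) + ct
    return ngram_dict

d = {}
merge_records(d, 'run_VERB', {1900: 3, 1901: 5})
merge_records(d, 'run_VERB', {1901: 2})
print(d)  # {'run_VERB': {1900: 3, 1901: 7}}
```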
Amer-v-Brit-Lexicon-Analysis/Optimized Preprocessing (lemmatization).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] editable=false run_control={"frozen": false, "read_only": false} # <span style="color:blue"> **Chapter 5 of Jupyter Notes** </span> # + [markdown] run_control={"frozen": false, "read_only": false} # # Regular Expressions, conversion to NFA # # In this module, we will cover regular expressions by showing how they can be converted to NFA. The scanner and parser for RE to convert them to NFA are the main part of this module. # # # Top-level functions in this module # # ``` # This module contains the following functions that may be used in other modules to exercise concepts, compose functions, etc. # # s : string # # Here are the functions # # def re2nfa(s, stno = 0): # ``` # + run_control={"frozen": false, "read_only": false} # Fix import paths later to exact low-level module from jove.DotBashers import dotObj_dfa from jove.DotBashers import dotObj_nfa from jove.Module4_NFA import nfa2dfa from jove.Module4_NFA import rev_dfa from jove.Module4_NFA import mk_nfa from jove.Module4_NFA import min_dfa_brz from lex import lex from yacc import yacc # + [markdown] run_control={"frozen": false, "read_only": false} # # Parsing regular expressions : ReParse # # + run_control={"frozen": false, "read_only": false} # ----------------------------------------------------------------------------- # reparseNEW.py # # Parses regular expressions (without the empty RE case) # Produces NFA as output. # # The NEW signifies that I'm generating NFAs starting from # sets of states. 
# # Adapted from calc.py that is available from # www.dabeaz.com/ply/example.html # ----------------------------------------------------------------------------- #----------------------------------------------------------------- #-- Begin lexer construction #----------------------------------------------------------------- #-- The tokens that constitute an RE are these tokens = ( 'EPS','STR','LPAREN','RPAREN','PLUS','STAR' ) #-- The token definitions in terms of raw strings are being expressed now t_PLUS = r'\+' t_STAR = r'\*' t_LPAREN = r'\(' t_RPAREN = r'\)' t_EPS = r'\'\'|\"\"' # Not allowing @ for empty string anymore! t_STR = r'[a-zA-Z0-9]' # Making the above r'[a-zA-Z0-9]+' to accept strings as # "tokens", i.e. indivisible units that can be subject to # RE operations #-- Ignored characters by the lexer t_ignore = " \t" #-- Upon new lines, increase the lexer's line count variable def t_newline(t): r'\n+' t.lexer.lineno += t.value.count("\n") #-- Lexer's error announcer for illegal characters def t_error(t): print("Illegal character '%s'" % t.value[0]) t.lexer.skip(1) #-- NOW BUILD THE LEXER -- lexer = lex() #-------------------------------------------------------------------- #--- Here is the parser set-up in terms of binary operator attributes #-------------------------------------------------------------------- #--- This is a global - for name generation in parser NxtStateNum = 0 def NxtStateStr(): global NxtStateNum NxtStateNum += 1 return "St"+str(NxtStateNum) #-- Token precedences and associativity are declared in one place #-- By declaring PLUS before STAR, we are implying that it's of lower #-- precedence. Also declared is that they are both left-associative. 
precedence = ( ('left','PLUS'), ('left','STAR'), ) #--------------------------------------------------------------------- #--- Here are the parsing rules for REs; each returns an NFA as "code" #--------------------------------------------------------------------- #-- * The E -> E + C production def p_expression_plus(t): '''expression : expression PLUS catexp''' t[0] = mk_plus_nfa(t[1], t[3]) # Union of the two NFAs is returned def mk_plus_nfa(N1, N2): """Given two NFAs, return their union. """ delta_accum = dict({}) delta_accum.update(N1["Delta"]) delta_accum.update(N2["Delta"]) # Simply accumulate the transitions # The alphabet is inferred bottom-up; thus we must union the Sigmas # of the NFAs! return mk_nfa(Q = N1["Q"] | N2["Q"], Sigma = N1["Sigma"] | N2["Sigma"], Delta = delta_accum, Q0 = N1["Q0"] | N2["Q0"], F = N1["F"] | N2["F"]) #-- * The E -> C production def p_expression_plus_id(t): '''expression : catexp''' # Simply inherit the attribute from t[1] and pass on t[0] = t[1] #-- * The C -> C O production def p_expression_cat(t): '''catexp : catexp ordyexp''' t[0] = mk_cat_nfa(t[1], t[2]) def mk_cat_nfa(N1, N2): delta_accum = dict({}) delta_accum.update(N1["Delta"]) delta_accum.update(N2["Delta"]) # Now, introduce moves from every one of N1's final states # to the set of N2's initial states. for f in N1["F"]: # However, N1's final states may already have epsilon moves to # other N1-states! # Expand the target of such jumps to include N2's Q0 also! if (f, "") in N1["Delta"]: delta_accum.update({ (f,""):(N2["Q0"] | N1["Delta"][(f, "")]) }) else: delta_accum.update({ (f, ""): N2["Q0"] }) # In syntax-directed translation, it is impossible # that N2 and N1 have common states. Check anyhow # in case there are bugs elsewhere that cause it. 
assert((N2["F"] & N1["F"]) == set({})) return mk_nfa(Q = N1["Q"] | N2["Q"], Sigma = N1["Sigma"] | N2["Sigma"], Delta = delta_accum, Q0 = N1["Q0"], F = N2["F"]) #-- * The C -> O production def p_expression_cat_id(t): '''catexp : ordyexp''' # Simply inherit the attribute from t[1] and pass on t[0] = t[1] #-- * The O -> O STAR production def p_expression_ordy_star(t): 'ordyexp : ordyexp STAR' t[0] = mk_star_nfa(t[1]) def mk_star_nfa(N): # Follow construction from Kozen's book: # 1) Introduce new (single) start+final state IF # 2) Let Q0 = set({ IF }) # 2) Move on epsilon from IF to the set N[Q0] # 3) Make N[F] non-final # 4) Spin back from every state in N[F] to Q0 # delta_accum = dict({}) IF = NxtStateStr() Q0 = set({ IF }) # new set of start + final states # Jump from IF to N's start state set delta_accum.update({ (IF,""): N["Q0"] }) delta_accum.update(N["Delta"]) # for f in N["F"]: # N's final states may already have epsilon moves to # other N-states! # Expand the target of such jumps to include Q0 also. if (f, "") in N["Delta"]: delta_accum.update({ (f, ""): (Q0 | N["Delta"][(f, "")]) }) else: delta_accum.update({ (f, ""): Q0 }) # return mk_nfa(Q = N["Q"] | Q0, Sigma = N["Sigma"], Delta = delta_accum, Q0 = Q0, F = Q0) #-- * The O -> ( E ) production def p_expression_ordy_paren(t): 'ordyexp : LPAREN expression RPAREN' # Simply inherit the attribute from t[2] and pass on t[0] = t[2] #-- * The O -> EPS production def p_expression_ordy_eps(t): 'ordyexp : EPS' t[0] = mk_eps_nfa() def mk_eps_nfa(): """An nfa with exactly one start+final state """ Q0 = set({ NxtStateStr() }) F = Q0 return mk_nfa(Q = Q0, Sigma = set({}), Delta = dict({}), Q0 = Q0, F = Q0) #-- * The O -> STR production, i.e. 
a single re letter def p_expression_ordy_str(t): 'ordyexp : STR' t[0] = mk_symbol_nfa(t[1]) def mk_symbol_nfa(a): """The NFA for a single re letter """ # Make a fresh initial state q0 = NxtStateStr() Q0 = set({ q0 }) # Make a fresh final state f = NxtStateStr() F = set({ f }) return mk_nfa(Q = Q0 | F, Sigma = set({a}), Delta = { (q0,a): F }, Q0 = Q0, F = F) def p_error(t): print("Syntax error at '%s'" % t.value) #-- NOW BUILD THE PARSER -- parser = yacc() # End of reparseNEW.py # ----------------------------------------------------------------------------- # + [markdown] run_control={"frozen": false, "read_only": false} # ## RE to NFA code # + run_control={"frozen": false, "read_only": false} def re2nfa(s, stno = 0): global NxtStateNum NxtStateNum = stno myparsednfa = parser.parse(s) #-- for debugging : return dotObj_nfa(myparsednfa, nfaname) return myparsednfa # + run_control={"frozen": false, "read_only": false} re2nfa("(bb*+cc)(c+d*)") # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("aa")) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("''")) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("a*")) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("aa*+ab*"), "n") # + run_control={"frozen": false, "read_only": false} re2nfa("(aa*+ab*)*") # + run_control={"frozen": false, "read_only": false} re2nfa("a*b") # + run_control={"frozen": false, "read_only": false} re2nfa("(a*b)*") # + run_control={"frozen": false, "read_only": false} re2nfa("a*b") # + run_control={"frozen": false, "read_only": false} astarbnfa = re2nfa("a*b") # + run_control={"frozen": false, "read_only": false} dotObj_nfa(astarbnfa) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(astarbnfa, visible_eps=True) # + run_control={"frozen": false, "read_only": false} astarbdfa = nfa2dfa(astarbnfa) # + run_control={"frozen": false, "read_only": false} dotObj_dfa(astarbdfa) # + run_control={"frozen": 
false, "read_only": false} rev_dfa(astarbdfa) # + run_control={"frozen": false, "read_only": false} # + run_control={"frozen": false, "read_only": false} dotObj_dfa(min_dfa_brz(nfa2dfa(re2nfa("(a+b)*a(a+b)(a+b)")))) # + run_control={"frozen": false, "read_only": false} dotObj_dfa(nfa2dfa(re2nfa("(b*+ba*)*"))) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("''*")) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("''*"), visible_eps=True) # + run_control={"frozen": false, "read_only": false} dotObj_nfa(re2nfa("(apple+orange)*"), visible_eps=True) # + run_control={"frozen": true, "read_only": true} # If you want to have some fun, change the STR RE to have "+" at the end, as recommended in the comments below t_STR (early part of these notes). Then you can accept "apple" as a single token. # - # Let this NFA be specified via # our markdown as follows: nfaExer = md2mc(''' NFA I1 : a -> X I2 : b -> X I3 : c -> X X : p | q -> X X : m -> F1 X : n -> F2 ''') # First form the Dot Object... DO_nfaExer \ = dotObj_nfa(nfaExer) # Check things by displaying DO_nfaExer # Form a GNFA out of the NFA gnfaExer = mk_gnfa(nfaExer) # Form a Dot Object DO_gnfaExer \ = dotObj_gnfa(gnfaExer) # Check things by displaying DO_gnfaExer # Now invoke del_gnfa_states # First argument is the GNFA # of our exercise, gnfaExer # # The second arg (optional) # is the deletion order of # the states. If omitted, # the tool picks the order # (makes a HUGE difference). (G, DO, RE) = \ del_gnfa_states( gnfaExer, DelList=["X", "I1", "I2","I3", "F1","F2"]) # Display DO[0] through DO[6] # G is the final GNFA returned # RE is the final RE compiled
notebooks/module/Module5_RE.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# This notebook was prepared by [wdonahoe](https://github.com/wdonahoe). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).

# # Solution Notebook

# ## Problem: Implement a function that groups identical items based on their order in the list.
#
# * [Constraints](#Constraints)
# * [Test Cases](#Test-Cases)
# * [Algorithm: Modified Selection Sort](#Algorithm:-Modified-Selection-Sort)
# * [Code: Modified Selection Sort](#Code:-Modified-Selection-Sort)
# * [Algorithm: Ordered Dict](#Algorithm:-Ordered-Dict)
# * [Code: Ordered Dict](#Code:-Ordered-Dict)
# * [Unit Test](#Unit-Test)

# ## Constraints
#
# * Can we use extra data structures?
#     * Yes

# ## Test Cases
#
# * group_ordered([1,2,1,3,2]) -> [1,1,2,2,3]
# * group_ordered(['a','b','a']) -> ['a','a','b']
# * group_ordered([1,1,2,3,4,5,2,1]) -> [1,1,1,2,2,3,4,5]
# * group_ordered([]) -> []
# * group_ordered([1]) -> [1]
# * group_ordered(None) -> None

# ## Algorithm: Modified Selection Sort
#
# * Save the relative position of the first occurrence of each item in a list.
# * Iterate through the list of unique items.
# * Keep an outer index; scan the rest of the list, swapping matching items with the outer index and incrementing the outer index each time.
# # Complexity: # * Time: O(n^2) # * Space: O(n) # # Code: Modified Selection Sort # + def make_order_list(list_in): order_list = [] for item in list_in: if item not in order_list: order_list.append(item) return order_list def group_ordered(list_in): if list_in is None: return None order_list = make_order_list(list_in) current = 0 for item in order_list: search = current + 1 while True: try: if list_in[search] != item: search += 1 else: current += 1 list_in[current], list_in[search] = list_in[search], list_in[current] search += 1 except IndexError: break return list_in # - # ## Algorithm: Ordered Dict. # # * Use an ordered dict to track insertion order of each key # * Flatten list of values. # # Complexity: # # * Time: O(n) # * Space: O(n) # ## Code: Ordered Dict # + from collections import OrderedDict def group_ordered_alt(list_in): if list_in is None: return None result = OrderedDict() for value in list_in: result.setdefault(value, []).append(value) return [v for group in result.values() for v in group] # - # ## Unit Test # # #### The following unit test is expected to fail until you solve the challenge. 
# + # %%writefile test_group_ordered.py import unittest class TestGroupOrdered(unittest.TestCase): def test_group_ordered(self, func): self.assertEqual(func(None), None) print('Success: ' + func.__name__ + " None case.") self.assertEqual(func([]), []) print('Success: ' + func.__name__ + " Empty case.") self.assertEqual(func([1]), [1]) print('Success: ' + func.__name__ + " Single element case.") self.assertEqual(func([1, 2, 1, 3, 2]), [1, 1, 2, 2, 3]) self.assertEqual(func(['a', 'b', 'a']), ['a', 'a', 'b']) self.assertEqual(func([1, 1, 2, 3, 4, 5, 2, 1]), [1, 1, 1, 2, 2, 3, 4, 5]) self.assertEqual(func([1, 2, 3, 4, 3, 4]), [1, 2, 3, 3, 4, 4]) print('Success: ' + func.__name__) def main(): test = TestGroupOrdered() test.test_group_ordered(group_ordered) try: test.test_group_ordered(group_ordered_alt) except NameError: # Alternate solutions are only defined # in the solutions file pass if __name__ == '__main__': main() # - # %run -i test_group_ordered.py
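The ordered-dict algorithm relies only on keys preserving insertion order, which plain dicts guarantee since Python 3.7, so the same idea works without `OrderedDict`. A compact sketch (hypothetical variant, not part of the challenge's solution file):

```python
def group_ordered_plain(items):
    # Bucket equal items under first-occurrence order, then flatten:
    # O(n) time, O(n) extra space.
    groups = {}
    for v in items:
        groups.setdefault(v, []).append(v)
    return [v for bucket in groups.values() for v in bucket]

print(group_ordered_plain([1, 2, 1, 3, 2]))  # [1, 1, 2, 2, 3]
```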
staging/sorting_searching/group_ordered/group_ordered_solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AlexAdvent/python-highcharts/blob/master/color_schema_try.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="kyrNjefCd4vw" colab={"base_uri": "https://localhost:8080/"} outputId="d57ed836-ceb5-42f7-8869-fa0dc1d5b5d5" # !pip install python-box python-highcharts mpld3 pandas-highcharts fire --quiet # !pip install utilmy matplotlib ipython --quiet # !pip install pretty-html-table pyvis --quiet # + id="rXKpRAWsd71s" """ Converter python Graph ---> HTML !pip install python-box python-highcharts mpld3 pandas-highcharts fire !pip install utilmy matplotlib ipython !pip install pretty-html-table https://try2explore.com/questions/10109123 https://mpld3.github.io/examples/index.html https://notebook.community/johnnycakes79/pyops/dashboard/pandas-highcharts-examples https://datatables.net/ """ import os, sys, random, numpy as np, pandas as pd, fire from datetime import datetime from typing import List from tqdm import tqdm from box import Box # Converting python --> HTML import matplotlib.pyplot as plt import mpld3 # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="aKg5P1ybnHeD" outputId="f372fbe3-1795-4bf6-e35e-0bcaf96ba02f" colors_getlist_id() # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="u04KEosLvau3" outputId="f7773aa3-c7bc-4dcf-f8b4-9c1eb7029817" testhistogram(color_schema='coolwarm') # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="GT-Lgq40vLb5" outputId="fc81e4cf-275a-4376-cfee-19e90ffa6088" testhistogram(color_schema='GnBu') # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="C0ko8xX3u5Hx" 
outputId="b83b5471-4582-46ac-94d1-ef9b78645c15" testhistogram(color_schema='seismic') # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="dOOAI0BUnwnl" outputId="ec37af09-b76c-4a8b-ed0e-4badbd21aea7" testhistogram(color_schema='hsv') # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="lSTXQ16Dhfx9" outputId="e71c41c2-8814-4681-d22c-92998dc54095" testhistogram() # + id="Qnx02hpMhCNT" def testhistogram(color_schema="RdYlBu"): # pip install box-python can use .key or ["mykey"] for dict data = test_getdata(verbose=False) df2 = data['sales.csv'] from box import Box cfg = Box({}) cfg.tseries = {"title": 'ok'} cfg.scatter = {"title" : "Titanic", 'figsize' : (12, 7)} cfg.histo = {"title": 'ok'} doc = htmlDoc(dir_out="", title="hello", format='myxxxx', cfg=cfg) doc.plot_histogram(df2,col='Unit Price',color_schema=color_schema,cfg = cfg.histo,title="Price",ylabel="Unit price", mode='matplot', save_img="") doc.save(dir_out="myfile.html") doc.open_browser() # Open myfile.html # + id="uGe3LaNRgtF8" def pd_plot_histogram_matplot(df:pd.DataFrame, col: str='',color_schema:str='RdYlBu', title: str='', nbin=20.0, q5=0.005, q95=0.995, nsample=-1, save_img: str="",xlabel: str=None,ylabel: str=None): """ fig = plt.figure() ax = fig.add_subplot(111) ax.hist(df[config['x']].values, bins=config['bins'], color='red', alpha=0.5) ax.set_xlabel(config['x']) ax.set_ylabel(config['y']) ax.set_title(config['title']) ax.set_xlim(config['xlim']) ax.set_ylim(config['ylim']) return fig """ cm = plt.cm.get_cmap(color_schema) dfi = df[col] q0 = dfi.quantile(q5) q1 = dfi.quantile(q95) fig = plt.figure() if nsample < 0: dfi.hist(bins=2) # dfi.hist(bins=np.arange(q0, q1, (q1 - q0) / nbin)) else: n, bins, patches = plt.hist(dfi, bins=np.arange(q0, q1, (q1 - q0) / nbin)) # dfi.sample(n=nsample, replace=True).hist( bins=np.arange(q0, q1, (q1 - q0) / nbin)) for i, p in enumerate(patches): plt.setp(p, 'facecolor', cm(i/nbin)) plt.title(title) plt.xlabel(xlabel) 
plt.ylabel(ylabel) if len(save_img)>0 : os.makedirs(os.path.dirname(save_img), exist_ok=True) plt.savefig(save_img) print(save_img) # plt.close(fig) return fig # + id="Kmf_XVi6k5b6" def colors_getlist_id(): cmaps = {} cmaps['Perceptually Uniform Sequential'] = [ 'viridis', 'plasma', 'inferno', 'magma', 'cividis'] cmaps['Sequential'] = [ 'Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds', 'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu', 'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn'] cmaps['Sequential (2)'] = [ 'binary', 'gist_yarg', 'gist_gray', 'gray', 'bone', 'pink', 'spring', 'summer', 'autumn', 'winter', 'cool', 'Wistia', 'hot', 'afmhot', 'gist_heat', 'copper'] cmaps['Diverging'] = [ 'PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu', 'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic'] cmaps['Cyclic'] = ['twilight', 'twilight_shifted', 'hsv'] cmaps['Qualitative'] = ['Pastel1', 'Pastel2', 'Paired', 'Accent', 'Dark2', 'Set1', 'Set2', 'Set3', 'tab10', 'tab20', 'tab20b', 'tab20c'] gradient = np.linspace(0, 1, 256) gradient = np.vstack((gradient, gradient)) def plot_color_gradients(cmap_category, cmap_list): # Create figure and adjust figure height to number of colormaps nrows = len(cmap_list) figh = 0.35 + 0.15 + (nrows + (nrows - 1) * 0.1) * 0.22 fig, axs = plt.subplots(nrows=nrows + 1, figsize=(6.4, figh)) fig.subplots_adjust(top=1 - 0.35 / figh, bottom=0.15 / figh, left=0.2, right=0.99) axs[0].set_title(cmap_category + ' colormaps', fontsize=14) for ax, name in zip(axs, cmap_list): ax.imshow(gradient, aspect='auto', cmap=plt.get_cmap(name)) ax.text(-0.01, 0.5, name, va='center', ha='right', fontsize=10, transform=ax.transAxes) # Turn off *all* ticks & spines, not just the ones with colormaps. 
for ax in axs: ax.set_axis_off() for cmap_category, cmap_list in cmaps.items(): plot_color_gradients(cmap_category, cmap_list) plt.show() # + id="wCKv7Etfd_RF" ################################################################################################# def log(*s): print(*s, flush=True) # + id="xwJEy9glesAz" ################################################################################### #### Example usage ################################################################ def test_getdata(verbose=True): """ data = test_get_data() df = data['housing.csv'] df.head(3) https://github.com/szrlee/Stock-Time-Series-Analysis/tree/master/data """ import pandas as pd flist = [ 'https://raw.githubusercontent.com/samigamer1999/datasets/main/titanic.csv', 'https://github.com/subhadipml/California-Housing-Price-Prediction/raw/master/housing.csv', 'https://raw.githubusercontent.com/AlexAdvent/high_charts/main/data/stock_data.csv', 'https://raw.githubusercontent.com/samigamer1999/datasets/main/cars.csv', 'https://raw.githubusercontent.com/samigamer1999/datasets/main/sales.csv', 'https://raw.githubusercontent.com/AlexAdvent/high_charts/main/data/weatherdata.csv' ] data = {} for url in flist : fname = url.split("/")[-1] # print( "\n", "\n", url, ) df = pd.read_csv(url) data[fname] = df if verbose: print(df) # df.to_csv(fname , index=False) # print(data.keys() ) return data # + id="FHJc0yxwe1g_" def test1(): #### Test Datatable doc = htmlDoc(dir_out="", title="hello", format='myxxxx', cfg={}) # check add css css = """.intro { background-color: yellow; } """ doc.add_css(css) # test create table df = test_getdata()['titanic.csv'] doc.h1(" Table test ") doc.table(df, use_datatable=True, table_id="test", custom_css_class='intro') doc.print() doc.save(dir_out="testdata/test_viz_table.html") doc.open_browser() # Open myfile.html def test2(): """ # pip install --upgrade utilmy from util.viz import vizhtml as vi vi.test2() """ data = test_getdata() doc = htmlDoc(title='Weather report', 
                  dir_out="", cfg={})
    doc.h1(' Weather report')
    doc.hr(); doc.br()

    # create time series chart, mode highcharts
    doc.h2('Plot of weather data')
    doc.plot_tseries(data['weatherdata.csv'].iloc[:1000, :],
                     coldate='Date', date_format='%m/%d/%Y',
                     cols_axe1=['Temperature'], cols_axe2=["Rainfall"],
                     # x_label='Date', axe1_label="Temperature", axe2_label="Rainfall",
                     title="Weather", cfg={}, mode='highcharts')
    doc.hr(); doc.br()
    doc.h3('Weather data')
    doc.table(data['weatherdata.csv'].iloc[:10, :], use_datatable=True)

    # create histogram chart, mode highcharts
    doc.plot_histogram(data['housing.csv'].iloc[:1000, :], col="median_income",
                       xaxis_label="x-axis", yaxis_label="y-axis",
                       cfg={}, mode='highcharts', save_img=False)

    # Testing with example data sets (Titanic)
    cfg = {"title": "Titanic", 'figsize': (20, 7)}

    # create scatter chart, mode highcharts
    doc.plot_scatter(data['titanic.csv'].iloc[:50, :], colx='Age', coly='Fare', collabel='Name',
                     colclass1='Sex', colclass2='Age', colclass3='Sex',
                     figsize=(20, 7), cfg=cfg, mode='highcharts')

    doc.save('viz_test3_all_graphs.html')
    doc.open_browser()
    html1 = doc.get_html()
    # print(html1)
    # html_show(html1)


def test3(verbose=True):
    # pip install python-box : can use .key or ["mykey"] for dict access
    data = test_getdata()
    dft  = data['titanic.csv']
    df   = data['housing.csv']
    df2  = data['sales.csv']

    from box import Box
    cfg = Box({})
    cfg.tseries = {"title": 'ok'}
    cfg.scatter = {"title": "Titanic", 'figsize': (12, 7)}
    cfg.histo   = {"title": 'ok'}
    cfg.use_datatable = True

    df = pd.DataFrame([[1, 2]])
    df2_list = [df, df, df]
    print(df2_list)

    doc = htmlDoc(dir_out="", title="hello", format='myxxxx', cfg=cfg)
    doc.h1('My title')  # h1
    doc.sep()
    doc.br()  # <br>
    doc.tag('<h2> My graph title </h2>')
    doc.plot_scatter(dft, colx='Age', coly='Fare', collabel='Name',
                     colclass1='Sex', colclass2='Age', colclass3='Sex',
                     cfg=cfg.scatter, mode='matplot', save_img='')
    doc.hr()  # doc.sep() line separator

    # for df2_i in df2_list:
    #     print(df2_i)
    #     col2 = df2_i.columns
    #     doc.h3(f" plot title: {df2_i['category'].values[0]}")
    #     doc.plot_tseries(df2_i, coldate=col2[0], cols_axe1=col2[1], cfg=cfg.tseries, mode='highcharts')

    doc.tag('<h2> My histo title </h2>')
    doc.plot_histogram(df2, col='Unit Cost', mode='matplot', save_img="")
    doc.plot_histogram(df2, col='Unit Price', cfg=cfg.histo, title="Price", mode='matplot', save_img="")
    doc.save(dir_out="myfile.html")
    doc.open_browser()  # Open myfile.html


# + id="HBI3gbnEfA0E"
def test_scatter_and_histogram_matplot():
    data = test_getdata()
    dft  = data['titanic.csv']
    df   = data['housing.csv']
    df2  = data['sales.csv']

    cfg = Box({})
    cfg.tseries = {"title": 'ok'}
    cfg.scatter = {"title": "Titanic", 'figsize': (12, 7)}
    cfg.histo   = {"title": 'ok'}
    cfg.use_datatable = True

    df = pd.DataFrame([[1, 2]])
    df2_list = [df, df, df]

    doc = htmlDoc(dir_out="", title="hello", format='myxxxx', cfg=cfg)
    doc.h1('My title')  # h1
    doc.sep()
    doc.br()  # <br>
    doc.tag('<h2> My graph title </h2>')
    doc.plot_scatter(dft, colx='Age', coly='Fare', collabel='Name',
                     colclass1='Sex', colclass2='Age', colclass3='Sex',
                     cfg=cfg.scatter, mode='matplot', save_img='')
    doc.hr()  # doc.sep() line separator
    doc.plot_histogram(df2, col='Unit Cost', mode='matplot', save_img="")
    doc.save(dir_out="myfile.html")
    doc.open_browser()  # Open myfile.html


# + id="MbvbyZ04fGJS"
def test_pd_plot_network():
    df = pd.DataFrame({
        'from':   ['A', 'B', 'C', 'A'],
        'to':     ['D', 'A', 'E', 'C'],
        'weight': [1, 2, 1, 5]})
    html_code = pd_plot_network(df, cola='from', colb='to', coledge='col_edge', colweight="weight")
    print(html_code)


# + id="TEjzqxxTfKxo"
def test_cssname(verbose=True, css_name="a4_size"):
    # pip install python-box : can use .key or ["mykey"] for dict access
    data = test_getdata()
    dft  = data['titanic.csv']
    df   = data['housing.csv']
    df2  = data['sales.csv']

    from box import Box
    cfg = Box({})
    cfg.tseries = {"title": 'ok'}
    cfg.scatter = {"title": "Titanic", 'figsize': (12, 7)}
    cfg.histo   = {"title": 'ok'}
    cfg.use_datatable = True

    df = pd.DataFrame([[1, 2]])
    df2_list = [df, df, df]
    print(df2_list)

    doc = htmlDoc(dir_out="", title="hello", css_name=css_name, format='myxxxx', cfg=cfg)
    doc.h1('My title')  # h1
    doc.sep()
    doc.br()  # <br>
    doc.tag('<h2> My graph title </h2>')
    doc.plot_scatter(dft, colx='Age', coly='Fare', collabel='Name',
                     colclass1='Sex', colclass2='Age', colclass3='Sex',
                     cfg=cfg.scatter, mode='matplot', save_img='')
    doc.hr()  # doc.sep() line separator

    # test create table
    df = test_getdata()['titanic.csv']
    doc.h1(" Table test ")
    doc.table(df[0:10], use_datatable=True, table_id="test", custom_css_class='intro')

    doc.tag('<h2> My histo title </h2>')
    doc.plot_histogram(df2, col='Unit Cost', mode='matplot', save_img="")
    doc.plot_histogram(df2, col='Unit Price', cfg=cfg.histo, title="Price", mode='matplot', save_img="")
    doc.save(dir_out="myfile.html")
    doc.open_browser()  # Open myfile.html


# + id="HWsawGuUfNfH"
def help():
    ss = "from utilmy.viz.vizhtml import * \n\n"
    ss += "data = test_getdata() \n\n"
    ss += help_get_codesource(test1) + "\n\n\n ##############################\n"
    ss += help_get_codesource(test2) + "\n\n\n ##############################\n"
    ss += help_get_codesource(test3) + "\n\n\n ##############################\n"
    ss += help_get_codesource(test_scatter_and_histogram_matplot) + "\n\n\n ##############################\n"
    ss += help_get_codesource(test_pd_plot_network) + "\n\n\n ##############################\n"
    ss += help_get_codesource(test_cssname) + "\n\n\n ##############################\n"
    ss += "Template CSS: \n\n " + str(CSS_TEMPLATE.keys())
    print(ss)


# + id="H83A4iOVfcEF"
#####################################################################################
#### HTML doc #######################################################################
class htmlDoc(object):
    def __init__(self, dir_out="", mode="", title: str="", format: str=None,
                 cfg: dict=None, css_name: str="a4_size"):
        """  Generate HTML page to display graph/Table.
             Combine pages together.
""" import mpld3 self.fig_to_html = mpld3.fig_to_html cfg = {} if cfg is None else cfg self.cc = Box(cfg) # Config dict self.dir_out = dir_out.replace("\\", "/") self.head = f" <html>\n " self.html = "\n </head> \n<body>" self.tail = "\n </body>\n</html>" ##### HighCharts links = """<link href="https://www.highcharts.com/highslide/highslide.css" rel="stylesheet" /> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script> <script type="text/javascript" src="https://code.highcharts.com/6/highcharts.js"></script> <script type="text/javascript" src="https://code.highcharts.com/6/highcharts-more.js"></script> <script type="text/javascript" src="https://code.highcharts.com/6/modules/heatmap.js"></script> <script type="text/javascript" src="https://code.highcharts.com/6/modules/histogram-bellcurve.js"></script> <script type="text/javascript" src="https://code.highcharts.com/6/modules/exporting.js"></script> <link href="https://fonts.googleapis.com/css2?family=Arvo&display=swap" rel="stylesheet"> """ self.head = self.head + """<head><title>{title}</title> {links}""".format(title=title,links=links) self.add_css(CSS_TEMPLATE.get(css_name, '')) # self.add_css(css_get_template(css_name=css_name)) if css_name=="a4_size": self.html = self.html + '\n <page size="A4">' self.tail = "</page> \n" + self.tail def tag(self, x): self.html += "\n" + x def h1(self, x,css: str='') : self.html += "\n" + f"<h1 style='{css}'>{x}</h1>" def h2(self, x,css: str='') : self.html += "\n" + f"<h2 style='{css}'>{x}</h2>" def h3(self, x,css: str='') : self.html += "\n" + f"<h3 style='{css}'>{x}</h3>" def h4(self, x,css: str='') : self.html += "\n" + f"<h4 style='{css}'>{x}</h4>" def p(self, x,css: str='') : self.html += "\n" + f"<p style='{css}'>{x}</p>" def div(self, x,css: str='') : self.html += "\n" + f"<div style='{css}'>{x}</div>" def hr(self, css: str='') : self.html += "\n" + f"<hr style='{css}'/>" def sep(self, css: str='') : self.html += 
"\n" + f"<hr style='{css}'/>" def br(self, css: str='') : self.html += "\n" + f"<br style='{css}'/>" def get_html(self)-> str: full = self.head + self.html + self.tail return full def print(self): full = self.head + self.html + self.tail print(full, flush=True) def save(self, dir_out=None): self.dir_out = dir_out if dir_out is not None else self.dir_out self.dir_out = dir_out.replace("\\", "/") self.dir_out = os.getcwd() + "/" + self.dir_out if "/" not in self.dir_out[0] else self.dir_out os.makedirs( os.path.dirname(self.dir_out) , exist_ok = True ) full = self.head + self.html + self.tail with open(self.dir_out, mode='w') as fp: fp.write(full) def open_browser(self): if os.name == 'nt': os.system(f'start chrome "file:///{self.dir_out}" ') ### file:///D:/_devs/Python01/gitdev/myutil/utilmy/viz/test_viz_table.html def add_css(self, css): data = f"\n<style>\n{css}\n</style>\n" self.head += data def add_js(self,js): data = f"\n<script>\n{js}\n</script>\n" self.tail = data + self.tail def hidden(self, x,css: str=''): # Hidden P paragraph custom_id = str(random.randint(9999,999999)) # self.head += "\n" + js_code.js_hidden # Hidden javascript self.html += "\n" + f"<div id='div{custom_id}' style='{css}'>{x}</div>" button = """<button id="{btn_id}">Toggle</button>""".format(btn_id="btn"+custom_id) self.html += "\n" + f"{button}" js = """function toggle() {{ if (document.getElementById("{div_id}").style.visibility === "visible") {{ document.getElementById("{div_id}").style.visibility = "hidden" }} else {{ document.getElementById("{div_id}").style.visibility = "visible" }} }} document.getElementById('{btn_id}').addEventListener('click', toggle);""".format(btn_id="btn"+custom_id,div_id="div"+custom_id) self.add_js(js) def table(self, df:pd.DataFrame, format: str='blue_light', custom_css_class=None, use_datatable=False, table_id=None, **kw): """ Show Pandas in HTML and interactive ## show table in HTML : https://pypi.org/project/pretty-html-table/ Args: format: List of colors 
                    available at https://pypi.org/project/pretty-html-table/
            custom_css_class: [Option] Add custom class for table
            use_datatable:    [Option] Create html table as a database
            table_id:         [Option] Id for table tag
        """
        import pretty_html_table
        html_code = pretty_html_table.build_table(df, format)
        table_id = 'tb_' + str(random.randint(9999, 999999)) if table_id is None else table_id   #### Unique ID (CSS ids cannot start with a digit)

        # add custom CSS class
        if custom_css_class:
            html_code = html_code.replace('<table', f'<table class="{custom_css_class}"')

        if use_datatable:
            # JS add datatables library
            self.head = self.head + """
              <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/v/dt/dt-1.10.25/datatables.min.css"/>
              <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
              <script type="text/javascript" src="https://cdn.datatables.net/v/dt/dt-1.10.25/datatables.min.js"></script>"""
            # https://datatables.net/manual/installation
            # add $(document).ready( function () { $('#table_id').DataTable(); } );
            html_code = html_code.replace('<table', f'<table id="{table_id}" ')
            html_code += """\n<script>$(document).ready( function () {
                 $('#{mytable_id}').DataTable({
                    "lengthMenu": [[10, 50, 100, 500, -1], [10, 50, 100, 500, "All"]]
                 });
              });\n</script>\n """.replace('{mytable_id}', str(table_id))

        self.html += "\n\n" + html_code

    def plot_tseries(self, df: pd.DataFrame, coldate, cols_axe1: list, cols_axe2=None,
                     title: str="", figsize: tuple=(14, 7), nsample: int=10000,
                     x_label=None, axe1_label=None, axe2_label=None,
                     date_format: str='%m/%d/%Y', plot_type="", spacing=0.1,
                     cfg: dict={}, mode: str='matplot', save_img="", **kw):
        """Create html time series chart.
        Args:
            df:        pd Dataframe
            cols_axe1: list of columns for axis 1
            cols_axe2: list of columns for axis 2
            ...
            mode:      matplot or highcharts
        """
        html_code = ''
        if mode == 'matplot':
            fig = pd_plot_tseries_matplot(df, coldate, cols_axe1=cols_axe1, cols_axe2=cols_axe2,
                                          figsize=figsize, title=title, x_label=x_label,
                                          axe1_label=axe1_label, axe2_label=axe2_label,
                                          cfg=cfg, mode=mode, save_img=save_img, spacing=spacing)
            # pd_plot_tseries_matplot returns an Axes; mpld3 needs the Figure
            html_code = mpld3.fig_to_html(fig.get_figure() if hasattr(fig, 'get_figure') else fig)
        elif mode == 'highcharts':
            html_code = pd_plot_tseries_highcharts(df, coldate, cols_axe1=cols_axe1, cols_axe2=cols_axe2,
                                                   date_format=date_format, figsize=figsize, title=title,
                                                   x_label=x_label, axe1_label=axe1_label, axe2_label=axe2_label,
                                                   cfg=cfg, mode=mode, save_img=save_img)
        self.html += "\n\n" + html_code

    def plot_histogram(self, df: pd.DataFrame, col, xlabel: str=None, ylabel: str=None,
                       title: str='', figsize: tuple=(14, 7), color_schema: str='RdYlBu',
                       nsample=10000, nbin=10, q5=0.005, q95=0.95,
                       cfg: dict={}, mode: str='matplot', save_img="", **kw):
        """Create html histogram chart.
        Args:
            df:   pd Dataframe
            col:  x Axis
            ...
            mode: matplot or highcharts
        """
        html_code = ''
        if mode == 'matplot':
            fig = pd_plot_histogram_matplot(df, col, title=title, nbin=nbin, q5=q5, q95=q95,
                                            xlabel=xlabel, ylabel=ylabel, color_schema=color_schema,
                                            nsample=nsample, save_img=save_img)
            html_code = self.fig_to_html(fig)
        elif mode == 'highcharts':
            cfg['figsize'] = figsize
            html_code = pd_plot_histogram_highcharts(df, colname=col, title=title,
                                                     cfg=cfg, mode=mode, save_img=save_img)
        self.html += "\n\n" + html_code

    def plot_scatter(self, df: pd.DataFrame, colx, coly, title: str='', figsize: tuple=(14, 7),
                     nsample: int=10000, collabel=None, colclass1=None, colclass2=None, colclass3=None,
                     cfg: dict={}, mode: str='matplot', save_img='', **kw):
        """Create html scatter chart.
        Args:
            df:   pd Dataframe
            colx: x Axis
            coly: y Axis
            ...
            mode: matplot or highcharts
        """
        html_code = ''
        if mode == 'matplot':
            html_code = pd_plot_scatter_matplot(df, colx=colx, coly=coly, collabel=collabel,
                                                colclass1=colclass1, colclass2=colclass2,
                                                nsample=nsample, cfg=cfg, mode=mode, save_img=save_img)
        elif mode == 'highcharts':
            html_code = pd_plot_scatter_highcharts(df, colx=colx, coly=coly,
                                                   colclass1=colclass1, colclass2=colclass2, colclass3=colclass3,
                                                   nsample=nsample, cfg=cfg, mode=mode, save_img=save_img,
                                                   verbose=False)
        self.html += "\n\n" + html_code

    def images_dir(self, dir_input="*.png", title: str="", verbose: bool=False):
        html_code = images_to_html(dir_input=dir_input, title=title, verbose=verbose)
        self.html += "\n\n" + html_code

    def pd_plot_network(self, df: pd.DataFrame, cola: str='col_node1', colb: str='col_node2',
                        coledge: str='col_edge'):
        html_code = pd_plot_network(df, cola=cola, colb=colb, coledge=coledge)
        self.html += "\n\n" + html_code


# + id="27WoXPJ5fib7"
##################################################################################################################
######### MLPD3 Display ##########################################################################################
mpld3_CSS = """
text.mpld3-text, div.mpld3-tooltip {
  font-family:Arial, Helvetica, sans-serif;
}
g.mpld3-xaxis, g.mpld3-yaxis {
  display: none;
}
"""


class mpld3_TopToolbar(mpld3.plugins.PluginBase):
    """Plugin for moving toolbar to top of figure"""
    JAVASCRIPT = """
    mpld3.register_plugin("toptoolbar", TopToolbar);
    TopToolbar.prototype = Object.create(mpld3.Plugin.prototype);
    TopToolbar.prototype.constructor = TopToolbar;
    function TopToolbar(fig, props){
        mpld3.Plugin.call(this, fig, props);
    };
    TopToolbar.prototype.draw = function(){
      // the toolbar svg doesn't exist yet, so first draw it
      this.fig.toolbar.draw();
      // then change the y position to be at the top of the figure
      this.fig.toolbar.toolbar.attr("x", 150);
      this.fig.toolbar.toolbar.attr("y", 400);
      // then remove the draw function,
      // so that it is not called again
      this.fig.toolbar.draw = function() {}
    }
    """
    def __init__(self):
        self.dict_ = {"type": "toptoolbar"}


def mlpd3_add_tooltip(fig, points, labels):
    # set tooltip using points, labels and the already defined 'css'
    tooltip = mpld3.plugins.PointHTMLTooltip(points[0], labels, voffset=10, hoffset=10, css=mpld3_CSS)
    # connect tooltip to fig
    mpld3.plugins.connect(fig, tooltip, mpld3_TopToolbar())


# + id="kJYzZbE1flLT"
def pd_plot_scatter_get_data(df0: pd.DataFrame, colx: str=None, coly: str=None, collabel: str=None,
                             colclass1: str=None, colclass2: str=None, nmax: int=20000):
    nmax = min(nmax, len(df0))
    df = df0.sample(nmax)

    colx      = 'x' if colx is None else colx
    coly      = 'y' if coly is None else coly
    collabel  = 'label'  if collabel  is None else collabel    ### label per point
    colclass1 = 'class1' if colclass1 is None else colclass1   ### Color per point class1
    colclass2 = 'class2' if colclass2 is None else colclass2   ### Size per point class2

    #######################################################################################
    for ci in [collabel, colclass1, colclass2]:
        if ci not in df.columns:
            df[ci] = ''
        df[ci] = df[ci].fillna('')

    #######################################################################################
    xx = df[colx].values
    yy = df[coly].values
    # iterate over the sampled frame so labels stay aligned with xx / yy
    label_list = ['{collabel} : {value}'.format(collabel=collabel, value=v) for v in df[collabel].values]

    ### Using Class 1 ---> Color
    color_scheme = [0, 1, 2, 3]
    n_colors     = len(color_scheme)
    color_list   = [color_scheme[hash(str(x)) % n_colors] for x in df[colclass1].values]

    ### Using Class 2 ---> Size
    n_size      = len(df[colclass2].unique())
    smin, smax  = 100.0, 200.0
    size_scheme = np.arange(smin, smax, (smax - smin) / n_size)
    n_sizes     = len(size_scheme)
    size_list   = [size_scheme[hash(str(x)) % n_sizes] for x in df[colclass2].values]

    ptype_list = []
    return xx, yy, label_list, color_list, size_list, ptype_list


# + id="umzjdKtHfqyn"
def pd_plot_scatter_matplot(df: pd.DataFrame, colx: str=None, coly: str=None, collabel: str=None,
                            colclass1: str=None, colclass2: str=None,
                            cfg: dict={}, mode='d3', save_path: str='', **kw) -> str:
    cc = Box(cfg)
    cc.figsize = cc.get('figsize', (25, 15))       # Dict type default values
    cc.title   = cc.get('title', 'scatter title')

    #######################################################################################
    xx, yy, label_list, color_list, size_list, ptype_list = pd_plot_scatter_get_data(
        df, colx, coly, collabel, colclass1, colclass2)

    # set up plot
    fig, ax = plt.subplots(figsize=cc.figsize)  # set size
    ax.margins(0.05)  # Optional, just adds 5% padding to the autoscaling
    scatter = ax.scatter(xx, yy, c=color_list, s=size_list, alpha=1, cmap=plt.cm.jet)
    ax.grid(color='white', linestyle='solid')
    ax.set_aspect('auto')
    ax.tick_params(axis='x',          # changes apply to the x-axis
                   which='both',      # both major and minor ticks are affected
                   bottom='off',      # ticks along the bottom edge are off
                   top='off',         # ticks along the top edge are off
                   labelbottom='off')
    ax.tick_params(axis='y',
                   which='both',
                   left='off',
                   top='off',
                   labelleft='off')
    # ax.legend(numpoints=1)  # show legend with only 1 point
    # add label in x,y position with the label
    # for i in range(N):
    #     ax.text(df['Age'][i], df['Fare'][i], label_list[i], size=8)

    if len(save_path) > 1:
        plt.savefig(f'{save_path}-{datetime.now().strftime("%Y-%m-%d %H-%M-%S")}.png', dpi=200)

    ax.set_aspect('auto')
    # uncomment to hide ticks / axis
    # ax.axes.get_xaxis().set_ticks([]) ; ax.axes.get_yaxis().set_ticks([])
    # ax.axes.get_xaxis().set_visible(False) ; ax.axes.get_yaxis().set_visible(False)

    # connect tooltip to fig
    tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=label_list, voffset=10, hoffset=10)
    mpld3.plugins.connect(fig, tooltip, mpld3_TopToolbar())
    # mlpd3_add_tooltip(fig, points, label_list)
    # mpld3.save_html(fig, "okembeds.html")

    ##### Export ############################################################
    html_code = mpld3.fig_to_html(fig)
    # print(html_code)
    return html_code


# + colab={"base_uri": "https://localhost:8080/", "height": 266} id="MFOhATH3z7LR" outputId="d82e0e1d-45cc-4741-dc20-9de872b49d5c"
import matplotlib.pyplot as plt

Ntotal = 1000
data = 0.05 * np.random.randn(Ntotal) + 0.5
cm = plt.cm.RdBu_r
n, bins, patches = plt.hist(data, 25, color='black')
for i, p in enumerate(patches):
    plt.setp(p, 'facecolor', cm(i / 25))  # notice the i/25
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="9-Smef6dzt6M" outputId="03ab20d5-1d4c-4df0-9d3e-9f31a3d3afad"
import numpy as np
import matplotlib.pyplot as plt

Ntotal = 1000
data = 0.05 * np.random.randn(Ntotal) + 0.5
cm = plt.cm.get_cmap('RdYlBu_r')
n, bins, patches = plt.hist(data, 25, color='green')
# To normalize your values
col = (n - n.min()) / (n.max() - n.min())
for c, p in zip(col, patches):
    plt.setp(p, 'facecolor', cm(c))
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="WhzTJvw73fBJ" outputId="754fd5ba-fc5e-42fe-dce9-cdfe61d3f5cf"
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = 6.4, 4


def randn(n, sigma, mu):
    return sigma * np.random.randn(n) + mu


x1 = randn(999, 40., -80)
x2 = randn(750, 40., 80)
x3 = randn(888, 16., -30)


def hist(x, ax=None):
    cm = plt.cm.get_cmap("seismic")
    ax = ax or plt.gca()
    _, bins, patches = ax.hist(x, color="r", bins=30)
    bin_centers = 0.5 * (bins[:-1] + bins[1:])
    maxi = np.abs(bin_centers).max()
    norm = plt.Normalize(-maxi, maxi)
    for c, p in zip(bin_centers, patches):
        plt.setp(p, "facecolor", cm(norm(c)))


fig, axes = plt.subplots(nrows=3, sharex=True)
for x, ax in zip([x1, x2, x3], axes):
    hist(x, ax=ax)
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="OohguhX2zQF8" outputId="65b3cf14-8acb-4391-f03a-bdd6db0f14b7"
import numpy as np
import matplotlib.pyplot as plt

# Random gaussian data.
Ntotal = 1000
data = 0.05 * np.random.randn(Ntotal) + 0.5

# This is the colormap I'd like to use.
cm = plt.cm.get_cmap('RdYlBu_r')

# Plot histogram.
n, bins, patches = plt.hist(data, 25, color='green')
bin_centers = 0.5 * (bins[:-1] + bins[1:])

# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
    plt.setp(p, 'facecolor', cm(c))
plt.show()

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="49RWbfsDZVjW" outputId="2b29e438-eec7-4033-cf04-6cc3f352e9f0"
testhistogram()

# + id="0XDk1qPzZLBp"

# + id="0PmKgX6BcvBm"
import matplotlib

# + id="YfxTo3Qiftmh"

# + id="5to8w1j6fwPd"
def pd_plot_tseries_matplot(df: pd.DataFrame, plot_type: str=None, cols_axe1: list=[],
                            cols_axe2: list=[], figsize: tuple=(8, 4), spacing=0.1, **kw):
    """ """
    from pandas import plotting
    from pandas.plotting import _matplotlib   # ensure the private submodule is loaded
    from matplotlib import pyplot as plt
    plt.figure(figsize=figsize)

    # Get default color style from pandas - can be changed to any other color list
    if cols_axe1 is None:
        cols_axe1 = df.columns
    if len(cols_axe1) == 0:
        return
    colors = getattr(getattr(plotting, '_matplotlib').style, '_get_standard_colors')(
        num_colors=len(cols_axe1 + cols_axe2))

    # Displays subplot pairs in case plot_type is defined as `pair`
    if plot_type == 'pair':
        ax = df.plot(subplots=True, figsize=figsize, **kw)
        html_code = mpld3.fig_to_html(plt.gcf())   # df.plot(subplots=True) returns an array of Axes
        return html_code

    # First axis
    ax = df.loc[:, cols_axe1[0]].plot(label=cols_axe1[0], color=colors[0], **kw)
    ax.set_ylabel(ylabel=cols_axe1[0])
    # lines, labels = ax.get_legend_handles_labels()
    lines, labels = [], []
    i1 = len(cols_axe1)
    for n in range(1, len(cols_axe1)):
        df.loc[:, cols_axe1[n]].plot(ax=ax, label=cols_axe1[n], color=colors[(n) % len(colors)], **kw)
        line, label = ax.get_legend_handles_labels()
        lines += line
        labels += label

    for n in range(0, len(cols_axe2)):
        # Multiple y-axes
        ax_new = ax.twinx()
        ax_new.spines['right'].set_position(('axes', 1 + spacing * (n - 1)))
        df.loc[:, cols_axe2[n]].plot(ax=ax_new, label=cols_axe2[n], color=colors[(i1 + n) % len(colors)], **kw)
        ax_new.set_ylabel(ylabel=cols_axe2[n])

        # Proper legend position
        line, label = ax_new.get_legend_handles_labels()
        lines += line
        labels += label

    ax.legend(lines, labels, loc=0)
    # plt.show()
    return ax
    # html_code = mpld3.fig_to_html(ax, **kw)
    # return html_code


# + id="86-9HVdFf0E9"
def mpld3_server_start():
    # Windows specific
    # if os.name == 'nt': os.system(f'start chrome "{dir_out}/embeds.html" ')
    # mpld3.show(fig=None, ip='127.0.0.1', port=8888, n_retries=50, local=True, open_browser=True, http_server=None, **kwargs)
    mpld3.show()  # show the plot


# + id="J_6jrVXmf6XW"
############################################################################################################################
############################################################################################################################
highcharts_doc = """
https://www.highcharts.com/docs/getting-started/how-to-set-options
"""


def pd_plot_highcharts(df):
    """
      # Basic line plot
      chart = serialize(df, render_to="my-chart", title="My Chart")
      # Basic column plot
      chart = serialize(df, render_to="my-chart", title="Test", kind="bar")
      # Plot C on secondary axis
      chart = serialize(df, render_to="my-chart", title="Test", secondary_y=["C"])
      # Plot on a 1000x700 div
      chart = serialize(df, render_to="my-chart", title="Test", figsize=(1000, 700))
    """
    import pandas_highcharts
    data = pandas_highcharts.serialize(df, render_to='my-chart', output_type='json')
    json_data_2 = "new Highcharts.StockChart(%s);" % pandas_highcharts.core.json_encode(data)
    html_code = """<div id="{chart_id}"></div>
                   <script type="text/javascript">{data}</script>""".format(chart_id="new_brownian", data=json_data_2)
    return html_code


# + id="DYHmbQ9WgAXi"
def pd_plot_scatter_highcharts(df0: pd.DataFrame, colx: str=None, coly: str=None, collabel: str=None,
                               colclass1: str=None, colclass2: str=None, colclass3: str=None,
                               nsample=10000, cfg: dict={}, mode='d3', save_img='', verbose=True, **kw) -> str:
    """  Plot Highcharts X=Y Scatter
         from utilmy.viz import vizhtml
         vizhtml.pd_plot_scatter_highcharts(df, colx=None, coly=None, collabel=None,
                colclass1=None, colclass2=None, colclass3=None,
                nsample=10000, cfg={}, mode='d3', save_img=False, verbose=True)
    """
    import matplotlib
    from box import Box
    from highcharts import Highchart
    cc = Box(cfg)
    cc.title    = cc.get('title', 'my scatter')
    cc.figsize  = cc.get('figsize', (640, 480))   ### Dict type default values
    cc.colormap = cc.get('colormap', 'brg')
    if verbose:
        print(cc['title'], cc['figsize'])

    nsample = min(nsample, len(df0))
    df = df0.sample(nsample)

    colx      = 'x' if colx is None else colx
    coly      = 'y' if coly is None else coly
    collabel  = 'label'  if collabel  is None else collabel    ### label per point
    colclass1 = 'class1' if colclass1 is None else colclass1   ### Color per point class1
    colclass2 = 'class2' if colclass2 is None else colclass2   ### Size per point class2
    colclass3 = 'class3' if colclass3 is None else colclass3   ### Marker per point

    #######################################################################################
    for ci in [collabel, colclass1, colclass2]:
        if ci not in df.columns:
            df[ci] = ''   ### add missing
        df[ci] = df[ci].fillna('')

    xx = df[colx].values
    yy = df[coly].values
    label_list = df[collabel].values

    ### Using Class 1 ---> Color
    color_list = [hash(str(x)) for x in df[colclass1].values]
    # Normalize the classes value over [0.0, 1.0]
    norm  = matplotlib.colors.Normalize(vmin=min(color_list), vmax=max(color_list))
    c_map = plt.cm.get_cmap(cc.colormap)
    color_list = [matplotlib.colors.rgb2hex(c_map(norm(x))).upper() for x in
                  color_list]

    ### Using Class 2 ---> Size
    n_size      = len(df[colclass2].unique())
    smin, smax  = 1.0, 15.0
    size_scheme = np.arange(smin, smax, (smax - smin) / n_size)
    n_sizes     = len(size_scheme)
    size_list   = [size_scheme[hash(str(x)) % n_sizes] for x in df[colclass2].values]

    # Create chart object
    container_id = 'cid_' + str(np.random.randint(9999, 99999999))
    chart = Highchart(renderTo=container_id)
    options = {
        'chart':   {'width': cc.figsize[0], 'height': cc.figsize[1]},
        'title':   {'text': cc.title},
        'xAxis':   {'title': {'text': colx}},
        'yAxis':   {'title': {'text': coly}},
        'legend':  {'enabled': False},
        'tooltip': {'pointFormat': '{point.label}'},
    }
    chart.set_dict_options(options)

    # Plot each point with the correct size and color
    data = [{'x': float(xx[i]), 'y': float(yy[i]),
             "label": str(label_list[i]),
             "marker": {'radius': int(size_list[i])},
             'color': color_list[i]} for i in range(len(df))]
    chart.add_data_set(data, 'scatter')
    chart.buildcontent()
    html_code = chart._htmlcontent.decode('utf-8')
    return html_code


# + id="okSRqn3kgHVG"
def pd_plot_tseries_highcharts(df, coldate: str=None, date_format: str='%m/%d/%Y',
                               cols_axe1: list=[], cols_axe2: list=[],
                               figsize: tuple=None, title: str=None,
                               x_label: str=None, axe1_label: str=None, axe2_label: str=None,
                               cfg: dict={}, mode='d3', save_img="") -> str:
    '''  Return highcharts html/js code for a time series chart.
        input parameter
          df :          pandas dataframe on which you want to apply time_series
          cols_axe1 :   column names for y-axis one
          cols_axe2 :   column names for y-axis two
          x_label :     label of x-axis
          axe1_label :  label for y-axis 1
          axe2_label :  label for y-axis 2
          date_format : %m for month, %d for day and %Y for year.
    '''
    from highcharts import Highchart
    from box import Box
    cc = Box(cfg)
    cc.coldate    = 'date' if coldate is None else coldate
    cc.x_label    = coldate if x_label is None else x_label
    cc.axe1_label = "_".join(cols_axe1) if axe1_label is None else axe1_label
    cc.axe2_label = "_".join(cols_axe2) if axe2_label is None else axe2_label
    cc.title      = cc.get('title', str(cc.axe1_label) + " vs " + str(cc.coldate)) if title is None else title
    cc.figsize    = cc.get('figsize', (25, 15)) if figsize is None else figsize
    cc.subtitle   = cc.get('subtitle', '')
    cc.cols_axe1  = cols_axe1
    cc.cols_axe2  = cols_axe2

    df[cc.coldate] = pd.to_datetime(df[cc.coldate], format=date_format)

    #########################################################
    container_id = 'cid_' + str(np.random.randint(9999, 99999999))
    H = Highchart(renderTo=container_id)
    options = {
        'chart':    {'zoomType': 'xy'},
        'title':    {'text': cc.title},
        'subtitle': {'text': cc.subtitle},
        'xAxis': [{'type': 'datetime', 'title': {'text': cc.x_label}}],
        'yAxis': [{
            'labels':   {'style': {'color': 'Highcharts.getOptions().colors[2]'}},
            'title':    {'text': cc.axe2_label, 'style': {'color': 'Highcharts.getOptions().colors[2]'}},
            'opposite': True
        }, {
            'gridLineWidth': 0,
            'title':  {'text': cc.axe1_label, 'style': {'color': 'Highcharts.getOptions().colors[0]'}},
            'labels': {'style': {'color': 'Highcharts.getOptions().colors[0]'}}
        }],
        'tooltip': {'shared': True},
        'legend': {
            'layout': 'vertical', 'align': 'left', 'x': 80,
            'verticalAlign': 'top', 'y': 55, 'floating': True,
            'backgroundColor': "(Highcharts.theme && Highcharts.theme.legendBackgroundColor) || '#FFFFFF'"
        },
    }
    H.set_dict_options(options)

    for col_name in cc.cols_axe1:
        data = [[df[cc.coldate][i], float(df[col_name][i])] for i in range(df.shape[0])]
        H.add_data_set(data, 'spline', col_name, yAxis=1)
    for col_name in cc.cols_axe2:
        data = [[df[cc.coldate][i], float(df[col_name][i])] for i in range(df.shape[0])]
        H.add_data_set(data, 'spline', col_name, yAxis=0)

    ##################################################################
    H.buildcontent()
    html_code = H._htmlcontent.decode('utf-8')
    return html_code


# + id="tBsu1RZvgMQ7"
def pd_plot_histogram_highcharts(df: pd.DataFrame, colname: str=None, binsNumber=None, binWidth=None,
                                 title: str="", xaxis_label: str="x-axis", yaxis_label: str="y-axis",
                                 cfg: dict={}, mode='d3', save_img="", show=False):
    '''  Return highcharts html/js code for a histogram.
        input parameter
          df :          pandas dataframe on which you want to apply histogram
          colname :     column name from dataframe on which histogram will apply
          xaxis_label : label for x-axis
          yaxis_label : label for y-axis
          binsNumber :  number of bins in histogram
          binWidth :    width of each bin in histogram
          title :       title of histogram

        df = data['housing.csv']
        html_code = pd_plot_histogram_highcharts(df, colname="median_income", xaxis_label="x-axis",
                                                 yaxis_label="y-axis", cfg={}, mode='d3', save_img=False)
        # html_show_chart_highchart(html_code)
    '''
    cc = Box(cfg)
    cc.title       = cc.get('title', "My Title") if title is None else title
    cc.xaxis_label = xaxis_label
    cc.yaxis_label = yaxis_label

    container_id = 'cid_' + str(np.random.randint(9999, 99999999))
    data = df[colname].values.tolist()

    code_html_start = f"""
     <script src="https://code.highcharts.com/6/modules/histogram-bellcurve.js"></script>
     <div id="{container_id}">Loading</div>
     <script>
    """
    data_code = """ var data = {data} """.format(data=data)

    title = """{ text:'""" + cc.title + """' }"""
    xAxis = """[{ title: { text:'""" + cc.xaxis_label + """'}, alignTicks: false, opposite: false }]"""
    yAxis = """[{ title: { text:'""" + cc.yaxis_label + """'}, opposite: false }] """

    append_series1 = """[{
            name: 'Histogram',
            type: 'histogram',
            baseSeries: 's1',"""
    if binsNumber is not None:
        append_series1 += """ binsNumber:{binsNumber}, """.format(binsNumber=binsNumber)
    if binWidth is not None:
        append_series1 += """
        binWidth:{binWidth},""".format(binWidth=binWidth)

    append_series2 = """}, {
        name: ' ',
        type: 'scatter',
        data: data,
        visible: false,
        id: 's1',
        marker: { radius: 0 }
    }]
    """
    append_series = append_series1 + append_series2

    js_code = """Highcharts.chart('""" + container_id + """', {
        title:""" + title + """,
        xAxis:""" + xAxis + """,
        yAxis:""" + yAxis + """,
        series: """ + append_series + """
    });
    </script>"""

    html_code = data_code + js_code
    # if show : html_code = code_html_start + html_code
    # print(html_code)
    return html_code


# + id="O-iRhKEagRTC"
def html_show_chart_highchart(html_code, verbose=True):
    # Display a Highcharts graph by prepending the Highcharts JS header
    from highcharts import Highchart
    from IPython.core.display import display, HTML
    hc = Highchart()
    hc.buildhtmlheader()
    html_code = hc.htmlheader + html_code
    if verbose:
        print(html_code)
    display(HTML(html_code))


# + id="Qv881mf3gWB6"
def html_show(html_code, verbose=True):
    # Display raw HTML in the notebook
    from IPython.core.display import display, HTML
    display(HTML(html_code))


# + id="NqDkkqJ5gZus"
############################################################################################################################
############################################################################################################################
def images_to_html(dir_input="*.png", title="", verbose=False):
    """ Embed every image matching the glob pattern into one HTML string.
        images_to_html( model_path + "/graph_shop_17_past/*.png" )
    """
    import base64
    import glob

    html = ""
    flist = sorted(glob.glob(dir_input))   # was flist.sorted(): lists have .sort(), not .sorted()
    for fp in flist:
        if verbose:
            print(fp, end=",")
        with open(fp, mode="rb") as fp2:
            tmpfile = fp2.read()
        encoded = base64.b64encode(tmpfile).decode('utf-8')
        html = html + f'<p><img src=\'data:image/png;base64,{encoded}\'> </p>\n'
    return html


# + id="xfexq7PSgeyA"
############################################################################################################################
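The core trick in `images_to_html` — inlining a PNG as a base64 data URI — can be sketched standalone. This is an illustrative fragment, not part of the module; the byte string below is just the PNG magic header, standing in for a real file's contents:

```python
import base64

def embed_png_as_html(png_bytes: bytes) -> str:
    # Encode the raw image bytes to base64 and wrap them in a data-URI <img> tag,
    # the same pattern images_to_html applies to each file it globs.
    encoded = base64.b64encode(png_bytes).decode("utf-8")
    return f"<p><img src='data:image/png;base64,{encoded}'> </p>\n"

html = embed_png_as_html(b"\x89PNG\r\n\x1a\n")  # PNG magic bytes, illustration only
```

Browsers render such a tag with no file on disk, which is why the resulting HTML string is fully self-contained.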
############################################################################################################################
############################################################################################################################
def pd_plot_network(df:pd.DataFrame, cola: str='col_node1', colb: str='col_node2',
                    coledge: str='col_edge', colweight: str="weight", html_code:bool=True):
    """ Plot a dataframe of edges as an interactive pyvis network.
        https://pyviz.org/tools.html
    """
    def convert_to_networkx(df:pd.DataFrame, cola: str="", colb: str="", colweight: str=None):
        """ Convert a pandas dataframe into a networkx graph and return it.
        Args:
            df: dataframe with one edge per row
        """
        import networkx as nx
        g = nx.Graph()
        for index, row in df.iterrows():
            g.add_edge(row[cola], row[colb], weight=row[colweight])
        nx.draw(g, with_labels=True)
        return g

    def draw_graph(networkx_graph, notebook:bool=False, output_filename='graph.html',
                   show_buttons:bool=True, only_physics_buttons:bool=False, html_code:bool=True):
        """ This function accepts a networkx graph object, converts it to a pyvis network object
        preserving its node and edge attributes, and both returns and saves a dynamic network visualization.

        Valid node attributes include: "size", "value", "title", "x", "y", "label", "color".
        (For more info: https://pyvis.readthedocs.io/en/latest/documentation.html#pyvis.network.Network.add_node)

        Args:
            networkx_graph: The graph to convert and display
            notebook: Display in Jupyter?
            output_filename: Where to save the converted network
            show_buttons: Show buttons in saved version of network?
            only_physics_buttons: Show only buttons controlling physics of network?
        """
        from pyvis import network as net
        import re

        # make a pyvis network
        pyvis_graph = net.Network(notebook=notebook)

        # for each node and its attributes in the networkx graph
        for node, node_attrs in networkx_graph.nodes(data=True):
            pyvis_graph.add_node(str(node), **node_attrs)

        # for each edge and its attributes in the networkx graph
        for source, target, edge_attrs in networkx_graph.edges(data=True):
            # if value/width not specified directly, and weight is specified, set 'value' to 'weight'
            if not 'value' in edge_attrs and not 'width' in edge_attrs and 'weight' in edge_attrs:
                # place at key 'value' the weight of the edge
                edge_attrs['value'] = edge_attrs['weight']
            # add the edge
            pyvis_graph.add_edge(str(source), str(target), **edge_attrs)

        # turn buttons on
        if show_buttons:
            if only_physics_buttons:
                pyvis_graph.show_buttons(filter_=['physics'])
            else:
                pyvis_graph.show_buttons()

        # return and also save
        pyvis_graph.show(output_filename)

        if html_code:
            def extract_text(tag: str, content: str) -> str:
                reg_str = "<" + tag + ">\s*((?:.|\n)*?)</" + tag + ">"
                extracted = re.findall(reg_str, content)[0]
                return extracted

            with open(output_filename) as f:
                content = f.read()
            head = extract_text('head', content)
            body = extract_text('body', content)   # was extract_text('head', ...), which returned the head twice
            return head + "\n" + body

    networkx_graph = convert_to_networkx(df, cola, colb, colweight=colweight)
    ng2 = draw_graph(networkx_graph, notebook=False, output_filename='graph.html',
                     show_buttons=True, only_physics_buttons=False, html_code=True)
    return ng2


# + colab={"base_uri": "https://localhost:8080/", "height": 554} id="fKIUQ4nUeKUN" outputId="196ee6c8-2fcb-46c4-8c73-71cbf6a5109f"
###################################################################################################
######## CSS Templates ############################################################################
CSS_TEMPLATE = Box({})
CSS_TEMPLATE.base_grey = """
.body {
    font: 90%/1.45em "Helvetica Neue", HelveticaNeue, Verdana, Arial, Helvetica, sans-serif;
    margin: 0;
    padding: 0;
color: #333; background-color: #fff; } """ CSS_TEMPLATE.base = """ body{margin:25px;font-family: 'Open Sans', sans-serif;} h1,h2,h3,h4,h5,h6{margin-bottom: 0.5rem;font-family: 'Arvo', serif;line-height: 1.5;color: #32325d;} .dataTables_wrapper{overflow-x: auto;} hr{border-top: dotted 4px rgba(26, 47, 51, 0.7);opacity:0.3 ;} div{margin-top: 5px;margin-bottom: 5px;} table {border-collapse: collapse;} table th,table td {border: 1px solid lightgrey;} """ CSS_TEMPLATE.a4_page = CSS_TEMPLATE.base + """ body {background: rgb(204,204,204); } page { background: white;display: block;padding:15px;margin: 0 auto;margin-bottom: 0.5cm; box-shadow: 0 0 0.5cm rgba(0,0,0,0.5); } page[size="A4"] {width: 21cm; } @media print {body, page {margin: 0;box-shadow: 0;}} """ CSS_TEMPLATE.border = CSS_TEMPLATE.base + """ .highcharts-container {border: 3px dotted grey;} .mpld3-figure {border: 3px dotted grey;} """ CSS_TEMPLATE.a3d = CSS_TEMPLATE.base + """ div { background: white;display: block;margin: 0 auto; margin-bottom: 0.5cm;box-shadow: 0 0 0.5cm rgba(0,0,0,0.5);} h1,h2,h3,h4,h5,h6 {box-shadow: 0 0 0.5cm rgba(0,0,0,0.5); padding: 5px;} """ ################################################################################################### ######### JScript ################################################################################# js_code = Box({}) # List of javascript code js_code.js_hidden = """<script> var x = document.getElementById('hidden_section_id'); x.onclick = function() { if (x.style.display == 'none') { x.style.display = 'block'; } else { x.style.display = 'none'; } } </script> """ ################################################################################################### ################################################################################################### def help_get_codesource(func): """ Using the magic method __doc__, we KNOW the size of the docstring. 
We then, just substract this from the total length of the function """ import inspect try: lines_to_skip = len(func.__doc__.split('\n')) except AttributeError: lines_to_skip = 0 lines = inspect.getsourcelines(func)[0] return ''.join( lines[lines_to_skip+1:] ) ################################################################################################### if __name__ == "__main__": import fire fire.Fire() # test2() def zz_css_get_template(css_name:str= "A4_size"): css_code = """ body{margin:25px;font-family: 'Open Sans', sans-serif;} h1,h2,h3,h4,h5,h6{margin-bottom: 0.5rem;font-family: 'Arvo', serif;line-height: 1.5;color: #32325d;} .dataTables_wrapper{overflow-x: auto;} hr{border-top: dotted 4px rgba(26, 47, 51, 0.7);opacity:0.3 ;} div{margin-top: 5px;margin-bottom: 5px;} table {border-collapse: collapse;} table th,table td {border: 1px solid lightgrey;} """ if css_name == "A4_size": css_code = css_code + """ body {background: rgb(204,204,204); } page { background: white;display: block;padding:15px;margin: 0 auto;margin-bottom: 0.5cm; box-shadow: 0 0 0.5cm rgba(0,0,0,0.5); } page[size="A4"] {width: 21cm; } @media print {body, page {margin: 0;box-shadow: 0;}} """ if css_name == "border": css_code = css_code + """ .highcharts-container {border: 3px dotted grey;} .mpld3-figure {border: 3px dotted grey;} """ if css_name == "3d": css_code = css_code + """ div { background: white;display: block;margin: 0 auto; margin-bottom: 0.5cm;box-shadow: 0 0 0.5cm rgba(0,0,0,0.5);} h1,h2,h3,h4,h5,h6 {box-shadow: 0 0 0.5cm rgba(0,0,0,0.5); padding: 5px;} """ return css_code def zz_test_get_random_data(n=100): ### return random data df = {'date' :pd.date_range("1/1/2018", "1/1/2020")[:n] } df = pd.DataFrame(df) df['col1'] = np.random.choice( a=[0, 1, 2], size=len(df), p=[0.5, 0.3, 0.2] ) df['col2'] = np.random.choice( a=['a0', 'a1', 'a2'], size=len(df), p=[0.5, 0.3, 0.2] ) for ci in ['col3', 'col4', 'col5'] : df[ci] = np.random.random(len(df)) return df def 
zz_pd_plot_histogram_highcharts_old(df, col, figsize=None, title=None, cfg:dict={}, mode='d3', save_img=''): from box import Box cc = Box(cfg) cc.title = cc.get('title', 'Histogram' + col ) if title is None else title cc.figsize = cc.get('figsize', (25, 15) ) if figsize is None else figsize cc.subtitle = cc.get('subtitle', '') x_label = col+'-bins' y_label = col+'-frequency' #### Get data, calculate histogram and bar centers hist, bin_edges = np.histogram( df[col].values ) bin_centers = [float(bin_edges[i+1] + bin_edges[i]) / 2 for i in range(len(hist))] hist_val = hist.tolist() #### Plot pd_plot_histogram_highcharts_base(bins = bin_centers, vals = hist_val, figsize = figsize, title = title, x_label = x_label, y_label=y_label, cfg=cfg, mode=mode, save_img=save_img) """ https://pyviz.org/tools.html Name Stars Contributors Downloads License Docs PyPI Conda Sponsors networkx - graphviz - pydot - - pygraphviz - python-igraph - pyvis - pygsp - graph-tool - - - - nxviz - Py3Plex - - - py2cytoscape - ipydagred3 - - ipycytoscape - - webweb - netwulf - - - ipysigma - - - """ # + id="frLuYJQ_yA_Y"
color_schema_try.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Collaborative filtering

# + hide_input=true
from fastai.gen_doc.nbdoc import *
# -

# This package contains all the necessary functions to quickly train a model for a collaborative filtering task. Let's start by importing all we'll need.

from fastai import *
from fastai.collab import *

# ## Overview

# Collaborative filtering is when you're tasked to predict how much a user is going to like a certain item. The fastai library contains a [`CollabFilteringDataset`](/collab.html#CollabFilteringDataset) class that will help you create datasets suitable for training, and a function `get_collab_learner` to build a simple model directly from a ratings table. Let's first see how we can get started before delving into the documentation.
#
# For our example, we'll use a small subset of the [MovieLens](https://grouplens.org/datasets/movielens/) dataset. In there, we have to predict the rating a user gave a given movie (from 0 to 5). It comes in the form of a csv file where each line is the rating of a movie by a given person.

path = untar_data(URLs.ML_SAMPLE)
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()

# We'll first turn the `userId` and `movieId` columns into category codes, so that we can replace them with their codes when it's time to feed them to an `Embedding` layer. This step would be even more important if our csv had names of users, or names of items in it. To do it, we simply have to call a `CollabDataBunch` factory method.

data = CollabDataBunch.from_df(ratings)

# Now that this step is done, we can directly create a [`Learner`](/basic_train.html#Learner) object:

learn = get_collab_learner(data, n_factors=50, min_score=0., max_score=5.)
# And then immediately begin training.

learn.fit_one_cycle(5, 5e-3, wd=0.1)

# + hide_input=true
show_doc(CollabDataBunch, doc_string=False)
# -

# This is the basic class to build a [`DataBunch`](/basic_data.html#DataBunch) suitable for collaborative filtering.

# + hide_input=true
show_doc(CollabDataBunch.from_df, doc_string=False)
# -

# Takes a `ratings` dataframe and splits it randomly for train and test following `pct_val` (unless it's None). `user_name`, `item_name` and `rating_name` give the names of the corresponding columns (defaults to the first, second and third column). Optionally a `test` dataframe can be passed, and a `seed` for the separation between training and validation set. The `kwargs` will be passed to [`DataBunch.create`](/basic_data.html#DataBunch.create).

# ## Model and [`Learner`](/basic_train.html#Learner)

# + hide_input=true
show_doc(EmbeddingDotBias, doc_string=False, title_level=3)
# -

# Creates a simple model with `Embedding` weights and biases for `n_users` and `n_items`, with `n_factors` latent factors. Takes the dot product of the embeddings and adds the bias, then feeds the result to a sigmoid rescaled to go from `min_score` to `max_score`.

# + hide_input=true
show_doc(get_collab_learner, doc_string=False)
# -

# Creates a [`Learner`](/basic_train.html#Learner) object built from the data in `ratings`, `pct_val`, `user_name`, `item_name`, `rating_name` to [`CollabFilteringDataset`](/collab.html#CollabFilteringDataset). Optionally, creates another [`CollabFilteringDataset`](/collab.html#CollabFilteringDataset) for `test`. `kwargs` are fed to [`DataBunch.create`](/basic_data.html#DataBunch.create) with these datasets. The model is given by [`EmbeddingDotBias`](/collab.html#EmbeddingDotBias) with `n_factors`, `min_score` and `max_score` (the numbers of users and items will be inferred from the data).
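The dot-product-plus-bias computation described above is easy to sketch without fastai. Here is a hedged, pure-Python illustration of the forward pass — the latent-factor values are made up (the real model learns them), but the rescaled-sigmoid arithmetic is the same idea:

```python
import math

def embedding_dot_bias(user_vec, item_vec, user_bias, item_bias,
                       min_score=0.0, max_score=5.0):
    # dot product of the two latent-factor vectors, plus both biases
    raw = sum(u * i for u, i in zip(user_vec, item_vec)) + user_bias + item_bias
    # sigmoid squashes to (0, 1); rescale to (min_score, max_score)
    sig = 1.0 / (1.0 + math.exp(-raw))
    return min_score + (max_score - min_score) * sig

# toy factors for one (user, movie) pair -- illustrative values only
pred = embedding_dot_bias([0.2, -0.1, 0.5], [0.4, 0.3, -0.2], 0.1, 0.05)
```

Because of the sigmoid, the prediction is always strictly inside the `(min_score, max_score)` range, which is exactly why the `Learner` above was created with `min_score=0., max_score=5.`.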
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden # + hide_input=true show_doc(EmbeddingDotBias.forward)
docs_src/collab.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Assemble a new Darwin Core Archive using data from another archive # This is based on a DwC-A exported from Symbiota # The process only uses the occurrences.csv and images.csv file for filtering # and creates a copy of the original meta.xml file. # All other files are ignored. # Intended for upload to BioSpex # ----------------- # Extract a DwC Archive file and put the contents in a directory named dwc_source in same path as this notebook # Create a directory called dwc_out to store output # To create the new DwC archive file, ZIP the contents of dwc_out (not the directory itself) # - from shutil import copyfile import pandas as pd pd.set_option('display.max_columns', 10) pd.set_option('display.max_colwidth', -1) # load the occurrences file from a Darwin Core Archive df_occurrences = pd.read_csv("dwc_source/occurrences.csv", low_memory=False) # load the images file from a Darwin Core Archive df_images = pd.read_csv("dwc_source/images.csv", low_memory=False) # Make sure imported records match what you expect df_occurrences.shape df_images.shape # + # Filter the occurrence records from Symbiota to include the records you want to import into BioSpex # processingStatus isn't in the Symbiota DwCA which is generated using DwC Publishing, must use backup DwC file instead # Use one or more filters to determine what will be included/excluded from the output DwC-A # Filter to get records that match a particular Symbiota processingStatus: df_filtered_occurrences = df_occurrences[df_occurrences['processingStatus'] == 'pending review-nfn'] # Filter to get records that have particular DwC fields unpopulated df_filtered_occurrences = 
df_occurrences[(df_occurrences['stateProvince'].isnull()) & (df_occurrences['recordedBy'].isnull()) & (df_occurrences['scientificName'].isnull())]

# If not filtering, just assign to a new DF:
#df_filtered_occurrences = df_occurrences
# -

# Check to make sure the record count is what you expect
df_filtered_occurrences.shape

# Filter the image records to only include those with occurrence records
df_filtered_images = df_images[df_images['coreid'].isin(df_filtered_occurrences['id'])]
df_filtered_images.shape

# +
# If you want to exclude any records from the filtered set based on catalog numbers, first load the catalog numbers here
#df_exclude = pd.read_csv("exclude_catalog_numbers.csv", low_memory=False)

# +
# Exclude records (e.g. those already in BioSpex or in a separate transcription workflow)
#df_filtered_occurrences_use = df_filtered_occurrences[~df_filtered_occurrences['catalogNumber'].isin(df_exclude['catalog_number'])]

# +
# Specify catalog numbers to include
# df_include = pd.read_csv("example_include_catnums.csv", low_memory=False)
# -

# Include records based on catalog numbers
# (requires df_include to be loaded in the cell above; otherwise this raises a NameError)
df_filtered_occurrences_use = df_filtered_occurrences[df_filtered_occurrences['catalogNumber'].isin(df_include['catalog_number'])]
df_filtered_occurrences_use.shape

# Select only the images to be used
df_filtered_images_use = df_images[df_images['coreid'].isin(df_filtered_occurrences_use['id'])]
df_filtered_images_use.shape

# +
# Check for duplicates if you wish
#print(df_filtered_images_use[df_filtered_images_use.duplicated(subset='coreid', keep=False)]['accessURI'])
# -

# Write occurrences to destination directory
df_filtered_occurrences_use.to_csv('dwc_out/occurrences.csv', index=False)

# Write images to destination directory
df_filtered_images_use.to_csv('dwc_out/images.csv', index=False)

# Copy meta.xml to destination directory
copyfile('dwc_source/meta.xml', 'dwc_out/meta.xml')
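The header comment says to ZIP the contents of `dwc_out` (not the directory itself) to form the new archive. That last step can be done with the standard library too; a minimal sketch, assuming the same `dwc_out` layout produced by the cells above:

```python
import os
import zipfile

def zip_dwc_out(src_dir="dwc_out", archive_path="dwc_archive.zip"):
    # Write each file at the archive root (arcname strips the directory),
    # which is what "ZIP the contents, not the directory" means.
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(src_dir)):
            zf.write(os.path.join(src_dir, name), arcname=name)
    return archive_path
```

After running it, `zipfile.ZipFile(archive_path).namelist()` should show `images.csv`, `meta.xml` and `occurrences.csv` with no leading `dwc_out/` prefix, which is the layout BioSpex-style DwC-A consumers expect.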
DwC-A_assembly.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Deep Convolutional GANs
#
# In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the [original paper here](https://arxiv.org/pdf/1511.06434.pdf).
#
# You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
#
# ![SVHN Examples](assets/SVHN_examples.png)
#
# So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what [you saw previously](https://github.com/udacity/deep-learning/tree/master/gan_mnist) are in the generator and discriminator; otherwise the rest of the implementation is the same.

# +
# %matplotlib inline

import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
# -

# !mkdir data

# ## Getting the data
#
# Here you can download the SVHN dataset. Run the cell below and it'll download to your machine.
# + from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) # - # These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above. trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') # Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) # Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. 
We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), y # ## Network Inputs # # Here, just creating some placeholders like normal. def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z # ## Generator # # Here you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. # # What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. 
# Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
#
# You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
#
# ![DCGAN Generator](assets/dcgan.png)
#
# Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
#
# >**Exercise:** Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.

def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer
        x

        # Output layer, 32x32x3
        logits =

        out = tf.tanh(logits)

        return out

# ## Discriminator
#
# Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
#
# You'll also want to use batch normalization with `tf.layers.batch_normalization` on each layer except the first convolutional and output layers.
Again, each layer should look something like convolution > batch norm > leaky ReLU. # # Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set `training` to `True`.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the `training` parameter appropriately. # # >**Exercise:** Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first. def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x = logits = out = return out, logits # ## Model Loss # # Calculating the loss like before, nothing new here. def model_loss(input_real, input_z, output_dim, alpha=0.2): """ Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) """ g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss # ## Optimizers # # Not much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` 
block so the batch normalization layers can update their population statistics. def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt # ## Building the model # # Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=alpha) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1) # Here is a function for displaying generated images. 
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img, aspect='equal') plt.subplots_adjust(wspace=0, hspace=0) return fig, axes # And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the `tf.control_dependencies` block we created in `model_opt`. def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(72, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % 
show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True, training=False), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 6, 12, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples # ## Hyperparameters # # GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them. # # >**Exercise:** Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time. # + real_size = (32,32,3) z_size = 100 learning_rate = 0.001 batch_size = 64 epochs = 1 alpha = 0.01 beta1 = 0.9 # Create the network net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1) # - # Load the data and train the network here dataset = Dataset(trainset, testset) losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5)) fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator', alpha=0.5) plt.plot(losses.T[1], label='Generator', alpha=0.5) plt.title("Training Losses") plt.legend() _ = view_samples(-1, samples, 6, 12, figsize=(10,5))
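Before leaving this notebook, the up-sampling arithmetic behind the generator exercise is worth checking by hand. With `'same'` padding, a stride-`s` transposed convolution multiplies the spatial size by `s` (and a strided convolution divides it), so a 4x4 starting layer reaches the 32x32x3 output in three stride-2 steps. A small sketch of that bookkeeping — the depth sequence 512→256→128 is one reasonable choice, not the only one:

```python
def conv_transpose_same_out(size, stride):
    # 'same' padding: output spatial size is input size * stride
    return size * stride

# start from the reshaped fully connected layer, e.g. 4x4x512
shapes = [(4, 512)]
for depth in (256, 128, 3):          # halve depth each step, end at 3 RGB channels
    size = conv_transpose_same_out(shapes[-1][0], stride=2)
    shapes.append((size, depth))

print(shapes)   # [(4, 512), (8, 256), (16, 128), (32, 3)]
```

Running the same rule in reverse (divide by the stride) gives the 32→16→8→4 progression the discriminator needs before its final fully connected layer.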
dcgan-svhn/DCGAN_Exercises.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from CSVUtils import *
import pandas as pd
import numpy as np
from os import path

DIR = "./input/currency"
name_suffix = ['1995-2009', '2010-2020']

df_list = []
for suffix in name_suffix:
    filename = "USD_TWD Historical Data_" + suffix + ".csv"
    df = csv2df(DIR, filename, source="investing")
    # The files quote TWD per USD; invert to get USD per TWD
    df['Price'] = 1/df['Price']
    df['Open'] = 1/df['Open']
    df['High'] = 1/df['High']
    df['Low'] = 1/df['Low']
    # Sort chronologically *before* computing returns; pct_change() over
    # reverse-ordered rows would give wrongly aligned changes
    df = df.sort_values('Date').reset_index(drop=True)
    df['Change'] = df['Price'].pct_change()
    df_list.append(df)

df[df['Date'] == pd.to_datetime("2015-08-25")]

twd_df = pd.concat(df_list)
twd_df = twd_df.dropna()
twd_df.reset_index(drop=True, inplace=True)
twd_df

DIR = "./input/currency"
name_suffix = ['1995-2009', '2010-2020']
df_list = []
prefix = "BRL_USD Historical Data_"
for suffix in name_suffix:
    filename = prefix + suffix + ".csv"
    df = csv2df(DIR, filename, source="investing")
    df = df.sort_values('Date').reset_index(drop=True)
    df_list.append(df)

brl_df = pd.concat(df_list)
brl_df = brl_df.dropna()
brl_df.reset_index(drop=True, inplace=True)
brl_df

twd_df.to_csv("./input/currency/TWD_1995-2020.csv", index=False)
brl_df.to_csv("./input/currency/BRL_1995-2020.csv", index=False)

DIR = "./from github/Stock-Trading-Environment/data"
twii_df = csv2df(DIR, "^TWII.csv", source="yahoo")
bvsp_df = csv2df(DIR, "^BVSP.csv", source="yahoo")
twd_df = pd.read_csv("./input/currency/TWD_1995-2020.csv")
brl_df = pd.read_csv("./input/currency/BRL_1995-2020.csv")
twd_df['Date'] = pd.to_datetime(twd_df['Date'])
brl_df['Date'] = pd.to_datetime(brl_df['Date'])

def normalize_to_usd(market_df, currency_df):
    currency_df = currency_df[['Date', 'Price']]
    common_date = np.intersect1d(currency_df['Date'], market_df['Date'])
    currency_df =
currency_df[currency_df['Date'].isin(common_date)] currency_df.reset_index(drop=True, inplace=True) market_df = market_df[market_df['Date'].isin(common_date)] market_df.reset_index(drop=True, inplace=True) market_df.loc[:,'Open'] *= currency_df['Price'] market_df.loc[:,'High'] *= currency_df['Price'] market_df.loc[:,'Low'] *= currency_df['Price'] market_df.loc[:,'Price'] *= currency_df['Price'] market_df.loc[:,'Change'] = market_df['Price'].pct_change() return market_df twii_df_new = normalize_to_usd(twii_df,twd_df) twii_df_new bvsp_df_new = normalize_to_usd(bvsp_df,brl_df) bvsp_df_new twii_df_new.to_csv("./from github\\Stock-Trading-Environment\\data\\^TWII_new.csv", index=False) bvsp_df_new.to_csv("./from github\\Stock-Trading-Environment\\data\\^BVSP_new.csv", index=False) "aaa_new"[-4:] twd_exchange = twd_df[['Date', 'Price']] common_date = np.intersect1d(twd_exchange['Date'] , twii_df['Date']) twd_exchange = twd_exchange[twd_exchange['Date'].isin(common_date)] twd_exchange.reset_index(drop=True, inplace=True) twii_df = twii_df[twii_df['Date'].isin(common_date)] twii_df.reset_index(drop=True, inplace=True) twii_df twii_df['Open'] *= twd_exchange['Price'] twii_df['High'] *= twd_exchange['Price'] twii_df['Low'] *= twd_exchange['Price'] twii_df['Price'] *= twd_exchange['Price'] twii_df['Change'] = twii_df['Price'].pct_change() twii_df
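The `normalize_to_usd` pattern above, restricting both frames to their common dates and then multiplying the price columns by the exchange rate, can be sketched on toy data (all values here are made up):

```python
import numpy as np
import pandas as pd

market = pd.DataFrame({"Date": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]),
                       "Price": [100.0, 110.0, 120.0]})
fx = pd.DataFrame({"Date": pd.to_datetime(["2020-01-02", "2020-01-03", "2020-01-04"]),
                   "Price": [0.033, 0.034, 0.035]})  # USD per local unit

# Keep only the dates present in both frames, re-index, then convert
common = np.intersect1d(market["Date"], fx["Date"])
market = market[market["Date"].isin(common)].reset_index(drop=True)
fx = fx[fx["Date"].isin(common)].reset_index(drop=True)
market["Price"] *= fx["Price"]  # row-wise: positions line up after reset_index

print(market["Price"].round(2).tolist())  # -> [3.63, 4.08]
```

The `reset_index(drop=True)` on both frames is what makes the element-wise multiplication align by row position; without it, pandas would align on the original (different) indices and produce NaNs.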
.ipynb_checkpoints/0304 - currency history cleaning-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 1. Introduction to regression
#
# Let's make a prediction based on a linear regression.
# We start from the analyzed, normalized and trimmed data produced in notebook 0, which we use for training.
#
# This method makes predictions with linear regressions, with and without regularization.
#
# Starting from an initial model, we will run an iterative process of validation and tuning (changing parameters and variables) until we obtain the model that best predicts our target, without under- or overfitting.
#
# ## Data import and variable selection
#

# +
# Libraries to use
import pandas as pd
import numpy as np

# Data import
data = pd.read_csv("data/PreciosCasas/train_final.csv", sep='\t', encoding='utf-8')
# print a summary of the data
data.describe()
# -

data.shape

#
# ### Rescaling
#
# Unlike with other models, when applying linear regression models with penalty terms (such as Lasso or Ridge) we must have all variables on the same scale

# Min-max scaling: note the denominator must be (max - min), not max alone
data = (data - data.min()) / (data.max() - data.min())
print(data.max().max())
print(data.min().min())

# +
# Let's choose the variables: everything as columns, and SalePrice as the target
X = data.loc[:, data.columns != 'Unnamed: 0']  # .ix was removed from pandas; use .loc
X = X.loc[:, X.columns != 'SalePrice']
X = X.loc[:, X.columns != 'Id']
print(X.head())

y = data['SalePrice']
# -

X.describe()

X.columns.values

# ## Implementing the regression model
#
# We will first run an unregularized linear regression and analyze the model, and then try the different kinds of regularization to see how they improve it.
#
# - **A. Unregularized linear regression**
#
# To be able to validate the model as we go, we will split the data into two groups, predictors and target.
# We'll do this with a split driven by a randomly generated number. Since we want the same result every time we run the model, we set the random_state argument.
#

# +
# Library imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
# %matplotlib inline
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Split the data into two groups
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)
# -

# Linear regression model
LR = LinearRegression()
LR.fit(train_X, train_y)

# Most influential variables
maxcoef = np.argsort(-np.abs(LR.coef_))
coef = LR.coef_[maxcoef]
for i in range(0, 5):
    print("{:.<025} {:< 010.4e}".format(train_X.columns[maxcoef[i]], coef[i]))

# We can see the model is badly overfitted and the error is huge (the plot shows that, as a model, it's really not good at all). This is probably the fault of those variables
#
# *Note: this was supposedly checked in the "Data analysis" step, where the dependent variables were removed using the heat maps, so let's move on to tuning this model and see what happens*

train_X['ExterQual_Gd'].head(5)

prediccion = LR.predict(val_X)
print(mean_absolute_error(val_y, prediccion))

# That's an enormous error ...
# let's check whether it isn't properly normalized or something, and see what values it gives

print(prediccion[:100])
print(max(prediccion))

val_y.head(5)

# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y);
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()

# In this case the model works terribly and has an enormous error, so let's start adjusting it (I don't know what makes val_y go to such very large values)
#
# - **B. Linear regression with Lasso**
#

# Lasso linear regression model
Ls = LassoCV()
Ls.fit(train_X, train_y)

prediccion = Ls.predict(val_X)
print(mean_absolute_error(val_y, prediccion));

# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y);
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()

# Most influential variables
maxcoef = np.argsort(-np.abs(Ls.coef_))
coef = Ls.coef_[maxcoef]
for i in range(0, 5):
    print("{:.<025} {:< 010.4e}".format(train_X.columns[maxcoef[i]], coef[i]))

# Better now; it is true that floor size (1stFlrSF) and quality (OverallQual) are variables to take into account when setting the price
#
# - **C. Ridge linear regression**
#

# Ridge linear regression model
Rr = RidgeCV()
Rr.fit(train_X, train_y)

prediccion = Rr.predict(val_X)
print(mean_absolute_error(val_y, prediccion));

# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y);
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()

# Most influential variables
maxcoef = np.argsort(-np.abs(Rr.coef_))
coef = Rr.coef_[maxcoef]
for i in range(0, 5):
    print("{:.<025} {:< 010.4e}".format(train_X.columns[maxcoef[i]], coef[i]))

# Similar to what we obtained with L1, so that's good. Let's see what happens if we now combine the two regularizations
#
#
# - **D. Elastic net linear regression**
#
# The advantage of combining the two is that if two variables are correlated, it will keep both

# Elastic net linear regression model
EN = ElasticNetCV(l1_ratio=np.linspace(0.1, 1.0, 5))  # we try to flatten the Rr
train_EN = EN.fit(train_X, train_y)

prediccion = EN.predict(val_X)
print(mean_absolute_error(val_y, prediccion));

# Let's look at it in a scatter plot
plt.scatter(prediccion, val_y);
plt.title('Validation');
plt.ylabel('Model');
plt.xlabel('Prediction');
plt.show()

# Most influential variables
maxcoef = np.argsort(-np.abs(EN.coef_))
coef = EN.coef_[maxcoef]
for i in range(0, 5):
    print("{:.<025} {:< 010.4e}".format(train_X.columns[maxcoef[i]], coef[i]))

# Great, similar to the other two, but of course... which one is best?
#
# ## Selection
#
# Let's compare all the options

model = [LR, Ls, Rr, EN]
M = len(model)
CV = 5
score = np.empty((M, CV))
for i in range(0, M):
    score[i, :] = cross_val_score(model[i], train_X, train_y, cv=CV)
print(score.mean(axis=1))

# Plain linear regression does not seem to work very well, which is why it needs regularization (any of the three we have applied would do, and each improves things a lot); still, for this data, this model does not seem to be the best one
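To see mechanically what the Ridge penalty does, here is a NumPy-only sketch of the closed-form solution w = (XᵀX + αI)⁻¹Xᵀy on toy data. This is separate from the scikit-learn estimators used above (which also pick α by cross-validation); the data and α values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge(X, y, alpha):
    # Closed form: w = (X^T X + alpha * I)^(-1) X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_ols = ridge(X, y, alpha=0.0)    # alpha = 0 recovers ordinary least squares
w_reg = ridge(X, y, alpha=100.0)  # a large penalty shrinks the weights

print(np.abs(w_reg).sum() < np.abs(w_ols).sum())  # shrinkage: True
```

The penalty only ever shrinks coefficient magnitudes; Lasso (an L1 penalty) behaves similarly but can set coefficients exactly to zero, which is why it doubles as variable selection.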
1. Introduccion a la regression.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.9.5 64-bit ('3.9')
#     name: python3
# ---

# # _PyEnzyme_ - Export template
#
# #### Usage
#
# - This template offers all functionalities to write information to a given EnzymeML document.
# - Simply reduce the template to your application-specific variables and map these to your own application.
# ------------------------------

from pyenzyme.enzymeml.core import EnzymeMLDocument, Vessel, Protein, Reactant, EnzymeReaction, Creator, Replicate
from pyenzyme.enzymeml.models import KineticModel
from pyenzyme.enzymeml.tools import EnzymeMLWriter

# ## Initialize EnzymeML document
#
# - The blank EnzymeMLDocument object serves as a container for all those objects to be added
# - Simply pre-define your objects such as proteins, reactants or reactions and call the addX function
# - Units will be parsed automatically if they align with the following convention
#
#       Unit / Unit => e.g. mole / l
#
# **!! Please make sure to separate each word or "/" via space !!**

enzmldoc = EnzymeMLDocument("Your experiment name")

# ## User information
#
# Information about the creators of a given EnzymeML document is stored within a list of _Creator_ objects. Via the _setCreator_ function the creator is added to the document. Note that you can also enter a list of _Creator_ objects.
#
# Attributes:
# - Given name
# - Last name
# - E-mail

# +
creator_1 = Creator(family_name="ML", given_name="Enzyme", mail="EnzymeML@PyEnzyme")
creator_2 = Creator(family_name="Musterman", given_name="Max", mail="Max.Mustermann@PyEnzyme")

creators = [creator_1, creator_2]  # for multiple creators use a list

enzmldoc.setCreator(creators)
# -

# ## Vessel
#
# - In order to add vessel information to your EnzymeML document, pre-define each by creating an instance of a _Vessel_ object.
# - When finished, simply set the _Vessel_ object to the _EnzymeMLDocument_ object via its _setVessel_ method.
# - Units are parsed and added to the _UnitDictionary_ by the backend.
#
# Attributes:
# - Name: Systematic name of vessel
# - Size: Value of size
# - Unit: Size defining unit (e.g. volumetric such as 'ml')
#

vessel = Vessel(
    name="Reaction Vessel",
    id_="v0",
    size=1.0,
    unit='ml'
)

vessel_id = enzmldoc.setVessel(vessel)

# ## Proteins
#
# - In order to add protein information to your EnzymeML document, pre-define each by creating an instance of a _Protein_ object.
# - When finished, simply add the _Protein_ object via the _addProtein_ function of the _EnzymeMLDocument_ object.
# - Units are parsed and added to the _UnitDictionary_ as well as internal IDs given by the backend.
#
# Attributes:
# - ID: Internal identifier
# - Name: Systematic name of protein
# - Conc(entration): Value of initial concentration
# - Unit: Name of the concentration unit
# - Sequence: Protein amino acid sequence
# - Vessel: Name of vessel used in experiment

protein_1 = Protein(
    name="EnzymeMLase",
    sequence="ENZYMEML",
    vessel=vessel_id,
    init_conc=1.0,
    substanceunits="mmole / l",
    organism="E.coli",
    ecnumber="EC:1.2.2.4"
)

protein_1 = enzmldoc.addProtein(protein_1)

# ----------
# **Important**
#
# - When adding proteins the function will return the ID
# - Store the ID to use it later on in a reaction
#
# --------

# ## Reactants
#
# - In order to add reactant information to your EnzymeML document, pre-define each by creating an instance of a _Reactant_ object.
# - When finished, simply add the _Reactant_ object via the _addReactant_ function of the _EnzymeMLDocument_ object.
# - Units are parsed and added to the _UnitDictionary_ as well as internal IDs given by the backend.
#
# Attributes:
# - Name: Systematic name of reactant
# - Compartment ID: Internal ID of your pre-defined Vessel
# - Initial concentration: Value of the initial concentration
# - Substance Unit: Name of the concentration unit
# - Constant: Whether or not the substance is at constant concentration
#
# - Inchi: String defining the INCHI-encoded molecular composition
# - Smiles: String defining the SMILES-encoded molecular composition

reactant_1 = Reactant(
    name="Reactant1",
    vessel=vessel_id,
    init_conc=1.0,
    substanceunits="mmole / l",
    constant=True,
    inchi="INCHI",
    smiles="SMILES"
)

reactant_1 = enzmldoc.addReactant(reactant_1)

# -------
# **Important**
#
# - When adding reactants the function will return the ID
# - Store the ID to use it later on in a reaction
# ---------

# ## Reactions
#
# - In order to add reaction information to your EnzymeML document, pre-define each by creating an instance of an _EnzymeReaction_ object.
# - When finished, simply add the _EnzymeReaction_ object via the _addReaction_ function of the _EnzymeMLDocument_ object.
# - Units are parsed and added to the _UnitDictionary_ as well as internal IDs given by the backend.
#
#
# Attributes:
#
# - Name: Reaction name
# - Temperature: Value of temperature
# - Temp Unit: Unit defining the temperature
# - pH: Value of pH
# - Reversible: Whether or not the reaction is reversible
#

reaction_1 = EnzymeReaction(
    name="Reaction1",
    temperature=20.0,
    tempunit="C",
    ph=7.0,
    reversible=True
)

# ### Building reaction
#
# - In _PyEnzyme_ reactions are built by using the pre-defined reactants and protein.
# - Educts, products as well as modifiers such as a protein or buffer are added to the reaction via the respective _addXX_ method inherited by the _EnzymeReaction_ object.
# - **If you previously stored reactant/protein IDs (returned by the _addXX_ function) make sure you use them when building reactions to guarantee consistency** # # Attributes # # - ID: Internal ID of educt/product/modifier # - Stoichiometry: Floating point number defining stoichiometry # - Constant: Whether or not the participant is constant # - enzmldoc: EnzymeMLDocument class to which it is added. Checks consistency of IDs. # # + reaction_1.addEduct( speciesID=reactant_1, stoichiometry=1.0, isConstant=False, enzmldoc=enzmldoc ) reaction_1.addModifier( speciesID=protein_1, stoichiometry=1.0, isConstant=True, enzmldoc=enzmldoc ) # - # ### Add replicate data # # - In order to add replicates and time course data, pre-define the _Replicate_ object and set its data via the objects method _setData_. # - The replicate is then added to the respective educt/product/modifier by the given ID # # Attributes # # - Replica: Unique ID for the replicate # - Reactant: Unique ID for the reactant/protein # - Type: Defines the type of data (e.g. concentration, photometric) # - Data unit: Unit of given data # - Time unit: Unit of given time # replicate_1 = Replicate( replica="Replica_1", reactant=reactant_1, type_="conc", data_unit="mmole / l", time_unit="s", init_conc=1.0, measurement="m0" ) # + data = [1,2,3,4,5,6] # Here should be your own data time = [1,2,3,4,5,6] # Here should be your own data replicate_1.setData(data, time) # - reaction_1.addReplicate(replicate_1, enzmldoc) # ### Add reaction to _EnzymeMLDocument_ # # - When the creation of the reaction is completed, simply add the _EnzymeReaction_ object to the _EnzymeMLDocument_ via its _addReaction_ method. 
reaction1 = enzmldoc.addReaction(reaction_1) # # Modelling # # - Within _PyEnzyme_ it is also possible to store kinetic models # - Create a _KineticModel_ object and add it to your reaction # - This can either be done while creating an _EnzymeReaction_ object or afterwards # # Attributes: # # - Equation: String that defines the kinetic model ( use Internal IDs as variables ) # - Parameters: Dictionary with parameter name as key and respective numeric value # # + equation = "s0 * vmax / ( s0 + Km )" parameters = dict() parameters['Km_s0'] = (1.0, "mmole / s") parameters['vmax_s0'] = (10.0, "mmole / l") kinetic_model = KineticModel( equation=equation, parameters=parameters, enzmldoc=enzmldoc ) # - enzmldoc.getReaction(reaction1).setModel(kinetic_model) # # Ready to write # # - Simply call the _EnzymeMLWriter_ class to write your _EnzymeMLDocument_ to an .omex container enzmldoc.printUnits() enzmldoc.printProteins() enzmldoc.printReactants() enzmldoc.printReactions() out_dir = "YourDirectory/YourFilename" EnzymeMLWriter().toFile( enzmldoc=enzmldoc, path=out_dir )
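The example rate law above, `s0 * vmax / ( s0 + Km )`, is the Michaelis-Menten equation. A small NumPy sketch, using the same illustrative parameter values as the template, shows its saturation behaviour:

```python
import numpy as np

def michaelis_menten(s, vmax, km):
    # v = vmax * s / (s + Km): roughly linear in s for s << Km, saturating at vmax
    return vmax * s / (s + km)

km, vmax = 1.0, 10.0  # the illustrative values from the template above
s = np.array([0.01, 1.0, 1000.0])
rates = michaelis_menten(s, vmax, km)
print(rates)  # at s == Km the rate is exactly vmax / 2
```

Km is the substrate concentration at which the rate is half of vmax, which is why it should carry a concentration unit while vmax carries a rate unit.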
templates/TL_ExportTemplate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ScitaPqhKtuW" # ##### Copyright 2018 The TensorFlow Hub Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # + id="bNnChGfZK2_w" # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== # + [markdown] id="9Z_ZvMk5JPFV" # # Classify Flowers with Transfer Learning # # + [markdown] id="MfBg1C5NB3X0" # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/image_feature_vector"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> # </td> # <td> # <a 
href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> # <td> # <a href="https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> # </td> # </table> # + [markdown] id="gh-LWtlqLtgH" # Have you ever seen a beautiful flower and wondered what kind of flower it is? Well, you're not the first, so let's build a way to identify the type of flower from a photo! # # For classifying images, a particular type of *deep neural network*, called a *convolutional neural network* has proved to be particularly powerful. However, modern convolutional neural networks have millions of parameters. Training them from scratch requires a lot of labeled training data and a lot of computing power (hundreds of GPU-hours or more). We only have about three thousand labeled photos and want to spend much less time, so we need to be more clever. # # We will use a technique called *transfer learning* where we take a pre-trained network (trained on about a million general images), use it to extract features, and train a new layer on top for our own task of classifying images of flowers. # # + [markdown] id="ORy-KvWXGXBo" # ## Setup # # + id="NTrs9zBKJK1c" import collections import io import math import os import random from six.moves import urllib from IPython.display import clear_output, Image, display, HTML import tensorflow.compat.v1 as tf tf.disable_v2_behavior() import tensorflow_hub as hub import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn.metrics as sk_metrics import time # + [markdown] id="Do-T63G7NCSB" # ## The flowers dataset # # The flowers dataset consists of images of flowers with 5 possible class labels. # # When training a machine learning model, we split our data into training and test datasets. 
We will train the model on our training data and then evaluate how well the model performs on data it has never seen - the test set. # # Let's download our training and test examples (it may take a while) and split them into train and test sets. # # Run the following two cells: # + cellView="both" id="HYQr1SILIxSK" FLOWERS_DIR = './flower_photos' TRAIN_FRACTION = 0.8 RANDOM_SEED = 2018 def download_images(): """If the images aren't already downloaded, save them to FLOWERS_DIR.""" if not os.path.exists(FLOWERS_DIR): DOWNLOAD_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz' print('Downloading flower images from %s...' % DOWNLOAD_URL) urllib.request.urlretrieve(DOWNLOAD_URL, 'flower_photos.tgz') # !tar xfz flower_photos.tgz print('Flower photos are located in %s' % FLOWERS_DIR) def make_train_and_test_sets(): """Split the data into train and test sets and get the label classes.""" train_examples, test_examples = [], [] shuffler = random.Random(RANDOM_SEED) is_root = True for (dirname, subdirs, filenames) in tf.gfile.Walk(FLOWERS_DIR): # The root directory gives us the classes if is_root: subdirs = sorted(subdirs) classes = collections.OrderedDict(enumerate(subdirs)) label_to_class = dict([(x, i) for i, x in enumerate(subdirs)]) is_root = False # The sub directories give us the image files for training. else: filenames.sort() shuffler.shuffle(filenames) full_filenames = [os.path.join(dirname, f) for f in filenames] label = dirname.split('/')[-1] label_class = label_to_class[label] # An example is the image file and it's label class. examples = list(zip(full_filenames, [label_class] * len(filenames))) num_train = int(len(filenames) * TRAIN_FRACTION) train_examples.extend(examples[:num_train]) test_examples.extend(examples[num_train:]) shuffler.shuffle(train_examples) shuffler.shuffle(test_examples) return train_examples, test_examples, classes # + id="_9NklpcANhtB" # Download the images and split the images into train and test sets. 
download_images() TRAIN_EXAMPLES, TEST_EXAMPLES, CLASSES = make_train_and_test_sets() NUM_CLASSES = len(CLASSES) print('\nThe dataset has %d label classes: %s' % (NUM_CLASSES, CLASSES.values())) print('There are %d training images' % len(TRAIN_EXAMPLES)) print('there are %d test images' % len(TEST_EXAMPLES)) # + [markdown] id="tHF7bHTfnD6S" # ## Explore the data # # The flowers dataset consists of examples which are labeled images of flowers. Each example contains a JPEG flower image and the class label: what type of flower it is. Let's display a few images together with their labels. # + cellView="both" id="1friUvN6kPYM" #@title Show some labeled images def get_label(example): """Get the label (number) for given example.""" return example[1] def get_class(example): """Get the class (string) of given example.""" return CLASSES[get_label(example)] def get_encoded_image(example): """Get the image data (encoded jpg) of given example.""" image_path = example[0] return tf.gfile.GFile(image_path, 'rb').read() def get_image(example): """Get image as np.array of pixels for given example.""" return plt.imread(io.BytesIO(get_encoded_image(example)), format='jpg') def display_images(images_and_classes, cols=5): """Display given images and their labels in a grid.""" rows = int(math.ceil(len(images_and_classes) / cols)) fig = plt.figure() fig.set_size_inches(cols * 3, rows * 3) for i, (image, flower_class) in enumerate(images_and_classes): plt.subplot(rows, cols, i + 1) plt.axis('off') plt.imshow(image) plt.title(flower_class) NUM_IMAGES = 15 #@param {type: 'integer'} display_images([(get_image(example), get_class(example)) for example in TRAIN_EXAMPLES[:NUM_IMAGES]]) # + [markdown] id="Hyjr6PuboTAg" # ## Build the model # # We will load a [TF-Hub](https://tensorflow.org/hub) image feature vector module, stack a linear classifier on it, and add training and evaluation ops. 
The following cell builds a TF graph describing the model and its training, but it doesn't run the training (that will be the next step). # + id="LbkSRaK_oW5Y" LEARNING_RATE = 0.01 tf.reset_default_graph() # Load a pre-trained TF-Hub module for extracting features from images. We've # chosen this particular module for speed, but many other choices are available. image_module = hub.Module('https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2') # Preprocessing images into tensors with size expected by the image module. encoded_images = tf.placeholder(tf.string, shape=[None]) image_size = hub.get_expected_image_size(image_module) def decode_and_resize_image(encoded): decoded = tf.image.decode_jpeg(encoded, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) return tf.image.resize_images(decoded, image_size) batch_images = tf.map_fn(decode_and_resize_image, encoded_images, dtype=tf.float32) # The image module can be applied as a function to extract feature vectors for a # batch of images. features = image_module(batch_images) def create_model(features): """Build a model for classification from extracted features.""" # Currently, the model is just a single linear layer. You can try to add # another layer, but be careful... two linear layers (when activation=None) # are equivalent to a single linear layer. You can create a nonlinear layer # like this: # layer = tf.layers.dense(inputs=..., units=..., activation=tf.nn.relu) layer = tf.layers.dense(inputs=features, units=NUM_CLASSES, activation=None) return layer # For each class (kind of flower), the model outputs some real number as a score # how much the input resembles this class. This vector of numbers is often # called the "logits". 
logits = create_model(features)

labels = tf.placeholder(tf.float32, [None, NUM_CLASSES])

# Mathematically, a good way to measure how much the predicted probabilities
# diverge from the truth is the "cross-entropy" between the two probability
# distributions. For numerical stability, this is best done directly from the
# logits, not the probabilities extracted from them.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels)
cross_entropy_mean = tf.reduce_mean(cross_entropy)

# Let's add an optimizer so we can train the network.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE)
train_op = optimizer.minimize(loss=cross_entropy_mean)

# The "softmax" function transforms the logits vector into a vector of
# probabilities: non-negative numbers that sum up to one, and the i-th number
# says how likely the input comes from class i.
probabilities = tf.nn.softmax(logits)

# We choose the highest one as the predicted class.
prediction = tf.argmax(probabilities, 1)
correct_prediction = tf.equal(prediction, tf.argmax(labels, 1))

# The accuracy will allow us to eval on our test set.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# + [markdown] id="0vvhYQ7-3AG_"
# ## Train the network
#
# Now that our model is built, let's train it and see how it performs on our test set.

# + id="1YnBg7-OS3Dz"
# How long will we train the network (number of batches).
NUM_TRAIN_STEPS = 100 #@param {type: 'integer'}
# How many training examples we use in each step.
TRAIN_BATCH_SIZE = 10 #@param {type: 'integer'}
# How often to evaluate the model performance.
EVAL_EVERY = 10 #@param {type: 'integer'} def get_batch(batch_size=None, test=False): """Get a random batch of examples.""" examples = TEST_EXAMPLES if test else TRAIN_EXAMPLES batch_examples = random.sample(examples, batch_size) if batch_size else examples return batch_examples def get_images_and_labels(batch_examples): images = [get_encoded_image(e) for e in batch_examples] one_hot_labels = [get_label_one_hot(e) for e in batch_examples] return images, one_hot_labels def get_label_one_hot(example): """Get the one hot encoding vector for the example.""" one_hot_vector = np.zeros(NUM_CLASSES) np.put(one_hot_vector, get_label(example), 1) return one_hot_vector with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(NUM_TRAIN_STEPS): # Get a random batch of training examples. train_batch = get_batch(batch_size=TRAIN_BATCH_SIZE) batch_images, batch_labels = get_images_and_labels(train_batch) # Run the train_op to train the model. train_loss, _, train_accuracy = sess.run( [cross_entropy_mean, train_op, accuracy], feed_dict={encoded_images: batch_images, labels: batch_labels}) is_final_step = (i == (NUM_TRAIN_STEPS - 1)) if i % EVAL_EVERY == 0 or is_final_step: # Get a batch of test examples. test_batch = get_batch(batch_size=None, test=True) batch_images, batch_labels = get_images_and_labels(test_batch) # Evaluate how well our model performs on the test set. 
                test_loss, test_accuracy, test_prediction, correct_predicate = sess.run(
                    [cross_entropy_mean, accuracy, prediction, correct_prediction],
                    feed_dict={encoded_images: batch_images, labels: batch_labels})
                print('Test accuracy at step %s: %.2f%%' % (i, (test_accuracy * 100)))

# + id="ZFUNJxuH2t0V"
def show_confusion_matrix(test_labels, predictions):
    """Compute confusion matrix and normalize."""
    confusion = sk_metrics.confusion_matrix(
        np.argmax(test_labels, axis=1), predictions)
    # keepdims=True makes the division broadcast over rows (true classes),
    # so each row sums to one
    confusion_normalized = confusion.astype("float") / confusion.sum(axis=1, keepdims=True)
    axis_labels = list(CLASSES.values())
    ax = sns.heatmap(
        confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
        cmap='Blues', annot=True, fmt='.2f', square=True)
    plt.title("Confusion matrix")
    plt.ylabel("True label")
    plt.xlabel("Predicted label")

show_confusion_matrix(batch_labels, test_prediction)

# + [markdown] id="Uu3vo8DK8BdL"
# ## Incorrect predictions
#
# Let's take a closer look at the test examples that our model got wrong.
#
# - Are there any mislabeled examples in our test set?
# - Is there any bad data in the test set - images that aren't actually pictures of flowers?
# - Are there images where you can understand why the model made a mistake?

# + id="hqa0V3WN8C9M"
incorrect = [
    (example, CLASSES[prediction])
    for example, prediction, is_correct
    in zip(test_batch, test_prediction, correct_predicate)
    if not is_correct
]
display_images(
    [(get_image(example), "prediction: {0}\nlabel:{1}".format(incorrect_prediction, get_class(example)))
     for (example, incorrect_prediction) in incorrect[:20]])

# + [markdown] id="YN_s04Il8TvK"
# ## Exercises: Improve the model!
#
# We've trained a baseline model; now let's try to improve it to achieve better accuracy. (Remember that you'll need to re-run the cells when you make a change.)
#
# ### Exercise 1: Try a different image model.
# With TF-Hub, trying a few different image models is simple.
Just replace the `"https://tfhub.dev/google/imagenet/mobilenet_v2_050_128/feature_vector/2"` handle in the `hub.Module()` call with the handle of a different module and rerun all the code. You can see all available image modules at [tfhub.dev](https://tfhub.dev/s?module-type=image-feature-vector).
#
# A good choice might be one of the other [MobileNet V2 modules](https://tfhub.dev/s?module-type=image-feature-vector&network-architecture=mobilenet-v2). Many of the modules -- including the MobileNet modules -- were trained on the [ImageNet dataset](http://image-net.org/challenges/LSVRC/2012) which contains over 1 million images and 1000 classes. Choosing a network architecture provides a tradeoff between speed and classification accuracy: models like MobileNet or NASNet Mobile are fast and small, while more traditional architectures like Inception and ResNet were designed for accuracy.
#
# For the larger Inception V3 architecture, you can also explore the benefits of pre-training on a domain closer to your own task: it is also available as a [module trained on the iNaturalist dataset](https://tfhub.dev/google/inaturalist/inception_v3/feature_vector/1) of plants and animals.
#
# ### Exercise 2: Add a hidden layer.
# Stack a hidden layer between extracted image features and the linear classifier (in function `create_model()` above). To create a non-linear hidden layer with e.g. 100 nodes, use [tf.layers.dense](https://www.tensorflow.org/api_docs/python/tf/compat/v1/layers/dense) with units set to 100 and activation set to `tf.nn.relu`. Does changing the size of the hidden layer affect the test accuracy? Does adding a second hidden layer improve the accuracy?
#
# ### Exercise 3: Change hyperparameters.
# Does increasing the *number of training steps* improve the final accuracy? Can you *change the learning rate* to make your model converge more quickly? Does the training *batch size* affect your model's performance?
#
# ### Exercise 4: Try a different optimizer.
#
# Replace the basic GradientDescentOptimizer with a more sophisticated optimizer, e.g. [AdagradOptimizer](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/AdagradOptimizer). Does it make a difference to your model training? If you want to learn more about the benefits of different optimization algorithms, check out [this post](http://ruder.io/optimizing-gradient-descent/).

# + [markdown] id="kdwVXO1eJS5-"
# ## Want to learn more?
#
# If you are interested in a more advanced version of this tutorial, check out the [TensorFlow image retraining tutorial](https://www.tensorflow.org/hub/tutorials/image_retraining) which walks you through visualizing the training using TensorBoard, advanced techniques like dataset augmentation by distorting images, and replacing the flowers dataset to learn an image classifier on your own dataset.
#
# You can learn more about TensorFlow at [tensorflow.org](http://tensorflow.org), and the TF-Hub API documentation is available at [tensorflow.org/hub](https://www.tensorflow.org/hub/). Find available TensorFlow Hub modules at [tfhub.dev](http://tfhub.dev) including more image feature vector modules and text embedding modules.
#
# Also check out the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/) which is Google's fast-paced, practical introduction to machine learning.
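As an aside, the row-normalization performed in `show_confusion_matrix` above can be sketched with plain NumPy. The labels and predictions below are made-up toy values, not outputs of the model:

```python
import numpy as np

# Toy ground-truth labels and predictions (illustrative only).
true_labels = np.array([0, 0, 1, 1, 2])
predictions = np.array([0, 1, 1, 1, 2])
num_classes = 3

# Build the raw confusion matrix: rows are true classes, columns predictions.
confusion = np.zeros((num_classes, num_classes), dtype=float)
for t, p in zip(true_labels, predictions):
    confusion[t, p] += 1

# Divide each row by the number of true examples of that class. keepdims=True
# makes the division broadcast row-wise, so every row sums to 1.
confusion_normalized = confusion / confusion.sum(axis=1, keepdims=True)
```

Each cell then reads as "fraction of class *i* examples predicted as class *j*", which is what the heatmap visualizes.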
site/en-snapshot/hub/tutorials/image_feature_vector.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # HELIX-SCOPE
#
# ### Developing the data processing
#
# In this notebook we will test how best to process national summary statistics from the Helix consortium data. Summary statistics (mean, max, min and standard deviation) will be calculated for every shape in an arbitrary shapefile for every netcdf file on path.
#
# Data should be downloaded from the SFTP site (bi.nsc.liu.se), which requires a username and password login. The data should be placed in the `/data` folder within this repo.
#
# ** THIS NOTEBOOK IS FOR DEVELOPMENT PURPOSES ONLY**

from netCDF4 import Dataset
import os
import cartoframes
import re
import fiona
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterstats import zonal_stats
import geopandas as gpd
import pandas as pd
import numpy as np
from matplotlib.pyplot import cm
import matplotlib.pyplot as plt
import datetime
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline

# +
def identify_netcdf_and_csv_files(path='data'):
    """Crawl through a specified folder and return a dict of the netcdf d['nc']
    and csv d['csv'] files contained within.
    Returns something like {'nc':'data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc'}
    """
    netcdf_files = []
    csv_files = []
    for root, dirs, files in os.walk(path):
        if isinstance([], type(files)):
            for f in files:
                if f.split('.')[-1] in ['nc']:
                    netcdf_files.append(''.join([root, '/', f]))
                elif f.split('.')[-1] in ['csv']:
                    csv_files.append(''.join([root, '/', f]))
    return {'nc': netcdf_files, 'csv': csv_files}


def generate_metadata(filepath):
    """Pass a path and file as a single string.
    Expected in the form of:
    data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc
    """
    file_metadata = get_nc_attributes(filepath)
    filename_properties = extract_medata_from_filename(filepath)
    return {**file_metadata, **filename_properties}


def extract_medata_from_filename(filepath):
    """Extract additional data from the filename using REGEX."""
    warning = "Filepath should resemble: data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc"
    assert len(filepath.split('/')) == 4, warning
    fname = filepath.split("/")[3]
    variable = filepath.split("/")[2]
    model_taxonomy = re.search('(^.*?)\.', fname, re.IGNORECASE).group(1)
    model_short_name = re.search('(^.*?)-', model_taxonomy, re.IGNORECASE).group(1)
    return {"model_short_name": model_short_name,
            "variable": variable,
            "model_taxonomy": model_taxonomy}


def get_nc_attributes(filepath):
    """Most info is stored in the file's global attribute description; we will
    access it using the netCDF4 ncattrs function.
    Example: get_nc_attributes('data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc')
    """
    nc_file = Dataset(filepath, 'r')
    d = {}
    nc_attrs = nc_file.ncattrs()
    for nc_attr in nc_attrs:
        d.update({nc_attr: nc_file.getncattr(nc_attr)})
    could_be_true = ['true', 'True', 'TRUE']
    d['is_multi_model_summary'] = d['is_multi_model_summary'] in could_be_true
    d['is_seasonal'] = d['is_seasonal'] in could_be_true
    del d['contact']
    return d


def get_shape_attributes(i):
    """Get attributes of shapes for gadm28_admin1 data.
    The index (i) should be passed.
    """
    d = {}
    for table_attribute in ['iso', 'name_0', 'id_1', 'name_1', 'engtype_1']:
        try:
            d[table_attribute] = shps[table_attribute][i]
        except:
            d[table_attribute] = None
    return d
# -

# ## Single core process
#
# Single core version:
#
# Place the data folders from Helixscope into the data folder of this repo.
# # ``` # data # ├── CNRS_data # │   ├── README.txt # │   ├── cSoil # │   │   ├── orchidee-giss-ecearth.SWL_15.eco.cSoil.nc # │   │   ├── orchidee-giss-ecearth.SWL_2.eco.cSoil.nc # │   │   ├── orchidee-giss-ecearth.SWL_4.eco.cSoil.nc # │   │   ├── orchidee-ipsl-ecearth.SWL_15.eco.cSoil.nc # │   │   ├── orchidee-ipsl-ecearth.SWL_2.eco.cSoil.nc # │   │   ├── orchidee-ipsl-ecearth.SWL_4.eco.cSoil.nc # │   │   ├── orchidee-ipsl-hadgem.SWL_15.eco.cSoil.nc # │   │   ├── orchidee-ipsl-hadgem.SWL_2.eco.cSoil.nc # │   │   └── orchidee-ipsl-hadgem.SWL_4.eco.cSoil.nc # │   ├── cVeg # │   │   ├── orchidee-giss-ecearth.SWL_15.eco.cVeg.nc # │   │   ├── orchidee-giss-ecearth.SWL_2.eco.cVeg.nc # │   │   ├── orchidee-giss-ecearth.SWL_4.eco.cVeg.nc # │   │   ├── orchidee-ipsl-ecearth.SWL_15.eco.cVeg.nc # │   │   ├── orchidee-ipsl-ecearth.SWL_2.eco.cVeg.nc # ``` # # Also include the shapefile in the data folder: # # ``` # ./data/minified_gadm28_countries/gadm28_countries.shp # ``` # + # %%time #shps = gpd.read_file('./data/minified_gadm28_countries/gadm28_countries.shp') shps = gpd.read_file('./data/gadm28_adm1/gadm28_adm1.shp') shps = shps.to_crs(epsg='4326') files = identify_netcdf_and_csv_files() keys = ['name_0','iso','id_1','name_1','engtype_1','variable','SWL_info', 'count', 'max','min','mean','std','impact_tag','institution', 'model_long_name','model_short_name','model_taxonomy', 'is_multi_model_summary','is_seasonal'] # + def process_file(file, status=False, overwrite=False): """Given a single file, generate a csv table with the same folder/file name in ./data/processed/ with all required csv info. 
    Expect file to be a string e.g.:
    "data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc"
    """
    output_filename = "".join(['./processed/', file[5:-3], '.csv'])
    if os.path.isfile(output_filename) and not overwrite:
        print("{0} output exists.".format(output_filename))
        print("Specify overwrite=True on the process_file() function call if you want to replace it.")
        return
    else:
        if status:
            print("Processing '{}'".format(file))
        tmp_metadata = generate_metadata(file)
        with rasterio.open(file) as nc_file:
            rast = nc_file.read()
            properties = nc_file.profile
        tmp = rast[0, :, :]
        mask = tmp == properties.get('nodata')
        tmp[mask] = np.nan
        stats_per_file = []
        for i in shps.index:
            shp = shps.iloc[i].geometry
            zstats = zonal_stats(shp, tmp, band=1,
                                 stats=['mean', 'max', 'min', 'std', 'count'],
                                 all_touched=True, raster_out=False,
                                 affine=properties['transform'], no_data=np.nan)
            if zstats[0].get('count', 0) > 0:
                shp_atts = get_shape_attributes(i)
                tmp_d = {**zstats[0], **shp_atts, **tmp_metadata}
                stats_per_file.append([tmp_d.get(key, None) for key in keys])
        df = pd.DataFrame(stats_per_file, columns=keys)
        path_check = "/".join(output_filename.split("/")[0:-1])
        if not os.path.exists(path_check):
            os.makedirs(path_check)
        df.to_csv(output_filename, index=False)
        return


def combine_processed_results(path='./processed'):
    """Combine all the csv files in the path (e.g. all processed files)
    into a single master table
    """
    output_files = identify_netcdf_and_csv_files(path)
    frames = [pd.read_csv(csv_file) for csv_file in output_files['csv']]
    master_table = pd.concat(frames)
    master_table.to_csv("./master_admin1.csv", index=False)
    return
# -

for file in files.get('nc')[0:3]:
    print(file)
    process_file(file)

# +
# %%time
for file in files.get('nc')[0:1]:
    print("Processing '{}'".format(file))
    tmp_metadata = generate_metadata(file)

    with rasterio.open(files['nc'][0]) as nc_file:
        rast = nc_file.read()
        properties = nc_file.profile

    tmp = rast[0, :, :]  # The first dim should be stripped
    mask = tmp == properties.get('nodata')  # Now we need to make a mask for missing data
    tmp[mask] = np.nan  # and replace it with a NAN value

    stats_per_file = []
    for i in shps.index:
        shp = shps.iloc[i].geometry
        zstats = zonal_stats(shp, tmp, band=1,
                             stats=['mean', 'max', 'min', 'std', 'count'],
                             all_touched=True, raster_out=False,
                             affine=properties['transform'], no_data=np.nan)
        if zstats[0].get('count', 0) > 0:
            # If shape generated stats, then add it
            # shp_atts = {'iso2': shps.iso2[i],
            #             'country': shps.name_engli[i]}
            shp_atts = get_shape_attributes(i)
            tmp_d = {**zstats[0], **shp_atts, **tmp_metadata}
            stats_per_file.append([tmp_d.get(key, None) for key in keys])
# -

df = pd.DataFrame(stats_per_file, columns=keys)
df.head()

df.to_csv('./processed/raw_output.csv')

# Next steps:
#
# * Need to ensure this can handle any admin1 shapefiles
#     - Done, but big slowdown (2 min per layer)
# * Need to parallelise this so it will run in a convenient time
# * Need to check that the regex changes Alex applied post-table creation are included

import cartoframes

shps.keys()

# +
with open(".env") as f:
    key = f.read()
    api_key = key.split('API_KEY=')[1].split()[0]

'helixscope'
# -

CF = cartoframes.CartoContext(
    creds=cartoframes.Credentials(username='pycones03', key=api_key)
)

CF.write(boundaries, 'uk_boundaries', overwrite=True)

# +
CF.map(layers=[
    cartoframes.BaseMap('light'),
    cartoframes.Layer('uk_boundaries'),
], interactive=True)

CF.map(layers=[
cartoframes.BaseMap('light'), cartoframes.Layer('trump', color={'column': 'trump_haters', 'scheme': cartoframes.styling.sunset(5)}), ], interactive=True) uk_pop = [{'numer_id': 'uk.ons.LC2102EW0001', 'normalization': 'prenormalized'}] augmented = CF.data_augment('trump', uk_pop) augmented.head()
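For reference, the regex-based filename parsing used earlier in `extract_medata_from_filename` can be traced step by step on a sample path (a sketch mirroring that helper, not a replacement for it):

```python
import re

# Sample path in the expected layout.
filepath = "data/CNRS_data/cSoil/orchidee-giss-ecearth.SWL_15.eco.cSoil.nc"

fname = filepath.split("/")[3]     # the filename component
variable = filepath.split("/")[2]  # the variable folder, e.g. 'cSoil'

# Non-greedy match up to the first '.' gives the full model taxonomy,
# and up to the first '-' within that gives the short model name.
model_taxonomy = re.search(r'(^.*?)\.', fname).group(1)
model_short_name = re.search(r'(^.*?)-', model_taxonomy).group(1)
```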
work/Raw_data_process.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import requests
data = {'name': 'liuyu', 'age': '18'}
response = requests.get('http://httpbin.org/get', params=data)
print(response.text)

import requests
response = requests.get('http://httpbin.org/get')

# Write text to a file
liuyu = 'hahahahahahahahahah'
file = '/Users/liuyu/Desktop/text.text'
with open(file, 'w', encoding='gbk') as f:
    f.write(liuyu)

liuyu = 'hahahahahahahahahah'
file = '/D:\18企业实训/text.text'
with open(file, 'w', encoding='gbk') as f:
    f.write(liuyu)

liuyu = 'hahahahahahahahahah'
file = 'C:\Users\刘钰\Desktop\text.text'
with open(file, 'w', encoding='gbk') as f:
    f.write(liuyu)

liuyu = 'hahahahahahahahahah'
file = 'C:\Users\刘钰\Desktop\test.txt'
with open(file, 'w', encoding='gbk') as f:
    f.write(liuyu)

# +
file_path = 'D:\day01\kaifangX.txt'
file = 'C:/Users/刘钰/Desktop/test.txt'
a = open(file, 'w', encoding='gbk')
with open(file_path, 'r', encoding='gbk', errors='ignore') as f:
    for i in range(10000):
        try:
            emile = f.readline().split(',')[9]
            a.write(emile)
        except Exception as e:
            print('empty line')
a.close()
# -
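The repeated failures above come from backslash escapes: in a normal string literal, the `\U` in `'C:\Users\...'` starts a Unicode escape and raises a `SyntaxError`. A raw string (or forward slashes, which Windows also accepts) avoids this. A small sketch, writing to a temporary directory instead of a real user path (the Windows-style path below is illustrative only):

```python
import os
import tempfile

# A raw string keeps backslashes literal, so '\U' is not treated as an escape.
windows_style = r'C:\Users\name\Desktop\test.txt'  # illustrative path only

# Write and read back text with context managers, in a portable location.
path = os.path.join(tempfile.gettempdir(), 'test.txt')
with open(path, 'w', encoding='utf-8') as f:
    f.write('hahahahahahahahahah')

with open(path, 'r', encoding='utf-8') as f:
    content = f.read()
```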
day09.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ``` # # The Elves managed to locate the chimney-squeeze prototype fabric for Santa's suit (thanks to someone who helpfully wrote its box IDs on the wall of the warehouse in the middle of the night). Unfortunately, anomalies are still affecting them - nobody can even agree on how to cut the fabric. # # The whole piece of fabric they're working on is a very large square - at least 1000 inches on each side. # # Each Elf has made a claim about which area of fabric would be ideal for Santa's suit. All claims have an ID and consist of a single rectangle with edges parallel to the edges of the fabric. Each claim's rectangle is defined as follows: # # The number of inches between the left edge of the fabric and the left edge of the rectangle. # The number of inches between the top edge of the fabric and the top edge of the rectangle. # The width of the rectangle in inches. # The height of the rectangle in inches. # A claim like #123 @ 3,2: 5x4 means that claim ID 123 specifies a rectangle 3 inches from the left edge, 2 inches from the top edge, 5 inches wide, and 4 inches tall. Visually, it claims the square inches of fabric represented by # (and ignores the square inches of fabric represented by .) in the diagram below: # # ........... # ........... # ...#####... # ...#####... # ...#####... # ...#####... # ........... # ........... # ........... # The problem is that many of the claims overlap, causing two or more claims to cover part of the same areas. For example, consider the following claims: # # #1 @ 1,3: 4x4 # #2 @ 3,1: 4x4 # #3 @ 5,5: 2x2 # Visually, these claim the following areas: # # ........ # ...2222. # ...2222. # .11XX22. # .11XX22. # .111133. # .111133. # ........ # The four square inches marked with X are claimed by both 1 and 2. 
(Claim 3, while adjacent to the others, does not overlap either of them.) # # If the Elves all proceed with their own plans, none of them will have enough fabric. How many square inches of fabric are within two or more claims? # ``` # + import re from collections import Counter REGEX_CLAIM = re.compile("#([0-9]*) @ ([0-9]*),([0-9]*): ([0-9]*)x([0-9]*)") def parse_claim(line): return tuple(int(g) for g in REGEX_CLAIM.match(line).groups()) def coordinates(claim): id, left, top, wide, tall = claim return Counter((x, y) for x in range(left, left+wide) for y in range(top, top+tall)) # - square = open("input/day03.txt") |> fmap$(parse_claim ..> coordinates) |> reduce$(+) sum(1 for i in square.values() if i > 1) # ``` # --- Part Two --- # Amidst the chaos, you notice that exactly one claim doesn't overlap by even a single square inch of fabric with any other claim. If you can somehow draw attention to it, maybe the Elves will be able to make Santa's suit after all! # # For example, in the claims above, only claim 3 is intact after all claims are made. # # What is the ID of the only claim that doesn't overlap? # ``` def no_overlaps(claim): return sum(1 for key in coordinates(claim).keys() if square[key] > 1) == 0 open("input/day03.txt") |> fmap$(parse_claim) |> filter$(no_overlaps) |> list
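The Coconut pipeline above (`|>`, `fmap$`, `reduce$(+)`) can be written in plain Python for readers unfamiliar with that syntax. This sketch uses the three example claims instead of the puzzle input file:

```python
import re
from collections import Counter

REGEX_CLAIM = re.compile(r"#([0-9]*) @ ([0-9]*),([0-9]*): ([0-9]*)x([0-9]*)")

def parse_claim(line):
    return tuple(int(g) for g in REGEX_CLAIM.match(line).groups())

def coordinates(claim):
    cid, left, top, wide, tall = claim
    return Counter((x, y) for x in range(left, left + wide)
                   for y in range(top, top + tall))

claims = [parse_claim(line)
          for line in ["#1 @ 1,3: 4x4", "#2 @ 3,1: 4x4", "#3 @ 5,5: 2x2"]]

# Equivalent of reduce$(+): total usage count per square inch of fabric.
square = Counter()
for claim in claims:
    square += coordinates(claim)

# Part 1: square inches claimed more than once.
overlap = sum(1 for count in square.values() if count > 1)

# Part 2: the claim whose squares are all claimed exactly once.
intact = [claim[0] for claim in claims
          if all(square[key] == 1 for key in coordinates(claim))]
```

On the example claims this gives an overlap of 4 square inches and claim 3 as the only intact claim, matching the walkthrough above.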
2018/jordi/Day 3 - No Matter How You Slice It.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="P7sX-B4aUbRN" pycharm={"name": "#%% md\n"} # # Introduction to RNNs # This notebook is part of the [SachsLab Workshop for Intracranial Neurophysiology and Deep Learning](https://github.com/SachsLab/IntracranialNeurophysDL). # # ### Normalize Environments # Run the first two cells to normalize Local / Colab environments, then proceed below for the lesson. # # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://colab.research.google.com/github/SachsLab/IntracranialNeurophysDL/blob/master/notebooks/06_01_intro_to_RNNs.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/SachsLab/IntracranialNeurophysDL/blob/master/notebooks/06_01_intro_to_RNNs.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # </table> # + colab_type="code" id="aL4QcKd5UbRT" outputId="a4accba4-507a-4eaf-cb68-c5a2263fd8aa" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 67} from pathlib import Path import os try: # See if we are running on google.colab import google.colab IN_COLAB = True # Setup tensorflow 2.0 # !pip install -q tensorflow-gpu==2.0.0-rc0 except ModuleNotFoundError: IN_COLAB = False import sys if Path.cwd().stem == 'notebooks': os.chdir(Path.cwd().parent) # + colab_type="code" id="rXM9gHqLUbRP" pycharm={"is_executing": false, "name": "#%%\n"} colab={} # Common imports import numpy as np import tensorflow as tf import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) # %matplotlib inline # %load_ext autoreload # %autoreload 2 # + [markdown] colab_type="text" 
id="k9Og3lOfUbRV" pycharm={"name": "#%% md\n"} # ## RNN Step-by-Step # A recurrent neural network (RNN) is an artificial neural network that maintains an internal state (memory) # as it processes items in a sequence. RNNs are most often applied to sequence data such as text # (sequences of characters; sequences of words) or time series (e.g., stock prices, weather data, neurophysiology). # RNNs can provide an output at each step of the sequence to predict different sequences (e.g., text translation) or # the next item in the same sequence (e.g., assistive typing, stock prediction, prosthetic control). # The loss does not back-propagate to update weights until the sequence is complete. RNNs can also be configured # to produce a single output at the end of a sequence (e.g., classify a tweet sentiment, decode categorical intention). # # We will create our own RNN step-by-step. We will train it using toy data that we generate. # + [markdown] colab_type="text" id="_DAvkmycUbRV" pycharm={"name": "#%% md\n"} # ### Generate data # X is a multi-channel time series and Y is a different multi-channel time series # constructed from a linear combination of a delayed version of X. 
# + colab_type="code" id="kA9Fqu_qUbRW" pycharm={"is_executing": false, "name": "#%%\n"} colab={} PEAK_FREQS = [10, 22, 75] # Hz FS = 1000.0 # Hz DURATION = 5.0 # seconds DELAY = 0.01 # seconds N_OUT = 2 # Y dimensions n_samples = int(DURATION * FS) delay_samples = int(DELAY * FS) t_vec = np.arange(n_samples + delay_samples) / FS X = np.zeros((n_samples + delay_samples, len(PEAK_FREQS)), dtype=np.float32) for chan_ix, pf in enumerate(PEAK_FREQS): X[:, chan_ix] = (1 / (chan_ix + 1) ) * np.sin(t_vec * 2 * np.pi * pf) # Create mixing matrix that mixes inputs into outputs W = 2 * np.random.rand(N_OUT, len(PEAK_FREQS)) - 1 # W = W / np.sum(W, axis=1, keepdims=True) b = np.random.rand(N_OUT) - 0.5 Y = W @ X[:-delay_samples, :].T + b[:, None] Y = Y.T X = X[delay_samples:, :] t_vec = t_vec[delay_samples:] X += 0.05*np.random.rand(*X.shape) Y += 0.05*np.random.rand(*Y.shape) # + colab_type="code" id="A4PXkiSEUbRY" outputId="9fcdf1da-e60b-4493-f8d8-41cc5007c14a" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 378} plt.figure(figsize=(8, 6), facecolor='white') plt.plot(t_vec[:100], X[:100, :]) plt.plot(t_vec[:100], Y[:100, :], 'k') plt.show() # + [markdown] colab_type="text" id="8ymuKRXPUbRc" pycharm={"name": "#%% md\n"} # ### Define forward pass and loss # We aren't actually going to run this next cell. # This is just to give you an indication of what the forward pass looks like. 
# + colab_type="code" id="spnNLauUUbRd" outputId="595f796f-e37a-4a3e-8eca-7da000931293" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 35} activation_fn = lambda x: x # activation_fn = np.tanh # Initialize parameters state_t = np.zeros((N_OUT,)) _W = np.random.random((N_OUT, X.shape[1])) # Mixes input with output _U = np.random.random((N_OUT, N_OUT)) # Mixes old state with output _b = np.random.random((N_OUT,)) # Bias term # Create variable to store output successive_outputs = [] # Iterate over each timestep in X for x_t in X: y_t = activation_fn(np.dot(_W, x_t) + np.dot(_U, state_t) + _b) successive_outputs.append(y_t) state_t = y_t final_output_sequence = np.stack(successive_outputs, axis=0) loss = np.mean( (Y - final_output_sequence)**2 ) print(loss) # + [markdown] colab_type="text" id="ukwchqVWUbRf" # ## RNN in Tensorflow # [Tutorial](https://www.tensorflow.org/tutorials/sequences/text_generation) (text generation w/ eager) # + [markdown] colab_type="text" id="cOHeBypcUbRg" pycharm={"name": "#%% md\n"} # ### Prepare data for Tensorflow # In the tutorial linked above, the `batch` transformation is used to convert a continuous sequence into # many sequences, then the batch transform is used AGAIN to get batches of sequences. # + colab_type="code" id="kW2toraOUbRg" pycharm={"is_executing": false, "name": "#%%\n"} colab={} SEQ_LENGTH = 200 BATCH_SIZE = 2 BUFFER_SIZE = 10000 dataset = tf.data.Dataset.from_tensor_slices((X, Y)) dataset = dataset.batch(SEQ_LENGTH, drop_remainder=True) # Continuous to segmented sequences dataset = dataset.shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) # Segmented sequences to batches of seg. seq. 
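The effect of the two `batch` calls can be checked with a plain-NumPy sketch of the resulting shapes. This only mimics the `tf.data` pipeline (shuffling omitted), using the notebook's `SEQ_LENGTH=200`, `BATCH_SIZE=2`, and a 5000-sample, 3-channel series:

```python
import numpy as np

SEQ_LENGTH, BATCH_SIZE = 200, 2
X = np.zeros((5000, 3), dtype=np.float32)  # same shape as the generated X

# First batch(): continuous series -> segments of SEQ_LENGTH (remainder dropped).
n_seqs = X.shape[0] // SEQ_LENGTH
segments = X[:n_seqs * SEQ_LENGTH].reshape(n_seqs, SEQ_LENGTH, X.shape[-1])

# Second batch(): segments -> batches of BATCH_SIZE segments (remainder dropped).
n_batches = n_seqs // BATCH_SIZE
batches = segments[:n_batches * BATCH_SIZE].reshape(
    n_batches, BATCH_SIZE, SEQ_LENGTH, X.shape[-1])
```

So the dataset yields batches shaped `(BATCH_SIZE, SEQ_LENGTH, n_channels)`, which is exactly what the `Input(shape=(SEQ_LENGTH, X.shape[-1]))` layer defined below expects.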
# + [markdown] colab_type="text" id="iVYi9e43UbRj" pycharm={"name": "#%% md\n"} # ### Define RNN model in Tensorflow # + colab_type="code" id="kxhHmWZXUbRk" outputId="d6324dfd-db59-44fa-f60c-79bf25979cd6" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 218} inputs = tf.keras.layers.Input(shape=(SEQ_LENGTH, X.shape[-1])) outputs = tf.keras.layers.SimpleRNN(N_OUT, return_sequences=True, activation='linear')(inputs) model = tf.keras.Model(inputs, outputs) model.compile(optimizer='rmsprop', loss='mean_squared_error') model.summary() # + colab_type="code" id="L9nwtlv8UbRn" pycharm={"is_executing": false, "name": "#%%\n"} colab={} # Just to save us some training time, let's cheat and init the model with good weights old_init_weights = model.layers[-1].get_weights() model.layers[-1].set_weights([W.T, old_init_weights[1], b]) # + colab_type="code" id="Xk1qktbEUbRo" outputId="a506a658-ffab-4bc5-de35-39c513a757df" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 1000} EPOCHS = 50 history = model.fit(x=dataset, epochs=EPOCHS, verbose=1) # + colab_type="code" id="buU5s_XlUbRr" outputId="c6ca8690-e302-485a-8f82-ae88217eb2d0" pycharm={"is_executing": false} colab={"base_uri": "https://localhost:8080/", "height": 378} _Y1 = model.predict(X[:SEQ_LENGTH, :][None, :, :]) plt.figure(figsize=(8, 6), facecolor='white') plt.plot(Y[:SEQ_LENGTH, :], label='real') plt.plot(_Y1[0], label='pred') plt.legend() plt.show() # + colab_type="code" id="VQeoT-SlUbRu" outputId="43725caf-8679-42f5-d27c-f479d28f0fd1" pycharm={"is_executing": false, "name": "#%%\n"} colab={"base_uri": "https://localhost:8080/", "height": 305} _W, _U, _b = model.layers[-1].get_weights() print(W) print(_W.T) print(b, _b) plt.figure(figsize=(10, 6), facecolor='white') plt.subplot(1, 2, 1) plt.imshow(W) plt.subplot(1, 2, 2) plt.imshow(_W.T) plt.show()
notebooks/06_01_intro_to_RNNs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # TimeEval shared parameter optimization result analysis # Automatically reload packages: # %load_ext autoreload # %autoreload 2 # imports import json import warnings import pandas as pd import numpy as np import scipy as sp import plotly.offline as py import plotly.graph_objects as go import plotly.figure_factory as ff import plotly.express as px from plotly.subplots import make_subplots from pathlib import Path from timeeval import Datasets # ## Configuration # # Target parameters that were optimized in this run (per algorithm): algo_param_mapping = { "FFT": ["context_window_size"], "Subsequence LOF": ["n_neighbors", "leaf_size"], "Spectral Residual (SR)": ["mag_window_size", "score_window_size"], "LaserDBN": ["n_bins"], "k-Means": ["n_clusters"], "XGBoosting (RR)": ["n_estimators", "train_window_size", "n_trees"], "Hybrid KNN": ["n_neighbors", "n_estimators"], "Subsequence IF": ["n_trees"], "DeepAnT": ["prediction_window_size"], "Random Forest Regressor (RR)": ["train_window_size", "n_trees"] } # Define data and results folder: # + # constants and configuration data_path = Path("../../data") / "test-cases" result_root_path = Path("../timeeval_experiments/results") experiment_result_folder = "2021-09-27_shared-optim" # build paths result_paths = [d for d in result_root_path.iterdir() if d.is_dir()] print("Available result directories:") display(result_paths) result_path = result_root_path / experiment_result_folder print("\nSelecting:") print(f"Data path: {data_path.resolve()}") print(f"Result path: {result_path.resolve()}") # - # Load results and dataset metadata: # + def extract_hyper_params(param_names): def extract(value): params = json.loads(value) result = "" for name in param_names: value = params[name] result += 
f"{name}={value}," return "".join(result.rsplit(",", 1)) return extract # load results print(f"Reading results from {result_path.resolve()}") df = pd.read_csv(result_path / "results.csv") # add dataset_name column df["dataset_name"] = df["dataset"].str.split(".").str[0] # add optim_params column df["optim_params"] = "" for algo in algo_param_mapping: df_algo = df.loc[df["algorithm"] == algo] df.loc[df_algo.index, "optim_params"] = df_algo["hyper_params"].apply(extract_hyper_params(algo_param_mapping[algo])) # load dataset metadata dmgr = Datasets(data_path) # - # Define plotting functions: # + def load_scores_df(algorithm_name, dataset_id, optim_params, repetition=1): params_id = df.loc[(df["algorithm"] == algorithm_name) & (df["collection"] == dataset_id[0]) & (df["dataset"] == dataset_id[1]) & (df["optim_params"] == optim_params), "hyper_params_id"].item() path = ( result_path / algorithm_name / params_id / dataset_id[0] / dataset_id[1] / str(repetition) / "anomaly_scores.ts" ) return pd.read_csv(path, header=None) def plot_scores(algorithm_name, dataset_name): if isinstance(algorithm_name, tuple): algorithms = [algorithm_name] elif not isinstance(algorithm_name, list): raise ValueError("Please supply a tuple (algorithm_name, optim_params) or a list thereof as first argument!") else: algorithms = algorithm_name # construct dataset ID dataset_id = ("GutenTAG", f"{dataset_name}.unsupervised") # load dataset details df_dataset = dmgr.get_dataset_df(dataset_id) # check if dataset is multivariate dataset_dim = df.loc[df["dataset_name"] == dataset_name, "dataset_input_dimensionality"].unique().item() dataset_dim = dataset_dim.lower() auroc = {} df_scores = pd.DataFrame(index=df_dataset.index) skip_algos = [] for algo, optim_params in algorithms: # get algorithm metric results try: auroc[(algo, optim_params)] = df.loc[ (df["algorithm"] == algo) & (df["dataset_name"] == dataset_name) & (df["optim_params"] == optim_params), "ROC_AUC" ].item() except ValueError: 
warnings.warn(f"No ROC_AUC score found! Probably {algo} with params {optim_params} was not executed on {dataset_name}.") auroc[(algo, optim_params)] = -1 skip_algos.append((algo, optim_params)) continue # load scores training_type = df.loc[df["algorithm"] == algo, "algo_training_type"].values[0].lower().replace("_", "-") try: df_scores[(algo, optim_params)] = load_scores_df(algo, ("GutenTAG", f"{dataset_name}.{training_type}"), optim_params).iloc[:, 0] except (ValueError, FileNotFoundError): warnings.warn(f"No anomaly scores found! Probably {algo} was not executed on {dataset_name} with params {optim_params}.") df_scores[(algo, optim_params)] = np.nan skip_algos.append((algo, optim_params)) algorithms = [a for a in algorithms if a not in skip_algos] # Create plot fig = make_subplots(2, 1) if dataset_dim == "multivariate": for i in range(1, df_dataset.shape[1]-1): fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, i], name=f"channel-{i}"), 1, 1) else: fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, 1], name="timeseries"), 1, 1) fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset["is_anomaly"], name="label"), 2, 1) for item in algorithms: algo, optim_params = item fig.add_trace(go.Scatter(x=df_scores.index, y=df_scores[item], name=f"{algo}={auroc[item]:.4f} ({optim_params})"), 2, 1) fig.update_xaxes(matches="x") fig.update_layout( title=f"Results of {','.join(np.unique([a for a, _ in algorithms]))} on {dataset_name}", height=400 ) return py.iplot(fig) # - # ## Analyze TimeEval results df[["algorithm", "dataset_name", "status", "AVERAGE_PRECISION", "PR_AUC", "RANGE_PR_AUC", "ROC_AUC", "execute_main_time", "optim_params"]] # --- # # ### Errors df_error_counts = df.pivot_table(index=["algo_training_type", "algorithm"], columns=["status"], values="repetition", aggfunc="count") df_error_counts = df_error_counts.fillna(value=0).astype(np.int64) # #### Aggregation of errors per algorithm grouped by algorithm training type for tpe in 
["SEMI_SUPERVISED", "SUPERVISED", "UNSUPERVISED"]: if tpe in df_error_counts.index: print(tpe) display(df_error_counts.loc[tpe]) # #### Slow algorithms # # Algorithms, for which more than 50% of all executions ran into the timeout. df_error_counts[df_error_counts["Status.TIMEOUT"] > (df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"])] # #### Broken algorithms # # Algorithms, which failed for at least 50% of the executions. error_threshold = 0.5 df_error_counts[df_error_counts["Status.ERROR"] > error_threshold*( df_error_counts["Status.TIMEOUT"] + df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"] )] # #### Detail errors # + algo_list = ["DeepAnT", "Hybrid KNN", "LaserDBN"] error_list = ["OOM", "Segfault", "ZeroDivisionError", "IncompatibleParameterConfig", "WrongDBNState", "other"] errors = pd.DataFrame(0, index=error_list, columns=algo_list, dtype=np.int_) for algo in algo_list: df_tmp = df[(df["algorithm"] == algo) & (df["status"] == "Status.ERROR")] for i, run in df_tmp.iterrows(): path = result_path / run["algorithm"] / run["hyper_params_id"] / run["collection"] / run["dataset"] / str(run["repetition"]) / "execution.log" with path.open("r") as fh: log = fh.read() if "status code '139'" in log: errors.loc["Segfault", algo] += 1 elif "status code '137'" in log: errors.loc["OOM", algo] += 1 elif "Expected n_neighbors <= n_samples" in log: errors.loc["IncompatibleParameterConfig", algo] += 1 elif "ZeroDivisionError" in log: errors.loc["ZeroDivisionError", algo] += 1 elif "does not have key" in log: errors.loc["WrongDBNState", algo] += 1 else: print(f'\n\n#### {run["dataset"]} ({run["optim_params"]})') print(log) errors.loc["other", algo] += 1 errors.T # - # --- # # ### Parameter assessment # + sort_by = ("ROC_AUC", "mean") metric_agg_type = ["mean", "median"] time_agg_type = "mean" aggs = { "AVERAGE_PRECISION": metric_agg_type, "RANGE_PR_AUC": metric_agg_type, "PR_AUC": metric_agg_type, "ROC_AUC": metric_agg_type, "train_main_time": 
time_agg_type, "execute_main_time": time_agg_type, "repetition": "count" } df_tmp = df.reset_index() df_tmp = df_tmp.groupby(by=["algorithm", "optim_params"]).agg(aggs) df_tmp = (df_tmp .reset_index() .sort_values(by=["algorithm", sort_by], ascending=False) .set_index(["algorithm", "optim_params"])) with pd.option_context("display.max_rows", None, "display.max_columns", None): display(df_tmp) # - # #### Selected parameters # # - k-Means: `n_clusters=50` (more are usually better) # - XGBoosting (RR): `n_estimators=500,train_window_size=500,n_trees=10` (more estimators are better) # - Subsequence LOF: `n_neighbors=50,leaf_size=20` (robust to leaf_size) # - Subsequence IF: `n_trees=100` # - Spectral Residual (SR): `mag_window_size=40,score_window_size=40` (robust, but bad performance) # - Random Forest Regressor (RR): `train_window_size=500,n_trees=500` (more trees are better) # - LaserDBN: `n_bins=10` (more are better; marginal improvement) # - Hybrid KNN: `n_neighbors=10,n_estimators=1000` (less neighbors and more estimators are better) # - FFT: `context_window_size=5` (robust, but bad performance) # - DeepAnT: `prediction_window_size=50` # # Summary: # # - n_clusters=50 # - n_estimators=500 # - train_window_size=500 # - n_trees=500 # - n_neighbors=50 # - mag_window_size=40 # - score_window_size=40 # - prediction_window_size=50 # - n_bins=10 (**re-test for other algorithms!**) # - context_window_size=5 (**re-test for other algorithms!**) # - Overwrites for Hybrid KNN: `n_neighbors=10,n_estimators=1000` # - Overwrites for XGBoosting (RR): `n_trees=10` plot_scores([("k-Means", "n_clusters=50"), ("k-Means", "n_clusters=5")], "ecg-channels-single-of-5") # --- # # ### Window size parameter assessment # + algo_list = ["Subsequence LOF", "Subsequence IF", "Spectral Residual (SR)", "DeepAnT"] df2 = df[df["algorithm"].isin(algo_list)].copy() # overwrite optim_params column df2 = df2.drop(columns=["optim_params"]) df2["window_size"] = "" for algo in algo_list: df_algo = 
df2.loc[df2["algorithm"] == algo] df2.loc[df_algo.index, "window_size"] = df_algo["hyper_params"].apply(extract_hyper_params(["window_size"])) df2["window_size"] = df2["window_size"].str.split("=").apply(lambda v: v[1]).astype(int) df2["period_size"] = df2["dataset"].apply(lambda d: dmgr.get(("GutenTAG", d)).period_size) df2["window_size_group"] = df2["window_size"] / df2["period_size"] df2["window_size_group"] = (df2["window_size_group"] .fillna(df2["window_size"]) .round(1) .replace(50., 0.5) .replace(100, 1.0) .replace(150, 1.5) .replace(200, 2.0)) df2 = df2.drop(columns=["window_size", "period_size"]) df2 # + sort_by = ("ROC_AUC", "mean") metric_agg_type = ["mean", "median"] time_agg_type = "mean" aggs = { "AVERAGE_PRECISION": metric_agg_type, "RANGE_PR_AUC": metric_agg_type, "PR_AUC": metric_agg_type, "ROC_AUC": metric_agg_type, "train_main_time": time_agg_type, "execute_main_time": time_agg_type, "index": lambda index: "" if len(index) < 2 else f"{index.iloc[0]}-{index.iloc[-1]}", "repetition": "count" } df_tmp = df2.reset_index() df_tmp = df_tmp.groupby(by=["algorithm", "window_size_group"]).agg(aggs) df_tmp = df_tmp.rename(columns={"index": "experiment IDs", "<lambda>": ""}) df_tmp = (df_tmp .reset_index() .sort_values(by=["algorithm", sort_by], ascending=False) .set_index(["algorithm", "window_size_group"])) with pd.option_context("display.max_rows", None, "display.max_columns", None): display(df_tmp) # - # #### Selected parameters # # Use the heuristic `2.0 dataset period size`. It works best for SubLOF, SR, and DeepAnT. SubIF seems to perform better with 1.5 period size, but just slightly, so 2.0 should be fine.
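The selected heuristic above ("set the window size to 2.0 × the dataset's period size") can be sketched as a small helper. The function name and the fallback value for datasets without a detectable period are illustrative assumptions, not part of the notebook:

```python
def heuristic_window_size(period_size, factor=2.0, fallback=100):
    """Window-size heuristic: factor * period size, with a fallback
    for datasets where no period size is known (period_size is None)."""
    if period_size is None:
        return fallback
    return int(round(factor * period_size))

# e.g. a dataset with a period of 50 points gets a window of 100
window = heuristic_window_size(50)
```

For the four algorithms assessed here, the factor 2.0 performed best overall; only Subsequence IF was marginally better at 1.5.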
notebooks/TimeEval shared param optimization analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Latent Dirichlet Allocation for Text Data # # In this assignment you will # # * apply standard preprocessing techniques on Wikipedia text data # * use Turi Create to fit a Latent Dirichlet allocation (LDA) model # * explore and interpret the results, including topic keywords and topic assignments for documents # # Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of *mixed membership*. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one. # # With this in mind, we will use Turi Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. # **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. # ## Text Data Preprocessing # We'll start by importing our familiar Wikipedia dataset. 
# + import turicreate as tc import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # - # import wiki data wiki = tc.SFrame('people_wiki.sframe/') wiki.head(5) # In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a _bag of words_, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. # # Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from Turi Create: wiki_docs = tc.text_analytics.count_words(wiki['text']) wiki_docs = wiki_docs.dict_trim_by_keys(tc.text_analytics.stop_words(), exclude=True) # ## Model fitting and interpretation # In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a Turi Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from Turi Create's topic_model module. # # Note: This may take several minutes to run. 
topic_model = tc.topic_model.create(wiki_docs, num_topics=10, num_iterations=200) # Turi Create provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that Turi Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results. topic_model # It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will # # * get the top words in each topic and use these to identify topic themes # * predict topic distributions for some example documents # * compare the quality of LDA "nearest neighbors" to the NN output from the first assignment # * understand the role of model hyperparameters alpha and gamma # ## Load a fitted topic model # The method used to fit the LDA model is a _randomized algorithm_, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization. # # It is important to understand that variation in the results is a fundamental feature of randomized methods. 
However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. # # We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above. topic_model = tc.load_model('topic_models/lda_assignment_topic_model') # # Identifying topic themes by top words # # We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. # # In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme _and_ that all the topics are relatively distinct. # # We can use the Turi Create function get_topics() to view the top words (along with their associated probabilities) from each topic. # # __Quiz Question:__ Identify the top 3 most probable words for the first topic. topic_model.get_topics([0], num_words=3) # **Quiz Question:** What is the sum of the probabilities assigned to the top 50 words in the 3rd topic? 
sum(topic_model.get_topics([2], num_words=50)['score']) # Let's look at the top 10 words for each topic to see if we can identify any themes: [x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)] # We propose the following themes for each topic: # # - topic 0: Business # - topic 1: Science and research # - topic 2: International music # - topic 3: Art and publishing # - topic 4: Team sports # - topic 5: Family and society # - topic 6: Politics # - topic 7: International athletics # - topic 8: TV and film # - topic 9: General music # # We'll save these themes for later: themes = ['business', 'science and research', 'international music', 'art and publishing', 'team sports', 'family and society', 'politics', 'international athletics', 'TV and film', 'general music'] # ### Measuring the importance of top words # # We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words. # # We'll do this with two visualizations of the weights for the top words in each topic: # - the weights of the top 100 words, sorted by the size # - the total weight of the top 10 words # # Here's a plot for the top 100 words by weight in each topic: for i in range(10): plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score']) plt.xlabel('Word rank') plt.ylabel('Probability') plt.title('Probabilities of Top 100 Words in each Topic') # In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total! 
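The quiz computation above (summing the `score` column returned by `get_topics`) just adds up the probability mass of the top-ranked words. A toy numpy version, with made-up probabilities standing in for a real topic:

```python
import numpy as np

# Made-up word probabilities for one "topic" (they sum to 1)
scores = np.array([0.10, 0.30, 0.08, 0.20, 0.15, 0.10, 0.07])

# Probability mass carried by the 3 highest-probability words
top3_mass = np.sort(scores)[::-1][:3].sum()
```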
# # # Next we plot the total weight assigned by each topic to its top 10 words: # + top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)] ind = np.arange(10) width = 0.5 fig, ax = plt.subplots() ax.bar(ind-(width/2),top_probs,width) ax.set_xticks(ind) plt.xlabel('Topic') plt.ylabel('Probability') plt.title('Total Probability of Top 10 Words in each Topic') plt.xlim(-0.5,9.5) plt.ylim(0,0.15) plt.show() # - # Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary. # # Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. # # Topic distributions for some example documents # # As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic. # # We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. 
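Mixed membership, as described above, is just a per-document weight vector that sums to one; ranking the topics for a document is then a single `argsort`. A sketch with a made-up membership vector (theme names reused from the list above):

```python
import numpy as np

themes = ['business', 'science and research', 'politics', 'TV and film']
weights = np.array([0.05, 0.55, 0.30, 0.10])  # made-up document weights

# Topics ranked by how strongly this document expresses them
order = np.argsort(weights)[::-1]
ranked = [themes[i] for i in order]
```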
# Topic distributions for documents can be obtained using Turi Create's predict() function. Turi Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a _distribution_ over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama: obama = tc.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]]) pred1 = topic_model.predict(obama, output_type='probability') pred2 = topic_model.predict(obama, output_type='probability') print(tc.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]})) # To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document: def average_predictions(model, test_document, num_trials=100): avg_preds = np.zeros((model.num_topics)) for i in range(num_trials): avg_preds += model.predict(test_document, output_type='probability')[0] avg_preds = avg_preds/num_trials result = tc.SFrame({'topics':themes, 'average predictions':avg_preds}) result = result.sort('average predictions', ascending=False) return result print(average_predictions(topic_model, obama, 100)) # __Quiz Question:__ What is the topic most closely associated with the article about former US President <NAME>? Use the average results from 100 topic predictions. bush = tc.SArray([wiki_docs[int(np.where(wiki['name']=='<NAME>')[0])]]) print(average_predictions(topic_model, bush, 100)) # __Quiz Question:__ What are the top 3 topics corresponding to the article about English football (soccer) player <NAME>? 
Use the average results from 100 topic predictions. gerrard = tc.SArray([wiki_docs[int(np.where(wiki['name']=='<NAME>')[0])]]) print(average_predictions(topic_model, gerrard, 100)) # # Comparing LDA to nearest neighbors for document retrieval # # So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. # # In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. # # We'll start by creating the LDA topic distribution representation for each document: wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability') # Next we add the TF-IDF document representations: wiki['word_count'] = tc.text_analytics.count_words(wiki['text']) wiki['tf_idf'] = tc.text_analytics.tf_idf(wiki['word_count']) # For each of our two different document representations, we can use Turi Create to compute a brute-force nearest neighbors model: model_tf_idf = tc.nearest_neighbors.create(wiki, label='name', features=['tf_idf'], method='brute_force', distance='cosine') model_lda_rep = tc.nearest_neighbors.create(wiki, label='name', features=['lda'], method='brute_force', distance='cosine') # Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. 
For this example we'll use <NAME>, an American economist: model_tf_idf.query(wiki[wiki['name'] == '<NAME>'], label='name', k=10) model_lda_rep.query(wiki[wiki['name'] == '<NAME>'], label='name', k=10) # Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. # # With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. # # Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on <NAME>, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies. # # Understanding the role of LDA model hyperparameters # # Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. # # In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). 
In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words. # # Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model. # # __Quiz Question:__ What was the value of alpha used to fit our original topic model? topic_model # __Quiz Question:__ What was the value of gamma used to fit our original topic model? Remember that Turi Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words. # We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model: # - tpm_low_alpha, a model trained with alpha = 1 and default gamma # - tpm_high_alpha, a model trained with alpha = 50 and default gamma tpm_low_alpha = tc.load_model('topic_models/lda_low_alpha') tpm_high_alpha = tc.load_model('topic_models/lda_high_alpha') # ### Changing the hyperparameter alpha # # Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha. 
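Before looking at the fitted models, the smoothing role of alpha can be illustrated directly: in the LDA prior, a document's topic weights are a draw from a symmetric Dirichlet, and the size of alpha controls how peaked those draws are. This numpy sketch only illustrates the prior, not Turi Create's sampler:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10  # number of topics

# Average largest topic weight over many prior draws:
# small alpha -> peaked (one dominant topic), large alpha -> near-uniform
low_alpha_peak = rng.dirichlet([0.1] * K, size=1000).max(axis=1).mean()
high_alpha_peak = rng.dirichlet([50.0] * K, size=1000).max(axis=1).mean()

# low_alpha_peak is close to 1, high_alpha_peak is close to 1/K
```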
# + a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1] b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1] c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1] ind = np.arange(len(a)) width = 0.3 def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab): fig = plt.figure() ax = fig.add_subplot(111) b1 = ax.bar(ind, a, width, color='lightskyblue') b2 = ax.bar(ind+width, b, width, color='lightcoral') b3 = ax.bar(ind+(2*width), c, width, color='gold') ax.set_xticks(ind+width) ax.set_xticklabels(range(10)) ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_ylim(0,ylim) ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param]) plt.tight_layout() param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha', xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article') # - # Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. # # __Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on <NAME> in the **low alpha** model? Use the average results from 100 topic predictions. krugman = tc.SArray([wiki_docs[int(np.where(wiki['name']=='<NAME>')[0])]]) print(average_predictions(tpm_low_alpha, krugman, 100)) # __Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on <NAME> in the **high alpha** model? Use the average results from 100 topic predictions. 
print(average_predictions(tpm_high_alpha, krugman, 100)) # ### Changing the hyperparameter gamma # # Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models. # Now we will consider the following two models: # - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha # - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha # + del tpm_low_alpha del tpm_high_alpha tpm_low_gamma = tc.load_model('topic_models/lda_low_gamma') tpm_high_gamma = tc.load_model('topic_models/lda_high_gamma') # + a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] ind = np.arange(len(a)) width = 0.3 param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma', xlab='Topics (sorted by weight of top 100 words)', ylab='Total Probability of Top 100 Words') param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma', xlab='Topics (sorted by 
weight of bottom 1000 words)', ylab='Total Probability of Bottom 1000 Words') # - # From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary. # __Quiz Question:__ For each topic of the **low gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from Turi Create with the cdf\_cutoff argument). def calculate_avg_words(model, num_words=547462, cdf_cutoff=0.5, num_topics=10): avg_num_of_words = [] for i in range(num_topics): avg_num_of_words.append(len(model.get_topics(topic_ids=[i], num_words=num_words, cdf_cutoff=cdf_cutoff))) avg_num_of_words = np.mean(avg_num_of_words) return avg_num_of_words calculate_avg_words(tpm_low_gamma) # __Quiz Question:__ For each topic of the **high gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from Turi Create with the cdf\_cutoff argument). calculate_avg_words(tpm_high_gamma) # We have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best. We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally "best" choice for these parameters. 
Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section).
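The `cdf_cutoff` computation used in `calculate_avg_words` above amounts to counting how many top words are needed before the cumulative probability reaches the cutoff. A self-contained numpy sketch on a made-up, already-sorted topic distribution:

```python
import numpy as np

# Made-up word probabilities for one topic, sorted in descending order
scores = np.array([0.40, 0.25, 0.15, 0.10, 0.10])

# Smallest number of top words whose cumulative probability reaches 0.5
cumulative = np.cumsum(scores)
n_words = int(np.searchsorted(cumulative, 0.5) + 1)
```

Here two words suffice (0.40 + 0.25 = 0.65 ≥ 0.5); `searchsorted` finds the first cumulative value at or above the cutoff.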
04_machine_learning_clustering_and_retrieval/week_5/quiz_01.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/amanoj03/Machine-Learning/blob/master/Transfer_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + id="3xc1y4Zx08Al" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt

# + id="6_PxpTvL1JU2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 64} outputId="4a760d2f-9f87-46ae-e703-d98661421d90"
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
keras = tf.keras

# + id="BOaEeqh81PWk" colab_type="code" colab={}
import tensorflow_datasets as tfds
tfds.disable_progress_bar()

# + id="tkgVzYNi1aOq" colab_type="code" colab={}
SPLIT_WEIGHTS = (8, 1, 1)
splits = tfds.Split.TRAIN.subsplit(weighted=SPLIT_WEIGHTS)

# + id="SW_bt0Fd1lwj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 267} outputId="29138bd5-aff0-44b7-ff20-70d05b7ca9fc"
(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs', split=list(splits),
    with_info=True, as_supervised=True
)

# + id="z-iq3g2t2ebe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="0515de0d-c9bc-44ba-dcea-c16e12e1fdfd"
print(raw_test)
print(raw_train)
print(raw_validation)

# + id="IKc1IYQU25Y6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 545} outputId="64ec92cd-61e8-455a-f0b0-262123327a3f"
get_label_name = metadata.features['label'].int2str

for image, label in raw_train.take(2):
    plt.figure()
    plt.imshow(image)
    plt.title(get_label_name(label))

# + id="pT6hR1_93p8P" colab_type="code" colab={}
IMG_SIZE = 160

def format_example(image, label):
    image = tf.cast(image, tf.float32)
    image = (image/127.5) - 1
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

# + id="zdUh20E56Lks" colab_type="code" colab={}
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)

# + id="jmMvgzqW7K4Q" colab_type="code" colab={}
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000

# + id="OMfNz2mq8Mkb" colab_type="code" colab={}
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)

# + id="Djp2Syjr8S2E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1adbb93e-8673-4b04-cc25-87d634054c08"
for image_batch, label_batch in train_batches.take(1):
    pass

image_batch.shape

# + id="zDC9TkIP8rmE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="88a0c533-9f97-4f61-d353-3f7f39f5edaf"
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

# + id="P9m2vO-V9pK3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3b39ac59-18c0-46bd-ede6-a67af943b08c"
feature_batch = base_model(image_batch)
print(feature_batch.shape)

# + id="2ippzAIz-rmA" colab_type="code" colab={}
base_model.trainable = False

# + id="5LS_64fI-xaP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="cec51e63-60ca-41e1-9d49-49245f388273"
base_model.summary()

# + id="oqW3EGy6-1F6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="6a104ae5-7037-425b-8c76-a719400962b4"
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)

# + id="uPOyC1ZU_5B2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="91924196-0d89-4913-c5ba-cb702e5f0e67"
prediction_layer = keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)

# + id="T2xrNdyrAbh7" colab_type="code" colab={}
model = tf.keras.Sequential([
    base_model,
    global_average_layer,
    prediction_layer
])

# + id="yYBqh0ZuD2wB" colab_type="code" colab={}
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(lr=0.0001),
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# + id="3xzJVMyXBLOz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="08459b1b-8336-4db7-e699-3d3e1d84cfdd"
model.summary()

# + id="NjiCqfw4DQkH" colab_type="code" colab={}
num_train, num_val, num_test = (
    metadata.splits['train'].num_examples*weight/10
    for weight in SPLIT_WEIGHTS
)

# + id="WXehBSIXBPmu" colab_type="code" colab={}
initial_epochs = 10
steps_per_epoch = round(num_train)//BATCH_SIZE
validation_steps = 20

# + id="xnKS1oNaCpuc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="4812d2b0-1139-4945-ec7e-cae28382a85e"
loss0, accuracy0 = model.evaluate(validation_batches, steps=validation_steps)

# + id="syuaHuRkESGO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 497} outputId="f1587842-177f-4e1f-9b58-1149e5474049"
history = model.fit(train_batches, epochs=initial_epochs)

# + id="VmyN53JoHKQG" colab_type="code" colab={}
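The `format_example` cell above maps raw pixel values from [0, 255] into [-1, 1], the input range MobileNetV2 expects. A minimal NumPy-only sketch of that rescaling (illustrative only, no TensorFlow required):

```python
import numpy as np

def rescale(image):
    # Map pixel values in [0, 255] to floats in [-1, 1],
    # exactly as (image / 127.5) - 1 does in format_example above
    return image.astype(np.float32) / 127.5 - 1.0

pixels = np.array([0, 255], dtype=np.uint8)
scaled = rescale(pixels)  # endpoints map to -1.0 and 1.0
```

The same affine map is what `tf.keras.applications.mobilenet_v2.preprocess_input` applies internally.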
Transfer_Learning.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from frame import DataFrame as df
import pandas as pd
import numpy as np

x = r"C:\Users\evanh\Projects\AT\Data\Binance-BTCUSDT.csv"
pddf = pd.read_csv(x, index_col=0)  # , parse_dates=True
data = df(pddf.values, index=pddf.index._data, columns=pddf.columns._data)

data.iloc[5:7]

pddf["2020-01-01 00:05:00+00:00": "2020-01-01 00:06:00+00:00"]

p = np.arange(25).reshape((5, 5))
p[0:5]

# Frequency = 24 hrs
freq = np.timedelta64(1, "h")  # .astype("timedelta64[ns]").astype(np.int64)
freq

class A(object):
    def x(self):
        return self.__new__(self.__class__)

class B(A):
    pass

a = B()
print(id(a))
print(id(a.x()))

x = 1

res = data.resample(freq)
for group in range(len(res.groups) - 1):
    ret = data.iloc[res.groups[group]: res.groups[group + 1]]
    if ret.values.size != 0:
        print(ret)
        break

data.iloc[res.groups[0]: res.groups[1]]

data.index.BD

pddf.index._data._data.view(np.int64)  # .view("datetime64[ns]")

from index import DateTimeIndex
dt = DateTimeIndex(pddf.index._data._data)
k = dt.keys
k[60]

bins[1]

# Assuming that k is sorted
# We create bins from k[0] to k[-1]
# With a new bin every 24 hours
bins = np.arange(k[0], k[-1], freq, dtype=np.int64)
np.searchsorted(k, bins)

'2020-01-01 00:00:00+00:00' in pddf.index

4 in pd.RangeIndex(start=5)

# %timeit pddf["Test"] = pddf["Volume"].values
# %timeit data["Test"] = pddf["Volume"].values

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])
np.concatenate((a, b.T), axis=1)

b.T.shape

pddf["Volume"].values.shape

data.loc["2020-01-01 00:00:00+00:00": "2020-08-05 01:18:00+00:00"]

data

# +
og = ["A", "B", "C", "D"]
ti = ["A", "X", "Y", "Z", "B", "D", "E"]
odg = np.ones((4, 10))
# -

np.concatenate((data.values, np.transpose([pddf["Volume"].values])), axis=1)

len_columns = 10
len_index = len(ti)
l = np.zeros((len_index, len_columns))
for target_index in range(len(ti)):
    for i in range(len(og)):
        if ti[target_index] == og[i]:
            l[target_index] = odg[i]
l

type(data.values_)

l.shape == (7, 10)

arg = slice(None, 5)
arg.start >= 0

class MathTest:
    def __init__(self, val):
        self.x = val

    def printt(self):
        print(self.x)

    def __add__(self, other):
        print(self.x, other)
        return self.x + other

    def __radd__(self, other):
        print(self, other)
        return self.x + other

# +
test1 = MathTest(5)
test2 = MathTest(4)
y = np.arange(10, 20)
test1 + test2
# -

test2 + test1

np.array(np.array([5, 4, 5]))

pddf.index = pd.RangeIndex(0)

class Testing:
    a = {"Open": 5}
    def __getattr__(self, attr):
        return self.a[attr]

test = Testing()
test.Open
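The `np.arange`/`np.searchsorted` cells above sketch a resampling strategy: build fixed-frequency bin edges over a sorted integer index, then locate each bin's first row with a binary search. A small self-contained illustration (the timestamp values are hypothetical stand-ins for the epoch-nanosecond index used above):

```python
import numpy as np

# Sorted integer "timestamps" and a bin width, standing in for the
# datetime index (viewed as int64) and the np.timedelta64 frequency
k = np.array([0, 10, 25, 40, 55, 90])
freq = 30

bins = np.arange(k[0], k[-1], freq)   # left bin edges: [0, 30, 60]
starts = np.searchsorted(k, bins)     # first index of k falling in each bin
# Slice k per bin using consecutive start positions
groups = [k[i:j] for i, j in zip(starts, list(starts[1:]) + [len(k)])]
```

Because `searchsorted` is O(bins × log n), this avoids scanning every row per group.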
Sloth/.ipynb_checkpoints/Untitled1-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # CSEP tests
#
# This notebook describes the theory of each of the forecast tests included in pyCSEP with application to an example forecast. You will find information on the aims of each test, the theory behind the test, how the tests are applied in practice and how forecasts are 'scored' given the test results, as well as key references. The code required to run each test and a description of how to interpret the test results are also included.

import csep
from csep.core import regions, catalog_evaluations, poisson_evaluations as poisson
from csep.utils import datasets, time_utils, comcat, plots, readers

# ## Grid-based tests
#
# These tests are for grid-based forecasts constructed as per the RELM experiments (Schorlemmer et al, 2007), where rates are provided in the cells of a forecast. In this case, rates are specified in time-space-magnitude cells covering the region of interest. The region $\boldsymbol{R}$ is then the product of the set of magnitude bins $\boldsymbol{M}$ and the set of spatial cells $\boldsymbol{S}$:
# $$ \boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S} $$
#
# A forecast $\boldsymbol{\Lambda}$ can then be fully specified as the expected number of events in each space-magnitude bin ($m_i, s_j$) covering the region $\boldsymbol{R}$, and can therefore be written as
# $$ \boldsymbol{\Lambda} = \{ \lambda_{m_i, s_j} | m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \} $$
# where $\lambda_{m_i, s_j}$ is the expected rate of events in magnitude bin $m_i$ and spatial bin $s_j$.
# The observed catalogue of events $\boldsymbol{\Omega}$ that we wish to test the forecast against is similarly discretised into the same space-magnitude bins, such that it can be described as
# $$ \boldsymbol{\Omega} = \{ \omega_{m_i, s_j} | m_i \in \boldsymbol{M}, s_j \in \boldsymbol{S} \} $$
# where $\omega_{m_i, s_j}$ is the observed number of events in spatial cell $s_j$ and magnitude bin $m_i$.
#
# The magnitude bins are specified in the forecast: typically these are in 0.1 increments, and this is the case in the examples we use here. The range of magnitude bins is determined by the forecast, or by the forecast specification in the case of CSEP experiments, where the spatial region is also predetermined for consistency across competing models (e.g. Schorlemmer et al 2007; Schorlemmer et al, 2010a). These examples use the Helmstetter et al (2007) smoothed seismicity forecast (including aftershocks), testing over a 5 year period between 2010 and 2015.

# +
## Set up experiment parameters
start_date = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')
end_date = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')

## Loads from the PyCSEP package
Helmstetter = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,
                                         start_date=start_date,
                                         end_date=end_date,
                                         name='helmstetter_aftershock')

## Set up evaluation catalog
catalog = csep.query_comcat(Helmstetter.start_time, Helmstetter.end_time, min_magnitude=Helmstetter.min_magnitude)

## Filter evaluation catalog
catalog = catalog.filter_spatial(Helmstetter.region)
# -

# ### Consistency tests
#
# The consistency tests aim to establish whether the observations are consistent with the forecast, assuming that the forecast is 'true'. These tests were developed across a range of experiments and publications (Schorlemmer et al, 2007; Zechar et al 2010; Werner et al, 2011), building on previous tests or ideas about how tests should be constructed.
# The consistency tests are all based on the likelihood of observing the catalogue (the actual recorded events) given the specified forecast, which is given as a rate $\lambda$ in each cell. The total likelihood is the joint likelihood of observing the events in each individual bin given the specified forecast rate $\lambda$ in each bin. We can write this as:
# $$ Pr(\omega_1 | \lambda_1) Pr(\omega_2 | \lambda_2)...Pr(\omega_n | \lambda_n) = \prod_{m_i , s_j \in \boldsymbol{R}} f_{m_i, s_j}(\omega(m_i, s_j))$$
# where $f_{m_i, s_j}$ specifies the probability distribution in each space-magnitude bin. In these tests, we assume this distribution is Poisson, meaning the probability of an event occurring is independent of the time since the last event and events occur at a rate $\lambda$.
#
# We choose to use the joint log-likelihood in order to sum log-likelihoods rather than multiply likelihoods, so the likelihood can be written as
# $$ L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} \log(f_{m_i, s_j}(\omega(m_i, s_j))) $$
# This says that the likelihood of the observations ($\boldsymbol{\Omega}$) given the forecast $\boldsymbol{\Lambda}$ is the sum over all space-magnitude bins of the log probabilities in the individual cells of the forecast.
#
# When a forecast is Poisson, its log-likelihood is
# $$ L(\boldsymbol{\Omega} | \boldsymbol{\Lambda}) = \sum_{m_i , s_j \in \boldsymbol{R}} -\lambda(m_i, s_j) + \omega(m_i, s_j)\log(\lambda(m_i, s_j)) - \log(\omega(m_i, s_j)!) $$
# where $\lambda(m_i, s_j)$ and $\omega(m_i, s_j)$ are the forecast rate and observed count in cell $m_i, s_j$ respectively, so we can calculate the likelihood directly given the forecast and discretised observations.
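The joint Poisson log-likelihood above can be computed directly from the forecast rates and binned counts. A minimal stdlib sketch (the rates and counts are made-up illustrative values, not a real forecast):

```python
import math

def poisson_joint_loglik(rates, counts):
    # Sum of -lambda + n*log(lambda) - log(n!) over all space-magnitude bins;
    # math.lgamma(n + 1) == log(n!)
    return sum(-lam + (n * math.log(lam) if n else 0.0) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

rates = [0.5, 2.0, 1.0]   # hypothetical forecast rates per bin
counts = [0, 2, 1]        # hypothetical observed counts per bin
ll = poisson_joint_loglik(rates, counts)
```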
# <b> Simulation approach </b>
#
# To carry out these tests in practice, a simulation-based approach is often used to account for uncertainty in the forecast. In the pyCSEP package, as in the original CSEP tests, this simulation is carried out using the cumulative probability density in each bin, which we shall call $F_{m_i s_j}$ for cell $m_i, s_j$. The simulation approach then works as follows:
#
# * For each forecast bin, draw a random number $z$ from a uniform distribution between 0 and 1
# * Calculate the number of events for this bin by evaluating the inverse cumulative distribution at this point, $F^{-1}_{m_i, s_j}(z)$
# * Iterate over all bins to generate a catalog consistent with the forecast
#
# For each of these tests, we can plot the distribution of likelihoods of simulated catalogs relative to the observations using the `plots.plot_poisson_consistency_test` function. We can also calculate a quantile score to decide whether a model passes or fails an individual test, based on the location of the observations within the tails of the forecast distribution and a selected level of sensitivity. The number of simulations can be supplied to the Poisson consistency test functions using the `num_simulations` argument: for best results we would suggest 100 000 simulations to ensure convergence.

# #### <b>L-test</b>
#
# <b>Aim:</b> Evaluate the likelihood of observed events given the provided forecast - this is a joint likelihood that includes the number, spatial distribution and magnitude of events.
#
# <b>Method:</b> The L-test is one of the original forecast tests described in Schorlemmer et al, 2007. The likelihood of the observation given the model is described by a Poisson likelihood function in each cell, and the total joint likelihood is the product over all bins, or the sum of the log-likelihoods (see above, or Zechar 2011 for more details).
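The inverse-CDF simulation described in the bullets above can be sketched without dependencies, assuming Poisson bins; `poisson_inv_cdf` here walks the pmf until the cumulative probability reaches the uniform draw $z$ (a sketch, not the pyCSEP implementation):

```python
import math
import random

def poisson_inv_cdf(z, lam):
    # Smallest count n whose cumulative Poisson probability reaches z
    n, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < z:
        n += 1
        pmf *= lam / n   # recurrence: pmf(n) = pmf(n-1) * lam / n
        cdf += pmf
    return n

def simulate_catalog(rates, rng):
    # One synthetic catalog: an event count drawn for every forecast bin
    return [poisson_inv_cdf(rng.random(), lam) for lam in rates]

rng = random.Random(0)
rates = [0.5, 2.0, 1.0]   # hypothetical forecast rates per bin
sim_catalogs = [simulate_catalog(rates, rng) for _ in range(1000)]
```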
# By simulating from the forecast as described above, we generate a set of simulated catalogs $\{\hat{\boldsymbol{\Omega}}\}$, where each catalog can be written as
# $$\hat{\boldsymbol{\Omega}}_x = \{ \hat{\omega}_x(m_i, s_j) | (m_i, s_j) \in \boldsymbol{R}\}$$
# and $\hat{\omega}_x(m_i, s_j)$ is the number of simulated earthquakes in cell $m_i, s_j$ of (simulated) catalog $x$.
#
# We then compute the joint log-likelihood $\hat{L}_x = L(\hat{\boldsymbol{\Omega}}_x|\boldsymbol{\Lambda})$ for each simulated catalog given the forecast, which gives us a set of log-likelihoods $\{\hat{\boldsymbol{L}}\}$ representing the range of log-likelihoods consistent with the forecast.
# We then compare our simulated log-likelihoods with the observed log-likelihood $L_{obs} = L(\boldsymbol{\Omega}|\boldsymbol{\Lambda})$: if the observed log-likelihood falls within the distribution of the simulated log-likelihoods then the forecast is consistent with the observations.
#
# The quantile score is defined as the fraction of simulated joint log-likelihoods less than or equal to the observed one:
# $$\gamma = \frac{ |\{ \hat{L}_x | \hat{L}_x \le L_{obs}\} |}{|\{ \hat{\boldsymbol{L}} \}|}$$
#
# Whether a forecast can be said to pass the test depends on the significance level chosen for the testing process. The quantile score explicitly tells us something about the significance of the result: the observation is consistent with the forecast with $100(1-\gamma)\%$ confidence (Zechar, 2011). Low $\gamma$ values demonstrate that the observed likelihood score is less than that of most of the simulated catalogs. The L-test is generally considered to be a one-sided test: values which are too small are ruled inconsistent with the forecast, but very large values may not necessarily be inconsistent with the forecast, and additional testing should be used to clarify this further (Schorlemmer et al, 2007).
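The quantile score $\gamma$ above is simply the empirical fraction of simulated joint log-likelihoods at or below the observed one; sketched with illustrative numbers:

```python
def quantile_score(sim_lls, obs_ll):
    # Fraction of simulated log-likelihoods <= the observed log-likelihood
    return sum(ll <= obs_ll for ll in sim_lls) / len(sim_lls)

sim_lls = [-12.1, -10.4, -9.8, -11.5, -10.0]   # hypothetical simulated values
gamma = quantile_score(sim_lls, obs_ll=-11.0)  # 2 of the 5 values are <= -11.0
```

With the one-sided convention above, a small `gamma` (below the chosen significance level) would flag the forecast as inconsistent with the observations.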
# Different implementations of CSEP testing have used different sensitivity values: Schorlemmer et al (2010b) consider $\gamma \lt 0.05$, while the implementation in the Italian CSEP testing experiment uses 0.01 (Taroni et al, 2018).
#
# <b> pyCSEP implementation </b>
#
# The pyCSEP test takes the forecast and catalog and returns the test distribution, observed statistic and quantile score, which can be accessed from the `likelihood_test_result` object. We can pass this directly to the plotting function, specifying that the test should be one-sided.

likelihood_test_result = poisson.likelihood_test(Helmstetter, catalog)
ax = plots.plot_poisson_consistency_test(
    likelihood_test_result,
    one_sided_lower=True,
    plot_args={'title': r'$\mathcal{L}-\mathrm{test}$', 'xlabel': 'Log-likelihood'}
)

# The pyCSEP consistency test shows the resulting $95\%$ range of likelihoods returned by the simulation with the black bar. The observed likelihood score is shown by a green square where the forecast passes the test and by a red circle where the observed likelihood is outside the likelihood distribution.

# #### <b> CL-test </b>
# <b>Aim</b>: The original likelihood test described above gives a result that combines the spatial, magnitude and number components of a forecast. This means that, should a forecast do badly at forecasting the number of events, it will receive a poor L-test result: the L-test is significantly influenced by the number of events in the forecast. The conditional likelihood or CL-test was developed to test the spatial and magnitude performance of a forecast without the influence of the number of events (Werner et al. 2011a, 2011b).
#
# <b>Method</b>:
# The CL-test is computed in the same way as the L-test, but with the number of events normalised to that of the observed catalog, $N_{obs}$, during the simulation stage.
# The quantile score is then calculated similarly, such that
# $$\gamma_{CL} = \frac{ |\{ \hat{CL}_x | \hat{CL}_x \le CL_{obs}\} |}{|\{ \hat{\boldsymbol{CL}} \}|}$$
#
# <b>Implementation in pyCSEP</b>

cond_likelihood_test_result = poisson.conditional_likelihood_test(Helmstetter, catalog)
ax = plots.plot_poisson_consistency_test(cond_likelihood_test_result,
                                         one_sided_lower=True,
                                         plot_args={'title': r'$\mathcal{CL}-\mathrm{test}$',
                                                    'xlabel': 'conditional log-likelihood'})

# Again, the $95\%$ confidence range of likelihoods is shown by the black bar, and the symbol reflects the observed conditional-likelihood score. In this case, the observed conditional likelihood is shown with the red circle, which falls outside the range of likelihoods simulated from the forecast. To understand why the L- and CL-tests give different results, consider the results of the N-test and S-test in the following sections.

# #### <b>N-test</b>
#
# <b>Aim</b>: The number or N-test is the most conceptually simple test of a forecast: to test whether the number of observed events is consistent with that of the supplied forecast.
#
# <b>Method</b>: The original N-test was introduced by Schorlemmer et al (2007) and modified by Zechar et al (2010). The observed number of events is given by:
# $$N_{obs} = \sum_{m_i, s_j \in R} \omega(m_i, s_j)$$
# Using the simulations described above, the simulated number of events is calculated by summing over all grid cells:
# $$\hat{N}_x = \sum_{m_i, s_j \in R} \hat{\omega}_x(m_i, s_j) $$
# where $\hat{\omega}_x(m_i, s_j)$ is the simulated number of events in catalog $x$ in spatial cell $s_j$ and magnitude cell $m_i$, generating a set of simulated counts $\{ \hat{N} \}$.
#
# We can then calculate the probability of i) observing at least $N_{obs}$ events and ii) observing at most $N_{obs}$ events.
# These probabilities can be written as:
#
# $$\delta_1 = \frac{ |\{ \hat{N}_x | \hat{N}_x \ge N_{obs}\} |}{|\{ \hat{N} \}|}$$
# and
# $$\delta_2 = \frac{ |\{ \hat{N}_x | \hat{N}_x \le N_{obs}\} |}{|\{ \hat{N} \}|}$$
#
# If a forecast is Poisson, the number of events in the forecast follows a Poisson distribution with expectation $N_{fore} = \sum_{m_i, s_j \in R} \lambda(m_i, s_j)$. The cumulative distribution is then a Poisson cumulative distribution:
#
# $$F(x|N_{fore}) = \exp(-N_{fore}) \sum^{x}_{i=0} \frac{(N_{fore})^i}{i!}$$
#
# which can be used directly, without the need for simulations. The N-test quantile scores are then
# $$\delta_1 = 1 - F((N_{obs}-1)|N_{fore})$$
# and
# $$\delta_2 = F(N_{obs}|N_{fore})$$
#
# The original N-test considered only $\delta_2$ and its complement $1-\delta_2$, which effectively tested the probability of at most $N_{obs}$ events and of more than $N_{obs}$ events. Very small or very large values (<0.025 or >0.975 respectively) were considered to be inconsistent with the forecast in Schorlemmer et al (2010). However, the approach above aims to test something subtly different, that is, at least $N_{obs}$ events and at most $N_{obs}$ events. Zechar et al (2010a) recommend testing both $\delta_1$ and $\delta_2$ with an effective significance of half the required significance level, so for a required significance level of 0.05, a forecast is consistent if both $\delta_1$ and $\delta_2$ are greater than 0.025. A very small $\delta_1$ suggests the rate is too low, while a very low $\delta_2$ suggests a rate which is too high to be consistent with observations.
#
# <b> Implementation in pyCSEP </b>
#
# pyCSEP uses the Zechar et al (2010) version of the N-test and the cumulative Poisson approach to estimate the range of expected events from the forecast, so does not implement a simulation in this case. The upper and lower bounds for the test are determined from the cumulative Poisson distribution.
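For a Poisson forecast, the two N-test quantile scores follow directly from the cumulative distribution $F$ above; a stdlib sketch with made-up event counts:

```python
import math

def poisson_cdf(x, mu):
    # F(x | mu) = exp(-mu) * sum_{i=0..x} mu^i / i!
    return math.exp(-mu) * sum(mu**i / math.factorial(i) for i in range(int(x) + 1))

def n_test_quantiles(n_obs, n_fore):
    delta1 = 1.0 - poisson_cdf(n_obs - 1, n_fore)  # P(at least n_obs events)
    delta2 = poisson_cdf(n_obs, n_fore)            # P(at most n_obs events)
    return delta1, delta2

d1, d2 = n_test_quantiles(n_obs=3, n_fore=5.0)    # hypothetical counts
consistent = d1 > 0.025 and d2 > 0.025            # 0.05 significance, split over both tails
```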
# `number_test_result.quantile` will return both the $\delta_1$ and $\delta_2$ values.

number_test_result = poisson.number_test(Helmstetter, catalog)
ax = plots.plot_poisson_consistency_test(number_test_result,
                                         plot_args={'xlabel': 'Number of events'})

# In this case, the black bar shows the $95\%$ interval for the number of events in the forecast. The actual observed number of events is shown by the green box, which just passes the N-test in this case: the forecast generally expects more events than are observed in practice, but the observed number falls just within the lower limit of what is expected, so the forecast (just!) passes the N-test.

# #### <b>M-test</b>
#
# <b>Aim:</b> Establish consistency (or lack thereof) of observed event magnitudes with forecast magnitudes.
#
# <b>Method:</b> The M-test is first described in Zechar et al. (2010) and aims to isolate the magnitude component of a forecast. To do this, we sum over the spatial bins and normalise so that the total forecast number of events matches the observations:
# $$\boldsymbol{\Omega}^m = \{\omega^{m}(m_i) | m_i \in \boldsymbol{M}\}$$
# where
# $$ \omega^m(m_i) = \sum_{s_j \in \boldsymbol{S}} \omega(m_i, s_j) $$
# and
# $$\boldsymbol{\Lambda}^m = \{ \lambda^m(m_i) | m_i \in \boldsymbol{M} \} $$
# where
# $$ \lambda^m(m_i) = \frac{N_{obs}}{N_{fore}}\sum_{s_j \in \boldsymbol{S}} \lambda(m_i, s_j)$$
#
# Then we compute the joint log-likelihood as we did for the L-test:
# $$ M = L(\boldsymbol{\Omega}^m | \boldsymbol{\Lambda}^m) $$
#
# We then wish to compare this with the distribution of simulated log-likelihoods, this time keeping the number of events fixed to $N_{obs}$.
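The M-test aggregation above collapses the forecast and observations onto the magnitude axis and rescales the forecast by $N_{obs}/N_{fore}$; a small sketch with hypothetical 2 magnitude × 3 spatial bins:

```python
def aggregate_magnitudes(rates, counts):
    # Sum each magnitude row over its spatial bins, then rescale the forecast
    # so its total matches the observed number of events (N_obs / N_fore)
    omega_m = [sum(row) for row in counts]
    n_obs = sum(omega_m)
    n_fore = sum(sum(row) for row in rates)
    lam_m = [n_obs / n_fore * sum(row) for row in rates]
    return lam_m, omega_m

rates = [[0.5, 0.5, 1.0], [0.25, 0.5, 0.25]]   # hypothetical forecast rates
counts = [[1, 0, 1], [0, 1, 0]]                # hypothetical observed counts
lam_m, omega_m = aggregate_magnitudes(rates, counts)
```

The joint log-likelihood $M$ is then computed on `lam_m` and `omega_m` exactly as in the L-test.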
# Then, for each simulated catalogue, $\hat{M}_x = L(\hat{\boldsymbol{\Omega}}^m_x | \boldsymbol{\Lambda}^m)$.
#
# <b>Quantile score: </b> The final test statistic is again the fraction of simulated log-likelihoods less than or equal to the observed value:
# $$\kappa = \frac{ |\{ \hat{M}_x | \hat{M}_x \le M\} |}{|\{ \hat{M} \}|}$$
# and the observed magnitudes are inconsistent with the forecast if $\kappa$ is less than the significance level.
#
# <b>pyCSEP implementation</b>

mag_test_result = poisson.magnitude_test(Helmstetter, catalog)
ax = plots.plot_poisson_consistency_test(mag_test_result,
                                         one_sided_lower=True,
                                         plot_args={'xlabel': 'Normalised likelihood'})

# In this example, the forecast passes the M-test, demonstrating that the magnitude distribution in the forecast is consistent with observed events. This is shown by the green square marking the joint log-likelihood for the observed events.

# #### <b>S-test</b>
#
# <b>Aim:</b> The spatial or S-test aims to establish consistency (or lack thereof) of observed event locations with a forecast. It is originally defined in Zechar et al (2010).
#
# <b>Method:</b> Similar to the M-test, but in this case we sum over all magnitude bins:
# $$\boldsymbol{\Omega}^s = \{\omega^s(s_j) | s_j \in \boldsymbol{S}\}$$
# where
# $$ \omega^s(s_j) = \sum_{m_i \in \boldsymbol{M}} \omega(m_i, s_j) $$
# and
# $$\boldsymbol{\Lambda}^s = \{ \lambda^s(s_j) | s_j \in \boldsymbol{S} \} $$
# where
# $$ \lambda^s(s_j) = \frac{N_{obs}}{N_{fore}}\sum_{m_i \in \boldsymbol{M}} \lambda(m_i, s_j)$$
#
# Then we compute the joint log-likelihood as we did for the L-test or the M-test:
# $$ S = L(\boldsymbol{\Omega}^s | \boldsymbol{\Lambda}^s) $$
#
# We then wish to compare this with the distribution of simulated log-likelihoods, this time keeping the number of events fixed to $N_{obs}$.
# Then, for each simulated catalogue, $\hat{S}_x = L(\hat{\boldsymbol{\Omega}}^s_x | \boldsymbol{\Lambda}^s)$.
#
# The final test statistic is again the fraction of simulated log-likelihoods less than or equal to the observed value:
# $$\zeta = \frac{ |\{ \hat{S}_x | \hat{S}_x \le S\} |}{|\{ \hat{S} \}|}$$
# and again the distinction between a forecast passing or failing the test depends on our chosen significance level.
#
# <b> pyCSEP implementation </b>
#
# The S-test is again a one-sided test, so we specify this when plotting the result.

spatial_test_result = poisson.spatial_test(Helmstetter, catalog)
ax = plots.plot_poisson_consistency_test(spatial_test_result,
                                         one_sided_lower=True,
                                         plot_args={'xlabel': 'normalised spatial likelihood'})

# The Helmstetter model fails the S-test, as the observed spatial likelihood falls in the tail of the simulated likelihood distribution. Again this is shown by a coloured symbol which highlights whether the forecast model passes or fails the test.

# ### Forecast comparison tests
#
# The consistency tests above check whether a forecast is consistent with observations, but they do not provide a straightforward way to compare two different forecasts. A few suggestions for this focus on the information gain of one forecast relative to another (Harte and Vere-Jones 2005; Imoto and Hurukawa, 2006; Imoto and Rhoades, 2010; Rhoades et al 2011). The T-test and W-test implementations for forecast comparison described here are first described in Rhoades et al, 2011.
# The information gain per earthquake of model A compared to model B is defined by $I_{N}(A, B) = R/N$, where $R$ is the rate-corrected log-likelihood ratio of models A and B given by
# $$ R = \sum_{k=1}^{N}\big(\log\lambda_A(i_k) - \log \lambda_B(i_k)\big) - \big(\hat{N}_A - \hat{N}_B\big)$$
# If we set $X_i=\log\lambda_A(i_k)$ and $Y_i=\log\lambda_B(i_k)$ then we can define the information gain per earthquake as
# $$I_N(A, B) = \frac{1}{N}\sum^N_{i=1}\big(X_i - Y_i\big) - \frac{\hat{N}_A - \hat{N}_B}{N}$$
# If $I_N(A, B)$ differs significantly from 0, the model with the lower likelihood can be rejected in favour of the other.
#
# <b> T-test </b>
#
# If the $X_i - Y_i$ are independent and come from the same normal population with mean $\mu$, then we can use the classic paired T-test to evaluate the null hypothesis that $\mu = (\hat{N}_A - \hat{N}_B)/N$ against the alternative hypothesis $\mu \ne (\hat{N}_A - \hat{N}_B)/N$.
# To implement this, we let $s$ denote the sample standard deviation of $(X_i - Y_i)$, such that
# $$ s^2 = \frac{1}{N-1}\sum^N_{i=1}\big(X_i - Y_i\big)^2 - \frac{1}{N^2 - N}\bigg(\sum^N_{i=1}\big(X_i - Y_i\big)\bigg)^2 $$
#
# Under the null hypothesis, $T = I_N(A, B)\big/\big(s/\sqrt{N}\big)$ has a t-distribution with $N-1$ degrees of freedom, and the null hypothesis can be rejected if $|T|$ exceeds a critical value of the $t_{N-1}$ distribution. Confidence intervals for $\mu - (\hat{N}_A - \hat{N}_B)/N$ can then be constructed with the form $I_N(A,B) \pm ts/\sqrt{N}$, where $t$ is the appropriate quantile of the $t_{N-1}$ distribution.
#
# <b> W-test </b>
#
# An alternative to the T-test is the Wilcoxon signed-rank test or W-test. This is a non-parametric alternative to the T-test which can be used if we do not feel the assumption of normally distributed differences $X_i - Y_i$ is valid. This assumption might be particularly poor when we have small sample sizes.
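The information gain and T statistic above reduce to a few lines once the per-event log-rates are in hand; a sketch with invented log-rate differences (compare $|T|$ against a $t_{N-1}$ critical value, e.g. from `scipy.stats.t`, which is not shown here):

```python
import math

def paired_t_statistic(x, y, n_hat_a, n_hat_b):
    # x[i], y[i]: log forecast rates of models A and B at the i-th observed event
    n = len(x)
    d = [xi - yi for xi, yi in zip(x, y)]
    info_gain = sum(d) / n - (n_hat_a - n_hat_b) / n
    # Sample variance, written exactly as the s^2 formula above
    s2 = sum(di ** 2 for di in d) / (n - 1) - sum(d) ** 2 / (n ** 2 - n)
    t = info_gain / (math.sqrt(s2) / math.sqrt(n))
    return info_gain, t

# Hypothetical values: three events, equal expected totals for both models
ig, t_stat = paired_t_statistic(x=[1.0, 2.0, 3.0], y=[0.0, 0.0, 0.0],
                                n_hat_a=10.0, n_hat_b=10.0)
```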
# The W-test instead depends on the (weaker) assumption that $X_i - Y_i$ is symmetric, and tests whether the median of $X_i - Y_i$ is equal to $(\hat{N}_A - \hat{N}_B)/N$. The W-test is less powerful than the T-test for normally distributed differences and cannot reject the null hypothesis (with $95\%$ confidence) for very small sample sizes ($N \leq 5$).
#
# The T-test becomes more accurate as $N \rightarrow \infty$ due to the central limit theorem, and therefore the T-test is considered dependable for large $N$. Where $N$ is small, a model might only be considered more informative if both the T- and W-test results agree.
#
# <b>Implementation in pyCSEP</b>
#
# The T-test and W-test are implemented in pyCSEP as below.

# +
Helmstetter_MS = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,
                                            name="Helmstetter Mainshock")

t_test = poisson.paired_t_test(Helmstetter, Helmstetter_MS, catalog)
w_test = poisson.w_test(Helmstetter, Helmstetter_MS, catalog)
comp_args = {'title': 'Paired T-test result',
             'ylabel': 'information gain',
             'xlabel': 'Model'}

ax = plots.plot_comparison_test([t_test], plot_args=comp_args)
# -

# The first argument to the `paired_t_test` function is taken as model B and the second as our baseline model, or model A. When plotting the result, the horizontal dashed line indicates the performance of model A and the vertical bar shows the confidence bounds for the information gain $I_N(A, B)$ associated with model B relative to model A. In this case, model B does not show significant information gain over model A, so it is coloured red to highlight this.

# ## Catalog-based forecast tests
#
# As an alternative to the grid-based models, the catalog-based tests are designed to better evaluate forecasts which are overdispersed relative to a Poisson distribution. Further, they allow modellers to capture more of the uncertainty in their models than the traditional grid-based approach.
# Specifying simulated catalogs removes the need to simulate from a Poisson distribution: earthquake forecasts are often overdispersed due to spatio-temporal clustering, but models with overdispersion are more likely to be rejected by the original, Poisson-based CSEP tests (Werner et al, 2011a). In the catalog-based forecast tests, forecasts are specified as a set of catalogs generated from the forecast model itself by the forecaster. This allows for a broader range of forecast models, for example those generated from earthquake simulator models. The distribution of realisations is then compared with the observations, similar to the grid-based case. These tests were developed by Savran et al 2020, who applied them to test forecasts following the 2019 Ridgecrest earthquake in Southern California.
#
# Again we begin by defining a region $\boldsymbol{R}$ as a function of some magnitude range $\boldsymbol{M}$, spatial domain $\boldsymbol{S}$ and time period $\boldsymbol{T}$:
# $$ \boldsymbol{R} = \boldsymbol{M} \times \boldsymbol{S} \times \boldsymbol{T}$$
#
# An earthquake $e$ can be described by a magnitude $m_i$ at some location $s_j$ and time $t_k$. A catalog is simply a collection of earthquakes, such that the observed catalog can be written as
# $$\boldsymbol{\Omega} = \{e_n | n = 1...N_{obs}; e_n \in \boldsymbol{R} \}$$
# This is our testing catalog, which is made up of observed events.
#
# A forecast is specified as a collection of synthetic catalogs containing events $\hat{e}_{nj}$ in domain $\boldsymbol{R}$:
# $$ \boldsymbol{\Lambda} \equiv \Lambda_j = \{\hat{e}_{nj} | n = 1...N_j, j = 1...J; \hat{e}_{nj} \in \boldsymbol{R} \} $$
# That is, a forecast consists of $J$ simulated catalogs, each containing $N_j$ events described in time, space and magnitude, such that $\hat{e}_{nj}$ describes the $n$th synthetic event in the $j$th synthetic catalog $\Lambda_j$.
#
# When using simulated forecasts in pyCSEP, we must first explicitly specify the forecast region by specifying the spatial domain and magnitude regions as below. This is necessary for some of the tests, as you will see below. The examples in this section are catalog-based forecast simulations for the Landers earthquake and aftershock sequence generated using UCERF3-ETAS (Field et al, 2017).

# +
start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:34.14")
end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:34.14")

# Magnitude bins properties
min_mw = 4.95
max_mw = 8.95
dmw = 0.1

# Create space and magnitude regions. The forecast is already filtered in space and magnitude
magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
region = regions.california_relm_region()

# Bind region information to the forecast (this will be used for binning of the catalogs)
space_magnitude_region = regions.create_space_magnitude_region(region, magnitudes)

# Load forecast
forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname,
                                      start_time=start_time, end_time=end_time,
                                      region=space_magnitude_region)
forecast.filters = [f'origin_time >= {forecast.start_epoch}',
                    f'origin_time < {forecast.end_epoch}']
_ = forecast.get_expected_rates(verbose=False)

# Obtain Comcat catalog and filter to region.
comcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude) # Filter observed catalog using the same region as the forecast comcat_catalog = comcat_catalog.filter_spatial(forecast.region) # - # #### <b> Number Test </b> # # <b>Aim</b>: As above, the number test aims to evaluate if the number of observed events is consistent with the forecast. # # <b>Method</b>: The observed statistic in this case is given by $N_{obs} = |\Omega|$, which is simply the number of events in the observed catalog. # To build the test distribution from the forecast, we simply count the number of events in each simulated catalog. # $$ N_{j} = |\Lambda_c|; j = 1...J$$ # # As in the gridded test above, we can then evaluate the probabilities of at least and at most N events, in this case using the empirical cumlative distribution function of $F_N$: # $$\delta_1 = P(N_j \geq N_{obs}) = 1 - F_N(N_{obs}-1)$$ # and # $$\delta_2 = P(N_j \leq N_{obs}) = F_N(N_{obs})$$ # # <b> Implementation in pyCSEP </b> # number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog) ax = number_test_result.plot() # Plotting the number test result of a simulated catalog forecast displays a histogram of the numbers of events $\hat{N}_j$ in each simulated catalog $j$, which makes up the test distribution. The test statistic is shown by the dashed line - in this case it is the number of observed events in the catalog $N_{obs}$. # #### <b> Magnitude Test </b> # <b>Aim</b>: Once again, the magnitude test aims to test the consistency of the observed frequency-magnitude distribution with that in the simulated catalogs that make up the forecast. # # <b>Method:</b> The catalog-based magnitude test is implemented quite differently to the grid-based equivalent. We first define the union catalog $\Lambda_U$ as the union of all simulated catalogs in the forecast. Formally: # $$ \Lambda_U = \{ \lambda_1 \cup \lambda_2 \cup ... 
\cup \lambda_j \}$$ # so that the union catalog contains all events across all simulated catalogs for a total of $N_U = \sum_{j=1}^{J} \big|\lambda_j\big|$ events. # We then compute the following histograms discretised to the magnitude range and stepsize (specified earlier for pyCSEP): # 1. The histogram of the union catalog magnitudes $\Lambda_U^{(m)}$ # 2. Histograms of magnitudes in each of the individual simulated catalogs $\lambda_j^{(m)}$ # 3. The histogram of the observed catalog magnitudes $\Omega^{(m)}$ # # The histograms are normalised so that the total number of events across all bins is equal to the observed number. # # The observed statistic is then calculated as the sum of squared logarithmic residuals between the normalised observed magnitudes and the union histograms: # $$d_{obs}= \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Big[\Omega^{(m)}(k) + 1\Big]\Bigg)^2$$ # where $\Lambda_U^{(m)}(k)$ and $\Omega^{(m)}(k)$ represent the count in the $k$th bin of the magnitude-frequency distribution in the union and observed catalogs respectively. We add unity to each bin to avoid $\log(0)$. # # We then build the test distribution from the catalogs in $\boldsymbol{\Lambda}$: # $$ D_j = \sum_{k}\Bigg(\log\Bigg[\frac{N_{obs}}{N_U} \Lambda_U^{(m)}(k) + 1\Bigg]- \log\Bigg[\frac{N_{obs}}{N_j}\lambda_j^{(m)}(k) + 1\Bigg]\Bigg)^2; j= 1...J$$ # where $\lambda_j^{(m)}(k)$ represents the count in the $k$th bin of the magnitude-frequency distribution of the $j$th catalog. # # The quantile score can then be calculated using the empirical CDF such that # $$ \gamma_m = F_D(d_{obs})= P(D_j \leq d_{obs})$$ # # <b> Implementation in pyCSEP </b> # Hopefully you now see why it was necessary to specify our magnitude range explicitly when we set up the catalog-type testing - we need to make sure the magnitudes are properly discretised for the model we want to test.
magnitude_test_result = catalog_evaluations.magnitude_test(forecast, comcat_catalog) ax = magnitude_test_result.plot() # The histogram shows the resulting test distribution with $D_j$ calculated for each simulated catalog as described in the method above. The test statistic $\omega = d_{obs}$ is shown with the dashed horizontal line. The quantile score for this forecast is $\gamma = 0.29$. # #### <b>Pseudo-likelihood test</b> # # <b> Aim </b>: The pseudo-likelihood test aims to evaluate the likelihood of a forecast given an observed catalog. # # <b> Method </b>: The pseudo-likelihood test has similar aims to the grid-based likelihood test above, but its implementation differs in a few significant ways. Firstly, it does not compute an actual likelihood (hence the name pseudo-likelihood), and instead of aggregating over cells as in the grid-based case, it aggregates over target-event likelihood scores (a likelihood score per target event, rather than per grid cell). The most important difference, however, is that the pseudo-likelihood tests do not use a Poisson likelihood at all, but instead calculate a test distribution of pseudo-likelihoods from the simulated catalogs themselves. # # The pseudo-likelihood approach is based on the continuous point process likelihood function. A continuous marked space-time point process can be specified by a conditional intensity function $\lambda(\boldsymbol{e}|H_t)$, in which $H_t$ describes the history of the process in time. The log-likelihood function for any point process in $\boldsymbol{R}$ is given by # $$ L = \sum_{i=1}^{N} \log \lambda(e_i|H_t) - \int_{\boldsymbol{R}}\lambda(\boldsymbol{e}|H_t)d\boldsymbol{R}$$ # Not all models will have an explicit likelihood function, so instead we approximate the expectation of $\lambda(e|H_t)$ using the forecast catalogs.
The approximate rate density is defined as the conditional expectation given a discretised region $R_d$ of the continuous rate # $$\hat{\lambda}(\boldsymbol{e}|H_t) = E\big[\lambda(\boldsymbol{e}|H_t)|R_d\big]$$ # We still regard the model as continuous, but the rate density is approximated within a single cell. This is analogous to the gridded approach where we count the number of events in discrete cells. # The pseudo-log-likelihood is then # $$\hat{L} = \sum_{i=1}^N \log \hat{\lambda}(e_i|H_t) - \int_R \hat{\lambda}(\boldsymbol{e}|H_t) dR $$ # and we can write the approximate spatial rate density as # $$\hat{\lambda}_s(\boldsymbol{e}|H_t) = \sum_M \hat{\lambda}(\boldsymbol{e}|H_t) $$ # where we take the sum over all magnitude bins $M$. # # We can calculate the observed pseudo-likelihood as # $$ \hat{L}_{obs} = \sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s(k_i) - \bar{N} $$ # where $\hat{\lambda}_s(k_i)$ is the approximate rate density in the $k$th spatial cell and $k_i$ denotes the spatial cell in which the $i$th event occurs. $\bar{N}$ is the expected number of events in $R_d$. # Similarly, we calculate the test distribution as # $$\hat{L}_{j} = \Bigg[\sum_{i=1}^{N_{j}} \log\hat{\lambda}_s(k_{ij}) - \bar{N}\Bigg]; j = 1...J $$ # where $\hat{\lambda}_s(k_{ij})$ describes the approximate rate density of the $i$th event in the $j$th catalog. # # We can then calculate the quantile score as # $$ \gamma_L = F_L(\hat{L}_{obs})= P(\hat{L}_j \leq \hat{L}_{obs})$$ # # <b> Implementation in pyCSEP </b> pseudolikelihood_test_result = catalog_evaluations.pseudolikelihood_test(forecast, comcat_catalog) ax = pseudolikelihood_test_result.plot() # The histogram shows the test distribution of pseudo-likelihoods as calculated above for each catalog $j$. The dashed vertical line shows the observed statistic $\hat{L}_{obs} = \omega$. It is clear that the observed statistic falls within the test distribution, as reflected in the quantile score of $\gamma_L = 0.44$.
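Each of the catalog-based consistency tests above ends the same way: compare the observed statistic against the empirical distribution of per-catalog statistics. A minimal sketch of that quantile computation with hypothetical values (plain NumPy, not pyCSEP internals):

```python
import numpy as np

# Hypothetical test distribution: one statistic per simulated catalog
test_distribution = np.array([12., 15., 9., 22., 18., 14., 11., 20.])
observed = 16.0

# Empirical CDF evaluated at the observed statistic:
# gamma = P(T_j <= t_obs), the quantile score used by the catalog tests
gamma = np.sum(test_distribution <= observed) / test_distribution.size
print(gamma)  # 0.625 — fraction of simulated catalogs with statistic <= observed
```

A quantile score near 0 or 1 indicates the observation falls in a tail of the test distribution, i.e. the forecast is inconsistent with the data for that statistic.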
# #### <b> Spatial test </b> # # <b> Aim</b>: The spatial test again aims to isolate the spatial component of the forecast and test the consistency of spatial rates with observed events. # # <b>Method</b>: We perform the spatial test in the catalog-based approach in a similar way to the grid-based spatial test approach: by normalising the approximate rate density. In this case, we use the normalisation $\hat{\lambda}_s^* = \hat{\lambda}_s \big/ \sum_{R} \hat{\lambda}_s$. Then the observed spatial test statistic is calculated as # $$ S_{obs} = \Bigg[\sum_{i=1}^{N_{obs}} \log \hat{\lambda}_s^*(k_i)\Bigg]N_{obs}^{-1}$$ # in which $\hat{\lambda}_s^*(k_i)$ is the normalised approximate rate density in the $k$th cell corresponding to the $i$th event in the observed catalog $\Omega$. # Similarly, we define the test distribution using # $$ S_{j} = \bigg[\sum_{i=1}^{N_{j}} \log \hat{\lambda}_s^*(k_{ij})\bigg]N_{j}^{-1}; j= 1...J$$ # for each catalog $j$. # Finally, the quantile score for the spatial test is determined by once again comparing the observed and test distribution statistics: # $$\gamma_s = F_S(S_{obs}) = P(S_j \leq S_{obs})$$ # # <b> Implementation in pyCSEP </b> spatial_test_result = catalog_evaluations.spatial_test(forecast, comcat_catalog) ax = spatial_test_result.plot() # The histogram shows the test distribution of normalised pseudo-likelihoods computed for each simulated catalog $j$. The dashed vertical line shows the observed test statistic $S_{obs} = \omega = -5.92$, which is clearly within the test distribution. The quantile score $\gamma_s = 0.71$ is also printed on the figure by default. # ### References # <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2017). A spatiotemporal clustering model for the third Uniform California Earthquake Rupture Forecast (UCERF3-ETAS): Toward an operational earthquake forecast, Bull. Seismol. Soc. Am. 107, 1049–1081.
# # <NAME>., and <NAME> (2005), The entropy score and its uses in earthquake forecasting, Pure Appl. Geophys. 162 , 6-7, 1229-1253, DOI: 10.1007/ # s00024-004-2667-2. # # <NAME>., <NAME>, and <NAME> (2006). Comparison of short-term and time-independent earthquake forecast models for southern California, Bulletin of the Seismological Society of America 96 90-106. # # <NAME>., and <NAME> (2006), Assessing potential seismic activity in Vrancea, Romania, using a stress-release model, Earth Planets Space 58 , # 1511-1514. # # <NAME>., and <NAME> (2010), Seismicity models of moderate earthquakes in Kanto, Japan utilizing multiple predictive parameters, Pure Appl. Geophys. # 167 , 6-7, 831-843, DOI: 10.1007/s00024-010-0066-4. # # <NAME>., Schorlemmer, M.C.Gerstenberger, <NAME>, <NAME> & <NAME> (2011) Efficient testing of earthquake forecasting models, Acta Geophysica 59 # # <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2020). Pseudoprospective evaluation of UCERF3-ETAS forecasts during the 2019 Ridgecrest Sequence, Bulletin of the Seismological Society of America. # # <NAME>., and <NAME> (2007), RELM testing center, Seismol. Res. Lett. 78, 30–36. # # <NAME>., <NAME>, <NAME>, <NAME>, and <NAME> (2007), Earthquake likelihood model testing, Seismol. Res. Lett. 78, 17–29. # # <NAME>., <NAME>, <NAME>, <NAME>, <NAME> and <NAME> (2010a). Setting up an earthquake forecast experiment in Italy, Annals of Geophysics, 53, no.3 # # <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2010b), First results of the Regional Earthquake Likelihood Models experiment, Pure Appl. Geophys., 167, 8/9, doi:10.1007/s00024-010-0081-5. # # <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>; Prospective CSEP Evaluation of 1‐Day, 3‐Month, and 5‐Yr Earthquake Forecasts for Italy. Seismological Research Letters 2018;; 89 (4): 1251–1261. doi: https://doi.org/10.1785/0220180031 # # <NAME>., <NAME>, <NAME>, and <NAME> (2011a). 
High-Resolution Long-Term and Short-Term Earthquake Forecasts for California, Bulletin of the Seismological Society of America 101 1630-1648 # # <NAME>. <NAME>, <NAME>, and <NAME> (2011b), Retrospective evaluation of the five-year and ten-year CSEP-Italy earthquake forecasts, Annals of Geophysics 53, no. 3, 11–30, doi:10.4401/ag-4840. # # Zechar, 2011: Evaluating earthquake predictions and earthquake forecasts: a guide for students and new researchers, CORSSA (http://www.corssa.org/en/articles/theme_6/) # # <NAME>., <NAME>, and <NAME> (2010a), Likelihood-based tests for evaluating space-rate-magnitude forecasts, Bull. Seis. Soc. Am., 100(3), 1184—1195, doi:10.1785/0120090192. # # <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME> (2010b), The Collaboratory for the Study of Earthquake Predictability perspective on computational earthquake science, Concurr. Comp-Pract. E., doi:10.1002/cpe.1519. # #
CSEP_tests.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: python3 # language: python # name: python3 # --- # # BaseTransformer # This notebook shows the functionality included in the BaseTransformer class. This is the base class for the package and all other transformers within should inherit from it. This means that the functionality below is also present in the other transformers in the package. <br> # This is more 'behind the scenes' functionality that is useful to be aware of, but not the actual transformations required before building / predicting with models. <br> # Examples of the actual pre-processing transformations can be found in the other notebooks in this folder. import pandas as pd import numpy as np import tubular from tubular.base import BaseTransformer tubular.__version__ # ## Load Boston house price dataset from sklearn # Note, the load_boston script modifies the original Boston dataset to include null values and pandas categorical dtypes. boston_df = tubular.testing.test_data.prepare_boston_df() boston_df.shape boston_df.head() boston_df.dtypes # ## Initialising BaseTransformer # ### Not setting columns # Columns do not have to be specified when initialising BaseTransformer objects. Both the fit and transform methods call columns_set_or_check to ensure that columns is set before the transformer has to do any work. base_1 = BaseTransformer(copy = True, verbose = True) # ## BaseTransformer fit # Not all transformers in the package will implement a fit method; if the user directly specifies the values the transformer needs (e.g. passes the impute value), there is no need for one. # ### Setting columns in fit # If the columns attribute is not set when fit is called, columns_set_or_check will set columns to be all columns in X.
base_1.columns is None base_1.fit(boston_df) base_1.columns # ## BaseTransformer transform # All transformers will implement a transform method. # ### Transform with copy # This ensures that the input dataset is not modified in transform. boston_df_2 = base_1.transform(boston_df) pd.testing.assert_frame_equal(boston_df_2, boston_df) boston_df_2 is boston_df # ### Transform without copy # This can be useful if you are working with a large dataset or are concerned about the time to copy. base_2 = BaseTransformer(copy = False, verbose = True) boston_df_3 = base_2.fit_transform(boston_df) boston_df_3 is boston_df
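The `copy=True`/`copy=False` behaviour demonstrated above can be sketched as a minimal stand-in transformer (a simplified illustration, not tubular's actual implementation):

```python
import pandas as pd

class MiniTransformer:
    """Simplified sketch of the copy=True / copy=False behaviour."""
    def __init__(self, copy=True):
        self.copy = copy

    def transform(self, X):
        # With copy=True the caller's frame is left untouched;
        # with copy=False we operate on (and return) the same object.
        return X.copy() if self.copy else X

df = pd.DataFrame({"a": [1, 2, 3]})
out_copy = MiniTransformer(copy=True).transform(df)
out_nocopy = MiniTransformer(copy=False).transform(df)
print(out_copy is df, out_nocopy is df)  # False True
```

Skipping the copy avoids duplicating memory for large frames, at the cost of mutating the caller's data in place.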
examples/base/BaseTransformer.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Seldon Kafka Integration with KEDA scaling over SSL # # In this example we will # # * run SeldonDeployments for a CIFAR10 Tensorflow model which take their inputs from a Kafka topic and push their outputs to a Kafka topic. # * scale the Seldon Deployment via KEDA. # * consume/produce requests over SSL. # ## Requirements # # * [Install gsutil](https://cloud.google.com/storage/docs/gsutil_install) # # !pip install -r requirements.txt # ## Setup Kafka and KEDA # * Install Strimzi on cluster via our [playbook](https://github.com/SeldonIO/ansible-k8s-collection/blob/master/playbooks/kafka.yaml) # # ``` # ansible-playbook kafka.yaml # ``` # # * [Install KEDA](https://keda.sh/docs/2.6/deploy/) (tested on 2.6.1) # * See docs for [Kafka Scaler](https://keda.sh/docs/2.6/scalers/apache-kafka/) # # ## Create Kafka Cluster # # * Note the TLS listener is created with authentication # !cat cluster.yaml # + active="" # !kubectl create -f cluster.yaml -n kafka # - # ## Create Kafka User # # This will create a secret called seldon-user in the kafka namespace with a cert and key we can use later # !cat user.yaml # !kubectl create -f user.yaml -n kafka # ## Create Topics # res = !kubectl get service seldon-kafka-tls-bootstrap -n kafka -o=jsonpath='{.status.loadBalancer.ingress[0].ip}' ip = res[0] # %env TLS_BROKER=$ip:9093 # res = !kubectl get service seldon-kafka-plain-bootstrap -n kafka -o=jsonpath='{.status.loadBalancer.ingress[0].ip}' ip = res[0] # %env BROKER=$ip:9092 # %%writefile topics.yaml apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name:
cifar10-rest-output namespace: kafka labels: strimzi.io/cluster: "seldon" spec: partitions: 2 replicas: 1 # Create two topics with 2 partitions each. This will allow scaling up to 2 replicas. # !kubectl create -f topics.yaml # ## Install Seldon # # * [Install Seldon](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) # * [Follow our docs to install the Grafana analytics](https://docs.seldon.io/projects/seldon-core/en/latest/analytics/analytics.html). # ## Download Test Request Data # We have two example datasets containing 50,000 requests in tensorflow serving format for CIFAR10. One in JSON format and one as length-encoded proto buffers. # !gsutil cp gs://seldon-datasets/cifar10/requests/tensorflow/cifar10_tensorflow.json.gz cifar10_tensorflow.json.gz # !gunzip cifar10_tensorflow.json.gz # ## Test CIFAR10 REST Model # Upload tensorflow serving rest requests to Kafka. This may take some time depending on your network connection. # !python ../../../util/kafka/test-client.py produce $BROKER cifar10-rest-input --file cifar10_tensorflow.json # res = !kubectl get service -n kafka seldon-kafka-tls-bootstrap -o=jsonpath='{.spec.clusterIP}' ip = res[0] # %env TLS_BROKER_CIP=$ip # !kubectl create secret generic keda-enable-tls --from-literal=tls=enable -n kafka # ## Create Trigger Auth # # * References keda-enable-tls secret # * References seldon-cluster-ca-cert for ca cert # * References seldon-user for user certificate # !cat trigger-auth.yaml # !kubectl create -f trigger-auth.yaml -n kafka # %%writefile cifar10_rest.yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: tfserving-cifar10 namespace: kafka spec: protocol: tensorflow transport: rest serverType: kafka predictors: - componentSpecs: - spec: containers: - args: - --port=8500 - --rest_api_port=8501 - --model_name=resnet32 - --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32 - --enable_batching image: tensorflow/serving name: resnet32 ports: -
containerPort: 8501 name: http kedaSpec: pollingInterval: 15 minReplicaCount: 1 maxReplicaCount: 2 triggers: - type: kafka metadata: bootstrapServers: TLS_BROKER_CIP consumerGroup: model.tfserving-cifar10.kafka lagThreshold: "50" topic: cifar10-rest-input offsetResetPolicy: latest #authMode: sasl_ssl (for latest KEDA - not released yet) authenticationRef: name: seldon-kafka-auth svcOrchSpec: env: - name: KAFKA_BROKER value: TLS_BROKER_CIP - name: KAFKA_INPUT_TOPIC value: cifar10-rest-input - name: KAFKA_OUTPUT_TOPIC value: cifar10-rest-output - name: KAFKA_SECURITY_PROTOCOL value: ssl - name: KAFKA_SSL_CA_CERT valueFrom: secretKeyRef: name: seldon-cluster-ca-cert key: ca.crt - name: KAFKA_SSL_CLIENT_CERT valueFrom: secretKeyRef: name: seldon-user key: user.crt - name: KAFKA_SSL_CLIENT_KEY valueFrom: secretKeyRef: name: seldon-user key: user.key - name: KAFKA_SSL_CLIENT_KEY_PASS valueFrom: secretKeyRef: name: seldon-user key: user.password graph: name: resnet32 type: MODEL endpoint: service_port: 8501 name: model replicas: 1 # !cat cifar10_rest.yaml | sed s/TLS_BROKER_CIP/$TLS_BROKER_CIP:9093/ | kubectl apply -f - # !kubectl delete -f cifar10_rest.yaml
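The `KAFKA_SSL_*` environment variables in the `svcOrchSpec` above map onto standard librdkafka-style TLS client options. A sketch of building that client configuration as a plain dict (file paths are hypothetical — in the deployment the certs come from the `seldon-cluster-ca-cert` and `seldon-user` secrets):

```python
def tls_client_config(broker, ca_cert, client_cert, client_key, key_password):
    """Assemble librdkafka-style TLS options mirroring the KAFKA_SSL_* env vars."""
    return {
        "bootstrap.servers": broker,
        "security.protocol": "ssl",
        "ssl.ca.location": ca_cert,               # cluster CA (seldon-cluster-ca-cert)
        "ssl.certificate.location": client_cert,  # user cert (seldon-user)
        "ssl.key.location": client_key,           # user key (seldon-user)
        "ssl.key.password": key_password,
    }

conf = tls_client_config("seldon-kafka-tls-bootstrap:9093",
                         "/certs/ca.crt", "/certs/user.crt",
                         "/certs/user.key", "changeme")
print(conf["security.protocol"])  # ssl
```

A dict like this could be handed to a librdkafka-based client (e.g. `confluent_kafka.Producer(conf)`) to talk to the TLS listener on port 9093.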
examples/kafka/kafka_keda/cifar10_kafka.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # MetaData # @Author : <NAME> # Date : 28-06-2021 # + # Importing the necessary libraries # - import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn # %matplotlib inline import warnings warnings.filterwarnings("ignore") pd.set_option('display.max_columns', 150) # or 1000. pd.set_option('display.max_rows', 150) # or 1000. # Importing the data df=pd.read_csv(r'C:\Users\yashs\Desktop\HighRadius\H2HBABBA2968.csv') df # Splitting the data based on the clear_date main_train = df[df.clear_date.notnull()] main_test = df[df.clear_date.isnull()] # # Data pre-processing # Converting the below columns to datetime main_train['posting_date'] = pd.to_datetime(main_train['posting_date'], format='%Y/%m/%d' ) main_train['document_create_date'] = pd.to_datetime(main_train['document_create_date'], format='%Y%m%d') main_train['clear_date'] = pd.to_datetime(main_train['clear_date'], format='%Y/%m/%d') main_train['baseline_create_date']=pd.to_datetime(main_train.baseline_create_date, format='%Y%m%d') main_train['due_in_date']=pd.to_datetime(main_train.due_in_date, format='%Y%m%d') # Calculating the delay using the formula: Delay = clear_date - due_in_date main_train['Y=Delay'] = main_train['clear_date'] - main_train['due_in_date'] # Converting the datatype of delay column from timedelta to int main_train['Y=Delay'] = pd.to_numeric(main_train['Y=Delay'].dt.days, downcast='integer') # + # Encoding the main features : from sklearn.preprocessing import LabelEncoder business_code_encoder = LabelEncoder() business_code_encoder.fit(main_train['business_code']) main_train['business_code_enc'] = business_code_encoder.transform(main_train['business_code']) document_type_encoder = LabelEncoder()
document_type_encoder.fit(main_train['document type']) main_train['document_type_enc'] = document_type_encoder.transform(main_train['document type']) cust_number_encoder = LabelEncoder() cust_number_encoder.fit(main_train['cust_number']) main_train['cust_number_enc'] = cust_number_encoder.transform(main_train['cust_number']) name_customer_encoder = LabelEncoder() name_customer_encoder.fit(main_train['name_customer']) main_train['name_customer_enc'] = name_customer_encoder.transform(main_train['name_customer']) clear_date_encoder = LabelEncoder() clear_date_encoder.fit(main_train['clear_date']) main_train['clear_date_enc'] = clear_date_encoder.transform(main_train['clear_date']) buisness_year_encoder = LabelEncoder() buisness_year_encoder.fit(main_train['buisness_year']) main_train['buisness_year_enc'] = buisness_year_encoder.transform(main_train['buisness_year']) doc_id_encoder = LabelEncoder() doc_id_encoder.fit(main_train['doc_id']) main_train['doc_id_enc'] = doc_id_encoder.transform(main_train['doc_id']) posting_date_encoder = LabelEncoder() posting_date_encoder.fit(main_train['posting_date']) main_train['posting_date_enc'] = posting_date_encoder.transform(main_train['posting_date']) document_create_date_encoder = LabelEncoder() document_create_date_encoder.fit(main_train['document_create_date']) main_train['document_create_date_enc'] = document_create_date_encoder.transform(main_train['document_create_date']) document_create_date1_encoder = LabelEncoder() document_create_date1_encoder.fit(main_train['document_create_date.1']) main_train['document_create_date.1_enc'] = document_create_date1_encoder.transform(main_train['document_create_date.1']) due_in_date_encoder = LabelEncoder() due_in_date_encoder.fit(main_train['due_in_date']) main_train['due_in_date_enc'] = due_in_date_encoder.transform(main_train['due_in_date']) invoice_currency_encoder = LabelEncoder() invoice_currency_encoder.fit(main_train['invoice_currency']) main_train['invoice_currency_enc'] = 
invoice_currency_encoder.transform(main_train['invoice_currency']) total_open_amount_encoder = LabelEncoder() total_open_amount_encoder.fit(main_train['total_open_amount']) main_train['total_open_amount_enc'] = total_open_amount_encoder.transform(main_train['total_open_amount']) document_create_date_encoder = LabelEncoder() document_create_date_encoder.fit(main_train['document_create_date']) main_train['document_create_date_enc'] = document_create_date_encoder.transform(main_train['document_create_date']) baseline_create_date_encoder = LabelEncoder() baseline_create_date_encoder.fit(main_train['baseline_create_date']) main_train['baseline_create_date_enc'] = baseline_create_date_encoder.transform(main_train['baseline_create_date']) cust_payment_terms_encoder = LabelEncoder() cust_payment_terms_encoder.fit(main_train['cust_payment_terms']) main_train['cust_payment_terms_enc'] = cust_payment_terms_encoder.transform(main_train['cust_payment_terms']) invoice_id_encoder = LabelEncoder() invoice_id_encoder.fit(main_train['invoice_id']) main_train['invoice_id_enc'] = invoice_id_encoder.transform(main_train['invoice_id']) # - main_train.shape # dropping columns document type', 'area_business', 'document_create_date.1' # Here we also know that isOpen and posting_id are constant columns main_train.drop(columns=['area_business', 'posting_id', 'isOpen', 'business_code', 'cust_number', 'name_customer', 'clear_date', 'buisness_year', 'doc_id', 'posting_date', 'document_create_date', 'document_create_date.1', 'due_in_date', 'invoice_currency', 'total_open_amount', 'document_create_date', 'baseline_create_date', 'cust_payment_terms', 'invoice_id', 'document type', 'clear_date_enc' ], inplace=True) main_train = main_train.sort_values(by="posting_date_enc") main_train.reset_index(inplace=True, drop=True) # Splitting the main_train dataframe into two parts: X and Y # Here, X=main_train without the target column i.e Y=Delay # and y= the target column of the main_train dataframe 
X=main_train.drop('Y=Delay', axis=1) y=main_train['Y=Delay'] # splitting the data with 30% of the data going to the intermediate test dataset from sklearn.model_selection import train_test_split X_train,X_inter_test,y_train,y_inter_test = train_test_split(X,y,test_size=0.3,random_state=0 , shuffle = False) X_val,X_test,y_val,y_test = train_test_split(X_inter_test,y_inter_test,test_size=0.5,random_state=0 , shuffle = False) # Checking the shapes of all three, X_train, X_val and X_test X_train.shape , X_val.shape , X_test.shape # # That's all for Milestone1 # # Milestone 2 Beginning # # Exploratory data analysis sns.distplot(y_train) # + # The distribution is positively skewed or right skewed # mean>median>mode # - X_train.info() X_train y_train # + # Checking out the heatmap for correlation : # - colormap = plt.cm.RdBu plt.figure(figsize=(14,14)) plt.title('Pearson Correlation of Features', y=1.05, size=15) sns.heatmap(X_train.merge(y_train , on = X_train.index ).corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True) # + # Since posting date has correlation=1 with document_create_date_enc and document_create_date1_enc # we can drop the other two # + # Feature selection and feature engineering # - X_train.drop(columns=['document_create_date_enc', 'document_create_date.1_enc', 'baseline_create_date_enc'], inplace=True) X_val.drop(columns=['document_create_date_enc', 'document_create_date.1_enc', 'baseline_create_date_enc'], inplace=True) X_test.drop(columns=['document_create_date_enc', 'document_create_date.1_enc', 'baseline_create_date_enc'], inplace=True) # # Beginning with milestone3 # # Modelling # Training the model using Linear regression from sklearn.linear_model import LinearRegression base_model = LinearRegression() base_model.fit(X_train, y_train) y_predict = base_model.predict(X_val) # Comparing the predicted values and the actual validation values side by side comp_res=tuple(zip(y_val, y_predict)) comp_res # Calculating the mean
squared error from sklearn.metrics import mean_squared_error mean_squared_error(y_val, y_predict, squared=False) met = pd.DataFrame(zip(y_predict, y_val),columns=['Predicted','Actuals']) (abs(met.Predicted-met.Actuals)/met.Actuals).mean() # + # Tree based approach # Training the model using the decision tree approach # - from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0 , max_depth=5) regressor.fit(X_train, y_train) y_predict2 = regressor.predict(X_val) mean_squared_error(y_val, y_predict2, squared=False) ################################################################# # Doing a prediction on X_test y_predict_test = regressor.predict(X_test) mean_squared_error(y_test, y_predict_test, squared=False) # Comparing the predicted y_test values and the actual y_test values side by side comp_res=tuple(zip(y_test, y_predict_test)) comp_res # Doing a prediction on X_val y_predict_val = regressor.predict(X_val) mean_squared_error(y_val, y_predict_val, squared=False) # Comparing the predicted y_val values and the actual y_val values side by side comp_res=tuple(zip(y_val, y_predict_val)) comp_res # Making a copy of the main_test X_main_test=main_test.copy(deep=True) # + # Converting the below columns to datetime X_main_test['posting_date'] = pd.to_datetime(X_main_test['posting_date'], format='%Y/%m/%d' ) X_main_test['document_create_date'] = pd.to_datetime(X_main_test['document_create_date'], format='%Y%m%d') X_main_test['clear_date'] = pd.to_datetime(X_main_test['clear_date'], format='%Y/%m/%d') X_main_test['baseline_create_date']=pd.to_datetime(X_main_test.baseline_create_date, format='%Y%m%d') X_main_test['due_in_date']=pd.to_datetime(X_main_test.due_in_date, format='%Y%m%d') main_test['posting_date'] = pd.to_datetime(main_test['posting_date'], format='%Y/%m/%d' ) main_test['document_create_date'] = pd.to_datetime(main_test['document_create_date'], format='%Y%m%d') main_test['clear_date'] =
main_test['clear_date'] = pd.to_datetime(main_test['clear_date'], format='%Y/%m/%d')

main_test['baseline_create_date'] = pd.to_datetime(main_test.baseline_create_date, format='%Y%m%d')

main_test['due_in_date'] = pd.to_datetime(main_test.due_in_date, format='%Y%m%d')

# +
# Encoding the main features of main_test. Each column gets its own LabelEncoder;
# the repeated fit/transform blocks collapse into a single loop.
from sklearn.preprocessing import LabelEncoder

encode_cols = ['business_code', 'cust_number', 'name_customer', 'buisness_year',
               'doc_id', 'posting_date', 'document_create_date',
               'document_create_date.1', 'due_in_date', 'invoice_currency',
               'document type', 'posting_id', 'total_open_amount',
               'baseline_create_date', 'cust_payment_terms', 'invoice_id', 'isOpen']

encoders = {}
for col in encode_cols:
    encoder = LabelEncoder()
    encoder.fit(X_main_test[col])
    X_main_test[col.replace(' ', '_') + '_enc'] = encoder.transform(X_main_test[col])
    encoders[col] = encoder
# -

# Dropping the raw columns ('document type', 'area_business', 'document_create_date.1', ...)
# along with the encodings of isOpen and posting_id, which we know are constant columns
X_main_test.drop(columns=['area_business', 'posting_id', 'isOpen', 'business_code',
                          'cust_number', 'name_customer', 'clear_date', 'buisness_year',
                          'doc_id', 'posting_date', 'document_create_date',
                          'document_create_date.1', 'due_in_date', 'invoice_currency',
                          'total_open_amount', 'baseline_create_date', 'cust_payment_terms',
                          'invoice_id', 'document type', 'document_create_date_enc',
                          'document_create_date.1_enc', 'posting_id_enc',
                          'baseline_create_date_enc', 'isOpen_enc'],
                 inplace=True)

# Calculating the final results
final_result = regressor.predict(X_main_test)
final_result = pd.Series(final_result, name='Y=Delay')

# Resetting the index of main_test so that the prediction series can be merged with it
main_test.reset_index(drop=True, inplace=True)

# Creating the final dataframe
Final = main_test.merge(final_result, on=main_test.index)
Final['Y=Delay'] = Final['Y=Delay'].astype(int)
Final

# Calculating the clear_date with a simple formula: clear_date = due_in_date + delay
Final['clear_date'] = Final['due_in_date'] + pd.to_timedelta(Final['Y=Delay'], unit='d')
Final[['clear_date', 'due_in_date', 'Y=Delay']]

# Displaying the final table with the predicted delay column and the clear_date column
# calculated from the predicted delay and the already existing due_in_date
Final

# +
# Bucketing the delay values into categories of 10 units to display the range in which each delay lies
lower_limit = ((Final['Y=Delay'] // 10) * 10).min()
upper_limit = ((Final['Y=Delay'] // 10 + 1) * 10).max()
bins = np.arange(lower_limit, upper_limit + 10, 10)
labels = [f'{int(i)} days-{int(j)} days' for i, j in zip(bins[:-1], bins[1:])]
Final['Delay_Range'] = pd.cut(Final['Y=Delay'], bins=bins, labels=labels, right=False)
Final
# -

Final.to_csv('predicted_values2.csv', header=True, index=False)
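The delay-bucketing cell above leans on `pd.cut` with left-closed 10-day bins. A minimal standalone sketch of the same scheme, using made-up delay values rather than the model's predictions:

```python
import numpy as np
import pandas as pd

# Made-up delays (in days) standing in for the predicted 'Y=Delay' column
delays = pd.Series([3, 12, 27, 30], name='Y=Delay')

# Same binning scheme as above: floor/ceil to the nearest multiple of 10
lower = ((delays // 10) * 10).min()        # 0
upper = ((delays // 10 + 1) * 10).max()    # 40
bins = np.arange(lower, upper + 10, 10)    # [0, 10, 20, 30, 40]
labels = [f'{int(i)} days-{int(j)} days' for i, j in zip(bins[:-1], bins[1:])]

# right=False makes the bins left-closed, so a delay of exactly 30
# lands in '30 days-40 days', not '20 days-30 days'
ranges = pd.cut(delays, bins=bins, labels=labels, right=False)
print(list(ranges))
```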
Yash- Payment Date prediction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import cv2
import numpy as np
import matplotlib.pyplot as plt
import PIL.Image as Image

# +
# input file
mov_path = 'Resources/StitchReceipt/bon.mp4'

# +
# capture frames to folder
sift = cv2.xfeatures2d.SIFT_create()

vid_capture = cv2.VideoCapture(mov_path)
count = 0
success, image_pre = vid_capture.read()
while success:
    success, image_post = vid_capture.read()
    if not success:
        break  # end of video: the read failed, so there is no new frame to register
    if count % 5 != 0:
        count += 1
        continue
    image_pre_gray = cv2.cvtColor(image_pre, cv2.COLOR_BGR2GRAY)
    image_post_gray = cv2.cvtColor(image_post, cv2.COLOR_BGR2GRAY)
    kp1, ds1 = sift.detectAndCompute(image_pre_gray, None)
    kp2, ds2 = sift.detectAndCompute(image_post_gray, None)
    im1 = image_pre.copy()
    im2 = image_post.copy()
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(ds1, ds2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])
    pts1 = np.zeros((len(matches), 2), dtype=np.float32)
    pts2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        pts1[i, :] = kp1[match[0].queryIdx].pt
        pts2[i, :] = kp2[match[0].trainIdx].pt
    h, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC)
    im_reg = cv2.warpPerspective(im2, h, ((im2.shape[1] + im1.shape[1]), im1.shape[0]))
    cv2.imwrite("reg-%d.jpg" % count, im_reg)
    image_pre = im_reg
    count += 1
# -

image_pre_gray = cv2.cvtColor(image_pre, cv2.COLOR_BGR2GRAY)
image_post_gray = cv2.cvtColor(image_post, cv2.COLOR_BGR2GRAY)

# +
sift = cv2.xfeatures2d.SIFT_create()

kp1, ds1 = sift.detectAndCompute(image_pre_gray, None)
kp2, ds2 = sift.detectAndCompute(image_post_gray, None)

print(np.shape(kp1))
print(np.shape(kp2))

# +
im1 = image_pre.copy()
im2 = image_post.copy()

kp_img1 = cv2.drawKeypoints(image_pre_gray, kp1, image_pre, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
kp_img2 = cv2.drawKeypoints(image_post_gray, kp2, image_post, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

plt.imshow(kp_img1), plt.show()
plt.imshow(kp_img2), plt.show()

WRITE_IMAGES = False
if WRITE_IMAGES:
    cv2.imwrite('img1-kp.jpg', kp_img1)
    cv2.imwrite('img2-kp.jpg', kp_img2)

# +
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(ds1, ds2, k=2)

good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

match_res = cv2.drawMatchesKnn(image_pre, kp1, image_post, kp2, good, None,
                               flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(match_res), plt.show()

WRITE_RESULT = False
if WRITE_RESULT:
    cv2.imwrite('match-res.jpg', match_res)

#print(matches)
#print(matches[0][0].distance)

# +
pts1 = np.zeros((len(matches), 2), dtype=np.float32)
pts2 = np.zeros((len(matches), 2), dtype=np.float32)

for i, match in enumerate(matches):
    pts1[i, :] = kp1[match[0].queryIdx].pt
    pts2[i, :] = kp2[match[0].trainIdx].pt

h, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC)
print(h)

dims = np.shape(image_post)
bnd1 = np.array([[0, 0, 1]]).T
bnd2 = np.array([[0, dims[1], 1]]).T
print(bnd1)
print(bnd2)
bnds1 = np.matmul(h, bnd1)
bnds2 = np.matmul(h, bnd2)
print(bnds1)
print(bnds2)

height, width, color = image_pre.shape
im_reg = cv2.warpPerspective(im2, h, ((im2.shape[1] + im1.shape[1]), im1.shape[0]))
plt.imshow(im_reg), plt.show()

WRITE_IMAGE = True
if WRITE_IMAGE:
    cv2.imwrite('registered-1-2.jpg', im_reg)
# -

plt.imshow(np.real(stitched))
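`findHomography` returns the 3×3 matrix `h` that `warpPerspective` applies to every pixel, including a divide by the third homogeneous coordinate. A numpy-only sketch of that point mapping; the matrix here is a hand-made pure translation, not one estimated from the receipt video:

```python
import numpy as np

def apply_homography(h, pts):
    """Map Nx2 points through a 3x3 homography, with the perspective divide."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # to homogeneous coordinates
    mapped = homog @ h.T                    # apply h to every point
    return mapped[:, :2] / mapped[:, 2:3]   # divide by the w coordinate

# A pure-translation homography: shift by (100, 0), i.e. paste the second
# frame 100 px to the right of the first, as the stitching warp above does
h = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

print(apply_homography(h, [[0, 0], [50, 20]]))  # [[100. 0.] [150. 20.]]
```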
Homography_warp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py35] # language: python # name: conda-env-py35-py # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter import matplotlib as mpl import matplotlib.dates as mdates import datetime # Set the matplotlib settings (eventually this will go at the top of the graph_util) mpl.rcParams['axes.labelsize'] = 16 mpl.rcParams['axes.titlesize'] = 20 mpl.rcParams['legend.fontsize'] = 16 mpl.rcParams['font.size'] = 16.0 mpl.rcParams['figure.figsize'] = [15,10] mpl.rcParams['xtick.labelsize'] = 16 mpl.rcParams['ytick.labelsize'] = 16 # Set the style for the graphs plt.style.use('bmh') # Additional matplotlib formatting settings months = mdates.MonthLocator() # This formats the months as three-letter abbreviations months_format = mdates.DateFormatter('%b') # - def area_cost_distribution(df, fiscal_year_col, utility_col_list, filename): # Inputs include the dataframe, the column name for the fiscal year column, and the list of column names for the # different utility bills. The dataframe should already include the summed bills for each fiscal year. 
fig, ax = plt.subplots() # Take costs for each utility type and convert to percent of total cost by fiscal year df['total_costs'] = df[utility_col_list].sum(axis=1) percent_columns = [] for col in utility_col_list: percent_col = "Percent " + col percent_columns.append(percent_col) df[percent_col] = df[col] / df.total_costs # Create stacked area plot ax.stackplot(df[fiscal_year_col], df[percent_columns].T, labels=percent_columns) # Format the y axis to be in percent ax.yaxis.set_major_formatter(FuncFormatter('{0:.0%}'.format)) # Format the x-axis to include all fiscal years plt.xticks(np.arange(df[fiscal_year_col].min(), df[fiscal_year_col].max()+1, 1.0)) # Add title and axis labels plt.title('Annual Utility Cost Distribution') plt.ylabel('Utility Cost Distribution') plt.xlabel('Fiscal Year') # Add legend plt.legend() # Make sure file goes in the proper directory folder_and_filename = 'output/images/' + filename # Save and show plt.savefig(folder_and_filename) plt.show() def area_use_distribution(df, fiscal_year_col, utility_col_list, filename): # Inputs include the dataframe, the column name for the fiscal year column, and the list of column names for the # different utility bills. The dataframe should already include the summed bills for each fiscal year. 
fig, ax = plt.subplots() # Take usage for each utility type and convert to percent of total cost by fiscal year df['total_use'] = df[utility_col_list].sum(axis=1) percent_columns = [] for col in utility_col_list: percent_col = "Percent " + col percent_columns.append(percent_col) df[percent_col] = df[col] / df.total_use # Create stacked area plot ax.stackplot(df[fiscal_year_col], df[percent_columns].T, labels=percent_columns) # Format the y axis to be in percent ax.yaxis.set_major_formatter(FuncFormatter('{0:.0%}'.format)) # Format the x-axis to include all fiscal years plt.xticks(np.arange(df[fiscal_year_col].min(), df[fiscal_year_col].max()+1, 1.0)) # Add title and axis labels plt.title('Annual Energy Usage Distribution') plt.ylabel('Annual Energy Usage Distribution') plt.xlabel('Fiscal Year') # Add legend plt.legend() # Make sure file goes in the proper directory folder_and_filename = 'output/images/' + filename # Save and show plt.savefig(folder_and_filename) plt.show() def create_stacked_bar(df, fiscal_year_col, column_name_list, filename): # Parameters include the dataframe, the name of the column where the fiscal year is listed, a list of the column names # with the correct data for the chart, and the filename where the output should be saved. # Create the figure plt.figure() # Set the bar width width = 0.50 # Create the stacked bars. The "bottom" is the sum of all previous bars to set the starting point for the next bar. 
    previous_col_name = 0
    for col in column_name_list:
        short_col_name = col.split(" Cost")[0]
        plt.bar(df[fiscal_year_col], df[col], width, label=short_col_name, bottom=previous_col_name)
        previous_col_name = previous_col_name + df[col]

    # Label axes
    plt.ylabel('Utility Cost [$]')
    plt.xlabel('Fiscal Year')
    plt.title('Total Annual Utility Costs')

    # Make one bar for each fiscal year
    plt.xticks(np.arange(df[fiscal_year_col].min(), df[fiscal_year_col].max()+1, 1.0),
               np.sort(list(df[fiscal_year_col].unique())))

    # Set the yticks to go up to the total cost in increments of 100,000
    df['total_cost'] = df[column_name_list].sum(axis=1)
    plt.yticks(np.arange(0, df.total_cost.max(), 100000))

    plt.legend()

    # Make sure the file goes in the proper directory
    folder_and_filename = 'output/images/' + filename

    # Save and show
    plt.savefig(folder_and_filename)
    plt.show()


def energy_use_stacked_bar(df, fiscal_year_col, column_name_list, filename):
    # Parameters include the dataframe, the name of the column where the fiscal year is listed,
    # a list of the column names with the correct data for the chart, and the filename where the
    # output should be saved.

    # Create the figure
    plt.figure()

    # Set the bar width
    width = 0.50

    # Create the stacked bars. The "bottom" is the sum of all previous bars to set the starting
    # point for the next bar.
    previous_col_name = 0
    for col in column_name_list:
        short_col_name = col.split(" [MMBTU")[0]
        plt.bar(df[fiscal_year_col], df[col], width, label=short_col_name, bottom=previous_col_name)
        previous_col_name = previous_col_name + df[col]

    # Label axes
    plt.ylabel('Annual Energy Usage [MMBTU]')
    plt.xlabel('Fiscal Year')
    plt.title('Total Annual Energy Usage')

    # Make one bar for each fiscal year
    plt.xticks(np.arange(df[fiscal_year_col].min(), df[fiscal_year_col].max()+1, 1.0),
               np.sort(list(df[fiscal_year_col].unique())))

    # Set the yticks to go up to the total usage in increments of 1,000
    df['total_use'] = df[column_name_list].sum(axis=1)
    plt.yticks(np.arange(0, df.total_use.max(), 1000))

    plt.legend()

    # Make sure the file goes in the proper directory
    folder_and_filename = 'output/images/' + filename

    # Save and show
    plt.savefig(folder_and_filename)
    plt.show()


def usage_pie_charts(df, use_or_cost_cols, chart_type, filename):
    # df: A dataframe with the fiscal_year as the index; needs to include the values for the passed-in list of columns.
    # use_or_cost_cols: a list of the energy usage or energy cost column names
    # chart_type: 1 for an energy use pie chart, 2 for an energy cost pie chart

    # Get the three most recent complete years of data
    complete_years = df.query("month_count == 12.0")
    sorted_completes = complete_years.sort_index(ascending=False)
    most_recent_complete_years = sorted_completes[0:3]
    years = list(most_recent_complete_years.index.values)

    # Create percentages from usage
    most_recent_complete_years = most_recent_complete_years[use_or_cost_cols]
    most_recent_complete_years['Totals'] = most_recent_complete_years.sum(axis=1)
    for col in use_or_cost_cols:
        most_recent_complete_years[col] = most_recent_complete_years[col] / most_recent_complete_years.Totals
    most_recent_complete_years = most_recent_complete_years.drop('Totals', axis=1)
    for col in use_or_cost_cols:
        if most_recent_complete_years[col].iloc[0] == 0:
            most_recent_complete_years = most_recent_complete_years.drop(col, axis=1)

    # Create a pie chart for each of the 3 most recent complete years
    for year in years:
        year_df = most_recent_complete_years.query("fiscal_year == @year")
        plt.figure()
        fig, ax = plt.subplots()
        ax.pie(list(year_df.iloc[0].values), labels=list(year_df.columns.values),
               autopct='%1.1f%%', shadow=True, startangle=90)

        # Create the title based on whether it is an energy use or energy cost pie chart.
        if chart_type == 1:
            title = "FY " + str(year) + " Energy Usage [MMBTU]"
        else:
            title = "FY " + str(year) + " Energy Cost [$]"
        plt.title(title)

        ax.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle.

        # Make sure the file goes in the proper directory
        folder_and_filename = 'output/images/' + filename + str(year)

        # Save and show
        plt.savefig(folder_and_filename)
        plt.show()


def create_monthly_profile(df, graph_column_name, yaxis_name, color_choice, filename):
    # Parameters:
    # df: A dataframe with the fiscal_year, fiscal_mo, and appropriate graph column name ('kWh', 'kW', etc.)
    # graph_column_name: The name of the column containing the data to be graphed on the y-axis
    # yaxis_name: A string that will be displayed on the y-axis
    # color_choice: 'blue', 'red', or 'green' depending on the desired color palette.

    # Additional matplotlib formatting settings
    months = mdates.MonthLocator()
    months_format = mdates.DateFormatter('%b')  # formats the months as three-letter abbreviations

    # Get the five most recent years
    recent_years = (sorted(list(df.index.levels[0].values), reverse=True)[0:5])

    # Reset the index of the dataframe for more straightforward queries
    df_reset = df.reset_index()

    def get_date(row):
        # Converts the fiscal year and fiscal month columns to a datetime object for graphing.
        # Year is set to 2016-17 so that the charts overlap; otherwise they will be spread out by year.
        # The "year trick" allows the graph to start from July so the seasonal energy changes are easier to identify.
        if row['fiscal_mo'] > 6:
            year_trick = 2016
        else:
            year_trick = 2017
        return datetime.date(year=year_trick, month=row['fiscal_mo'], day=1)

    # This creates a new date column with data in the datetime format for graphing
    df_reset['date'] = df_reset[['fiscal_year', 'fiscal_mo']].apply(get_date, axis=1)

    # Create a color dictionary of progressively lighter colors of three different shades and convert to dataframe
    color_dict = {'blue': ['#08519c', '#3182bd', '#6baed6', '#bdd7e7', '#eff3ff'],
                  'red': ['#a50f15', '#de2d26', '#fb6a4a', '#fcae91', '#fee5d9'],
                  'green': ['#006d2c', '#31a354', '#74c476', '#bae4b3', '#edf8e9']
                  }
    color_df = pd.DataFrame.from_dict(color_dict)

    # i is the counter for the different colors
    i = 0

    # Create the plots
    fig, ax = plt.subplots()

    for year in recent_years:
        # Create a df for one year only so it's plotted as a single line
        year_df = df_reset.query("fiscal_year == @year")
        year_df = year_df.sort_values(by='date')

        # Plot the data
        ax.plot_date(year_df['date'], year_df[graph_column_name], fmt='-',
                     color=color_df.iloc[i][color_choice],
                     label=str(year_df.fiscal_year.iloc[0]))

        # Increase the counter by one to use the next color
        i += 1

    # Format the dates
    ax.xaxis.set_major_locator(months)
    ax.xaxis.set_major_formatter(months_format)
    fig.autofmt_xdate()

    # Add the labels
    plt.xlabel('Month of Year')
    plt.ylabel(yaxis_name)
    plt.legend()

    # Make sure the file goes in the proper directory
    folder_and_filename = 'output/images/' + filename

    # Save and show
    plt.savefig(folder_and_filename)
    plt.show()
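The "year trick" inside `get_date` above is what makes all five fiscal years overlap on a single July-to-June axis. It can be checked in isolation (a standalone restatement of the same logic, no dataframe needed):

```python
import datetime

def fiscal_month_to_date(fiscal_mo):
    # Months after June get 2016, months up to June get 2017, so the
    # x-axis always runs July 2016 .. June 2017 regardless of fiscal year.
    year_trick = 2016 if fiscal_mo > 6 else 2017
    return datetime.date(year=year_trick, month=fiscal_mo, day=1)

print(fiscal_month_to_date(7))  # 2016-07-01: first month of the fiscal year
print(fiscal_month_to_date(6))  # 2017-06-01: last month of the fiscal year
```

Because July maps to the earlier calendar year, July sorts before January and June on the shared axis, which is exactly the ordering the monthly profile plot relies on.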
testing/FNSB_Graph_Util_Notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import numpy as np # + #Loading the dataset: data = pd.read_csv("AB_NYC_2019.csv") data.head() # - # ### Features # # For the rest of the homework, you'll need to use the features from the previous homework with additional two `'neighbourhood_group'` and `'room_type'`. So the whole feature set will be set as follows: # # * `'neighbourhood_group'`, # * `'room_type'`, # * `'latitude'`, # * `'longitude'`, # * `'price'`, # * `'minimum_nights'`, # * `'number_of_reviews'`, # * `'reviews_per_month'`, # * `'calculated_host_listings_count'`, # * `'availability_365'` # # Select only them and fill in the missing values with 0. # new_data = data[['neighbourhood_group','room_type','latitude', 'longitude', 'price', 'minimum_nights','number_of_reviews', 'reviews_per_month', 'calculated_host_listings_count','availability_365']] new_data.info() new_data.isnull().sum() new_data['reviews_per_month']= new_data['reviews_per_month'].fillna(0) new_data.isnull().sum() # ### Question 1 # # What is the most frequent observation (mode) for the column `'neighbourhood_group'`? # print('The most frequent observation for the column neighbourhood_group is', new_data['neighbourhood_group'].mode()) # ### Split the data # # * Split your data in train/val/test sets, with 60%/20%/20% distribution. # * Use Scikit-Learn for that (the `train_test_split` function) and set the seed to 42. # * Make sure that the target value ('price') is not in your dataframe. 
#

# +
from sklearn.model_selection import train_test_split

X = new_data.drop(['price'], axis=1)
y = new_data["price"]

# First split: 80% for train+validation, 20% for test
X_full_train, X_test, y_full_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Second split: 25% of the remaining 80% gives the 20% validation set (60/20/20 overall)
X_train, X_val, y_train, y_val = train_test_split(X_full_train, y_full_train, test_size=0.25, random_state=42)
# -

X_train = X_train.reset_index(drop=True)
X_val = X_val.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)

# ### Question 2
#
# * Create the [correlation matrix](https://www.google.com/search?q=correlation+matrix) for the numerical features of your train dataset.
# * In a correlation matrix, you compute the correlation coefficient between every pair of features in the dataset.
# * What are the two features that have the biggest correlation in this dataset?

X_train.corr()

# The *number_of_reviews* and *reviews_per_month* features have the highest correlation score, 0.59.

# ### Make price binary
#
# * We need to turn the price variable from numeric into binary.
# * Let's create a variable `above_average` which is `1` if the price is above (or equal to) `152`.

y_train = pd.DataFrame(y_train)
y_train1 = y_train  # a second name for the same frame; above_average is added alongside price
y_train1['above_average'] = y_train1['price'] >= 152
y_train1

y_train1['above_average'] = y_train1.above_average.astype(int)
y_train1

y_val = pd.DataFrame(y_val)
y_val1 = y_val
y_val1['above_average'] = y_val1['price'] >= 152
y_val1['above_average'] = y_val1.above_average.astype(int)
y_val1

y_test = pd.DataFrame(y_test)
y_test1 = y_test
y_test1['above_average'] = y_test1['price'] >= 152
y_test1['above_average'] = y_test1.above_average.astype(int)
y_test1

# ### Question 3
#
# * Calculate the mutual information score with the (binarized) price for the two categorical variables that we have. Use the training set only.
# * Which of these two variables has the bigger score?
# * Round it to 2 decimal digits using `round(score, 2)`

from sklearn.metrics import mutual_info_score

round(mutual_info_score(X_train.room_type, y_train1.above_average), 2)

round(mutual_info_score(X_train.neighbourhood_group, y_train1.above_average), 2)

# Room type has the bigger mutual information score with the binarized price variable.

# ### Question 4
#
# * Now let's train a logistic regression.
# * Remember that we have two categorical variables in the data. Include them using one-hot encoding.
# * Fit the model on the training dataset.
# * To make sure the results are reproducible across different versions of Scikit-Learn, fit the model with these parameters:
#     * `model = LogisticRegression(solver='liblinear', C=1.0, random_state=42)`
# * Calculate the accuracy on the validation dataset and round it to 2 decimal digits.

new_data.columns

categorical = ['neighbourhood_group', 'room_type']
numerical = ['latitude', 'longitude', 'minimum_nights', 'number_of_reviews',
             'reviews_per_month', 'calculated_host_listings_count', 'availability_365']

# +
# ONE-HOT ENCODING
from sklearn.feature_extraction import DictVectorizer

train_dict = X_train[categorical + numerical].to_dict(orient='records')
# -

train_dict[0]

dv = DictVectorizer(sparse=False)
dv.fit(train_dict)

X_train = dv.transform(train_dict)
print(X_train.shape)
print(X_train)

dv.get_feature_names()

y_train1 = y_train1[['above_average']]
y_train1

# +
# TRAINING LOGISTIC REGRESSION
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(solver='liblinear', C=1.0, random_state=42)
model.fit(X_train, y_train1)

# +
val_dict = X_val[categorical + numerical].to_dict(orient='records')

# Reuse the DictVectorizer fitted on the training records so the validation
# matrix has the same columns in the same order
X_val = dv.transform(val_dict)
X_val.shape
# -

print(y_val)

y_val1 = y_val[['above_average']]

from sklearn.metrics import accuracy_score

y_pred = model.predict(X_val)
round(accuracy_score(y_val1, y_pred), 2)

# ### Question 5
#
# * We have 9 features: 7 numerical features and 2 categorical.
# * Let's find the least useful one using the *feature elimination* technique.
# * Train a model with all these features (using the same parameters as in Q4).
# * Now exclude each feature from this set and train a model without it. Record the accuracy for each model.
# * For each feature, calculate the difference between the original accuracy and the accuracy without the feature.
# * Which of the following features has the smallest difference?
#    * `neighbourhood_group`
#    * `room_type`
#    * `number_of_reviews`
#    * `reviews_per_month`
#
# > **note**: the difference doesn't have to be positive

# Model without neighbourhood_group
model1 = LogisticRegression(solver='liblinear', C=1.0, random_state=42)
model1.fit(np.delete(X_train, [5, 6, 7, 8, 9], 1), y_train1)
y_val1 = y_val1[['above_average']]
y_pred1 = model1.predict(np.delete(X_val, [5, 6, 7, 8, 9], 1))
round(accuracy_score(y_val1, y_pred1), 2)

# +
# Model without room_type
model1 = LogisticRegression(solver='liblinear', C=1.0, random_state=42)
model1.fit(np.delete(X_train, [12, 13, 14], 1), y_train1)
y_pred1 = model1.predict(np.delete(X_val, [12, 13, 14], 1))
round(accuracy_score(y_val1, y_pred1), 2)

# +
# Model without number_of_reviews
model1 = LogisticRegression(solver='liblinear', C=1.0, random_state=42)
model1.fit(np.delete(X_train, 10, 1), y_train1)
y_pred1 = model1.predict(np.delete(X_val, 10, 1))
round(accuracy_score(y_val1, y_pred1), 2)

# +
# Model without reviews_per_month
model1 = LogisticRegression(solver='liblinear', C=1.0, random_state=42)
model1.fit(np.delete(X_train, 11, 1), y_train1)
y_pred1 = model1.predict(np.delete(X_val, 11, 1))
round(accuracy_score(y_val1, y_pred1), 2)
# -

# number_of_reviews and reviews_per_month do not change the global accuracy.

# ### Question 6
#
# * For this question, we'll see how to use a linear regression model from Scikit-Learn.
# * We'll need to use the original column `'price'`. Apply the logarithmic transformation to this column.
# * Fit the Ridge regression model on the training data.
# * This model has a parameter `alpha`. Let's try the following values: `[0, 0.01, 0.1, 1, 10]`
# * Which of these alphas leads to the best RMSE on the validation set? Round your RMSE scores to 3 decimal digits.
#
# If there are multiple options, select the smallest `alpha`.

from sklearn.linear_model import Ridge

def rmse(y, y_pred):
    error = y - y_pred
    se = error ** 2
    mse = se.mean()
    return np.sqrt(mse)

y_train = pd.DataFrame(y_train)
y_train

# Log transformation on price
y_train = np.log(y_train['price'])
y_train

y_train = pd.DataFrame(y_train)
y_train

y_val = np.log(y_val['price'])
y_val = pd.DataFrame(y_val)
y_val

y_test = np.log(y_test['price'])
y_test = pd.DataFrame(y_test)
y_test

X_train = pd.DataFrame(X_train)
X_train

# +
# Ridge Regression
for a in [0, 0.01, 0.1, 1, 10]:
    clf = Ridge(alpha=a)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_val)
    rmse_score = rmse(y_val, y_pred)
    print('RMSE for', a, 'is', rmse_score)
# -

# All RMSEs are very close to each other; however, the minimum one belongs to alpha = 0.01.
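The `rmse` helper above is easy to sanity-check on a tiny case with a known answer (restated here with plain lists so the snippet runs standalone):

```python
import numpy as np

def rmse(y, y_pred):
    # Root mean squared error, matching the helper defined above
    error = np.asarray(y) - np.asarray(y_pred)
    return np.sqrt((error ** 2).mean())

# errors are (2, 0), so MSE = (4 + 0) / 2 = 2 and RMSE = sqrt(2) ≈ 1.414
print(rmse([3.0, 5.0], [1.0, 5.0]))
```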
Homework3.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Optical Flow
#
# Optical flow tracks objects by looking at where the *same* points have moved from one image frame to the next. Let's load in a few example frames of a pacman-like face moving to the right and down and see how optical flow finds **motion vectors** that describe the motion of the face!
#
# As usual, let's first import our resources and read in the images.

import numpy as np
import matplotlib.image as mpimg  # for reading in images
import matplotlib.pyplot as plt
import cv2  # computer vision library
# %matplotlib inline

# +
# Read in the image frames
frame_1 = cv2.imread('images/pacman_1.png')
frame_2 = cv2.imread('images/pacman_2.png')
frame_3 = cv2.imread('images/pacman_3.png')

# convert to RGB
frame_1 = cv2.cvtColor(frame_1, cv2.COLOR_BGR2RGB)
frame_2 = cv2.cvtColor(frame_2, cv2.COLOR_BGR2RGB)
frame_3 = cv2.cvtColor(frame_3, cv2.COLOR_BGR2RGB)

# Visualize the three image frames side by side
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('frame 1')
ax1.imshow(frame_1)
ax2.set_title('frame 2')
ax2.imshow(frame_2)
ax3.set_title('frame 3')
ax3.imshow(frame_3)
# -

# ## Finding Points to Track
#
# Before optical flow can work, we have to give it a set of *keypoints* to track between two image frames!
#
# In the example below, we use a **Shi-Tomasi corner detector**, which uses the same process as a Harris corner detector to find patterns of intensity that make up a "corner" in an image, only it adds an additional parameter that helps select the most prominent corners. You can read more about this detection algorithm in [the documentation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.html).
#
# Alternatively, you could choose to use Harris or even ORB to find feature points. I just found that this works well.
#
# **You should see that the detected points appear at the corners of the face.**

# +
# parameters for Shi-Tomasi corner detection
feature_params = dict(maxCorners=10,
                      qualityLevel=0.2,
                      minDistance=5,
                      blockSize=5)

# convert all frames to grayscale
gray_1 = cv2.cvtColor(frame_1, cv2.COLOR_RGB2GRAY)
gray_2 = cv2.cvtColor(frame_2, cv2.COLOR_RGB2GRAY)
gray_3 = cv2.cvtColor(frame_3, cv2.COLOR_RGB2GRAY)

# Take the first frame and find corner points in it
pts_1 = cv2.goodFeaturesToTrack(gray_1, mask=None, **feature_params)

# display the detected points
plt.imshow(frame_1)
for p in pts_1:
    # plot x and y detected points
    plt.plot(p[0][0], p[0][1], 'r.', markersize=15)

# print out the x-y locations of the detected points
print(pts_1)
# -

# ## Perform Optical Flow
#
# Once we've detected keypoints on our initial image of interest, we can calculate the optical flow between this image frame (frame 1) and the next frame (frame 2), using OpenCV's `calcOpticalFlowPyrLK`, which is [documented here](https://docs.opencv.org/trunk/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323). It takes in an initial image frame, the next image, and the first set of points, and it returns the detected points in the next frame and a value that indicates how good the matches are between points from one frame to the next.
#
# The parameters also include a window size and a maxLevel that indicate the size of the search window and the number of pyramid levels that will be used to scale the given images; this version performs an iterative search for matching points, and the matching criteria are reflected in the last parameter (you may need to change these values if you are working with a different image, but these should work for the provided example).
# +
# parameters for Lucas-Kanade optical flow
lk_params = dict(winSize=(5,5),
                 maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

# calculate optical flow between the first and second frame
pts_2, match, err = cv2.calcOpticalFlowPyrLK(gray_1, gray_2, pts_1, None, **lk_params)

# Select good matching points between the two image frames
good_new = pts_2[match==1]
good_old = pts_1[match==1]

print(good_new)
print(good_old)
# -

# Next, let's display the resulting motion vectors! You should see the first image with motion vectors drawn on it that indicate the direction of motion from the first frame to the next.

# +
# create a mask image for drawing (u,v) vectors on top of the second frame
mask = np.zeros_like(frame_2)

# draw the lines between the matching points (these lines indicate motion vectors)
for i, (new, old) in enumerate(zip(good_new, good_old)):
    a, b = new.ravel()
    c, d = old.ravel()
    # draw points on the mask image (cv2 drawing functions need integer pixel coordinates)
    mask = cv2.circle(mask, (int(a), int(b)), 5, (200), -1)
    # draw motion vector as lines on the mask image
    #mask = cv2.line(mask, (int(a), int(b)), (int(c), int(d)), (200), 3)

# add the line image and second frame together
composite_im = np.copy(frame_2)
composite_im[mask!=0] = [0]

plt.imshow(composite_im)
# -

# ### TODO: Perform Optical Flow between image frames 2 and 3
#
# Repeat this process but for the last two image frames; see what the resulting motion vectors look like. Imagine doing this for a series of image frames and plotting the entire motion path of a given object.

# ## TODO: Perform optical flow between image frames 2 and 3
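Under the hood, `calcOpticalFlowPyrLK` solves, within each search window, the Lucas-Kanade least-squares system built from image gradients. A numpy-only sketch with synthetic gradients and a known flow, just to show the linear algebra involved (this is not OpenCV's actual pyramid implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial gradients at the pixels of a tracking window. They must vary in
# direction, otherwise the system is degenerate (the aperture problem).
Ix = rng.normal(size=25)
Iy = rng.normal(size=25)

true_flow = np.array([1.5, -0.5])  # the (u, v) we want to recover

# Brightness constancy: Ix*u + Iy*v + It = 0  =>  It = -(Ix*u + Iy*v)
It = -(Ix * true_flow[0] + Iy * true_flow[1])

# Stack into the Lucas-Kanade least-squares system A @ (u, v) = -It
A = np.column_stack([Ix, Iy])
flow, *_ = np.linalg.lstsq(A, -It, rcond=None)
print(flow)  # ~ [1.5, -0.5]
```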
3_7_Optical_Flow/Optical Flow.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import csv import datetime import json import matplotlib.pyplot as plt import pandas as pd import requests # %matplotlib inline # - # ### Data Collection # ##### Build utilities for data collection. # + endpoint_legacy = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}' endpoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}' # SAMPLE parameters for getting aggregated legacy view data # see: https://wikimedia.org/api/rest_v1/#!/Legacy_data/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end example_params_legacy = { "project" : "en.wikipedia.org", "access-site" : "desktop-site", "granularity" : "monthly", "start" : "2008010100", # for end use 1st day of month following final month of data "end" : "2016080100" } # SAMPLE parameters for getting aggregated current standard pageview data # see: https://wikimedia.org/api/rest_v1/#!/Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end example_params_pageviews = { "project" : "en.wikipedia.org", "access" : "desktop", "agent" : "user", "granularity" : "monthly", "start" : "2015070100", # for end use 1st day of month following final month of data "end" : '2019090100' } # Customize these with your own information headers = { 'User-Agent': 'https://github.com/bhuvi3', 'From': '<EMAIL>' } def api_call(endpoint,parameters): uri = endpoint.format(**parameters) call = requests.get(uri, headers=headers) response = call.json() print("URI: %s" % uri) return response # - # ##### Collecting data from Legacy Pagecounts endpoint. # Collect desktop and mobile data from legacy pagecounts. 
access_points = ["desktop-site", "mobile-site"] for access_point in access_points: example_params_legacy["access-site"] = access_point cur_data_dict = api_call(endpoint_legacy, example_params_legacy) outfile = "./pagecounts_%s_200801-201607.json" % access_point with open(outfile, "w") as fp: json.dump(cur_data_dict, fp) # ##### Collecting data from Pageviews endpoint. # Collect desktop, mobile (app and web) data from pageviews. access_points = ["desktop", "mobile-app", "mobile-web"] for access_point in access_points: example_params_pageviews["access"] = access_point cur_data_dict = api_call(endpoint_pageviews, example_params_pageviews) outfile = "./pageviews_%s_201507-201908.json" % access_point with open(outfile, "w") as fp: json.dump(cur_data_dict, fp) # ### Data Processing # + # The range of the dates. start_year = 2008 start_month = 1 end_year = 2019 end_month = 9 # The downloaded files. pagecounts_desktop_file = "./pagecounts_desktop-site_200801-201607.json" pagecounts_mobile_file = "./pagecounts_mobile-site_200801-201607.json" pageviews_desktop_file = "./pageviews_desktop_201507-201908.json" pageviews_mobile_web_file = "./pageviews_mobile-web_201507-201908.json" pageviews_mobile_app_file = "./pageviews_mobile-app_201507-201908.json" # The output files. csv_outfile = "./en-wikipedia_traffic_200801-201908.csv" png_outfile = "./en-wikipedia_traffic_200801-201908.png" # - # ##### Read the data into count dictionaries, where the key is the timestamp and value is the count of views. def _get_count_dict_from_json_file(json_file, endpoint): """ Utility function to read the json file to a count dictionary containing the mapping from timestamp to view count. 
""" with open(json_file) as fp: json_dict = json.load(fp) res_dict = {} for item_dict in json_dict["items"]: res_dict[item_dict["timestamp"]] = item_dict["views" if endpoint == "pageviews" else "count"] return res_dict # + pagecounts_desktop_dict = _get_count_dict_from_json_file(pagecounts_desktop_file, "pagecounts") pagecounts_mobile_dict = _get_count_dict_from_json_file(pagecounts_mobile_file, "pagecounts") pageviews_desktop_dict = _get_count_dict_from_json_file(pageviews_desktop_file, "pageviews") pageviews_mobile_web_dict = _get_count_dict_from_json_file(pageviews_mobile_web_file, "pageviews") pageviews_mobile_app_dict = _get_count_dict_from_json_file(pageviews_mobile_app_file, "pageviews") # - # ##### Step 1: Create pageviews_mobile_dict by merging traffic from mobile_web and mobile_app. pageviews_mobile_dict = {} for timestamp, web_count in pageviews_mobile_web_dict.items(): app_count = pageviews_mobile_app_dict[timestamp] # pageviews_mobile_app_dict.get(timestamp, 0) pageviews_mobile_dict[timestamp] = web_count + app_count # ##### Step 2: Split timestamp string key into (YYYY, MM) key. # + def _update_key(timestamp_key_dict): """ Utility function for updating the keys of the given count dictionary from timestamp to a tuple of year and month. """ updated_dict = {} for timestamp_key, value in timestamp_key_dict.items(): year_month_key = timestamp_key[:4], timestamp_key[4:6] updated_dict[year_month_key] = value return updated_dict pagecounts_desktop_dict_updated = _update_key(pagecounts_desktop_dict) pagecounts_mobile_dict_updated = _update_key(pagecounts_mobile_dict) pageviews_desktop_dict_updated = _update_key(pageviews_desktop_dict) pageviews_mobile_dict_updated = _update_key(pageviews_mobile_dict) # - # ##### Step 3: Create a csv containing the desktop and mobile views from both endpoints. 
# + # Learnt from stack-overflow: https://stackoverflow.com/questions/5734438/how-to-create-a-month-iterator def month_year_iter(start_year, start_month, end_year, end_month): """ A function which gives an iterator over the months in the given range of dates. """ ym_start = 12 * start_year + start_month - 1 ym_end = 12 * end_year + end_month - 1 for ym in range(ym_start, ym_end): y, m = divmod(ym, 12) yield y, m + 1 def _write_line(csv_writer, row_dict, columns, delim=","): """ Utility function to write a row using csv_writer. """ row_list = [] for column_name in columns: row_list.append(row_dict[column_name]) csv_writer.writerow(row_list) # Write using csv writer. delim = "," with open(csv_outfile, "w", newline='\n', encoding='utf-8') as fp: writer = csv.writer(fp, delimiter=delim, quotechar='"', quoting=csv.QUOTE_NONNUMERIC) columns = [ "year", "month", "pagecount_all_views", "pagecount_desktop_views", "pagecount_mobile_views", "pageview_all_views", "pageview_desktop_views", "pageview_mobile_views" ] writer.writerow(columns) # Iterate through the months in our date range and fill the values row-wise. 
for year_month_tup in month_year_iter(start_year, start_month, end_year, end_month): cur_date = datetime.date(year=year_month_tup[0], month=year_month_tup[1], day=1) cur_row_dict = {} cur_row_dict["year"] = cur_date.strftime("%Y") cur_row_dict["month"] = cur_date.strftime("%m") year_month_key = cur_row_dict["year"], cur_row_dict["month"] cur_row_dict["pagecount_desktop_views"] = pagecounts_desktop_dict_updated.get(year_month_key, 0) cur_row_dict["pagecount_mobile_views"] = pagecounts_mobile_dict_updated.get(year_month_key, 0) cur_row_dict["pagecount_all_views"] = cur_row_dict["pagecount_desktop_views"] + cur_row_dict["pagecount_mobile_views"] cur_row_dict["pageview_desktop_views"] = pageviews_desktop_dict_updated.get(year_month_key, 0) cur_row_dict["pageview_mobile_views"] = pageviews_mobile_dict_updated.get(year_month_key, 0) cur_row_dict["pageview_all_views"] = cur_row_dict["pageview_desktop_views"] + cur_row_dict["pageview_mobile_views"] _write_line(writer, cur_row_dict, columns, delim=delim) # - # ### Data Analysis # ##### Load the csv into a Pandas Dataframe for easier manipulation. # + df = pd.read_csv(csv_outfile) df["date"] = pd.to_datetime((df.year*10000+df.month*100+1).apply(str), format='%Y%m%d') # Convert the counts to millions. view_count_columns = ['pagecount_all_views', 'pagecount_desktop_views', 'pagecount_mobile_views', 'pageview_all_views', 'pageview_desktop_views', 'pageview_mobile_views'] for column_name in view_count_columns: df[column_name] = df[column_name] / 1000000 # - # ##### Plot the graph: Desktop views in green, Mobile in blue and Total in black. Marking Legacy Pagecounts as dotted line and Pageviews as solid line. 
# + plt.rcParams["figure.figsize"] = [15, 8] plt.plot("date", "pagecount_desktop_views", data=df[df["pagecount_desktop_views"] != 0], marker='', color='green', linewidth=1, label="Desktop", linestyle='dashed') plt.plot("date", "pagecount_mobile_views", data=df[df["pagecount_mobile_views"] != 0], marker='', color='blue', linewidth=1, label="Mobile", linestyle='dashed') plt.plot("date", "pagecount_all_views", data=df[df["pagecount_all_views"] != 0], marker='', color='black', linewidth=1, label="Total", linestyle='dashed') plt.legend(fontsize=12) plt.plot("date", "pageview_desktop_views", data=df[df["pageview_desktop_views"] != 0], marker='', color='green', linewidth=1) plt.plot("date", "pageview_mobile_views", data=df[df["pageview_mobile_views"] != 0], marker='', color='blue', linewidth=1) plt.plot("date", "pageview_all_views", data=df[df["pageview_all_views"] != 0], marker='', color='black', linewidth=1) plt.xlabel("Date", fontsize=14) plt.ylabel("Page Views in Millions", fontsize=14) plt.xticks(sorted(pd.to_datetime((df.year).apply(str), format='%Y').unique())) plt.tick_params(labelsize=12) plt.title("Page Views on English Wikipedia (x 1,000,000)", fontsize=14) plt.savefig(png_outfile) # - # **Note:** From May 2015, a new pageview definition took effect, which eliminated all crawler traffic. Solid lines mark new definition.
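Step 1 above merges the mobile web and app counts by indexing `pageviews_mobile_app_dict[timestamp]` directly, which raises `KeyError` for any month missing from the app feed (the commented-out `.get(timestamp, 0)` hints at this). A more defensive merge can be sketched with the standard library's `collections.Counter`; this is an illustration, not the notebook's code:

```python
from collections import Counter

def merge_counts(*count_dicts):
    """Sum several {timestamp: count} dicts, treating a timestamp
    missing from any dict as a count of zero."""
    total = Counter()
    for counts in count_dicts:
        total.update(counts)
    return dict(total)

web = {"2015070100": 100, "2015080100": 120}
app = {"2015070100": 30}  # August missing from the app feed
merge_counts(web, app)
# {'2015070100': 130, '2015080100': 120}
```

`Counter.update` adds counts rather than replacing them, so months present in only one feed survive with their single-source total.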
hcds-a1-data-curation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Find the kth largest element in an unsorted array # Note that it is the kth largest element in the sorted order, not the kth distinct element. # # Example 1: # # Input: [3,2,1,5,6,4] and k = 2 # Output: 5 # Example 2: # # Input: [3,2,3,1,2,4,5,5,6] and k = 4 # Output: 4 # Note: # You may assume k is always valid, 1 ≤ k ≤ array's length. # # # Verification: https://leetcode.com/problems/kth-largest-element-in-an-array/ # ### Solution 1: using merge sort upto kth largest element # # or using heap?? class Solution1(object): def findKthLargest(self, nums, k): """ :type nums: List[int] :type k: int :rtype: int """ assert k >= 1 and k <= len(nums) sorted_A = self.mergeSortUptokthLargest(nums,k) return sorted_A[-1] def mergeSortUptokthLargest(self, A, k): """ sort integer array a using Merge Sort """ if len(A) == 1: sorted_A = A else: half_n = int(len(A)/2) sorted_B = self.mergeSortUptokthLargest(A[:half_n],k) sorted_C = self.mergeSortUptokthLargest(A[half_n:],k) sorted_A = self.mergeTwoSortedArrayUptokthLargest(sorted_B,sorted_C,k) return sorted_A def mergeTwoSortedArrayUptokthLargest(self, A, B, k): i = 0 j = 0 C = [] if k > len(A) + len(B): k = len(A) + len(B) while True: if A[i] < B[j]: C.append(B[j]) j += 1 if j == len(B): i_end = k - len(C) + i for Ai in A[i:i_end]: C.append(Ai) break if len(C) == k: break else: C.append(A[i]) i += 1 if i == len(A): j_end = k - len(C) + j for Bj in B[j:j_end]: C.append(Bj) break if len(C) == k: break return C # ### Solution 2: using quick sort class Solution2(object): def findKthLargest(self, nums, k): """ :type nums: List[int] :type k: int :rtype: int """ assert k >= 1 and k <= len(nums) pivot = self.quickSort(nums) # print('k', k, nums) while k != pivot + 1: # print(pivot, nums) if k > pivot + 1: k = k - pivot 
- 1 nums = nums[pivot+1:] elif k < pivot + 1: nums = nums[:pivot] pivot = self.quickSort(nums) # print('k', k, nums) return nums[pivot] def quickSort(self, A): """ sort integer array and return index iSmaller, all elements after iSmaller are smaller than A[pivot], all elements before iSmaller are larger than A[pivot]. """ pivot = len(A) - 1 iSmaller = pivot - 1 i = 0 while iSmaller >= i: if A[i] < A[pivot]: ATmp = A[i] A[i] = A[iSmaller] A[iSmaller] = ATmp iSmaller -= 1 else: i += 1 ATmp = A[pivot] A[pivot] = A[iSmaller+1] A[iSmaller+1] = ATmp return iSmaller+1 S = Solution2() S.findKthLargest([3,1,2,4],2) # ### solution 3: using quick sort plus random shuffle the input # + import random class Solution3(object): def findKthLargest(self, nums, k): """ :type nums: List[int] :type k: int :rtype: int """ assert k >= 1 and k <= len(nums) random.shuffle(nums) pivot = self.quickSort(nums) # print('k', k, nums) while k != pivot + 1: # print(pivot, nums) if k > pivot + 1: k = k - pivot - 1 nums = nums[pivot+1:] elif k < pivot + 1: nums = nums[:pivot] pivot = self.quickSort(nums) # print('k', k, nums) return nums[pivot] def quickSort(self, A): """ sort integer array and return index iSmaller, all elements after iSmaller are smaller than A[pivot], all elements before iSmaller are larger than A[pivot]. """ pivot = len(A) - 1 iSmaller = pivot - 1 i = 0 while iSmaller >= i: if A[i] < A[pivot]: ATmp = A[i] A[i] = A[iSmaller] A[iSmaller] = ATmp iSmaller -= 1 else: i += 1 ATmp = A[pivot] A[pivot] = A[iSmaller+1] A[iSmaller+1] = ATmp return iSmaller+1 # -
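The "or using heap??" note in Solution 1's heading can be answered with the standard library: `heapq.nlargest` keeps only `k` items in memory, giving O(n log k) time. A minimal sketch (not one of the three solutions above), checked against the problem's two stated examples:

```python
import heapq

def find_kth_largest_heap(nums, k):
    # nlargest returns the k largest values in descending order,
    # so the k-th largest is the last element of that list.
    return heapq.nlargest(k, nums)[-1]

find_kth_largest_heap([3, 2, 1, 5, 6, 4], 2)        # 5
find_kth_largest_heap([3, 2, 3, 1, 2, 4, 5, 5, 6], 4)  # 4
```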
FindkthLargest.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="nCc3XZEyG3XV" # Lambda School Data Science # # *Unit 2, Sprint 3, Module 1* # # --- # # # # Define ML problems # # You will use your portfolio project dataset for all assignments this sprint. # # ## Assignment # # Complete these tasks for your project, and document your decisions. # # - [ ] Choose your target. Which column in your tabular dataset will you predict? # - [ ] Is your problem regression or classification? # - [ ] How is your target distributed? # - Classification: How many classes? Are the classes imbalanced? # - Regression: Is the target right-skewed? If so, you may want to log transform the target. # - [ ] Choose your evaluation metric(s). # - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy? # - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics? # - [ ] Choose which observations you will use to train, validate, and test your model. # - Are some observations outliers? Will you exclude them? # - Will you do a random split or a time-based split? # - [ ] Begin to clean and explore your data. # - [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information? # # If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset. # # Some students worry, ***what if my model isn't “good”?*** Then, [produce a detailed tribute to your wrongness. 
That is science!](https://twitter.com/nathanwpyle/status/1176860147223867393) # - # # Import Data and Packages import numpy as np import pandas as pd df_raw = pd.read_csv('../data/msrp.csv') df_raw.head() # # Choose Target ## Regression Target target_reg = 'MSRP' ## Classification Target target_class = 'Make' # # Target Distribution ## Left with 9,000 rows after removing outliers ## Fairly normally distributed dist1 = df_raw[df_raw[target_reg] <= 75000] dist2 = dist1[dist1[target_reg] > 10000] dist2[target_reg].hist(bins=100) ## Furthest outlier exp = df_raw[df_raw[target_reg] > 2000000] exp.head() df_raw['Make'].value_counts(normalize=True).max() # # Choose Metrics # + ## R^2 and mean absolute error will be good metrics for the regression model # + ## Majority class in classification model is between 50 and 70%, therefore accuracy would be a good evaluation metric # - # # Observations # + ## Outliers will be removed and a random train-test split will be done # - # # Wrangle df_raw df_raw.info() def wrangle(df): df = df.copy() ## Fix column names df.columns = df.columns.str.lower().str.replace(' ', '_') ## Remove Outliers df = df[df['msrp'] <= 75000] ## Market Category is a high cardinality column; but we can adjust to not remove it #df['luxury'] = [1 if 'Luxury' in x else 0 for x in df['market_category']] ## 'model' would leak the classification target, since a car's model determines its make, so drop it df.drop(columns='model', inplace=True) return df df = wrangle(df_raw).reset_index() # + ## Encoding target variable for classification model #df_raw['make'] = [x if x in list(classes) else 'Other' for x in df_raw['make']]
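The accuracy-versus-other-metrics decision above hinges on `value_counts(normalize=True).max()`, i.e. the majority-class frequency. The same check in plain Python (a sketch with hypothetical toy labels, not the MSRP data):

```python
from collections import Counter

def majority_class_frequency(labels):
    """Fraction of observations belonging to the most common class."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

labels = ["Ford"] * 6 + ["BMW"] * 3 + ["Audi"]  # hypothetical makes
majority_class_frequency(labels)
# 0.6 -> inside the 50-70% band, so plain accuracy is a defensible metric
```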
module1-define-ml-problems/Assign21_LS_DS_231_assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **This notebook is an exercise in the [Pandas](https://www.kaggle.com/learn/pandas) course. You can reference the tutorial at [this link](https://www.kaggle.com/residentmario/indexing-selecting-assigning).** # # --- # # # Introduction # # In this set of exercises we will work with the [Wine Reviews dataset](https://www.kaggle.com/zynicide/wine-reviews). # Run the following cell to load your data and some utility functions (including code to check your answers). # + import pandas as pd reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) pd.set_option("display.max_rows", 5) from learntools.core import binder; binder.bind(globals()) from learntools.pandas.indexing_selecting_and_assigning import * print("Setup complete.") # - # Look at an overview of your data by running the following line. reviews.head() # # Exercises # ## 1. # # Select the `description` column from `reviews` and assign the result to the variable `desc`. # + # Your code here desc = reviews.description # Check your answer q1.check() # - # Follow-up question: what type of object is `desc`? If you're not sure, you can check by calling Python's `type` function: `type(desc)`. # + #q1.hint() #q1.solution() # - # ## 2. # # Select the first value from the description column of `reviews`, assigning it to variable `first_description`. # + first_description = desc.iloc[0] # Check your answer q2.check() first_description # + #q2.hint() #q2.solution() # - # ## 3. # # Select the first row of data (the first record) from `reviews`, assigning it to the variable `first_row`. # + first_row = reviews.iloc[0,:] # Check your answer q3.check() first_row # + #q3.hint() #q3.solution() # - # ## 4. 
# # Select the first 10 values from the `description` column in `reviews`, assigning the result to variable `first_descriptions`. # # Hint: format your output as a pandas Series. # + first_descriptions = desc[:10] # Check your answer q4.check() first_descriptions # + #q4.hint() #q4.solution() # - # ## 5. # # Select the records with index labels `1`, `2`, `3`, `5`, and `8`, assigning the result to the variable `sample_reviews`. # # In other words, generate the following DataFrame: # # ![](https://i.imgur.com/sHZvI1O.png) # + sample_reviews = reviews.iloc[[1,2,3,5,8],:] # Check your answer q5.check() sample_reviews # + #q5.hint() #q5.solution() # - # ## 6. # # Create a variable `df` containing the `country`, `province`, `region_1`, and `region_2` columns of the records with the index labels `0`, `1`, `10`, and `100`. In other words, generate the following DataFrame: # # ![](https://i.imgur.com/FUCGiKP.png) # + df = reviews.loc[[0,1,10,100], ['country', 'province', 'region_1', 'region_2']] # Check your answer q6.check() df # + #q6.hint() #q6.solution() # - # ## 7. # # Create a variable `df` containing the `country` and `variety` columns of the first 100 records. # # Hint: you may use `loc` or `iloc`. When working on the answer to this question and several of the ones that follow, keep in mind the following "gotcha" described in the tutorial: # # > `iloc` uses the Python stdlib indexing scheme, where the first element of the range is included and the last one excluded. # `loc`, meanwhile, indexes inclusively. # # > This is particularly confusing when the DataFrame index is a simple numerical list, e.g. `0,...,1000`. In this case `df.iloc[0:1000]` will return 1000 entries, while `df.loc[0:1000]` returns 1001 of them! To get 1000 elements using `loc`, you will need to go one lower and ask for `df.loc[0:999]`. # + df = reviews.loc[:99, ['country', 'variety']] # Check your answer q7.check() df # + #q7.hint() #q7.solution() # - # ## 8. 
# # Create a DataFrame `italian_wines` containing reviews of wines made in `Italy`. Hint: `reviews.country` equals what? italian_wines = reviews[reviews.country == 'Italy'] # Check your answer q8.check() # + #q8.hint() #q8.solution() # - # ## 9. # # Create a DataFrame `top_oceania_wines` containing all reviews with at least 95 points (out of 100) for wines from Australia or New Zealand. # + top_oceania_wines = reviews[reviews.country.isin(['Australia', 'New Zealand']) & (reviews.points >= 95)] # Check your answer q9.check() top_oceania_wines # - q9.hint() q9.solution() # # Keep going # # Move on to learn about **[summary functions and maps](https://www.kaggle.com/residentmario/summary-functions-and-maps)**. # --- # # # # # *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161299) to chat with other Learners.*
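The `loc`/`iloc` gotcha quoted in exercise 7 mirrors Python's own half-open slicing, so the off-by-one can be seen without pandas at all (a stdlib sketch):

```python
rows = list(range(1001))         # stand-in for index labels 0..1000

iloc_style = rows[0:1000]        # half-open, like iloc: 1000 elements
loc_style = rows[0:1000 + 1]     # loc's inclusive stop behaves like stop + 1

len(iloc_style), len(loc_style)  # (1000, 1001)
```

This is why `reviews.loc[:99, ...]` in the exercise answer yields exactly 100 rows: `loc` includes label 99.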
Pandas/2 Indexing, Selecting & Assigning/exercise-indexing-selecting-assigning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"is_executing": false} from facenet_pytorch import MTCNN, InceptionResnetV1, prewhiten from facenet_pytorch.models.utils.detect_face import extract_face import torch from torch.utils.data import DataLoader, random_split from torchvision import transforms, datasets import numpy as np import pandas as pd from PIL import Image, ImageDraw from matplotlib import pyplot as plt from tqdm.auto import tqdm # + pycharm={"is_executing": false, "name": "#%%\n"} device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print('Running on device: {}'.format(device)) # + pycharm={"is_executing": false, "name": "#%%\n"} def get_image(path, trans): img = Image.open(path) img = trans(img) return img # + pycharm={"is_executing": false, "name": "#%%\n"} trans = transforms.Compose([ transforms.Resize(512) ]) trans_cropped = transforms.Compose([ np.float32, transforms.ToTensor(), prewhiten ]) # + pycharm={"is_executing": false, "name": "#%%\n"} dataset = datasets.ImageFolder('dataset/spoof_dataset', transform=trans) dataset.idx_to_class = {k: v for v, k in dataset.class_to_idx.items()} total_item = len(dataset) # train_dataset, test_dataset = random_split(dataset, [total_item * .8, total_item * .2]) # train_loader, test_loader = DataLoader(train_dataset, collate_fn=lambda x: x[0]), DataLoader(test_dataset, collate_fn=lambda x: x[0]) loader = DataLoader(dataset, collate_fn=lambda x: x[0]) # + pycharm={"is_executing": false, "name": "#%%\n"} mtcnn = MTCNN(device=device) # + pycharm={"is_executing": false, "name": "#%%\n"} names = [] aligned = [] for img, idx in tqdm(loader): name = dataset.idx_to_class[idx] # start = time() img_align = mtcnn(img)#, save_path = "data/aligned/{}/{}.png".format(name, str(idx))) # print('MTCNN time: {:6f} 
seconds'.format(time() - start)) if img_align is not None: names.append(name) aligned.append(img_align) # aligned = torch.stack(aligned) # + pycharm={"name": "#%%\n"} resnet = InceptionResnetV1(pretrained='casia-webface').eval().to(device) # + pycharm={"name": "#%%\n"} img = Image.open("dataset/emma1.jpg") img_cropped1 = mtcnn(img) # + pycharm={"name": "#%%\n"} img_embedding1 = resnet(img_cropped1.unsqueeze(0).to(device)).cpu().detach().numpy() # dist = np.linalg.norm(img_embedding1 - img_embedding2)
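The commented-out `np.linalg.norm(img_embedding1 - img_embedding2)` line is the usual way to compare two face embeddings: verification thresholds their L2 distance. The same computation in plain Python, on toy vectors rather than real 512-dimensional embeddings:

```python
import math

def euclidean_distance(a, b):
    """L2 distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

euclidean_distance([3.0, 4.0], [0.0, 0.0])  # 5.0
# A match is typically declared when the distance falls below a tuned
# threshold; the threshold value itself is model- and dataset-dependent.
```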
train_for_spoofing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="duWPWgPsdhUu" # # Text Generation using GPT Neo 1.3B # ### Required Packages # + id="3fFR9LMeDv7y" # !pip install transformers # + id="8o7AEzWoEPeS" from transformers import pipeline # + [markdown] id="5FwilAgkdyqI" # ### GPT Neo 1.3B # # GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model. # # You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: # # + colab={"base_uri": "https://localhost:8080/", "height": 311, "referenced_widgets": ["86414a1313c84044a20794501b987e8c", "51166aefc6784b38bd1f7f9a6fa788c8", "da0721d295d940fe9e6a841ff00343ec", "a40cf76a542d4614807061720f5b19f7", "539785b08fdc4680ab4daab1698048f6", "7d74bf474b43481492482c3095e2d16d", "<KEY>", "<KEY>", "<KEY>", "6b98b15038e747079571f8719787b110", "a38b13a6490f4b25b9e89667ce9d3e3e", "<KEY>", "a4893f5ebe7e4c0793c0b1348713521e", "<KEY>", "<KEY>", "7736fb595ec6438196ef0cab042067ad", "98c3df25adde497296530ed9e4a85e17", "<KEY>", "5f3e87a17108440b8a5716c1132890c0", "101ce74f059c48cdad725e9bbf5a80f4", "38ebb0c2a89e4d53a8df294312624adf", "<KEY>", "4d7c473a4f8c42b6a186cca686be8737", "fefe0067442f4c5da25016e5fa23cb78", "a78748b2c1a04dfe89fe3f26bdce6ebd", "<KEY>", "62ae321a797147779bd748ce2ae17ff2", "e921981b54034bdcad36974e0a94c83d", "59f1795707984a38a0b6f92aca542831", "d8b3e445e00945d6b9c5e028569c5daa", "cf63a675327d426ab69ba9fb00e3f1db", "54c10042725b497d928e3de1eaf1a1cc", "4037fa797b1547b7981a5215d13bd003", "1bfc0bcece1049dba94202889e60e664", "15d7f77c85034b70b313e6ea8476245f", "1efd4b70da7e444f8a656509cc9e5e18", "<KEY>", "<KEY>", 
"acaab5d5bab845648e91db476244d1ee", "f0cc26e309c5481a974f39c8a48d7e1f", "1e7ba3017d104f16aae0ce3bd17d009c", "<KEY>", "a22050d88df8436eaacae5282f4b0314", "<KEY>", "c802b3ed025a4df392b2d141ece5f6f7", "<KEY>", "<KEY>", "fb67444a10ed4111b5beb575d58fc915"]} id="mGJLtvlNER7p" outputId="145e68ba-1a6e-425a-a884-3e04d597e347" generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') # + [markdown] id="N-4zGKpmgu4a" # ### Text Generation # + colab={"base_uri": "https://localhost:8080/"} id="OIudVNThEVYR" outputId="5eb70c9b-cca7-47b9-9f4d-f91f956eb341" generator("How was your", max_length=20, num_return_sequences=5)
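The call above returns a list of dicts, one per `num_return_sequences`, each carrying the continuation under a `generated_text` key. A sketch of consuming that structure, with stand-in output since running the real 1.3B model is heavyweight:

```python
def extract_generations(results):
    """Pull the text out of a text-generation pipeline's output,
    which has the shape [{'generated_text': ...}, ...]."""
    return [r["generated_text"] for r in results]

# Stand-in for generator("How was your", ...) output:
fake_results = [
    {"generated_text": "How was your day at the office?"},
    {"generated_text": "How was your trip to the coast?"},
]
extract_generations(fake_results)
```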
Natural Language Processing/NLP/TextGenerationGPTNEO1_3B.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %matplotlib inline # # Interpolate irregular data # -------------------------- # # The functions :func:`fatiando.gridder.interp` and # :func:`fatiando.gridder.interp_at` offer convenient wrappers around # ``scipy.interpolate.griddata``. The scipy function is more general and can # interpolate n-dimensional data. Our functions offer the convenience of # generating the regular grid points and optionally using nearest-neighbor # interpolation to extrapolate outside the convex hull of the data points. # # # + from fatiando import gridder import matplotlib.pyplot as plt import numpy as np # Generate synthetic data measured at random points area = (0, 1, 0, 1) x, y = gridder.scatter(area, n=500, seed=0) data = x*(1 - x)*np.cos(4*np.pi*x)*np.sin(4*np.pi*y**2)**2 # Say we want to interpolate the data onto a regular grid with a given shape shape = (100, 200) # The gridder.interp function takes care of selecting the containing area of # the data and generating the regular grid for us. # Let's interpolate using the different options offered by gridddata and plot # them all. 
plt.figure(figsize=(10, 8)) xp, yp, nearest = gridder.interp(x, y, data, shape, algorithm='nearest') plt.subplot(2, 2, 1) plt.title('Nearest-neighbors') plt.contourf(yp.reshape(shape), xp.reshape(shape), nearest.reshape(shape), 30, cmap='RdBu_r') xp, yp, linear = gridder.interp(x, y, data, shape, algorithm='linear') plt.subplot(2, 2, 2) plt.title('Linear') plt.contourf(yp.reshape(shape), xp.reshape(shape), linear.reshape(shape), 30, cmap='RdBu_r') xp, yp, cubic = gridder.interp(x, y, data, shape, algorithm='cubic') plt.subplot(2, 2, 3) plt.title('Cubic') plt.contourf(yp.reshape(shape), xp.reshape(shape), cubic.reshape(shape), 30, cmap='RdBu_r') # Notice that the cubic and linear interpolation leave empty the points that # are outside the convex hull (bounding region) of the original scatter data. # These data points will have NaN values or be masked in the data array, which # can cause some problems for processing and inversion (any FFT operation in # fatiando.gravmag will fail, for example). Use "extrapolate=True" to use # nearest-neighbors to fill in those missing points. xp, yp, cubic_ext = gridder.interp(x, y, data, shape, algorithm='cubic', extrapolate=True) plt.subplot(2, 2, 4) plt.title('Cubic with extrapolation') plt.contourf(yp.reshape(shape), xp.reshape(shape), cubic_ext.reshape(shape), 30, cmap='RdBu_r') plt.tight_layout() plt.show()
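The NaN-outside-the-hull behaviour described above has a simple 1-D analogue: a piecewise-linear interpolator that refuses to extrapolate past its sample range. A stdlib sketch (an illustration, not fatiando's implementation):

```python
from bisect import bisect_left

def interp1d(xs, ys, x):
    """Linear interpolation at x over sorted sample points xs.
    Returns None outside [xs[0], xs[-1]] -- the 1-D analogue of
    griddata leaving NaNs outside the convex hull of the data."""
    if x < xs[0] or x > xs[-1]:
        return None
    i = bisect_left(xs, x)
    if xs[i] == x:
        return ys[i]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

interp1d([0, 1, 2], [0, 10, 20], 0.5)  # 5.0
interp1d([0, 1, 2], [0, 10, 20], 3)    # None: no extrapolation
```

`extrapolate=True` in `gridder.interp` fills those `None`/NaN points with the nearest sample value instead.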
_downloads/interpolate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Conversion # ## Initialisation import os import plyfile import numpy import matplotlib.pyplot import pandas import csv import gdal from osgeo import osr doPlyClean = False # Set to False as this only needs to be run once. doTerrain1km = True # Set to False to speed up work without 1km data. doTerrain100m = False # Set to True to work with 100m data. doTerrain10m = False # Set to True to work with 10m data. # ## Data Cleaning def cleanPlyFiles(directory): for file in os.listdir(directory): if file.endswith(".ply"): with open(directory + "/" + file, "r") as fileOpen: lines = fileOpen.read() with open(directory + "/" + file, "w") as fileNew: lines = lines.replace("\n,comment", ",\ncomment") fileNew.write(lines) if doPlyClean: cleanPlyFiles("../data/ahn3_feature_1km") if doPlyClean: cleanPlyFiles("../data/ahn3_feature_100m") if doPlyClean: cleanPlyFiles("../data/ahn3_feature_10m") # ## Data Import terrainHeader = ["x", "y", "z", "coeff_var_z", "density_absolute_mean", "eigenv_1", "eigenv_2", "eigenv_3", "gps_time", "intensity", "kurto_z", "max_z", "mean_z", "median_z", "min_z", "perc_10", "perc_100", "perc_20", "perc_30", "perc_40", "perc_50", "perc_60", "perc_70", "perc_80", "perc_90", "point_density", "pulse_penetration_ratio", "range", "skew_z", "std_z", "var_z"] def plyIntoNumpyArray(directory, gridLength, columnList): fileList = [s for s in os.listdir(directory) if s.endswith(".ply")] terrainData = numpy.empty((gridLength * len(fileList), len(columnList))) for i, file in enumerate(fileList): plydata = plyfile.PlyData.read(directory + "/" + file) for j, column in enumerate(columnList): terrainData[gridLength * i:gridLength * i + gridLength, j] = plydata.elements[0].data[column] return terrainData if doTerrain1km: terrainData1km = 
plyIntoNumpyArray("../data/ahn3_feature_1km", 4, terrainHeader) if doTerrain100m: terrainData100m = plyIntoNumpyArray("../data/ahn3_feature_100m", 400, terrainHeader) if doTerrain10m: terrainData10m = plyIntoNumpyArray("../data/ahn3_feature_10m", 40000, terrainHeader) matplotlib.pyplot.scatter(terrainData1km[:, 0], terrainData1km[:, 1]) matplotlib.pyplot.show() birdColumnHeaders = ["x_coordinaat_m", "y_coordinaat_m", "taxon_id"] counter = 0 with open("../data/forest_nl_headers_species.csv") as f: birdData = numpy.empty((sum(1 for row in f), len(birdColumnHeaders))) with open("../data/forest_nl_headers_species.csv") as f: reader = csv.DictReader(f) for row in reader: data = [row[column] for column in birdColumnHeaders] if "" not in data and "0" not in data: birdData[counter] = numpy.asarray(data, dtype=float) counter += 1 birdData = numpy.delete(birdData, numpy.s_[counter:], 0) matplotlib.pyplot.scatter(birdData[:, 0], birdData[:, 1]) matplotlib.pyplot.show() # ## Combined PLY file def combinePlyFiles(directory, outputFileName): header = True with open(outputFileName + ".ply", "w") as fileNew: for file in os.listdir(directory): if file.endswith(".ply"): with open(directory + "/" + file, "r") as fileOpen: lines = fileOpen.readlines() if header: header = False for line in lines: fileNew.write(line) else: for j, line in enumerate(lines): if line.rstrip() == "end_header": keepLines = lines[j + 1:] for keepLine in keepLines: fileNew.write(keepLine) break fileNew.close() fileOpen.close() if doTerrain1km: combinePlyFiles("../data/ahn3_feature_1km", "../data/combined1km") if doTerrain100m: combinePlyFiles("../data/ahn3_feature_100m", "../data/combined100m") if doTerrain10m: combinePlyFiles("../data/ahn3_feature_10m", "../data/combined10m") # ## Compressed Numpy Dataset if doTerrain1km: numpy.savez_compressed("../data/compressedDatasets1km", terrainData1km=terrainData1km, terrainHeader=terrainHeader, birdData=birdData) if doTerrain100m: 
numpy.savez_compressed("../data/compressedDatasets100m", terrainData100m=terrainData100m, terrainHeader=terrainHeader, birdData=birdData) if doTerrain10m: numpy.savez_compressed("../data/compressedDatasets10m", terrainData10m=terrainData10m, terrainHeader=terrainHeader, birdData=birdData) # ## GeoTiff def combineTerrainFeatures(terrainData, terrainHeader): bands = len(terrainHeader) - 3 # removing x, y and z listX = numpy.unique(terrainData[:, 0]) dictX = dict(zip(listX, range(len(listX)))) listY = numpy.unique(terrainData[:, 1]) dictY = dict(zip(listY, range(len(listY)))) arrays = numpy.full((bands, len(listY), len(listX)), numpy.nan) for terrainDatum in terrainData: indexX = dictX[terrainDatum[0]] indexY = dictY[terrainDatum[1]] for i in range(bands): arrays[i, indexY, indexX] = terrainDatum[3 + i] return arrays, bands, len(listY), len(listX) def getGeoTransform(terrainData, nrows, ncols): xmin, ymin, xmax, ymax = [terrainData[:, 0].min(), terrainData[:, 1].min(), terrainData[:, 0].max(), terrainData[:, 1].max()] xres = (xmax - xmin) / float(ncols) yres = (ymax - ymin) / float(nrows) return (xmin, xres, 0, ymin, 0, yres) def writeGeoTiff(featureArrays, terrainHeader, geoTransform, outputFileName, ncols, nrows, bands): output_raster = gdal.GetDriverByName('GTiff').Create(outputFileName + ".tif", ncols, nrows, bands, gdal.GDT_Float32, ['COMPRESS=LZW']) output_raster.SetMetadata(dict(zip(["band_{:02d}_key".format(i) for i in range(1, 1 + bands)], terrainHeader[3:]))) output_raster.SetGeoTransform(geoTransform) srs = osr.SpatialReference() srs.ImportFromEPSG(28992) output_raster.SetProjection(srs.ExportToWkt()) for i in range(bands): rb = output_raster.GetRasterBand(1 + i) rb.SetMetadata({"band_key": terrainHeader[3 + i]}) rb.WriteArray(featureArrays[i]) output_raster.FlushCache() def terrainDataToGeoTiff(terrainData, terrainHeader, outputFileName): combinedTerrainFeatures, bands, nrows, ncols = combineTerrainFeatures(terrainData, terrainHeader) geoTransform = 
getGeoTransform(terrainData, nrows, ncols) writeGeoTiff(combinedTerrainFeatures, terrainHeader, geoTransform, outputFileName, ncols, nrows, bands) if doTerrain1km: terrainDataToGeoTiff(terrainData1km, terrainHeader, "../data/terrainData1km") if doTerrain100m: terrainDataToGeoTiff(terrainData100m, terrainHeader, "../data/terrainData100m") if doTerrain10m: terrainDataToGeoTiff(terrainData10m, terrainHeader, "../data/terrainData10m") # End of _Jupyter Notebook_.
jupyter_notebooks/DataConversion.ipynb
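The notebook above bundles several arrays into one `.npz` archive with `numpy.savez_compressed`. A minimal, self-contained sketch of the save/load round-trip (the array names and file name here are illustrative stand-ins, not the notebook's real data):

```python
import numpy as np

# Hypothetical stand-ins for the terrain/bird arrays built in the notebook.
terrainData = np.arange(12, dtype=float).reshape(4, 3)
birdData = np.array([[1.0, 2.0, 42.0], [3.0, 4.0, 7.0]])

# Save several arrays into one compressed .npz archive, keyed by name.
np.savez_compressed("datasets_demo", terrainData=terrainData, birdData=birdData)

# Loading returns a lazy archive; index it by the keyword names used above.
with np.load("datasets_demo.npz") as archive:
    restored = archive["datasets" if False else "terrainData"]

print(restored.shape)  # (4, 3)
```

The keyword names passed to `savez_compressed` become the keys of the archive, which is why the downstream notebooks can look up `terrainData1km` etc. by name.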
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Load Packages

# Primary Packages
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# +
# Modelling
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Package to Save Model
import joblib

# Metrics
from sklearn.metrics import classification_report
# -

# Pandas options
pd.options.display.max_columns = 999

# # Load Data

data = pd.read_csv('data/census.csv')
data.head()

dep_var = 'high_income'
cat_names = ['workclass', 'education_level', 'marital-status', 'occupation',
             'relationship', 'race', 'sex', 'native-country']
cont_names = ['age', 'capital-gain', 'capital-loss', 'hours-per-week']

# # Feature Engineering

# +
# Create Boolean Target
data['high_income'] = data['income'] == '>50K'

# Multihot encode categorical variables
df_cat = pd.get_dummies(data[cat_names].astype(str))

# Reassign numerical to diff df
df_cont = data[cont_names]

# Normalize numerical features
df_cont_norm = (df_cont - df_cont.min()) / (df_cont.max() - df_cont.min())

# Concatenate features
X = pd.concat([df_cat, df_cont_norm], axis=1)

# Create target df
y = data[dep_var]
# -

X.shape, y.shape

X.columns

# +
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
# model = GradientBoostingRegressor(n_estimators=300, random_state=42)

# Fit Model
# %time model.fit(X_train, y_train)
# -

y_pred = model.predict(X_test)

# classification_report expects (y_true, y_pred)
report = classification_report(y_test, y_pred)
print(report)

# # Create A Simpler Model
#
# For our WebApp, let's just use a few simple features like:
# - Age
# - Hours per Week
# - Education Level
# - Sex
# - Race

# Get features we want and create a new dataframe
columns = ['age', 'hours-per-week', 'education_level', 'sex', 'race']
data_small = data[columns]

# Onehot encode categorical features
data_small_dummies = pd.get_dummies(data_small)
data_small_dummies.head()

# # Generate Model

# Assign X and y
X_small, y_small = data_small_dummies, y

# +
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_small, y_small, test_size=0.4, random_state=42)

model = RandomForestClassifier(n_estimators=10, random_state=42)
# model = GradientBoostingRegressor(n_estimators=300, random_state=42)

# Fit Model
# %time model.fit(X_train, y_train)

# +
# Get Test Predictions
y_pred = model.predict(X_test)

# Get Metrics
report = classification_report(y_test, y_pred)
print(report)
# -

# # Save Model

# Save Model
joblib.dump(model, 'model/census_model.pkl')

# # Generate Predictions Using Sample Inputs and Saved Model

# ### Education Levels

education_level_values = pd.Series(data['education_level'].unique()).str.strip()
education_level_dummies = pd.get_dummies(education_level_values)
education_level_dummies

# ### Race

race_values = pd.Series(data['race'].unique()).str.strip()
race_dummies = pd.get_dummies(race_values)
race_dummies

# ### Sex

sex_values = pd.Series(data['sex'].unique()).str.strip()
sex_dummies = pd.get_dummies(sex_values)
sex_dummies

data_small_dummies.columns

# ### Load Model

model = joblib.load('model/census_model.pkl')

education_level_values

race_values

# +
# Age
age = 21

# Hours per week
hours = 80

# Education Level
education_level_sample = 'HS-grad'
education_level_sample_dummies = (education_level_dummies
                                  .loc[np.where(education_level_values.values == education_level_sample)[0]]
                                  .values.tolist()[0])

# Race
race_sample = 'White'
race_sample_dummies = race_dummies.loc[np.where(race_values.values == race_sample)[0]].values.tolist()[0]

# Gender/Sex
sex_sample = 'Male'
sex_sample_dummies = sex_dummies.loc[np.where(sex_values.values == sex_sample)[0]].values.tolist()[0]
# -

# Concatenate features for sample prediction
sample_features = [age, hours] + education_level_sample_dummies + sex_sample_dummies + race_sample_dummies
len(sample_features)

# Sample Predictions
prediction = model.predict([sample_features])[0]
prediction
model_generator.ipynb
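Assembling the sample feature vector by hand, as the notebook does, silently breaks if the dummy-column order ever changes. A safer pattern (a sketch with toy data, not the notebook's census frame) is to one-hot encode the inference-time sample and `reindex` it against the training columns:

```python
import pandas as pd

# Toy training frame standing in for data_small in the notebook.
train = pd.DataFrame({
    "age": [21, 35, 50],
    "education_level": ["HS-grad", "Bachelors", "HS-grad"],
    "sex": ["Male", "Female", "Male"],
})
train_dummies = pd.get_dummies(train)

# A single inference-time sample, encoded the same way...
sample = pd.DataFrame({"age": [21], "education_level": ["HS-grad"], "sex": ["Male"]})
sample_dummies = pd.get_dummies(sample)

# ...then reindexed to the training columns, filling unseen categories with 0,
# so the feature order always matches what the model was fit on.
sample_aligned = sample_dummies.reindex(columns=train_dummies.columns, fill_value=0)

print(list(sample_aligned.columns) == list(train_dummies.columns))  # True
```

`sample_aligned` can then be passed straight to `model.predict` without maintaining the `[age, hours] + education + sex + race` concatenation order by hand.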
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# <NAME> v 0414190301

# +
import os
import subprocess as sp
import multiprocessing as mp
from glob import glob

idir = "/home/data/sandbox/3C273_all/"
ilist = os.listdir(idir)
ele = [glob(idir + w + "/odf/*TAR") for w in ilist]

def ru1(w):
    os.chdir(idir + w + "/odf/")
    os.system("tar xzf " + w + ".tar.gz")
# -

todo = [ilist[e] for e in range(len(ele))
        if len(ele[e]) == 0 and not os.path.exists(idir + ilist[e] + "/odf/ccf.cif")]
todo

jobs = []
for i in todo:
    p = mp.Process(target=ru1, args=(i,))
    jobs.append(p)
    p.start()
jobs

# +
ele = [glob(idir + w + "/odf/*TAR") for w in ilist]

def ru2(w):
    os.chdir(os.path.dirname(w))
    os.system("tar xf " + os.path.basename(w))
    # os.mkdir("pn")
    os.unlink(w)

ele2 = glob(idir + "0*/odf/*SUM.ASC")

def ru3(u):
    os.environ["SAS_ODF"] = u
    print(sp.Popen("cifbuild", cwd=os.path.dirname(u), stdout=sp.PIPE).stdout.readlines()[-1])
# -

# next step of unpacking
todo = [e[0] for e in ele if len(e) > 0]
jobs = []
for i in todo:
    p = mp.Process(target=ru2, args=(i,))
    jobs.append(p)
    p.start()
jobs

todo = [e for e in ele2 if not os.path.exists(os.path.dirname(e) + "/ccf.cif")]
jobs = []
for i in todo[:10]:
    p = mp.Process(target=ru3, args=(i,))
    jobs.append(p)
    p.start()
jobs

ele3 = glob(idir + "0*/odf/ccf.cif")
len(ilist), len(ele2), len(ele3)

os.environ.update({"HEADAS": "/isdc/heasoft-6.16/x86_64-unknown-linux-gnu-libc2.12/"})
scrp = os.popen("/isdc/heasoft-6.16/x86_64-unknown-linux-gnu-libc2.12/BUILD_DIR/headas-setup csh").read().strip()
scomm = open(scrp).readlines()
os.unlink(scrp)
# scomm
slist = [a.strip().replace('"', '').split()[1:] for a in scomm]
os.environ.update(dict(slist))

os.environ["SAS_DIR"] = "/isdc/xmmsas_20141104_1833"
rdir = os.environ["SAS_DIR"]
os.environ.update({'SAS_PATH': rdir, 'SAS_VERBOSITY': '4', 'SAS_IMAGEVIEWER': 'ds9',
                   'SAS_BROWSER': 'firefox', 'SAS_SUPPRESS_WARNING': '1'})
os.environ['LD_LIBRARY_PATH'] = rdir + '/lib:' + rdir + '/libextra:' + os.environ.get('LD_LIBRARY_PATH', '.')
# #+'/opt/rh/python33/root/usr/lib64:/isdc/heasoft-6.16/x86_64-unknown-linux-gnu-libc2.12/lib'
os.environ['LIBRARY_PATH'] = rdir + '/libsys:' + rdir + '/libextra:' + rdir + '/lib'
os.environ['PATH'] = rdir + '/binextra:' + rdir + '/bin:' + rdir + '/bin/devel:' + os.environ.get('PATH', '')
sas_init = ". " + os.environ["SAS_DIR"] + "/sas-setup.sh; "

def ru4(u):
    os.environ["SAS_ODF"] = os.path.dirname(u)
    os.environ["SAS_CCF"] = u
    print(sp.Popen("odfingest", cwd=os.path.dirname(u), stdout=sp.PIPE).stdout.readlines()[-1])

# ru4(ele3[4])

ele4 = glob(idir + "0*/odf/*SAS")
dir4 = [os.path.dirname(e) for e in ele4]
todo = [e for e in ele3 if os.path.dirname(e) not in dir4]
print(len(todo), todo[:5])

jobs = []
for i in todo[:8]:
    p = mp.Process(target=ru4, args=(i,))
    jobs.append(p)
    p.start()

# +
sele4 = [os.path.dirname(e) for e in ele4 if not os.path.exists(os.path.dirname(e) + '/mos')]

def ru5(u, mode='pn'):
    os.environ["SAS_ODF"] = u
    os.environ["SAS_CCF"] = u + '/ccf.cif'
    os.mkdir(u + '/' + mode)
    print(sp.Popen("e" + mode[0] + "proc", cwd=u + '/' + mode,
                   stdout=sp.PIPE, stderr=sp.PIPE).stdout.readlines()[-1])

jobs = []
for i in sele4[:14]:
    p = mp.Process(target=ru5, args=(i, 'mos'), name=i[-14:-4])
    jobs.append(p)
    p.start()
jobs
# -

zz = [q for q in jobs if q.is_alive()]
print(len(zz))
zz

# +
res1 = glob(idir + "0*/odf/pn/*ingEvts.ds")
res2 = glob(idir + "0*/odf/mos/*ingEvts.ds")
[(os.path.basename(r), os.path.getsize(r) // 2**20) for r in res2]
analysis/Multiproc.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: AA-Wk
#     language: python
#     name: jl
# ---

# +
import sys

import jmespath
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd

import fiostats

plt.style.use('ggplot')
# -

# # Ten clients write to a single OST

data = fiostats.read_test_sequence("many_writer/20190429T125251")
# print(jmespath.search('[*].jobs[*]."job options"', data))

bw = jmespath.search('[*].jobs[*].write.bw_mean', data)
bw = np.array([x[0] for x in bw])
bw /= 1024.
print("Bandwidth per client:", bw)
print("Total Bandwidth:", bw.sum())

# # Bandwidth for many clients writing to a single OST

# read data for 1, 2, 4 and 10 client nodes (single process/client;
# the 10-client run is a repeat of the result above)
dall = []
for date in ('20190429T133618', '20190429T134050', '20190429T134534', '20190429T135021'):
    dall.append(fiostats.read_test_sequence("many_writer/{}".format(date)))

c = []
for d in dall:
    bw = jmespath.search('[*].jobs[*].write.bw_mean', d)
    nt = len(bw)
    for test in bw:
        c.append((nt, test[0] / 1024.))
df = pd.DataFrame(data=c, columns=('nc', 'bw'))
grp = df.groupby('nc')

for res in jmespath.search('[*][*].jobs[*]."job options".directory', dall):
    print(res)

fig = plt.figure(figsize=(11, 7))
plt.plot(grp.sum() / 1024., 'ob')
plt.xlabel('Number of client nodes (one process/client)')
plt.ylabel('Bandwidth [GB/s]')
plt.plot([0, 10], [0, 10], 'm--', label="1GB/s/process")
plt.legend(loc=4)
_ = plt.title("Bandwidth of many clients writing to a single OST")

# +
## Rate for each client.
# -

cl = ('r', 'g', 'b', 'y')
pl = []
for nc, group in grp:
    x = group.iloc[:, 1].values
    pl.append(x)
_ = plt.hist(pl, color=cl)

# # Many clients writing to many OSTs

# The number of client nodes was varied. Each client ran four processes and the
# writing was spread over all eight OSTs.
# The path __/ffb01/wktst/tests/all/__ was used, which has a stripe count of one
# and no stripe index set (lustre selects the OSTs).

# +
# data for 1, 2, 4 and 10 client nodes (four processes/client)
d2 = []
for date in ('20190429T155313', '20190429T155816', '20190429T160319', '20190429T160953'):
    d2.append(fiostats.read_test_sequence("many_writer/{}".format(date)))

for res in jmespath.search('[*][*].jobs[*]."job options".directory', d2):
    print(res)

# +
c = []
for d in d2:
    bw = jmespath.search('[*].jobs[*].write.bw_mean', d)
    bw_flatten = [x for test in bw for x in test]
    nt = len(bw_flatten)
    for test in bw_flatten:
        c.append((nt, test / 1024.))
df = pd.DataFrame(data=c, columns=('nc', 'bw'))
grp4 = df.groupby('nc')

fig = plt.figure(figsize=(11, 7))
plt.plot(grp4.sum() / 1024., 'ob-', label="4proc/node writing to 8 OSTs")
plt.plot(grp.sum() / 1024., 'og-', label="1proc/node writing to single OST")
plt.xlabel('Number of processes (n-clients * procs/clients)')
plt.ylabel('Bandwidth [GB/s]')
plt.plot([0, 30], [0, 30], 'm--', label="1GB/s/process")
plt.legend(loc=4)
_ = plt.title("Bandwidth of many clients writing to 1 or 8 OSTs")
plt.savefig("pics/multiple_node_wbw.png")
# -
drpdev/many_writer.ipynb
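The aggregation step in the notebook turns per-process `(nc, bw)` records into total bandwidth per node count with a pandas `groupby`. A self-contained sketch with toy numbers (the values are illustrative, not the fio measurements):

```python
import pandas as pd

# Toy per-process bandwidth records (nc = number of client nodes, bw = MB/s),
# standing in for the (nc, bw) tuples built from the fio JSON in the notebook.
c = [(1, 1050.0), (2, 980.0), (2, 1010.0), (4, 940.0), (4, 970.0), (4, 990.0), (4, 1000.0)]
df = pd.DataFrame(data=c, columns=("nc", "bw"))

# Total bandwidth per node count, as plotted with grp.sum() in the notebook.
grp = df.groupby("nc")
total = grp["bw"].sum()
print(total.loc[4])  # 3900.0
```

The grouped sum is what gets divided by 1024 and plotted against the 1 GB/s/process reference line.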
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

from pykat import finesse
from pykat.commands import *
import numpy as np
import matplotlib.pyplot as plt
import scipy
import scipy.signal

basecode = """
# laser (n0)------------> (n1)|M1|(n2)<---->s(3k)<--------->(n3)|M2|(n4)
l laser 1 0 n0
s s0 0.1 n0 n1

# the cavity
m1 M1 0.15 0 0 n1 n2
s scav 3k n2 n3
m1 M2 0.15 0 0 n3 n4
"""

basekat = finesse.kat()
basekat.verbose = False
basekat.parse(basecode)

kat1 = basekat.deepcopy()
PDcode = """
# Photo diodes measuring DC-power
pd refl n1    # Reflected field
pd circ n3    # Circulating field
pd tran n4    # Transmitted field

## Simulation instructions ##
xaxis M2 phi lin -20 200 300
yaxis abs
"""
kat1.parse(PDcode)
out1 = kat1.run()

out1.plot(xlabel='Position of mirror M2 [deg]',
          ylabel='Power [W]',
          title='Power vs. microscopic cavity length change')

kat2 = kat1.deepcopy()
kat2.parse("xaxis laser f lin 0 200k 1000")
out = kat2.run()
fig = out.plot(ylabel="Power [W]")

indexes = scipy.signal.find_peaks_cwt(out['circ'], np.ones_like(out['circ']))
print("Modelled FSR: ", out.x[indexes][2] - out.x[indexes][1])

kat3 = kat2.deepcopy()
kat3.M2.setRTL(1, 0, 0)
out = kat3.run()
out.plot(detectors=['circ'])

indexes = scipy.signal.find_peaks_cwt(out['circ'], np.ones_like(out['circ']))
FSR = out.x[indexes][2] - out.x[indexes][1]
FSR

kat4 = kat3.deepcopy()
kat4.parse("""
xaxis laser f lin 49k 51k 1000
""")
out = kat4.run()

plt.plot(out.x, out['circ'] / out['circ'].max())
plt.ylabel("P_circ / max(P_circ)")
plt.xlabel("f [Hz]")

# +
plt.axhline(0.5, color='r')
plt.axvline(49300, color='r', ls='--')
plt.axvline(50600, color='r', ls='--')
# -

print("Modelled finesse =", FSR / 1300)
print("Calculated finesse =", np.pi / (1 - np.sqrt(0.85)))
Fabry Perot Cavity/simple_fabry_perot.ipynb
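The modelled FSR and finesse above can be cross-checked analytically. For a linear cavity the free spectral range is $c/2L$, and for mirrors with power reflectivities $R_1$, $R_2$ the finesse is $\pi (R_1 R_2)^{1/4} / (1 - \sqrt{R_1 R_2})$. The sketch below plugs in the 3 km cavity from `basecode`, with M2 made perfectly reflective as in the `setRTL(1, 0, 0)` cell (so $R_2 = 1$); the $R_1 = 0.85$ value is taken from the kat file's mirror definition.

```python
import math

c = 299792458.0   # speed of light [m/s]
L = 3000.0        # cavity length [m], the 3 km space "scav" in basecode
R1, R2 = 0.85, 1.0  # power reflectivities: M1 from basecode, M2 after setRTL(1, 0, 0)

# Free spectral range of a two-mirror cavity.
FSR = c / (2 * L)

# Finesse from the mirror reflectivities.
finesse = math.pi * (R1 * R2) ** 0.25 / (1 - math.sqrt(R1 * R2))

print(f"FSR = {FSR:.0f} Hz")   # ~50 kHz, matching the modelled peak spacing
print(f"finesse = {finesse:.1f}")
```

Note the full formula gives a slightly lower finesse than the notebook's approximation $\pi/(1-\sqrt{0.85})$, and sits closer to the modelled value `FSR / 1300`.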
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # Backtesting Plots for mutation growth rate paper
#
# This notebook generates plots for the [paper/backtesting](paper/backtesting) directory.
# This assumes you've already run
# ```sh
# make update            # Downloads data (~1 hour).
# make preprocess-usher  # Preprocesses usher tree
# make backtesting       # Fits backtesting models
# ```

# +
# #%load_ext autoreload
# #%autoreload 2
# -

import datetime
import math
import os
import pickle
import re
import logging
from collections import Counter, OrderedDict, defaultdict

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import matplotlib.cm as cm
import pandas as pd
import torch
import pyro.distributions as dist
import seaborn as sns
import colorcet as cc

from pyrocov import mutrans, pangolin, stats
from pyrocov.stats import normal_log10bf
from pyrocov.util import pretty_print, pearson_correlation

matplotlib.rcParams["figure.dpi"] = 200

# configure logging
logging.basicConfig(format="%(relativeCreated) 9d %(message)s", level=logging.INFO)
# This line can be used to modify logging as required later in the notebook
# logging.getLogger().setLevel(logging.INFO)

# set matplotlib params
# matplotlib.rcParams['figure.figsize'] = [8, 8]
matplotlib.rcParams["axes.edgecolor"] = "gray"
matplotlib.rcParams["savefig.bbox"] = "tight"
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.sans-serif'] = ['Arial', 'Avenir', 'DejaVu Sans']

# ## Load input data

# Load the entire constant dataset
max_num_clades = 3000
min_num_mutations = 1
min_region_size = 50
ambiguous = False
columns_filename = f"results/columns.{max_num_clades}.pkl"
features_filename=f"results/features.{max_num_clades}.{min_num_mutations}.pt" input_dataset = mutrans.load_gisaid_data( device="cpu", columns_filename=columns_filename, features_filename=features_filename, min_region_size=min_region_size ) # ## Load trained models fits = torch.load("results/mutrans.backtesting.pt", map_location="cpu") print(f'We have loaded {len(fits)} models') # print info on available models and what the keys are if True: for key in fits: print(f'{key} -- {fits[key]["weekly_clades_shape"]}') # Scale `coef` by 1/100 in all results. # + ALREADY_SCALED = set() def scale_tensors(x, names={"coef"}, scale=0.01, prefix="", verbose=True): if id(x) in ALREADY_SCALED: return if isinstance(x, dict): for k, v in list(x.items()): if k in names: if verbose: print(f"{prefix}.{k}") x[k] = v * scale elif k == "diagnostics": continue else: scale_tensors(v, names, scale, f"{prefix}.{k}", verbose=verbose) ALREADY_SCALED.add(id(x)) scale_tensors(fits, verbose=False) # - forecast_dir_prefix = "paper/backtesting/" # # Forecasting def weekly_clades_to_lineages(weekly_clades, clade_id_to_lineage_id, n_model_lineages): weekly_lineages = weekly_clades.new_zeros(weekly_clades.shape[:-1] + (n_model_lineages,)).scatter_add_( -1, clade_id_to_lineage_id.expand_as(weekly_clades), weekly_clades) return weekly_lineages def plusminus(mean, std): p95 = 1.96 * std return torch.stack([mean - p95, mean, mean + p95]) from pyrocov.util import ( pretty_print, pearson_correlation, quotient_central_moments, generate_colors ) def split(a, n): k, m = divmod(len(a), n) return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n)) # + tags=[] def select_lineages_for_plot( weekly_lineages, num_lineages, lineage_id_inv, location_ids, # location ids nbins = 10, additional_lineages = [], ): """Return names of lineages for plot""" keep_per_bin = math.ceil(num_lineages / nbins) T = weekly_lineages.shape[0] time_intervals = list(split(np.arange(T), nbins)) lineage_ids = [] for interval in 
time_intervals: kept_lineage_ids = weekly_lineages[interval][:, location_ids].sum([0, 1]).sort(-1, descending=True).indices[:keep_per_bin] lineage_ids.append(kept_lineage_ids) lineage_ids = torch.cat(lineage_ids) additional_indexes = list(lineage_id_inv.index(x) for x in additional_lineages) lineage_ids = torch.cat((lineage_ids, torch.tensor(additional_indexes))).tolist() return sorted(set(lineage_id_inv[x] for x in lineage_ids)) # - def generate_colors_from_lineage_names(lineage_names): standard_colors_dict = { 'BA.1': cc.glasbey[0], 'BA.2': cc.glasbey[1], 'BA.1.1': cc.glasbey[2], 'AY.4': cc.glasbey[3], 'B.1.1.7': cc.glasbey[4], 'B.1.1': cc.glasbey[5], 'B.1.177': cc.glasbey[6], } glasbey_offset = len(standard_colors_dict) colors = [] for lineage_name in lineage_names: try: color = standard_colors_dict[lineage_name] except KeyError: color = cc.glasbey[glasbey_offset] glasbey_offset += 1 colors.append(color) return colors # + tags=[] def plot_forecast2(fit, input_dataset, queries, num_lineages=10, filenames=[], verbose=False, additional_lineages = ['BA.2'], nbins=5, legend_out=False, figsize_x = None): # Convert queries to array if only only string if isinstance(queries, str): queries = [queries] # Get dimensions of the model fit (T,P,L) these are probabilities n_model_periods, n_model_places, n_model_lineages = fit['mean']['probs'].shape if (verbose): print('---') print(f'n_model_periods: {n_model_periods}') print(f'n_model_places: {n_model_places}') print(f'n_model_lineages: {n_model_lineages}') # Get dimensions of weekly_cases (T,P) these are JHU counts weekly_cases_fit = fit['weekly_cases'] n_cases_periods, n_cases_places = weekly_cases_fit.shape if (verbose): print('---') print(f'n_cases_periods: {n_cases_periods}') print(f'n_cases_places: {n_cases_places}') # Some checks assert n_cases_places == n_model_places assert n_model_periods > n_cases_periods # Calculate how many periods are forecasted (i.e. 
are beyond the input to the model) n_forecast_steps = n_model_periods - n_cases_periods if (verbose): print(f'n_forecast_steps: {n_forecast_steps}') # Weekly case counts by time place and clade obtained from the fit weekly_clades_fit = fit['weekly_clades'] # T, P, C if verbose: print('---') print(f'weekly_clades_fit shape: {weekly_clades_fit.shape}') # Weekly case counts by time place and clade obtain from the input data # This has more time point and more regions than the one from the fit weekly_clades_data = input_dataset['weekly_clades'] if verbose: print('---') print(f'weekly_clades_data shape: {weekly_clades_data.shape}') # Mapping from clades to lineages, a tensor of indexes # This is valid for both the fit and the input_data clade_id_to_lineage_id = input_dataset['clade_id_to_lineage_id'] if verbose: print('---') print(f'clade_id_to_lineage_id length: {len(clade_id_to_lineage_id)}') # We don't have clade_id_to_lineage_id in the fit -- it should in principle be the same # Summarize the counts of the weekly_clades (from data or fit) to the number of lineages in the model weekly_lineages_data = weekly_clades_to_lineages(weekly_clades_data, clade_id_to_lineage_id, n_model_lineages) weekly_lineages_fit = weekly_clades_to_lineages(weekly_clades_fit, clade_id_to_lineage_id, n_model_lineages) # Add CI to the probs probs = plusminus(fit['mean']['probs'], fit['std']['probs']) # [3,T,P,L] # Expand weekly_cases_fit (JHU counts) from the model to cover the steps we are forecasting padding = 1 + weekly_cases_fit.mean(0, keepdim=True).expand(n_forecast_steps, -1) weekly_cases_fit_ = torch.cat([weekly_cases_fit, padding], 0) weekly_cases_fit_.add_(10) # Generate predictions # Note: For the evaluation maybe we are better off comparing probabilities not counts predicted = probs * weekly_cases_fit_[..., None] # This is an array of strings listing the locations for the data location_id_inv_data = input_dataset['location_id_inv'] if (verbose): print('---') 
print(f'location_id_inv_data length: {len(location_id_inv_data)}') # This is an array of strings listing the locations for the fit location_id_inv_fit = fit['location_id_inv'] if verbose: print('---') print(f'location_id_inv_fit length: {len(location_id_inv_fit)}') # Get the location indexes that we want to keep based on query for the data ids_fit = torch.tensor([i for i, name in enumerate(location_id_inv_fit) if any(q in name for q in queries)]) # These are the lineage labels, we can get them from either the fit or the dataset. # We assume that these are identical and we assert this below lineage_id_inv_fit = fit['lineage_id_inv'] lineage_id_inv_data = input_dataset['lineage_id_inv'] assert lineage_id_inv_fit == lineage_id_inv_data # Subset weekly_lineages_fit to those location sum over time and place and get the indices in descending order plot_lineages_ids_inv_fit = select_lineages_for_plot( weekly_lineages = weekly_lineages_fit, num_lineages = num_lineages, lineage_id_inv = lineage_id_inv_fit, location_ids = ids_fit, nbins = nbins, additional_lineages = additional_lineages, ) # tbw plot_lineages_ids_inv_pred = select_lineages_for_plot( weekly_lineages = fit['mean']['probs'], num_lineages = num_lineages, lineage_id_inv = lineage_id_inv_fit, location_ids = ids_fit, nbins = nbins, additional_lineages = additional_lineages, ) # Same thing for the data ids_data = torch.tensor([ i for i, name in enumerate(location_id_inv_data) if any(q in name for q in queries)]) # Subset weekly_lineages_fit to those location sum over time and place and get the indices in descending order plot_lineages_ids_inv_data = select_lineages_for_plot( weekly_lineages = weekly_lineages_data, num_lineages = num_lineages, lineage_id_inv = lineage_id_inv_data, location_ids = ids_data, nbins = nbins, additional_lineages = additional_lineages, ) # merge the lineage name from datset and fit to get a single list lineage_name_to_index_map_data = { l:i for i, l in enumerate(lineage_id_inv_data)} 
lineage_name_to_index_map_fit = { l:i for i, l in enumerate(lineage_id_inv_fit)} plot_lineages_ids_inv_joint = sorted( set(plot_lineages_ids_inv_fit) .union(plot_lineages_ids_inv_data) .union(plot_lineages_ids_inv_pred)) lineage_ids_fit = list(map(lineage_name_to_index_map_fit.get, plot_lineages_ids_inv_joint)) lineage_ids_data = list(map(lineage_name_to_index_map_data.get, plot_lineages_ids_inv_joint)) assert lineage_ids_fit == lineage_ids_data # we may have a few plotted lineages now... num_lineages = len(plot_lineages_ids_inv_joint) # Get some colors to plot with colors = generate_colors_from_lineage_names(plot_lineages_ids_inv_joint) assert len(colors) >= num_lineages light = '#bbbbbb' dark = '#444444' # Generate Figure if figsize_x is None: figsize_x = 8 fig, axes = plt.subplots(len(queries), figsize=(figsize_x, 0.5 + 2.5 * len(queries)), sharex=True) if not isinstance(axes, (list, np.ndarray)): axes = [axes] # Get x axis dates for plotting dates = matplotlib.dates.date2num(mutrans.date_range(len(fit["mean"]["probs"]))) # Query (region) plotting loop for row, (query, ax) in enumerate(zip(queries, axes)): # location ids for this query (some queries are made of multiple regions) ids_fit = torch.tensor([i for i, name in enumerate(location_id_inv_fit) if query in name]) if verbose: print('---') print(f"{query} matched {len(ids_fit)} regions in the fit") # location ids for this query in the data ids_data = torch.tensor([i for i, name in enumerate(location_id_inv_data) if query in name]) if len(axes) > 1: # Plot weekly cases total counts = weekly_cases_fit[:, ids_fit].sum(1) if verbose: print(f"{query}: max {counts.max():g}, total {counts.sum():g}") counts /= counts.max() ax.plot(dates[:len(counts)], counts, "k-", color=light, lw=0.8, zorder=-20) # Plot weekly lineages total we are getting the data from the fit not the dataset counts = weekly_lineages_fit[:, ids_fit].sum([1, 2]) counts /= counts.max() ax.plot(dates[:len(counts)], counts, "k--", color=light, lw=1, 
zorder=-20) # Get the predictions for the relevant regions, normalize pred = predicted.index_select(-2, ids_fit).sum(-2) pred /= pred[1].sum(-1, True).clamp_(min=1e-20) # Get the observations for the relevant regions obs = weekly_lineages_fit[:, ids_fit].sum(1) obs /= obs.sum(-1, True).clamp_(min=1e-9) # Observations from the data -- this extends further in the time dimension obs_data = weekly_lineages_data[:, ids_data].sum(1) obs_data /= obs_data.sum(-1, True).clamp(min=1e-9) # lineage plotting loop for s, color in zip(lineage_ids_fit, colors): lb, mean, ub = pred[..., s] ax.fill_between(dates, lb, ub, color=color, alpha=0.2, zorder=-10) ax.plot(dates, mean, color=color, lw=1, zorder=-9) # Get the lineage label lineage = lineage_id_inv_fit[s] ax.plot(dates[:len(obs)], obs[:, s], color=color, lw=0, marker='o', markersize=3, label=lineage if row == 0 else None) # Plot observations from the dataset for all the forecast points # TODO: Fix colors to match (we probably want to fix "sort(-1, descending=True)" to be a matching permutation instead) for s, color in zip(lineage_ids_data, colors): lineage = lineage_id_inv_data[s] max_time_step = min((len(obs)+n_forecast_steps), obs_data.shape[0]-1) ax.plot(dates[len(obs):max_time_step], obs_data[len(obs):max_time_step, s], label='_nolegend_', color=color, lw=0, marker='x', markersize=2) # Add shading for the forecast region ax.axvline(dates[len(obs)], linestyle='--', lw=1, color=(0.5, 0.5, 0.5)) ax.axvspan(dates[len(obs)],dates[len(obs)+n_forecast_steps-1], color=(0.5, 0.5, 0.5), alpha=0.2) # Set axis ticks ax.set_ylim(0, 1) ax.set_yticks(()) ax.set_ylabel(query.replace(" / ", "\n")) ax.set_xlim(dates.min(), dates.max()) # Print legend if legend_out: if row == 0: ax.legend(loc="upper left", bbox_to_anchor=(1.01, 1.04), fontsize=10) elif row == 1: ax.plot([], "k--", color=light, lw=1, label="relative #samples") ax.plot([], "k-", color=light, lw=0.8, label="relative #cases") ax.plot([], lw=0, marker='o', markersize=3, 
color='gray', label="observed portion") ax.fill_between([], [], [], color='gray', label="predicted portion") ax.legend(loc="upper left") else: if row == 0: ax.legend(loc="upper left", fontsize=8 * (10 / num_lineages) ** 0.8) elif row == 1: ax.plot([], "k--", color=light, lw=1, label="relative #samples") ax.plot([], "k-", color=light, lw=0.8, label="relative #cases") ax.plot([], lw=0, marker='o', markersize=3, color='gray', label="observed portion") ax.fill_between([], [], [], color='gray', label="predicted portion") ax.legend(loc="upper left",) # Setup the date axis correctly ax.xaxis.set_major_locator(matplotlib.dates.MonthLocator()) ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter("%b %Y")) plt.xticks(rotation=90) plt.subplots_adjust(hspace=0) for filename in filenames: plt.savefig(filename, bbox_inches='tight') # - # ## Generate all Forecasting Plots if False: for model_key in list(fits.keys()): fit_n = fits[model_key] plot_forecast2( fit_n, input_dataset, queries=["England", "USA / Ma", "Brazil"], num_lineages=10, verbose=False, filenames = [f'{forecast_dir_prefix}/backtesting_day_{model_key[9]}.png'] ) # ## Generate Selected Forecast Plots # + k = list(fits.keys())[4] print(k[4]) fit_n = fits[k] plot_forecast2( fit_n, input_dataset, queries=["England"], num_lineages=14, verbose=False, additional_lineages = ['AY.4'], filenames = [f'{forecast_dir_prefix}/backtesting_day_{k[9]}_early_prediction_england.png'], figsize_x = 528 / 752 * 8, ) # - k = list(fits.keys())[4] print(k[9]) fit_n = fits[k] plot_forecast2( fit_n, input_dataset, queries=["England"], num_lineages=13, verbose=False, additional_lineages = ['BA.1'], filenames = [f'{forecast_dir_prefix}/backtesting_day_{k[9]}_early_prediction_england.png'], figsize_x = 8 ) # ## Country Specific Ones k = list(fits.keys())[len( list(fits.keys()))-1] print(k[4]) fit_n = fits[k] plot_forecast2( fit_n, input_dataset, queries=["USA","France","England","Brazil","Australia","Russia"], num_lineages=20, 
    verbose=False,
    additional_lineages=['BA.1'],
    filenames=[f'{forecast_dir_prefix}/backtesting_day_{k[9]}_early_prediction_USA_France_England_Brazil_Australia_Russia.png'],
    legend_out=True
)

k = list(fits.keys())[-1]  # last (most recent) fit
print(k[9])
fit_n = fits[k]
plot_forecast2(
    fit_n,
    input_dataset,
    queries=["Asia", "Europe", "Africa"],
    num_lineages=13,
    verbose=False,
    additional_lineages=['BA.1'],
    filenames=[f'{forecast_dir_prefix}/backtesting_day_{k[9]}_early_prediction_Asia_Europe_Africa.png']
)

# ### Evaluate the forecast

def evaluate_forecast2(fit, input_dataset, queries, num_lineages=10, filenames=[], verbose=False, data_region=None):
    # Convert queries to a list if only a single string is given
    if isinstance(queries, str):
        queries = [queries]

    # Get dimensions of the model fit (T, P, L); these are probabilities
    n_model_periods, n_model_places, n_model_lineages = fit['mean']['probs'].shape
    if verbose:
        print('---')
        print(f'n_model_periods: {n_model_periods}')
        print(f'n_model_places: {n_model_places}')
        print(f'n_model_lineages: {n_model_lineages}')

    # Get dimensions of weekly_cases (T, P); these are JHU counts
    weekly_cases_fit = fit['weekly_cases']
    n_cases_periods, n_cases_places = weekly_cases_fit.shape
    if verbose:
        print('---')
        print(f'n_cases_periods: {n_cases_periods}')
        print(f'n_cases_places: {n_cases_places}')

    # Some checks
    assert n_cases_places == n_model_places
    assert n_model_periods > n_cases_periods

    # Calculate how many periods are forecasted (i.e. are beyond the input to the model)
    n_forecast_steps = n_model_periods - n_cases_periods
    if verbose:
        print(f'n_forecast_steps: {n_forecast_steps}')

    # Weekly case counts by time, place and clade, obtained from the fit
    weekly_clades_fit = fit['weekly_clades']  # T, P, C
    if verbose:
        print('---')
        print(f'weekly_clades_fit shape: {weekly_clades_fit.shape}')

    # Weekly case counts by time, place and clade, obtained from the input data.
    # This has more time points and more regions than the one from the fit.
    weekly_clades_data = input_dataset['weekly_clades']
    if verbose:
        print('---')
        print(f'weekly_clades_data shape: {weekly_clades_data.shape}')

    # Mapping from clades to lineages, a tensor of indexes.
    # This is valid for both the fit and the input_dataset.
    clade_id_to_lineage_id = input_dataset['clade_id_to_lineage_id']
    if verbose:
        print('---')
        print(f'clade_id_to_lineage_id length: {len(clade_id_to_lineage_id)}')
    # We don't have clade_id_to_lineage_id in the fit -- it should in principle be the same

    # Summarize the counts of the weekly_clades (from data or fit) to the number of lineages in the model
    weekly_lineages_data = weekly_clades_to_lineages(weekly_clades_data, clade_id_to_lineage_id, n_model_lineages)
    weekly_lineages_fit = weekly_clades_to_lineages(weekly_clades_fit, clade_id_to_lineage_id, n_model_lineages)

    # Get the probs
    probs = fit['mean']['probs']
    # probs = plusminus(fit['mean']['probs'], fit['std']['probs'])  # [3, T, P, L]

    # Expand weekly_cases_fit (JHU counts) from the model to cover the steps we are forecasting
    # padding = 1 + weekly_cases_fit.mean(0, keepdim=True).expand(n_forecast_steps, -1)
    # weekly_cases_fit_ = torch.cat([weekly_cases_fit, padding], 0)

    # Generate predictions
    # Note: for the evaluation we may be better off comparing probabilities, not counts
    # predicted = probs * weekly_cases_fit_[..., None]

    # This is an array of strings listing the locations for the data
    location_id_inv_data = input_dataset['location_id_inv']
    if verbose:
        print('---')
        print(f'location_id_inv_data length: {len(location_id_inv_data)}')

    # This is an array of strings listing the locations for the fit
    location_id_inv_fit = fit['location_id_inv']
    if verbose:
        print('---')
        print(f'location_id_inv_fit length: {len(location_id_inv_fit)}')

    # Get the location indexes that we want to keep, based on the query, for the fit
    ids_fit = torch.tensor([i for i, name in enumerate(location_id_inv_fit) if any(q in name for q in queries)])

    # Subset weekly_lineages_fit to those locations, sum over time and place, and get the indices in descending order
    lineage_ids_fit = weekly_lineages_fit[:, ids_fit].sum([0, 1]).sort(-1, descending=True).indices
    if verbose:
        print('---')
        print(f'lineage_ids_fit shape: {lineage_ids_fit.shape}')

    # Keep only the top num_lineages lineages we want to plot
    lineage_ids_fit = lineage_ids_fit[:num_lineages]

    # This is problematic without fixing the above permutation
    # TODO: add an assert that they are the same set / eliminate this code
    lineage_ids_data = lineage_ids_fit[:num_lineages]

    # These are the lineage labels; we can get them from either the fit or the dataset.
    # We assume that they are identical, and we assert this below.
    lineage_id_inv_fit = fit['lineage_id_inv']
    lineage_id_inv_data = input_dataset['lineage_id_inv']
    assert lineage_id_inv_fit == lineage_id_inv_data

    # Get the locations shared between the full dataset and the fit dataset
    common_regions = list(set(location_id_inv_fit).intersection(set(location_id_inv_data)))
    if data_region is not None:
        common_regions = list(set(common_regions).intersection(set(data_region)))

    # Get the indexes of these common regions in each set
    common_regions_fit_inv_map = []
    common_regions_data_inv_map = []
    for r in common_regions:
        common_regions_fit_inv_map.append(location_id_inv_fit.index(r))
        common_regions_data_inv_map.append(location_id_inv_data.index(r))

    # We want to compare empirical and predicted probabilities for the forecast interval
    probs = probs[n_cases_periods:, common_regions_fit_inv_map, :]

    # Subset observed counts to the relevant periods and regions
    obs_data = weekly_lineages_data[n_cases_periods:n_cases_periods + n_forecast_steps, common_regions_data_inv_map, :]
    empirical_probs = obs_data / obs_data.sum(-1, True).clamp_(min=1e-9)

    # Truncate to available data
    probs = probs[:empirical_probs.shape[0]]

    # Calculate errors
    l1_error = (probs - empirical_probs).abs().sum([-1, -2]) / probs.shape[-2]
    l2_error = (probs - empirical_probs).pow(2).sum([-1, -2]).sqrt() / probs.shape[-2]

    # TODO: consider Spearman correlation on the probabilities (averaged over time), and precision at k

    return {
        'L1_error': l1_error,
        'L2_error': l2_error,
    }

def generate_forecast_eval(fits, input_dataset, data_region=None, queries=None):
    model_keys = list(fits.keys())
    if not queries:
        queries = input_dataset['location_id_inv']
    forecast_start_days = []
    period_forecast_ahead = []
    l1_error = []
    l2_error = []
    period_length = 14  # days per forecast period
    for key in model_keys:
        forecast_start_day = key[9]
        fit_n = fits[key]
        # Get the forecast error for all independent location ids
        forecast_error = evaluate_forecast2(
            fit_n,
            input_dataset,
            queries=queries,
            num_lineages=100,
            data_region=data_region,
            verbose=False)
        n_periods_forecast = len(forecast_error['L1_error'].tolist())
        forecast_start_days.extend([forecast_start_day] * n_periods_forecast)
        period_forecast_ahead.extend(list(range(1, n_periods_forecast + 1)))
        l1_error.extend(forecast_error['L1_error'].tolist())
        l2_error.extend(forecast_error['L2_error'].tolist())
    df1 = pd.DataFrame({
        'forecast_start_days': forecast_start_days,
        'period_forecast_ahead': period_forecast_ahead,
        'l1_error': l1_error,
        'l2_error': l2_error})
    df1['day_of_forecast'] = df1['forecast_start_days'] + df1['period_forecast_ahead'] * period_length
    return df1

all_region_forecast = generate_forecast_eval(fits, input_dataset)

matplotlib.rcParams['figure.figsize'] = [6, 4]
ax = sns.boxplot(x="period_forecast_ahead", y="l1_error", data=all_region_forecast, palette='rainbow')
ax.set(xlabel='2-week period forecast ahead', ylabel="L1 Error")
ax.set_ylim([0.0, 2.0])
plt.savefig('paper/backtesting/L1_error_barplot_all.png')

# +
## Top 100 region forecasting
# -

# Get the most-covered regions
top_region_idx = input_dataset['weekly_clades'].sum([0, 2]).sort(-1, descending=True).indices[:100].tolist()
regions = list(input_dataset['location_id_inv'][x] for x in top_region_idx)

top_region_forecast = generate_forecast_eval(fits, input_dataset, data_region=regions)

ax = sns.boxplot(x="period_forecast_ahead", y="l1_error", data=top_region_forecast, palette='rainbow')
ax.set(xlabel='2-week period forecast ahead', ylabel="L1 Error")
ax.set_ylim([0.0, 2.0])
plt.savefig('paper/backtesting/L1_error_barplot_top100.png')

# +
## Top 100-1000 region forecasting
# -

# Get the next-most-covered regions
top_region_idx = input_dataset['weekly_clades'].sum([0, 2]).sort(-1, descending=True).indices[100:1000].tolist()
regions = list(input_dataset['location_id_inv'][x] for x in top_region_idx)

top_region_forecast = generate_forecast_eval(fits, input_dataset, data_region=regions)

ax = sns.boxplot(x="period_forecast_ahead", y="l1_error", data=top_region_forecast, palette='rainbow')
ax.set(xlabel='2-week period forecast ahead', ylabel="L1 Error")
ax.set_ylim([0.0, 2.0])
plt.savefig('paper/backtesting/L1_error_barplot_top100-1000.png')

# +
## Other region forecasting
# -

# Get the remaining regions
bottom_region_idx = input_dataset['weekly_clades'].sum([0, 2]).sort(-1, descending=True).indices[100:].tolist()
regions = list(input_dataset['location_id_inv'][x] for x in bottom_region_idx)

bottom_region_forecast = generate_forecast_eval(fits, input_dataset, data_region=regions)

ax = sns.boxplot(x="period_forecast_ahead", y="l1_error", data=bottom_region_forecast, palette='rainbow')
ax.set(xlabel='2-week period forecast ahead', ylabel="L1 Error")
ax.set_ylim([0.0, 2.0])
plt.savefig('paper/backtesting/L1_error_barplot_other.png')

# ## Evaluation of forecasting accuracy
#
# - What are we trying to do? For a given region and for all models, get a percentage of how often we predict the correct strain n intervals ahead

def evaluate_forecast3(fit, input_dataset, queries, num_lineages=10, verbose=False, data_region=None):
    # Convert queries to a list if only a single string is given
    if isinstance(queries, str):
        queries = [queries]

    # Get dimensions of the model fit (T, P, L); these are probabilities
    n_model_periods, n_model_places, n_model_lineages = fit['mean']['probs'].shape
    if verbose:
        print('---')
        print(f'n_model_periods: {n_model_periods}')
        print(f'n_model_places: {n_model_places}')
        print(f'n_model_lineages: {n_model_lineages}')

    # Get dimensions of weekly_cases (T, P); these are JHU counts
    weekly_cases_fit = fit['weekly_cases']
    n_cases_periods, n_cases_places = weekly_cases_fit.shape
    if verbose:
        print('---')
        print(f'n_cases_periods: {n_cases_periods}')
        print(f'n_cases_places: {n_cases_places}')

    # Some checks
    assert n_cases_places == n_model_places
    assert n_model_periods > n_cases_periods

    # Calculate how many periods are forecasted (i.e. are beyond the input to the model)
    n_forecast_steps = n_model_periods - n_cases_periods
    if verbose:
        print(f'n_forecast_steps: {n_forecast_steps}')

    # Weekly case counts by time, place and clade, obtained from the fit
    weekly_clades_fit = fit['weekly_clades']  # T, P, C
    if verbose:
        print('---')
        print(f'weekly_clades_fit shape: {weekly_clades_fit.shape}')

    # Weekly case counts by time, place and clade, obtained from the input data.
    # This has more time points and more regions than the one from the fit.
    weekly_clades_data = input_dataset['weekly_clades']
    if verbose:
        print('---')
        print(f'weekly_clades_data shape: {weekly_clades_data.shape}')

    # Mapping from clades to lineages, a tensor of indexes.
    # This is valid for both the fit and the input_dataset.
    clade_id_to_lineage_id = input_dataset['clade_id_to_lineage_id']
    if verbose:
        print('---')
        print(f'clade_id_to_lineage_id length: {len(clade_id_to_lineage_id)}')
    # We don't have clade_id_to_lineage_id in the fit -- it should in principle be the same

    # Summarize the counts of the weekly_clades (from data or fit) to the number of lineages in the model
    weekly_lineages_data = weekly_clades_to_lineages(weekly_clades_data, clade_id_to_lineage_id, n_model_lineages)
    weekly_lineages_fit = weekly_clades_to_lineages(weekly_clades_fit, clade_id_to_lineage_id, n_model_lineages)

    # Get the probs
    probs = fit['mean']['probs']
    # probs = plusminus(fit['mean']['probs'], fit['std']['probs'])  # [3, T, P, L]

    # This is an array of strings listing the locations for the data
    location_id_inv_data = input_dataset['location_id_inv']
    if verbose:
        print('---')
        print(f'location_id_inv_data length: {len(location_id_inv_data)}')

    # This is an array of strings listing the locations for the fit
    location_id_inv_fit = fit['location_id_inv']
    if verbose:
        print('---')
        print(f'location_id_inv_fit length: {len(location_id_inv_fit)}')

    # Get the location indexes that we want to keep, based on the query, for the fit
    ids_fit = torch.tensor([i for i, name in enumerate(location_id_inv_fit) if any(q in name for q in queries)])

    # Subset weekly_lineages_fit to those locations, sum over time and place, and get the indices in descending order
    lineage_ids_fit = weekly_lineages_fit[:, ids_fit].sum([0, 1]).sort(-1, descending=True).indices
    if verbose:
        print('---')
        print(f'lineage_ids_fit shape: {lineage_ids_fit.shape}')

    # Keep only the top num_lineages lineages
    lineage_ids_fit = lineage_ids_fit[:num_lineages]

    # This is problematic without fixing the above permutation
    # TODO: add an assert that they are the same set / eliminate this code
    lineage_ids_data = lineage_ids_fit[:num_lineages]

    # These are the lineage labels; we can get them from either the fit or the dataset.
    # We assume that they are identical, and we assert this below.
    lineage_id_inv_fit = fit['lineage_id_inv']
    lineage_id_inv_data = input_dataset['lineage_id_inv']
    assert lineage_id_inv_fit == lineage_id_inv_data

    # Get the locations shared between the full dataset and the fit dataset
    common_regions = list(set(location_id_inv_fit).intersection(set(location_id_inv_data)))
    if data_region is not None:
        common_regions = list(set(common_regions).intersection(set(data_region)))

    # Get the indexes of these common regions in each set
    common_regions_fit_inv_map = []
    common_regions_data_inv_map = []
    for r in common_regions:
        common_regions_fit_inv_map.append(location_id_inv_fit.index(r))
        common_regions_data_inv_map.append(location_id_inv_data.index(r))

    # We want to compare empirical and predicted probabilities for the forecast interval
    probs = probs[n_cases_periods:, common_regions_fit_inv_map, :]

    # Subset observed counts to the relevant periods and regions
    obs_data = weekly_lineages_data[n_cases_periods:n_cases_periods + n_forecast_steps, common_regions_data_inv_map, :]
    empirical_probs = obs_data / obs_data.sum(-1, True).clamp_(min=1e-9)

    # Truncate to available data
    probs = probs[:empirical_probs.shape[0] - 1]

    return {
        'probs': probs,
        'empirical_probs': empirical_probs,
    }

def generate_forecast_eval_percent(fits, input_dataset, data_region=None, queries=None):
    model_keys = list(fits.keys())
    match_4wk = []
    match_8wk = []
    if queries is None:
        queries = input_dataset['location_id_inv']
    for key in model_keys:
        forecast_start_day = key[9]
        fit_n = fits[key]
        # Get predicted and empirical probabilities for all independent location ids
        probs_dict = evaluate_forecast3(
            fit_n,
            input_dataset,
            queries=queries,
            num_lineages=101,
            data_region=data_region,
            verbose=False)
        try:
            period_index_4wk = 1
            predicted_4wk = probs_dict['probs'][period_index_4wk, :].sum(-2).argmax(0).item()
            actual_4wk = probs_dict['empirical_probs'][period_index_4wk, :].sum(-2).argmax(0).item()
            period_index_8wk = 3
            predicted_8wk = probs_dict['probs'][period_index_8wk, :].sum(-2).argmax(0).item()
            actual_8wk = probs_dict['empirical_probs'][period_index_8wk, :].sum(-2).argmax(0).item()
            match_4wk.append(predicted_4wk == actual_4wk)
            match_8wk.append(predicted_8wk == actual_8wk)
        except Exception:
            # This fit does not extend far enough into the forecast horizon
            pass
    return {
        'match_4wk': match_4wk,
        'match_8wk': match_8wk,
    }

# ### USA

query = 'USA'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100

# ### France

query = 'France'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100

# ### England

query = 'England'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100

# ### Brazil

query = 'Brazil'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100

# ### Australia

query = 'Australia'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100

# ### Russia

query = 'Russia'
regions = list(x for x in input_dataset['location_id'].keys() if query in x)
selected_region_forecast = generate_forecast_eval_percent(fits, input_dataset, data_region=regions)

torch.tensor(selected_region_forecast['match_4wk']).sum() / len(selected_region_forecast['match_4wk']) * 100

torch.tensor(selected_region_forecast['match_8wk']).sum() / len(selected_region_forecast['match_8wk']) * 100
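The two metrics above boil down to a handful of tensor operations. A self-contained numpy sketch with made-up values (the notebook itself uses torch tensors; `np.abs(...).sum(axis=(-1, -2))` and `.sum(axis=0).argmax()` mirror the `.abs().sum([-1, -2])` and `.sum(-2).argmax(0)` calls above):

```python
import numpy as np

# Synthetic (T, P, L) tensors: 2 forecast periods, 2 places, 3 lineages.
probs = np.array([[[0.5, 0.3, 0.2], [0.6, 0.3, 0.1]],
                  [[0.4, 0.4, 0.2], [0.5, 0.4, 0.1]]])
empirical = np.array([[[0.6, 0.2, 0.2], [0.5, 0.4, 0.1]],
                      [[0.5, 0.3, 0.2], [0.4, 0.5, 0.1]]])

# L1 error per period: absolute probability differences summed over
# places and lineages, averaged over the number of places (axis -2).
l1_error = np.abs(probs - empirical).sum(axis=(-1, -2)) / probs.shape[-2]

# Top-strain accuracy for one period: does the argmax lineage
# (after summing over places) agree between prediction and observation?
period = 0
predicted = probs[period].sum(axis=0).argmax()
actual = empirical[period].sum(axis=0).argmax()
match = predicted == actual
```

With these numbers every place contributes 0.2 of absolute difference per period, so `l1_error` is `[0.2, 0.2]`, and both argmaxes pick lineage 0, so `match` is true.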
notebooks/mutrans_backtesting.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 2
#     language: python
#     name: python2
# ---

# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt

X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
# -

# Use np.linalg.inv to invert and .dot for the inner product
x_b = np.c_[np.ones((100, 1)), X]
theta_best = np.linalg.inv(x_b.T.dot(x_b)).dot(x_b.T).dot(y)

# Inspect the result of the normal equation
theta_best

X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new]
y_predict = X_new_b.dot(theta_best)
y_predict

plt.plot(X_new, y_predict, 'r-')
plt.plot(X, y, 'b.')
plt.axis([0, 2, 0, 15])
plt.show()

# The normal equation above is equivalent to the following
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_

lin_reg.predict(X_new)

# +
# Quick implementation of batch gradient descent
eta = 0.1  # learning rate
n_iterations = 1000
m = 100

theta = np.random.randn(2, 1)

for iteration in range(n_iterations):
    gradients = 2.0 / m * x_b.T.dot(x_b.dot(theta) - y)
    theta = theta - eta * gradients

theta

# +
# Stochastic gradient descent
n_epochs = 50
t0, t1 = 5, 50

def learning_schedule(t):
    return t0 / (t + t1)

theta = np.random.randn(2, 1)

for epoch in range(n_epochs):
    for i in range(m):
        random_index = np.random.randint(m)
        xi = x_b[random_index:random_index + 1]
        yi = y[random_index:random_index + 1]
        gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
        eta = learning_schedule(epoch * m + i)
        theta = theta - eta * gradients
# -

theta

# +
from sklearn.linear_model import SGDRegressor

sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1)  # n_iter in older scikit-learn versions
sgd_reg.fit(X, y.ravel())
# -

sgd_reg.intercept_, sgd_reg.coef_

# ## Polynomial regression
# To fit nonlinear data with a linear model, a simple approach is to add powers of each feature as new features, then train a linear model on this extended feature set.

m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, 'b.')
plt.show()

# Use PolynomialFeatures to transform the training data, adding the square of each feature (degree-2 polynomial) as a new feature
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)

X[0]

X_poly[0]

# +
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
# So the model estimates y_hat = 0.487 x^2 + 1.011 x + 2.048

# +
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def plot_learning_curves(model, X, y):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
    train_errors, val_errors = [], []
    for m in range(1, len(X_train)):
        model.fit(X_train[:m], y_train[:m])
        y_train_predict = model.predict(X_train[:m])
        y_val_predict = model.predict(X_val)
        train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
        val_errors.append(mean_squared_error(y_val, y_val_predict))
    plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')
    plt.plot(np.sqrt(val_errors), 'b-', linewidth=3, label='val')
# -

lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)

# +
# Using a Pipeline
from sklearn.pipeline import Pipeline

polynomial_regression = Pipeline([
    ('poly_features', PolynomialFeatures(degree=10, include_bias=False)),
    ('sgd_reg', LinearRegression()),
])

plot_learning_curves(polynomial_regression, X, y)
# -

# Two important differences:
# - the error on the training data is far lower than for the linear regression model
# - there is a gap between the two curves, which means the model performs much better on the training data than on the validation set (overfitting)

from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver='cholesky')
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])

# Using stochastic gradient descent with an L2 penalty
sgd_reg = SGDRegressor(penalty='l2')
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
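As a sanity check on the normal equation theta = (X^T X)^{-1} X^T y used at the top of this notebook: on noiseless data generated as y = 4 + 3x it should recover the intercept and slope exactly, up to floating-point error. A self-contained sketch, independent of the cells above:

```python
import numpy as np

# Noiseless data: y = 4 + 3x, so the normal equation should recover
# theta = [4, 3] with no residual error.
X = np.linspace(0, 2, 100).reshape(-1, 1)
y = 4 + 3 * X

x_b = np.c_[np.ones((100, 1)), X]  # add the bias column
theta = np.linalg.inv(x_b.T.dot(x_b)).dot(x_b.T).dot(y)

print(theta.ravel())  # approximately [4. 3.]
```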
linear/linear.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Identifying special matrices
# ## Instructions
# In this assignment, you shall write a function that will test if a 4×4 matrix is singular, i.e. to determine if an inverse exists, before calculating it.
#
# You shall use the method of converting a matrix to echelon form, and testing if this fails by leaving zeros that can't be removed on the leading diagonal.
#
# Don't worry if you've not coded before, a framework for the function has already been written.
# Look through the code, and you'll be instructed where to make changes.
# We'll do the first two rows, and you can use this as a guide to do the last two.
#
# ### Matrices in Python
# In the *numpy* package in Python, matrices are indexed using zero for the top-most row and left-most column.
# I.e., the matrix structure looks like this:
# ```python
# A[0, 0]  A[0, 1]  A[0, 2]  A[0, 3]
# A[1, 0]  A[1, 1]  A[1, 2]  A[1, 3]
# A[2, 0]  A[2, 1]  A[2, 2]  A[2, 3]
# A[3, 0]  A[3, 1]  A[3, 2]  A[3, 3]
# ```
# You can access the value of each element individually using,
# ```python
# A[n, m]
# ```
# which will give the n'th row and m'th column (starting with zero).
# You can also access a whole row at a time using,
# ```python
# A[n]
# ```
# which you will see will be useful when calculating linear combinations of rows.
#
# A final note - Python is sensitive to indentation.
# All the code you should complete will be at the same level of indentation as the instruction comment.
#
# ### How to submit
# Edit the code in the cell below to complete the assignment.
# Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.
#
# Please don't change any of the function names, as these will be checked by the grading script.
#
# If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!

# +
# GRADED FUNCTION
import numpy as np

# Our function will go through the matrix, replacing each row in order, turning it into echelon form.
# If at any point it fails because it can't put a 1 in the leading diagonal,
# we will return the value True; otherwise, we will return False.
# There is no need to edit this function.
def isSingular(A):
    B = np.array(A, dtype=np.float_)  # Make B a copy of A, since we're going to alter its values.
    try:
        fixRowZero(B)
        fixRowOne(B)
        fixRowTwo(B)
        fixRowThree(B)
    except MatrixIsSingular:
        return True
    return False

# This next line defines our error flag, for when things go wrong if the matrix is singular.
# There is no need to edit this line.
class MatrixIsSingular(Exception): pass

# For Row Zero, all we require is that the first element is equal to 1.
# We'll divide the row by the value of A[0, 0].
# This will get us in trouble though if A[0, 0] equals 0, so first we'll test for that,
# and if this is true, we'll add one of the lower rows to the first one before the division.
# We'll repeat the test going down each lower row until we can do the division.
# There is no need to edit this function.
def fixRowZero(A):
    if A[0,0] == 0:
        A[0] = A[0] + A[1]
    if A[0,0] == 0:
        A[0] = A[0] + A[2]
    if A[0,0] == 0:
        A[0] = A[0] + A[3]
    if A[0,0] == 0:
        raise MatrixIsSingular()
    A[0] = A[0] / A[0,0]
    return A

# First we'll set the sub-diagonal elements to zero, i.e. A[1,0].
# Next we want the diagonal element to be equal to one.
# We'll divide the row by the value of A[1, 1].
# Again, we need to test if this is zero.
# If so, we'll add a lower row and repeat setting the sub-diagonal elements to zero.
# There is no need to edit this function.
def fixRowOne(A):
    A[1] = A[1] - A[1,0] * A[0]
    if A[1,1] == 0:
        A[1] = A[1] + A[2]
        A[1] = A[1] - A[1,0] * A[0]
    if A[1,1] == 0:
        A[1] = A[1] + A[3]
        A[1] = A[1] - A[1,0] * A[0]
    if A[1,1] == 0:
        raise MatrixIsSingular()
    A[1] = A[1] / A[1,1]
    return A

# This is the first function that you should complete.
# Follow the instructions inside the function at each comment.
def fixRowTwo(A):
    # Insert code below to set the sub-diagonal elements of row two to zero (there are two of them).
    A[2] = A[2] - A[2,0] * A[0]
    A[2] = A[2] - A[2,1] * A[1]
    # Next we'll test that the diagonal element is not zero.
    if A[2,2] == 0:
        # Insert code below that adds a lower row to row 2.
        A[2] = A[2] + A[3]
        # Now repeat your code which sets the sub-diagonal elements to zero.
        A[2] = A[2] - A[2,0] * A[0]
        A[2] = A[2] - A[2,1] * A[1]
    if A[2,2] == 0:
        raise MatrixIsSingular()
    # Finally set the diagonal element to one by dividing the whole row by that element.
    A[2] = A[2] / A[2,2]
    return A

# You should also complete this function.
# Follow the instructions inside the function at each comment.
def fixRowThree(A):
    # Insert code below to set the sub-diagonal elements of row three to zero.
    A[3] = A[3] - A[3,0] * A[0]
    A[3] = A[3] - A[3,1] * A[1]
    A[3] = A[3] - A[3,2] * A[2]
    # Complete the if statement to test if the diagonal element is zero.
    if A[3,3] == 0:
        raise MatrixIsSingular()
    # Transform the row to set the diagonal element to one.
    A[3] = A[3] / A[3,3]
    return A
# -

# ## Test your code before submission
# To test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).
# You can then use the code below to test out your function.
# You don't need to submit this cell; you can edit and run it as much as you like.
#
# Try out your code on tricky test cases!

A = np.array([
        [2, 0, 0, 0],
        [0, 3, 0, 0],
        [0, 0, 4, 4],
        [0, 0, 5, 5]
    ], dtype=np.float_)
isSingular(A)

A = np.array([
        [0, 7, -5, 3],
        [2, 8, 0, 4],
        [3, 12, 0, 5],
        [1, 3, 1, 3]
    ], dtype=np.float_)
fixRowZero(A)

fixRowOne(A)

fixRowTwo(A)

fixRowThree(A)
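An independent way to cross-check `isSingular` (not part of the assignment's grading, and using numpy's built-in rank computation rather than the echelon-form code above): a square matrix is singular exactly when its rank is less than its dimension.

```python
import numpy as np

# Rows 2 and 3 are proportional ([0, 0, 4, 4] and [0, 0, 5, 5]),
# so this matrix has rank 3 < 4 and is therefore singular.
A = np.array([[2, 0, 0, 0],
              [0, 3, 0, 0],
              [0, 0, 4, 4],
              [0, 0, 5, 5]], dtype=float)

singular = np.linalg.matrix_rank(A) < A.shape[0]
print(singular)  # True
```

This agrees with the first test case above, where `isSingular(A)` returns `True`.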
Mathematics for Machine Learning/Linear Algebra/1 Identifying Special Matrices.ipynb