<font color="red">Delete the generated test files. Be careful not to delete any other files by mistake!</font> If you run the Import_Test() above repeatedly, you will find that the GIScript_Test.udb and GIScript_Test.udd files keep growing, yet opening the UDB file shows only one copy of the data. Why? * Because the UDB file is stored incrementally: unused storage blocks are only reclaimed by SQLite's storage-compaction step.
!rm ../data/GIScript_Test.*
geospatial/giscript/giscript_quickstart.ipynb
supergis/git_notebook
gpl-3.0
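The compaction step referred to above can be demonstrated with an ordinary SQLite database. This is a standalone sketch (hypothetical file name, a plain SQLite file rather than a UDB container): deleting rows only marks pages as free, and the file shrinks only after VACUUM.

```python
import os
import sqlite3

# Hypothetical demo database; deleting rows marks pages free, VACUUM reclaims them.
db = "vacuum_demo.sqlite"
if os.path.exists(db):
    os.remove(db)

conn = sqlite3.connect(db)
conn.execute("CREATE TABLE blobs (data BLOB)")
conn.executemany("INSERT INTO blobs VALUES (?)",
                 [(b"x" * 10000,) for _ in range(200)])
conn.commit()
size_full = os.path.getsize(db)

conn.execute("DELETE FROM blobs")   # pages are only marked free; file size is unchanged
conn.commit()
size_after_delete = os.path.getsize(db)

conn.execute("VACUUM")              # rebuild the file and return free pages to the OS
size_after_vacuum = os.path.getsize(db)
conn.close()
os.remove(db)

print(size_full, size_after_delete, size_after_vacuum)
```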
Check the directory again to see whether the files still exist.
!ls -l -h ../data/GIScript_Test.*
Search for a reference genome: the Homo sapiens reference genome sequence. We need two reference genomes: one as a FASTA file per chromosome, and one, containing all contigs, that we will use exclusively for the mapping. Including the contigs in the reference genome increases the mapping specificity.
species = 'Homo sapiens'
taxid = '9606'
genome = 'GRCh38.p10'
refseq, dir1, dir2, dir3 = 'GCF', '000', '001', '405'
genbank = 'GCF_000001405.36'
sumurl = 'ftp://ftp.ncbi.nlm.nih.gov/genomes/all/{0}/{1}/{2}/{3}/{4}_{5}/{4}_{5}_assembly_report.txt'.format(
    refseq, dir1, dir2, dir3, genbank, genome)
crmurl = 'ht...
Notebooks/A1-Preparation_reference_genome.ipynb
4DGenome/Chromosomal-Conformation-Course
gpl-3.0
Download from the NCBI the list of chromosome/contigs
! wget -q $sumurl -O chromosome_list.txt
! head chromosome_list.txt
dirname = 'genome/'
! mkdir -p $dirname
For each contig/chromosome download the corresponding FASTA file from NCBI
contig = []
for line in open('chromosome_list.txt'):
    if line.startswith('#'):
        continue
    seq_name, seq_role, assigned_molecule, _, genbank, _, refseq, _ = line.split(None, 7)
    if seq_role == 'assembled-molecule':
        name = 'chr%s.fasta' % assigned_molecule
    else:
        name = 'chr%s_%s.fasta'...
Concatenate all contigs/chromosomes into a single file
contig_file = open('genome/Homo_sapiens_contigs.fa', 'w')
for molecule in contig:
    for line in open('genome/' + molecule):
        # replace the header of the sequence in the fasta file
        if line == '\n':
            continue
        if line.startswith('>'):
            line = '>' + molecule[3:].replace('.fasta...
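A minimal self-contained sketch of this concatenation step, with hypothetical file names and toy sequences, rewriting each FASTA header to a clean name derived from the file:

```python
import os
import tempfile

# Hedged sketch: concatenate per-chromosome FASTA files, rewriting each header.
tmpdir = tempfile.mkdtemp()
names = ["chr1.fasta", "chr2.fasta"]
for name, seq in zip(names, ["ACGT", "TTGCA"]):
    with open(os.path.join(tmpdir, name), "w") as fh:
        fh.write(">original verbose header\n%s\n" % seq)

out_path = os.path.join(tmpdir, "contigs.fa")
with open(out_path, "w") as out:
    for name in names:
        for line in open(os.path.join(tmpdir, name)):
            if line == "\n":
                continue                      # skip blank lines
            if line.startswith(">"):
                # replace the header by a clean one derived from the file name
                line = ">" + name.replace(".fasta", "") + "\n"
            out.write(line)

print(open(out_path).read())
```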
Remove all the other files (with single chromosome/contig)
! rm -f genome/*.fasta
Creation of an index file for GEM mapper
! gem-indexer -t 8 -i genome/Homo_sapiens_contigs.fa -o genome/Homo_sapiens_contigs
For the sake of convenience, one first removes the information bar at the bottom, in order to retain only the region of the image with the blobs of interest. This operation is just an array slicing removing the last rows, for which we can leverage the nice syntax of NumPy's slicing. In order to determine how many rows...
phase_separation = im[:947]
plt.imshow(phase_separation, cmap='gray')
np.nonzero(np.all(im < 0.1 * im.max(), axis=1))[0][0]
scikit_image/lectures/adv5_blob_segmentation.v3.ipynb
M-R-Houghton/euroscipy_2015
mit
Image contrast, histogram and thresholding In order to separate blobs from the background, a simple idea is to use the gray values of pixels: blobs are typically darker than the background. In order to check this impression, let us look at the histogram of pixel values of the image.
from skimage import exposure
histogram = exposure.histogram(phase_separation)
plt.plot(histogram[1], histogram[0])
plt.xlabel('gray value')
plt.ylabel('number of pixels')
plt.title('Histogram of gray values')
Two peaks are clearly visible in the histogram, but they have a strong overlap. What happens if we try to threshold the image at a value that separates the two peaks? For an automatic computation of the thresholding values, we use Otsu's thresholding, an operation that chooses the threshold in order to have a good sepa...
from skimage import filters
threshold = filters.threshold_otsu(phase_separation)
print(threshold)
fig, ax = plt.subplots(ncols=2, figsize=(12, 8))
ax[0].imshow(phase_separation, cmap='gray')
ax[0].contour(phase_separation, [threshold])
ax[1].imshow(phase_separation < threshold, cmap='gray')
Image denoising In order to improve the thresholding, we will try first to filter the image so that gray values are more uniform inside the two phases, and more separated. Filters used to this aim are called denoising filters, since their action amounts to reducing the intensity of the noise on the image. Zooming on a ...
plt.imshow(phase_separation[390:410, 820:840], cmap='gray', interpolation='nearest')
plt.colorbar()
print(phase_separation[390:410, 820:840].std())
Several denoising filters average together pixels that are close to each other. If the noise is not spatially correlated, random noise fluctuations will be strongly attenuated by this averaging. One of the most common denoising filters is called the median filter: it replaces the value of a pixel by the median gray va...
from skimage import restoration
from skimage import filters
median_filtered = filters.median(phase_separation, np.ones((7, 7)))
plt.imshow(median_filtered, cmap='gray')
plt.imshow(median_filtered[390:410, 820:840], cmap='gray', interpolation='nearest')
plt.colorbar()
print(median_filtered[390:410, 820:84...
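skimage's filters.median does the heavy lifting in compiled code; a toy pure-Python 3x3 median filter shows the principle, replacing an isolated noise pixel by the local median while leaving uniform regions untouched:

```python
from statistics import median

# Tiny illustrative 3x3 median filter (borders left unchanged): each interior
# pixel is replaced by the median of its 3x3 neighbourhood, which suppresses
# isolated noise spikes while preserving edges better than a mean filter.
img = [
    [10, 10, 10, 10, 10],
    [10, 10, 255, 10, 10],   # one "salt" noise pixel
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
]

def median3x3(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbourhood = [image[i + di][j + dj]
                             for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = median(neighbourhood)
    return out

filtered = median3x3(img)
print(filtered[1][2])   # the noise spike is replaced by the local median
```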
Variations of gray levels inside zones that should be uniform are now smaller in range, and also spatially smoother. Plotting the histogram of the denoised image shows that the gray levels of the two phases are now better separated.
histo_median = exposure.histogram(median_filtered)
plt.plot(histo_median[1], histo_median[0])
As a consequence, Otsu thresholding now results in a much better segmentation.
plt.imshow(phase_separation[:300, :300], cmap='gray')
plt.contour(median_filtered[:300, :300], [filters.threshold_otsu(median_filtered)])
Going further: Otsu thresholding with adaptive threshold. For images with non-uniform illumination, it is possible to extend Otsu's method so that different thresholds are used in different regions of the image.
binary_image = median_filtered < filters.threshold_otsu(median_filtered)
plt.imshow(binary_image, cmap='gray')
Exercise: try other denoising filters Several other denoising filters are available in scikit-image. The bilateral filter uses similar ideas as for the median filter or the average filter: it averages a pixel with other pixels in a neighbourhood, but gives more weight to pixels for which the gray value is close to th...
blob_markers = median_filtered < 110
bg_markers = median_filtered > 160
markers = np.zeros_like(phase_separation)
markers[blob_markers] = 2
markers[bg_markers] = 1
from skimage import morphology
watershed = morphology.watershed(filters.sobel(median_filtered), markers)
plt.imshow(watershed, cmap='gray')
Image cleaning If we use the denoising + thresholding approach, the result of the thresholding is not completely what we want: small objects are detected, and small holes exist in the objects. Such defects of the segmentation can be amended, using the knowledge that no small holes should exist, and that blobs have a mi...
from skimage import morphology
only_large_blobs = morphology.remove_small_objects(binary_image, min_size=300)
plt.imshow(only_large_blobs, cmap='gray')
only_large = np.logical_not(morphology.remove_small_objects(
    np.logical_not(on...
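morphology.remove_small_objects is built on connected-component labelling; a pure-Python sketch of the same idea (4-connectivity flood fill, keeping only components of at least min_size pixels) looks like this:

```python
from collections import deque

# Illustrative pure-Python analogue of "remove small objects": label connected
# components of a binary image and keep only those with >= min_size pixels.
def remove_small(binary, min_size):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                # flood-fill one component (4-connectivity)
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y][x] = True
    return out

binary = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [0, 0, 0, 1]]   # a 4-pixel blob and a lone pixel
cleaned = remove_small(binary, min_size=2)
print(cleaned[0][0], cleaned[2][3])
```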
Measuring region properties The segmentation of foreground (objects) and background results in a binary image. In order to measure the properties of the different blobs, one must first attribute a different label to each blob (identified as a connected component of the foreground phase). Then, the utility function meas...
from skimage import measure
labels = measure.label(only_large)
plt.imshow(labels, cmap='spectral')
props = measure.regionprops(labels, phase_separation)
areas = np.array([prop.area for prop in props])
perimeters = np.array([prop.perimeter for prop in props])
plt.plot(np.sort(perimeters**2./areas), 'o')
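The quantity plotted above, perimeter**2 / area, is a scale-free shape measure: for an ideal disk it equals 4*pi regardless of radius, and larger values indicate less circular shapes (pixel-based measurements deviate somewhat from these continuum values):

```python
import math

# For a disk of radius r: (2*pi*r)**2 / (pi*r**2) = 4*pi, independent of r,
# so blobs with ratios near 4*pi ~ 12.57 are the most circular.
r = 5.0
circle_ratio = (2 * math.pi * r) ** 2 / (math.pi * r ** 2)

# A square of side s has (4*s)**2 / s**2 = 16, i.e. it is less circular.
s = 5.0
square_ratio = (4 * s) ** 2 / s ** 2

print(circle_ratio, square_ratio)
```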
Other examples Plotting labels on an image Measuring region properties Exercise: visualize an image where the color of a blob encodes its size (blobs of similar size have a similar color). Exercise: visualize an image where only the most circular blobs are represented. Hint: this involves some manipulations of NumPy a...
def remove_information_bar(image, value=0.1):
    value *= image.max()
    row_index = np.nonzero(np.all(image < value, axis=1))[0][0]
    return image[:row_index]

from scipy import stats

def clean_image(binary_image):
    labels = measure.label(binary_image)
    props = measure.regionprops(labels)
    areas = np.arra...
The glob module is very handy to retrieve lists of image file names using wildcard patterns.
from glob import glob
filelist = glob('../images/phase_separation*.png')
filelist.sort()
print(filelist)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))
for index, filename in enumerate(filelist[1:]):
    print(filename)
    im = io.imread(filename)
    binary_im = process_blob_image(im)
    i, j = np.unrave...
Pipeline approach and order of operations It is quite uncommon to perform a successful segmentation in only one or two operations: typical images require some pre- and post-processing. However, a large number of image processing steps, each using some hand-tuning of parameters, can result in disasters, since the process...
crude_segmentation = phase_separation < filters.threshold_otsu(phase_separation)
clean_crude = morphology.remove_small_objects(crude_segmentation, 300)
clean_crude = np.logical_not(morphology.remove_small_objects(
    np.logical_not(clean_crude), 300))
plt.imshow(clean_crude[:200, :200], cmap=...
The following examples refer to a fault with the following properties: Length (Along-strike) = 100 km, Width (Down-Dip) = 20 km, Slip = 10.0 mm/yr, Rake = 0. (Strike Slip), Magnitude Scaling Relation = Wells & Coppersmith (1994), Shear Modulus = 30.0 GPa
# Set up fault parameters
slip = 10.0  # Slip rate in mm/yr
# Area = along-strike length (km) * down-dip width (km)
area = 100.0 * 20.0
# Rake = 0.
rake = 0.
# Magnitude Scaling Relation
msr = WC1994()
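As a sanity check on these numbers, the implied seismic moment rate follows from the continuum formula M0_rate = mu * A * slip_rate. This bare product is only illustrative; the hmtk recurrence models combine it with the magnitude scaling relation rather than using it directly:

```python
# Rough moment-rate sketch using the fault properties stated in the text.
mu = 30.0e9                       # shear modulus, Pa (30 GPa)
area_m2 = (100.0e3) * (20.0e3)    # 100 km x 20 km, in m^2
slip_rate = 10.0e-3               # 10 mm/yr, in m/yr

moment_rate = mu * area_m2 * slip_rate   # N*m per year
print(moment_rate)
```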
hmtk/Geology.ipynb
g-weatherill/notebooks
agpl-3.0
Anderson & Luco (Arbitrary) This describes a set of distributions in which the maximum magnitude is assumed to rupture the whole fault surface
# Magnitude Frequency Distribution Example
anderson_luco_config1 = {
    'Model_Name': 'AndersonLucoArbitrary',
    'Model_Type': 'First',
    'Model_Weight': 1.0,  # Weight is a required key - normally weights should sum to 1.0 - current example is simply illustrative!
    ...
Anderson & Luco (Area - MMax) This describes a set of distributions in which the maximum rupture extent is limited to only part of the fault surface
anderson_luco_config1 = {
    'Model_Name': 'AndersonLucoAreaMmax',
    'Model_Type': 'First',
    'Model_Weight': 1.0,  # Weight is a required key - normally weights should sum to 1.0 - current example is simply illustrative!
    'MFD_spacing': 0.1,
    ...
Characteristic Earthquake The following example illustrates a "Characteristic" Model, represented by a Truncated Gaussian Distribution
characteristic = [{
    'Model_Name': 'Characteristic',
    'MFD_spacing': 0.05,
    'Model_Weight': 1.0,
    'Maximum_Magnitude': None,
    'Sigma': 0.15,  # Standard Deviation of Distribution (in Magnitude Units) - omit for fixed value
    'Lower_B...
Youngs & Coppersmith (1985) Models The following describes the recurrence from two distributions presented by Youngs & Coppersmith (1985): 1) Exponential Distribution, 2) Hybrid Exponential-Characteristic Distribution
exponential = {'Model_Name': 'YoungsCoppersmithExponential',
               'MFD_spacing': 0.1,
               'Maximum_Magnitude': None,
               'Maximum_Magnitude_Uncertainty': None,
               'Minimum_Magnitude': 5.0,
               'Model_Weight': 1.0,
               'b_value': [0.8, 0.1]}
hybrid = {'M...
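The exponential model is built on the Gutenberg-Richter relation, where the cumulative rate of events with magnitude >= m is N(m) = 10**(a - b*m); discretising into bins of width MFD_spacing turns this into incremental rates as differences of the cumulative rate. A sketch with an illustrative a-value (only the b-value of 0.8 comes from the configuration above):

```python
# Hedged sketch of a truncated Gutenberg-Richter incremental MFD.
a, b = 4.0, 0.8   # a is illustrative only; b matches the config above

def cumulative_rate(m):
    return 10.0 ** (a - b * m)

spacing = 0.1
m_min, m_max = 5.0, 7.0
bins = [m_min + spacing * i for i in range(int(round((m_max - m_min) / spacing)))]
# Incremental rate in [m, m + spacing) is the difference of cumulative rates.
incremental = [cumulative_rate(m) - cumulative_rate(m + spacing) for m in bins]

# The bin rates telescope back to the total rate between m_min and m_max.
print(sum(incremental), cumulative_rate(m_min) - cumulative_rate(m_max))
```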
Epistemic Uncertainty Examples This example considers the fault defined at the top of the page, now assigned two slip-rate values and two different magnitude frequency distributions
def show_file_contents(filename):
    """ Shows the file contents """
    fid = open(filename, 'r')
    for row in fid.readlines():
        print(row)
    fid.close()

input_file = 'input_data/simple_fault_example_4branch.yml'
show_file_contents(input_file)
Example 1 - Full Enumeration In this example the individual MFD for each branch is determined. In the resulting file the fault is duplicated n_branches times, with the corresponding MFD multiplied by the end-branch weight
# Import the Parser
from hmtk.parsers.faults.fault_yaml_parser import FaultYmltoSource
# Fault mesh discretization step
mesh_spacing = 1.0  # (km)
# Read in the fault model
reader = FaultYmltoSource(input_file)
fault_model, tectonic_region = reader.read_file(mesh_spacing)
# Construct the fault source model (this is r...
Example 2: Collapsed Branches In the following example we implement the same model, this time collapsing the branches. This means that the MFD is discretised and the incremental rate in each magnitude bin is the weighted sum of the rates in that bin from all the end branches of the logic tree. When collapsing the branc...
# Read in the fault model
reader = FaultYmltoSource(input_file)
fault_model, tectonic_region = reader.read_file(mesh_spacing)
# Scaling relation for export
output_msr = WC1994()
# Construct the fault source model - collapsing the branches
fault_model.build_fault_model(collapse=True, rendered_msr=output_msr)
# Write...
2) Assuming the resting potential for the plasma membrane is -70mV, explain whether each of the ions in question 1 would be expected to move into or out of the cell. Use an I-V plot to support your answer.
# Values from Table 3.1 p57 in syllabus
G_Na = 1
G_K = 100
G_Cl = 25
goldman = lambda Na_Out, Na_In, K_Out, K_In, Cl_Out, Cl_In: \
    rt_div_f * log((G_Na * Na_Out + G_K * K_Out + G_Cl * Cl_In) /
                   (1.0 * G_Na * Na_In + G_K * K_In + G_Cl * Cl_Out))
print("Potential at equilibrium is %.2f mV" % goldman(150, 15, 5, 150, 100...
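For the question above, comparing each ion's Nernst equilibrium potential with the resting potential gives the direction of net movement. A sketch using the Na+ and K+ concentrations passed to goldman above, with an assumed RT/F of 26.7 mV (the syllabus value may differ slightly):

```python
import math

# Nernst potential E_x = (RT/zF) * ln([X]_out / [X]_in); assumed RT/F ~ 26.7 mV.
rt_div_f = 26.7  # mV, assumed body-temperature value

def nernst(z, c_out, c_in):
    return (rt_div_f / z) * math.log(c_out / c_in)

E_Na = nernst(+1, 150.0, 15.0)   # roughly +61 mV
E_K = nernst(+1, 5.0, 150.0)     # roughly -91 mV

V_rest = -70.0
# The driving force on a cation is V_rest - E_x: negative for Na+ (net inward
# movement), positive for K+ (net outward movement) at the resting potential.
print(E_Na, E_K, V_rest - E_Na, V_rest - E_K)
```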
Physio.ipynb
massie/notebooks
apache-2.0
IV graph
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 20))
x = np.arange(-100, 60, 0.1)
iv_line = lambda G_val, E_x: G_val * x + ((0.0 - E_x) * G_val)
K_line = iv_line(G_K, K_Eq)
Na_line = iv_line(G_Na, Na_Eq)
Cl_line = iv_line(G_Cl, Cl_Eq)
Sum_line = K_line + Na_line + Cl_l...
Custom VJPs with jax.custom_vjp
from jax import custom_vjp

@custom_vjp
def f(x, y):
  return jnp.sin(x) * y

def f_fwd(x, y):
  # Returns primal output and residuals to be used in backward pass by f_bwd.
  return f(x, y), (jnp.cos(x), jnp.sin(x), y)

def f_bwd(res, g):
  cos_x, sin_x, y = res  # Gets residuals computed in f_fwd
  return (cos_x * g * y,...
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
google/jax
apache-2.0
Example problems To get an idea of what problems jax.custom_jvp and jax.custom_vjp are meant to solve, let's go over a few examples. A more thorough introduction to the jax.custom_jvp and jax.custom_vjp APIs is in the next section. Numerical stability One application of jax.custom_jvp is to improve the numerical stabil...
import jax.numpy as jnp

def log1pexp(x):
  return jnp.log(1. + jnp.exp(x))

log1pexp(3.)
Since it's written in terms of jax.numpy, it's JAX-transformable:
from jax import jit, grad, vmap

print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
But there's a numerical stability problem lurking here:
print(grad(log1pexp)(100.))
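The same failure mode can be reproduced with plain Python floats by pushing x high enough to overflow float64 (exp(800) overflows float64 much as exp(100) overflows float32): the naive formula raises, while the algebraically equivalent rewrite x + log1p(exp(-x)) remains finite.

```python
import math

def naive_log1pexp(x):
    # Overflows once exp(x) exceeds the float range.
    return math.log(1.0 + math.exp(x))

def stable_log1pexp(x):
    # For large positive x, log(1 + e^x) = x + log1p(e^-x), and e^-x underflows
    # harmlessly to 0 instead of overflowing.
    if x > 0:
        return x + math.log1p(math.exp(-x))
    return math.log1p(math.exp(x))

try:
    naive_log1pexp(800.0)
    overflowed = False
except OverflowError:
    overflowed = True

print(overflowed, stable_log1pexp(800.0))
```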
That doesn't seem right! After all, the derivative of $x \mapsto \log (1 + e^x)$ is $x \mapsto \frac{e^x}{1 + e^x}$, and so for large values of $x$ we'd expect the value to be about 1. We can get a bit more insight into what's going on by looking at the jaxpr for the gradient computation:
from jax import make_jaxpr

make_jaxpr(grad(log1pexp))(100.)
Stepping through how the jaxpr would be evaluated, we can see that the last line would involve multiplying values that floating point math will round to 0 and $\infty$, respectively, which is never a good idea. That is, we're effectively evaluating lambda x: (1 / (1 + jnp.exp(x))) * jnp.exp(x) for large x, which effect...
from jax import custom_jvp

@custom_jvp
def log1pexp(x):
  return jnp.log(1. + jnp.exp(x))

@log1pexp.defjvp
def log1pexp_jvp(primals, tangents):
  x, = primals
  x_dot, = tangents
  ans = log1pexp(x)
  ans_dot = (1 - 1/(1 + jnp.exp(x))) * x_dot
  return ans, ans_dot

print(grad(log1pexp)(100.))
print(jit(log1pexp)(3....
Here's a defjvps convenience wrapper to express the same thing:
@custom_jvp
def log1pexp(x):
  return jnp.log(1. + jnp.exp(x))

log1pexp.defjvps(lambda t, ans, x: (1 - 1/(1 + jnp.exp(x))) * t)

print(grad(log1pexp)(100.))
print(jit(log1pexp)(3.))
print(jit(grad(log1pexp))(3.))
print(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))
Enforcing a differentiation convention A related application is to enforce a differentiation convention, perhaps at a boundary. Consider the function $f : \mathbb{R}_+ \to \mathbb{R}_+$ with $f(x) = \frac{x}{1 + \sqrt{x}}$, where we take $\mathbb{R}_+ = [0, \infty)$. We might implement $f$ as a program like this:
def f(x):
  return x / (1 + jnp.sqrt(x))
As a mathematical function on $\mathbb{R}$ (the full real line), $f$ is not differentiable at zero (because the limit defining the derivative doesn't exist from the left). Correspondingly, autodiff produces a nan value:
print(grad(f)(0.))
But mathematically if we think of $f$ as a function on $\mathbb{R}_+$ then it is differentiable at 0 [Rudin's Principles of Mathematical Analysis Definition 5.1, or Tao's Analysis I 3rd ed. Definition 10.1.1 and Example 10.1.6]. Alternatively, we might say as a convention we want to consider the directional derivative ...
@custom_jvp
def f(x):
  return x / (1 + jnp.sqrt(x))

@f.defjvp
def f_jvp(primals, tangents):
  x, = primals
  x_dot, = tangents
  ans = f(x)
  ans_dot = ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * x_dot
  return ans, ans_dot

print(grad(f)(0.))
Here's the convenience wrapper version:
@custom_jvp
def f(x):
  return x / (1 + jnp.sqrt(x))

f.defjvps(lambda t, ans, x: ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * t)

print(grad(f)(0.))
Gradient clipping While in some cases we want to express a mathematical differentiation computation, in other cases we may even want to take a step away from mathematics to adjust the computation autodiff performs. One canonical example is reverse-mode gradient clipping. For gradient clipping, we can use jnp.clip toget...
from functools import partial
from jax import custom_vjp

@custom_vjp
def clip_gradient(lo, hi, x):
  return x  # identity function

def clip_gradient_fwd(lo, hi, x):
  return x, (lo, hi)  # save bounds as residuals

def clip_gradient_bwd(res, g):
  lo, hi = res
  return (None, None, jnp.clip(g, lo, hi))  # use None to...
Python debugging Another application that is motivated by development workflow rather than numerics is to set a pdb debugger trace in the backward pass of reverse-mode autodiff. When trying to track down the source of a nan runtime error, or just examine carefully the cotangent (gradient) values being propagated, it ca...
from jax.lax import while_loop

def fixed_point(f, a, x_guess):
  def cond_fun(carry):
    x_prev, x = carry
    return jnp.abs(x_prev - x) > 1e-6

  def body_fun(carry):
    _, x = carry
    return x, f(a, x)

  _, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
  return x_star
This is an iterative procedure for numerically solving the equation $x = f(a, x)$ for $x$, by iterating $x_{t+1} = f(a, x_t)$ until $x_{t+1}$ is sufficiently close to $x_t$. The result $x^*$ depends on the parameters $a$, and so we can think of there being a function $a \mapsto x^*(a)$ that is implicitly defined by equat...
def newton_sqrt(a):
  update = lambda a, x: 0.5 * (x + a / x)
  return fixed_point(update, a, a)

print(newton_sqrt(2.))
We can vmap or jit the function as well:
print(jit(vmap(newton_sqrt))(jnp.array([1., 2., 3., 4.])))
We can't apply reverse-mode automatic differentiation because of the while_loop, but it turns out we wouldn't want to anyway: instead of differentiating through the implementation of fixed_point and all its iterations, we can exploit the mathematical structure to do something that is much more memory-efficient (and FLO...
from jax import vjp

@partial(custom_vjp, nondiff_argnums=(0,))
def fixed_point(f, a, x_guess):
  def cond_fun(carry):
    x_prev, x = carry
    return jnp.abs(x_prev - x) > 1e-6

  def body_fun(carry):
    _, x = carry
    return x, f(a, x)

  _, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))
  retu...
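The mathematical structure being exploited is the implicit function theorem: if x* = f(a, x*), then dx*/da = (df/da) / (1 - df/dx), with both partials evaluated at x*. A pure-Python numerical check for the Newton sqrt update, where this formula reduces to 1/(2*sqrt(a)):

```python
import math

# Plain-Python fixed-point iteration, mirroring the JAX version above.
def fixed_point(f, a, x_guess, tol=1e-12):
    x_prev, x = x_guess, f(a, x_guess)
    while abs(x_prev - x) > tol:
        x_prev, x = x, f(a, x)
    return x

update = lambda a, x: 0.5 * (x + a / x)   # Newton's update for sqrt(a)
a = 2.0
x_star = fixed_point(update, a, a)

# Partial derivatives of the update, evaluated at the fixed point:
df_da = 0.5 / x_star                      # d/da [0.5 * (x + a/x)]
df_dx = 0.5 * (1.0 - a / x_star ** 2)     # d/dx [0.5 * (x + a/x)], ~0 at x* = sqrt(a)
implicit_grad = df_da / (1.0 - df_dx)

print(x_star, implicit_grad, 1.0 / (2.0 * math.sqrt(a)))
```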
We can check our answers by differentiating jnp.sqrt, which uses a totally different implementation:
print(grad(jnp.sqrt)(2.))
print(grad(grad(jnp.sqrt))(2.))
A limitation to this approach is that the argument f can't close over any values involved in differentiation. That is, you might notice that we kept the parameter a explicit in the argument list of fixed_point. For this use case, consider using the low-level primitive lax.custom_root, which allows for derivatives in c...
from jax import custom_jvp
import jax.numpy as jnp

# f :: a -> b
@custom_jvp
def f(x):
  return jnp.sin(x)

# f_jvp :: (a, T a) -> (b, T b)
def f_jvp(primals, tangents):
  x, = primals
  t, = tangents
  return f(x), jnp.cos(x) * t

f.defjvp(f_jvp)

from jax import jvp

print(f(3.))
y, y_dot = jvp(f, (3.,), (1.,))
pri...
In words, we start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it a JVP rule function f_jvp that takes a pair of inputs representing the primal inputs of type a and the corresponding tangent inputs of type T a, and produces a pair of outputs representing the pr...
from jax import grad

print(grad(f)(3.))
print(grad(grad(f))(3.))
For automatic transposition to work, the JVP rule's output tangents must be linear as a function of the input tangents. Otherwise a transposition error is raised. Multiple arguments work like this:
@custom_jvp
def f(x, y):
  return x ** 2 * y

@f.defjvp
def f_jvp(primals, tangents):
  x, y = primals
  x_dot, y_dot = tangents
  primal_out = f(x, y)
  tangent_out = 2 * x * y * x_dot + x ** 2 * y_dot
  return primal_out, tangent_out

print(grad(f)(2., 3.))
The defjvps convenience wrapper lets us define a JVP for each argument separately, and the results are computed separately then summed:
@custom_jvp
def f(x):
  return jnp.sin(x)

f.defjvps(lambda t, ans, x: jnp.cos(x) * t)

print(grad(f)(3.))
Here's a defjvps example with multiple arguments:
@custom_jvp
def f(x, y):
  return x ** 2 * y

f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
          lambda y_dot, primal_out, x, y: x ** 2 * y_dot)

print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.))  # same as above
print(grad(f, 1)(2., 3.))
As a shorthand, with defjvps you can pass a None value to indicate that the JVP for a particular argument is zero:
@custom_jvp
def f(x, y):
  return x ** 2 * y

f.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,
          None)

print(grad(f)(2., 3.))
print(grad(f, 0)(2., 3.))  # same as above
print(grad(f, 1)(2., 3.))
Calling a jax.custom_jvp function with keyword arguments, or writing a jax.custom_jvp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism. When you'r...
@custom_jvp
def f(x):
  print('called f!')  # a harmless side-effect
  return jnp.sin(x)

@f.defjvp
def f_jvp(primals, tangents):
  print('called f_jvp!')  # a harmless side-effect
  x, = primals
  t, = tangents
  return f(x), jnp.cos(x) * t

from jax import vmap, jit

print(f(3.))
print(vmap(f)(jnp.arange(3.)))
print...
The custom JVP rule is invoked during differentiation, whether forward or reverse:
y, y_dot = jvp(f, (3.,), (1.,))
print(y_dot)

print(grad(f)(3.))
Notice that f_jvp calls f to compute the primal outputs. In the context of higher-order differentiation, each application of a differentiation transform will use the custom JVP rule if and only if the rule calls the original f to compute the primal outputs. (This represents a kind of fundamental tradeoff, where we can'...
grad(grad(f))(3.)
You can use Python control flow with jax.custom_jvp:
@custom_jvp
def f(x):
  if x > 0:
    return jnp.sin(x)
  else:
    return jnp.cos(x)

@f.defjvp
def f_jvp(primals, tangents):
  x, = primals
  x_dot, = tangents
  ans = f(x)
  if x > 0:
    return ans, 2 * x_dot
  else:
    return ans, 3 * x_dot

print(grad(f)(1.))
print(grad(f)(-1.))
Use jax.custom_vjp to define custom reverse-mode-only rules While jax.custom_jvp suffices for controlling both forward- and, via JAX's automatic transposition, reverse-mode differentiation behavior, in some cases we may want to directly control a VJP rule, for example in the latter two example problems presented above....
from jax import custom_vjp
import jax.numpy as jnp

# f :: a -> b
@custom_vjp
def f(x):
  return jnp.sin(x)

# f_fwd :: a -> (b, c)
def f_fwd(x):
  return f(x), jnp.cos(x)

# f_bwd :: (c, CT b) -> CT a
def f_bwd(cos_x, y_bar):
  return (cos_x * y_bar,)

f.defvjp(f_fwd, f_bwd)

from jax import grad

print(f(3.))
print(g...
In words, we again start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it two functions, f_fwd and f_bwd, which describe how to perform the forward- and backward-passes of reverse-mode autodiff, respectively. The function f_fwd describes the forward pass, not onl...
from jax import custom_vjp

@custom_vjp
def f(x, y):
  return jnp.sin(x) * y

def f_fwd(x, y):
  return f(x, y), (jnp.cos(x), jnp.sin(x), y)

def f_bwd(res, g):
  cos_x, sin_x, y = res
  # d(sin(x)*y)/dx = cos(x)*y and d(sin(x)*y)/dy = sin(x)
  return (cos_x * g * y, sin_x * g)

f.defvjp(f_fwd, f_bwd)

print(grad(f)(2., 3.))
Calling a jax.custom_vjp function with keyword arguments, or writing a jax.custom_vjp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism. As with ja...
@custom_vjp
def f(x):
  print("called f!")
  return jnp.sin(x)

def f_fwd(x):
  print("called f_fwd!")
  return f(x), jnp.cos(x)

def f_bwd(cos_x, y_bar):
  print("called f_bwd!")
  return (cos_x * y_bar,)

f.defvjp(f_fwd, f_bwd)

print(f(3.))
print(grad(f)(3.))

from jax import vjp

y, f_vjp = vjp(f, 3.)
print(y)
pr...
Forward-mode autodiff cannot be used on the jax.custom_vjp function and will raise an error:
from jax import jvp

try:
  jvp(f, (3.,), (1.,))
except TypeError as e:
  print('ERROR! {}'.format(e))
If you want to use both forward- and reverse-mode, use jax.custom_jvp instead. We can use jax.custom_vjp together with pdb to insert a debugger trace in the backward pass:
import pdb

@custom_vjp
def debug(x):
  return x  # acts like identity

def debug_fwd(x):
  return x, x

def debug_bwd(x, g):
  import pdb; pdb.set_trace()
  return g

debug.defvjp(debug_fwd, debug_bwd)

def foo(x):
  y = x ** 2
  y = debug(y)  # insert pdb in corresponding backward pass step
  return jnp.sin(y)
```python
jax.grad(foo)(3.)
```

```
<ipython-input-113-b19a2dc1abf7>(12)debug_bwd()
-> return g
(Pdb) p x
DeviceArray(9., dtype=float32)
(Pdb) p g
DeviceArray(-0.91113025, dtype=float32)
(Pdb) q
```

More features and details Working with list / tuple / dict containers (and other pytrees) You should expect standard Python con...
from collections import namedtuple
Point = namedtuple("Point", ["x", "y"])

@custom_jvp
def f(pt):
  x, y = pt.x, pt.y
  return {'a': x ** 2, 'b': (jnp.sin(x), jnp.cos(y))}

@f.defjvp
def f_jvp(primals, tangents):
  pt, = primals
  pt_dot, = tangents
  ans = f(pt)
  ans_dot = {'a': 2 * pt.x * pt_dot.x,
             ...
And an analogous contrived example with jax.custom_vjp:
@custom_vjp
def f(pt):
  x, y = pt.x, pt.y
  return {'a': x ** 2, 'b': (jnp.sin(x), jnp.cos(y))}

def f_fwd(pt):
  return f(pt), pt

def f_bwd(pt, g):
  a_bar, (b0_bar, b1_bar) = g['a'], g['b']
  x_bar = 2 * pt.x * a_bar + jnp.cos(pt.x) * b0_bar
  y_bar = -jnp.sin(pt.y) * b1_bar
  return (Point(x_bar, y_bar),...
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
google/jax
apache-2.0
Handling non-differentiable arguments

Some use cases, like the final example problem, call for non-differentiable arguments like function-valued arguments to be passed to functions with custom differentiation rules, and for those arguments to also be passed to the rules themselves. In the case of fixed_point, the func...
from functools import partial

@partial(custom_jvp, nondiff_argnums=(0,))
def app(f, x):
  return f(x)

@app.defjvp
def app_jvp(f, primals, tangents):
  x, = primals
  x_dot, = tangents
  return f(x), 2. * x_dot

print(app(lambda x: x ** 3, 3.))
print(grad(app, 1)(lambda x: x ** 3, 3.))
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
google/jax
apache-2.0
Notice the gotcha here: no matter where in the argument list these parameters appear, they're placed at the start of the signature of the corresponding JVP rule. Here's another example:
@partial(custom_jvp, nondiff_argnums=(0, 2))
def app2(f, x, g):
  return f(g(x))

@app2.defjvp
def app2_jvp(f, g, primals, tangents):
  x, = primals
  x_dot, = tangents
  return f(g(x)), 3. * x_dot

print(app2(lambda x: x ** 3, 3., lambda y: 5 * y))
print(grad(app2, 1)(lambda x: x ** 3, 3., lambda y: 5 * y))
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
google/jax
apache-2.0
jax.custom_vjp with nondiff_argnums

A similar option exists for jax.custom_vjp, and, similarly, the convention is that the non-differentiable arguments are passed as the first arguments to the _bwd rule, no matter where they appear in the signature of the original function. The signature of the _fwd rule remains unchan...
@partial(custom_vjp, nondiff_argnums=(0,))
def app(f, x):
  return f(x)

def app_fwd(f, x):
  return f(x), x

def app_bwd(f, x, g):
  return (5 * g,)

app.defvjp(app_fwd, app_bwd)

print(app(lambda x: x ** 2, 4.))
print(grad(app, 1)(lambda x: x ** 2, 4.))
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
google/jax
apache-2.0
The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There is also clearly a correlation between the axes: values with larger uncertainties tend, on average, to have larger errors. Now let's see how well the values satisfy the...
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
examples/tutorials/Uncertainty.ipynb
ktaneishi/deepchem
mit
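For a well-calibrated model, |error|/σ follows a half-normal distribution, so roughly 68.3% of the values in the histogram above should fall below 1. A self-contained sanity check on synthetic, perfectly calibrated data (the arrays here are stand-ins, not the model's actual outputs):

```python
import numpy as np

# Synthetic stand-in for the model outputs: errors drawn with exactly the
# predicted standard deviation, i.e. a perfectly calibrated model.
rng = np.random.default_rng(0)
y_std = np.full(100_000, 2.0)
abs_error = np.abs(rng.normal(scale=y_std))

# Half-normal: ~68.3% of |error|/sigma values should be below 1.
frac_within_1 = np.mean(abs_error / y_std < 1.0)
```

A real model's histogram will deviate from this to the extent its uncertainty estimates are miscalibrated.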
MICRO FUTURES
# symbol: description
micro_futures = {
    'MES=F': 'Micro E-mini S&P 500 Index Futures',
    'MNQ=F': 'Micro E-mini Nasdaq-100 Index Futures',
    'M2K=F': 'Micro E-mini Russell 2000 Index Futures',
    'MYM=F': 'Micro E-mini Dow Jones Futures',
    'MGC=F': 'Micro Gold Futures',
    'SIL=F': 'Micro S...
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
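Each micro futures contract has a multiplier that scales its price into dollar exposure. A toy sketch of that relationship (the multiplier values below are illustrative assumptions for this example, not taken from the strategy above):

```python
# Illustrative multipliers (assumed for this sketch, not from the strategy).
multipliers = {'MES=F': 5.0, 'MNQ=F': 2.0, 'MGC=F': 10.0}

def notional(symbol, price, contracts=1):
    """Dollar exposure of a futures position: price * multiplier * contracts."""
    return price * multipliers[symbol] * contracts

exposure = notional('MES=F', 4000.0, contracts=2)  # 4000 * 5 * 2 = 40000
```

This is why micro contracts (small multipliers) make diversified strategies feasible with modest capital.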
Run Strategy
s = strategy.Strategy(symbols, capital, start, end, options=options)
s.run()
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
View log DataFrames: raw trade log, trade log, and daily balance
s.rlog.head()
s.tlog.head()
s.dbal.tail()
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
Generate strategy stats - display all available stats
pf.print_full(s.stats)
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
View Performance by Symbol
weights = {symbol: 1 / len(symbols) for symbol in symbols}
totals = s.portfolio.performance_per_symbol(weights=weights)
totals

corr_df = s.portfolio.correlation_map(s.ts)
corr_df
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
Analysis: Kelly Criterion
kelly = pf.kelly_criterion(s.stats, benchmark.stats)
kelly
examples/300.micro-futures/strategy.ipynb
fja05680/pinkfish
mit
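The Kelly criterion sizes a position from the strategy's win rate and payoff ratio. A minimal sketch of the classic formula (pinkfish's `kelly_criterion` may compute more than this; the numbers are illustrative):

```python
def kelly_fraction(win_prob, payoff_ratio):
    """Classic Kelly bet size: f* = p - (1 - p) / b,
    where p is the win probability and b the win/loss payoff ratio."""
    return win_prob - (1.0 - win_prob) / payoff_ratio

f_star = kelly_fraction(0.55, 1.5)  # 0.55 - 0.45/1.5 = 0.25
```

In practice traders often bet a fraction of f* ("half Kelly") to reduce drawdown risk.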
Hmm, this worked fine for our previous dataset. What could be wrong?
np.max(img_cube)  # Noooo! Do not want!

i_nan = np.where(np.isnan(img_cube))
img_cube[i_nan] = 0
cal_cube = cal_all(img_cube, g_o)
np.max(img_cube)

# ok, this error was due to a really large value that we can't multiply for this dataset type
(np.max(img_cube), np.max(g_o), np.min(g_o))
i_max = np.where(img_cube == np...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
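The NaN-replacement pattern used above, in isolation; note that it leaves infs untouched, which is why the maximum can still blow up afterwards (the tiny array here is a made-up example):

```python
import numpy as np

cube = np.array([[1.0, np.nan],
                 [np.inf, 4.0]])

i_nan = np.where(np.isnan(cube))
cube[i_nan] = 0.0  # NaNs zeroed; the inf still dominates np.max
```

Filtering with `np.isfinite` instead would catch both NaNs and infs in one pass.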
It's really doubtful that these high values are meaningful, but there are probably several bright pixels due to particles hitting the sensor or glitched pixels. I can try applying the bad data mask.
qf = pyfits.getdata(leisa_file, 6)
(np.shape(qf))
(np.max(img_cube[0, qf]), np.max(img_cube[0,:,:]))
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
Well, high values are outside of the bad pixels.
%matplotlib inline
plt.hist(np.ravel(img_cube[0, qf]), bins=np.arange(1e13, 2e16, 1e14))
plt.show()
plt.hist(np.ravel(img_cube[0, qf]), bins=np.arange(1e13, 2e16, 1e13))
plt.show()

# Yeah, let's just play fast and loose right now and kill those high values
i_bright = np.where(img_cube[0,:,:] >= 2e16)
map(len, i_bright)
...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
OK, this finally looks interesting. We're out of real units with the small image, so let's see if we can apply gains and offsets to the large image if we use quad-precision floating point.
img128 = np.float128(img_cube)
i_nan = np.where(np.isnan(img128))
img128[i_nan] = 0.
go_i_nan = np.where(np.isnan(g_o))
g_o[go_i_nan] = 0.
g_o = np.float128(g_o)
np.max(img128)
np.shape(g_o)
(np.max(g_o[0,:,:]), np.min(g_o[0,:,:]))
(np.max(g_o[1,:,:]), np.min(g_o[1,:,:]))
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
Well, I thought I had the calibration correct in the previous notebook, but I suppose not. A gain of 0.0 across the board makes no sense -- the same for offsets is reasonable if we're really in values that match some kind of meaningful units. This could be the case since the values are very high - much higher than you would ...
# since we don't have to add an offset, we can go back to our smaller img_cube_small vals
(np.shape(img_cube_small), np.max(img_cube_small), np.min(img_cube_small))

i_inf = np.where(np.isinf(img_cube_small))
img_cube_small[i_inf] = 0.

def apply_gains(cube, gains):
    out = np.zeros_like(cube, dtype=np.float64)
    f...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
At this point, I looked a few cells above where I defined apply_gains and realized I assigned the result of multiplying out and gains, not cube and gains. Oops!
def apply_gains(cube, gains):
    out = np.zeros_like(cube, dtype=np.float64)
    for i in range(np.shape(out)[0]):
        out[i,:,:] = cube[i,:,:] * gains
    return out

img_cube_small_cal = apply_gains(img_cube_small, g_o[0,:,:])
plt.hist(np.ravel(img_cube_small_cal[:,:,100]), bins=np.arange(0, 0.5e5, 1.5e3))
plt....
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
Whew, finally a reasonable looking histogram!
plt.imshow(img_cube_small_cal[:,:,100], clim=(0., 3.5e5), cmap='hot')
plt.imshow(img_cube_small_cal[:,100,:], clim=(500, 1e5), cmap='bone')
plt.figure(figsize=(6,6))
plt.imshow(img_cube_small_cal[:,50,:], clim=(500, 1e4), cmap='bone')
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
That's right -- this is BIL, so the spectral information is in the second dimension (dimension indexed by 1). These images won't be pretty -- that's what LORRI and MVIC are for. What we want, in this case, are image reference points to let us know where we can pull meaningful spectral information from the planetary bod...
plt.plot(wl[0,:,200], img_cube_small_cal[375,:,200])
plt.show()
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
These wavelengths look correct, but this is the same issue we ran into previously where the wavelength sort is off.
sort_i = np.argsort(wl[0,:,200])
plt.plot(wl[0,sort_i,200], img_cube_small_cal[375,sort_i,200])
plt.show()

sort_i = np.argsort(wl[0,:,200])
plt.plot(wl[0,sort_i,200], img_cube_small_cal[375,sort_i,200], color='blue')
plt.plot(wl[0, sort_i,220], img_cube_small_cal[550,sort_i,220], color='red')
plt.plot(wl[0, sort_i,220...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
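The argsort trick above, in isolation: one permutation reorders the wavelength axis, and applying the same indices to the spectrum keeps the (wavelength, value) pairing intact. A toy example with made-up numbers:

```python
import numpy as np

# Wavelengths stored out of order, with a spectrum aligned index-for-index.
wl = np.array([2.1, 1.2, 1.8, 1.5])
spec = np.array([10.0, 40.0, 20.0, 30.0])

sort_i = np.argsort(wl)       # indices that put wavelengths in ascending order
wl_sorted = wl[sort_i]
spec_sorted = spec[sort_i]    # same permutation preserves the pairing
```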
We can now spot absorption, reflectance, and (maybe?) emissivity features! We definitely have some noisy and incorrect looking spectral information, though. One thing we forgot to do was filter down the data to only capture known good pixels. It's possible we may be able to kill some of the bad bands and only see valu...
def apply_mask(img, qf):
    img_copy = np.copy(img)
    mult = np.int16(np.logical_not(qf))
    for i in range(np.shape(img_copy)[0]):
        img_copy[i,:,:] *= mult
    return img_copy

spec_cube = apply_mask(img_cube_small_cal, qf)
plt.plot(wl[0, sort_i, 200], spec_cube[375, sort_i, 200], color='blue')
plt.plot(wl...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
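The masking idea above boils down to multiplying each band by the logical NOT of the bad-pixel flags. A self-contained miniature (tiny made-up cube, 2 bands of 2x2 pixels):

```python
import numpy as np

# Toy 2-band cube and a quality-flag plane (True marks bad pixels).
img = np.arange(8, dtype=np.float64).reshape(2, 2, 2)
qf = np.array([[False, True],
               [False, False]])

mult = np.logical_not(qf).astype(img.dtype)
masked = img * mult  # broadcasts over the band axis, zeroing flagged pixels
```

Broadcasting makes the explicit per-band loop in `apply_mask` unnecessary, though the loop version is arguably easier to read.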
Much improved! 0's aren't a perfect "no data" value, but really everything is kind of crap: 0, arbitrary numbers like -9999.9, and NaN all have their problems. Here we can at least filter out 0's on the fly when necessary and be pretty sure they're not a real, empirical 0. Look at the plot above -- notice the "peaks" in tw...
def get_wl_match(wl_array, target_wl):
    """
    Pass a spectrum and a wl, returns the index of the closest
    wavelength to the target wavelength value.
    """
    i_match = np.argmin(np.abs(wl_array - target_wl))
    return i_match

# We're just eyeballing it here, but we'd really want to look at, e.g., local min...
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
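A quick self-contained check of the nearest-wavelength lookup defined above (the wavelength values here are made up for the test):

```python
import numpy as np

def get_wl_match(wl_array, target_wl):
    """Index of the wavelength closest to target_wl."""
    return int(np.argmin(np.abs(wl_array - target_wl)))

wl = np.array([1.25, 1.60, 2.10, 2.50])
i = get_wl_match(wl, 2.0)  # 2.10 is the nearest sample
```

`argmin` of the absolute difference is the standard way to snap a physical wavelength onto a discrete band axis.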
I can tell from the images above that the spectral dimension of the data does not well match Jupiter's position. That is, a spectral slice of the data will be getting different materials as Jupiter moves vertically across the data as we move through the data's spectral axis.
plt.plot(wl[0, sort_i, 220], spec_cube[200, sort_i, 220], color='teal')
plt.plot(wl[0, sort_i, 220], spec_cube[450, sort_i, 220], color='red')
plt.xlabel("Wavelength ($\mu$m)")
plt.show()
new_horizons/NHRalphLEISA_Jupiter.ipynb
benkamphaus/remote-sensing-notebooks
epl-1.0
Import necessary libraries.
from google.cloud import bigquery
import pandas as pd
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task #1: Set environment variables. Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT

# TODO: Change environment variables
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0

train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The next query is going to find the counts of each of the 657484 unique hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.
# Get the counts of each of the unique hashes of our splitting column
first_bucketing_query = """
SELECT
    hash_values,
    COUNT(*) AS num_records
FROM
    ({CTE_data})
GROUP BY
    hash_values
""".format(CTE_data=data_query)

display_dataframe_head_from_query(first_bucketing_query)
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
# Show final splitting and associated statistics
split_query = """
SELECT
    dataset_id,
    dataset_name,
    SUM(num_records) AS num_records,
    SUM(percent_records) AS percent_records
FROM
    ({CTE_union})
GROUP BY
    dataset_id,
    dataset_name
ORDER BY
    dataset_id
""".format(CTE_union=union_query)

display...
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
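The hash-bucket idea behind the split can be sketched in pure Python. Here MD5 stands in for BigQuery's FARM_FINGERPRINT (any stable hash works): each key lands deterministically in one bucket, so the train/eval/test sets are reproducible and non-overlapping.

```python
import hashlib

def bucket(key, modulo_divisor=100):
    """Stable integer bucket for a key -- a stand-in for the
    FARM_FINGERPRINT(...) % modulo_divisor used in the lab."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % modulo_divisor

keys = [f"2005-{m:02d}" for m in range(1, 13)]
train = [k for k in keys if bucket(k) < 80]        # ~80%
evals = [k for k in keys if 80 <= bucket(k) < 90]  # ~10%
tests = [k for k in keys if bucket(k) >= 90]       # ~10%
```

Because the bucket depends only on the key, re-running the split (or running it on a different machine) assigns every record to the same set.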
Lab Task #1: Sample BigQuery dataset. Sample the BigQuery result set (above) so that you have approximately 8,000 training examples and 1000 evaluation examples. The training and evaluation datasets have to be well-distributed (not all the babies are born in Jan 2005, for example) and should not overlap (no baby is par...
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = # TODO: Experiment with values to get close to target counts

# TODO: Replace FUNC with correct function to split with
# TODO: Replace COLUMN with correct column to split on
splitting_stri...
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model wer...
def preprocess(df):
    """ Preprocess pandas dataframe for augmented babyweight data.

    Args:
        df: Dataframe containing raw babyweight data.
    Returns:
        Pandas dataframe containing preprocessed raw babyweight data as
        well as simulated no ultrasound data masking some of the original d...
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
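The "simulated no ultrasound" idea amounts to duplicating rows and masking the fields an ultrasound would reveal. A toy sketch (the column names and values here are assumptions for illustration, not the lab's exact schema):

```python
import pandas as pd

# Toy frame standing in for the babyweight data (columns are assumed).
df = pd.DataFrame({"weight_pounds": [7.5, 6.1],
                   "is_male": [True, False],
                   "mother_age": [28, 33]})

# Simulate "no ultrasound" rows by masking what an ultrasound would reveal.
no_ultrasound = df.copy()
no_ultrasound["is_male"] = "Unknown"

augmented = pd.concat([df, no_ultrasound], ignore_index=True)
```

Training on the augmented frame lets one model serve both users who know the baby's sex and those who don't.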
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Table of Contents

1.- Just in Time Compilation
2.- Numba
3.- Applications
4.- NumExp

<div id='jit' />

1.- Just in Time Compilation

A JIT compiler runs after the program has started and compiles the code (usually bytecode) on the fly (or just-in-time, as it's called) into a form that's usually faster, typically the ho...
def sum_array(inp):
    I, J = inp.shape
    mysum = 0
    for i in range(I):
        for j in range(J):
            mysum += inp[i, j]
    return mysum

arr = np.random.random((500, 500))
sum_array(arr)
naive = %timeit -o sum_array(arr)

# lazy compilation
@numba.jit
def sum_array_numba(inp):
    I, J = inp.shape
    m...
04_jit/04_jit.ipynb
mavillan/SciProg
gpl-3.0
Some important notes:
* The first time we invoke a JITted function, it is translated to native machine code.
* The very first time you run a numba compiled function, there will be a little bit of overhead for the compilation step to take place.
* As an optimizing compiler, Numba needs to decide on the type of each var...
# single signature
@numba.jit('float64[:] (float64[:], float64[:])')
def sum1(a, b):
    return a + b

a = np.arange(10, dtype=np.float64)
b = np.arange(10, dtype=np.float64)
print(sum1(a, b))

# multiple signatures (polymorphism)
signatures = ['int32[:] (int32[:], int32[:])', 'int64[:] (int64[:], int64[:])', \
             ...
04_jit/04_jit.ipynb
mavillan/SciProg
gpl-3.0
For a full reference of the signature types supported by Numba see the documentation. Now that we've run sum1 and sum2 once, they are now compiled and we can check out what's happened behind the scenes. Use the inspect_types method to see how Numba translated the functions.
sum1.inspect_types()
04_jit/04_jit.ipynb
mavillan/SciProg
gpl-3.0