Choosing the right model and learning algorithm
# creating an error calculation function
def error(f, x, y):
    return np.sum((f(x) - y) ** 2)
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
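A minimal usage sketch of the error helper (assuming NumPy is imported as np; the toy X, y arrays below are hypothetical stand-ins for the hourly web-traffic data):

import numpy as np

def error(f, x, y):
    return np.sum((f(x) - y) ** 2)

# hypothetical toy data standing in for the traffic arrays X, y
X = np.arange(100)
y = 3 * X + 5 + np.random.randn(100) * 10

f1 = np.poly1d(np.polyfit(X, y, 1))   # degree-1 fit
f2 = np.poly1d(np.polyfit(X, y, 2))   # degree-2 fit
print("Error d=1:", error(f1, X, y))
print("Error d=2:", error(f2, X, y))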
Linear 1-d model
# SciPy's polyfit does the fitting for us
fp1, residuals, rank, sv, rcond = sp.polyfit(X, y, 1, full=True)
print(fp1)
print(residuals)

# generating the order-1 function
f1 = sp.poly1d(fp1)

# checking error
print("Error : ", error(f1, X, y))

x1 = np.array([-100, np.max(X)+100])
y1 = f1(x1)
ax.plot(x1, y1, c='g', linewidth=2...
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
$$ f(x) = 2.59619213 * x + 989.02487106 $$

Polynomial 2-d model
# SciPy's polyfit again, this time with degree 2
fp2 = sp.polyfit(X, y, 2)
print(fp2)

# generating the order-2 function
f2 = sp.poly1d(fp2)

# checking error
print("Error : ", error(f2, X, y))

x1 = np.linspace(-100, np.max(X)+100, 2000)
y2 = f2(x1)
ax.plot(x1, y2, c='r', linewidth=2)
ax.legend(["data", "d = %i" % f1.order, "d = %i" % f...
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
$$ f(x) = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 $$ What if we want to fit two response lines instead of one? As the graph shows a steep change in the data between weeks 3 and 4, let's fit two lines: one for the data between week 0 and week 3.5, and a second for week 3.5 to week 5.
# we are going to split the data in time, at week 3.5
div = 3.5 * 7 * 24
X1 = X[X <= div]
Y1 = y[X <= div]
X2 = X[X > div]
Y2 = y[X > div]

# now fitting and plotting both parts
fa = sp.poly1d(sp.polyfit(X1, Y1, 1))
fb = sp.poly1d(sp.polyfit(X2, Y2, 1))

fa_error = error(fa, X1, Y1)
fb_error = error(fb, X2, Y2)
print("Error inflection = %f" % (...
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
Suppose we decide that the degree-2 function is the best fit for our data and want to predict when, if everything continues as is, we will hit the 100,000 hits/hour mark. $$ 0 = f(x) - 100000 = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 - 100000 $$ SciPy's optimize module has the function fsolve that achieves this, ...
print(f2)
print(f2 - 100000)

# import fsolve
from scipy.optimize import fsolve

reached_max = fsolve(f2 - 100000, x0=800) / (7 * 24)
print("100,000 hits/hour expected at week %f" % reached_max[0])
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
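An alternative way to check the same crossing point without fsolve, using the roots of the shifted polynomial (a sketch; the coefficients below simply restate the degree-2 fit quoted above):

import numpy as np

f2 = np.poly1d([0.0105322215, -5.26545650, 1974.6082])

# f2 - 100000 is still a poly1d; its positive real root is the crossing point
roots = (f2 - 100000).roots
crossing = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print("100,000 hits/hour expected at week %f" % (crossing / (7 * 24)))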
datacleaning The datacleaning module is used to clean and organize the data into 51 CSV files corresponding to the 50 US states and the District of Columbia. The wrapping function clean_all_data takes all the data sets as input and sorts the data into one CSV file per state. The CSVs are stored in the Cleane...
data_cleaning.clean_all_data()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
missing_data The missing_data module is used to estimate the missing GDP data (from 1960 to 1962) and determine the values of the predictors (from 2016 to 2020). The wrapping function predict_all takes the CSV files of the states as input and stores the predicted missing values in the same CSV files. The CSVs gener...
missing_data.predict_all()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
ridge_prediction The ridge_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using ridge regression. The wrapping function ridge_predict_all takes the CSV files of the states as input and stores the future values of the ene...
ridge_prediction.ridge_predict_all()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
svr_prediction The svr_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using Support Vector Regression. The wrapping function SVR_predict_all takes the CSV files of the states as input and stores the future values of the e...
svr_prediction.SVR_predict_all()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
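The package wraps the details, but the underlying idea can be sketched with scikit-learn's SVR on a made-up yearly series (hypothetical data and parameters, not the module's actual implementation):

import numpy as np
from sklearn.svm import SVR

# hypothetical yearly series: year -> wind generation (million kWh)
years = np.arange(1990, 2016).reshape(-1, 1)
wind = 50 + 3.2 * (years.ravel() - 1990) + np.random.randn(len(years)) * 5

model = SVR(kernel='rbf', C=100.0, gamma='scale')
model.fit(years, wind)

# predict the 2016-2020 values
future = np.arange(2016, 2021).reshape(-1, 1)
print(dict(zip(future.ravel(), model.predict(future))))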
plots Visualization is done using the Tableau software. The Tableau workbook for the predicted data is included in the repository. The Tableau dashboard created for this data is illustrated below:
%%HTML <div class='tableauPlaceholder' id='viz1489609724011' style='position: relative'><noscript><a href='#'><img alt='Clean Energy Production in the contiguous United States(in million kWh) ' src='https:&#47;&#47;public.tableau.com&#47;static&#47;images&#47;PB&#47;PB87S38NW&#47;1_rss.png' style='border: none' /></a>...
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
Visualize source leakage among labels using a circular graph This example computes all-to-all pairwise leakage among 68 regions in source space based on MNE inverse solutions and a FreeSurfer cortical parcellation. Label-to-label leakage is estimated as the correlation among the labels' point-spread functions (PSFs). I...
# Authors: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#          Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#          Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot a...
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load forward solution and inverse operator We need a matching forward solution and inverse operator to compute resolution matrices for different methods.
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif'

forward = mne.read_forward_solution(fname_fwd)
# Convert forward solution to fixed source orient...
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read and organise labels for cortical parcellation. Get labels for the FreeSurfer 'aparc' cortical parcellation with 34 labels per hemisphere.
labels = mne.read_labels_from_annot('sample', parc='aparc',
                                    subjects_dir=subjects_dir)
n_labels = len(labels)
label_colors = [label.color for label in labels]

# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_lab...
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute point-spread function summaries (PCA) for all labels We summarise the PSFs per label by their first five principal components, and use the first component to evaluate label-to-label leakage below.
# Compute first PCA component across PSFs within labels.
# Note the differences in explained variance, probably due to different
# spatial extents of labels.
n_comp = 5
stcs_psf_mne, pca_vars_mne = get_point_spread(
    rm_mne, src, labels, mode='pca', n_comp=n_comp, norm=None,
    return_pca_vars=True)
n_verts = rm_mn...
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can show the explained variances of principal components per label. Note how they differ across labels, most likely due to their varying spatial extent.
with np.printoptions(precision=1):
    for [name, var] in zip(label_names, pca_vars_mne):
        print(f'{name}: {var.sum():.1f}% {var}')
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The output shows the summed variance explained by the first five principal components as well as the explained variances of the individual components. Evaluate leakage based on label-to-label PSF correlations Note that correlations ignore the overall amplitude of PSFs, i.e. they do not show which region will potentiall...
# get PSFs from Source Estimate objects into matrix
psfs_mat = np.zeros([n_labels, n_verts])
# Leakage matrix for MNE, get first principal component per label
for [i, s] in enumerate(stcs_psf_mne):
    psfs_mat[i, :] = s.data[:, 0]
# Compute label-to-label leakage as Pearson correlation of PSFs
# Sign of correlation is...
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
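The truncated cell above computes leakage as the Pearson correlation between the labels' first PSF components; a self-contained sketch of that step (random data stands in for psfs_mat):

import numpy as np

n_labels, n_verts = 68, 7498          # hypothetical sizes
psfs_mat = np.random.randn(n_labels, n_verts)

# Pearson correlation between rows; the sign is arbitrary, so take its absolute value
leakage = np.abs(np.corrcoef(psfs_mat))
print(leakage.shape)                  # (68, 68), with ones on the diagonal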
Most leakage occurs for neighbouring regions, but also for deeper regions across hemispheres. Save the figure (optional). Matplotlib controls the figure facecolor separately for interactive display versus saved figures. Thus when saving you must specify facecolor, else your labels, title, etc. will not be visible: ...
# left and right lateral occipital
idx = [22, 23]
stc_lh = stcs_psf_mne[idx[0]]
stc_rh = stcs_psf_mne[idx[1]]

# Maximum for scaling across plots
max_val = np.max([stc_lh.data, stc_rh.data])
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Point-spread function for the lateral occipital label in the left hemisphere
brain_lh = stc_lh.plot(subjects_dir=subjects_dir, subject='sample',
                       hemi='both', views='caudal',
                       clim=dict(kind='value',
                                 pos_lims=(0, max_val / 2., max_val)))
brain_lh.add_text(0.1, 0.9, label_names[idx[0]], 'title', font_size=16)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
and in the right hemisphere.
brain_rh = stc_rh.plot(subjects_dir=subjects_dir, subject='sample',
                       hemi='both', views='caudal',
                       clim=dict(kind='value',
                                 pos_lims=(0, max_val / 2., max_val)))
brain_rh.add_text(0.1, 0.9, label_names[idx[1]], 'title', font_size=16)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
DTensor Concepts <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/dtensor_overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.c...
!pip install --quiet --upgrade --pre tensorflow
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Once installed, import tensorflow and tf.experimental.dtensor. Then configure TensorFlow to use 6 virtual CPUs. Even though this example uses vCPUs, DTensor works the same way on CPU, GPU or TPU devices.
import tensorflow as tf
from tensorflow.experimental import dtensor

print('TensorFlow version:', tf.__version__)

def configure_virtual_cpus(ncpu):
    phy_devices = tf.config.list_physical_devices('CPU')
    tf.config.set_logical_device_configuration(phy_devices[0], [
        tf.config.LogicalDeviceConfiguration(),
    ]...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
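The cell above is truncated; later examples rely on a DEVICES list naming the six virtual CPUs. A minimal sketch of how that list might be built (assuming the logical-device names follow TensorFlow's 'CPU:<index>' convention):

import tensorflow as tf

def configure_virtual_cpus(ncpu):
    phy_devices = tf.config.list_physical_devices('CPU')
    tf.config.set_logical_device_configuration(
        phy_devices[0],
        [tf.config.LogicalDeviceConfiguration()] * ncpu)

configure_virtual_cpus(6)
DEVICES = ['CPU:%d' % i for i in range(6)]
print(tf.config.list_logical_devices('CPU'))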
DTensor's model of distributed tensors DTensor introduces two concepts: dtensor.Mesh and dtensor.Layout. They are abstractions to model the sharding of tensors across topologically related devices. Mesh defines the device list for computation. Layout defines how to shard the Tensor dimension on a Mesh. Mesh Mesh repr...
mesh_1d = dtensor.create_mesh([('x', 6)], devices=DEVICES) print(mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
A Mesh can be multi dimensional as well. In the following example, 6 CPU devices form a 3x2 mesh, where the 'x' mesh dimension has a size of 3 devices, and the 'y' mesh dimension has a size of 2 devices: <img src="https://www.tensorflow.org/images/dtensor/dtensor_mesh_2d.png" alt="A 2 dimensional mesh with 6 CPUs" ...
mesh_2d = dtensor.create_mesh([('x', 3), ('y', 2)], devices=DEVICES) print(mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Layout Layout specifies how a tensor is distributed, or sharded, on a Mesh. Note: In order to avoid confusion between Mesh and Layout, the term dimension is always associated with Mesh, and the term axis with Tensor and Layout in this guide. The rank of Layout should be the same as the rank of the Tensor where the Lay...
layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Using the same tensor and mesh the layout Layout(['unsharded', 'x']) would shard the second axis of the tensor across the 6 devices. <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_rank1.png" alt="A tensor sharded across a rank-1 mesh" class="no-filter">
layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Given a 2-dimensional 3x2 mesh such as [("x", 3), ("y", 2)], (mesh_2d from the previous section), Layout(["y", "x"], mesh_2d) is a layout for a rank-2 Tensor whose first axis is sharded across mesh dimension "y", and whose second axis is sharded across mesh dimension "x". <img src="https://www.tensorflow.org/ima...
layout = dtensor.Layout(['y', 'x'], mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
For the same mesh_2d, the layout Layout(["x", dtensor.UNSHARDED], mesh_2d) is a layout for a rank-2 Tensor that is replicated across "y", and whose first axis is sharded on mesh dimension x. <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_hybrid.png" alt="A tensor replicated across mesh-dimension y, ...
layout = dtensor.Layout(["x", dtensor.UNSHARDED], mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Single-Client and Multi-Client Applications DTensor supports both single-client and multi-client applications. The colab Python kernel is an example of a single client DTensor application, where there is a single Python process. In a multi-client DTensor application, multiple Python processes collectively perform as a ...
def dtensor_from_array(arr, layout, shape=None, dtype=None):
  """Convert a DTensor from something that looks like an array or Tensor.

  This function is convenient for quick doodling DTensors from a known,
  unsharded data object in a single-client environment. This is not the
  most efficient way of creating a DTens...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Anatomy of a DTensor A DTensor is a tf.Tensor object, but augmented with the Layout annotation that defines its sharding behavior. A DTensor consists of the following: Global tensor meta-data, including the global shape and dtype of the tensor. A Layout, which defines the Mesh the Tensor belongs to, and how the Tenso...
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES) layout = dtensor.Layout([dtensor.UNSHARDED], mesh) my_first_dtensor = dtensor_from_array([0, 1], layout) # Examine the dtensor content print(my_first_dtensor) print("global shape:", my_first_dtensor.shape) print("dtype:", my_first_dtensor.dtype)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Layout and fetch_layout The layout of a DTensor is not a regular attribute of tf.Tensor. Instead, DTensor provides a function, dtensor.fetch_layout to access the layout of a DTensor.
print(dtensor.fetch_layout(my_first_dtensor))
assert layout == dtensor.fetch_layout(my_first_dtensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Component tensors, pack and unpack A DTensor consists of a list of component tensors. The component tensor for a device in the Mesh is the Tensor object representing the piece of the global DTensor that is stored on this device. A DTensor can be unpacked into component tensors through dtensor.unpack. You can make use o...
for component_tensor in dtensor.unpack(my_first_dtensor):
    print("Device:", component_tensor.device, ",", component_tensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
As shown, my_first_dtensor is a tensor of [0, 1] replicated to all 6 devices. The inverse operation of dtensor.unpack is dtensor.pack. Component tensors can be packed back into a DTensor. The components must have the same rank and dtype, which will be the rank and dtype of the returned DTensor. However there is no stri...
packed_dtensor = dtensor.pack(
    [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]],
    layout=layout)
print(packed_dtensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
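A short round trip illustrating that pack is the inverse of unpack (a sketch assuming the mesh, layout and my_first_dtensor defined above):

# unpack into per-device component tensors, then pack them back into a DTensor
components = dtensor.unpack(my_first_dtensor)
roundtrip = dtensor.pack(components, layout=layout)
print(roundtrip)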
Sharding a DTensor to a Mesh So far you've worked with the my_first_dtensor, which is a rank-1 DTensor fully replicated across a dim-1 Mesh. Next create and inspect DTensors that are sharded across a dim-2 Mesh. The next example does this with a 3x2 Mesh on 6 CPU devices, where size of mesh dimension 'x' is 3 devices, ...
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Fully sharded rank-2 Tensor on a dim-2 Mesh Create a 3x2 rank-2 DTensor, sharding its first axis along the 'x' mesh dimension, and its second axis along the 'y' mesh dimension. Because the tensor shape matches the mesh dimensions along all of the sharded axes, each device receives a single element of the DTensor. The...
fully_sharded_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout(["x", "y"], mesh))

for raw_component in dtensor.unpack(fully_sharded_dtensor):
    print("Device:", raw_component.device, ",", raw_component)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Fully replicated rank-2 Tensor on a dim-2 Mesh For comparison, create a 3x2 rank-2 DTensor, fully replicated to the same dim-2 Mesh. Because the DTensor is fully replicated, each device receives a full replica of the 3x2 DTensor. The rank of the component tensors is the same as the rank of the global shape -- this fa...
fully_replicated_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh))
# Or, layout=tensor.Layout.fully_replicated(mesh, rank=2)

for component_tensor in dtensor.unpack(fully_replicated_dtensor):
    print("Device:", component_tensor.de...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Hybrid rank-2 Tensor on a dim-2 Mesh What about somewhere between fully sharded and fully replicated? DTensor allows a Layout to be a hybrid, sharded along some axes, but replicated along others. For example, you can shard the same 3x2 rank-2 DTensor in the following way: 1st axis sharded along the 'x' mesh dimension....
hybrid_sharded_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout(['x', dtensor.UNSHARDED], mesh))

for component_tensor in dtensor.unpack(hybrid_sharded_dtensor):
    print("Device:", component_tensor.device, ",", component_tensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
You can inspect the component tensors of the created DTensor and verify they are indeed sharded according to your scheme. It may be helpful to illustrate the situation with a chart: <img src="https://www.tensorflow.org/images/dtensor/dtensor_hybrid_mesh.png" alt="A 3x2 hybrid mesh with 6 CPUs" class="no-filter" wi...
print(fully_replicated_dtensor.numpy())

try:
    fully_sharded_dtensor.numpy()
except tf.errors.UnimplementedError:
    print("got an error as expected for fully_sharded_dtensor")

try:
    hybrid_sharded_dtensor.numpy()
except tf.errors.UnimplementedError:
    print("got an error as expected for hybrid_sharded_dtensor")
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
TensorFlow API on DTensor DTensor strives to be a drop-in replacement for tensor in your program. The TensorFlow Python APIs that consume tf.Tensor, such as the Ops library functions, tf.function and tf.GradientTape, also work with DTensor. To accomplish this, for each TensorFlow Graph, DTensor produces and executes an equ...
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES) layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh) a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=layout) b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=layout) c = tf.matmul(a, b) # runs 6 identical matmuls in parallel on 6 dev...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Sharding operands along the contracted axis You can reduce the amount of computation per device by sharding the operands a and b. A popular sharding scheme for tf.matmul is to shard the operands along the axis of the contraction, which means sharding a along the second axis, and b along the first axis. The global matri...
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES) a_layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh) a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=a_layout) b_layout = dtensor.Layout(['x', dtensor.UNSHARDED], mesh) b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=b_layout) c = tf....
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Additional Sharding You can perform additional sharding on the inputs, and they are appropriately carried over to the results. For example, you can apply additional sharding of operand a along its first axis to the 'y' mesh dimension. The additional sharding will be carried over to the first axis of the result c. Total...
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES) a_layout = dtensor.Layout(['y', 'x'], mesh) a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=a_layout) b_layout = dtensor.Layout(['x', dtensor.UNSHARDED], mesh) b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=b_layout) c = tf.matmul(a, b) ...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
DTensor as Output What about Python functions that do not take operands, but return a Tensor result that can be sharded? Examples of such functions are tf.ones, tf.zeros, and tf.random.stateless_normal. For these Python functions, DTensor provides dtensor.call_with_layout, which eagerly executes a Python function with DT...
help(dtensor.call_with_layout)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
The eagerly executed Python function usually contains only a single non-trivial TensorFlow Op. To use a Python function that emits multiple TensorFlow Ops with dtensor.call_with_layout, the function should be converted to a tf.function. Calling a tf.function is a single TensorFlow Op. When the tf.function is called, DTe...
help(tf.ones)

mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
ones = dtensor.call_with_layout(tf.ones, dtensor.Layout(['x', 'y'], mesh), shape=(6, 4))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
APIs that emit multiple TensorFlow Ops If the API emits multiple TensorFlow Ops, convert the function into a single Op through tf.function. For example, tf.random.stateless_normal.
help(tf.random.stateless_normal)

ones = dtensor.call_with_layout(
    tf.function(tf.random.stateless_normal),
    dtensor.Layout(['x', 'y'], mesh),
    shape=(6, 4), seed=(1, 1))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Wrapping a Python function that emits a single TensorFlow Op with tf.function is allowed. The only caveat is paying the associated cost and complexity of creating a tf.function from a Python function.
ones = dtensor.call_with_layout(
    tf.function(tf.ones),
    dtensor.Layout(['x', 'y'], mesh),
    shape=(6, 4))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
From tf.Variable to dtensor.DVariable In TensorFlow, tf.Variable is the holder for a mutable Tensor value. With DTensor, the corresponding variable semantics is provided by dtensor.DVariable. The reason a new type, DVariable, was introduced for DTensor variables is that DVariables have an additional requirement that th...
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES) layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh) v = dtensor.DVariable( initial_value=dtensor.call_with_layout( tf.function(tf.random.stateless_normal), layout=layout, shape=tf.TensorShape([64, 32]), seed=[...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Other than the requirement on matching the layout, a DVariable behaves the same as a tf.Variable. For example, you can add a DVariable to a DTensor,
a = dtensor.call_with_layout(tf.ones, layout=layout, shape=(64, 32))
b = v + a  # add DVariable and DTensor
print(b)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
You can also assign a DTensor to a DVariable.
v.assign(a)  # assign a DTensor to a DVariable
print(a)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Attempting to mutate the layout of a DVariable by assigning a DTensor with an incompatible layout produces an error.
# A variable's layout is immutable.
another_mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
b = dtensor.call_with_layout(
    tf.ones,
    layout=dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], another_mesh),
    shape=(64, 32))

try:
    v.assign(b)
except:
    print("exc...
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
Reading TSV files
CWD = osp.join(osp.expanduser('~'), 'documents', 'grants_projects',
               'roberto_projects', 'guillaume_huguet_CNV', 'File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)

arr = np.loadtxt(fullfname, dtype='str', comments=None, delimiter='\Tab',
                 conv...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Transforming the "Pvalue_MMAP_V2_..." column into a danger score. Testing the function danger_score.
assert util._test_danger_score_1() assert util._test_danger_score()
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
QUESTION for Guillaume: what do the '' values in the column "Pvalue_MMAP_V2_sans_intron_and_Intergenic" (danger) correspond to? Answer: CNVs for which we have no dangerosity information.
""" danger_not_empty = dangers != '' danger_scores = dangers[danger_not_empty] danger_scores = np.asarray([util.danger_score(pstr, util.pH1_with_apriori) for pstr in danger_scores]) """;
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
To be or not to be a CNV: p value from the 'SCORE' column
reload(util)

# get the scores
scores = np.asarray([line.split('\t')[i_score] for line in arr[1:]])
assert len(scores) == expected_nb_values
print(len(np.unique(scores)))

#tmp_score = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0] for s in scores])
assert scores.shape[0] == EXPECTED_LINES - 1

# h = plt.h...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Replace the zero scores by the maximum score (cf. Guillaume's procedure).
scoresf = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0]
                      for s in scores])
print(scoresf.max(), scoresf.min(), (scoresf == 0).sum())

#clean_score = util.process_scores(scores)
#h = plt.hist(clean_score[clean_score < 60], bins=100)
#h = plt.hist(sc...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Transforms the scores into P(cnv is real)
# Creating a function from score to proba from Guillaume's values
p_cnv = util._build_dict_prob_cnv()
#print(p_cnv.keys())
#print(p_cnv.values())

#### Definition with a piecewise linear function
#score2prob = util.create_score2prob_lin_piecewise(p_cnv)
#scores = np.arange(15, 50, 1)
#probs = [score2prob(sc) for sc in sc...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
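The helper util.create_score2prob_lin_piecewise is not shown; a self-contained sketch of the piecewise-linear idea using np.interp, with made-up anchor points standing in for Guillaume's values:

import numpy as np

# hypothetical anchor points: SCORE value -> P(cnv is real)
anchor_scores = np.array([15., 20., 25., 30., 40.])
anchor_probs = np.array([0.10, 0.50, 0.80, 0.95, 0.99])

def score2prob(score):
    # piecewise-linear interpolation, clipped at the end points
    return np.interp(score, anchor_scores, anchor_probs)

print(score2prob(22.0))  # falls between the 20 and 25 anchors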
Finally, putting things together
# re-loading
reload(util)

CWD = osp.join(osp.expanduser('~'), 'documents', 'grants_projects',
               'roberto_projects', 'guillaume_huguet_CNV', 'File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)

# in numpy array
arr = np.loadtxt(fullfname, dtype='str', comments=No...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Create a dict of the cnv
from collections import OrderedDict

cnv = OrderedDict()
names_from = ["CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
#, "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
blank_dgr = 0

for line in arr[1:]:
    lline = line.split('\t')
    dgr = lline[i_DANGER]
    scr = lline[i_SCORE]
    cnv_...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Create a dictionary of the subjects -
cnv = OrderedDict({})
#names_from = ['START', 'STOP', "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
names_from = ['IID_projet']

for line in arr[1:]:
    lline = line.split('\t')
    dgr = lline[i_DANGER]
    scr = lline[i_SCORE]
    sub_iid = util.make_uiid(line, names_from, arr[0])
    if dgr != '':
        a...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Histogram of the number of cnv used to compute dangerosity
print(len(cnv))
nbcnv = [len(cnv[sb]) for sb in cnv]
hist = plt.hist(nbcnv, bins=50)
print(np.max(np.asarray(nbcnv)))

# definition of dangerosity from a list of cnv
def dangerosity(listofcnvs):
    """
    inputs: list of tuples (danger_score, proba_cnv)
    returns: a dangerosity score
    """
    last = -1
    # slicing th...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Testing dangerosity
for k in range(1, 30, 30):
    print(cnv[cnv.keys()[k]], ' yields ', dangerosity(cnv[cnv.keys()[k]]))

test_dangerosity_input = [[(1., .5), (1., .5), (1., .5), (1., .5)],
                           [(2., 1.)],
                           [(10000., 0.)]]
test_dangerosity_output = [2., 2., 0]

#print( [dangerosity(icnv) ...
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Printing out results
dtime = datetime.now().strftime("%y-%m-%d_h%H-%M")
outfile = dtime + 'dangerosity_cnv.txt'
fulloutfile = osp.join(CWD, outfile)

with open(fulloutfile, 'w') as outf:
    for sub in cnv:
        outf.write("\t".join([sub, str(dangerosity(cnv[sub]))]) + "\n")
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
Testing of playing pyguessgame: generate random numbers and play a game. Create two random lists of numbers in the buckets 0-9, 10-19, 20-29, and so on up to 100. Compare the two lists; mark a win or a loss for each match. Debian
#for ronum in ranumlis:
#    print ronum

randict = dict()
othgues = []
othlow = 0
othhigh = 9

for ranez in range(10):
    randxz = random.randint(othlow, othhigh)
    othgues.append(randxz)
    othlow = (othlow + 10)
    othhigh = (othhigh + 10)

#print othgues

tenlis = ['zero', 'ten', 'twenty', 'thirty',...
pggNumAdd.ipynb
wcmckee/signinlca
mit
Makes a dict with keys pointing to the tens numbers. The values need the list of random numbers to be filled in; currently the code just adds the number one. How can the random-number list be added? (See the sketch after the code below.)
for ronum in ranumlis:
    #print ronum
    if ronum in othgues:
        print (str(ronum) + ' You Win!')
    else:
        print (str(ronum) + ' You Lose!')

#dieci = dict()
#for ranz in range(10):
#    #print str(ranz) + str(1)
#    dieci.update({str(ranz) + str(1): str(ranz)})
#    for numz in range(10):
...
pggNumAdd.ipynb
wcmckee/signinlca
mit
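One possible answer to the question above, sketched with self-contained stand-ins for tenlis and the random draws (hypothetical structure, not the notebook's final code): map each tens label to the list of numbers drawn from that bucket.

import random

tenlis = ['zero', 'ten', 'twenty', 'thirty', 'forty',
          'fifty', 'sixty', 'seventy', 'eighty', 'ninety']

randict = {}
low, high = 0, 9
for name in tenlis:
    # store the list of random numbers for this bucket, not just the number one
    randict[name] = [random.randint(low, high) for _ in range(3)]
    low += 10
    high += 10

print(randict)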
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Creating a Pandas DataFrame from a CSV file<br></p>
data = pd.read_csv('./weather/daily_weather.csv')
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Daily Weather Data Description</p> <br> The file daily_weather.csv is a comma-separated file that contains weather data. This data comes from a weather station located in San Diego, California. The weather station is equipped with sensors t...
data.columns
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<br>Each row in daily_weather.csv captures weather data for a separate day. <br><br> Sensor measurements from the weather station were captured at one-minute intervals. These measurements were then processed to generate values to describe daily weather. Since this dataset was created to classify low-humidity days vs....
data
data[data.isnull().any(axis=1)]
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Data Cleaning Steps<br><br></p> We will not need to number for each row so we can clean it.
del data['number']
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
Now let's drop null values using the pandas dropna function.
before_rows = data.shape[0]
print(before_rows)

data = data.dropna()

after_rows = data.shape[0]
print(after_rows)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> How many rows dropped due to cleaning?<br><br></p>
before_rows - after_rows
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"> Convert to a Classification Task <br><br></p> Binarize the relative_humidity_3pm to 0 or 1.<br>
clean_data = data.copy()
clean_data['high_humidity_label'] = (clean_data['relative_humidity_3pm'] > 24.99) * 1
print(clean_data['high_humidity_label'])
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Target is stored in 'y'. <br><br></p>
y = clean_data[['high_humidity_label']].copy()
#y
clean_data['relative_humidity_3pm'].head()
y.head()
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Use 9am Sensor Signals as Features to Predict Humidity at 3pm <br><br></p>
morning_features = ['air_pressure_9am', 'air_temp_9am', 'avg_wind_direction_9am',
                    'avg_wind_speed_9am', 'max_wind_direction_9am',
                    'max_wind_speed_9am', 'rain_accumulation_9am',
                    'rain_duration_9am']
X = clean_data[morning_features].copy()

X.columns
y.columns
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Perform Test and Train split <br><br></p> REMINDER: Training Phase In the training phase, the learning algorithm uses the training data to adjust the model’s parameters to minimize errors. At the end of the training phase, you get th...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)

#type(X_train)
#type(X_test)
#type(y_train)
#type(y_test)
#X_train.head()
#y_train.describe()
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Fit on Train Set <br><br></p>
humidity_classifier = DecisionTreeClassifier(max_leaf_nodes=10, random_state=0)
humidity_classifier.fit(X_train, y_train)
type(humidity_classifier)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Predict on Test Set <br><br></p>
predictions = humidity_classifier.predict(X_test)
predictions[:10]
y_test['high_humidity_label'][:10]
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Measure Accuracy of the Classifier <br><br></p>
accuracy_score(y_true = y_test, y_pred = predictions)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
2 Using a class decorator
from functools import wraps

def singleton(cls):
    instances = {}
    @wraps(cls)
    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return wrapper

@singleton
class MyClass(object):
    pass

myclass1 ...
python-statatics-tutorial/advance-theme/Singleton.ipynb
gaufung/Data_Analytics_Learning_Note
mit
3 Using a getInstance method (not thread-safe)
class MySingleton(object):
    @classmethod
    def getInstance(cls):
        if not hasattr(cls, '_instance'):
            cls._instance = cls()
        return cls._instance

mysingleton1 = MySingleton.getInstance()
mysingleton2 = MySingleton.getInstance()
print id(mysingleton1) == id(mysingleton2)
python-statatics-tutorial/advance-theme/Singleton.ipynb
gaufung/Data_Analytics_Learning_Note
mit
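Since the heading notes this variant is not thread-safe, here is a sketch of a lock-protected (double-checked locking) variant for comparison; the class name is illustrative:

import threading

class ThreadSafeSingleton(object):
    _lock = threading.Lock()
    _instance = None

    @classmethod
    def getInstance(cls):
        if cls._instance is None:           # first check without the lock
            with cls._lock:
                if cls._instance is None:   # re-check while holding the lock
                    cls._instance = cls()
        return cls._instance

a = ThreadSafeSingleton.getInstance()
b = ThreadSafeSingleton.getInstance()
print(id(a) == id(b))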
Indefinite integrals Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc. Find five of these integrals and perform the following steps: Typeset the integral using LaTeX in a Markdown cell. Define an integrand function that computes the value of the integrand. Define ...
def integrand(x, a):
    return 1.0/(x**2 + a**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return 0.5*np.pi/a

print("Numerical: ", integral_approx(1.0))
prin...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Integral 1 \begin{equation} \int_{0}^{a}{\sqrt{a^2 - x^2}} dx=\frac{\pi a^2}{4} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return (np.sqrt(a**2 - x**2))

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, a, args=(a,))
    return I

def integral_exact(a):
    return (0.25*np.pi*a**2)

print("Numerical: ", int...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Integral 2 \begin{equation} \int_{0}^{\infty} e^{-ax^2} dx =\frac{1}{2}\sqrt{\frac{\pi}{a}} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return np.exp(-a*x**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return 0.5*np.sqrt(np.pi/a)

print("Numerical: ", in...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Integral 3 \begin{equation} \int_{0}^{\infty} \frac{x}{e^x-1} dx =\frac{\pi^2}{6} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return x/(np.exp(x)-1)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return (1/6.0)*np.pi**2

print("Numerical: ", integr...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Integral 4 \begin{equation} \int_{0}^{\infty} \frac{x}{e^x+1} dx =\frac{\pi^2}{12} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return x/(np.exp(x)+1)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return (1/12.0)*np.pi**2

print("Numerical: ", integ...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Integral 5 \begin{equation} \int_{0}^{1} \frac{\ln x}{1-x} dx =-\frac{\pi^2}{6} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return np.log(x)/(1-x)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, 1, args=(a,))
    return I

def integral_exact(a):
    return (-1.0/6.0)*np.pi**2

print("Numerical: ", integral...
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
Let's analyze this graph:
- the first IR basic block has its name set to main
- it is composed of 2 AssignBlocks
- the first AssignBlock contains only one assignment, EAX = EBX
- the second one is IRDst = loc_key_1

The IRDst is a special register which represents a kind of program counter in intermediate representation. ...
graph_ir_x86(""" main: ADD EAX, 3 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
In this graph, we can note that each instruction's side effects are represented. Note the equation: zf = FLAG_EQ_CMP(EAX, -0x3). The detailed version of the expression is: ExprId('zf', 1) = ExprOp('FLAG_EQ_CMP', ExprId('EAX', 32), ExprInt(-0x3, 32)). The operator FLAG_EQ_CMP is a kind of high-level representation. But y...
graph_ir_x86(""" main: XCHG EAX, EBX """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
This one is interesting, as it demonstrates perfectly the parallel execution of multiple assignments. If you are puzzled by this notation, imagine it describes equations which express the destination variables of an output state as functions of an input state. The equations can be rewritten: EAX_out = EBX_in EBX_out = EAX...
# Here is a push
graph_ir_x86("""
main:
    PUSH EAX
""")

graph_ir_x86("""
main:
    CMOVZ EAX, EBX
""")
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
Here are some remarks we can make on this version:
- one x86 instruction has generated multiple IRBlocks
- the first IRBlock only reads the zf (we don't take the locations into account here)
- EAX is assigned only in the case where zf equals 1
- EBX is read only in the case where zf equals 1

We can dispute the fact th...
graph_ir_x86(""" main: JZ end MOV EAX, EBX end: """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
The conclusion is that, in intermediate representation, the cmovz is exactly as difficult to analyze as the equivalent code using jz/mov. So an important point is that in intermediate representation, one instruction can generate multiple IRBlocks. Here are some interesting examples:
graph_ir_x86(""" main: MOVSB """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
And now, the version using a repeat prefix:
graph_ir_x86(""" main: REP MOVSB """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
In the very same way as cmovz, if the rep movsb instruction didn't exist, we would use more complex code. The translation of some instructions is tricky:
graph_ir_x86(""" main: SHR EAX, 1 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
For the moment, nothing special. EAX is updated correctly, and the flags are updated according to the result (note those side effects are in parallel here). But look at the next one:
graph_ir_x86(""" main: SHR EAX, CL """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
In this case, if CL is zero, the destination is shifted by a zero amount. The instruction behaves (in 32 bit mode) as a nop, and the flags are not assigned. We could have done the same trick as in the cmovz, but this representation matches more accurately the instruction semantic. Here is another one:
graph_ir_x86(""" main: DIV ECX """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
This instruction may generate an exception in case the divisor is zero. The intermediate representation generates a test in which it evaluates the divisor value and assigns a special register exception_flags to a constant. This constant represents the division by zero. Note this is arbitrary. We could have done the c...
graph_ir_x86(""" main: INT 0x3 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
Memory accesses use explicit segmentation by default:
graph_ir_x86(""" main: MOV EAX, DWORD PTR FS:[EBX] """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
The pointer of the memory access uses the special operator segm, which takes two arguments: the value of the segment used for the memory access, and the base address. Note that if you work in a flat segmentation model, you can add a post-translation pass which will simplify ExprOp("segm", A, B) into B. This will ease code analysis....
asmcfg = gen_x86_asmcfg("""
main:
    CALL 0x11223344
    MOV EBX, EAX
""")
asmcfg.graphviz()

graph_ir_x86("""
main:
    CALL 0x11223344
    MOV EBX, EAX
""")
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
What happened here?
- the call instruction has 2 side effects: stacking the return address and jumping to the subfunction address
- here, the subfunction address is 0x11223344, and the return address is located at offset 0x5, which is represented here by loc_5

The question is: why are there unlinked nodes in the...
graph_ir_x86(""" main: MOV EBX, 0x1234 CALL 0x11223344 MOV ECX, EAX RET """, True)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
What happened here? The translation of the call is replaced by two side effects which occur in parallel: - EAX is set to the result of the operator call_func_ret which has two arguments: loc_11223344 and ESP - ESP is set to the result of the operator call_func_stack which has two arguments: loc_11223344 and ESP The fir...
# Construct a custom lifter
class LifterFixCallStack(LifterModelCall_x86_32):
    def call_effects(self, addr, instr):
        if addr.is_loc():
            if self.loc_db.get_location_offset(addr.loc_key) == 0x11223344:
                # Suppose the function at 0x11223344 has 3 arguments
                ...
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
Read File Containing Zones Using the read_zbarray utility, we can import zonebudget-style array files.
from flopy.utils import read_zbarray

zone_file = os.path.join(loadpth, 'zonef_mlt')
zon = read_zbarray(zone_file)
nlay, nrow, ncol = zon.shape

fig = plt.figure(figsize=(10, 4))
for lay in range(nlay):
    ax = fig.add_subplot(1, nlay, lay+1)
    im = ax.pcolormesh(zon[lay, :, :])
    cbar = plt.colorbar(im)
    plt....
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause
Extract Budget Information from ZoneBudget Object At the core of the ZoneBudget object is a numpy structured array. The class provides some wrapper functions to help us interrogate the array and save it to disk.
# Create a ZoneBudget object and get the budget record array
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 1096))
zb.get_budget()

# Get a list of the unique budget record names
zb.get_record_names()

# Look at a subset of fluxes
names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN']
zb.get_budget(names=names)

# Loo...
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause
Convert Units The ZoneBudget class supports the use of mathematical operators and returns a new copy of the object.
cmd = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 0))
cfd = cmd / 35.3147
inyr = (cfd / (250 * 250)) * 365 * 12

cmdbud = cmd.get_budget()
cfdbud = cfd.get_budget()
inyrbud = inyr.get_budget()

names = ['RECHARGE_IN']
rowidx = np.in1d(cmdbud['name'], names)
colidx = 'ZONE_1'

print('{:,.1f} cubic meters/day'.format...
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause