3. Initialize and run the merger
# init & run polygon merger
pm = Polygon_merger_v2(contours_df, verbose=1)
pm.unique_groups.remove("roi")
pm.run()
docs/examples/polygon_merger_using_rtree.ipynb
DigitalSlideArchive/HistomicsTK
apache-2.0
NOTE: The following steps are only "aesthetic", and just ensure the contours look nice when posted to Digital Slide Archive for viewing with GeoJS.
# add colors (aesthetic)
for group in pm.unique_groups:
    cs = contours_df.loc[contours_df.loc[:, "group"] == group, "color"]
    pm.new_contours.loc[
        pm.new_contours.loc[:, "group"] == group, "color"] = cs.iloc[0]

# get rid of nonenclosed stroma (aesthetic)
pm.new_contours = _discard_nonenclosed_background_group(
    pm.new_contours, background_group="mostly_stroma")
docs/examples/polygon_merger_using_rtree.ipynb
DigitalSlideArchive/HistomicsTK
apache-2.0
This is the result
pm.new_contours.head()
docs/examples/polygon_merger_using_rtree.ipynb
DigitalSlideArchive/HistomicsTK
apache-2.0
4. Visualize results on HistomicsTK
# deleting existing annotations in target slide (if any)
existing_annotations = gc.get('/annotation/item/' + POST_SLIDE_ID)
for ann in existing_annotations:
    gc.delete('/annotation/%s' % ann['_id'])

# get list of annotation documents
annotation_docs = get_annotation_documents_from_contours(
    pm.new_contours.copy(), separate_docs_by_group=True,
    docnamePrefix='test', verbose=False,
    monitorPrefix=POST_SLIDE_ID + ": annotation docs")

# post annotations to slide -- make sure it posts without errors
for annotation_doc in annotation_docs:
    resp = gc.post(
        "/annotation?itemId=" + POST_SLIDE_ID, json=annotation_doc)
docs/examples/polygon_merger_using_rtree.ipynb
DigitalSlideArchive/HistomicsTK
apache-2.0
Clusterized ranking
M = np.array([
    [5, 3, 1, 2, 8, 4, 6, 7],
    [5, 4, 3, 1, 8, 2, 6, 7],
    [1, 7, 5, 4, 8, 2, 3, 6],
    [6, 4, 2.5, 2.5, 8, 1, 7, 5],
    [8, 2, 4, 6, 3, 5, 1, 7],
    [5, 6, 4, 3, 2, 1, 7, 8],
    [6, 1, 2, 3, 5, 4, 8, 7],
    [5, 1, 3, 2, 7, 4, 6, 8],
    [6, 1, 3, 2, 5, 4, 7, 8],
    [5, 3, 2, 1, 8, 4, 6, 7],
    [7, 1, 3, 2, 6, 4, 5, 8],
    [1, 6, 5, 3, 8, 4, 2, 7]
])
n, m = M.shape
decision_theory/lab2.ipynb
lionell/laboratories
mit
Here is how we find the average ranking.
average_rank = rankdata(np.average(M, axis=0))
average_rank
decision_theory/lab2.ipynb
lionell/laboratories
mit
And this is how we get the median ranking.
median_rank = rankdata(np.median(M, axis=0))
median_rank
decision_theory/lab2.ipynb
lionell/laboratories
mit
Next we need to compute the kernel of disagreement: the pairs of alternatives on which the average and median rankings disagree about the order.
adj = np.zeros((m, m), dtype=bool)  # np.bool is deprecated; plain bool works the same
kernel = []
for i in range(m):
    for j in range(i + 1, m):
        if (average_rank[i] - average_rank[j]) * (median_rank[i] - median_rank[j]) < 0:
            kernel.append([i, j])
            adj[i][j] = adj[j][i] = True
kernel
decision_theory/lab2.ipynb
lionell/laboratories
mit
Now that we have the disagreement graph, we can easily find a full connected component via depth-first search.
def dfs(i, used):
    if i in used:
        return []
    used.add(i)
    res = [i]
    for j in range(m):
        if adj[i][j]:
            res += dfs(j, used)
    return res
decision_theory/lab2.ipynb
lionell/laboratories
mit
The last thing to do is to iterate in the correct order, emitting a whole cluster whenever one is encountered.
order = sorted(range(m), key=lambda i: (average_rank[i], median_rank[i]))
order

result = []
used = set()
for i in order:
    cluster = dfs(i, used)
    if len(cluster) > 0:
        result.append(cluster)
result
decision_theory/lab2.ipynb
lionell/laboratories
mit
Kemeny distance
# Groups have different sizes, so store the ragged rankings as an object array
rankings = np.array([
    [[1], [2, 3], [4], [5], [6, 7]],
    [[1, 3], [4], [2], [5], [7], [6]],
    [[1], [4], [2], [3], [6], [5], [7]],
    [[1], [2, 4], [3], [5], [7], [6]],
    [[2], [3], [4], [5], [1], [6], [7]],
    [[1], [3], [2], [5], [6], [7], [4]],
    [[1], [5], [3], [4], [2], [6], [7]]
], dtype=object)
n = rankings.shape[0]
decision_theory/lab2.ipynb
lionell/laboratories
mit
We need to be able to build a relation matrix out of a ranking.
def build(x):
    n = sum(map(lambda r: len(r), x))  # Total amount of objects
    m = np.zeros((n, n), dtype=bool)   # np.bool is deprecated; plain bool works the same
    for r in x:
        for i in r:
            for j in range(n):
                if not m[j][i - 1] or j + 1 in r:
                    m[i - 1][j] = True
    return m
decision_theory/lab2.ipynb
lionell/laboratories
mit
Now we can calculate the Kemeny distance between each pair of rankings.
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i][j] = np.sum(build(rankings[i]) ^ build(rankings[j]))
dist
decision_theory/lab2.ipynb
lionell/laboratories
mit
Let's find the Kemeny median of the rankings.
median = np.argmin(np.sum(dist, axis=1))
rankings[median]
decision_theory/lab2.ipynb
lionell/laboratories
mit
"Classic" use with cell magic
%%ferret -s 600,400
set text/font=arial
use monthly_navy_winds.cdf
show data/full
plot uwnd[i=@ave,j=@ave,l=@sbx:12]
notebooks/ferretmagic_06_InteractWidget.ipynb
PBrockmann/ipython_ferretmagic
mit
Explore interactive widgets
from ipywidgets import interact

@interact(var=['uwnd', 'vwnd'], smooth=(1, 20), vrange=(0.5, 5, 0.5))
def plot(var='uwnd', smooth=5, vrange=1):
    %ferret_run -s 600,400 'ppl color 6, 70, 70, 70; plot/grat=(dash,color=6)/vlim=-%(vrange)s:%(vrange)s %(var)s[i=@ave,j=@ave], %(var)s[i=@ave,j=@ave,l=@sbx:%(smooth)s]' % locals()
notebooks/ferretmagic_06_InteractWidget.ipynb
PBrockmann/ipython_ferretmagic
mit
Another example with a map
# The line of code to make interactive
%ferret_run -q -s 600,400 'cancel mode logo; \
ppl color 6, 70, 70, 70; \
shade/grat=(dash,color=6) %(var)s[l=%(lstep)s] ; \
go land' % {'var': 'uwnd', 'lstep': '3'}

import ipywidgets as widgets
from ipywidgets import interact

play = widgets.Play(
    value=1,
    min=1,
    max=10,
    step=1,
    description="Press play",
    disabled=False
)
slider = widgets.IntSlider(min=1, max=10)
widgets.jslink((play, 'value'), (slider, 'value'))
a = widgets.HBox([play, slider])

@interact(var=['uwnd', 'vwnd'], lstep=slider, lstep1=play)
def plot(var='uwnd', lstep=1, lstep1=1):
    %ferret_run -q -s 600,400 'cancel mode logo; \
    ppl color 6, 70, 70, 70; \
    shade/grat=(dash,color=6)/lev=(-inf)(-10,10,2)(inf)/pal=mpl_Div_PRGn.spk %(var)s[l=%(lstep)s] ; \
    go land' % locals()
notebooks/ferretmagic_06_InteractWidget.ipynb
PBrockmann/ipython_ferretmagic
mit
A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications. Before proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic. Introductory Notation The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity. Spatial and Energy Discretization The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in \{1, 2, ..., G\}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation. Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in \{1, 2, ..., K\}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization. General Scalar-Flux Weighted MGXS The multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\sigma_{n,x,k,g}$ as follows: $$\sigma_{n,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$ This scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types.
These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator. Multi-Group Scattering Matrices The general multi-group cross section $\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes. We denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\sigma_{n,s}(\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\sigma_{n,s,k,g \to g'}$ as follows: $$\sigma_{n,s,k,g\rightarrow g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,s}(\mathbf{r},E'\rightarrow E'')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$ This scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters. Multi-Group Fission Spectrum The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$. Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n}(\mathbf{r},E)$. The multi-group fission spectrum $\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$. Similar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\chi_{n,k,g}$ as follows: $$\chi_{n,k,g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n}(\mathbf{r},E'\rightarrow E'')\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$ The fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters. 
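To make the flux-weighting above concrete, here is a minimal numerical sketch (not part of the original notebook) of the general MGXS formula: it condenses a toy continuous-energy cross section onto a 2-group structure using a discretized flux. All values are invented purely for illustration and are not real nuclear data.

import numpy as np

# Hypothetical fine-energy grid (eV), flux, and microscopic cross section.
E = np.logspace(-5, 7, 1000)            # energy grid points
phi = 1.0 / E                           # toy 1/E flux spectrum
sigma = 10.0 / np.sqrt(E) + 2.0         # toy cross section (barns)

group_edges = [1e-5, 0.625, 1e7]        # 2-group structure with a 0.625 eV thermal cutoff

# Note: groups are indexed here from low to high energy; the text's convention
# numbers them from high to low.
for g, (lo, hi) in enumerate(zip(group_edges[:-1], group_edges[1:]), start=1):
    mask = (E >= lo) & (E < hi)
    # sigma_g = integral(sigma * phi) / integral(phi) over the group
    sigma_g = np.trapz(sigma[mask] * phi[mask], E[mask]) / np.trapz(phi[mask], E[mask])
    print('group %d: %.3f barns' % (g, sigma_g))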
This concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations. Generate Input Files
import numpy as np
import matplotlib.pyplot as plt

import openmc
import openmc.mgxs as mgxs

%matplotlib inline
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
With the nuclides we defined, we will now create a material for the homogeneous medium.
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.028999667)
inf_medium.add_nuclide(o16, 0.01450188)
inf_medium.add_nuclide(u235, 0.000114142)
inf_medium.add_nuclide(u238, 0.006886019)
inf_medium.add_nuclide(zr90, 0.002116053)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
With our material, we can now create a MaterialsFile object that can be exported to an actual XML file.
# Instantiate a MaterialsFile, register all Materials, and export to XML
materials_file = openmc.MaterialsFile()
materials_file.default_xs = '71c'
materials_file.add_material(inf_medium)
materials_file.export_to_xml()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')

# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y

# Fill the Cell with the Material
cell.fill = inf_medium
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
We now must create a geometry that is assigned a root universe, put the geometry into a GeometryFile object, and export it to XML.
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe

# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = openmc_geometry

# Export to "geometry.xml"
geometry_file.export_to_xml()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500

# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.set_source_space('fission', bounds)

# Export to "settings.xml"
settings_file.export_to_xml()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
# Instantiate a 2-group EnergyGroups object
groups = mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625e-6, 20.])
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class: TotalXS, TransportXS, AbsorptionXS, CaptureXS, FissionXS, NuFissionXS, ScatterXS, NuScatterXS, ScatterMatrixXS, NuScatterMatrixXS, and Chi. These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.
# Instantiate a few different sections
total = mgxs.TotalXS(domain=cell, domain_type='cell', groups=groups)
absorption = mgxs.AbsorptionXS(domain=cell, domain_type='cell', groups=groups)
scattering = mgxs.ScatterXS(domain=cell, domain_type='cell', groups=groups)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.
absorption.tallies
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a TalliesFile object to generate the "tallies.xml" input file for OpenMC.
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()

# Add total tallies to the tallies file
for tally in total.tallies.values():
    tallies_file.add_tally(tally)

# Add absorption tallies to the tallies file
for tally in absorption.tallies.values():
    tallies_file.add_tally(tally)

# Add scattering tallies to the tallies file
for tally in scattering.tallies.values():
    tallies_file.add_tally(tally)

# Export to "tallies.xml"
tallies_file.export_to_xml()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Now we have a complete set of inputs, so we can go ahead and run our simulation.
# Run OpenMC
executor = openmc.Executor()
executor.run_simulation()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.mgxs module to properly process the tally data. We first create a Summary object and link it with the statepoint.
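The next cell refers to a statepoint object sp that was loaded in a cell not shown here. A plausible (assumed) way to obtain it with this era of the OpenMC Python API would be something like the sketch below; the filename depends on the number of batches used.

# Assumed earlier step (not shown above): open the statepoint produced by the run.
sp = openmc.StatePoint('statepoint.50.h5')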
# Load the summary file and link it with the statepoint
su = openmc.Summary('summary.h5')
sp.link_with_summary(su)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
# Load the tallies from the statepoint into each MGXS object
total.load_from_statepoint(sp)
absorption.load_from_statepoint(sp)
scattering.load_from_statepoint(sp)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Voila! Our multi-group cross sections are now ready to rock 'n roll! Extracting and Storing MGXS Data Let's first inspect our total cross section by printing it to the screen.
total.print_xs()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
df = scattering.get_pandas_dataframe()
df.head(10)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
absorption.export_xs_data(filename='absorption-xs', format='excel')
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.
total.build_hdf5_store(filename='mgxs', append=True)
absorption.build_hdf5_store(filename='mgxs', append=True)
scattering.build_hdf5_store(filename='mgxs', append=True)
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Comparing MGXS with Tally Arithmetic Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.
# Use tally arithmetic to compute the difference between the total, absorption and scattering
difference = total.xs_tally - absorption.xs_tally - scattering.xs_tally

# The difference is a derived tally which can generate Pandas DataFrames for inspection
difference.get_pandas_dataframe()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.
# Use tally arithmetic to compute the absorption-to-total MGXS ratio
absorption_to_total = absorption.xs_tally / total.xs_tally

# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
absorption_to_total.get_pandas_dataframe()

# Use tally arithmetic to compute the scattering-to-total MGXS ratio
scattering_to_total = scattering.xs_tally / total.xs_tally

# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
scattering_to_total.get_pandas_dataframe()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.
# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity
sum_ratio = absorption_to_total + scattering_to_total

# The sum of the ratios is a derived tally which can generate Pandas DataFrames for inspection
sum_ratio.get_pandas_dataframe()
docs/source/pythonapi/examples/mgxs-part-i.ipynb
mjlong/openmc
mit
From raw data to dSPM on SPM Faces dataset Runs a full pipeline using MNE-Python: artifact removal, averaging Epochs, forward model computation, and source reconstruction using dSPM on the contrast "faces - scrambled". Note: This example does quite a bit of processing, so even on a fast machine it can take several minutes to complete.
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause

import matplotlib.pyplot as plt

import mne
from mne.datasets import spm_face
from mne.preprocessing import ICA, create_eog_epochs
from mne import io, combine_evoked
from mne.minimum_norm import make_inverse_operator, apply_inverse

print(__doc__)

data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load and filter data, set up epochs
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'

raw = io.read_raw_ctf(raw_fname % 1, preload=True)  # Take first run

# Here to save memory and time we'll downsample heavily -- this is not
# advised for real data as it can effectively jitter events!
raw.resample(120., npad='auto')

picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 30, method='fir', fir_design='firwin')

events = mne.find_events(raw, stim_channel='UPPT001')

# plot the events to get an idea of the paradigm
mne.viz.plot_events(events, raw.info['sfreq'])

event_ids = {"faces": 1, "scrambled": 2}

tmin, tmax = -0.2, 0.6
baseline = None  # no baseline as high-pass is applied
reject = dict(mag=5e-12)

epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,
                    baseline=baseline, preload=True, reject=reject)

# Fit ICA, find and remove major artifacts
ica = ICA(n_components=0.95, max_iter='auto', random_state=0)
ica.fit(raw, decim=1, reject=reject)

# compute correlation scores, get bad indices sorted by score
eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)
eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')
ica.plot_scores(eog_scores, eog_inds)  # see scores the selection is based on
ica.plot_components(eog_inds)  # view topographic sensitivity of components
ica.exclude += eog_inds[:1]  # we saw the 2nd ECG component looked too dipolar
ica.plot_overlay(eog_epochs.average())  # inspect artifact removal
ica.apply(epochs)  # clean data, default in place

evoked = [epochs[k].average() for k in event_ids]

contrast = combine_evoked(evoked, weights=[-1, 1])  # Faces - scrambled

evoked.append(contrast)

for e in evoked:
    e.plot(ylim=dict(mag=[-400, 400]))

plt.show()

# estimate noise covariance
noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',
                                   rank=None)
mne-tools/mne-tools.github.io
bsd-3-clause
Visualize fields on MEG helmet
# The transformation here was aligned using the dig-montage. It's included in
# the spm_faces dataset and is named SPM_dig_montage.fif.
trans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_'
                           'raw-trans.fif')

maps = mne.make_field_map(evoked[0], trans_fname, subject='spm',
                          subjects_dir=subjects_dir, n_jobs=1)

evoked[0].plot_field(maps, time=0.170)
0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Look at the whitened evoked data
evoked[0].plot_white(noise_cov)
0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute forward model
src = data_path + '/subjects/spm/bem/spm-oct-6-src.fif'
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)
0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute inverse solution
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'

inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,
                                         loose=0.2, depth=0.8)

# Compute inverse solution on contrast
stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)
# stc.save('spm_%s_dSPM_inverse' % contrast.comment)

# Plot contrast in 3D with mne.viz.Brain if available
brain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,
                 views=['ven'], clim={'kind': 'value', 'lims': [3., 6., 9.]})
# brain.save_image('dSPM_map.png')
0.24/_downloads/5ac2a3ff8baa6aba4bf6dd1d047703e2/spm_faces_dataset_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We plot both the diameter and the puller speed on the same chart.
# Note: DataFrame.ix is deprecated in modern pandas; .loc works for these label-based slices.
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16, 10), ylim=(0.5, 3)).hlines([1.85, 1.65], 0, 3500, colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
medidas/11082015/Análisis de datos.ipynb
darkomen/TFG
cc0-1.0
In the boxplot, we can see that most of the data lie above the mean (first quartile). We will try to lower that percentage. The first approach will be to apply larger speed increments in the stretches where the diameter is between $1.80mm$ and $1.75mm$ (case 5): there we will use increments of $d_{v2}$ instead of $d_{v1}$. Comparison of Diametro X against Diametro Y to see the filament ratio.
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
medidas/11082015/Análisis de datos.ipynb
darkomen/TFG
cc0-1.0
How many assets in a Cisco Router? As some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you'll see in your daily travels. In this example, we're going to use a Cisco 2811 router to showcase the basic function. Routers, like chassis switches, have multiple components. As anyone who's ever been the ~~victim~~ owner of a Smartnet contract will know, you have individual components which have serial numbers as well, and all of them have to be reported for them to be covered. So let's see if we managed to grab all of those by first checking out how many individual items we got back in the asset list for this Cisco router.
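The cell below counts items in ciscorouter, which was populated in a cell not shown here. Based on the get_dev_asset_details function mentioned later in this walkthrough, that earlier step presumably looked roughly like the sketch below; the argument order and the IP address are assumptions, so check the pyhpeimc documentation before relying on it.

# Assumed earlier step (not shown): fetch the asset records for a single device.
# '10.101.0.1' is a placeholder IP for the Cisco 2811; the signature is a guess
# based on the get_dev_asset_details_all(auth.creds, auth.url) call used later.
ciscorouter = get_dev_asset_details('10.101.0.1', auth.creds, auth.url)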
len(ciscorouter)
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
What's in the box??? Now that we've got an idea of how many assets are in here, let's take a look at exactly what's in one of the asset records to see if there's anything useful in here.
ciscorouter[0]
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
What can we do with this? With some basic python string manipulation we could easily print out some of the attributes that we want into what could easily turn into a nicely formatted report. Again realise that the example below is just a subset of what's available in the JSON above. If you want more, just add it to the list.
for i in ciscorouter:
    print("Device Name: " + i['deviceName'] + " Device Model: " + i['model'] +
          "\nAsset Name is: " + i['name'] + " Asset Serial Number is: " + i['serialNum'] + "\n")
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
Why not just write that to disk? Although we could go directly to the formatted report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead, why don't we export all the available data from the JSON above into a CSV file which can later be opened in your favourite spreadsheet viewer and manipulated to your heart's content. Pretty cool, no?
import csv  # needed for DictWriter below

keys = ciscorouter[0].keys()
with open('ciscorouter.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(ciscorouter)
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
Reading it back Now we'll read it back from disk to make sure it worked properly. When working with data like this, I find it useful to think about who's going to be consuming the data. For example, when looking at this remember this is a CSV file which can be easily opened in python, or something like Microsoft Excel to manipulate further. It's not really intended to be read by human beings in this particular format. You'll need another program to consume and munge the data first to turn it into something human consumable.
with open('ciscorouter.csv') as file:
    print(file.read())
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
What about all my serial numbers at once? That's a great question! I'm glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it's often not much more work to do something 1000 times than it is to do it a single time. This time instead of using the get_dev_asset_details function that we used above which gets us all the assets associated with a single device, let's grab ALL the devices at once.
all_assets = get_dev_asset_details_all(auth.creds, auth.url)
len(all_assets)
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
That's a lot of assets! Exactly why we automate things. Now let's write the all_assets list to disk as well. **Note:** for reasons unknown to me at this time, although the majority of the assets have 27 different fields, a few of them actually have 28 different attributes. Something I'll have to dig into later.
keys = all_assets[0].keys()
with open('all_assets.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(all_assets)
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
Well That's not good.... So it looks like there are a few network assets that have a different number of attributes than the first one in the list. We'll write some quick code to figure out how big of a problem this is.
print ("The length of the first items keys is " + str(len(keys))) for i in all_assets: if len(i) != len(all_assets[0].keys()): print ("The length of index " + str(all_assets.index(i)) + " is " + str(len(i.keys())))
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
Well, that's not so bad. It looks like the items which don't have exactly 27 attributes have exactly 28 attributes. So we'll just pick one of the longer ones to use as the headers for our CSV file and then run the script again. For this one, I'm going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 separate assets into this blog post.
keys = all_assets[879].keys()
with open('all_assets.csv', 'w') as file:
    dict_writer = csv.DictWriter(file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(all_assets)
examples/.ipynb_checkpoints/Working with Network Assets-checkpoint.ipynb
netmanchris/PYHPEIMC
apache-2.0
http://www.sciencedirect.com/science/article/pii/S0092867412014080 Table S1. trans-Splice Sites, Transcription Start Sites, and csRNA Loci for Protein-Coding Genes and Transcription Start Sites for pri-miRNAs, Related to Figure 2. Analysis of C. elegans CapSeq and CIP-TAP, containing lists of trans-splice sites, transcription start sites, and sense and antisense csRNAs derived from protein coding genes. Also included is the list of the transcription start sites for pri-miRNAs. For C. elegans analysis, reads were mapped to the genome (WormBase release WS215)
#!cd ~/relmapping/wget; wget -m --no-parent https://ars.els-cdn.com/content/image/1-s2.0-S0092867412014080-mmc1.xlsx
fp_ = 'wget/ars.els-cdn.com/content/image/1-s2.0-S0092867412014080-mmc1_B._TS_sites_for_protein_genes.csv'
df_ = pd.read_csv(fp_, skiprows=11)
df_['assigned_to_an_annotation'] = df_['transcript'].map(lambda x: x == x)
print('%d records, ~20,000 not assigned to an annotation:' % (len(df_)))
print(df_['assigned_to_an_annotation'].value_counts())
df_.head()
annot/notebooks/Fig2S3_import_Gu2012.ipynb
jurgjn/relmapping
gpl-2.0
Using a cutoff of one CapSeq read per 10 million total reads, and a requirement for a YR motif, our CapSeq data predicted approximately 64,000 candidate TS sites genome wide (Table S1B).
print(df_['transcript type'].value_counts())
m_ = df_['transcript type'] == "coding"
df_ = df_.loc[m_].reset_index(drop=True)
print('%d records with annotated as "coding"' % (len(df_.query('transcript == transcript')),))

# Raw (Gu et al., 2012) TSS sites (=many assigned to multiple transcripts)
df_gu = pd.DataFrame()
df_gu['chrom'] = 'chr' + df_['chromosome']
df_gu['start'] = df_['start']
df_gu['end'] = df_['start'] + 1
df_gu['name'] = df_['transcript']
df_gu['score'] = df_['reads']
df_gu['strand'] = df_['strand']
df_gu = df_gu.sort_values(['chrom', 'start', 'end', 'start']).reset_index(drop=True)
fp_ = vp('Gu2012_tss.bed')
write_gffbed(fp_,
    chrom=df_gu['chrom'],
    start=df_gu['start'],
    end=df_gu['end'],
    name=df_gu['name'],
    strand=df_gu['strand'],
    score=df_gu['score'],
)
!wc -l {fp_}

# Collapse TSS annotations by chrom/start/end/strand (raw TSS assignments are to all "compatible" transcripts)
fp_ = vp('Gu2012_tss_unique.bed')
df_gu.groupby(['chrom', 'start', 'end', 'strand']).agg({
    'name': lambda l: os.path.commonprefix(list(l)).rstrip('.'),  #lambda l: ','.join(sorted(set(l))),
    'score': np.sum,
})\
    .reset_index().sort_values(['chrom', 'start', 'end', 'strand'])[['chrom', 'start', 'end', 'name', 'score', 'strand']]\
    .to_csv(fp_, sep='\t', index=False, header=False)
!wc -l {fp_}

# Cluster TSS annotations using single-linkage, strand-specific, using a distance cutoff of 50
df_gu_cluster50_ = BedTool.from_dataframe(df_gu).cluster(d=50, s=True).to_dataframe()
df_gu_cluster50_.columns = ('chrom', 'start', 'end', 'transcript_id', 'score', 'strand', 'cluster_id')
fp_ = vp('Gu2012_tss_clustered.bed')
df_gu_cluster50 = df_gu_cluster50_.groupby('cluster_id').agg({
    'chrom': lambda s: list(set(s))[0],
    'start': np.min,
    'end': np.max,
    'transcript_id': lambda l: os.path.commonprefix(list(l)).rstrip('.'),  #lambda l: ','.join(sorted(set(l))),
    'score': np.sum,
    'strand': lambda s: list(set(s))[0],
})\
    .sort_values(['chrom', 'start', 'end', 'strand']).reset_index(drop=True)
df_gu_cluster50.to_csv(fp_, sep='\t', index=False, header=False)
!wc -l {fp_}

# Overlaps to TSS clusters
df_regl_ = regl_Apr27(flank_len=150)[['chrom', 'start', 'end', 'annot']]
gv = yp.GenomicVenn2(
    BedTool.from_dataframe(df_regl_),
    BedTool.from_dataframe(df_gu_cluster50[yp.NAMES_BED3]),
    label_a='Accessible sites',
    label_b='(Gu et al., 2012)\nTSS clusters',
)
plt.figure(figsize=(12,6)).subplots_adjust(wspace=0.5)
plt.subplot(1,2,1)
gv.plot()
plt.subplot(1,2,2)
annot_count_ = gv.df_a_with_b['name'].value_counts()[config['annot']]
annot_count_.index = [
    'coding_promoter',
    'pseudogene_promoter',
    'unknown_promoter',
    'putative_enhancer',
    'non-coding_RNA',
    '\n\nother_element'
]
#plt.title('Annotation of %d accessible sites that overlap a TSS from (Gu et al., 2012)' % (len(gv.df_a_with_b),))
plt.pie(
    annot_count_.values,
    labels=['%s (%d)' % (l, c) for l, c in annot_count_.iteritems()],
    colors=[yp.RED, yp.ORANGE, yp.YELLOW, yp.GREEN, '0.4', yp.BLUE],
    counterclock=False, startangle=70, autopct='%.1f%%',
);
plt.gca().set_aspect('equal')
#plt.savefig('annot/Fig2S5_tss/Gu2012_annot.pdf', bbox_inches='tight', transparent=True)
annot_count_

df_regl_ = regl_Apr27(flank_len=150)[['chrom', 'start', 'end', 'annot']]
gv = yp.GenomicVenn2(
    BedTool.from_dataframe(df_regl_),
    BedTool.from_dataframe(df_gu_cluster50[yp.NAMES_BED3]),
    label_a='Accessible sites',
    label_b='(Gu et al., 2012)\nTSS clusters',
)
plt.figure(figsize=(8,4)).subplots_adjust(wspace=0.2)
plt.subplot(1,2,1)
v = gv.plot(style='compact')
v.get_patch_by_id('10').set_color(yp.RED)
v.get_patch_by_id('01').set_color(yp.GREEN)
v.get_patch_by_id('11').set_color(yp.YELLOW)
plt.subplot(1,2,2)
d_reduced_ = collections.OrderedDict([
    ('coding_promoter', 'coding_promoter, pseudogene_promoter'),
    ('pseudogene_promoter', 'coding_promoter, pseudogene_promoter'),
    ('unknown_promoter', 'unknown_promoter'),
    ('putative_enhancer', 'putative_enhancer'),
    ('non-coding_RNA', 'other_element, non-coding_RNA'),
    ('other_element', 'other_element, non-coding_RNA'),
])
d_colour_ = collections.OrderedDict([
    ('coding_promoter, pseudogene_promoter', yp.RED),
    ('unknown_promoter', yp.YELLOW),
    ('putative_enhancer', yp.GREEN),
    ('other_element, non-coding_RNA', yp.BLUE),
])
gv.df_a_with_b['name_reduced'] = [*map(lambda a: d_reduced_[a], gv.df_a_with_b['name'])]
annot_count_ = gv.df_a_with_b['name_reduced'].value_counts()[d_colour_.keys()]
#plt.title('Annotation of %d accessible sites that overlap a TSS from (Chen et al., 2013)' % (len(gv.df_a_with_b),))
(patches, texts) = plt.pie(
    annot_count_.values,
    labels=yp.pct_(annot_count_.values),
    colors=d_colour_.values(),
    counterclock=False, startangle=45,
);
plt.gca().set_aspect('equal')
#plt.savefig(vp('Gu2012_annot.pdf'), bbox_inches='tight', transparent=True)
plt.savefig('annot_Apr27/Fig2S3D_Gu2012_annot.pdf', bbox_inches='tight', transparent=True)

#fp_ = 'annot/Fig2S4_TSS/Gu2012_not_atac.bed'
#gv.df_b_only.to_csv(fp_, header=False, sep='\t', index=False)
#!wc -l {fp_}
annot/notebooks/Fig2S3_import_Gu2012.ipynb
jurgjn/relmapping
gpl-2.0
Skew-T with Complex Layout Combine a Skew-T and a hodograph using Matplotlib's GridSpec layout capability.
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import pandas as pd

import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, Hodograph, SkewT
from metpy.units import units
v0.12/_downloads/0c4dbfdebeb6fcd2f5364a69f0c6d4a8/Skew-T_Layout.ipynb
metpy/MetPy
bsd-3-clause
Upper air data can be obtained using the siphon package, but for this example we will use some of MetPy's sample data.
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']

df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
                 skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)

# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'),
               how='all').reset_index(drop=True)
v0.12/_downloads/0c4dbfdebeb6fcd2f5364a69f0c6d4a8/Skew-T_Layout.ipynb
metpy/MetPy
bsd-3-clause
We will pull the data out of the example dataset into individual variables and assign units.
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)

# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 630, 80, size='large')

# Grid for plots
gs = gridspec.GridSpec(3, 3)
skew = SkewT(fig, rotation=45, subplot=gs[:, :2])

# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)

# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()

# Good bounds for aspect ratio
skew.ax.set_xlim(-30, 40)

# Create a hodograph
ax = fig.add_subplot(gs[0, -1])
h = Hodograph(ax, component_range=60.)
h.add_grid(increment=20)
h.plot(u, v)

# Show the plot
plt.show()
v0.12/_downloads/0c4dbfdebeb6fcd2f5364a69f0c6d4a8/Skew-T_Layout.ipynb
metpy/MetPy
bsd-3-clause
To create a Series, we need to select the column to use (via usecols) and set the parameter squeeze to True.
data = pd.read_csv("data/input.csv", usecols=["name"], squeeze=True) print type(data) data.head() data.index
01_SERIES/CSV-Reader.ipynb
topix-hackademy/pandas-for-dummies
mit
If the input file has only 1 column we don't need to provide the usecols argument.
data = pd.read_csv("data/input_with_one_column.csv", squeeze=True) print type(data) # HEAD print data.head(2), "\n" # TAIL print data.tail()
01_SERIES/CSV-Reader.ipynb
topix-hackademy/pandas-for-dummies
mit
On a Series we can perform classic Python operations using built-in functions!
list(data)
dict(data)
max(data)
min(data)
dir(data)
type(data)
sorted(data)

data = pd.read_csv("data/input_with_two_column.csv", index_col="name", squeeze=True)
data.head()
data[["Alex", "asd"]]
data["Alex":"Vale"]
01_SERIES/CSV-Reader.ipynb
topix-hackademy/pandas-for-dummies
mit
Loading data
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score

sig_data = pandas.read_csv('toy_datasets/toyMC_sig_mass.csv', sep='\t')
bck_data = pandas.read_csv('toy_datasets/toyMC_bck_mass.csv', sep='\t')

labels = numpy.array([1] * len(sig_data) + [0] * len(bck_data))
data = pandas.concat([sig_data, bck_data])

train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.7)
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Training variables
variables = ["FlightDistance", "FlightDistanceError", "IP", "VertexChi2", "pt", "p0_pt", "p1_pt", "p2_pt", 'LifeTime', 'dira'] data = data[variables]
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Folding strategy - stacking algorithm It implements the same interface as all classifiers, but with one difference: all prediction methods have an additional parameter "vote_function" (for example, folder.predict(X, vote_function=None)), which is used to combine all classifiers' predictions. By default, "mean" is used as the "vote_function".
from rep.estimators import SklearnClassifier
from sklearn.ensemble import GradientBoostingClassifier
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Define folding model
from rep.metaml import FoldingClassifier

n_folds = 4
folder = FoldingClassifier(GradientBoostingClassifier(), n_folds=n_folds, features=variables)
folder.fit(train_data, train_labels)
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Default prediction (the i-th fold is predicted by the i-th classifier)
folder.predict_proba(train_data)
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Voting prediction (predict the i-th fold with all classifiers and take the value computed by vote_function)
# definition of mean function, which combines all predictions
def mean_vote(x):
    return numpy.mean(x, axis=0)

folder.predict_proba(test_data, vote_function=mean_vote)
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Comparison of folds Again, use the ClassificationReport class to compare different results. For the folding classifier, this report uses only the default prediction. Report training dataset
from rep.data.storage import LabeledDataStorage
from rep.report import ClassificationReport

# add folds_column to dataset to use mask
train_data["FOLDS"] = folder._get_folds_column(len(train_data))
lds = LabeledDataStorage(train_data, train_labels)

report = ClassificationReport({'folding': folder}, lds)
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Signal distribution for each fold Use the mask parameter to plot the distribution for a specific fold
for fold_num in range(n_folds):
    report.prediction_pdf(mask="FOLDS == %d" % fold_num, labels_dict={1: 'sig fold %d' % fold_num}).plot()
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Background distribution for each fold
for fold_num in range(n_folds):
    report.prediction_pdf(mask="FOLDS == %d" % fold_num, labels_dict={0: 'bck fold %d' % fold_num}).plot()
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
ROCs (each fold used as test dataset)
for fold_num in range(n_folds):
    report.roc(mask="FOLDS == %d" % fold_num).plot()
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Report for test dataset NOTE: Here vote function is None, so default prediction is used
lds = LabeledDataStorage(test_data, test_labels)
report = ClassificationReport({'folding': folder}, lds)

report.prediction_pdf().plot(new_plot=True, figsize=(9, 4))
report.roc().plot(xlim=(0.5, 1))
howto/04-howto-folding.ipynb
Quadrocube/rep
apache-2.0
Creating MNE objects from data arrays In this simple example, the creation of MNE objects from numpy arrays is demonstrated. In the last example case, a NEO file format is used as a source for the data.
# Author: Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)

import numpy as np
import neo

import mne

print(__doc__)
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Create arbitrary data
sfreq = 1000  # Sampling frequency
times = np.arange(0, 10, 0.001)  # Use 10000 samples (10s)

sin = np.sin(times * 10)  # Multiplied by 10 for shorter cycles
cos = np.cos(times * 10)
sinX2 = sin * 2
cosX2 = cos * 2

# Numpy array of size 4 X 10000.
data = np.array([sin, cos, sinX2, cosX2])

# Definition of channel types and names.
ch_types = ['mag', 'mag', 'grad', 'grad']
ch_names = ['sin', 'cos', 'sinX2', 'cosX2']
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Creation of info dictionary.
# It is also possible to use info from another raw object.
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)

raw = mne.io.RawArray(data, info)

# Scaling of the figure.
# For actual EEG/MEG data different scaling factors should be used.
scalings = {'mag': 2, 'grad': 2}

raw.plot(n_channels=4, scalings=scalings, title='Data from arrays',
         show=True, block=True)

# It is also possible to auto-compute scalings
scalings = 'auto'  # Could also pass a dictionary with some value == 'auto'
raw.plot(n_channels=4, scalings=scalings, title='Auto-scaled Data from arrays',
         show=True, block=True)
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
EpochsArray
event_id = 1
# List of three arbitrary events
events = np.array([[200, 0, event_id],
                   [1200, 0, event_id],
                   [2000, 0, event_id]])

# Here a data set of 700 ms epochs from 2 channels is
# created from sin and cos data.
# Any data in shape (n_epochs, n_channels, n_times) can be used.
epochs_data = np.array([[sin[:700], cos[:700]],
                        [sin[1000:1700], cos[1000:1700]],
                        [sin[1800:2500], cos[1800:2500]]])

ch_names = ['sin', 'cos']
ch_types = ['mag', 'mag']
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)

epochs = mne.EpochsArray(epochs_data, info=info, events=events,
                         event_id={'arbitrary': 1})

picks = mne.pick_types(info, meg=True, eeg=False, misc=False)

epochs.plot(picks=picks, scalings='auto', show=True, block=True)
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
EvokedArray
nave = len(epochs_data)  # Number of averaged epochs
evoked_data = np.mean(epochs_data, axis=0)

evokeds = mne.EvokedArray(evoked_data, info=info, tmin=-0.2,
                          comment='Arbitrary', nave=nave)
evokeds.plot(picks=picks, show=True, units={'mag': '-'},
             titles={'mag': 'sin and cos averaged'})
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Extracting data from NEO file
# The example here uses the ExampleIO object for creating fake data.
# For actual data and different file formats, consult the NEO documentation.
reader = neo.io.ExampleIO('fakedata.nof')
bl = reader.read(cascade=True, lazy=False)[0]

# Get data from first (and only) segment
seg = bl.segments[0]
title = seg.file_origin

ch_names = list()
data = list()
for asig in seg.analogsignals:
    # Since the data does not contain channel names, channel indices are used.
    ch_names.append(str(asig.channel_index))
    asig = asig.rescale('V').magnitude
    data.append(asig)

sfreq = int(seg.analogsignals[0].sampling_rate.magnitude)

# By default, the channel types are assumed to be 'misc'.
info = mne.create_info(ch_names=ch_names, sfreq=sfreq)

raw = mne.io.RawArray(data, info)
raw.plot(n_channels=4, scalings={'misc': 1}, title='Data from NEO',
         show=True, block=True, clipping='clamp')
0.13/_downloads/plot_objects_from_arrays.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Process Instantiation Let's start with the most basic example of spawning a new process to run a function.
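These demos rely on a printer helper (and, further down, getpid) defined in a cell that isn't shown here. A minimal stand-in, assuming printer takes a message plus an optional delay in seconds, might look like this:

# Hypothetical stand-in for the helper used throughout these demos.
from os import getpid
from time import sleep

def printer(message, delay=0):
    # wait, then print the message tagged with the worker's process id
    sleep(delay)
    print('{} (Pid: {})'.format(message, getpid()))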
from multiprocessing import Process

print('Starting demo...')
p = Process(target=printer, args=('hello demo',))
p.start()
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Process timing: use printer's delay to see process timing; track multiple process objects; execute code in the main process while the child process is running; use Process.join() to wait for processes to finish.
proc_list = []
for values in [('immediate', 0), ('delayed', 2), ('eternity', 5)]:
    p = Process(target=printer, args=values)
    proc_list.append(p)
    p.start()  # start execution of printer

print('Not waiting for processes to finish...')

[p.join() for p in proc_list]

print('After processes...')
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Process Pool: worker processes instead of direct instantiation; a context manager handles starting/joining the child processes; Pool.map() works like the default Python map(f, args) function; Pool.map() does not unpack args.
from multiprocessing.pool import Pool

with Pool(3) as pool:
    pool.map(printer, ['Its', ('A', 5), 'Race'])  # each worker process executes one function
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Process + args/kwargs iteration with starmap
with Pool(2) as pool:
    pool.starmap(printer, [('Its',), ('A', 2), ('Race',)])
    # one worker will execute 2 functions, one worker will execute the 'slow' function
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Starmap is the bomb
def pretend_delete_method(provider, vm_name):
    print('Pretend delete: {} on {}. (Pid: {})'
          .format(vm_name, provider, getpid()))

# Assuming we fetched a list of vm names on providers we want to cleanup...
example_provider_vm_lists = dict(
    vmware=['test_vm_1', 'test_vm_2'],
    rhv=['test_vm_3', 'test_vm_4'],
    osp=['test_vm_5', 'test_vm_6'],
)

# don't hate me for nested comprehension here - building tuples of provider+name
from multiprocessing.pool import ThreadPool
# Threadpool instead of process pool, same interface
with ThreadPool(6) as pool:
    pool.starmap(
        pretend_delete_method,
        [(key, vm)
         for key, vms in example_provider_vm_lists.items()
         for vm in vms]
    )
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Locking: a semaphore-type object that can be acquired and released. When acquired, only the thread that holds the lock can run. Necessary when using shared objects.
# Printing is thread safe, but will sometimes print separate messages on the same line (above)
# Use a lock around print
from multiprocessing import Lock

lock = Lock()

def safe_printing_method(provider, vm_name):
    with lock:
        print('Pretend delete: {} on {}. (Pid: {})'
              .format(vm_name, provider, getpid()))

with ThreadPool(6) as pool:
    pool.starmap(
        safe_printing_method,
        [(key, vm)
         for key, vms in example_provider_vm_lists.items()
         for vm in vms])
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
Queues store data/objects in child threads/processes and retrieve them in the parent: a FIFO queue with put, get, and empty methods. multiprocessing.Queue cannot be pickled and thus can't be passed to Pool methods, and it can deadlock with improper join use. multiprocessing.Manager().Queue() is a proxy that can be pickled and shared between processes.
from multiprocessing import Manager
from random import randint

# Create instance of manager
manager = Manager()

def multiple_output_method(provider, vm_name, fail_queue):
    # random success of called method
    if randint(0, 1):
        return True
    else:
        # Store our failure vm on the queue
        fail_queue.put(vm_name)
        return None

# Create queue object to give to child processes
queue_for_failures = manager.Queue()

with Pool(2) as pool:
    results = pool.starmap(
        multiple_output_method,
        [(key, vm, queue_for_failures)
         for key, vms in example_provider_vm_lists.items()
         for vm in vms]
    )

print('Results are in: {}'.format(results))

failed_vms = []
# get items from the queue while its not empty
while not queue_for_failures.empty():
    failed_vms.append(queue_for_failures.get())

print('Failures are in: {}'.format(failed_vms))
notebooks/MultiProcessing.ipynb
apagac/cfme_tests
gpl-2.0
TODO: Implement LeNet-5 Implement the LeNet-5 neural network architecture. This is the only cell you need to edit. Input The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case. Architecture Layer 1: Convolutional. The output shape should be 28x28x6. Activation. Your choice of activation function. Pooling. The output shape should be 14x14x6. Layer 2: Convolutional. The output shape should be 10x10x16. Activation. Your choice of activation function. Pooling. The output shape should be 5x5x16. Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you. Layer 3: Fully Connected. This should have 120 outputs. Activation. Your choice of activation function. Layer 4: Fully Connected. This should have 84 outputs. Activation. Your choice of activation function. Layer 5: Fully Connected (Logits). This should have 10 outputs. Output Return the logits from the final fully connected layer (Layer 5).
from tensorflow.contrib.layers import flatten


def LeNet(x):
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1

    # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # Activation.
    conv1 = tf.nn.relu(conv1)

    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # Activation.
    conv2 = tf.nn.relu(conv2)

    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)

    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_b

    # Activation.
    fc1 = tf.nn.relu(fc1)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_b

    # Activation.
    fc2 = tf.nn.relu(fc2)

    # Layer 5: Fully Connected. Input = 84. Output = 10.
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean=mu, stddev=sigma))
    fc3_b = tf.Variable(tf.zeros(10))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits
LeNet-Lab.ipynb
darienmt/intro-to-tensorflow
mit
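The shape comments in the cell above all follow from the VALID-padding convolution formula, out = (in − filter)/stride + 1, and from 2x2 max pooling with stride 2:

$$(32 - 5) + 1 = 28,\quad 28/2 = 14,\quad (14 - 5) + 1 = 10,\quad 10/2 = 5,\quad 5 \cdot 5 \cdot 16 = 400.$$

That last product is why the flattened layer feeds a 400x120 weight matrix.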
Training Pipeline

Create a training pipeline that uses the model to classify MNIST data.

You do not need to modify this section.
rate = 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=rate)
training_operation = optimizer.minimize(loss_operation)
LeNet-Lab.ipynb
darienmt/intro-to-tensorflow
mit
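The pipeline above references x, y, and one_hot_y, which the lab defines in an earlier cell not shown here. A minimal sketch of that setup, with shapes implied by the architecture description (the constants are assumptions, not copied from the notebook):

import tensorflow as tf

EPOCHS = 10        # assumed value
BATCH_SIZE = 128   # assumed value

# Placeholders for a batch of 32x32x1 grayscale images and their integer labels;
# labels are one-hot encoded for the softmax cross-entropy loss.
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)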
Train the Model

Run the training data through the training pipeline to train the model.

Before each epoch, shuffle the training set.

After each epoch, measure the loss and accuracy of the validation set.

Save the model after training.

You do not need to modify this section.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})

        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './models/lenet')
    print("Model saved")
LeNet-Lab.ipynb
darienmt/intro-to-tensorflow
mit
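The training loop calls evaluate() and saver, both defined earlier in the notebook. A minimal sketch of an accuracy-evaluation helper consistent with how it is used here (an assumption about the missing cell, not a copy of it):

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()


def evaluate(X_data, y_data):
    """Average accuracy over X_data/y_data, computed batch by batch."""
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x = X_data[offset:offset + BATCH_SIZE]
        batch_y = y_data[offset:offset + BATCH_SIZE]
        accuracy = sess.run(accuracy_operation,
                            feed_dict={x: batch_x, y: batch_y})
        total_accuracy += accuracy * len(batch_x)
    return total_accuracy / num_examples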
Evaluate the Model

Once you are completely satisfied with your model, evaluate the performance of the model on the test set.

Be sure to only do this once!

If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.

You do not need to modify this section.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('./models'))

    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
LeNet-Lab.ipynb
darienmt/intro-to-tensorflow
mit
Bungee jumping

In the previous case study, we simulated a bungee jump with a model that took into account gravity, air resistance, and the spring force of the bungee cord, but we ignored the weight of the cord.

It is tempting to say that the weight of the cord doesn't matter, because it falls along with the jumper. But that intuition is incorrect, as explained by Heck, Uylings, and Kędzierska. As the cord falls, it transfers energy to the jumper. They derive a differential equation that relates the acceleration of the jumper to position and velocity:

$a = g + \frac{\mu v^2/2}{\mu(L+y) + 2L}$

where $a$ is the net acceleration of the jumper, $g$ is acceleration due to gravity, $v$ is the velocity of the jumper, $y$ is the position of the jumper relative to the starting point (usually negative), $L$ is the length of the cord, and $\mu$ is the mass ratio of the cord and jumper.

If you don't believe this model is correct, this video might convince you.

Following the example in Chapter 21, we'll model the jump with the following modeling assumptions:

- Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.
- Until the cord is fully extended, it applies a force to the jumper as explained above.
- After the cord is fully extended, it obeys Hooke's Law; that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.
- The jumper is subject to drag force proportional to the square of their velocity, in the direction opposite their motion.

First I'll create a Params object to contain the quantities we'll need:

- Let's assume that the jumper's mass is 75 kg and the cord's mass is also 75 kg, so mu=1.
- The jumper's frontal area is 1 square meter, and terminal velocity is 60 m/s. I'll use these values to back out the coefficient of drag.
- The length of the bungee cord is L = 25 m.
- The spring constant of the cord is k = 40 N / m when the cord is stretched, and 0 when it's compressed.

I adopt the coordinate system and most of the variable names from Heck, Uylings, and Kędzierska.
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton

params = Params(v_init = 0 * m / s,
                g = 9.8 * m/s**2,
                M = 75 * kg,          # mass of jumper
                m_cord = 75 * kg,     # mass of cord
                area = 1 * m**2,      # frontal area of jumper
                rho = 1.2 * kg/m**3,  # density of air
                v_term = 60 * m / s,  # terminal velocity of jumper
                L = 25 * m,           # length of cord
                k = 40 * N / m)       # spring constant of cord
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
Now here's a version of make_system that takes a Params object as a parameter. make_system uses the given value of v_term to compute the drag coefficient C_d. It also computes mu and the initial State object.
def make_system(params):
    """Makes a System object for the given params.

    params: Params object

    returns: System object
    """
    M, m_cord = params.M, params.m_cord
    g, rho, area = params.g, params.rho, params.area
    v_init, v_term = params.v_init, params.v_term

    # back out the coefficient of drag
    C_d = 2 * M * g / (rho * area * v_term**2)

    mu = m_cord / M
    init = State(y=0*m, v=v_init)
    t_end = 10 * s

    return System(params, C_d=C_d, mu=mu,
                  init=init, t_end=t_end)
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
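The C_d line in make_system comes from the usual terminal-velocity argument: at v_term the drag force balances the jumper's weight, so under the quadratic drag model used here

$$M g = \frac{1}{2} \rho \, C_d \, A \, v_{term}^2 \quad\Rightarrow\quad C_d = \frac{2 M g}{\rho \, A \, v_{term}^2} \approx \frac{2 \cdot 75 \cdot 9.8}{1.2 \cdot 1 \cdot 60^2} \approx 0.34$$

and with equal jumper and cord masses, mu works out to 1.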
Let's make a System
system = make_system(params)
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
drag_force computes drag as a function of velocity:
def drag_force(v, system):
    """Computes drag force in the opposite direction of `v`.

    v: velocity

    returns: drag force in N
    """
    rho, C_d, area = system.rho, system.C_d, system.area
    f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
    return f_drag
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
Here's drag force at 20 m/s.
drag_force(20 * m/s, system)
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
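As a rough back-of-the-envelope check of that result (my arithmetic, using C_d ≈ 0.34 from the terminal-velocity calculation above):

$$f_{drag} = -\frac{1}{2} \rho \, C_d \, A \, v^2 \approx -\frac{1}{2} \cdot 1.2 \cdot 0.34 \cdot 1 \cdot 20^2 \approx -82 \ \mathrm{N}$$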
The following function computes the acceleration of the jumper due to tension in the cord. $a_{cord} = \frac{\mu v^2/2}{\mu(L+y) + 2L}$
def cord_acc(y, v, system):
    """Computes the acceleration due to the tension of the bungee cord on the jumper.

    y: height of the jumper
    v: velocity of the jumper

    returns: acceleration in m/s**2
    """
    L, mu = system.L, system.mu
    a_cord = -v**2 / 2 / (2*L/mu + (L+y))
    return a_cord
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.
y = -20 * m
v = -20 * m/s

cord_acc(y, v, system)
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
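Plugging these values into the formula by hand (my arithmetic, with mu = 1 and L = 25 m) matches what the call above computes:

$$a_{cord} = -\frac{v^2/2}{2L/\mu + (L+y)} = -\frac{(-20)^2/2}{50 + 5} = -\frac{200}{55} \approx -3.6 \ \mathrm{m/s^2}$$

The negative sign reflects the coordinate system: on the way down, the falling cord pulls the jumper downward, adding to the magnitude of the acceleration.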