Dataset schema: markdown — string (0–37k chars); code — string (1–33.3k chars); path — string (8–215 chars); repo_name — string (6–77 chars); license — 15 classes.
Create volume required by secret
spec.volumes = [client.V1Volume(name="foo")]
spec.volumes[0].secret = client.V1SecretVolumeSource(secret_name="mysecret")
spec.containers = [container]
pod.spec = spec
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
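For reference, the same pod spec can be sketched as a plain manifest dict mirroring the client objects above (the "foo" and "mysecret" names come from the snippet; the container entry is a hypothetical placeholder, since the `container` variable is defined elsewhere in the notebook):

```python
# Plain-dict equivalent of the V1Volume / V1SecretVolumeSource objects.
# The container entry is an assumed placeholder, not from the original notebook.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "volumes": [
            {"name": "foo", "secret": {"secretName": "mysecret"}}
        ],
        "containers": [
            {"name": "mypod", "image": "redis"}  # assumed container
        ],
    },
}
```

The volume's `secret.secretName` is what ties the pod to the Secret object created earlier.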
Create the Pod
api_instance.create_namespaced_pod(namespace="default", body=pod)
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
View the secret being used within the pod. Wait at least 10 seconds to ensure the pod is running before executing this section.
user = api_instance.connect_get_namespaced_pod_exec(name="mypod", namespace="default", command=[ "/bin/sh", "-c", "cat /data/redis/username" ], stderr=True, stdin=False, stdout=True, tty=False) print(user) passwd = api_instance.connect_get_namespaced_pod_exec(name="mypod", namespace="default", command=[ "/bin/sh", "-c"...
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Delete Pod
api_instance.delete_namespaced_pod(name="mypod", namespace="default", body=client.V1DeleteOptions())
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Delete Secret
api_instance.delete_namespaced_secret(name="mysecret", namespace="default", body=sec)
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Optically pumped magnetometer (OPM) data In this dataset, electrical median nerve stimulation was delivered to the left wrist of the subject. Somatosensory evoked fields were measured using nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here we demonstrate how to localize these custom OPM data ...
import os.path as op import numpy as np import mne data_path = mne.datasets.opm.data_path() subject = 'OPM_sample' subjects_dir = op.join(data_path, 'subjects') raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif') bem_fname = op.join(subjects_dir, subject, 'bem', subject + '-5120-5120-5...
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Prepare data for localization First we filter and epoch the data:
raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.filter(None, 90, h_trans_bandwidth=10.) raw.notch_filter(50., notch_widths=1) # Set epoch rejection threshold a bit larger than for SQUIDs reject = dict(mag=2e-10) tmin, tmax = -0.5, 1 # Find median nerve stimulator trigger event_id = dict(Median=257) events = m...
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Examine our coordinate alignment for source localization and compute a forward operator: <div class="alert alert-info"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the co-registration method used equates the two coordinate systems. This mis-defines the head coordinate system ...
bem = mne.read_bem_solution(bem_fname) trans = mne.transforms.Transform('head', 'mri') # identity transformation # To compute the forward solution, we must # provide our temporary/custom coil definitions, which can be done as:: # # with mne.use_coil_def(coil_def_fname): # fwd = mne.make_forward_solution( # ...
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Perform dipole fitting
# Fit dipoles on a subset of time points with mne.use_coil_def(coil_def_fname): dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.040, 0.080), cov, bem, trans, verbose=True) idx = np.argmax(dip_opm.gof) print('Best dipole at t=%0.1f ms with %0.1f%% GOF' % (1000 * dip_opm.times[i...
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Perform minimum-norm localization Due to the small number of sensors, there will be some leakage of activity to areas with low/no sensitivity. Constraining the source space to areas we are sensitive to might be a good idea.
inverse_operator = mne.minimum_norm.make_inverse_operator( evoked.info, fwd, cov, loose=0., depth=None) del fwd, cov method = "MNE" snr = 3. lambda2 = 1. / snr ** 2 stc = mne.minimum_norm.apply_inverse( evoked, inverse_operator, lambda2, method=method, pick_ori=None, verbose=True) # Plot source estimate a...
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
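The regularization weight in the cell above follows the usual minimum-norm convention λ² = 1/SNR²; a quick arithmetic check with the same assumed SNR of 3:

```python
# Minimum-norm regularization parameter: lambda2 = 1 / SNR**2
snr = 3.0
lambda2 = 1.0 / snr ** 2  # 1/9 ≈ 0.111
```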
2. Start logging the temperature once every second
mytmp.start_log()
boards/Pynq-Z1/base/notebooks/pmod/pmod_tmp2.ipynb
cathalmccabe/PYNQ
bsd-3-clause
3. Try to modify the temperature reading by touching the sensor. The default interval between samples is 1 second, so wait at least 10 seconds to collect enough samples. During this period, press a finger on the sensor to raise its temperature reading. Stop the logging whenever you are done.
mytmp.stop_log() log = mytmp.get_log()
boards/Pynq-Z1/base/notebooks/pmod/pmod_tmp2.ipynb
cathalmccabe/PYNQ
bsd-3-clause
5. Plot values over time
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(range(len(log)), log, 'ro')
plt.title('TMP2 Sensor log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
boards/Pynq-Z1/base/notebooks/pmod/pmod_tmp2.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Visualizing epoched data This tutorial shows how to plot epoched data as time series, how to plot the spectral density of epoched data, how to plot epochs as an imagemap, and how to plot the sensor locations and projectors stored in :class:~mne.Epochs objects. :depth: 2 We'll start by importing the modules we need, ...
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=120)
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To create the :class:~mne.Epochs data structure, we'll extract the event IDs stored in the :term:stim channel, map those integer event IDs to more descriptive condition labels using an event dictionary, and pass those to the :class:~mne.Epochs constructor, along with the :class:~mne.io.Raw data and the desired temporal...
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
              'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, event_id=event_dict,
                    preload=True)
del raw
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting Epochs as time series .. sidebar:: Interactivity in pipelines and scripts To use the interactive features of the :meth:`~mne.Epochs.plot` method when running your code non-interactively, pass the ``block=True`` parameter, which halts the Python interpreter until the figure window is closed. That way, any chann...
catch_trials_and_buttonpresses = mne.pick_events(events, include=[5, 32])
epochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict,
                    event_colors=dict(buttonpress='red', face='blue'))
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting projectors from an Epochs object In the plot above we can see heartbeat artifacts in the magnetometer channels, so before we continue let's load ECG projectors from disk and apply them to the data:
ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                             'sample_audvis_ecg-proj.fif')
ecg_projs = mne.read_proj(ecg_proj_file)
epochs.add_proj(ecg_projs)
epochs.apply_proj()
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Just as we saw in the tut-section-raw-plot-proj section, we can plot the projectors present in an :class:~mne.Epochs object using the same :meth:~mne.Epochs.plot_projs_topomap method. Since the original three empty-room magnetometer projectors were inherited from the :class:~mne.io.Raw file, and we added two ECG projec...
epochs.plot_projs_topomap(vlim='joint')
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Note that these field maps illustrate aspects of the signal that have already been removed (because projectors in :class:~mne.io.Raw data are applied by default when epoching, and because we called :meth:~mne.Epochs.apply_proj after adding additional ECG projectors from file). You can check this by examining the 'activ...
print(all(proj['active'] for proj in epochs.info['projs']))
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting sensor locations Just like :class:~mne.io.Raw objects, :class:~mne.Epochs objects keep track of sensor locations, which can be visualized with the :meth:~mne.Epochs.plot_sensors method:
epochs.plot_sensors(kind='3d', ch_type='all')
epochs.plot_sensors(kind='topomap', ch_type='all')
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting the power spectrum of Epochs Again, just like :class:~mne.io.Raw objects, :class:~mne.Epochs objects have a :meth:~mne.Epochs.plot_psd method for plotting the spectral density_ of the data.
epochs['auditory'].plot_psd(picks='eeg')
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting Epochs as an image map A convenient way to visualize many epochs simultaneously is to plot them as an image map, with each row of pixels in the image representing a single epoch, the horizontal axis representing time, and each pixel's color representing the signal value at that time sample for that epoch. Of c...
epochs['auditory'].plot_image(picks='mag', combine='mean')
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To plot image maps for individual sensors or a small group of sensors, use the picks parameter. Passing combine=None (the default) will yield separate plots for each sensor in picks; passing combine='gfp' will plot the global field power (useful for combining sensors that respond with opposite polarity).
epochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'])
epochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'], combine='gfp')
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
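As a rough sketch of what combine='gfp' does, global field power at each time sample can be taken as the RMS across the picked channels (a common definition; MNE's exact internal computation may differ):

```python
import math

def gfp(channel_values):
    """RMS across channels at one time sample (one common GFP definition)."""
    return math.sqrt(sum(v * v for v in channel_values) / len(channel_values))

# Two channels responding with opposite polarity would cancel in a plain mean,
# but still contribute in GFP:
print(gfp([3.0, -4.0]))  # sqrt((9 + 16) / 2) = sqrt(12.5)
```

This is why the text recommends 'gfp' for combining sensors that respond with opposite polarity: squaring removes the sign before averaging.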
To plot an image map for all sensors, use :meth:~mne.Epochs.plot_topo_image, which is optimized for plotting a large number of image maps simultaneously, and (in interactive sessions) allows you to click on each small image map to pop open a separate figure with the full-sized image plot (as if you had called :meth:~mn...
reject_criteria = dict(mag=3000e-15, # 3000 fT grad=3000e-13, # 3000 fT/cm eeg=150e-6) # 150 µV epochs.drop_bad(reject=reject_criteria) for ch_type, title in dict(mag='Magnetometers', grad='Gradiometers').items(): layout = mne.channels.find_layout(epochs.i...
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To plot image maps for all EEG sensors, pass an EEG layout as the layout parameter of :meth:~mne.Epochs.plot_topo_image. Note also here the use of the sigma parameter, which smooths each image map along the vertical dimension (across epochs) which can make it easier to see patterns across the small image maps (by smear...
layout = mne.channels.find_layout(epochs.info, ch_type='eeg')
epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',
                                        font_color='k', sigma=1)
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Day 1: Inverse Captcha. The captcha requires you to review a sequence of digits (your puzzle input) and find the sum of all digits that match the next digit in the list. The list is circular, so the digit after the last digit is the first digit in the list.
! cat day1_input.txt

input_data = None
with open("day1_input.txt") as f:
    input_data = f.read().strip().split()
input_data = [w.strip(",") for w in input_data]
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
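The circular-match rule itself can be sketched independently of the input file (the function name is mine, not from the notebook):

```python
def inverse_captcha(digits: str) -> int:
    """Sum the digits that equal the next digit, treating the string as circular."""
    n = len(digits)
    return sum(int(d) for i, d in enumerate(digits) if d == digits[(i + 1) % n])

# Worked examples from the puzzle statement:
print(inverse_captcha("1122"))      # 3
print(inverse_captcha("1111"))      # 4
print(inverse_captcha("1234"))      # 0
print(inverse_captcha("91212129"))  # 9
```

The `(i + 1) % n` index is what makes the list circular: the digit after the last one wraps around to the first.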
We build a direction map, since the possible (heading, turn) combinations are finite.
directions = { ("N","R") : ("E",0,1), ("N","L") : ("W",0,-1), ("W","R") : ("N",1,1), ("W","L") : ("S",1,-1), ("E","R") : ("S",1,-1), ("E","L") : ("N",1,1), ("S","R") : ("W",0,-1), ("S","L") : ("E",0,1) } def get_distance(data): d,pos = "N",[0,0] for code in data: ...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
Day 2: Bathroom Security part1 You arrive at Easter Bunny Headquarters under cover of darkness. However, you left in such a rush that you forgot to use the bathroom! Fancy office buildings like this one usually have keypad locks on their bathrooms, so you search the front desk for the code. "In order to improve securit...
input_data = None with open("day2_input.txt") as f: input_data = f.read().strip().split() def get_codes(data, keypad, keypad_max_size, start_index=(1,1), verbose=False): r,c = start_index digit = "" for codes in data: if verbose: print(" ",codes) for code in codes: if verbo...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
part2 You finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the resu...
input_data = None with open("day21_input.txt") as f: input_data = f.read().strip().split() keypad = [[None, None, 1, None, None], [None, 2, 3, 4, None], [ 5, 6, 7, 8, None], [None, 'A', 'B', 'C', None], [None, None, 'D', None, None]] sample = ["ULL", "RRDD...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
Day 3: Squares With Three Sides, part 1. Now that you can think clearly, you move deeper into the labyrinth of hallways and office furniture that makes up this part of Easter Bunny HQ. This must be a graphic design department; the walls are covered in specifications for triangles. Or are they? The design document gives the
input_data = None
with open("day3_input.txt") as f:
    input_data = f.read().strip().split("\n")
input_data = [list(map(int, l.strip().split())) for l in input_data]

result = [(sides[0] + sides[1] > sides[2]) and
          (sides[2] + sides[1] > sides[0]) and
          (sides[0] + sides[2] > sides[1]) for sides in input_data]
sum(result)
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
part2 Now that you've helpfully marked up their design documents, it occurs to you that triangles are specified in groups of three vertically. Each set of three numbers in a column specifies a triangle. Rows are unrelated. For example, given the following specification, numbers with the same hundreds digit would be par...
input_data = None with open("day31_input.txt") as f: input_data = f.read().strip().split("\n") input_data = [list(map(int, l.strip().split())) for l in input_data] input_data[:5] def chunks(l, n): """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] ...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
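The column-wise reading can be sketched with the same chunks() idea: take three rows at a time, transpose them, and apply the part 1 validity check (function names are mine):

```python
def valid(sides):
    """Triangle inequality: the two shorter sides must outsum the longest."""
    a, b, c = sorted(sides)
    return a + b > c

def count_vertical_triangles(rows):
    """rows: list of [x, y, z]; triangles are read down each column, 3 rows at a time."""
    total = 0
    for i in range(0, len(rows), 3):
        for col in zip(*rows[i:i + 3]):  # transpose the 3-row chunk into columns
            total += valid(col)
    return total

rows = [[101, 301, 501],
        [102, 302, 502],
        [103, 303, 503]]
print(count_vertical_triangles(rows))  # 3 (each column forms a valid triangle)
```

Sorting the sides first means only one inequality needs checking, instead of the three spelled out in the part 1 cell.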
Day 4, part 1: Security Through Obscurity. Finally, you come across an information kiosk with a list of rooms. Of course, the list is encrypted and full of decoy data, but the instructions to decode the list are barely hidden nearby. Better remove the decoy data first. Each room consists of an encrypted name (lowercase let...
input_data = None with open("day4_input.txt") as f: input_data = f.read().strip().split("\n") len(input_data), input_data[:5] answer = 0 for code in input_data: m = re.match(r'(.+)-(\d+)\[([a-z]*)\]', code) code, sector, checksum = m.groups() code = code.replace("-","") counts = collections.Counter...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
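The checksum rule (the five most common letters, ties broken alphabetically) can be sketched on the puzzle's own worked examples (helper name is mine):

```python
import collections
import re

def is_real_room(line):
    """Return (is_real, sector_id) for one encrypted room line."""
    name, sector, checksum = re.match(r'(.+)-(\d+)\[([a-z]+)\]', line).groups()
    counts = collections.Counter(name.replace('-', ''))
    # Most common letters first; ties broken alphabetically.
    top5 = sorted(counts, key=lambda ch: (-counts[ch], ch))[:5]
    return ''.join(top5) == checksum, int(sector)

lines = ["aaaaa-bbb-z-y-x-123[abxyz]",
         "a-b-c-d-e-f-g-h-987[abcde]",
         "not-a-real-room-404[oarel]",
         "totally-real-room-200[decoy]"]
total_sector = sum(s for ok, s in map(is_real_room, lines) if ok)
print(total_sector)  # 1514 -- the decoy room (checksum "decoy") is excluded
```

The `(-counts[ch], ch)` sort key does both orderings in one pass: descending by count, then ascending alphabetically for ties.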
part2 With all the decoy data out of the way, it's time to decrypt this list and get moving. The room names are encrypted by a state-of-the-art shift cipher, which is nearly unbreakable without the right software. However, the information kiosk designers at Easter Bunny HQ were not expecting to deal with a master crypt...
for code in input_data: m = re.match(r'(.+)-(\d+)\[([a-z]*)\]', code) code, sector, checksum = m.groups() sector = int(sector) code = code.replace("-","") counts = collections.Counter(code).most_common() counts.sort(key=lambda k: (-k[1], k[0])) string_maps = string.ascii_lowercase cipher...
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
bicepjai/Puzzles
bsd-3-clause
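The shift-cipher step on its own, checked against the worked example from the puzzle (sector 343; function name is mine):

```python
def decrypt(name, sector):
    """Rotate each lowercase letter forward by the sector ID; dashes become spaces."""
    out = []
    for ch in name:
        if ch == '-':
            out.append(' ')
        else:
            out.append(chr((ord(ch) - ord('a') + sector) % 26 + ord('a')))
    return ''.join(out)

print(decrypt("qzmt-zixmtkozy-ivhz", 343))  # very encrypted name
```

Only `sector % 26` matters for the rotation, so even large sector IDs reduce to a small shift (343 % 26 == 5 here).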
A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sec...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import openmc
import openmc.mgxs as mgxs
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
pu239 = openmc.Nuclide('Pu239')
zr90 = openmc.Nuclide('Zr90')
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
With the nuclides we defined, we will now create a material for the homogeneous medium.
# Instantiate a Material and register the Nuclides inf_medium = openmc.Material(name='moderator') inf_medium.set_density('g/cc', 5.) inf_medium.add_nuclide(h1, 0.03) inf_medium.add_nuclide(o16, 0.015) inf_medium.add_nuclide(u235 , 0.0001) inf_medium.add_nuclide(u238 , 0.007) inf_medium.add_nuclide(pu239, 0.00003) inf_...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
With our material, we can now create a Materials object that can be exported to an actual XML file.
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.default_xs = '71c'
materials_file.export_to_xml()
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 5000 particles.
# OpenMC simulation parameters batches = 50 inactive = 10 particles = 5000 # Instantiate a Settings object settings_file = openmc.Settings() settings_file.batches = batches settings_file.inactive = inactive settings_file.particles = particles settings_file.output = {'tallies': True} # Create an initial uniform spatia...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in EnergyGroups class. We will also create a 6-delayed-group list.
# Instantiate a 100-group EnergyGroups object energy_groups = mgxs.EnergyGroups() energy_groups.group_edges = np.logspace(-3, 7.3, 101) # Instantiate a 1-group EnergyGroups object one_group = mgxs.EnergyGroups() one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]]) delayed_gr...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
We can now use the EnergyGroups object and delayed group list, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class: TotalXS TransportXS AbsorptionXS CaptureXS FissionXS...
# Instantiate a few different sections chi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True) prompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True) chi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=Tr...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Decay Rate object as follows.
decay_rate.tallies
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
The Beta object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each MGXS and MDGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input fil...
# Instantiate an empty Tallies object tallies_file = openmc.Tallies() # Add chi-prompt tallies to the tallies file tallies_file += chi_prompt.tallies.values() # Add prompt-nu-fission tallies to the tallies file tallies_file += prompt_nu_fission.tallies.values() # Add chi-delayed tallies to the tallies file tallies_f...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data. The statepoin...
# Load the tallies from the statepoint into each MGXS object
chi_prompt.load_from_statepoint(sp)
prompt_nu_fission.load_from_statepoint(sp)
chi_delayed.load_from_statepoint(sp)
delayed_nu_fission.load_from_statepoint(sp)
beta.load_from_statepoint(sp)
decay_rate.load_from_statepoint(sp)
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Voila! Our multi-group cross sections are now ready to rock 'n' roll! Extracting and Storing MGXS Data. Let's first inspect our delayed-nu-fission cross section by printing it to the screen after condensing it down to one group.
delayed_nu_fission.get_condensed_xs(one_group).get_xs()
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the mult...
df = delayed_nu_fission.get_pandas_dataframe()
df.head(10)

df = decay_rate.get_pandas_dataframe()
df.head(12)
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
beta.export_xs_data(filename='beta', format='excel')
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
The following code snippet shows how to export the chi-prompt and chi-delayed MGXS to the same HDF5 binary data store.
chi_prompt.build_hdf5_store(filename='mdgxs', append=True)
chi_delayed.build_hdf5_store(filename='mdgxs', append=True)
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each ...
# Get the decay rate data dr_tally = decay_rate.xs_tally dr_u235 = dr_tally.get_values(nuclides=['U235']).flatten() dr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten() # Compute the exponential decay of the precursors time = np.logspace(-3,3) dr_u235_points = np.exp(-np.outer(dr_u235, time)) dr_pu239_points =...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
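The decay curves computed in the cell above are just N(t)/N(0) = exp(-λt) evaluated on a log-spaced time grid; a stdlib sketch with an assumed decay rate (the real λ values come from the decay_rate tally):

```python
import math

decay_rate = 1.25  # assumed precursor decay constant lambda, in 1/s
times = [0.001 * 10 ** (i / 10) for i in range(41)]  # log-spaced, 1e-3 .. 1e1 s
fractions = [math.exp(-decay_rate * t) for t in times]

# After one mean lifetime (t = 1/lambda), the remaining fraction is 1/e:
print(math.exp(-decay_rate * (1 / decay_rate)))  # ≈ 0.3679
```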
Now let's compute the initial concentration of the delayed neutron precursors:
# Use tally arithmetic to compute the precursor concentrations precursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \ delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \ ...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
We can plot the delayed neutron fractions for each nuclide.
energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter] beta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) beta_u235 = beta_integrated.get_values(nuclides=['U235']) beta_pu239 = beta_integrated.get_values(nuclides=['Pu...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.
chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing')) chi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing')) chi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing')) chi_p_pu239 = np.squeeze(chi_prompt.get_xs(nucl...
examples/jupyter/mdgxs-part-i.ipynb
johnnyliu27/openmc
mit
Best parameter for n4
n4/n
maxloglikelihood.ipynb
muatik/dm
mit
After removing the constant part:
lh = []
for i in range(1, 100):
    P4 = i / 100.0  # sweep P4 uniformly over (0, 1); the original 1.0/i swept 1, 1/2, 1/3, ...
    lh.append((P4 ** n4) * ((1 - P4) ** (n - n4)))
plt.plot(lh)
maxloglikelihood.ipynb
muatik/dm
mit
Coin example: set up the problem
# denoting tails by 0 and heads by 1
TAIL = 0
HEAD = 1

# tossing the coin N times
N = 10

# 8 of N times tail occurs
TAIL_COUNT = 8

experiments = [TAIL] * TAIL_COUNT + [HEAD] * (N - TAIL_COUNT)
print(experiments, N)
maxloglikelihood.ipynb
muatik/dm
mit
Looking at the experiment shown above, we can easily predict that the probability of TAIL is 8/10, which is higher than the probability of HEAD, 2/10. It is easy to calculate the probabilities in this setup without getting involved in maximum likelihood. However, there are other problems which are not so obvious as this c...
PROBABILITY_SCALE = 100 likelihoods = [] for i in range(1, PROBABILITY_SCALE + 1): P_TAIL = float(i) / PROBABILITY_SCALE constant_part = ( math.factorial(N) / (math.factorial(TAIL_COUNT) * math.factorial(N-TAIL_COUNT))) likelihood = ( constant_part * np.power(P_TAIL, TAIL_...
maxloglikelihood.ipynb
muatik/dm
mit
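Running the grid search to completion (a stdlib sketch using math.comb for the constant part) confirms the likelihood peaks at p = TAIL_COUNT/N = 0.8:

```python
import math

N, TAIL_COUNT = 10, 8
best_p, best_like = None, -1.0
for i in range(1, 100):
    p = i / 100
    # binomial likelihood of seeing TAIL_COUNT tails in N tosses
    like = math.comb(N, TAIL_COUNT) * p ** TAIL_COUNT * (1 - p) ** (N - TAIL_COUNT)
    if like > best_like:
        best_p, best_like = p, like

print(best_p)  # 0.8 -- the maximum-likelihood estimate of P(TAIL)
```

This matches the closed-form binomial MLE k/n, which is why the "Best parameter" cell earlier simply computes n4/n.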
Create the siamese net feature extraction model
img_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='img')
net = mnist_model(img_placeholder, reuse=False)
Similar image retrieval.ipynb
ardiya/siamesenetwork-tensorflow
mit
Restore from checkpoint and calculate the features for all of the training data
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt = tf.train.get_checkpoint_state("model")
    saver.restore(sess, "model/model.ckpt")
    train_feat = sess.run(net, feed_dict={img_placeholder: train_images[:10000]})
Similar image retrieval.ipynb
ardiya/siamesenetwork-tensorflow
mit
Searching for similar test images in the training set based on the siamese features
#generate new random test image idx = np.random.randint(0, len_test) im = test_images[idx] #show the test image show_image(idx, test_images) print("This is image from id:", idx) #run the test image through the network to get the test features saver = tf.train.Saver() with tf.Session() as sess: sess.run(tf.global_...
Similar image retrieval.ipynb
ardiya/siamesenetwork-tensorflow
mit
After loading the image in both color and grayscale, we load a pretrained Haar cascade:
filename = 'data/haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(filename)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
The classifier will then detect faces present in the image using the following function call:
faces = face_cascade.detectMultiScale(img_gray, 1.1, 5)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Note that the algorithm operates only on grayscale images. That's why we saved two pictures of Lena, one to which we can apply the classifier (img_gray), and one on which we can draw the resulting bounding box (img_bgr):
color = (255, 0, 0)
thickness = 2
for (x, y, w, h) in faces:
    cv2.rectangle(img_bgr, (x, y), (x + w, y + h), color, thickness)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Then we can plot the image using the following code:
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB));
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Obviously, this picture contains only a single face. However, the preceding code will work even on images where multiple faces could be detected. Try it out! Implementing AdaBoost in scikit-learn In scikit-learn, AdaBoost is just another ensemble estimator. We can create an ensemble from 100 decision stumps as follows:
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators=100, random_state=456)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
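The boosting step that distinguishes AdaBoost from plain bagging can be sketched as one round of the classic weight update (a textbook sketch, not scikit-learn's SAMME internals; the weights and error here are made up for illustration):

```python
import math

# One boosting round: four samples with equal weights, and a weak learner
# (e.g. a decision stump) that misclassifies only sample 3.
weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]

err = sum(w for w, ok in zip(weights, correct) if not ok)  # weighted error: 0.25
alpha = 0.5 * math.log((1 - err) / err)                    # this learner's vote weight

# Down-weight correct samples, up-weight the misclassified one, then renormalize:
new = [w * math.exp(-alpha if ok else alpha) for w, ok in zip(weights, correct)]
total = sum(new)
new = [w / total for w in new]

print(round(alpha, 3))   # 0.549, i.e. 0.5 * ln(3)
print(round(new[3], 6))  # 0.5 -- the misclassified sample now carries half the weight
```

This is why boosting on stumps behaves so differently from a forest of stumps: each new stump is trained on a reweighted problem that emphasizes the previous stumps' mistakes.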
We can load the breast cancer set once more and split it 75-25:
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=456)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Then fit and score AdaBoost using the familiar procedure:
ada.fit(X_train, y_train)
ada.score(X_test, y_test)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
The result is remarkable, 97.9% accuracy! We might want to compare this result to a random forest. However, to be fair, we should make the trees in the forest all decision stumps. Then we will know the difference between bagging and boosting:
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100, max_depth=1,
                                random_state=456)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Of course, if we let the trees be as deep as needed, we might get a better score:
forest = RandomForestClassifier(n_estimators=100, random_state=456)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
notebooks/10.04-Implementing-AdaBoost.ipynb
mbeyeler/opencv-machine-learning
mit
Assignment with an = on lists does not make a copy. Instead, assignment makes the two variables point to the one list in memory.
b = colours ## does not copy list
3 - Lists.ipynb
sastels/Onboarding
mit
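A sketch of the aliasing behaviour, and the usual ways to take an actual copy (written in Python 3 syntax):

```python
colours = ['red', 'blue']
b = colours          # alias: both names point at the same list object
b.append('green')
print(colours)       # ['red', 'blue', 'green'] -- the "other" name sees the change

c = list(colours)    # shallow copy (colours[:] does the same thing)
c.append('yellow')
print(colours)       # still ['red', 'blue', 'green'] -- the copy is independent
```

`b is colours` is True while `c is colours` is False, which is the quickest way to check whether two names share one list.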
The "empty list" is just an empty pair of brackets [ ]. The '+' works to append two lists, so [1, 2] + [3, 4] yields [1, 2, 3, 4] (this is just like + with strings). FOR and IN Python's for and in constructs are extremely useful, and the first use of them we'll see is with lists. The for construct -- for var in list --...
squares = [1, 4, 9, 16]
sum = 0
for num in squares:
    sum += num
print sum
3 - Lists.ipynb
sastels/Onboarding
mit
If you know what sort of thing is in the list, use a variable name in the loop that captures that information such as "num", or "name", or "url". Since python code does not have other syntax to remind you of types, your variable names are a key way for you to keep straight what is going on. The in construct on its own ...
list = ['larry', 'curly', 'moe']
if 'curly' in list:
    print('yay')
3 - Lists.ipynb
sastels/Onboarding
mit
The for/in constructs are very commonly used in Python code and work on data types other than list, so you should just memorize their syntax. You may have habits from other languages where you manually iterate over a collection; in Python you should just use for/in. You can also use for/in to work on a string...
for i in range(100):
    print(i, end=' ')
3 - Lists.ipynb
sastels/Onboarding
mit
There is a variant xrange() which avoids the cost of building the whole list for performance sensitive cases (in Python 3, range() will have the good performance behavior and you can forget about xrange() ). While Loop Python also has the standard while-loop, and the break and continue statements work as in C++ and Jav...
a = ['a', 34, 3.14, [1, 2], 'c']
i = 0
while i < len(a):
    print(a[i])
    i = i + 3
3 - Lists.ipynb
sastels/Onboarding
mit
List Methods Here are some other common list methods. list.append(elem) -- adds a single element to the end of the list. Common error: does not return the new list, just modifies the original. list.insert(index, elem) -- inserts the element at the given index, shifting elements to the right. list.extend(list2) adds th...
list = ['larry', 'curly', 'moe']
list.append('shemp')
list
list.insert(0, 'xxx')
list
list.extend(['yyy', 'zzz'])
list
print(list.index('curly'))
list.remove('curly')
list
print(list.pop(1))
list
3 - Lists.ipynb
sastels/Onboarding
mit
Common error: note that the above methods do not return the modified list, they just modify the original list.
list = [1, 2, 3] print(list.append(4))
3 - Lists.ipynb
sastels/Onboarding
mit
So list.append() doesn't return a value. 'None' is a Python value that means there is no value (roll with it). It's great for situations where in other languages you'd set variables to -1 or something. List Build Up One common pattern is to start a list as the empty list [], then use append() or extend() to add elements...
list = []
list.append('a')
list.append('b')
list
3 - Lists.ipynb
sastels/Onboarding
mit
List Slices Slices work on lists just as with strings, and can also be used to change sub-parts of the list.
list = ['a', 'b', 'c', 'd']
list[1:-1]
list[0:2] = 'z'
list
3 - Lists.ipynb
sastels/Onboarding
mit
You get to choose which columns, left and right, serve as "gear teeth" for synchronizing rows (sewing them together). Or choose the index, not a column. In the expression below, we go with the one synchronizing element: the index, on both input tables.
pd.merge(dfA, dfB, left_index=True, right_index=True)
import string
dfA.index = list(string.ascii_lowercase[:8])  # new index, of letters instead
dfA
dfB.index = list(string.ascii_lowercase[5:8+5])  # overlapping letters
dfB
pd.merge(dfA, dfB, left_index=True, right_index=True)  # intersection, not the union
pd.me...
Merging DataFrames.ipynb
4dsolutions/Python5
mit
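The index-as-gear-teeth idea can be sketched on two small frames (the original dfA/dfB are not shown in this excerpt, so these are stand-ins):

```python
# Hedged sketch: merging on the index, inner (intersection) vs outer (union).
import pandas as pd

dfA = pd.DataFrame({'x': [1, 2, 3]}, index=['a', 'b', 'c'])
dfB = pd.DataFrame({'y': [10, 20, 30]}, index=['b', 'c', 'd'])

inner = pd.merge(dfA, dfB, left_index=True, right_index=True)               # keeps 'b', 'c'
outer = pd.merge(dfA, dfB, left_index=True, right_index=True, how='outer')  # keeps all four labels
print(inner.index.tolist())
print(outer.index.tolist())
```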
In exp3 we have the result of the change of variables in the equation. Let's see what it contains
exp3
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Truly frightening. Let's see if it can be simplified
exp4 = exp3.simplify()
exp4
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
The equation looks similar to the original one, but unfortunately the argument of the sin function is not simplified. Let's take those arguments separately and ask for them to be simplified
((-epsilon*x1**2 - epsilon*x1*y1 + x1**2 + 2*x1*y1 + y1**2)/(epsilon*x1 - x1 - y1)).simplify()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
The argument of the sin function is -x1-y1, which is exactly what we need to recover the original equation. Now let's find canonical coordinates
x1 = x*(x+y)/(y+(1+epsilon)*x)
y1 = (epsilon*x+y)*(x+y)/(y+(1+epsilon)*x)
xi = x1.diff(epsilon).subs(epsilon, 0)
eta = y1.diff(epsilon).subs(epsilon, 0)
xi, eta
(eta/xi).simplify()
dsolve(y.diff(x)-eta/xi, y)
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
We see that r = y + x
r = symbols('r')
s = Integral((1/xi).subs(y, r-x), x).doit()
s
s = s.subs(r, x+y)
s
s.expand()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
The variable s would be 1 + y/x. But canonical variables are not unique: we saw that (F(r), G(r)+s) is canonical for any nonzero F and any G. In particular, taking F(r) = r and G(r) = -1, we can choose as canonical coordinates r = x + y and s = y/x. Let's make the substitution into canonical coordinates
r = x+y
s = y/x
exp5 = r.diff(x)/s.diff(x)
exp6 = exp5.subs(y.diff(x), (x**2*sin(x+y)+y)/x/(1-x*sin(x+y)))
exp6.simplify()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
We see that the result is 1/sin(r). Nevertheless, out of curiosity, let's carry out the substitution as if we had not yet noticed the result. Let's find the inverse change (r,s) -> (x,y)
r2, s2, x2, y2 = symbols('r2,s2,x2,y2')
solve([r2-x2-y2, s2-y2/x2], [x2, y2])
exp6.subs({y: r2*s2/(s2+1), x: r2/(s2+1)}).simplify()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
The resulting equation r' = 1/sin(r) is easily solved by hand. We get cos(r) = s + C -> cos(x+y) = y/x + C, which is the solution of the original equation. Exercise 2.2(c), p. 41 of Hydon. We must find the Lie group whose infinitesimal generator is $$X=2xy\partial_x+(y^2-x^2)\partial_y$$ The idea is 1) Find canonical c...
x, y, r, s = symbols('x,y,r,s')
f = Function('f')(x)
dsolve(f.diff(x)-(f**2-x**2)/2/x/f)
Integral(1/(2*x*sqrt(x*(r-x))), x).doit()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
It does not know how to do it. If we make the substitution $x=u^2$ we get $$\int\frac{dx}{2x\sqrt{x(r-x)}}=\int\frac{du}{u^2\sqrt{r-u^2}}.$$ And this one it does know how to solve.
u = symbols('u')
Integral(1/u**2/sqrt(r-u**2), u).doit()
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
$s=-\frac{1}{r}\sqrt{\frac{r}{u^2}-1}= -\frac{1}{r} \sqrt{ \frac{r-x}{x}}= -\frac{x}{x^2+y^2} \sqrt{ \frac{y^2/x}{x}}=-\frac{y}{x^2+y^2}$ Now we write $$\hat{r}=r\quad\text{and}\quad\hat{s}=s+\epsilon$$ in terms of $x,y$.
x, y, xn, yn, epsilon = symbols('x,y,\hat{x},\hat{y},epsilon')
A = solve([(xn**2+yn**2)/xn-(x**2+y**2)/x, -yn/(xn**2+yn**2)+y/(x**2+y**2)-epsilon], [xn, yn])
A
A[0]
A = Matrix(A[0])
A
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Let's check that $\left.\frac{d}{d\epsilon}(\hat{x},\hat{y})\right|_{\epsilon=0}=(2xy,y^2-x^2)$
A.diff(epsilon).subs(epsilon,0)
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
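As a hedged, self-contained re-check of that generator computation, the solved transformation can be written out in closed form and differentiated at $\epsilon=0$:

```python
# Sketch: differentiate the one-parameter map at epsilon = 0 and compare
# with the generator (2*x*y, y**2 - x**2). The closed form below is assumed
# from the solve() result in this exercise.
from sympy import symbols, Matrix, simplify

x, y, epsilon = symbols('x y epsilon')
D = epsilon**2*(x**2 + y**2) - 2*epsilon*y + 1
A = Matrix([x/D, -(epsilon*x**2 + epsilon*y**2 - y)/D])

gen = A.diff(epsilon).subs(epsilon, 0)
print(simplify(gen))   # expect Matrix([[2*x*y], [y**2 - x**2]])
```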
Let's check the Lie group property. We define the operator $T$ with a lambda
T = lambda x, y, epsilon: Matrix([x/(epsilon**2*(x**2+y**2)-2*epsilon*y+1),
                                  -(epsilon*x**2+epsilon*y**2-y)/(epsilon**2*(x**2+y**2)-2*epsilon*y+1)])
epsilon_1, epsilon_2 = symbols('epsilon_1,epsilon_2')
expr = T(T(x, y, epsilon_1)[0], T(x, y, epsilon_1)[1], epsilon_2) - T(x, y, epsilon_1+epsilon_2)
expr
simplify(expr)
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
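The same group-law check can be run standalone: composing the map for $\epsilon_1$ and then $\epsilon_2$ should equal the map for $\epsilon_1+\epsilon_2$, so the difference must cancel to zero as a rational identity.

```python
# Hedged sketch of the Lie group property check, self-contained.
from sympy import symbols, Matrix, cancel

x, y = symbols('x y')
e1, e2 = symbols('epsilon_1 epsilon_2')

def T(x, y, epsilon):
    D = epsilon**2*(x**2 + y**2) - 2*epsilon*y + 1
    return Matrix([x/D, -(epsilon*x**2 + epsilon*y**2 - y)/D])

p = T(x, y, e1)
expr = T(p[0], p[1], e2) - T(x, y, e1 + e2)
print(cancel(expr[0]), cancel(expr[1]))   # both reduce to 0
```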
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed rep...
# Size of the encoding layer (the hidden layer)
encoding_dim = 32  # feel free to change this value
image_size = mnist.train.images.shape[1]
print(image_size)

# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size)...
nd101 Deep Learning Nanodegree Foundation/DockerImages/19_Autoencoders/notebooks/autoencoder/Simple_Autoencoder.ipynb
anandha2017/udacity
mit
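The placeholder graph above targets TensorFlow 1.x. As a framework-free illustration of the same encode/decode idea, here is a hedged numpy sketch of a tiny ReLU-hidden-layer autoencoder trained by full-batch gradient descent on random stand-in data (not the MNIST pipeline above; all sizes and the learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 784))          # stand-in for flattened, [0, 1]-valued images
encoding_dim = 32

W_enc = rng.normal(0, 0.01, (784, encoding_dim))
W_dec = rng.normal(0, 0.01, (encoding_dim, 784))
lr = 0.5

losses = []
for _ in range(50):
    h = np.maximum(X @ W_enc, 0)            # ReLU hidden layer (the compressed code)
    out = 1/(1 + np.exp(-(h @ W_dec)))      # sigmoid reconstruction
    err = out - X
    losses.append(np.mean(err**2))
    # backprop of the MSE loss through sigmoid and ReLU
    d_out = 2*err/X.size * out*(1 - out)
    g_dec = h.T @ d_out
    d_h = (d_out @ W_dec.T) * (h > 0)
    g_enc = X.T @ d_h
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(losses[0], losses[-1])    # reconstruction loss before vs after training
```

Full-batch descent with a small step makes the loss decrease deterministically, which is the whole point the TF1 graph above sets up with its optimizer.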
Examples
testing = (__name__ == '__main__')
if testing:
    ! jupyter nbconvert --to python ptrans.ipynb
    import numpy as np
    %matplotlib inline
    import matplotlib.image as mpimg
    import matplotlib.pyplot as plt
    import sys, os
    ia898path = os.path.abspath('../../')
    if ia898path not in sys.path:
        sys...
src/ptrans.ipynb
robertoalotufo/ia898
mit
Example 1 Numeric examples in 2D and 3D.
if testing:
    # 2D example
    f = np.arange(15).reshape(3, 5)
    print("Original 2D image:\n", f, "\n\n")
    print("Image translated by (0,0):\n", ia.ptrans(f, (0,0)).astype(int), "\n\n")
    print("Image translated by (0,1):\n", ia.ptrans(f, (0,1)).astype(int), "\n\n")
    print("Image translated by (-1,2):\n", ia.ptran...
src/ptrans.ipynb
robertoalotufo/ia898
mit
Example 2 Image examples in 2D
if testing:
    # 2D example
    f = mpimg.imread('../data/cameraman.tif')
    plt.imshow(f, cmap='gray'), plt.title('Original 2D image - Cameraman')
    plt.imshow(ia.ptrans(f, np.array(f.shape)//3), cmap='gray'), plt.title('Cameraman periodically translated')
src/ptrans.ipynb
robertoalotufo/ia898
mit
Equation For the 2D case we have $$\begin{matrix} t &=& (t_r, t_c),\\ g = f_t &=& f_{t_r,t_c},\\ g(rr,cc) &=& f((rr-t_r)\bmod H, (cc-t_c)\bmod W), & 0 \leq rr < H,\ 0 \leq cc < W,\\ \mbox{where} & & \\ a \bmod N &=& (a + k N) \bmod N, & k \in \mathbb{Z}. \end{matrix}$$ The equation above can be extended to n-dime...
if testing:
    print('testing ptrans')
    f = np.array([[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15]], 'uint8')
    print(repr(ia.ptrans(f, [-1,2]).astype(np.uint8)) ==
          repr(np.array([[ 9, 10,  6,  7,  8],
                         [14, 15, 11, 12, 13],
                         [ 4,  5,  1,  2,  3]], 'uint8')))
src/ptrans.ipynb
robertoalotufo/ia898
mit
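The periodic translation defined by the equation above is exactly what numpy's roll does, so a minimal stand-in for `ia.ptrans` (a sketch of the idea, not the library's implementation) can be written and checked against the expected array from the test cell:

```python
import numpy as np

def ptrans_sketch(f, t):
    # g(rr, cc) = f((rr - t_r) mod H, (cc - t_c) mod W)
    return np.roll(f, shift=tuple(t), axis=(0, 1))

f = np.array([[ 1,  2,  3,  4,  5],
              [ 6,  7,  8,  9, 10],
              [11, 12, 13, 14, 15]], 'uint8')
g = ptrans_sketch(f, [-1, 2])
print(g)
# [[ 9 10  6  7  8]
#  [14 15 11 12 13]
#  [ 4  5  1  2  3]]
```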
Preprocessing Task: Use TF-IDF Vectorization to create a vectorized document term matrix. You may want to explore the max_df and min_df parameters.
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
dtm = tfidf.fit_transform(quora['Question'])
dtm
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/03-LDA-NMF-Assessment-Project-Solutions.ipynb
rishuatgithub/MLPy
apache-2.0
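The vectorization step can be sketched on a tiny stand-in corpus (the quora dataframe itself is not part of this excerpt; the `max_df`/`min_df` pruning from above is omitted here because it only makes sense on a larger corpus):

```python
# Hedged sketch: TF-IDF document-term matrix on three toy questions.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["How do I learn python?",
        "How do I learn machine learning?",
        "What is the best python library for machine learning?"]
tfidf = TfidfVectorizer(stop_words='english')
dtm = tfidf.fit_transform(docs)
print(dtm.shape)   # (n_documents, n_terms)
```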
Non-negative Matrix Factorization TASK: Using Scikit-Learn, create an instance of NMF with 20 expected components (use random_state=42).
from sklearn.decomposition import NMF
nmf_model = NMF(n_components=20, random_state=42)
nmf_model.fit(dtm)
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/03-LDA-NMF-Assessment-Project-Solutions.ipynb
rishuatgithub/MLPy
apache-2.0
TASK: Print out the top 15 most common words for each of the 20 topics.
for index, topic in enumerate(nmf_model.components_):
    print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
    print([tfidf.get_feature_names()[i] for i in topic.argsort()[-15:]])
    print('\n')
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/03-LDA-NMF-Assessment-Project-Solutions.ipynb
rishuatgithub/MLPy
apache-2.0
TASK: Add a new column to the original quora dataframe that labels each question into one of the 20 topic categories.
quora.head()
topic_results = nmf_model.transform(dtm)
topic_results.argmax(axis=1)
quora['Topic'] = topic_results.argmax(axis=1)
quora.head(10)
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/03-LDA-NMF-Assessment-Project-Solutions.ipynb
rishuatgithub/MLPy
apache-2.0
Modifying data in-place It is often necessary to modify data once you have loaded it into memory. Common examples of this are signal processing, feature extraction, and data cleaning. Some functionality is pre-built into MNE-python, though it is also possible to apply an arbitrary function to the data.
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load an example dataset; the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample',
                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True)
raw = raw.crop(0, 10)
print(raw)
0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
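The sample dataset needs a download, so here is a hedged, numpy-only stand-in for the in-place idea: modify an (n_channels, n_times) buffer directly, the way operations on preloaded data modify the underlying array. The array and the demeaning step are illustrative assumptions, not the MNE API:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 1.0, size=(4, 1000))   # 4 "channels", 1000 "samples"

# remove each channel's mean, writing back into the same array in place
data -= data.mean(axis=1, keepdims=True)
print(data.mean(axis=1))   # each channel now has (near-)zero mean
```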