After designating one of the k segments as the validation set, we train a model using the rest of the data. To form the remainder, we take the slices (0:start) and (end+1:n) of the data and paste them together. SFrame has an append() method that pastes together two disjoint sets of rows originating from a common dataset. For inst...
train4 = train_valid_shuffled[:start].append(train_valid_shuffled[end+1:])
print(len(train4))
print(n - len(train4))
*Source: ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb (isendel/machine-learning, apache-2.0)*
To verify that we have extracted the right elements, run the following cell, which computes the average price of the data with the fourth segment excluded. When rounded to the nearest whole number, the average should be $539,450.
print(int(round(train4['price'].mean(), 0)))
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function ret...
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    validation_errors = []
    for i in range(k):
        n = len(data)
        start = (n*i)/k
        end = (n*(i+1))/k
        validation_set = data[start:end + 1]
        training_set = data[0:start].append(data[end + 1:n])
        model...
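The truncated function above can be completed as a sketch. Here `linear_model.Ridge` is scikit-learn, as in the notebook's later cells (though `normalize=True` is omitted, since recent scikit-learn removed it); pandas slicing with `pd.concat` stands in for SFrame's `append`, and integer division (`//`) replaces the `/` of the original, which breaks under Python 3:

```python
import numpy as np
import pandas as pd
from sklearn import linear_model

def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    """Average validation RSS over k contiguous folds (sketch)."""
    n = len(data)
    validation_errors = []
    for i in range(k):
        # Fold i occupies rows [start, end] (inclusive).
        start = (n * i) // k
        end = (n * (i + 1)) // k - 1
        validation_set = data.iloc[start:end + 1]
        training_set = pd.concat([data.iloc[:start], data.iloc[end + 1:]])
        model = linear_model.Ridge(alpha=l2_penalty)
        model.fit(training_set[features_list], training_set[output_name])
        predictions = model.predict(validation_set[features_list])
        residuals = predictions - validation_set[output_name]
        validation_errors.append((residuals ** 2).sum())
    return np.mean(validation_errors)
```

The inclusive `end` avoids the off-by-one in the original cell, where `end = (n*(i+1))/k` followed by `[start:end + 1]` lets adjacent folds overlap by one row.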
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following: * We will again be aiming to fit a 15th-order polynomial model using the sqft_living input * For l2_penalty in [10^1, 10^1...
import sys

validation_errors = []
lowest_error = sys.float_info.max
penalty = 0
for l2_penalty in np.logspace(1, 7, num=13):
    data_poly, features = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
    data_poly['price'] = train_valid_shuffled['price']
    average_validation_error = k_fold_cross_validation(...
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation? You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
# Plot the l2_penalty values on the x axis and the cross-validation error on the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(np.logspace(1, 7, num=13), validation_errors, '-')
plt.xscale('log')
print(validation_errors)
Once you have found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
data_poly, features = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
model = linear_model.Ridge(normalize=True, alpha=penalty)
model.fit(data_poly[features], train_valid_shuffled['price'])
QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
poly_data_test, features = polynomial_sframe(test['sqft_living'], 15)
predictions = model.predict(poly_data_test[features])
test_errors = predictions - test['price']
RSS_test = test_errors.T.dot(test_errors)
RSS_test
3. Affine decomposition
For this problem the affine decomposition is straightforward:
$$m(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{m}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}uv \, d\boldsymbol{x}}_{m_0(u,v)},$$
$$a(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega_1}\n...$$
class UnsteadyThermalBlock(ParabolicCoerciveProblem):
    # Default initialization of members
    def __init__(self, V, **kwargs):
        # Call the standard initialization
        ParabolicCoerciveProblem.__init__(self, V, **kwargs)
        # ... and also store FEniCS data structures for assembly
        assert "sub...
*Source: tutorials/06_thermal_block_unsteady/tutorial_thermal_block_unsteady_1_pod.ipynb (mathLab/RBniCS, lgpl-3.0)*
4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook.
mesh = Mesh("data/thermal_block.xml")
subdomains = MeshFunction("size_t", mesh, "data/thermal_block_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/thermal_block_facet_region.xml")
4.2. Create Finite Element space (Lagrange P1, two components)
V = FunctionSpace(mesh, "Lagrange", 1)
4.3. Allocate an object of the UnsteadyThermalBlock class
problem = UnsteadyThermalBlock(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.1, 10.0), (-1.0, 1.0)]
problem.set_mu_range(mu_range)
problem.set_time_step_size(0.05)
problem.set_final_time(3)
4.4. Prepare reduction with a POD-Galerkin method
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20, nested_POD=4)
reduction_method.set_tolerance(1e-8, nested_POD=1e-4)
4.5. Perform the offline phase
reduction_method.initialize_training_set(100)
reduced_problem = reduction_method.offline()
4.6. Perform an online solve
online_mu = (8.0, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem, every=5, interval=500)
4.7. Perform an error analysis
reduction_method.initialize_testing_set(10)
reduction_method.error_analysis()
4.8. Perform a speedup analysis
reduction_method.initialize_testing_set(10)
reduction_method.speedup_analysis()
DICS for power mapping In this tutorial, we'll simulate two signals originating from two locations on the cortex. These signals will be sinusoids, so we'll be looking at oscillatory activity (as opposed to evoked activity). We'll use dynamic imaging of coherent sources (DICS) [1]_ to map out spectral power along the co...
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
*Source: 0.20/_downloads/8763e6c899a8b9971980be1308b5f693/plot_dics.ipynb (mne-tools/mne-tools.github.io, bsd-3-clause)*
Setup We first import the required packages to run this tutorial and define a list of filenames for various things we'll be using.
import os.path as op

import numpy as np
from scipy.signal import welch, coherence, unit_impulse
from matplotlib import pyplot as plt

import mne
from mne.simulation import simulate_raw, add_noise
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency im...
Data simulation The following function generates a timeseries that contains an oscillator, whose frequency fluctuates a little over time, but stays close to 10 Hz. We'll use this function to generate our two signals.
sfreq = 50.  # Sampling frequency of the generated signal
n_samp = int(round(10. * sfreq))
times = np.arange(n_samp) / sfreq  # 10 seconds of signal
n_times = len(times)

def coh_signal_gen():
    """Generate an oscillating signal.

    Returns
    -------
    signal : ndarray
        The generated signal.
    """
    ...
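The generator cell is truncated above; a minimal stand-in with the same behaviour (an oscillator whose instantaneous frequency wanders slightly around 10 Hz) might look like this. The `base_freq`, `freq_jitter` and `seed` parameters are my own additions, not MNE's:

```python
import numpy as np

sfreq = 50.                         # sampling frequency (Hz)
n_times = int(round(10. * sfreq))  # 10 seconds of signal
times = np.arange(n_times) / sfreq

def coh_signal_gen(base_freq=10., freq_jitter=0.5, seed=None):
    """Oscillator whose frequency fluctuates around base_freq (sketch)."""
    rng = np.random.default_rng(seed)
    # Random-walk the instantaneous frequency around base_freq ...
    freq = base_freq + np.cumsum(rng.normal(0., freq_jitter / sfreq, n_times))
    # ... then integrate frequency to get phase.
    phase = 2 * np.pi * np.cumsum(freq) / sfreq
    return 1e-9 * np.sin(phase)  # nAm-scale amplitude, as in the plots below
```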
Let's simulate two timeseries and plot some basic information about them.
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()

fig, axes = plt.subplots(2, 2, figsize=(8, 4))

# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
       title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2...
Now we put the signals at two locations on the cortex. We construct a :class:mne.SourceEstimate object to store them in. The timeseries will have a part where the signal is active and a part where it is not. The techniques we'll be using in this tutorial depend on being able to contrast data that contains the signal of...
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
vertices = [[146374], [33830]]

# Construct SourceEstimates that describe the signals at the cortical level.
data = np.vstack((signal1, signal2))
stc_signal = mne.SourceEstimate(
    data, vertices, tm...
Before we simulate the sensor-level data, let's define a signal-to-noise ratio. You are encouraged to play with this parameter and see the effect of noise on our results.
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
Now we run the signal through the forward model to obtain simulated sensor data. To save computation time, we'll only simulate gradiometer data. You can try simulating other types of sensors as well. Some noise is added based on the baseline noise covariance matrix from the sample dataset, scaled to implement the desir...
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])

# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)

# Define a covariance matrix...
We create an :class:mne.Epochs object containing two trials: one with both noise and signal, and one with just noise.
events = mne.find_events(raw, initial_event=True)
tmax = (len(stc_signal.times) - 1) / sfreq
epochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2),
                    tmin=0, tmax=tmax, baseline=None, preload=True)
assert len(epochs) == 2  # ensure that we got the two expected events

# Plot some of the ch...
Power mapping With our simulated dataset ready, we can now pretend to be researchers that have just recorded this from a real subject and are going to study what parts of the brain communicate with each other. First, we'll create a source estimate of the MEG data. We'll use both a straightforward MNE-dSPM inverse solut...
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)

# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)

# Take the root-mean square along the time dimension and plot the result...
We will now compute the cortical power map at 10 Hz using a DICS beamformer. A beamformer constructs, for each vertex, a spatial filter that aims to pass activity originating from that vertex while damping activity from other sources as much as possible. The :func:mne.beamformer.make_dics function has many switche...
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])

# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
    info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_f...
Feature Map If the weights are trained to produce an output of a=1 for a particular image pattern, the hidden layer marks the positions where that feature occurs => a feature map. 'Feature' here does not mean the input data itself, but a particular pattern in the input data that is useful for image classification. <img src="http://www.kdnuggets.com/wp-content/uploads/computer-vision-filters.jpg"> Multiple Feature Maps A single shared w...
%cd /home/dockeruser/neural-networks-and-deep-learning/src
*Source: ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160705ํ™”์ˆ˜_25,26์ผ์ฐจ_๋‰ด๋Ÿด ๋„คํŠธ์›Œํฌ Neural Network/6.CNN.ipynb (kimkipyo/dss_git_kkp, mit)*
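The idea can be made concrete with a tiny NumPy sketch: sliding one shared filter over an image produces a map of where the filter's pattern occurs. The helper `feature_map` below is illustrative only, not part of network3:

```python
import numpy as np

def feature_map(image, weights, bias=0.0):
    """'Valid' cross-correlation of one shared filter over an image.

    Every output pixel reuses the same weights, so the result marks
    where in the input the filter's pattern occurs: a feature map.
    """
    H, W = image.shape
    h, w = weights.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * weights) + bias
    return out

# A 28x28 input and a 5x5 filter give a 24x24 feature map,
# matching the ConvPoolLayer shapes used below.
fmap = feature_map(np.zeros((28, 28)), np.ones((5, 5)))
```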
Normal MLP
import network3
from network3 import Network
from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer

training_data, validation_data, test_data = network3.load_data_shared()
mini_batch_size = 10
net = Network([
    FullyConnectedLayer(n_in=784, n_out=100),
    SoftmaxLayer(n_in=100, n_out...
Add Convolutional + Pooling Layer
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    FullyConnectedLayer(n_in=20*12*12, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, ...
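The `n_in=20*12*12` above is pure shape bookkeeping, which can be checked explicitly: a 5x5 'valid' filter over a 28x28 image yields 24x24 feature maps, 2x2 pooling halves that to 12x12, and there are 20 filters:

```python
# Shape bookkeeping for the ConvPoolLayer above (sketch)
image_size, filter_size, pool = 28, 5, 2
conv_out = image_size - filter_size + 1    # 'valid' convolution output: 24
pool_out = conv_out // pool                # after 2x2 pooling: 12
n_filters = 20
n_in_fc = n_filters * pool_out * pool_out  # inputs to FullyConnectedLayer
print(n_in_fc)                             # 2880 = 20*12*12
```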
Add Additional Convolution + Pool Layer The role of the second convolutional-pooling layer is to capture the patterns that appear in the feature maps: features of the feature maps.
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5),
                  poolsize...
Apply ReLU Improves performance compared with sigmoid activation functions.
from network3 import ReLU

net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  ...
Read main data file containing patient visits to short stay unit
Here are the first few lines from our csv file containing the patient stop data:
PatID,InRoomTS,OutRoomTS,PatType,Severity,PatTypeSeverity
1,01/01/96 07:44 AM,01/01/96 08:50 AM,IVT,1,IVT_1
2,01/01/96 08:28 AM,01/01/96 09:20 AM,IVT,1,IVT_1
3,01/01/96 11:44 A...
file_stopdata = '../data/ShortStay2.csv'
stops_df = pd.read_csv(file_stopdata, parse_dates=['InRoomTS', 'OutRoomTS'])
stops_df.info()
*Source: notebooks/basic_usage_shortstay_unit_multicats.ipynb (misken/hillmaker-examples, apache-2.0)*
Check out the top and bottom of stops_df.
stops_df.head(7)
stops_df.tail(5)
Enhancement to handle multiple categorical fields Notice that the PatType field contains strings while Severity is integer data. In the previous version of hillmaker (v0.1.1), you could only specify a single category field and it needed to be of type string. So, computing occupancy statistics by Severity required some data...
stops_df.groupby('PatType')['PatID'].count()
stops_df.groupby('Severity')['PatID'].count()
No obvious problems. We'll assume the data was all read in correctly. Creating occupancy summaries The primary function in Hillmaker is called make_hills and plays the same role as the Hillmaker function in the original Access VBA version of Hillmaker. Let's get a little help on this function.
help(hm.make_hills)
Most of the parameters are similar to those in the original VBA version, though a few new ones have been added. Since the VBA version used an Access database as the container for its output, new parameters were added to control output to csv files and/or pandas DataFrames instead. Example 1: 60 minute bins, PatientTyp...
# Required inputs
scenario = 'example1'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'

# Optional inputs
cat_fld_name = ['PatType', 'Severity']
verbose = 1
output = './output'
Now we'll call the main make_hills function. We won't capture the return values but will simply take the default behavior of having the summaries exported to csv files. You'll see that the filenames will contain the scenario value.
hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end,
              catfield=cat_fld_name, export_path=output, verbose=verbose)
Let's list the contents of the output folder containing the csv files created by hillmaker. For Windows users, the following is the Linux ls command. The leading exclamation point tells Jupyter that this is an operating system command. To list the files in Windows, the equivalent would be: !dir output\example1*.csv
!ls ./output/example1*.csv
There are three groups of statistical summary files related to arrivals, departures and occupancy. In addition, the intermediate "bydatetime" files are also included. The filenames indicate whether or not the statistics are by category, as well as whether they are by day of week and time of day. Occupancy, arrival and depar...
pd.set_option('precision', 2)
pd.read_csv("./output/example1_occupancy_PatType_Severity_dow_binofday.csv").iloc[100:110]
Statistics by day and time but aggregated over all the categories are also available.
pd.read_csv("./output/example1_occupancy_dow_binofday.csv").iloc[20:40]
For those files without "dow_binofday" in their name, the statistics are by category only.
pd.read_csv("./output/example1_occupancy_PatType_Severity.csv").head(20)
There's even a summary that aggregates over categories and time. Obviously, it contains a single row.
pd.read_csv("./output/example1_occupancy.csv")
Intermediate bydatetime files The intermediate tables used to compute the summaries we just looked at, are also available both by category and overall. Each row is a single time bin (e.g. date and hour of day). Note that the occupancy values are not necessarily integer since hillmaker's default behavior is to use fract...
pd.read_csv("./output/example1_bydatetime_datetime.csv").iloc[100:125]
pd.read_csv("./output/example1_bydatetime_PatType_Severity_datetime.csv").iloc[100:125]
If you've used the previous version of Hillmaker, you'll recognize these files. The default behavior has changed to compute fewer percentiles but any percentiles you want can be computed by specifying them in the percentiles argument to make_hills. Example 2: Compute totals for individual category fields, select perce...
# Required inputs
scenario = 'example2'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'

# Optional inputs
cat_fld_name = ['PatType', 'Severity']
totals = 2
percentiles = [0.5, 0.95]
verbose = 0  # Silent mode
output = './output'
export_bydatetime_csv = True
export_summaries_c...
Now we'll call make_hills and tuck the results (a dictionary of DataFrames) into a local variable. Then we can explore them a bit with Pandas.
example2_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end,
                             cat_fld_name, totals=totals, export_path=output, verbose=verbose,
                             export_bydatetime_csv=export_bydatetime_csv,
                             export_summaries_csv=export_summ...
The example2_dfs return value consists of several nested dictionaries eventually leading to pandas DataFrames as values. Let's explore the key structure. It's pretty simple.
example2_dfs.keys()
Let's explore the 'summaries' key first. As you might guess, this will eventually lead to the statistical summary DataFrames.
example2_dfs['summaries'].keys()
example2_dfs['summaries']['nonstationary'].keys()
example2_dfs['summaries']['nonstationary']['Severity_dow_binofday'].keys()
example2_dfs['summaries']['nonstationary']['Severity_dow_binofday']['occupancy']
The stationary summaries are similar except that there are no day of week and time bin of day related files. Now let's look at the 'bydatetime' key at the top level. Yep, gonna lead to bydatetime DataFrames.
example2_dfs['bydatetime'].keys()
example2_dfs['bydatetime']['PatType_Severity_datetime']
Example 3 - Workload hills instead of occupancy Assume that we are doing a staffing analysis and want to look at the distribution of workload by time of day and day of week. In order to translate patients to workload, we'll use simple staff to patient ratios based on severity. For example, let's assume that for Severit...
severity_to_workload = {'1': 0.25, '2': 0.5}
stops_df['workload'] = stops_df['Severity'].map(lambda x: severity_to_workload[str(x)])
stops_df.head(10)
Now we can create workload hills. I'm just going to compute overall workload by not specifying a category field. Notice the use of the occ_weight_field argument.
# Required inputs
scenario = 'example3'
in_fld_name = 'InRoomTS'
out_fld_name = 'OutRoomTS'
start = '1/1/1996'
end = '3/30/1996 23:45'

# Optional inputs
occ_weight_field = 'workload'
verbose = 0
output = './output'

example3_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, ...
We can check the overall mean workload in example3 by doing a weighted average of the mean occupancies by Severity from example2 with the workload ratios as weights.
import numpy as np

mean_occ = np.asarray(example2_dfs['summaries']['stationary']['Severity']['occupancy'].loc[:, 'mean'])
mean_occ
ratios = [severity_to_workload[str(i + 1)] for i in range(2)]
ratios
overall_mean_workload = np.dot(mean_occ, ratios)
overall_mean_workload
A sessionmaker does not have a query property - we don't expect it to, after all it's for making sessions, not queries:
# sm.query(Voevent).count() #<--Raises
*Source: notebooks/notes_on_scoped_session.ipynb (timstaley/voeventdb, gpl-2.0)*
So, make a session:
regular_session = sm()
regular_session.query(Voevent).count()
Ok. We can do the same sort of thing with a scoped session:
scoped_session = scoped_sm()
scoped_session.query(Voevent).count()
However - shenanigans! - a sqlalchemy.orm.scoped_session (i.e. a scoped-session factory) has a .query attribute, created via the query_property method. AFAICT this is syntactic sugar, proxying to the query attribute of the underlying session. This is documented here: http://docs.sqlalchemy.org/en/rel_1_0/orm/contextual.htm...
scoped_sm.query(Voevent).count()
Running TCAV This notebook walks you through things you need to run TCAV. Before running this notebook, run the following to download all the data. ``` cd tcav/tcav_examples/image_models/imagenet python download_and_make_datasets.py --source_dir=YOUR_PATH --number_of_images_per_folder=50 --number_of_random_folders=3 `...
%load_ext autoreload
%autoreload 2

import tcav.activation_generator as act_gen
import tcav.cav as cav
import tcav.model as model
import tcav.tcav as tcav
import tcav.utils as utils
import tcav.utils_plot as utils_plot  # utils_plot requires matplotlib
import os
import tensorflow as tf
*Source: Run_TCAV_on_colab.ipynb (tensorflow/tcav, apache-2.0)*
Step 1. Store concept and target class images to local folders and tell TCAV where they are. source_dir: where images of concepts, target class and random images (negative samples when learning CAVs) live. Each should be a sub-folder within this directory. Note that random image directories can be in any name. In this ...
# This is the name of your model wrapper (InceptionV3 and GoogleNet are provided in model.py)
model_to_run = 'GoogleNet'
# the name of the parent directory that results are stored (only if you want to cache)
project_name = 'tcav_class_test'
working_dir = '/content/tcav/tcav'
# where activations are stored (only if your...
Step 2. Write your model wrapper Next step is to tell TCAV how to communicate with your model. See model.GoogleNetWrapper_public for details. You can define a subclass of ModelWrapper abstract class to do this. Let me walk you thru what each function does (tho they are pretty self-explanatory). This wrapper includes a...
%cp -av '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/mobilenet_v2_1.0_224' '/content/tcav/tcav/mobilenet_v2_1.0_224'
%rm '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/mobilenet_v2_1.0_224'
%cp -av '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/inception...
Step 3. Implement a class that returns activations (maybe with caching!) Lastly, you will implement a class of the ActivationGenerationInterface which TCAV uses to load example data for a given concept or target, call into your model wrapper and return activations. I pulled out this logic outside of mymodel because thi...
act_generator = act_gen.ImageActivationGenerator(mymodel, source_dir, activation_dir, max_examples=100)
You are ready to run TCAV! Let's do it. num_random_exp: number of experiments to confirm meaningful concept direction. TCAV will search for this many folders named random500_0, random500_1, etc. You can alternatively set the random_concepts keyword to be a list of folders of random concepts. Run at least 10-20 for mean...
import absl
absl.logging.set_verbosity(0)

num_random_exp = 10  # only 10 to save time; the paper's numbers are reported for 500 random runs
mytcav = tcav.TCAV(sess,
                   target,
                   concepts,
                   bottlenecks,
                   act_generator,
                   ...
So from the histograms above we can see that all these methods give us points on the unit sphere (the uniform method only approximately). But are they all uncorrelated? Let us see: as $N$ increases the matrix tends to $I$, showing that the coordinates are indeed drawn i.i.d.
np.matmul(normal.T, normal)
np.matmul(uniform.T, uniform)
np.matmul(spherical.T, spherical)
*Source: Peturn Normally to move Uniformly.ipynb (Aditya8795/Python-Scripts, mit)*
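The i.i.d. claim can also be checked numerically: for an N x d matrix X of i.i.d. standard normals, X.T @ X / N converges to the identity as N grows. A small self-contained check (regenerating a normal sample rather than reusing the notebook's arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
for N in (100, 10_000, 1_000_000):
    X = rng.normal(size=(N, d))
    gram = X.T @ X / N                   # sample second-moment matrix
    err = np.abs(gram - np.eye(d)).max()
    print(N, err)                        # error shrinks roughly like 1/sqrt(N)
```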
Measuring a BigQuery SQL query's size before actually executing it, with the bq_helper package:
query = """SELECT value
           FROM `bigquery-public-data.openaq.global_air_quality`
           WHERE value > 0"""
# Note: the marks around `bigquery-public-data.openaq.global_air_quality`
# must be backticks (`), not quotation marks.
open_aq.estimate_query_size(query)
SQL/SQLquerySizeCalculator.ipynb
StevenPeutz/myDataProjects
cc0-1.0
This means the SQL query above would scan 0.000124 TB when run.
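For intuition, the estimate converts to more familiar decimal units (taking the value printed above and reading it, as the notebook does, in terabytes):

```python
estimate_tb = 0.000124                 # value reported by estimate_query_size
estimate_gb = estimate_tb * 1_000      # 1 TB = 1,000 GB
estimate_mb = estimate_tb * 1_000_000  # 1 TB = 1,000,000 MB
print(f"{estimate_gb:.3f} GB = {estimate_mb:.0f} MB")  # 0.124 GB = 124 MB
```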
query2 = """SELECT value
            FROM `bigquery-public-data.openaq.global_air_quality`
            WHERE country = 'NL'"""
# Note: the marks around `bigquery-public-data.openaq.global_air_quality`
# must be backticks (`), not quotation marks.
open_aq.estimate_query_size(query2)
and this one would scan 0.000186 TB
# or in gigabytes (1 TB = 1,000 GB):
open_aq.estimate_query_size(query2) * 1000
Creating and Manipulating Transforms A number of different spatial transforms are available in SimpleITK. The simplest is the Identity Transform. This transform simply returns input points unaltered.
dimension = 2

print("*Identity Transform*")
identity = sitk.Transform(dimension, sitk.sitkIdentity)
print("Dimension: " + str(identity.GetDimension()))

# Points are always defined in physical space
point = (1.0, 1.0)

def transform_point(transform, point):
    transformed_point = transform.TransformPoint(point)
    ...
*Source: Python/21_Transforms_and_Resampling.ipynb (InsightSoftwareConsortium/SimpleITK-Notebooks, apache-2.0)*
Transforms are defined by two sets of parameters, the Parameters and FixedParameters. FixedParameters are not changed during the optimization process when performing registration. For the TranslationTransform, the Parameters are the values of the translation Offset.
print("*Translation Transform*")
translation = sitk.TranslationTransform(dimension)
print("Parameters: " + str(translation.GetParameters()))
print("Offset: " + str(translation.GetOffset()))
print("FixedParameters: " + str(translation.GetFixedParameters()))
transform_point(translation, point)
print("")
translation...
The affine transform is capable of representing translations, rotations, shearing, and scaling.
print("*Affine Transform*")
affine = sitk.AffineTransform(dimension)
print("Parameters: " + str(affine.GetParameters()))
print("FixedParameters: " + str(affine.GetFixedParameters()))
transform_point(affine, point)
print("")
affine.SetTranslation((3.1, 4.4))
print("Parameters: " + str(affine.GetParameters()))
transfor...
A number of other transforms exist to represent non-affine deformations, well-behaved rotation in 3D, etc. See the Transforms tutorial for more information. Applying Transforms to Images Create a function to display the images that is aware of image spacing.
def myshow(img, title=None, margin=0.05, dpi=80): nda = sitk.GetArrayViewFromImage(img) spacing = img.GetSpacing() ysize = nda.shape[0] xsize = nda.shape[1] figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi fig = plt.figure(title, figsize=figsize, dpi=dpi) ax = fig.add_axes...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
Create a grid image.
grid = sitk.GridSource( outputPixelType=sitk.sitkUInt16, size=(250, 250), sigma=(0.5, 0.5), gridSpacing=(5.0, 5.0), gridOffset=(0.0, 0.0), spacing=(0.2, 0.2), ) myshow(grid, "Grid Input")
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
To apply the transform, a resampling operation is required.
def resample(image, transform): # Output image Origin, Spacing, Size, Direction are taken from the reference # image in this call to Resample reference_image = image interpolator = sitk.sitkCosineWindowedSinc default_value = 100.0 return sitk.Resample(image, reference_image, transform, interpola...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
What happened? The translation is positive in both directions. Why does the output image move down and to the left? It is important to keep in mind that the transform used in a resampling operation maps points from the output space to the input space.
translation.SetOffset(-1 * np.array(translation.GetParameters())) transform_point(translation, point) resampled = resample(grid, translation) myshow(resampled, "Inverse Resampled")
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
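The inverse-mapping convention can be illustrated without SimpleITK. This is a minimal numpy sketch (nearest-neighbour sampling, integer offsets only) of how a resampler pulls each output pixel from the input at the transformed location:

```python
import numpy as np

def resample_nn(img, offset):
    """Toy resampler: the transform maps OUTPUT indices to INPUT indices,
    so out[p] = img[p + offset]."""
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            sy, sx = y + offset[0], x + offset[1]
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = img[sy, sx]
    return out

img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
# A positive offset pulls samples from further along the input,
# so the visible content shifts toward the origin.
shifted = resample_nn(img, (1, 1))
```

This is why negating the offset, as in the cell above, moves the image in the intuitively expected direction.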
An affine (line-preserving) transformation can perform translation:
def affine_translate(transform, x_translation=3.1, y_translation=4.6): new_transform = sitk.AffineTransform(transform) new_transform.SetTranslation((x_translation, y_translation)) resampled = resample(grid, new_transform) myshow(resampled, "Translated") return new_transform affine = sitk.AffineTra...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
or scaling:
def affine_scale(transform, x_scale=3.0, y_scale=0.7): new_transform = sitk.AffineTransform(transform) matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension)) matrix[0, 0] = x_scale matrix[1, 1] = y_scale new_transform.SetMatrix(matrix.ravel()) resampled = resample(grid, new_tra...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
or rotation:
def affine_rotate(transform, degrees=15.0): parameters = np.array(transform.GetParameters()) new_transform = sitk.AffineTransform(transform) matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension)) radians = -np.pi * degrees / 180.0 rotation = np.array( [[np.cos(radians), -np...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
or shearing:
def affine_shear(transform, x_shear=0.3, y_shear=0.1): new_transform = sitk.AffineTransform(transform) matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension)) matrix[0, 1] = -x_shear matrix[1, 0] = -y_shear new_transform.SetMatrix(matrix.ravel()) resampled = resample(grid, new_t...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
Composite Transform It is possible to compose multiple transforms into a single transform object. With a composite transform, only one resampling operation is needed, so interpolation errors do not accumulate. For example, an affine transformation that consists of a translation and rotation,
translate = (8.0, 16.0) rotate = 20.0 affine = sitk.AffineTransform(dimension) affine = affine_translate(affine, translate[0], translate[1]) affine = affine_rotate(affine, rotate) resampled = resample(grid, affine) myshow(resampled, "Single Transform")
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
can also be represented with two Transform objects applied in sequence with a Composite Transform,
composite = sitk.CompositeTransform(dimension) translation = sitk.TranslationTransform(dimension) translation.SetOffset(-1 * np.array(translate)) composite.AddTransform(translation) affine = sitk.AffineTransform(dimension) affine = affine_rotate(affine, rotate) composite.AddTransform(affine) composite = sitk.Comp...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
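The non-commutativity warned about below is just matrix multiplication order. A plain numpy sketch with hypothetical 2D homogeneous matrices (values loosely matching the translate/rotate used above):

```python
import numpy as np

theta = np.deg2rad(20.0)
# Rotation about the origin, as a 3x3 homogeneous matrix
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
# Translation by (8, 16)
T = np.array([[1.0, 0.0,  8.0],
              [0.0, 1.0, 16.0],
              [0.0, 0.0,  1.0]])

# Composing in the two possible orders yields different transforms
rotate_then_translate = T @ R
translate_then_rotate = R @ T
```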
Beware, transforms are non-commutative -- order matters!
composite = sitk.CompositeTransform(dimension) composite.AddTransform(affine) composite.AddTransform(translation) resampled = resample(grid, composite) myshow(resampled, "Composite transform in reverse order")
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
Resampling <img src="resampling.svg"/><br><br> Resampling, as the verb implies, is the action of sampling an image, which is itself a sampling of an original continuous signal. Generally speaking, resampling in SimpleITK involves four components: 1. Image - the image we resample, given in coordinate system $m$. 2. Resamp...
def resample_display(image, euler2d_transform, tx, ty, theta): euler2d_transform.SetTranslation((tx, ty)) euler2d_transform.SetAngle(theta) resampled_image = sitk.Resample(image, euler2d_transform) plt.imshow(sitk.GetArrayFromImage(resampled_image)) plt.axis("off") plt.show() logo = sitk.Read...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
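The four components interact through physical coordinates. As a rough 1-D sketch in plain numpy (identity transform assumed): each output sample's physical location is origin + index * spacing, that location is mapped into the input domain, converted to a continuous index, and interpolated:

```python
import numpy as np

signal_m = np.arange(8, dtype=float)  # input samples at physical x = 0..7
origin_f, spacing_f = 0.0, 2.0        # output (resampling) grid geometry
origin_m, spacing_m = 0.0, 1.0        # input image geometry

out = []
for i in range(4):
    x_f = origin_f + i * spacing_f        # physical location of output sample i
    x_m = x_f                             # identity transform T_f^m
    idx = (x_m - origin_m) / spacing_m    # continuous index into the input
    out.append(float(np.interp(idx, np.arange(8), signal_m)))
```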
Common Errors It is not uncommon to end up with an empty (all black) image after resampling. This is due to: 1. Using wrong settings for the resampling grid, not too common, but does happen. 2. Using the inverse of the transformation $T_f^m$. This is a relatively common error, which is readily addressed by invoking the...
euler2d = sitk.Euler2DTransform() # Why do we set the center? euler2d.SetCenter( logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0) ) tx = 64 ty = 32 euler2d.SetTranslation((tx, ty)) extreme_points = [ logo.TransformIndexToPhysicalPoint((0, 0)), logo.TransformIndexToPhysicalPoint...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
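One reason `SetCenter` matters in the cell above: rotation parameters act about the transform's center, and rotating about the default origin can sweep the image content far outside the reference grid, producing the "all black" result described. A small numpy sketch of the geometry (not SimpleITK's API), rotating a corner point of a hypothetical 100x100 image by 90 degrees:

```python
import numpy as np

def rotate_point(p, theta, center=(0.0, 0.0)):
    # Rotate p about `center` by angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    x, y = p[0] - center[0], p[1] - center[1]
    return (c * x - s * y + center[0], s * x + c * y + center[1])

corner = (100.0, 100.0)
# About the origin, the corner lands at negative coordinates,
# outside the original [0, 100] x [0, 100] extent...
off_center = rotate_point(corner, np.pi / 2)
# ...about the image center, it stays within the original extent.
centered = rotate_point(corner, np.pi / 2, center=(50.0, 50.0))
```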
Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (euler2d.SetAngle(0.79)). Resampling at a set of locations In some cases you may be interested in obtaining the intensity values at a set of points (e.g. coloring the vertices of a mesh model seg...
img = logo # Generate random samples inside the image, we will obtain the intensity/color values at these points. num_samples = 10 physical_points = [] for pnt in zip(*[list(np.random.random(num_samples) * sz) for sz in img.GetSize()]): physical_points.append(img.TransformContinuousIndexToPhysicalPoint(pnt)) # Cr...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
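The 1xN-image trick above ultimately just evaluates the image intensity at each physical point by interpolation. A plain numpy sketch of the same idea in 1-D (unit spacing, origin at zero assumed):

```python
import numpy as np

# A 1-D "image" with unit spacing and origin 0
signal = np.array([10.0, 20.0, 30.0, 40.0])
points = [0.5, 1.25, 2.75]  # arbitrary physical locations
values = [float(np.interp(p, np.arange(4), signal)) for p in points]
```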
<font color="red">Homework:</font> creating a color mesh You will now use the code for resampling at arbitrary locations to create a colored mesh. Using the color image of the visible human head [img = sitk.ReadImage(fdata('vm_head_rgb.mha'))]: 1. Implement the marching cubes algorithm to obtain the set of triangles co...
file_names = ["cxr.dcm", "photo.dcm", "POPI/meta/00-P.mhd", "training_001_ct.mha"] images = [] image_file_reader = sitk.ImageFileReader() for fname in file_names: image_file_reader.SetFileName(fdata(fname)) image_file_reader.ReadImageInformation() image_size = list(image_file_reader.GetSize()) # 2D imag...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
<font color="red">Homework:</font> Why do some of the images displayed above look different from others? What are the differences between the various images in the images list? Write code to query them and check their intensity ranges, sizes and spacings. The next cell illustrates how to resize all images to an arbitr...
def resize_and_scale_uint8(image, new_size, outside_pixel_value=0): """ Resize the given image to the given size, with isotropic pixel spacing and scale the intensities to [0,255]. Resizing retains the original aspect ratio, with the original image centered in the new image. Padding is added outsid...
Python/21_Transforms_and_Resampling.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0
1. Adiabatic batch reactor Under adiabatic conditions, no heat is supplied to or removed from the reactor, so the temperature within the reactor is free to change. We try to understand the detonation process via the mole fractions of the reactants, products, and intermediates. Diagnostic data Files First, let's look at the diagn...
ls adiab/*.out def read_file(fname): with open(fname) as fp: lines = fp.readlines() for line in lines: print(line) read_file("adiab/kf.out")
docs/source/WorkshopJupyterNotebooks/OpenMKM_demo/batch/batch.ipynb
VlachosGroup/VlachosGroupAdditivity
mit
Data Files The files are given as _ss.csv and _tr.csv or _ss.dat and _tr.dat depending on the output format selected. _tr indicates transient output, and _ss indicates steady state. gas_mass_, gas_mole_: Lists the mass fraction and mole fractions of gas phase species respectively gas_msdot_: Production rate of the ga...
ls adiab/*.csv df = pd.read_csv(os.path.join('adiab', 'gas_mole_tr.csv')) df.columns = df.columns.str.strip() df["t_ms"] = df["t(s)"]*1e3 plt.clf() ax1 = plt.subplot(1, 1, 1) ax1.plot('t_ms', 'H', data=df, marker='^', markersize=0.5, label="H mole frac") ax1.plot('t_ms', 'OH', data=df, marker='v', markersize=0.5, ...
docs/source/WorkshopJupyterNotebooks/OpenMKM_demo/batch/batch.ipynb
VlachosGroup/VlachosGroupAdditivity
mit
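The column-stripping step matters because the CSV headers are written with padding spaces. A minimal pandas sketch on hypothetical data shaped like the OpenMKM output:

```python
import pandas as pd
from io import StringIO

# Headers padded with spaces, as in the output files
csv_text = "t(s), H, OH\n0.0,0.1,0.2\n0.001,0.3,0.4\n"
df = pd.read_csv(StringIO(csv_text))
df.columns = df.columns.str.strip()   # ' H' -> 'H'
df["t_ms"] = df["t(s)"] * 1e3         # seconds -> milliseconds
```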
Reactor State Comparison How do the reactor temperature and pressure evolve for the two different operating conditions?
adiab_state_df = pd.read_csv(os.path.join('adiab','rctr_state_tr.csv')) isotherm_state_df = pd.read_csv(os.path.join('isother','rctr_state_tr.csv')) adiab_state_df.columns = adiab_state_df.columns.str.strip() isotherm_state_df.columns = isotherm_state_df.columns.str.strip() isotherm_state_df["t_ms"] = isotherm_state_d...
docs/source/WorkshopJupyterNotebooks/OpenMKM_demo/batch/batch.ipynb
VlachosGroup/VlachosGroupAdditivity
mit
Initial conditions for $N$-body simulations to create the impact we want Set up the potential and coordinate system
lp= LogarithmicHaloPotential(normalize=1.,q=0.9) R0, V0= 8., 220.
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Functions for converting coordinates from rectangular to cylindrical:
def rectangular_to_cylindrical(xv): R,phi,Z= bovy_coords.rect_to_cyl(xv[:,0],xv[:,1],xv[:,2]) vR,vT,vZ= bovy_coords.rect_to_cyl_vec(xv[:,3],xv[:,4],xv[:,5],R,phi,Z,cyl=True) out= numpy.empty_like(xv) # Preferred galpy arrangement of cylindrical coordinates out[:,0]= R out[:,1]= vR out[:,2]= ...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
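The projection behind `rect_to_cyl_vec` can be written out directly. A numpy sketch for scalar inputs, with galpy's (R, vR, vT, z, vz, phi) ordering assumed:

```python
import numpy as np

def rect_to_cyl(x, y, z, vx, vy, vz):
    R = np.hypot(x, y)
    phi = np.arctan2(y, x)
    # Project the Cartesian velocity onto the radial and tangential unit vectors
    vR = (x * vx + y * vy) / R
    vT = (x * vy - y * vx) / R
    return R, vR, vT, z, vz, phi

R, vR, vT, z, vz, phi = rect_to_cyl(3.0, 4.0, 1.0, 0.0, 5.0, 0.0)
```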
At the time of impact, the phase-space coordinates of the GC can be computed using orbit integration:
xv_prog_init= numpy.array([30.,0.,0.,0.,105.74895,105.74895]) RvR_prog_init= rectangular_to_cylindrical(xv_prog_init[:,numpy.newaxis].T)[0,:] prog_init= Orbit([RvR_prog_init[0]/R0,RvR_prog_init[1]/V0,RvR_prog_init[2]/V0, RvR_prog_init[3]/R0,RvR_prog_init[4]/V0,RvR_prog_init[5]],ro=R0,vo=V0) times= num...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
The DM halo at the time of impact is at the following location:
xv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,6.82200571,132.7700529,149.4174464]) RvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:] dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_im...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
The orbits over the past 10 Gyr for both objects are:
prog_init.plot() dm_impact.plot(overplot=True) plot(RvR_dm_impact[0],RvR_dm_impact[3],'ro') xlim(0.,35.) ylim(-20.,20.)
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Initial condition for the King cluster We start the King cluster at 10.25 WD time units, which corresponds to 10.25 × 0.9777922212082034 Gyr. The phase-space coordinates of the cluster are then:
prog_backward= prog_init.flip() ts= numpy.linspace(0.,(10.25*0.9777922212082034-10.)/bovy_conversion.time_in_Gyr(V0,R0),1001) prog_backward.integrate(ts,lp) print [prog_backward.x(ts[-1]),prog_backward.y(ts[-1]),prog_backward.z(ts[-1]), -prog_backward.vx(ts[-1]),-prog_backward.vy(ts[-1]),-prog_backward.vz(ts[-1]...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
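Spelling out the time-unit arithmetic used above: the time unit is 0.9777922212082034 Gyr, so 10.25 such units is slightly more than 10 Gyr, and the backward integration covers only the excess beyond the 10 Gyr baseline:

```python
wd_unit_gyr = 0.9777922212082034
t_start_gyr = 10.25 * wd_unit_gyr    # cluster start time in the past
extra_gyr = t_start_gyr - 10.0       # span of the backward integration above
```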
Initial conditions for the Plummer DM subhalo Starting 0.125 time units ago
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0) dm_impact= dm_impact.flip() ts= numpy.linspace(0.,0.125*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001) dm_impact.integrate(ts,lp) print [d...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Starting 0.25 time units ago
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0) dm_impact= dm_impact.flip() ts= numpy.linspace(0.,0.25*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001) dm_impact.integrate(ts,lp) print [dm...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Starting 0.375 time units ago
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0) dm_impact= dm_impact.flip() ts= numpy.linspace(0.,0.375*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001) dm_impact.integrate(ts,lp) print [d...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Starting 0.50 time units ago
dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0) dm_impact= dm_impact.flip() ts= numpy.linspace(0.,0.50*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001) dm_impact.integrate(ts,lp) print [dm...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Initial conditions for the Plummer DM subhalo with $\lambda$ scaled interaction velocities To test the impulse approximation, we want to simulate interactions where the relative velocity ${\bf w}$ is changed by a factor of $\lambda$: ${\bf w} \rightarrow \lambda {\bf w}$. We start by computing the relative velocity for...
v_gc= numpy.array([xv_prog_impact[3],xv_prog_impact[4],xv_prog_impact[5]]) v_dm= numpy.array([6.82200571,132.7700529,149.4174464]) w_base= v_dm-v_gc def v_dm_scaled(lam): return w_base*lam+v_gc
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
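The scaling leaves the progenitor's motion untouched and rescales only the relative velocity, so λ = 1 recovers the original subhalo velocity and λ = 0 makes the subhalo comove with the progenitor. A self-contained check (the progenitor velocity here is a hypothetical stand-in for `xv_prog_impact`):

```python
import numpy as np

v_gc = np.array([0.0, 100.0, 0.0])  # hypothetical progenitor velocity at impact
v_dm = np.array([6.82200571, 132.7700529, 149.4174464])
w_base = v_dm - v_gc

def v_dm_scaled(lam):
    # Scale only the relative velocity w; the progenitor frame is fixed
    return w_base * lam + v_gc
```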
Starting 0.25 time units ago, scaled down by 0.5
lam= 0.5 xv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,v_dm_scaled(lam)[0],v_dm_scaled(lam)[1],v_dm_scaled(lam)[2]]) RvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:] dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0, RvR_dm_impact[3...
py/Orbits-for-Nbody.ipynb
jobovy/stream-stream
bsd-3-clause
Linear classifier on sensor data with plot patterns and filters Here decoding, a.k.a. MVPA or supervised machine learning, is applied to M/EEG data in sensor space. Fit a linear classifier with the LinearModel object, providing topographical patterns which are more neurophysiologically interpretable [1]_ than the classif...
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Romain Trachel <trachelr@gmail.com> # Jean-Remi King <jeanremi.king@gmail.com> # # License: BSD (3-clause) import mne from mne import io, EvokedArray from mne.datasets import sample from mne.decoding import Vectorizer, get_coef...
0.18/_downloads/d1b18c3376911723f0257fe5003a8477/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
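The filter-versus-pattern distinction can be sketched without MNE. In this hypothetical two-sensor example, the backward-model filter suppresses the noisy sensor, while the Haufe-style pattern (data covariance times filter) correctly shows the signal present on both sensors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
signal = rng.standard_normal(n)
noise = rng.standard_normal(n)
# Sensor 0 sees the signal cleanly; sensor 1 sees it corrupted by noise.
X = np.column_stack([signal, signal + noise])

# Backward model: least-squares filter extracting the signal from X
w = np.linalg.pinv(X) @ signal
# Forward model (Haufe et al. 2014): pattern = data covariance times filter
pattern = np.cov(X.T) @ w
```

The filter all but ignores sensor 1 even though the signal is physically present there; the pattern recovers that fact, which is why patterns are the interpretable quantity.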
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.1, 0.4 event_id = dict(aud_l=1, vis_l=3) # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(.5, 25, fir_design='firwi...
0.18/_downloads/d1b18c3376911723f0257fe5003a8477/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause