4. Create Pilot and Unit Managers
```python
print "Initializing Pilot Manager ..."
pmgr = rp.PilotManager(session=session)

print "Initializing Unit Manager ..."
umgr = rp.UnitManager(session=session, scheduler=rp.SCHED_DIRECT_SUBMISSION)
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
5. Submit the pilot to the Pilot and Unit Managers
```python
pdesc = rp.ComputePilotDescription()
pdesc.resource = "yarn.stampede"  # NOTE: This is a "label", not a hostname
pdesc.runtime  = 60               # minutes
pdesc.cores    = 16
pdesc.cleanup  = False
pdesc.project  = ''               # Include the allocation here
pdesc.queue    = 'development'    # You can select a different queue if you want.

# submit the pilot.
print "Submitting Compute Pilot to Pilot Manager ..."
pilot = pmgr.submit_pilots(pdesc)

print "Registering Compute Pilot with Unit Manager ..."
umgr.add_pilots(pilot)
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
6. Submit Compute Units. First, create the descriptions of the compute units, which define the tasks to be executed.
```python
NUMBER_JOBS = 16
cudesc_list = []
for i in range(NUMBER_JOBS):
    cudesc = rp.ComputeUnitDescription()
    cudesc.environment = {'CU_NO': i}
    cudesc.executable  = "/bin/echo"
    cudesc.arguments   = ['I am CU number $CU_NO']
    cudesc.cores       = 1
    cudesc_list.append(cudesc)
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Submit the created Compute Units to the Unit Manager.
```python
print "Submit Compute Units to Unit Manager ..."
cu_set = umgr.submit_units(cudesc_list)

print "Waiting for CUs to complete ..."
umgr.wait_units()
print "All CUs completed successfully!"
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Printing the output of a Compute Unit
```python
for unit in cu_set:
    print "* CU %s, state %s, exit code: %s, stdout: %s" \
        % (unit.uid, unit.state, unit.exit_code, unit.stdout)
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
7. Always clean up the session
```python
session.close()
del session
```
02_pilot/Radical_Pilot_YARN_Stampede.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Configure Solow Model

In this notebook, you can set the parameters of the Solow model. It is one of three notebooks that can be used to work with the Black Rhino Solow model:
1. Configure_Solow notebook
2. Run_Solow notebook
3. Analyse_Solow notebook

The current notebook is highlighted. In the Black Rhino framework, parameters are stored in XML files; using this notebook, you can change them. Below you will find the parameter inputs for this model.
```python
parameter_values = (
    ('num_sweeps', '30'),
    ('num_simulations', '1'),
    ('num_banks', '1'),
    ('num_firms', '1'),
    ('num_households', '1'),
    ('bank_directory', 'agents/banks/'),
    ('firm_directory', 'agents/firms/'),
    ('household_directory', 'agents/households'),
    ('measurement_config', 'measurements/test_output.xml')
)
```
examples/solow/Configure_Solow.ipynb
cogeorg/black_rhino
gpl-3.0
Next, parameter attributes for type, name and value are added to the XML elements.
```python
for idx, p in enumerate(parameters):
    p.set('type', 'static')
    p.set('name', parameter_values[idx][0])
    p.set('value', parameter_values[idx][1])

xml_params = ET.tostring(environment, encoding="unicode")
myfile = open("environments/solow_parameters.xml", "w")
myfile.write(xml_params)
myfile.close()
```
examples/solow/Configure_Solow.ipynb
cogeorg/black_rhino
gpl-3.0
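The cell above assumes that `environment` (the XML root element) and `parameters` (a list of parameter sub-elements) were created earlier in the notebook. A self-contained sketch of the same ElementTree pattern, with a shortened, hypothetical parameter list:

```python
import xml.etree.ElementTree as ET

# Hypothetical root and parameter elements mirroring the notebook's structure
environment = ET.Element('environment')
parameter_values = (('num_sweeps', '30'), ('num_banks', '1'))
parameters = [ET.SubElement(environment, 'parameter') for _ in parameter_values]

# Attach type, name, and value attributes to each parameter element
for idx, p in enumerate(parameters):
    p.set('type', 'static')
    p.set('name', parameter_values[idx][0])
    p.set('value', parameter_values[idx][1])

xml_params = ET.tostring(environment, encoding="unicode")
print(xml_params)
```

The resulting string can then be written to an XML file exactly as the notebook does.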
Our feed-forward neural network will look very similar to our softmax classifier; however, now we have multiple layers and non-linear activations over the logits! This network has 3 layers: 2 hidden layers and 1 output layer. The output layer is the softmax classifier that we implemented in the previous example. As in the previous example, let's first define our hyperparameters, which now include the layer sizes. Layer sizes are usually chosen as powers of 2 for computational reasons.
```python
# Hyperparameters (these are similar to the ones used in the previous example)
learning_rate = 0.5
training_epochs = 5
batch_size = 100

# Additional hyperparameters for our neural net - layer sizes
layer_1_size = 256
layer_2_size = 128
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
Step 1: Create placeholders to hold the images.
```python
# Create placeholders
x = tf.placeholder(tf.float32, shape=(None, 784))
y = tf.placeholder(tf.float32, shape=(None, 10))
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
Step 2: Create variables to hold the weight matrices and the bias vectors for all the layers. Note that the weights are now initialized with small random numbers.

From Karpathy: "A reasonable-sounding idea then might be to set all the initial weights to zero, which we expect to be the “best guess” in expectation. This turns out to be a mistake, because if every neuron in the network computes the same output, then they will also all compute the same gradients during backpropagation and undergo the exact same parameter updates. In other words, there is no source of asymmetry between neurons if their weights are initialized to be the same." "the implementation for one weight matrix might look like W = 0.01* np.random.randn(), where randn samples from a zero mean, unit standard deviation gaussian."

Suggested rule of thumb: "Initialize the weights by drawing them from a gaussian distribution with standard deviation of sqrt(2/n), where n is the number of inputs to the neuron. E.g. in numpy: W = np.random.randn(n) * sqrt(2.0/n)."
```python
# Model parameters that have to be learned
# Note that the weights & biases are now initialized to small random numbers
# Also note that the number of columns should be the size of the first layer!
W_h1 = tf.Variable(0.01 * tf.random_normal([784, layer_1_size]))
b_h1 = tf.Variable(tf.random_normal([layer_1_size]))

# Layer 2
# The input dimension is not 784 anymore but the size of the first layer;
# the number of columns is the size of the second layer
W_h2 = tf.Variable(0.01 * tf.random_normal([layer_1_size, layer_2_size]))
b_h2 = tf.Variable(tf.random_normal([layer_2_size]))

# Output layer - layer 3
# This is the softmax layer that we implemented earlier
# The input dimension is now the size of the 2nd layer and the number of columns = number of classes
W_o = tf.Variable(0.01 * tf.random_normal([layer_2_size, 10]))
b_o = tf.Variable(tf.random_normal([10]))
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
Step 3: Let's build the flow of data. Each unit in each layer computes its logits (the linear function W * X + b), then applies an activation function to the weighted sum and passes the result on as input to the next layer.

Step 4: Compute the loss function as the cross entropy between the predicted distribution of the labels from the output layer and their true distribution. Note that we will now simply use TensorFlow's softmax cross-entropy function.
```python
# Get the weighted sum for the first layer
preact_h1 = tf.matmul(x, W_h1) + b_h1
# Compute the activations which form the output of this layer
out_h1 = tf.sigmoid(preact_h1)
# out_h1 = tf.nn.relu(preact_h1)

# Get the weighted sum for the second layer
# Note that the input is now the output from the previous layer
preact_h2 = tf.matmul(out_h1, W_h2) + b_h2
# Compute the activations which form the output of this layer
out_h2 = tf.sigmoid(preact_h2)
# out_h2 = tf.nn.relu(preact_h2)

# Get the logits for the softmax output layer
logits_o = tf.matmul(out_h2, W_o) + b_o

# The final layer doesn't have activations; simply compute the cross entropy loss
cross_entropy_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits_o))
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
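What `tf.nn.softmax_cross_entropy_with_logits` computes can be mimicked in plain NumPy. This is a numerically-stable toy sketch (not TensorFlow's actual implementation), assuming one-hot labels:

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Shift logits by their row max for numerical stability before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Cross entropy: -sum over classes of p_true * log p_pred, per example
    return -(labels * log_softmax).sum(axis=1)

labels = np.array([[0., 1.], [1., 0.]])
logits = np.array([[0., 0.], [5., -5.]])
losses = softmax_cross_entropy(labels, logits)
print(losses)  # first is log(2) (uninformative logits), second is near 0 (confident & correct)
```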
Step 5: Let's create an optimizer to minimize the cross entropy loss.
```python
# Create an optimizer with the learning rate
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# optimizer = tf.train.AdamOptimizer(learning_rate)

# Use the optimizer to minimize the loss
train_step = optimizer.minimize(cross_entropy_loss)
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
Step 6: Let's compute the accuracy.
```python
# First create the correct predictions by taking the maximum value from the predicted classes
# and checking it against the actual class. The result is a boolean column vector.
correct_predictions = tf.equal(tf.argmax(logits_o, 1), tf.argmax(y, 1))

# Calculate the accuracy over all the images:
# cast the boolean vector into floats (1s & 0s) and then compute the average.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
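The same argmax-and-average logic can be sketched in plain Python, with hypothetical toy logits and one-hot labels:

```python
def accuracy(logits, labels):
    """Fraction of rows where the argmax of the logits matches the argmax of the one-hot labels."""
    correct = sum(
        1 for lo, la in zip(logits, labels)
        if lo.index(max(lo)) == la.index(max(la))
    )
    return correct / len(logits)

logits = [[2.0, 0.5, 0.1], [0.1, 0.2, 3.0], [1.0, 2.0, 0.5]]
labels = [[1, 0, 0], [0, 1, 0], [0, 1, 0]]
print(accuracy(logits, labels))  # 2 of 3 predictions correct -> 0.666...
```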
Now let's run our graph as usual.
```python
# Initialize global variables
init = tf.global_variables_initializer()

# Create a saver to save our model
saver = tf.train.Saver()

# Create a session to run the graph
with tf.Session() as sess:
    # Run initialization
    sess.run(init)

    # For the set number of epochs
    for epoch in range(training_epochs):
        # Compute the total number of batches
        num_batches = int(mnist.train.num_examples / batch_size)
        # Iterate over all the examples (1 epoch)
        for batch in range(num_batches):
            # Get a batch of examples
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Now run the session
            curr_loss, curr_accuracy, _ = sess.run(
                [cross_entropy_loss, accuracy, train_step],
                feed_dict={x: batch_xs, y: batch_ys})

            if batch % 50 == 0:
                display.clear_output(wait=True)
                time.sleep(0.05)
                # Print the loss
                print("Epoch: %d/%d. Batch: %d/%d. Current loss: %.5f. Train Accuracy: %.2f"
                      % (epoch, training_epochs, batch, num_batches, curr_loss, curr_accuracy))

    # Run the session to compute the test accuracy and print it
    test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
    print("Test Accuracy: %.2f" % test_accuracy)

    # Save the entire session
    saver.save(sess, '../models/ff_nn.model')

# Load the model back and test its accuracy
with tf.Session() as sess:
    saver.restore(sess, '../models/ff_nn.model')
    test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
    print("Test Accuracy: %.2f" % test_accuracy)
```
notebooks/Feed Forward Neural Network.ipynb
abhay1/tf_rundown
mit
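The epoch and batch bookkeeping in the training loop is independent of TensorFlow. A plain-Python sketch with a hypothetical dataset size, using a simplified stand-in for `mnist.train.next_batch` (no shuffling or cycling):

```python
def minibatches(examples, batch_size):
    """Yield successive fixed-size slices of the examples."""
    for start in range(0, len(examples), batch_size):
        yield examples[start:start + batch_size]

examples = list(range(550))  # stand-in for 550 training examples (hypothetical)
batch_size = 100
num_batches = len(examples) // batch_size  # int(num_examples / batch_size) -> 5
batches = list(minibatches(examples, batch_size))
# Note: the integer division drops the final partial batch of 50 examples,
# which is what the training loop's int(.../batch_size) does as well.
print(num_batches, len(batches))
```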
2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
```python
# GRADED FUNCTION: initialize_parameters_zeros

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    parameters = {}
    L = len(layers_dims)  # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_zeros([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
deeplearning.ai/coursera-improving-neural-networks/week1/assignment1/Initialization.ipynb
ud3sh/coursework
unlicense
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

<font color='blue'>
What you should remember:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
```python
# GRADED FUNCTION: initialize_parameters_random

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)  # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
deeplearning.ai/coursera-improving-neural-networks/week1/assignment1/Initialization.ipynb
ud3sh/coursework
unlicense
Observations:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets such an example wrong it incurs a very high loss. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

<font color='blue'>
In summary:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization

Finally, try "He initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar, except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)

Exercise: Implement the following function to initialize your parameters with He initialization.

Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
```python
# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
deeplearning.ai/coursera-improving-neural-networks/week1/assignment1/Initialization.ipynb
ud3sh/coursework
unlicense
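As a sanity check on the He scaling, one can draw weights the same way and confirm that their empirical standard deviation is close to $\sqrt{2/n}$. A sketch using only the standard library; the fan-in and sample size are arbitrary illustrative choices:

```python
import math
import random
import statistics

random.seed(0)
n_inputs = 4  # fan-in of a hypothetical layer
target_std = math.sqrt(2.0 / n_inputs)  # He scaling, ~0.707 here

# Draw 100,000 weights as N(0, 1) samples scaled by sqrt(2/n)
weights = [random.gauss(0, 1) * target_std for _ in range(100_000)]

empirical_std = statistics.pstdev(weights)
print(empirical_std, target_std)  # the two values should agree closely
```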
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above.
2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"]).
3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...).
4. Test the model on test data by calling model.evaluate(x = ..., y = ...).

If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.

Exercise: Implement step 1, i.e. create the model.
```python
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train[0,:,:,:].shape)
### END CODE HERE ###
```
deeplearning.ai/C4.CNN/week2_DeepModelCaseStudy/hw/KerasTutorial/Keras+-+Tutorial+-+Happy+House+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
```python
### START CODE HERE ### (1 line)
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=20)
### END CODE HERE ###
```
deeplearning.ai/C4.CNN/week2_DeepModelCaseStudy/hw/KerasTutorial/Keras+-+Tutorial+-+Happy+House+v1.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
We need about 16.4 seconds of data, after we scale the system to (36+29=) $65\, M_{\odot}$. In terms of $M$ as we know it, that's about...
```python
16.4 / ((36. + 29.) * m_sun)
```
GW150914/HybridizeNR.ipynb
moble/MatchedFiltering
mit
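Here `m_sun` is presumably the mass of the Sun expressed in seconds of time (geometric units, $G M_\odot / c^3$); that is an assumption about the notebook's variable of the same name. A self-contained sketch of the conversion:

```python
# Solar mass in seconds of time: G * M_sun / c^3 (geometric units).
# Constants below are standard reference values.
G = 6.6743e-11          # m^3 kg^-1 s^-2
c = 299_792_458.0       # m/s
M_sun_kg = 1.98892e30   # kg

m_sun = G * M_sun_kg / c**3          # ~4.93e-6 seconds
total_mass_in_seconds = (36. + 29.) * m_sun
duration_in_M = 16.4 / total_mass_in_seconds
print(m_sun, duration_in_M)          # roughly 4.9e-6 s, and ~5e4 M of data
```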
Now read the NR waveform and offset so that the "relaxed" measurement time is $0$.
```python
nr = GWFrames.ReadFromNRAR(data_dir + 'rhOverM_Asymptotic_GeometricUnits_CoM.h5/Extrapolated_N4.dir')
nr.SetT(nr.T() - metadata.relaxed_measurement_time);

approximant = 'TaylorT4'           # 'TaylorT1'|'TaylorT4'|'TaylorT5'
delta = (m1 - m2) / (m1 + m2)      # Normalized BH mass difference (M1-M2)/(M1+M2)
chi1_i = chi1                      # Initial dimensionless spin vector of BH1
chi2_i = chi2                      # Initial dimensionless spin vector of BH2
Omega_orb_i = Omega_orb_i          # Initial orbital angular frequency
Omega_orb_0 = Omega_orb_i / 3.25   # Earliest orbital angular frequency to compute (default: Omega_orb_i)
# R_frame_i: Initial rotation of the binary (default: no rotation)
# MinStepsPerOrbit: Minimum number of time steps at which to evaluate (default: 32)
# PNWaveformModeOrder: PN order at which to compute waveform modes (default: 3.5)
# PNOrbitalEvolutionOrder: PN order at which to compute orbital evolution (default: 4.0)
pn = GWFrames.PNWaveform(approximant, delta, chi1_i, chi2_i, Omega_orb_i, Omega_orb_0)

plt.close()
plt.semilogy(pn.T(), np.abs(pn.Data(0)))
plt.semilogy(nr.T(), np.abs(nr.Data(0)))

! /Users/boyle/.continuum/anaconda/envs/gwframes/bin/python ~/Research/Code/misc/GWFrames/Code/Scripts/HybridizeOneWaveform.py {data_dir} \
    --Waveform=rhOverM_Asymptotic_GeometricUnits_CoM.h5/Extrapolated_N4.dir --t1={metadata.relaxed_measurement_time} --t2=2000.0 \
    --InitialOmega_orb={Omega_orb_0} --Approximant=TaylorT4 --DirectAlignmentEvaluations 100

h = hybrid.EvaluateAtPoint(0.0, 0.0)[:-1]
hybrid = scri.SpEC.read_from_h5(data_dir + 'rhOverM_Inertial_Hybrid.h5')
hybrid = hybrid[:-1]
h = hybrid.SI_units(current_unit_mass_in_solar_masses=36.+29.,
                    distance_from_source_in_megaparsecs=410)
t_merger = 16.429
h.max_norm_time()
h.t = h.t - h.max_norm_time() + t_merger

plt.close()
plt.semilogy(h.t, np.abs(h.data[:, 0]))

sampling_rate = 4096.   # Hz
dt = 1 / sampling_rate  # sec
t = np.linspace(0, 32, num=int(32*sampling_rate))
h_discrete = h.interpolate(t)
h_discrete.data[np.argmax(t > 16.4739):, :] = 1e-40j

from utilities import transition_function
h_trimmed = h_discrete.copy()
h_trimmed.data = (1 - transition_function(h_discrete.t, 16.445, 16.4737))[:, np.newaxis] * h_discrete.data

plt.close()
plt.semilogy(h_discrete.t, np.abs(h_discrete.data[:, 0]))
plt.semilogy(h_trimmed.t, np.abs(h_trimmed.data[:, 0]))

import quaternion
import spherical_functions as sf
sYlm = sf.SWSH(quaternion.one, h_discrete.spin_weight, h_discrete.LM)
(sYlm * h_trimmed.data).shape
h_data = np.tensordot(sYlm, h_trimmed.data, axes=([0, 1]))
np.savetxt('../Data/NR_GW150914.txt', np.vstack((h_data.real, h_data.imag)).T)
! head -n 1 ../Data/NR_GW150914.txt
```
GW150914/HybridizeNR.ipynb
moble/MatchedFiltering
mit
NOTE: You must have tweepy installed on the machine to run this script. You can install it by running the cell below (change the cell type in the toolbar above to Code instead of Raw NBConvert). You may need to use "! sudo pip install tweepy".
```shell
! sudo pip install tweepy
```
.ipynb_checkpoints/getTweetsByID-checkpoint.ipynb
renecnielsen/twitter-diy
mit
Setup Twitter Access
```python
# Twitter OAuth credentials
consumer_key = ""
consumer_secret = ""
access_token = ""
access_secret = ""
```
.ipynb_checkpoints/getTweetsByID-checkpoint.ipynb
renecnielsen/twitter-diy
mit
Analysis of the fanfiction readers and writers To begin, we will examine the userbase of fanfiction.net. Who are they? Where are they from? How active are they? To do so, we will take a random sample of ~10,000 users from the site and break down some of their characteristics.
```python
# opens raw data
with open('../data/clean_data/df_profile', 'rb') as fp:
    df = pickle.load(fp)

# creates subset of data of active users
df_active = df.loc[df.status != 'inactive', ].copy()

# sets current year
cyear = datetime.datetime.now().year

# sets stop word list for text parsing
stop_word_list = stopwords.words('english')
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
Account status and volume

Let's begin by examining the types of profiles that make up the userbase: readers, authors, or inactive users. Inactive users are accounts that no longer exist.
```python
# examines status of users
status = df['status'].value_counts()

# plots chart
(status/np.sum(status)).plot.bar()
plt.xticks(rotation=0)
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
About 20% of users on the site are authors! That's much higher than expected. The number of inactive profiles is also notably negligible, meaning that once a profile has been created, it is very unlikely to ever be deleted or pulled off the site. What about how fast people are joining the site?
```python
# examines when users joined the site
entry = [int(row[2]) for row in df_active['join'] if row != 'NA']
entry = pd.Series(entry).value_counts().sort_index()

# plots chart
(entry/np.sum(entry)).plot()
plt.xlim([np.min(entry.index.values), cyear-1])
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
Fanfiction has been on an upward trend ever since its establishment. Note that sharp increase in 2011! In a later chapter, we will examine what could have caused that surge...

Countries

The site fanfiction.net allows tracking of location for users. Let's examine where these users are from.
```python
# examines distribution of top countries
country = df['country'].value_counts()

# plots chart
(country[1:10]/np.sum(country)).plot.bar()
plt.xticks(rotation=90)
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
We looked at the top 10 countries. Over ~30% are from the United States!

Profile descriptions

Users have the option of including a profile description, typically to introduce themselves to the community. Let's see what percentage of the active userbase has written something.
```python
# counts number with written profiles
hasprofile = [row != '' for row in df_active['profile']]
profiletype = pd.Series(hasprofile).value_counts()

# plots chart
profiletype.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
It would appear about one-fourth of the users have written something in their profile! For those who have, let's see how many words they have written.
```python
# examines word count of profiles
profile_wc = [len(row.split()) for row in df_active['profile']]

# plots chart
pd.Series(profile_wc).plot.hist(normed=True, bins=np.arange(1, 500, 1))
plt.show()

# IMPORTANT NOTE: the 'request' package has an error in which it cannot find the end </p> tag,
# leading to description duplicates in some profiles. Until this error is addressed, an arbitrary
# cutoff is used.
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
The right tail is much longer than depicted, with ~3% of profiles exceeding our 500-word cutoff. However, it would appear the majority of profiles fall well under 100-200 words. What do these users typically say? Let's see the top 10 most commonly used words.
```python
# extracts most commonly used words (each word counted at most once per profile)
profile_wf = [set(row.lower().translate(str.maketrans('', '', string.punctuation)).split())
              for row in df_active.loc[hasprofile, 'profile']]
profile_wf = [item for sublist in profile_wf for item in sublist]
profile_wf = pd.Series(profile_wf).value_counts()

# prints table
stop_word_list.append('im')
stop_word_list.append('dont')
print((profile_wf.loc[[row not in stop_word_list for row in profile_wf.index.values]][:10]
       / sum(hasprofile)).to_string())
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
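The per-profile unique-word counting above can be sketched with the standard library alone. The toy profiles and stop-word list here are hypothetical stand-ins for the notebook's data:

```python
import string
from collections import Counter

profiles = [
    "I love reading. Reading is my life!",
    "I love writing stories and reading them.",
]
stop_words = {'i', 'is', 'my', 'and', 'them'}

counts = Counter()
for text in profiles:
    # Strip punctuation, lowercase, and deduplicate within each profile
    words = set(text.lower().translate(str.maketrans('', '', string.punctuation)).split())
    counts.update(words)  # each word counted at most once per profile

# Fraction of profiles containing each non-stop word
top = [(w, n / len(profiles)) for w, n in counts.most_common() if w not in stop_words]
print(top[:3])
```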
Above are the words most likely to be used at least once in a profile description. And of course, this is excluding stop words, such as "the", "is", or "are". From here, we can guess that most profiles mention the user's likes (or dislikes) and something related to reading and/or writing. Very standard words you'd expect in a community like fanfiction.
```python
name_count = profile_wf['name'] / sum(hasprofile)
gender_count = (profile_wf['gender'] + profile_wf['sex']) / sum(hasprofile)
age_count = (profile_wf['age'] + profile_wf['old']) / sum(hasprofile)
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
The word "name" also made the list. We estimate that ~21% of profiles that have something written will include that word. This prompts the question of what other information we might parse out. We looked into "gender"/"sex" and found those words in ~6% of profiles. As for "age"/"old", ~20%.

Gender

Now this is where things get tricky. We've discovered that ~25% of users have written something in their profiles. For those users, we want to 1) figure out if they have disclosed their gender, and 2) get the machine to recognize what that gender is. For this exercise, we will only examine users who have disclosed in English. We will start with the most basic approach: search for the key words "female" and "male", then count how many profiles have one word but not the other.
```python
# finds gender breakdown
profile_text = [list(set(row.lower().translate(str.maketrans('', '', string.punctuation)).split()))
                for row in df_active.loc[hasprofile, 'profile']]
female = ['female' in row and 'male' not in row for row in profile_text]
male = ['male' in row and 'female' not in row for row in profile_text]
gender = pd.Series([sum(female), sum(male)], index=['female', 'male'])

# plots chart
gender.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
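The exclusive keyword test above (a profile counts as "female" only if it contains "female" but not "male", and vice versa) can be illustrated with toy data; the profile texts below are hypothetical:

```python
import string

profiles = [
    "Hi, I'm a female student who loves fanfiction.",
    "Male, 20, mostly into sci-fi crossovers.",
    "A female author writing about a male character.",  # contains both words: counts as neither
    "No gender words here at all.",
]

def word_set(text):
    """Lowercase, strip punctuation, and return the set of tokens."""
    return set(text.lower().translate(str.maketrans('', '', string.punctuation)).split())

tokens = [word_set(p) for p in profiles]
female = sum('female' in t and 'male' not in t for t in tokens)
male = sum('male' in t and 'female' not in t for t in tokens)
print(female, male)  # 1 1
```

Note that the set-membership test avoids the substring pitfall: `'male' in 'female'` is true for strings, but not for tokens in a set.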
It should be noted that third-gender and non-binary words were also tested but yielded too few observations for analysis. As such, everything we talk about below is only in reference to the female and male genders. Assuming a user is either "female" or "male", we estimate a rough 3:1 ratio, favoring female. This may or may not reflect the gender ratio of the actual userbase. Here, we are making a lot of assumptions, including:
* Female and male users are equally likely to write something in their profile.
* Female and male users are equally likely to disclose their gender in their profile.
* Female and male users are equally likely to choose the words "female" or "male" for disclosing their gender.
* The words "female" or "male" are primarily used in the context of gender self-identification.

We decided to do a check against other gender-related words that might be potential identifiers. The chart below reveals the proportion of written profiles that contain each word. Note that one profile can contain both the female-specific and the male-specific word.
```python
# sets gender words
female = ['female', 'girl', 'woman', 'daughter', 'mother', 'sister']
male = ['male', 'boy', 'man', 'son', 'father', 'brother']
other = ['agender', 'queer', 'nonbinary', 'transgender', 'trans', 'bigender', 'intergender']

# finds gender ratio
gender_index = [n + '-' + m for m, n in zip(female, male)]
gender = pd.DataFrame(list(map(list, zip(profile_wf[female] / sum(hasprofile),
                                         (-1 * profile_wf[male]) / sum(hasprofile)))),
                      columns=['female', 'male'])
gender = gender.set_index(pd.Series(gender_index))

# plots chart
gender.plot.bar(stacked=True)
plt.ylabel('')
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
Here we have a much more mixed picture. The problem with the pairs "girl"/"boy" and "woman"/"man" is that they may refer to third parties (e.g., "I once met a girl who..."). However, when those two columns are added together, there does not seem to be any evidence of a gender disparity. The final three pairs are even more likely to refer to third parties, and are there for comparison purposes only. On a slight digression, the distribution of the word choices is interesting: "girl" is much more widely used than "woman", whereas "man" is more widely used than "boy". All in all, this is a very rough look into the question of gender. Later on, we will try to readdress this question through more sophisticated classification methods.

Age

Finding the age is an even greater challenge than gender. We will return to this question when we focus on text analysis and natural language processing.

Account activity and favorites

The site fanfiction.net allows users to "favorite" authors and stories. These favorites may also serve as bookmarks to help users refind old stories they have read, and can be thought of as a metric of how active a user is. After some calculations, we find that approximately one-fourth of users have favorited at least one author, and approximately one-third have favorited at least one story. In comparison, only ~2% are in a community. Let's look at the distributions of the number of authors vs. stories a user favorites.
```python
# examines distribution of favorited authors (fa) vs. favorited stories (fs)
df_active[['fa', 'fs']].plot.hist(normed=True, bins=np.arange(1, 50, 1), alpha=0.65)
plt.show()
```
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
The right tail is much longer than is depicted above -- in the thousands. However, we cut it off at 50 to better show the distribution. For the most part, the distributions of the number of favorited authors and the number of favorited stories are similar, with heavy right skew. The number of authors is slightly more concentrated near 1, whereas the number of stories is more dispersed. All of this implies that users are more lenient in favoriting stories than authors, which aligns with our intuition.

Stories written

Finally, let's look at how many stories users write. We already know that ~20% of the users are authors. Of those authors, how many stories do they tend to publish?
# examines distribution of stories written df_active['st'].plot.hist(normed=True, bins=np.arange(1, np.max(df_active['st']), 1)) plt.show()
jupyter_notebooks/profile_analysis.ipynb
lily-tian/fanfictionstatistics
mit
Introduction: Position switching Let's start with an illustrative example that shows how the exposure calculation is done for a pointed observation (position switching). Usually, the backend provides the measured (spectral) intensities, $P$, in an uncalibrated fashion, e.g., in units of counts. Therefore one needs to derive a conversion factor. Furthermore, one has to remove the influence of the system bandpass (frequency-dependent gain). Both problems can be solved with the so-called position-switching technique: \begin{equation} T = T_\mathrm{sys}\frac{P_\mathrm{on} - P_\mathrm{ref}}{P_\mathrm{ref}}\,. \end{equation} The system temperature, $T_\mathrm{sys}$, is a property of the receiver, which determines the base noise level. It depends on the receiver noise itself, but also has contributions from the ground and atmosphere, as well as the astronomical background (Galactic continuum, CMB). The noise level, $\Delta T$, of the derived spectral intensity, $T$, will decrease the longer one integrates (radiometer equation): \begin{equation} \Delta T = \frac{T_\mathrm{sys}}{\sqrt{\tau\Delta f}}\,. \end{equation} However, for position switching one divides by the (noisy) reference spectrum, which increases the noise by a factor of $\sqrt{2}$. This factor is compensated by the fact that one often has two polarization channels available, which can be averaged. Example: observation of NH3 lines at 23.7 GHz (at Effelsberg) For this one would use a K-band receiver, which has about 60 K system temperature (at zenith). For non-zenith observations, the airmass will increase the effective system temperature, depending on the atmospheric opacity, $\tau$.
dual_pol = True restfreq = 23.7e9 # Hz opacity = 0.07 # assume reasonably good weather Tsys_zenith = 60.
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
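The two equations above combine into a small helper. A minimal sketch (the function name psw_noise is ours, not from the notebook):

```python
import numpy as np

def psw_noise(Tsys, exposure, delta_f, dual_pol=True):
    """Radiometer noise for a position-switched observation.

    dT = Tsys / sqrt(tau * delta_f), times sqrt(2) for the On/Ref
    division, divided by sqrt(2) again if two polarization channels
    are averaged.
    """
    dT = Tsys / np.sqrt(exposure * delta_f)
    dT *= np.sqrt(2.)          # position switching
    if dual_pol:
        dT /= np.sqrt(2.)      # averaging two polarization channels
    return dT

# 60 K system, 60 s exposure, ~8.9 kHz channels (as used below)
dT = psw_noise(60., 60., 8.9e3)
```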
The atmospheric temperature is approximately given by the ambient temperature at the ground.
T_amb = 290. # K T_atm = T_amb - 17.
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Calculate telescope sensitivity (aka Kelvins per Jansky).
Gamma = 1.12 # K/Jy eta_MB = 0.79 # main beam efficiency Gamma_MB = Gamma / eta_MB
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The conversion between antenna temperatures and main-beam brightness temperatures is given by
Ta_to_Tb = 1. / eta_MB
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Define spectrometer properties
nchan = 2 ** 16 # 64 k bandwidth = 5e8 # 500 MHz spec_reso = bandwidth / nchan * 1.16 # true spectral resolution 16% worse than channel width print('spec_reso = {:.1f} kHz'.format(spec_reso * 1.e-3))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
If a certain velocity resolution is desired, we first have to infer the desired spectral resolution.
desired_vel_resolution = 1. # km/s desired_freq_resolution = dv_to_df(restfreq, desired_vel_resolution) print('desired_freq_resolution = {:.1f} kHz'.format(desired_freq_resolution * 1.e-3))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
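The helper dv_to_df is defined earlier in the notebook. For reference, a minimal sketch based on the standard non-relativistic Doppler relation $\Delta f = f_0 \, \Delta v / c$ (the real helper may use a different convention):

```python
def dv_to_df(restfreq, dv):
    """Convert a velocity width (km/s) to a frequency width (Hz).

    Non-relativistic Doppler relation, df = f0 * dv / c.
    """
    c_kms = 299792.458  # speed of light in km/s
    return restfreq * dv / c_kms

df = dv_to_df(23.7e9, 1.0)  # ~79 kHz at 23.7 GHz
```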
This means we can bin the original spectrum by a factor of
smooth_nbin = int(desired_freq_resolution / spec_reso + 0.5)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
which will further decrease the noise.
print('smooth_nbin', smooth_nbin)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Case 1: Calculate the noise after a certain integration time. The result will depend on the elevation of the source (because different airmasses modify $T_\mathrm{sys}$).
exposure = 60. # seconds elevations = np.array([10, 20, 30, 40, 50, 60, 90]) AM = 1. / np.sin(np.radians(elevations)) gain_correction = gaincurve(elevations, 0.954, 3.19E-3, -5.42E-5)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
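The helper gaincurve is defined elsewhere in the notebook. A plausible sketch, assuming a second-order polynomial in elevation: the coefficients used above (0.954, 3.19e-3, -5.42e-5) fit this form with a peak close to 1 near 30 degrees elevation, but the actual Effelsberg parametrization may differ:

```python
import numpy as np

def gaincurve(elev_deg, a0, a1, a2):
    """Elevation-dependent gain correction (assumed polynomial form).

    g(el) = a0 + a1 * el + a2 * el**2, normalized so the peak is
    close to 1; this is an assumption, not the notebook's definition.
    """
    elev_deg = np.asanyarray(elev_deg, dtype=float)
    return a0 + a1 * elev_deg + a2 * elev_deg ** 2

g = gaincurve(np.array([10., 30., 90.]), 0.954, 3.19e-3, -5.42e-5)
```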
Note that $T_\mathrm{sys}$ is higher at low elevations (more air mass).
Tsys_corr = Tsys_zenith + T_atm * (np.exp(opacity * AM) - np.exp(opacity * 1)) print('Tsys_corr', Tsys_corr) # Tsys_corr = Tsys_zenith + opacity * T_atm * (AM - 1) # approximate formula, for small opacity * AM # print(Tsys_corr)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Calculate raw $T_\mathrm{A}$ noise:
Ta_rms = Tsys_corr / np.sqrt(spec_reso * smooth_nbin * exposure)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
For dual polarization observations, we can divide by $\sqrt{2}$.
if dual_pol: Ta_rms /= np.sqrt(2.)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
We also have to account for the position switch (division by noisy reference spectrum).
Ta_rms *= np.sqrt(2.)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Finally, we convert to main-beam brightness temperature, $\Delta T_\mathrm{B}$, and flux-density, $\Delta S$, noise:
Tb_rms = Ta_to_Tb * Ta_rms / gain_correction S_rms = Tb_rms / Gamma_MB
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The astronomical signal is furthermore attenuated by the atmosphere. There are two ways to handle this: (a) if the signal strength is known (or can be expected to have a certain value), just apply the attenuation factor and compare to the noise levels; (b) calculate an effective noise level by increasing the $T_\mathrm{B}$ and flux-density noise accordingly. Here, we follow the second approach, as the true signal level is unknown. The effective RMS values are not true RMS estimates, but merely serve to indicate the impact of Earth's atmosphere on the sensitivity. Close to zenith, the effect is a few percent only, but for very low elevations, almost half of the signal is lost!
atm_atten = np.exp(-opacity * AM) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( 'Elev', 'Airmass', 'Tsys', 'Ta RMS', 'Tb RMS', 'S RMS', 'AtmAtten', 'Tb_eff RMS', 'S_eff RMS' )) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( '[d]', '', '[K]', '[K]', '[K]', '[Jy]', '', '[K]', '[Jy]' )) for idx in range(len(elevations)): print( '{0:>8.2f} {1:>8.2f} {2:>10.4f} {3:>10.4f} {4:>10.4f} ' '{5:>10.4f} {6:>10.4f} {7:>10.4f} {8:>10.4f}'.format( elevations[idx], AM[idx], Tsys_corr[idx], Ta_rms[idx], Tb_rms[idx], S_rms[idx], atm_atten[idx], Tb_rms[idx] / atm_atten[idx], S_rms[idx] / atm_atten[idx], )) print('Ta RMS = Antenna temp. noise') print('Tb RMS = Brightness temp. noise') print('S RMS = Flux density noise')
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Case 2: Calculate, how long we have to integrate, to reach a certain (effective) noise level. Start with a given noise level
Tb_eff_rms_desired = 0.01 # 10 mK Ta_eff_rms_desired = Tb_eff_rms_desired * gain_correction / Ta_to_Tb
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Again, divide by two for dual polarization
exposure = (Tsys_corr / Ta_eff_rms_desired) ** 2 / (spec_reso * smooth_nbin) if dual_pol: exposure /= 2.
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
and account for PSW (division by noisy reference spectrum)
exposure *= 2. print('Exposure time needed to reach an effective MB brightness temperature noise level of {:.1f} mK'.format( Tb_eff_rms_desired * 1.e3)) print('{0:>8s} {1:>10s}'.format('Elev [d]', 'Time [min]')) for idx in range(len(elevations)): print('{0:>8.2f} {1:>10.1f}'.format( elevations[idx], (exposure / 60.)[idx] ))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
On-the-fly mapping For OTF mapping, it is not straightforward to calculate the effective noise in the map after gridding. Therefore, we simulate the process, use cygrid to produce a map, and measure the noise accordingly. Map setup At Effelsberg, we do so-called zig-zag scanning. For position switching, one observes a reference position every $n$ scan lines. In contrast to pointed observations, it is possible to spend a different amount of time on the reference position (this will have an impact on the degree of correlated spatial noise in the final map!).
map_width, map_height = 100., 100. # arcsec beamsize_fwhm = 38. # arcsec; at the frequency given in our example
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The spacing between the scan lines should not be larger than half the beam width; a third is even better.
num_scan_lines = int(3 * map_height / beamsize_fwhm + 0.5) print('num_scan_lines', num_scan_lines)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Likewise, the scan velocity must not be too high. It is determined by the sampling rate and the beam size (one wants at least three independent samples per beam, otherwise the resulting map will have some smearing along the scan lines). Let's calculate the maximum scan speed.
sampling_interval = 1. # s (== 4 x 250 ms at Effelsberg) max_speed = beamsize_fwhm / 3 / sampling_interval print('max_speed = {:.2f} arcsec per s'.format(max_speed))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
With this, we can calculate the minimal duration of one scan line
min_duration = map_width / max_speed print('min_duration = {:.1f} s'.format(min_duration))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
However, each telescope has a duty cycle between two scan lines, defined by the time the telescope needs for re-positioning. At Effelsberg, this is about 15 s, which means that the duration per scan line should be at least one or two minutes; otherwise the observing efficiency would become really poor. Choose a meaningful scan-line duration.
scanline_duration = 90. # seconds samples_per_scanline = int(scanline_duration / sampling_interval + 0.5) print('samples_per_scanline', samples_per_scanline)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
We now have to choose the time spent on the reference position. Furthermore, it is possible to do the reference scan only after every $n$ scan lines (to save time). Let's do a relatively long reference-position integration, but only after every two scan lines (this avoids some duty cycles).
refpos_duration = 90. # seconds refpos_interval = 2
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Such a map would need the following total observing time.
duty_cycle = 15. # s total_on_time = (scanline_duration + duty_cycle) * num_scan_lines total_ref_time = (refpos_duration + duty_cycle) * (num_scan_lines // refpos_interval) total_time = total_on_time + total_ref_time print('Total time necessary for map: {:.1f} min'.format(total_time / 60.))
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Now, we create the raw data. Note that the absolute value of the RMS in counts (for the $P$ quantities) is not important; only the RMS ratio between the On and Ref spectra matters, because the absolute scale gets calibrated away in the $(\mathrm{On}-\mathrm{Ref})/\mathrm{Ref}$ equation. Compared to the On scan, a Ref position is integrated over 'refpos_duration', which means that the noise in the Ref spectrum is a factor of $\sqrt{\texttt{refpos_duration} / \texttt{sampling_interval}}$ smaller. To increase the accuracy of the noise estimate, we will produce several noise maps at once.
num_maps = 128
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Here, a dummy $T_\mathrm{sys}$ value is used. Later, the map (and thus the RMS) can simply be scaled to match the true $T_\mathrm{sys}$.
dummy_tsys = 1.
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
We will now create spectral noise (in arbitrary units). This must be much smaller than $T_\mathrm{sys}$ to avoid numerical problems. It is also important that each of the raw-data spectra has an offset (the $T_\mathrm{sys}$ level in counts, if you will). If one neglected this, one would divide by very small numbers in the line where the reduced spectra are calculated. The magnitude of the noise is furthermore tied to the $T_\mathrm{sys}$ level: it is based on the radiometer factor $\left(1/\sqrt{\tau\Delta f}\right)$
on_noise = dummy_tsys / np.sqrt(spec_reso * smooth_nbin * sampling_interval) ref_noise = dummy_tsys / np.sqrt(spec_reso * smooth_nbin * refpos_duration) print('on_noise = {:.2e}, ref_noise = {:.2e}'.format(on_noise, ref_noise)) reduced_specs = np.empty((num_scan_lines, samples_per_scanline, num_maps)) xcoords, ycoords = np.empty((2, num_scan_lines, samples_per_scanline)) lons = np.linspace(-map_width / 2, map_width / 2, samples_per_scanline) lats = np.linspace(-map_height / 2, map_height / 2, num_scan_lines) for scan_line in range(num_scan_lines): if scan_line % refpos_interval == 0: ref_spec = np.random.normal(0., ref_noise, num_maps) + dummy_tsys on_specs = np.random.normal(0., on_noise, (samples_per_scanline, num_maps)) + dummy_tsys reduced_specs[scan_line] = dummy_tsys * (on_specs - ref_spec) / ref_spec xcoords[scan_line] = lons ycoords[scan_line] = np.repeat(lats[scan_line], lons.size)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
We can test this, by just plotting the "raw" data for one channel.
tmp_map = reduced_specs[..., 0] cabs = np.max(np.abs(tmp_map)) fig = pl.figure(figsize=(10, 5)) ax = fig.add_axes((0.1, 0.1, 0.8, 0.8)) _ = ax.scatter( xcoords, ycoords, c=tmp_map, cmap='bwr', edgecolor='none', vmin=-cabs, vmax=cabs, )
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Now do the real gridding. First, prepare a WCS header
target_header = setup_header((0, 0), (map_width / 3600., map_height / 3600.), beamsize_fwhm / 3600.)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
We already define a WCS object for later use in our plots:
target_wcs = WCS(target_header) # print(target_header)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Setup the gridder and define kernel sizes (half the beamsize is always a good choice).
gridder = cygrid.WcsGrid(target_header, naxis3=num_maps) kernelsize_fwhm = beamsize_fwhm / 2 kernelsize_fwhm /= 3600. # need to convert to degree kernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2)) support_radius = 4. * kernelsize_sigma healpix_reso = kernelsize_sigma / 2. gridder.set_kernel( 'gauss1d', (kernelsize_sigma,), support_radius, healpix_reso, )
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The gridder needs the coordinates as flat arrays. The data to be gridded must be a 2D array (first dimension has to match the number of coordinate samples, second dimension is the number of channels/maps in the desired data cube).
gridder.grid(xcoords.flatten() / 3600, ycoords.flatten() / 3600, reduced_specs.reshape((-1, num_maps))) cygrid_cube = gridder.get_datacube()
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Again, as a sanity check, we plot one of the channel maps.
tmp_map = cygrid_cube[0] cabs = np.max(np.abs(tmp_map)) fig = pl.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection=target_wcs.celestial) ax.imshow(tmp_map, cmap='bwr', interpolation='nearest', origin='lower', vmin=-cabs, vmax=cabs) lon, lat = ax.coords lon.set_axislabel('R.A. [deg]') lat.set_axislabel('Dec [deg]') pl.show()
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Last but not least, we can measure the noise. There are two possibilities to do this. First, we can calculate the RMS over the full data cube.
rms_cube = np.std(cygrid_cube, ddof=1)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Second, we can calculate the RMS per plane (and the average over all planes)
rms_plane = np.mean(np.std(cygrid_cube, ddof=1, axis=(1, 2))) print('rms_cube', rms_cube, 'rms_plane', rms_plane)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The ultimate question is: what is the noise level in such a map, if we account for the real $T_\mathrm{sys}$, atmospheric effects, etc.?
elevations = np.array([10, 20, 30, 40, 50, 60, 90]) AM = 1. / np.sin(np.radians(elevations)) gain_correction = gaincurve(elevations, 0.954, 3.19E-3, -5.42E-5) Tsys_corr = Tsys_zenith + T_atm * (np.exp(opacity * AM) - np.exp(opacity * 1)) print('Tsys_corr', Tsys_corr)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
The $T_\mathrm{A}$ noise is now the measured noise from the gridded maps multiplied by $T_\mathrm{sys}$
Ta_rms = rms_cube * Tsys_corr if dual_pol: Ta_rms /= np.sqrt(2.)
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Note, we don't need to account for the position switch again (division by noisy reference spectrum was already performed in the reduced-spectra array creation)!
Tb_rms = Ta_to_Tb * Ta_rms / gain_correction S_rms = Tb_rms / Gamma_MB atm_atten = np.exp(-opacity * AM) print('-' * 95) print('RMS per map') print('-' * 95) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( 'Elev', 'Airmass', 'Tsys', 'Ta RMS', 'Tb RMS', 'S RMS', 'AtmAtten', 'Tb_eff RMS', 'S_eff RMS' )) print('{0:>8s} {1:>8s} {2:>10s} {3:>10s} {4:>10s} {5:>10s} {6:>10s} {7:>10s} {8:>10s}'.format( '[d]', '', '[K]', '[K]', '[K]', '[Jy]', '', '[K]', '[Jy]' )) for idx in range(len(elevations)): print( '{0:>8.2f} {1:>8.2f} {2:>10.4f} {3:>10.4f} {4:>10.4f} ' '{5:>10.4f} {6:>10.4f} {7:>10.4f} {8:>10.4f}'.format( elevations[idx], AM[idx], Tsys_corr[idx], Ta_rms[idx], Tb_rms[idx], S_rms[idx], atm_atten[idx], Tb_rms[idx] / atm_atten[idx], S_rms[idx] / atm_atten[idx], )) print('Ta RMS = Antenna temp. noise') print('Tb RMS = Brightness temp. noise') print('S RMS = Flux density noise')
notebooks/B01_OTF-map_exposure_calculator.ipynb
bwinkel/cygrid
gpl-3.0
Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. <img src='assets/convolutional_autoencoder.png' width=500px> Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns.
This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena et al., the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al. claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
learning_rate = 0.001 # Input and target placeholders inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name="inputs") targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name="targets") ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu) # (3,3) is the kernel size # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same') # pool size and strides of (2,2) halve the width and height # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None) # no activation here; the sigmoid is applied inside the cross-entropy loss # Now 28x28x1 # Pass logits through sigmoid to get the reconstructed image decoded = tf.nn.sigmoid(logits, name="decoded") # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
autoencoder/Convolutional_Autoencoder.ipynb
geilerloui/deep-learning
mit
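The nearest-neighbor upsampling used in the decoder can be illustrated without TensorFlow. A minimal NumPy sketch of what tf.image.resize_nearest_neighbor does for integer scale factors:

```python
import numpy as np

def upsample_nn(x, factor=2):
    """Nearest-neighbor upsampling for a (H, W, C) feature map.

    Every pixel is repeated `factor` times along height and width,
    mirroring tf.image.resize_nearest_neighbor for integer scales.
    """
    x = np.repeat(x, factor, axis=0)  # repeat rows
    x = np.repeat(x, factor, axis=1)  # repeat columns
    return x

feat = np.arange(4.).reshape(2, 2, 1)  # tiny 2x2 single-channel map
up = upsample_nn(feat)                 # -> 4x4x1
```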
Define a helper function to synthesize and plot the sound field from the given driving signals.
def sound_field(d, selection, secondary_source, array, grid, tapering=True): if tapering: tapering_window = sfs.tapering.tukey(selection, alpha=0.3) else: tapering_window = sfs.tapering.none(selection) p = sfs.fd.synthesize(d, tapering_window, array, secondary_source, grid=grid) sfs.plot2d.amplitude(p, grid, xnorm=[0, 0, 0]) sfs.plot2d.loudspeakers(array.x, array.n, tapering_window)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Circular loudspeaker arrays In the following we show different sound field synthesis methods applied to a circular loudspeaker array.
radius = 1.5 # in m array = sfs.array.circular(number_of_secondary_sources, radius)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Wave Field Synthesis (WFS) Plane wave
d, selection, secondary_source = sfs.fd.wfs.plane_25d(omega, array.x, array.n, n=npw) sound_field(d, selection, secondary_source, array, grid)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Point source
d, selection, secondary_source = sfs.fd.wfs.point_25d(omega, array.x, array.n, xs) sound_field(d, selection, secondary_source, array, grid)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Near-Field Compensated Higher Order Ambisonics (NFC-HOA) Plane wave
d, selection, secondary_source = sfs.fd.nfchoa.plane_25d(omega, array.x, radius, n=npw) sound_field(d, selection, secondary_source, array, grid, tapering=False)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Point source
d, selection, secondary_source = sfs.fd.nfchoa.point_25d(omega, array.x, radius, xs) sound_field(d, selection, secondary_source, array, grid, tapering=False)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Linear loudspeaker array In the following we show different sound field synthesis methods applied to a linear loudspeaker array.
spacing = 0.07 # in m array = sfs.array.linear(number_of_secondary_sources, spacing, center=[0, -0.5, 0], orientation=[0, 1, 0])
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Wave Field Synthesis (WFS) Plane wave
d, selection, secondary_source = sfs.fd.wfs.plane_25d(omega, array.x, array.n, npw) sound_field(d, selection, secondary_source, array, grid)
doc/examples/sound-field-synthesis.ipynb
sfstoolbox/sfs-python
mit
Cylindrical coordinates The ISO standard 80000-2 recommends the use of $(\rho, \varphi, z)$, where $\rho$ is the radial coordinate, $\varphi$ the azimuth, and $z$ the height. For the conversion between cylindrical and Cartesian coordinates, it is convenient to assume that the reference plane of the former is the Cartesian $xy$-plane (with equation $z=0$), and that the cylindrical axis is the Cartesian $z$-axis. Then the $z$-coordinate is the same in both systems, and the correspondence between cylindrical $(\rho, \varphi)$ and Cartesian $(x, y)$ is the same as for polar coordinates, namely \begin{align} x &= \rho \cos \varphi \\ y &= \rho \sin \varphi \end{align} in one direction, and \begin{align} \rho &= \sqrt{x^2+y^2} \\ \varphi &= \begin{cases} 0 & \mbox{if } x = 0 \mbox{ and } y = 0\\ \arcsin\left(\frac{y}{\rho}\right) & \mbox{if } x \geq 0 \\ \arctan\left(\frac{y}{x}\right) & \mbox{if } x > 0 \\ -\arcsin\left(\frac{y}{\rho}\right) + \pi & \mbox{if } x < 0 \end{cases} \end{align} in the other. It is also common to use $\varphi = \mathrm{atan2}(y, x)$.
# Cylinder phi, z = np.mgrid[0:2*np.pi:31j, -2:2*np.pi:31j] x = 2*np.cos(phi) y = 2*np.sin(phi) cylinder = pv.StructuredGrid(x, y, z) # Vertical plane rho, z = np.mgrid[0:3:31j, -2:2*np.pi:31j] x = rho*np.cos(np.pi/4) y = rho*np.sin(np.pi/4) vert_plane = pv.StructuredGrid(x, y, z) # Horizontal plane rho, phi = np.mgrid[0:3:31j, 0:2*np.pi:31j] x = rho*np.cos(phi) y = rho*np.sin(phi) z = np.ones_like(x) hor_plane = pv.StructuredGrid(x, y, z) view(geometries=[cylinder, vert_plane, hor_plane], geometry_colors=[blue, red, green])
notebooks/vector_calculus-pyvista.ipynb
nicoguaro/AdvancedMath
mit
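In code, the quadrant issue is handled by atan2. A minimal sketch of the two conversions:

```python
import math

def cart2cyl(x, y, z):
    """Cartesian -> cylindrical (rho, phi, z), using atan2 so that
    all quadrants are handled correctly."""
    rho = math.hypot(x, y)
    phi = math.atan2(y, x)
    return rho, phi, z

def cyl2cart(rho, phi, z):
    """Cylindrical -> Cartesian."""
    return rho * math.cos(phi), rho * math.sin(phi), z

rho, phi, z = cart2cyl(1.0, 1.0, 2.0)  # rho = sqrt(2), phi = pi/4
```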
Spherical coordinates The use of $(r, \theta, \varphi)$ to denote radial distance, inclination (or elevation), and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2. The spherical coordinates of a point can be obtained from its Cartesian coordinates $(x, y, z)$ via \begin{align} r&=\sqrt{x^2 + y^2 + z^2} \\ \theta &= \arccos\frac{z}{\sqrt{x^2 + y^2 + z^2}} = \arccos\frac{z}{r} \\ \varphi &= \arctan \frac{y}{x} \end{align} The inverse tangent denoted by $\varphi = \arctan\left(\frac{y}{x}\right)$ must be suitably defined, taking into account the correct quadrant of $(x, y)$ (using the function atan2). Conversely, the Cartesian coordinates may be retrieved from the spherical coordinates (radius $r$, inclination $\theta$, azimuth $\varphi$), where $r \in [0, \infty)$, $\theta \in [0, \pi]$ and $\varphi \in [0, 2\pi)$, by: \begin{align} x&=r \, \sin\theta \, \cos\varphi \\ y&=r \, \sin\theta \, \sin\varphi \\ z&=r \, \cos\theta \end{align}
theta, phi = np.mgrid[0:np.pi:21j, 0:np.pi:21j] # Sphere x = np.sin(phi) * np.cos(theta) y = np.sin(phi) * np.sin(theta) z = np.cos(phi) sphere = pv.StructuredGrid(x, y, z) sphere2 = pv.StructuredGrid(-x, -y, z) # Cone x = theta/3 * np.cos(phi) y = theta/3 * np.sin(phi) z = theta/3 cone = pv.StructuredGrid(x, y, z) cone2 = pv.StructuredGrid(-x, -y, z) # Plane x = theta/np.pi y = theta/np.pi z = phi - np.pi/2 plane = pv.StructuredGrid(x, y, z) view(geometries=[sphere, sphere2, cone, cone2, plane], geometry_colors=[blue, blue, red, red, green], geometry_opacities=[1.0, 0.5, 1.0, 0.5, 1.0])
notebooks/vector_calculus-pyvista.ipynb
nicoguaro/AdvancedMath
mit
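A minimal sketch of the spherical conversions, following the formulas above (atan2 for the azimuth):

```python
import math

def cart2sph(x, y, z):
    """Cartesian -> spherical (r, theta = inclination, phi = azimuth)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def sph2cart(r, theta, phi):
    """Spherical -> Cartesian."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

r, theta, phi = cart2sph(0.0, 0.0, 2.0)  # on the +z axis: theta = 0
```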
Ellipsoidal coordinates
v, theta = np.mgrid[0:np.pi/2:21j, 0:np.pi:21j] a = 3 b = 2 c = 1 # Ellipsoid lam = 3 x = np.sqrt(a**2 + lam) * np.cos(v) * np.cos(theta) y = np.sqrt(b**2 + lam) * np.cos(v) * np.sin(theta) z = np.sqrt(c**2 + lam) * np.sin(v) ellipsoid = pv.StructuredGrid(x, y, z) ellipsoid2 = pv.StructuredGrid(x, y, -z) ellipsoid3 = pv.StructuredGrid(x, -y, z) ellipsoid4 = pv.StructuredGrid(x, -y, -z) # Hyperboloid of one sheet mu = 2 x = np.sqrt(a**2 + mu) * np.cosh(v) * np.cos(theta) y = np.sqrt(b**2 + mu) * np.cosh(v) * np.sin(theta) z = np.sqrt(c**2 + mu) * np.sinh(v) hyper = pv.StructuredGrid(x, y, z) hyper2 = pv.StructuredGrid(x, y, -z) hyper3 = pv.StructuredGrid(x, -y, z) hyper4 = pv.StructuredGrid(x, -y, -z) # Hyperboloid of two sheets nu = 1 x = np.sqrt(a**2 + nu) * np.cosh(v) y = np.sqrt(c**2 + nu) * np.sinh(v) * np.sin(theta) z = np.sqrt(b**2 + nu) * np.sinh(v) * np.cos(theta) hyper_up = pv.StructuredGrid(x, y, z) hyper_down = pv.StructuredGrid(-x, y, z) hyper_up2 = pv.StructuredGrid(x, -y, z) hyper_down2 = pv.StructuredGrid(-x, -y, z) view(geometries=[ellipsoid, ellipsoid2, ellipsoid3, ellipsoid4, hyper, hyper2, hyper3, hyper4, hyper_up, hyper_down, hyper_up2, hyper_down2], geometry_colors=[blue]*4 + [red]*4 + [green]*4)
notebooks/vector_calculus-pyvista.ipynb
nicoguaro/AdvancedMath
mit
References Wikipedia contributors. "Cylindrical coordinate system." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 12 Dec. 2016. Web. 9 Feb. 2017. Wikipedia contributors. "Spherical coordinate system." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 29 Jan. 2017. Web. 9 Feb. 2017. Sullivan et al., (2019). PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit (VTK). Journal of Open Source Software, 4(37), 1450, https://doi.org/10.21105/joss.01450
from IPython.core.display import HTML def css_styling(): styles = open('./styles/custom_barba.css', 'r').read() return HTML(styles) css_styling()
notebooks/vector_calculus-pyvista.ipynb
nicoguaro/AdvancedMath
mit
Preprocessing data
input_filename = "iris.data.txt" att = pd.read_csv(input_filename, sep=',', header=None) H = att.iloc[:,0:-1] # Get content to be trained H = np.c_[np.ones(len(H)), H] y = np.where(att.iloc[:,-1]=="Iris-setosa", 1.0, 0.0) label = list(["Iris-virginica/versicolor","Iris-setosa"])
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Compute the gradient norm: $\left| \left| \nabla l(w^{(t)}) \right| \right| = \sqrt{\displaystyle \sum_{i=1}^{m} \left( \frac{\partial l}{\partial w_{i}}\bigg|_{w^{(t)}} \right)^{2} }$ I implemented Logistic Regression based on my Multiple Linear Regression implementation.
def compute_norma(vector): norma = np.sqrt( np.sum( vector ** 2 ) ) return norma
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Compute sigmoid $\sigma (x) = \displaystyle \frac{1}{1 + e^{-x}}$
def sigmoid(x): sig = 1 / ( 1 + np.exp( - x ) ) return sig
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Compute the step gradient to train Logistic Regression $\frac{\partial l(w)}{\partial w_{j} } = \displaystyle \sum_{i}{ \left( y^{(i)} - \sigma(w^{T}x^{(i)}) \right) x_{j}^{(i)} }$
def step_gradient(H, w_current, y, learning_rate): diff = y - sigmoid( np.dot( H, w_current ) ) partial = np.sum( ( diff * ( H.transpose() ) ).transpose(), axis = 0 ) norma = compute_norma(partial) w = w_current + ( learning_rate * partial ) return [w, norma]
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
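The update direction can be verified against a central finite-difference approximation of the log-likelihood gradient on tiny synthetic data (a sketch; the data and helper names here are assumptions, not from the notebook):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood(H, y, w):
    # Bernoulli log-likelihood maximised by gradient ascent
    z = H @ w
    return np.sum(y * z - np.log(1.0 + np.exp(z)))

def analytic_gradient(H, y, w):
    # Same formula as step_gradient: H^T (y - sigma(Hw))
    return H.T @ (y - sigmoid(H @ w))

rng = np.random.default_rng(0)
H = np.c_[np.ones(5), rng.normal(size=(5, 2))]
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
w = rng.normal(size=3)

eps = 1e-6
numeric = np.array([(log_likelihood(H, y, w + eps * e)
                     - log_likelihood(H, y, w - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(np.max(np.abs(numeric - analytic_gradient(H, y, w))))  # small
```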
Run the complete gradient ascent:
def gradient_ascendent(H, y, learning_rate, epsilon):
    w = np.zeros((H.shape[1]))  # one weight per feature column
    num_iterations = 0
    gradient = 1
    while gradient > epsilon:
        [w, gradient] = step_gradient(H, w, y, learning_rate)
        num_iterations += 1
    return [w, num_iterations, gradient]
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Running the logistic regression:
learning_rate = 0.0053
epsilon = 0.001
[w, num_iterations, norm_gradient] = gradient_ascendent(H, y, learning_rate, epsilon)
print("Norm: {0}\nw: {1}\nnum_iterations: {2}\n\n".format(norm_gradient, w, num_iterations))
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Computing the coefficients with Scikit-learn
from sklearn.linear_model import LogisticRegression

# The C value is the inverse regularization strength:
# a high C value effectively turns regularization off.
reg = LogisticRegression(C=1e15)
reg.fit(H[:, 1:], y)
print("\nCoef with scikit-learn: {0}".format(reg.coef_))
print("\nIntercept with scikit-learn: {0}".format(reg.intercept_))
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Predict function
# Return the flower name and probability
def predict(w, x, label):
    pred = sigmoid(np.dot(w, x.transpose()))
    class_name = np.where(np.round(pred), label[1], label[0])  # Flower name
    pred = np.where(pred < 0.5, 1 - pred, pred)                # Flower probability
    return [class_name, np.around(pred * 100, 3)]

#                Iris-virginica   Iris-versicolor   Iris-setosa
x = np.array([[1, 7.2, 3.2, 6.0, 1.8],
              [1, 5.0, 2.3, 3.3, 1.0],
              [1, 5.1, 3.8, 1.5, 0.3]])
[class_name, prob] = predict(w, x, label)
for name, p in zip(class_name, prob):
    print("Class: {0}, probability: {1}%.".format(name, p))
Tarefas/Programando-Regressão-Logística-do-Zero/Task 04.ipynb
italoPontes/Machine-learning
lgpl-3.0
Algorithm for processing chunks:
1. Make a partition given the extent
2. Produce a tuple (minx, maxx, miny, maxy) for each element of the partition
3. Calculate the semivariogram for each chunk and save it in a dataframe
4. Plot everything
5. Do the same with a Matérn kernel
minx, maxx, miny, maxy = getExtent(new_data)
maxy

## Let's build the partition
N = 30
xp, dx = np.linspace(minx, maxx, N, retstep=True)
yp, dy = np.linspace(miny, maxy, N, retstep=True)
xx, yy = np.meshgrid(xp, yp)
coordinates_list = [(xx[i][j], yy[i][j]) for i in range(N) for j in range(N)]

from functools import partial
tuples = map(lambda (x, y): partial(getExtentFromPoint, x, y, step_sizex=dx, step_sizey=dy)(), coordinates_list)
len(tuples)
chunks = map(lambda (mx, Mx, my, My): subselectDataFrameByCoordinates(new_data, 'newLon', 'newLat', mx, Mx, my, My), tuples)
len(chunks)

## Here we can filter based on a threshold
threshold = 10
chunks_non_empty = filter(lambda df: df.shape[0] > threshold, chunks)
len(chunks_non_empty)

lengths = pd.Series(map(lambda ch: ch.shape[0], chunks_non_empty))
lengths.plot.hist()

variograms = map(lambda chunk: tools.Variogram(chunk, 'residuals1', using_distance_threshold=200000), chunks_non_empty)
vars = map(lambda v: v.calculateEmpirical(), variograms)
vars = map(lambda v: v.calculateEnvelope(num_iterations=50), variograms)
notebooks/.ipynb_checkpoints/global_variogram-checkpoint.ipynb
molgor/spystats
bsd-2-clause
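The partition step above can be sketched in Python 3 (the notebook's `lambda (x, y)` tuple unpacking is Python 2 only); `make_chunk_extents` is a hypothetical helper standing in for the notebook's `getExtentFromPoint`:

```python
import numpy as np

def make_chunk_extents(minx, maxx, miny, maxy, n):
    # Grid of n x n lower-left corners; each chunk spans one
    # (dx, dy) step, giving a (minx, maxx, miny, maxy) tuple.
    xp, dx = np.linspace(minx, maxx, n, retstep=True)
    yp, dy = np.linspace(miny, maxy, n, retstep=True)
    return [(x, x + dx, y, y + dy) for y in yp for x in xp]

extents = make_chunk_extents(0.0, 10.0, 0.0, 10.0, 5)
print(len(extents))  # 25
```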
Initial set-up

Load experiments used for original dataset calibration:
- Steady-state activation [Shibata1989]
- Steady-state inactivation [Firek1995]
- Inactivation time constant [Nygren1998]
- Recovery time constant [Nygren1998]
from experiments.ito_shibata import shibata_act
from experiments.ito_firek import firek_inact
from experiments.ito_nygren import nygren_inact_kin, nygren_rec

modelfile = 'models/nygren_ito.mmt'
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot the steady-state and time-constant functions of the original model
from ionchannelABC.visualization import plot_variables

sns.set_context('talk')

V = np.arange(-100, 40, 0.01)

nyg_par_map = {'ri': 'ito.r_inf',
               'si': 'ito.s_inf',
               'rt': 'ito.tau_r',
               'st': 'ito.tau_s'}

f, ax = plot_variables(V, nyg_par_map, modelfile, figshape=(2, 2))
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0
Combine model and experiments to produce:
- observations dataframe
- model function to run experiments and return traces
- summary statistics function to accept traces
observations, model, summary_statistics = setup(modelfile,
                                                shibata_act,
                                                firek_inact,
                                                nygren_inact_kin,
                                                nygren_rec)
assert len(observations) == len(summary_statistics(model({})))

g = plot_sim_results(modelfile,
                     shibata_act,
                     firek_inact,
                     nygren_inact_kin,
                     nygren_rec)
docs/examples/human-atrial/nygren_ito_original.ipynb
c22n/ion-channel-ABC
gpl-3.0