Example 8.6: Determine v(t) for t > 0 in the RLC circuit of Figure 8.15.
print("Exemplo 8.6")
u = 10**(-6)  # definition of the micro prefix
Vs = 40
L = 0.4
C = 20*u
A1 = symbols('A1')
A2 = symbols('A2')

# For t < 0
v0 = Vs*50/(50 + 30)
i0 = -Vs/(50 + 30)
print("V0:", v0, "V")
print("i0:", i0, "A")

# For t > 0
# C*dv(0)/dt + i(0) + v(0)/50 = 0
# 20u*dv(0)/dt - 0.5 + 0.5 = 0
# dv(0)/dt = 0
R = 50
a...
Aula 14 - Circuito RLC paralelo.ipynb
GSimas/EEL7045
mit
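A short sketch of the next step the example needs: the damping parameters of the parallel RLC circuit for t > 0, assuming the values used above (R = 50 Ω once the source branch is removed, L = 0.4 H, C = 20 µF). Since α > ω₀, the response is overdamped with two real roots:

```python
import math

# Parallel RLC values from the example above
R = 50        # ohms
L = 0.4       # henries
C = 20e-6     # farads

# Neper frequency and resonant frequency for a parallel RLC circuit
alpha = 1 / (2 * R * C)          # 500 rad/s
omega0 = 1 / math.sqrt(L * C)    # ~353.55 rad/s

# alpha > omega0 -> overdamped, two real characteristic roots
s1 = -alpha + math.sqrt(alpha**2 - omega0**2)
s2 = -alpha - math.sqrt(alpha**2 - omega0**2)
print(alpha, omega0)  # 500.0, ~353.55
print(s1, s2)         # ~-146.45 and ~-853.55
```

The constants A1 and A2 declared above would then be solved from v(0) and dv(0)/dt applied to v(t) = A1·exp(s1·t) + A2·exp(s2·t).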
amount, date, contributor_name, contributor_lname, contributor_fname, contributor_type == 'I'
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip',
                 usecols=['date', 'amount', 'contributor_type',
                          'contributor_lname', 'contributor_fname', 'contributor_name'])
df
# sdf = df[df.contributor_type=='I'].sample(1000)
sdf = df[df.contributor_type=='I'].copy()
sdf
from clean_names import clean_name
def do_...
ethnicolr/examples/ethnicolr_app_contrib2010.ipynb
suriyan/ethnicolr
mit
a) What proportion of contributors were Black, White, Hispanic, Asian, etc.?
adf = rdf.groupby(['race']).agg({'contributor_lname': 'count'})
adf * 100 / adf.sum()
ethnicolr/examples/ethnicolr_app_contrib2010.ipynb
suriyan/ethnicolr
mit
b) What proportion of total donations was given by Blacks, Hispanics, Whites, and Asians?
bdf = rdf.groupby(['race']).agg({'amount': 'sum'})
bdf * 100 / bdf.sum()
ethnicolr/examples/ethnicolr_app_contrib2010.ipynb
suriyan/ethnicolr
mit
c) Get the amount contributed by people of each race and divide it by the total amount contributed.
contrib_white = sum(rdf.amount * rdf.white)
contrib_black = sum(rdf.amount * rdf.black)
contrib_api = sum(rdf.amount * rdf.api)
contrib_hispanic = sum(rdf.amount * rdf.hispanic)
contrib_amount = [{'race': 'white', 'amount': contrib_white},
                  {'race': 'black', 'amount': contrib_black}, ...
ethnicolr/examples/ethnicolr_app_contrib2010.ipynb
suriyan/ethnicolr
mit
Load Iris Flower Data
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
machine-learning/logistic_regression_on_very_large_data.ipynb
tpin3694/tpin3694.github.io
mit
Standardize Features
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
machine-learning/logistic_regression_on_very_large_data.ipynb
tpin3694/tpin3694.github.io
mit
Train Logistic Regression Using SAG solver
# Create logistic regression object using the SAG solver
clf = LogisticRegression(random_state=0, solver='sag')

# Train model
model = clf.fit(X_std, y)
machine-learning/logistic_regression_on_very_large_data.ipynb
tpin3694/tpin3694.github.io
mit
Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hid...
def sigmoid(x):
    return 1/(1 + np.exp(-1.0 * x))

class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self...
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
nimish-jose/dlnd
gpl-3.0
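The forward pass the exercise asks for can be sketched as below. This is a minimal stand-alone version, not the class template's solution; the function name, the example weight matrices, and their (inputs × hidden) orientation are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Minimal two-layer forward pass: sigmoid hidden layer, linear output
# (a regression target, so no activation on the final layer).
def forward(X, w_input_hidden, w_hidden_output):
    hidden_inputs = np.dot(X, w_input_hidden)            # signals into hidden layer
    hidden_outputs = sigmoid(hidden_inputs)              # signals out of hidden layer
    final_inputs = np.dot(hidden_outputs, w_hidden_output)  # signals into output layer
    return final_inputs                                  # f(x) = x on the output unit

X = np.array([[0.5, -0.2, 0.1]])                         # one sample, three features
w_ih = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])  # 3 inputs -> 2 hidden
w_ho = np.array([[0.3], [-0.1]])                         # 2 hidden -> 1 output
print(forward(X, w_ih, w_ho).shape)  # (1, 1)
```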
Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
epochs = 2500
learning_rate = 0.01
hidden_nodes = 15
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for e in range(epochs):
    # Go through a random batch of...
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
nimish-jose/dlnd
gpl-3.0
Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
test_loss = MSE(network.run(test_features), test_targets['cnt'].values)
test_loss

fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.s...
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
nimish-jose/dlnd
gpl-3.0
Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.
Your answer below
The model ha...
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                     [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):
    ##########
    # Unit tests for data loading
    ##########
    def test_data_path(self...
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
nimish-jose/dlnd
gpl-3.0
Random Processes in Physics
Examples of physical processes that are/can be modelled as random include:
- Radioactive decay - we know the probability of decay per unit time from quantum physics, but the exact time of the decay is random.
- Brownian motion - if we could track the motion of all atomic particles, this wo...
# Review the documentation for NumPy's random module:
np.random?
StochasticMethods/RandomNumbersLecture1.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
Some basic functions to point out (we'll get to others in a bit):
- random() - uniformly distributed floats over [0, 1). Will include zero, but not one. If you include a number, n, in the brackets you get n random floats.
- randint(n, m) - a single random integer from n to m-1.
# print 5 uniformly distributed numbers between 0 and 1
print(np.random.random(5))
# print another 5 - should be different
print(np.random.random(5))
# print 5 uniformly distributed integers between 1 and 10
print(np.random.randint(1, 11, 5))
# print another 5 - should be different
print(np.random.randint(1, 11, 5))
StochasticMethods/RandomNumbersLecture1.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
Notice you have to use 1-11 for the range. Why?
# If you want to save a random number for future use:
z = np.random.random()
print("The number is", z)
# Rerun random
print(np.random.random())
print("The number is still", z)
StochasticMethods/RandomNumbersLecture1.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
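One way to see the answer to the question below: np.random.randint(low, high) excludes high, so the range 1-11 yields integers 1 through 10, and a six-sided die would need randint(1, 7). A quick sanity check:

```python
import numpy as np

# randint's upper bound is exclusive: randint(1, 7) yields 1..6,
# which is why simulating 1..10 above required randint(1, 11).
rolls = np.random.randint(1, 7, size=10000)
print(rolls.min(), rolls.max())  # 1 6 (with overwhelming probability at this sample size)
```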
In Class Exercise - Rolling Dice
Write a programme that generates and prints out two random numbers between 1 and 6. This simulates the rolling of two dice.
Now modify the programme to simulate making 2 million rolls of two dice. What fraction of the time do you get double six?
Extension: Plot a histogram of the...
np.random.seed(42)
for i in range(4):
    print(np.random.random())

np.random.seed(42)
for i in range(4):
    print(np.random.random())

np.random.seed(39)
for i in range(4):
    print(np.random.random())
StochasticMethods/RandomNumbersLecture1.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
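One possible sketch of the dice exercise above, counting how often two simulated dice both show six; the expected fraction is 1/36 ≈ 0.0278:

```python
import numpy as np

# Simulate 2 million rolls of two dice and count double sixes
n_rolls = 2_000_000
die1 = np.random.randint(1, 7, n_rolls)
die2 = np.random.randint(1, 7, n_rolls)
double_six = np.sum((die1 == 6) & (die2 == 6))
fraction = double_six / n_rolls
print(fraction)  # close to 1/36 ~ 0.0278
```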
You might want to do this for:
- Debugging
- Code repeatability (i.e. when you hand in code for marking!)

Coding For Probability
In some circumstances you will want to write code which simulates various events, each of which happens with a probability, $p$. This can be coded with random numbers. You generate a random ...
for i in range(10):
    if np.random.random() < 0.2:
        print("Heads")
    else:
        print("Tails")
StochasticMethods/RandomNumbersLecture1.ipynb
karenlmasters/ComputationalPhysicsUnit
apache-2.0
Detecting cells with SIMA
Setup data
# Define folder where tiffs are present
tiff_folder = "exampleData/20150529/"

# Find tiffs in folder
tiffs = sorted(glob.glob(tiff_folder + "/*.tif*"))

# define motion correction method
mc_approach = sima.motion.DiscreteFourier2D()

# Define SIMA dataset
sequences = [sima.Sequence.create("TIFF", tiff) for tiff in tif...
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
Run SIMA segmentation algorithm
stica_approach = sima.segment.STICA(components=2)
stica_approach.append(sima.segment.SparseROIsFromMasks())
stica_approach.append(sima.segment.SmoothROIBoundaries())
stica_approach.append(sima.segment.MergeOverlapping(threshold=0.5))
rois = dataset.segment(stica_approach, "auto_ROIs")
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
Plot detected cells
# Plotting lines surrounding each of the ROIs
plt.figure(figsize=(7, 6))
for roi in rois:
    # Plot border around cell
    plt.plot(roi.coords[0][:, 0], roi.coords[0][:, 1])

# Invert the y-axis because image co-ordinates are labelled from top-left
plt.gca().invert_yaxis()
plt.show()
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
Extract decontaminated signals with FISSA
FISSA needs either ImageJ ROIs or numpy arrays as inputs for the ROIs. SIMA outputs ROIs as numpy arrays, which can be read directly into FISSA. A given ROI is given as rois[i].coords[0][:, :2]. FISSA expects ROIs to be provided as a list of lists: [[roiA1, roiA2, ro...
rois_fissa = [roi.coords[0][:, :2] for roi in rois]
rois[0].coords[0][:, :2].shape
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
We can then run FISSA on the data using the ROIs supplied by SIMA, having converted them to a FISSA-compatible format, rois_fissa.
output_folder = "fissa_sima_example"
experiment = fissa.Experiment(tiff_folder, [rois_fissa], output_folder)
experiment.separate()
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
Plotting the results
# Fetch the colormap object for Cynthia Brewer's Paired color scheme
cmap = plt.get_cmap("Paired")

# Select which trial (TIFF index) to plot
trial = 0

# Plot the mean image and ROIs from the FISSA experiment
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
for i_roi in range(len(experiment...
docs/examples/SIMA example.ipynb
rochefort-lab/fissa
gpl-3.0
Result Detik.com
url = '''https://news.detik.com/berita/3494173/polisi-jl-jend-sudirman-macet-karena-salju-palsu-dari-busa-air-got'''
sn = scrap_news(url)
result = sn.scrap_publisher_news()
print('URL : %s' % url)
print('Title : %s' % result[0])
print('Content : %s' % result[1])
2_Scraping_Content_Publisher_News_Indonesia.ipynb
kunbud1989/scraping-google-news-indonesia
mit
Kumparan
url = '''https://kumparan.com/kita-setara/menyingkirkan-stigma-buruk-hiv-aids'''
sn = scrap_news(url)
result = sn.scrap_publisher_news()
print('URL : %s' % url)
print('Title : %s' % result[0])
print('Content : %s' % result[1])
2_Scraping_Content_Publisher_News_Indonesia.ipynb
kunbud1989/scraping-google-news-indonesia
mit
Metro TV News
url = '''http://celebrity.okezone.com/read/2017/05/06/33/1684964/el-rumi-rayakan-kelulusan-di-puncak-gunung-penanggungan'''
sn = scrap_news(url)
result = sn.scrap_publisher_news()
print('URL : %s' % url)
print('Title : %s' % result[0])
print('Content : %s' % result[1])
f = open('list_links_google_news_indonesia.txt',...
2_Scraping_Content_Publisher_News_Indonesia.ipynb
kunbud1989/scraping-google-news-indonesia
mit
Problem: Draw the following pyramid using matplotlib.
from IPython.display import Image
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/biodiversite_tri2.png")
_doc/notebooks/td1a/td1a_pyramide_bigarree.ipynb
sdpython/ensae_teaching_cs
mit
Idea of the solution: We split the problem into two smaller ones: find the position of the balls in a Cartesian coordinate system, and choose the right colour. The lattice is hexagonal. The following image is taken from the Wikipedia page on close-packing (empilement compact).
from pyquickhelper.helpgen import NbImage
NbImage("data/hexa.png")
_doc/notebooks/td1a/td1a_pyramide_bigarree.ipynb
sdpython/ensae_teaching_cs
mit
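A minimal sketch of the first sub-problem (ball positions in a hexagonal packing): row i holds one ball fewer than the row below, shifted half a diameter sideways, with rows √3/2 apart. The function name and the row count are made up for illustration; colours are left to the second sub-problem:

```python
import math

# Centres of unit-diameter balls stacked in a triangular pyramid:
# row i (from the bottom) has n - i balls, shifted right by i/2,
# with vertical spacing sqrt(3)/2 between rows (touching circles).
def pyramid_positions(n):
    positions = []
    for i in range(n):
        for j in range(n - i):
            x = j + i / 2
            y = i * math.sqrt(3) / 2
            positions.append((x, y))
    return positions

coords = pyramid_positions(4)
print(len(coords))  # 4 + 3 + 2 + 1 = 10
```

Feeding these coordinates to matplotlib.pyplot.scatter (or drawing Circle patches) gives the pyramid's layout.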
Now we can visualise the relationship between player location, various detection radii, and initial pokemon locations.
plt.figure(figsize=(15,15))
# non-target pokemons
plt.scatter([x for x, y in [coord for coord in pokemons.values()]][1:],
            [y for x, y in [coord for coord in pokemons.values()]][1:])
# target pokemon
plt.scatter([x for x, y in [coord for coord in pokemons.values()]][0],
            [y for x, y in [coord fo...
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
movesets: move up/down/left/right x m (default 5)
rewards:
- target estimated distance increased: moderate penalty
- target estimated distance decreased: moderate reward
- target ranking increased: slight reward
- target ranking decreased: slight penalty
- target within catch distance: huge reward (game won)
- (optional) target l...
fig, ax = plt.subplots(4, 1)
fig.set_figwidth(10)
fig.set_figheight(15)

beta = 3
x = np.linspace(-25, 25, 100)
ax[0].plot(x, gennorm.pdf(x / 10, beta), 'r-', lw=5, alpha=0.6, label='gennorm pdf')
ax[0].set_title("no footprints")

beta0 = 3
x = np.linspace(-25, 50, 100)
ax[1].plot(x, gennorm.pdf((x - 17.5) / 7.5, beta0...
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
Assuming no knowledge of player movement history, the above graphs give us a rough probability distribution of actual distance of a pokemon given the estimated distance. We may establish a relationship between player location plus n_footprints of a pokemon and the probable locations of the pokemon. Combine this with pr...
fig, ax = plt.subplots(3, 1)
fig.set_figwidth(10)
fig.set_figheight(15)

a = 2.5
x = np.linspace(0, 1500, 100)
ax[0].plot(x, gamma.pdf((x - 75) / (450/3), a), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[0].set_title("inner")

beta2 = 6
x = np.linspace(-250, 1500, 100)
ax[1].plot(x, gennorm.pdf((x - 550) / 450, beta2),...
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
Now for pokemons with three footprints, we apply these skewed distributions to estimate their distance if they are ranked first or last (or first k / last k, k adjustable). The other question remains: how do we exploit the information from rank changes? Suppose we have m particles (estimated locations) for pokemon A an...
fig, ax = plt.subplots(1, 1)
fig.set_figwidth(10)
fig.set_figheight(10)
a = 1
x = np.linspace(0, 50, 100)
ax.plot(x, gamma.pdf(x / 10, a), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax.set_title("distribution of distance difference")
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
Also we might have to consider situations where a pokemon pops in / disappears from radar. This means they are almost certainly at that point on the edge of the detection radius. Their distance should follow a more skewed distribution.
fig, ax = plt.subplots(2, 1)
fig.set_figwidth(10)
fig.set_figheight(10)

a0 = 1.5
x = np.linspace(0, 1500, 100)
ax[0].plot(x, gamma.pdf((-x + 1100) / (450/6), a0), 'r-', lw=5, alpha=0.6, label='gamma pdf')
ax[0].set_title("appearing in radar")

a = 1.5
x = np.linspace(800, 2000, 100)
ax[1].plot(x, gamma.pdf((x - 900) /...
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
Situations where we need to re-estimate the distance:
- initially, when we first receive the footprint counts and rankings
- when the footprint count of a pokemon changes
- when a swap in ranking happens (with multiple swaps at the same time, treat it as pairwise swaps)
- when the highest / lowest ranking pokemon changes
- when...
def random_particle_generation(side_length=2000, n=1000):
    particles = [0] * n
    for i in range(n):
        particles[i] = (random.uniform(-side_length/2, side_length/2),
                        random.uniform(-side_length/2, side_length/2))
    return particles

def plot_particles(player_coord, particles):
    plt.figure(figsize=(15,15)...
pokemon_location_simulator.ipynb
Mithrillion/pokemon-go-simulator-solver
mit
Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None

with tf.Session() as sess:
    # TODO: Build the vgg network here
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("conte...
transfer-learning/Transfer_Learning.ipynb
ianhamilton117/deep-learning
mit
Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
labels_vecs = lb.fit_transform(labels)
transfer-learning/Transfer_Learning.ipynb
ianhamilton117/deep-learning
mit
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typic...
from sklearn.model_selection import StratifiedShuffleSplit

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[...
transfer-learning/Transfer_Learning.ipynb
ianhamilton117/deep-learning
mit
If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected...
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_f...
transfer-learning/Transfer_Learning.ipynb
ianhamilton117/deep-learning
mit
Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to ...
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for x, y in get_batches(train_x, train_y):
            feed = {inputs_: x, labels_: y}
            loss, _ = sess.run([cost, optimize...
transfer-learning/Transfer_Learning.ipynb
ianhamilton117/deep-learning
mit
Querying a Series
sports = {'Archery': 'Bhutan',
          'Golf': 'Scotland',
          'Sumo': 'Japan',
          'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.iloc[3]
s.loc['Golf']
s[3]
s['Golf']
sports = {99: 'Bhutan',
          100: 'Scotland',
          101: 'Japan',
          102: 'South Korea'}
s = pd.Series(sports)...
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
The DataFrame Data Structure
import pandas as pd

purchase_1 = pd.Series({'Name': 'Chris',
                        'Item Purchased': 'Dog Food',
                        'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
                        'Item Purchased': 'Kitty Litter',
                        'Cost': 2.50})
purchase_3 = pd.Series({'Na...
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
Dataframe Indexing and Loading
costs = df['Cost']
costs
costs += 2
costs
df
!cat olympics.csv
df = pd.read_csv('olympics.csv')
df.head()
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
df.head()
df.columns
for col in df.columns:
    if col[:2]=='01':
        df.rename(columns={col:'Gold' + col[4:]}, inplace=True)
    if col[:2]=='02...
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
Querying a DataFrame
df['Gold'] > 0
only_gold = df.where(df['Gold'] > 0)
only_gold.head()
only_gold['Gold'].count()
df['Gold'].count()
only_gold = only_gold.dropna()
only_gold.head()
only_gold = df[df['Gold'] > 0]
only_gold.head()
len(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])
df[(df['Gold.1'] > 0) & (df['Gold'] == 0)]
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
Indexing Dataframes
df.head()
df['country'] = df.index
df = df.set_index('Gold')
df.head()
df = df.reset_index()
df.head()
df = pd.read_csv('census.csv')
df.head()
df['SUMLEV'].unique()
df = df[df['SUMLEV'] == 50]
df.head()
columns_to_keep = ['STNAME', 'CTYNAME', 'BIRTHS2010', '...
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
Missing values
df = pd.read_csv('log.csv')
df
df.fillna?
df = df.set_index('time')
df = df.sort_index()
df
df = df.reset_index()
df = df.set_index(['time', 'user'])
df
df = df.fillna(method='ffill')
df.head()
week2/Week+2.ipynb
mangeshjoshi819/ml-learn-python3
mit
SeparableConv2D [convolutional.SeparableConv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=1, activation='linear', use_bias=True
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(1,1), padding='valid',
                       data_format='channels_last', depth_multiplier=1,
                       activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, output...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=True
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(1,1), padding='valid',
                       data_format='channels_last', depth_multiplier=2,
                       activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.2] 16 3x3 filters on 5x5x4 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=3, activation='relu', use_bias=True
data_in_shape = (5, 5, 4)
conv = SeparableConv2D(16, (3,3), strides=(1,1), padding='valid',
                       data_format='channels_last', depth_multiplier=3,
                       activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.3] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', depth_multiplier=1, activation='relu', use_bias=True
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(2,2), padding='valid',
                       data_format='channels_last', depth_multiplier=1,
                       activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', depth_multiplier=1, activation='relu', use_bias=True
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(1,1), padding='same',
                       data_format='channels_last', depth_multiplier=1,
                       activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=l...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.5] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=False
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(1,1), padding='same',
                       data_format='channels_last', depth_multiplier=2,
                       activation='relu', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
[convolutional.SeparableConv2D.6] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='same', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=True
data_in_shape = (5, 5, 2)
conv = SeparableConv2D(4, (3,3), strides=(2,2), padding='same',
                       data_format='channels_last', depth_multiplier=2,
                       activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=l...
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
export for Keras.js tests
import os

filename = '../../../test/data/layers/convolutional/SeparableConv2D.json'
if not os.path.exists(os.path.dirname(filename)):
    os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
    json.dump(DATA, f)
print(json.dumps(DATA))
notebooks/layers/convolutional/SeparableConv2D.ipynb
transcranial/keras-js
mit
Hyper Parameters, 超参数
learning_rate = 0.01
training_epochs = 1000
smoothing_constant = 0.01
display_step = 50
ctx = mx.cpu()
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Training Data, 训练数据
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
                         2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596,
                         2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_s...
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Prepare for Training, 训练准备
mx Graph Input, mxnet图输入
# Set model weights, 初始化网络模型的权重
W = nd.random_normal(shape=1)
b = nd.random_normal(shape=1)
params = [W, b]
for param in params:
    param.attach_grad()
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Construct a linear model, 构造线性模型
def net(X):
    return X*W + b
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Mean squared error, 损失函数:均方差
# Mean squared error, 损失函数:均方差
def square_loss(yhat, y):
    return nd.mean((yhat - y) ** 2)

# Gradient descent, 优化方式:梯度下降
def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Start training, 开始训练
# Fit training data
data = nd.array(train_X)
label = nd.array(train_Y)
losses = []
moving_loss = 0
niter = 0
for e in range(training_epochs):
    with autograd.record():
        output = net(data)
        loss = square_loss(output, label)
    loss.backward()
    SGD(params, learning_rate)
    ########################...
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Regression result, 回归结果
def plot(losses, X, Y, n_samples=10):
    xs = list(range(len(losses)))
    f, (fg1, fg2) = plt.subplots(1, 2)
    fg1.set_title('Loss during training')
    fg1.plot(xs, losses, '-r')
    fg2.set_title('Estimated vs real function')
    fg2.plot(X.asnumpy(), net(X).asnumpy(), 'or', label='Estimated')
    fg2.plot(X.asnu...
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
jastarex/DeepLearningCourseCodes
apache-2.0
Here we will use a Naive Bayes estimator to classify the objects. First, we will construct our training data and test data arrays:
import numpy as np

train_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class_train.npy'))
test_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class.200000.npy'))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
The data is stored as a record array, which is a convenient format for collections of labeled data:
print(train_data.dtype.names)
print(train_data['u-g'].shape)
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Now we must put these into arrays of shape (n_samples, n_features) in order to pass them to routines in scikit-learn. Training samples with zero-redshift are stars, while samples with positive redshift are quasars:
X_train = np.vstack([train_data['u-g'], train_data['g-r'],
                     train_data['r-i'], train_data['i-z']]).T
y_train = (train_data['redshift'] > 0).astype(int)
X_test = np.vstack([test_data['u-g'], test_data['g-r'], test_data...
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Notice that we’ve set this up so that quasars have y = 1, and stars have y = 0. Now we’ll set up a Naive Bayes classifier. This will fit a four-dimensional uncorrelated gaussian to each distribution, and from these gaussians quickly predict the label for a test point:
from sklearn import naive_bayes

gnb = naive_bayes.GaussianNB()
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Let’s check our accuracy. This is the fraction of labels that are correct:
accuracy = float(np.sum(y_test == y_pred)) / len(y_test)
print(accuracy)
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
We have 61% accuracy. Not very good. But we must be careful here: the accuracy does not always tell the whole story. In our data, there are many more stars than quasars:
print(np.sum(y_test == 0))
print(np.sum(y_test == 1))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Stars outnumber Quasars by a factor of 14 to 1. In cases like this, it is much more useful to evaluate the fit based on precision and recall. Because there are many fewer quasars than stars, we’ll call a quasar a positive label and a star a negative label. The precision asks what fraction of positively labeled points a...
TP = np.sum((y_pred == 1) & (y_test == 1))  # true positives
FP = np.sum((y_pred == 1) & (y_test == 0))  # false positives
FN = np.sum((y_pred == 0) & (y_test == 1))  # false negatives
print("precision:")
print(TP / float(TP + FP))
print("recall: ")
print(TP / float(TP + FN))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
For convenience, these can be computed using the tools in the metrics sub-package of scikit-learn:
from sklearn import metrics

print("precision:")
print(metrics.precision_score(y_test, y_pred))
print("recall: ")
print(metrics.recall_score(y_test, y_pred))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Precision and recall tell different stories about the performance of the classifier. Ideally one would try to create a classifier with high precision and high recall, but this is not always possible, and sometimes raising the precision will decrease the recall or vice versa (why?). Think about situations when you'll wa...
print("F1 score:")
print(metrics.f1_score(y_test, y_pred))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
For convenience, sklearn.metrics provides a function that computes all of these scores, and returns a nicely formatted string. For example:
print(metrics.classification_report(y_test, y_pred, target_names=['Stars', 'QSOs']))
AstroML/notebooks/07_classification_example.ipynb
diego0020/va_course_2015
mit
Compute MNE-dSPM inverse solution on single epochs
Compute dSPM inverse solution on single trial epochs restricted to a brain label.
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.minimum_norm import apply_inverse

print(__...
0.16/_downloads/plot_compute_mne_inverse_epochs_in_label.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View activation time-series to illustrate the benefit of aligning/flipping
times = 1e3 * stcs[0].times # times in ms plt.figure() h0 = plt.plot(times, mean_stc.data.T, 'k') h1, = plt.plot(times, label_mean, 'r', linewidth=3) h2, = plt.plot(times, label_mean_flip, 'g', linewidth=3) plt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean', 'mean with sign flip'...
0.16/_downloads/plot_compute_mne_inverse_epochs_in_label.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
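A toy NumPy sketch (synthetic signals, not real MNE source estimates) of why the sign flip matters: dipoles with opposite orientations cancel in a plain mean, while aligning their signs first recovers the common time course.

```python
import numpy as np

# Two "dipoles" carrying the same signal with opposite signs
t = np.linspace(0, 1, 100)
s = np.sin(2 * np.pi * 5 * t)
dipoles = np.vstack([s, -s])            # shape (2, 100)

# Plain mean: the opposite-signed rows cancel to ~0
plain_mean = dipoles.mean(axis=0)

# Align each row's sign with the first row, then average
signs = np.sign(dipoles @ dipoles[0])   # +1 for aligned, -1 for flipped
flipped_mean = (signs[:, None] * dipoles).mean(axis=0)

print(np.abs(plain_mean).max())         # essentially zero
print(np.allclose(flipped_mean, s))     # signal recovered
```

This is only a cartoon of the idea; MNE's actual sign flip uses the label's source-orientation geometry rather than a dot product with one row.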
Viewing single trial dSPM and average dSPM for unflipped pooling over label Compare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM
# Single trial plt.figure() for k, stc_trial in enumerate(stcs): plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--', label='Single Trials' if k == 0 else '_nolegend_', alpha=0.5) # Single trial inverse then average.. making linewidth large to not be masked plt.plot(times, label_mean...
0.16/_downloads/plot_compute_mne_inverse_epochs_in_label.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting Pints' standard 1d histograms We can now run the MCMC routine and plot the histograms of the inferred parameters.
print('Running...') chains = mcmc.run() print('Done!') # Select chain 0 and discard warm-up chain = chains[0] chain = chain[3000:] import pints.plot # Plot the 1d histogram of each parameter pints.plot.histogram([chain]) plt.show()
examples/plotting/customise-pints-plots.ipynb
martinjrobins/hobo
bsd-3-clause
Customise the plots For example, here our toy model is a logistic model of population growth $$f(t) = \frac{k}{1 + (k/p_0 - 1)\exp(-rt)},$$ where $r$ is the growth rate, $k$ is the carrying capacity, and $p_0$ is the initial population (fixed constant). In this example, we have model parameters $r$ and $k$, together wi...
# Plot the 1d histogram of each parameter fig, axes = pints.plot.histogram([chain]) # Customise the plots parameter_names = [r'$r$', r'$k$', r'$\sigma$'] for i, ax in enumerate(axes): # (1) Add parameter name ax.set_xlabel(parameter_names[i]) # (2i) Add mean ax.axvline(np.mean(chain[:, i]), color='k', ...
examples/plotting/customise-pints-plots.ipynb
martinjrobins/hobo
bsd-3-clause
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalized data """ _x = np.array(x) return _x / 255.0 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests....
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
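As a sanity check on the normalization step, this small sketch (toy uint8 data, not the CIFAR arrays) shows that dividing by 255 reaches the inclusive [0, 1] range the spec asks for, whereas dividing by 256 never attains 1:

```python
import numpy as np

x = np.array([[0, 128, 255]], dtype=np.uint8)

by_255 = x / 255.0
by_256 = x / 256.0

print(by_255.min(), by_255.max())  # spans 0.0 to 1.0 inclusive
print(by_256.max())                # tops out just below 1
```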
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 t...
import pandas as pd category_list = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck'] category_indicies = list(range(len(category_list))) category_encodings = pd.Series(category_indicies) category_encodings = pd.get_dummies(category_encodings) def one_hot_encode(x): """ One hot...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
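A lighter-weight alternative sketch to the pandas get_dummies approach above, indexing a NumPy identity matrix (the helper name `one_hot` here is hypothetical, not part of the assignment's API):

```python
import numpy as np

def one_hot(labels, n_classes=10):
    """One-hot encode integer labels by row-indexing an identity matrix."""
    return np.eye(n_classes)[np.asarray(labels)]

encoded = one_hot([0, 3, 9])
print(encoded.shape)   # (3, 10): one row per label, one column per class
```

Each row contains a single 1 in the column matching its label, which is exactly the shape the unit tests for `one_hot_encode` expect.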
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper import numpy as np # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittest...
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name = 'x') def ne...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor...
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): # , dropout = None): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: ker...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a ch...
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ batch = -1 # -1 lets TensorFlow infer the batch size at run time flattened_image = int(...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
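The reshape trick the flatten function relies on can be illustrated with plain NumPy (toy shapes, not the actual tensors); -1 plays the same role as the inferred batch dimension in tf.reshape:

```python
import numpy as np

# Toy batch of 16 CIFAR-sized images: (Batch, Height, Width, Channels)
x = np.zeros((16, 32, 32, 3))

# Collapse everything except the batch axis; -1 lets reshape infer it
flat = x.reshape(-1, int(np.prod(x.shape[1:])))
print(flat.shape)   # (16, 3072)
```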
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packag...
def fully_conn(x_tensor, num_outputs): # , dropout = None): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Act...
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Full...
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ dropout_rate = 1 - keep_prob flow = conv2d_...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be cal...
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Num...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy...
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the...
# TODO: Tune Parameters epochs = 12 # epochs = 10 -- hadn't stopped getting more accurate on validation # epochs = 32 -- generally stopped getting more accurate on validation set after 12-15 # epochs = 128 -- no higher than after 12-15 epochs batch_size = 256 keep_probability = .75
image-classification/dlnd_image_classification.ipynb
d-k-b/udacity-deep-learning
mit
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer-spaced points. The value of $f$ is know...
# YOUR CODE HERE five_1=np.ones(11)*-5 four_1=np.ones(2)*-4 three_1=np.ones(2)*-3 two_1=np.ones(2)*-2 one_1=np.ones(2)*-1 zero=np.ones(3)*0 five=np.ones(11)*5 four=np.ones(2)*4 three=np.ones(2)*3 two=np.ones(2)*2 one=np.ones(2)*1 y=np.linspace(-5,5,11) norm=np.array((-5,5)) mid=np.array((-5,0,5)) x=np.hstack((five_...
assignments/assignment08/InterpolationEx02.ipynb
JackDi/phys202-2015-work
mit
Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain: xnew and ynew should be 1d arrays with 100 points between $[-5,5]$. Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid. Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xne...
# YOUR CODE HERE from scipy.interpolate import griddata xnew=np.linspace(-5,5,100) ynew=np.linspace(-5,5,100) Xnew, Ynew = np.meshgrid(xnew,ynew) Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic') assert xnew.shape==(100,) assert ynew.shape==(100,) assert Xnew.shape==(100,100) assert Ynew.shape==(100,100) assert ...
assignments/assignment08/InterpolationEx02.ipynb
JackDi/phys202-2015-work
mit
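A minimal griddata sketch on a tiny made-up point set (assuming scipy.interpolate.griddata is available) shows the scattered-points → regular-grid pattern used above, scaled down from the assignment's 100×100 grid:

```python
import numpy as np
from scipy.interpolate import griddata

# Known values at five scattered points: zero at the corners, one at the centre
points = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
values = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Regular grid covering the same square
xnew = np.linspace(-1, 1, 5)
Xnew, Ynew = np.meshgrid(xnew, xnew)
Fnew = griddata(points, values, (Xnew, Ynew), method='linear')

print(Fnew[2, 2])   # grid point (0, 0) coincides with the known centre value
```

At grid locations that coincide with data points the interpolant reproduces the known values exactly; in between, `method='linear'` interpolates over a Delaunay triangulation of the scattered points.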
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
# YOUR CODE HERE plt.figure(figsize=(6,6)) cont=plt.contour(Xnew,Ynew,Fnew, colors=('k','k')) plt.title("Contour Map of F(x)") plt.ylabel("Y-Axis") plt.xlabel('X-Axis') # plt.colorbar() plt.clabel(cont, inline=1, fontsize=10) plt.xlim(-5.5,5.5); plt.ylim(-5.5,5.5); # plt.grid() assert True # leave this to grade the pl...
assignments/assignment08/InterpolationEx02.ipynb
JackDi/phys202-2015-work
mit
Introduction to MPI and mpi4py MPI stands for Message Passing Interface. It is a library that allows you to: - spawn several processes - address them individually - have them communicate with one another MPI can be used in many languages (C, C++, Fortran), and is extensively used in High-Performance Computing. mpi4py is the Py...
%%file example.py from mpi4py.MPI import COMM_WORLD as communicator import random # Draw one random integer between 0 and 100 i = random.randint(0, 100) print('Rank %d' %communicator.rank + ' drew a random integer: %d' %i ) # Gather the results integer_list = communicator.gather( i, root=0 ) if communicator.rank == ...
code_examples/python_parallel/Classification_mpi4py.ipynb
thehackerwithin/berkeley
bsd-3-clause
What happened? "mpirun -np 3" spawns 3 processes. All processes execute the same code. (In this case, they all execute the same Python script: example.py.) Each process gets a unique identification number (communicator.rank). Based on this identifier (e.g. via if statements), the different processes can ...
%%file parallel_script.py from classification import nearest_neighbor_prediction import numpy as np from mpi4py.MPI import COMM_WORLD as communicator # Load data train_images = np.load('./data/train_images.npy') train_labels = np.load('./data/train_labels.npy') test_images = np.load('./data/test_images.npy') # Use o...
code_examples/python_parallel/Classification_mpi4py.ipynb
thehackerwithin/berkeley
bsd-3-clause
The code executes faster than the serial example, because each process has a smaller amount of work and the two processes execute this work in parallel. However, at the end of the script each process holds only its own label array small_test_labels; these arrays still need to be concatenated together and writt...
# Load and split the set of test images test_images = np.load('data/test_images.npy') split_arrays_list = np.array_split( test_images, 4 ) # Print the corresponding shape print( 'Shape of the original array:' ) print( test_images.shape ) print('Shape of the split arrays:') for array in split_arrays_list: print(...
code_examples/python_parallel/Classification_mpi4py.ipynb
thehackerwithin/berkeley
bsd-3-clause
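The split/reassemble pattern can be sketched in pure NumPy (standalone toy data, independent of the MPI script): array_split tolerates lengths that are not divisible by the number of chunks, and concatenate undoes the split exactly:

```python
import numpy as np

# Pretend workload of 10 items split across 3 "ranks"
data = np.arange(10)
chunks = np.array_split(data, 3)      # sizes 4, 3, 3 (uneven split is fine)

# Reassembling the chunks recovers the original array
restored = np.concatenate(chunks)

print([c.size for c in chunks])       # [4, 3, 3]
print(np.array_equal(restored, data)) # True
```

In the MPI version, each rank would pick `chunks[communicator.rank]` and the concatenation would happen on rank 0 after a gather.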
Assignment: in the code below, use the function array_split to split test_images between an arbitrary number of processes, and have each process pick its own small array. Note: Within the script, communicator.size gives the number of processes that have been spawned by mpirun.
%%file parallel_script.py from classification import nearest_neighbor_prediction import numpy as np from mpi4py.MPI import COMM_WORLD as communicator # Load data train_images = np.load('./data/train_images.npy') train_labels = np.load('./data/train_labels.npy') test_images = np.load('./data/test_images.npy') # Assig...
code_examples/python_parallel/Classification_mpi4py.ipynb
thehackerwithin/berkeley
bsd-3-clause
Check the results Finally we can check that the results are valid.
# Load the data from the file test_images = np.load('data/test_images.npy') test_labels_parallel = np.load('data/test_labels_parallel.npy') # Define function to have a look at the data def show_random_digit( images, labels=None ): """Show a random image out of `images`, with the corresponding label if availa...
code_examples/python_parallel/Classification_mpi4py.ipynb
thehackerwithin/berkeley
bsd-3-clause
Let's create an array and output it as $\LaTeX$. We are going to use Python 3 string formatting. The following shows float-style output with 2 decimal places.
A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]]) a2l.to_ltx(A, frmt = '{:.2f}', arraytype = 'array', mathform = True)
Examples.ipynb
josephcslater/array_to_latex
mit
By design, to_ltx prints results to the screen and returns nothing. However, new use cases have highlighted the need to return the output and suppress printing; hence the addition of the print_out boolean, which turns off printing and returns the formatted string instead.
A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]]) latex_code = a2l.to_ltx(A, frmt = '{:.2f}', arraytype = 'array', mathform = True, print_out=False)
Examples.ipynb
josephcslater/array_to_latex
mit
We can still print the returned formatted latex code:
print(latex_code)
Examples.ipynb
josephcslater/array_to_latex
mit
One can put a number before the decimal point in the format specifier. This sets the minimum field width for each number, padding with spaces at the beginning. Since the largest number needs 6 characters (3 before the decimal, the decimal, and 2 after), putting a 6 in this position makes everything line up nicely. This would also be a nic...
A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]]) a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array', mathform = True)
Examples.ipynb
josephcslater/array_to_latex
mit
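The padding behaviour of the 6 in {:6.2f} can be checked directly with Python's own format specification, without array_to_latex at all:

```python
# '{:6.2f}': minimum field width 6, 2 decimal places, space-padded on the left
vals = [1.23456, 456.23, 8.239521]
formatted = ['{:6.2f}'.format(v) for v in vals]
for s in formatted:
    print(repr(s))
# '  1.23', '456.23', '  8.24' -- every entry exactly 6 characters wide
```

Because all entries share the same width, the columns of the resulting LaTeX array source line up visually.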
Let's put it in exponential form.
a2l.to_ltx(A, frmt = '{:.2e}', arraytype = 'array', mathform=False)
Examples.ipynb
josephcslater/array_to_latex
mit