Implement a baseline classifier We add five FFN blocks with skip connections, so that the baseline model has roughly the same number of parameters as the GNN models to be built later.
def create_baseline_model(hidden_units, num_classes, dropout_rate=0.2): inputs = layers.Input(shape=(num_features,), name="input_features") x = create_ffn(hidden_units, dropout_rate, name=f"ffn_block1")(inputs) for block_idx in range(4): # Create an FFN block. x1 = create_ffn(hidden_units, ...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
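The skip-connection pattern described above can be sketched in plain NumPy. This is a hypothetical stand-in for the notebook's Keras `create_ffn` helper: the block sizes and weights here are made up, and the point is only that the additive skip keeps input and output shapes equal, so blocks can be stacked freely.

```python
import numpy as np

def ffn(x, w1, w2):
    # Two-layer feed-forward block: linear -> ReLU -> linear.
    h = np.maximum(0.0, x @ w1)
    return h @ w2

rng = np.random.default_rng(0)
hidden = 8
x = rng.normal(size=(4, hidden))

# Five blocks with additive skip connections, as in the baseline model.
for _ in range(5):
    w1 = rng.normal(size=(hidden, hidden)) * 0.1
    w2 = rng.normal(size=(hidden, hidden)) * 0.1
    x = x + ffn(x, w1, w2)  # the skip connection preserves the shape

print(x.shape)
```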
Train the baseline classifier
history = run_experiment(baseline_model, x_train, y_train)
Let's plot the learning curves.
display_learning_curves(history)
Now we evaluate the baseline model on the test data split.
_, test_accuracy = baseline_model.evaluate(x=x_test, y=y_test, verbose=0) print(f"Test accuracy: {round(test_accuracy * 100, 2)}%")
Examine the baseline model predictions Let's create new data instances by randomly generating binary word vectors with respect to the word presence probabilities.
def generate_random_instances(num_instances): token_probability = x_train.mean(axis=0) instances = [] for _ in range(num_instances): probabilities = np.random.uniform(size=len(token_probability)) instance = (probabilities <= token_probability).astype(int) instances.append(instance) ...
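The sampling scheme above (draw a uniform number per token and keep the token when the draw falls below its empirical presence probability) can be sketched as follows. The `x_train` matrix here is a small made-up binary document/token matrix, not the notebook's real data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for x_train: 100 documents over 6 tokens.
x_train = (rng.uniform(size=(100, 6)) < 0.3).astype(int)

def generate_random_instances(num_instances):
    token_probability = x_train.mean(axis=0)  # empirical word presence probabilities
    probabilities = rng.uniform(size=(num_instances, len(token_probability)))
    # A token is present when its uniform draw falls below its probability.
    return (probabilities <= token_probability).astype(int)

instances = generate_random_instances(5)
print(instances.shape)
```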
Now we show the baseline model predictions given these randomly generated instances.
new_instances = generate_random_instances(num_classes) logits = baseline_model.predict(new_instances) probabilities = keras.activations.softmax(tf.convert_to_tensor(logits)).numpy() display_class_probabilities(probabilities)
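Converting the model's logits into class probabilities, as done above with `keras.activations.softmax`, amounts to the following NumPy sketch (the logits are made-up values):

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.0, 0.0, 0.0]])
probabilities = softmax(logits)
print(probabilities.sum(axis=1))  # each row sums to 1
```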
Build a Graph Neural Network Model Prepare the data for the graph model Preparing and loading the graph data into the model for training is the most challenging part in GNN models, and it is addressed in different ways by the specialised libraries. In this example, we show a simple approach for preparing and using grap...
# Create an edges array (sparse adjacency matrix) of shape [2, num_edges]. edges = citations[["source", "target"]].to_numpy().T # Create an edge weights array of ones. edge_weights = tf.ones(shape=edges.shape[1]) # Create a node features array of shape [num_nodes, num_features]. node_features = tf.cast( papers.sort...
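The data-preparation step above (an edge array of shape [2, num_edges] plus unit edge weights) can be illustrated with a tiny hypothetical citation table; the four papers and three citations below are invented for the example:

```python
import numpy as np

# Hypothetical citations table: each row is (source paper, target paper).
citations = np.array([[0, 1],
                      [0, 2],
                      [2, 3]])

# Edge array of shape [2, num_edges]: row 0 holds sources, row 1 holds targets.
edges = citations.T
edge_weights = np.ones(edges.shape[1])  # unweighted graph: every edge weighs 1

print(edges.shape, edge_weights.shape)
```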
Implement a graph convolution layer We implement a graph convolution module as a Keras Layer. Our GraphConvLayer performs the following steps: Prepare: The input node representations are processed using an FFN to produce a message. You can simplify the processing by only applying linear transformation to the representa...
class GraphConvLayer(layers.Layer): def __init__( self, hidden_units, dropout_rate=0.2, aggregation_type="mean", combination_type="concat", normalize=False, *args, **kwargs, ): super(GraphConvLayer, self).__init__(*args, **kwargs) ...
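The prepare → aggregate → update steps of the layer can be sketched in NumPy. Messages here are linearly transformed neighbour representations (a simplification of the FFN), mean aggregation sums incoming messages per target node and divides by the in-degree, and the update is a simple additive combination. All shapes, weights, and edges are made up:

```python
import numpy as np

num_nodes, dim = 4, 2
rng = np.random.default_rng(0)
node_repr = rng.normal(size=(num_nodes, dim))
W = rng.normal(size=(dim, dim))              # "prepare": a linear transformation
edges = np.array([[0, 0, 2],                 # sources
                  [1, 2, 3]])                # targets

messages = node_repr[edges[0]] @ W           # one message per edge

# "aggregate": mean of incoming messages per target node.
summed = np.zeros((num_nodes, dim))
np.add.at(summed, edges[1], messages)
counts = np.bincount(edges[1], minlength=num_nodes).reshape(-1, 1)
aggregated = summed / np.maximum(counts, 1)  # isolated nodes keep a zero message

# "update": combine the old representation with the aggregated messages.
updated = node_repr + aggregated
print(updated.shape)
```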
Implement a graph neural network node classifier The GNN classification model follows the Design Space for Graph Neural Networks approach, as follows: Apply preprocessing using FFN to the node features to generate initial node representations. Apply one or more graph convolutional layers, with skip connections, to the...
class GNNNodeClassifier(tf.keras.Model): def __init__( self, graph_info, num_classes, hidden_units, aggregation_type="sum", combination_type="concat", dropout_rate=0.2, normalize=True, *args, **kwargs, ): super(GNNNodeClass...
Let's test instantiating and calling the GNN model. Notice that if you provide N node indices, the output will be a tensor of shape [N, num_classes], regardless of the size of the graph.
gnn_model = GNNNodeClassifier( graph_info=graph_info, num_classes=num_classes, hidden_units=hidden_units, dropout_rate=dropout_rate, name="gnn_model", ) print("GNN output shape:", gnn_model([1, 10, 100])) gnn_model.summary()
Train the GNN model Note that we use the standard supervised cross-entropy loss to train the model. However, we can add another self-supervised loss term for the generated node embeddings that makes sure that neighbouring nodes in the graph have similar representations, while faraway nodes have dissimilar representations.
x_train = train_data.paper_id.to_numpy() history = run_experiment(gnn_model, x_train, y_train)
Let's plot the learning curves.
display_learning_curves(history)
Now we evaluate the GNN model on the test data split. The results may vary depending on the training sample; however, the GNN model always outperforms the baseline model in terms of test accuracy.
x_test = test_data.paper_id.to_numpy() _, test_accuracy = gnn_model.evaluate(x=x_test, y=y_test, verbose=0) print(f"Test accuracy: {round(test_accuracy * 100, 2)}%")
Examine the GNN model predictions Let's add the new instances as nodes to the node_features, and generate links (citations) to existing nodes.
# First we add the N new_instances as nodes to the graph # by appending the new_instance to node_features. num_nodes = node_features.shape[0] new_node_features = np.concatenate([node_features, new_instances]) # Second we add the M edges (citations) from each new node to a set # of existing nodes in a particular subject...
Now let's update the node_features and the edges in the GNN model.
print("Original node_features shape:", gnn_model.node_features.shape) print("Original edges shape:", gnn_model.edges.shape) gnn_model.node_features = new_node_features gnn_model.edges = new_edges gnn_model.edge_weights = tf.ones(shape=new_edges.shape[1]) print("New node_features shape:", gnn_model.node_features.shape) ...
The mystery section remains the same.
all(b==0 for b in song_section(c_dump, 'nextblocks')) all(b==0 for b in song_section(c_dump, 'blockdata'))
documents/Investigationing II.ipynb
hschh86/usersong-extractor
mit
All the blocks are empty.
bytes(song_section(c_dump, 'presetstyle'))
The 'PresetStyle' settings are empty, too.
print(line_hex(o_dump.reg_data.data, 32, 4)) print(line_hex(c_dump.reg_data.data, 32, 4)) for bank in range(1, 8+1): for button in range(1, 2+1): print(bank, button) print(line_hex(o_dump.reg_data.settings.get_setting(bank, button).data)) print(line_hex(c_dump.reg_data.settings.get_setting(...
Each of the registration settings is completely blank. Interesting things to note: the first byte is 0 instead of 1, which probably indicates that the setting is unused. The bytes that were FF in each recorded setting are 00 here. Investigating FUNCTION backup According to the manual (page 49), the following settings can ...
for x in range(2, 7): !diff -qs ../data/backup_experiment/cb1.txt ../data/backup_experiment/cb{x}.txt !diff -qs ../data/backup_experiment/cb1.txt ../data/clear_bulk.txt c2_syx_messages = mido.read_syx_file('../data/backup_experiment/cb1.txt') c2_dump = dgxdump.DgxDump(c2_syx_messages) c_dump.song_data.data == c2...
The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08. Perhaps this has to do with some kind of internal wear levelling or something. Registration extension Now that the memory has been cleared, we can hopefully figure out more about the registration settings. Recording Bank 3, Butt...
r1_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/1reg.syx')) c2_dump.song_data.data == r1_dump.song_data.data c2_dump.reg_data.data == r1_dump.reg_data.data for bank in range(1, 8+1): for button in range(1, 2+1): if not all(x == 0 for x in r1_dump.reg_data.settings.get_setting(bank, butto...
I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero. This means that the first byte which has a value of 01 for all recorded settings can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of...
r2_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/2reg.txt')) sets = r2_dump.reg_data.settings.get_setting(2,2) sets.print_settings() sets.print_unusual()
Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
# TA-COMMENT: All in one line! Beautiful! print(len(document.find_all('h3')))
Data_and_Databases_homework/03/homework_3_schuetz_graded.ipynb
raschuetz/foundations-homework
mit
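Counting tags as above relies on Beautiful Soup's `find_all`; the same count can be sketched with only the standard library's `html.parser`, which is handy for understanding what `find_all('h3')` actually matches. The HTML snippet below is made up:

```python
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    def __init__(self, wanted):
        super().__init__()
        self.wanted = wanted
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # The parser lowercases tag names, so comparison is case-insensitive.
        if tag == self.wanted:
            self.count += 1

html = "<h3>A</h3><p>x</p><h3>B</h3><h3>C</h3>"
counter = TagCounter("h3")
counter.feed(html)
print(counter.count)  # → 3
```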
Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
print(document.find('a', {'class': 'tel'}).string)
In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order): Skinner Widget Widget For Furtiveness Widget For Strawman Jittery Widget Silver...
widgets = document.find_all('td', {'class': 'wname'}) for widget in widgets: print(widget.string)
Problem set #2: Widget dictionaries For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The k...
widgets = [] widget_table = document.find_all('tr', {'class': 'winfo'}) for row in widget_table: partno = row.find('td', {'class': 'partno'}).string wname = row.find('td', {'class': 'wname'}).string price = row.find('td', {'class': 'price'}).string quantity = row.find('td', {'class': 'quantity'}).strin...
In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this: [{'partno': 'C1-9476', 'price': 2....
widgets = [] widget_table = document.find_all('tr', {'class': 'winfo'}) for row in widget_table: partno = row.find('td', {'class': 'partno'}).string wname = row.find('td', {'class': 'wname'}).string price = float(row.find('td', {'class': 'price'}).string[1:]) quantity = int(row.find('td', {'class': 'qu...
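The type conversion in the cell above hinges on stripping the leading "$" before calling float; a minimal sketch with made-up row values:

```python
def parse_price(text):
    # Drop the leading currency symbol, then convert to float.
    return float(text.lstrip("$"))

def parse_quantity(text):
    return int(text)

row = {"price": "$2.35", "quantity": "512"}
widget = {"price": parse_price(row["price"]),
          "quantity": parse_quantity(row["quantity"])}
print(widget)  # → {'price': 2.35, 'quantity': 512}
```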
Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse. Expected output: 7928
quantity_total = 0 for widget in widgets: quantity_total += widget['quantity'] # TA-COMMENT: Yassss, putting += to use! print(quantity_total)
In the cell below, write some Python code that prints the names of widgets whose price is above $9.30. Expected output: Widget For Furtiveness Jittery Widget Silver Widget Infinite Widget Widget For Cinema
for widget in widgets: if widget['price'] > 9.3: print(widget['wname'])
Problem set #3: Sibling rivalries In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the note...
example_html = """ <h2>Camembert</h2> <p>A soft cheese made in the Camembert region of France.</p> <h2>Cheddar</h2> <p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p> """
If our task were to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it o...
example_doc = BeautifulSoup(example_html, "html.parser") cheese_dict = {} for h2_tag in example_doc.find_all('h2'): cheese_name = h2_tag.string cheese_desc_tag = h2_tag.find_next_sibling('p') cheese_dict[cheese_name] = cheese_desc_tag.string cheese_dict
With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets." Expected output: MZ-556/B QV-730 T1-9731 5B-941...
h3_tags = document.find_all('h3') for h3_tag in h3_tags: if h3_tag.string == 'Hallowed widgets': table = h3_tag.find_next_sibling('table') part_numbers = table.find_all('td', {'class': 'partno'}) for part in part_numbers: print(part.string)
Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it! In the cell below, I've created a variable category_counts and assigned to it an empty dictionary...
category_counts = {} # TA-COMMENT: Beautiful! for h3_tag in h3_tags: table = h3_tag.find_next_sibling('table') list_of_widgets = table.find_all('tr', {'class': 'winfo'}) category_counts[h3_tag.string] = len(list_of_widgets) category_counts
Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficient $\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBm; the noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affecte...
# Length of transmission (in km) L = 5000 # fiber nonlinearity coefficient gamma = 1.27 Pn = -21.3 # noise power (in dBm) Kstep = 50 # number of steps used in the channel model # noise variance per step sigma_n = np.sqrt((10**((Pn-30)/10)) / Kstep / 2) constellations = {'16-QAM': np.array([-3,-3,-3,-3,-1,-...
mloc/ch4_Deep_Learning/pytorch/Deep_NN_Detection_QAM.ipynb
kit-cel/lecture-examples
gpl-2.0
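The per-step noise standard deviation used in the cell above converts the total noise power from dBm to watts and splits it over the Kstep segments and the two quadratures; as a quick arithmetic check:

```python
import math

Pn = -21.3   # total noise power in dBm
Kstep = 50   # number of channel-model steps

Pn_watts = 10 ** ((Pn - 30) / 10)          # dBm -> watts
sigma_n = math.sqrt(Pn_watts / Kstep / 2)  # per-step, per-quadrature std dev

print(Pn_watts, sigma_n)
```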
We consider BPSK transmission over this channel. Show constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\jmath\frac{L}{K}\gamma|x_k|^2 \approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, t...
length_plot = 4000 def plot_constellation(Pin, constellation_name): constellation = constellations[constellation_name] t = np.random.randint(len(constellation),size=length_plot) r = simulate_channel(t, Pin, constellation) plt.figure(figsize=(12,6)) font = {'size' : 14} plt.rc('font'...
Helper function to compute the Symbol Error Rate (SER)
# helper function to compute the symbol error rate def SER(predictions, labels): return (np.sum(np.argmax(predictions, 1) != labels) / predictions.shape[0])
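A quick usage sketch of the SER helper: predictions are per-class scores and labels are integer class indices, so the error rate is the fraction of argmax mismatches. The scores and labels below are made up:

```python
import numpy as np

def SER(predictions, labels):
    # Fraction of rows whose highest-scoring class differs from the label.
    return np.sum(np.argmax(predictions, 1) != labels) / predictions.shape[0]

predictions = np.array([[0.9, 0.1],
                        [0.2, 0.8],
                        [0.6, 0.4],
                        [0.3, 0.7]])
labels = np.array([0, 1, 1, 1])  # only the third row is misclassified
print(SER(predictions, labels))  # → 0.25
```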
Here, we define the parameters of the neural network and training, generate the validation set, and a helper set used to display the decision regions.
# set input power Pin = -5 #define constellation constellation = constellations['16-APSK'] input_power_linear = 10**((Pin-30)/10) norm_factor = 1 / np.sqrt(np.mean(np.abs(constellation)**2)/input_power_linear) sigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2) constellation_mat = np.stack([constellation.real * norm_fa...
This is the main neural network with 4 hidden layers, each with ELU activation function. Note that the final layer does not use a softmax function, as this function is already included in the CrossEntropyLoss.
class Receiver_Network(nn.Module): def __init__(self, hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4): super(Receiver_Network, self).__init__() # Linear function, 2 input neurons (real and imaginary part) self.fc1 = nn.Linear(2, hidden_neurons_1) # N...
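The note above means the network emits raw logits and the softmax lives inside the loss. A NumPy sketch of cross-entropy computed directly from logits via the numerically stable log-sum-exp form (the logits and labels are made-up values, not outputs of the notebook's model):

```python
import numpy as np

def cross_entropy_from_logits(logits, labels):
    # log-softmax via the log-sum-exp trick, then pick the label's log-prob.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
loss = cross_entropy_from_logits(logits, labels)
print(loss)
```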
This is the main learning function: generate the data directly on the GPU (if available) and then run the neural network. We use a variable batch size that varies during training. In the first iterations, we start with a small batch size to rapidly get to a working solution. The closer we come towards the end of the tra...
model = Receiver_Network(hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4) model.to(device) # Cross Entropy loss accepting logits at input loss_fn = nn.CrossEntropyLoss() # Adam Optimizer optimizer = optim.Adam(model.parameters()) # Softmax function softmax = nn.Softmax(dim=1) num_epochs = 1...
Plot the decision region and a scatter plot of the validation set. Note that the validation set is only used for computing SERs and plotting; there is no feedback into the training!
cmap = matplotlib.cm.tab20 base = plt.cm.get_cmap(cmap) color_list = base.colors new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))] # find minimum SER from validation set min_SER_iter = np.argmin(validation_SERs) plt.figure(figsize=(16,8)) plt.subplot(121) #plt.contourf(mgx,mgy,deci...
Generate animation and save as a gif.
%matplotlib notebook %matplotlib notebook # Generate animation from matplotlib import animation, rc from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs. font = {'size' : 18} plt.rc('font', **font) fig, ax = plt.subplots(1, figsize=(8,8)) ax.axis('scaled') written = False def ...
Load Data and set Hyperparameters We first load in the pre-sampled data. The data consists of 1000 short trajectories, each with 5 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset tha...
ntraj = 1000 trajectory_length = 5 dim = 10
examples/Committor_10d/Committor_10d.ipynb
ehthiede/PyEDGAR
mit
Load and format the data
trajs = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length, :dim] # Raw trajectory stateA = np.load('data/muller_brown_stateA.npy')[:ntraj, :trajectory_length] # 1 if in state A, 0 otherwise stateB = np.load('data/muller_brown_stateB.npy')[:ntraj, :trajectory_length] # 1 if in state B, 0 otherwise print...
We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the tr...
ref_comm = np.load('reference/reference_committor.npy') ref_potential = np.load('reference/potential.npy') xgrid = np.load('reference/xgrid.npy') ygrid = np.load('reference/ygrid.npy') # Plot the true committor. fig, ax = plt.subplots(1) HM = ax.pcolor(xgrid, ygrid, ref_comm, vmin=0, vmax=1) ax.contour(xgrid, ygrid, ...
Construct DGA Committor We now use PyEDGAR to build an estimate for the forward committor. Build Basis Set We first build the basis set required for the DGA calculation. In this demo, we will use the diffusion map basis.
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous') diff_atlas.fit(trajs)
Here, we construct the basis and guess functions, and convert them back into lists of trajectories. The domain is the set of all points outside of $A\cup B$, i.e. $(A\cup B)^c$.
basis, evals = diff_atlas.make_dirichlet_basis(300, in_domain=in_domain, return_evals=True) guess = diff_atlas.make_FK_soln(stateB, in_domain=in_domain) flat_basis = np.vstack(basis) flat_guess = np.hstack(guess)
We plot the guess function and the first few basis functions.
# Flatten the basis, guess, and trajectories functions for easy plotting. flattened_trajs = np.vstack(trajs) flat_basis = np.vstack(basis) flat_guess = np.hstack(guess) fig, axes= plt.subplots(1, 5, figsize=(14,4.), sharex=True, sharey=True) axes[0].scatter(flattened_trajs[:,0], flattened_trajs[:,1], ...
The third basis function looks like noise from the perspective of the $x$ and $y$ coordinates. This is because it correlates most strongly with the harmonic degrees of freedom. Note that due to the boundary conditions, it is not precisely the dominant eigenvector of the harmonic degrees of freedom.
fig, (ax1) = plt.subplots(1, figsize=(3.5,3.5)) vm = np.max(np.abs(flat_basis[:,2])) ax1.scatter(flattened_trajs[:,3], flattened_trajs[:,5], c=flat_basis[:, 2], s=3, cmap='coolwarm', vmin=-1*vm, vmax=vm) ax1.set_aspect('equal') ax1.set_title(r"$\phi_%d$" % 3) ax1.set_xlabel("$z_2$") ax1.set_...
Build the committor function We are ready to compute the committor function using DGA. This can be done by passing the guess function and the basis to the Galerkin module.
g = pyedgar.galerkin.compute_committor(basis, guess, lag=1) fig, (ax1) = plt.subplots(1, figsize=(5.5,3.5)) SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., vmax=1., s=3) ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('Estimated Committor') plt.colorbar(SC) ax1.set_...
Here, we plot how much the DGA estimate perturbs the guess function.
fig, (ax1) = plt.subplots(1, figsize=(4.4,3.5)) SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel() - flat_guess, vmin=-.5, vmax=.5, cmap='bwr', s=3) ax1.set_aspect('equal') ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('Estimate - Guess') plt.colorbar(SC, ax=ax1)
Compare against reference To compare against the reference values, we will interpolate the reference onto the datapoints using SciPy's interpolate package.
import scipy.interpolate as spi spline = spi.RectBivariateSpline(xgrid, ygrid, ref_comm.T) ref_comm_on_data = np.array([spline.ev(c[0], c[1]) for c in flattened_trajs[:,:2]]) ref_comm_on_data[ref_comm_on_data < 0.] = 0. ref_comm_on_data[ref_comm_on_data > 1.] = 1.
A comparison of our estimate with the true committor. While the estimate is good, we systematically underestimate the committor near (0, 0.5).
fig, axes = plt.subplots(1, 3, figsize=(16,3.5), sharex=True, sharey=True) (ax1, ax2, ax3) = axes SC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=ref_comm_on_data, vmin=0., vmax=1., s=3) plt.colorbar(SC, ax=ax1) SC = ax2.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., ...
Data Take the data from https://www.kaggle.com/c/shelter-animal-outcomes . Note that this time we have many classes; read the Evaluation section to see how the final score is computed. Visualization <div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 ...
visual = pd.read_csv('data/CatsAndDogs/train.csv') #Make a column Outcome showing whether the animal was taken from the shelter or not #First fill it with ones, as if everything went well in all cases visual['Outcome'] = 'true' #Zero out the unsuccessful cases visual.loc[visual.OutcomeType == 'Euthanasia', 'Outcome'] = 'false' visual.lo...
3. Котики и собачки (исходные данные).ipynb
lithiumdenis/MLSchool
mit
Compare by age
mergedByAges = visual.groupby('AgeuponOutcome')['Outcome'].value_counts().to_dict() results = pd.DataFrame(data = mergedByAges, index=[0]).stack().fillna(0).transpose() results.columns = pd.Index(['true', 'false']) results['total'] = results.true + results.false results.sort_values(by='true', ascending=False, inplace=...
Compare by sex
mergedByGender = visual.groupby('Gender')['Outcome'].value_counts().to_dict() results = pd.DataFrame(data = mergedByGender, index=[0]).stack().fillna(0).transpose() results.columns = pd.Index(['true', 'false']) results['total'] = results.true + results.false results.sort_values(by='true', ascending=False, inplace=True...
Compare by fertility
mergedByFert = visual.groupby('Fertility')['Outcome'].value_counts().to_dict() results = pd.DataFrame(data = mergedByFert, index=[0]).stack().fillna(0).transpose() results.columns = pd.Index(['true', 'false']) results['total'] = results.true + results.false results.sort_values(by='true', ascending=False, inplace=True)...
<b>Conclusion by age:</b> animals that are neither the oldest nor the youngest are adopted more readily <br> <b>Conclusion by sex:</b> by and large it does not matter <br> <b>Conclusion by fertility:</b> animals with intact reproductive abilities are adopted more readily. However, the next two groups do not really differ much in essence and, if you add them together,...
train, test = pd.read_csv( 'data/CatsAndDogs/train.csv' #source data ), pd.read_csv( 'data/CatsAndDogs/test.csv' #source data ) train.head() test.shape
<b>Add new features to train</b>
#First, by analogy with the visualization #Replace rows where SexuponOutcome, Breed, Color are NaN train.loc[train.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown' train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0' train.loc[train.Breed.isnull(), 'Breed'] = 'Unknown' train.loc[train.Color.isnul...
<b>Add new features to test in the same way</b>
#First, by analogy with the visualization #Replace rows where SexuponOutcome, Breed, Color are NaN test.loc[test.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown' test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0' test.loc[test.Breed.isnull(), 'Breed'] = 'Unknown' test.loc[test.Color.isnull(), 'Co...
<div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 3.</h3> </div> </div> Perform feature selection and try different methods. Check the quality with cross-validation. Print the top most important and the least significant features. Preprocess...
np.random.seed(1234) from sklearn.preprocessing import LabelEncoder from sklearn import preprocessing #####################Replace NaN values with the word Unknown################## #Remove NaN values from train train.loc[train.AnimalID.isnull(), 'AnimalID'] = 'Unknown' train.loc[train.Name.isnull(), 'Name'] = 'Unknown'...
Statistical tests
from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2, f_classif, mutual_info_classif skb = SelectKBest(mutual_info_classif, k=15) x_new = skb.fit_transform(X_tr, y_tr) x_new
Wrapper methods
from sklearn.feature_selection import RFE from sklearn.linear_model import LinearRegression names = X_tr.columns.values lr = LinearRegression() rfe = RFE(lr, n_features_to_select=1) rfe.fit(X_tr,y_tr); print("Features sorted by their rank:") print(sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names)))
Feature selection with a Lasso model
from sklearn.linear_model import Lasso clf = Lasso() clf.fit(X_tr, y_tr); clf.coef_ features = X_tr.columns.values print('In total, Lasso dropped %s variables' % (clf.coef_ == 0).sum()) print('These are the features:') for s in features[np.where(clf.coef_ == 0)[0]]: print(' * ', s)
Feature selection with a RandomForest model
from sklearn.ensemble import RandomForestRegressor clf = RandomForestRegressor() clf.fit(X_tr, y_tr); clf.feature_importances_ imp_feature_idx = clf.feature_importances_.argsort() imp_feature_idx features = X_tr.columns.values k = 0 while k < len(features): print(features[k], imp_feature_idx[k]) k += 1
<b>Conclusion on features:</b> <br> <b>Not needed:</b> Name, DateTime, month, day, Breed, breedColor. Everything else is less clear-cut and can be kept. <div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 4.</h3> </div> </div> Try blending...
#First, drop the unneeded features identified in the previous stage X_tr = X_tr.drop(['Name'], axis=1) #, 'DateTime', 'breedColor', 'Breed' test = test.drop(['Name'], axis=1) #, 'DateTime', 'breedColor', 'Breed' X_tr.head() from sklearn.ensemble import VotingClassifier from sklearn.linear_model import LogisticRegressi...
Samples
plot_grid(imgs, titles=labels) %autosave 0
notebooks/01. Data loading and analysis.ipynb
gabrielrezzonico/dogsandcats
mit
Data size
import pandas as pd import glob from PIL import Image files = glob.glob(ORIGINAL_TRAIN_DIRECTORY + '*') df = pd.DataFrame({'fpath':files,'width':0,'height':0}) df['category'] = df.fpath.str.extract('../data/original_train/([a-zA-Z]*).', expand=False) # extract class for idx in df.index: im = Image.open(df.ix[idx]....
notebooks/01. Data loading and analysis.ipynb
gabrielrezzonico/dogsandcats
mit
There are 25000 images in the dataset. The mean image size is (360.478080, 404.09904).
df['category'].value_counts() %matplotlib inline import seaborn as sns ax = sns.countplot("category", data=df) sns.jointplot(x='width', y='height', data=df, joint_kws={'s': 0.5}, marginal_kws=dict(bins=50), size=10, stat_func=Non...
notebooks/01. Data loading and analysis.ipynb
gabrielrezzonico/dogsandcats
mit
Data preparation The dataset can be downloaded from https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/data. Number of training examples:
import os TOTAL_NUMBER_FILES = sum([len(files) for r, d, files in os.walk(ORIGINAL_TRAIN_DIRECTORY)]) print("Total number of files in train folder:", TOTAL_NUMBER_FILES)
notebooks/01. Data loading and analysis.ipynb
gabrielrezzonico/dogsandcats
mit
Folder structure The train directory consists of labelled data with the following naming convention for each image: data/train/CLASS.id.jpg We are going to use keras.preprocessing.image, so we want the folder structure to be: data/train/CLASS/image-name.jpg
import glob import os import shutil import numpy as np shutil.rmtree(os.path.join(TEST_DIRECTORY, "dog"), ignore_errors=True) shutil.rmtree(os.path.join(TEST_DIRECTORY, "cat"), ignore_errors=True) shutil.rmtree(os.path.join(VALID_DIRECTORY, "dog"), ignore_errors=True) shutil.rmtree(os.path.join(VALID_DIRECTORY, "cat"...
notebooks/01. Data loading and analysis.ipynb
gabrielrezzonico/dogsandcats
mit
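The flat-to-nested restructuring described above can be sketched in miniature (a temporary directory with dummy files stands in for data/train/):

```python
import os
import shutil
import tempfile

# Build a toy flat layout (CLASS.id.jpg), then move each file into a
# per-class subfolder, which is what keras.preprocessing.image expects.
root = tempfile.mkdtemp()
for name in ['cat.1.jpg', 'cat.2.jpg', 'dog.1.jpg']:
    open(os.path.join(root, name), 'w').close()

for fname in os.listdir(root):
    cls = fname.split('.')[0]  # the class is the filename prefix
    os.makedirs(os.path.join(root, cls), exist_ok=True)
    shutil.move(os.path.join(root, fname), os.path.join(root, cls, fname))

print(sorted(os.listdir(root)))
```

After the loop the root contains one subdirectory per class, with the original files moved inside.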
We've loaded the CSV file into a Pandas DataFrame, which contains the train and test data for our model. Before we can start training, we need to encode the strings into numeric values using LabelEncoder.
# Encode strings from CSV into numeric values from sklearn.preprocessing import LabelEncoder enc = LabelEncoder() for col_name in some_data: some_data[col_name] = enc.fit_transform(some_data[col_name])
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
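LabelEncoder's round trip is worth seeing in isolation; note that the codes follow the alphabetical order of the classes, not their order of appearance:

```python
from sklearn.preprocessing import LabelEncoder

enc = LabelEncoder()
labels = ['cat', 'dog', 'dog', 'bird']
codes = enc.fit_transform(labels)

# Classes are sorted alphabetically: bird=0, cat=1, dog=2.
print(list(codes))
print(list(enc.inverse_transform(codes)))
```

inverse_transform is what lets the prediction step later in the notebook map numeric outputs back to the original string labels.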
Now we split the DataFrame into train and test datasets.
# Use everything except the last 5 rows for training; the last 5 rows are held out as the test set train_features, train_labels = some_data.iloc[:-5, :-1], some_data.iloc[:-5, -1]
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
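The iloc slicing pattern used in this split (all but the last 5 rows for training, last column as the label) looks like this on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'f1': range(10), 'f2': range(10, 20), 'label': [0, 1] * 5})

# Rows: everything except the last 5 for training. Columns: all but the last
# one are features, the last one is the label.
train_features, train_labels = df.iloc[:-5, :-1], df.iloc[:-5, -1]
test_features = df.iloc[-5:, :-1]
print(train_features.shape, test_features.shape)
```

Negative indices in iloc count from the end, so `[:-5]` and `[-5:]` partition the rows without overlap.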
Let's execute the model training and print the results.
# Create an instance of CommonClassifier, which will use the default list of estimators. # Removing the features with a weight smaller than 0.1. from tinylearn import CommonClassifier wrk = CommonClassifier(default=True, cv=3, reduce_func=lambda x: x < 0.1) wrk.fit(train_features, train_labels) wrk.print_fit_summary()
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
CommonClassifier has selected the 'ExtraTreesClassifier' estimator. Let's do the actual prediction of labels on the test data:
# Predicting and decoding the labels back to strings print("\nPredicted data:") predicted = wrk.predict(some_data.iloc[-5:, :-1]) print(enc.inverse_transform(predicted))
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
Pretty close to the actual labels ... with the following accuracy:
import numpy as np print("\nActual accuracy: " + str(np.sum(predicted == some_data.iloc[-5:, -1])/predicted.size*100) + '%')
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
Let's take a look at the internals of TinyLearn and CommonClassifier specifically:
# %load tinylearn.py # Copyright (c) 2015, Oleg Puzanov # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright notice, # this list o...
ip[y]/beaconml.ipynb
llvll/beaconml
bsd-2-clause
Loading and validating our model
!curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/manning/model/insurance.hdf5 model = tf.keras.models.load_model('insurance.hdf5')
notebooks/manning/U4-M1-Preparing TensorFlow models.ipynb
DJCordhose/ai
mit
Decision Boundaries for 2 Dimensions
# a little sanity check, does it work at all? # within this code, we expect Olli to be a green customer with a high probability # 0: red # 1: green # 2: yellow olli_data = [100, 47, 10] X = np.array([olli_data]) model.predict(X)
notebooks/manning/U4-M1-Preparing TensorFlow models.ipynb
DJCordhose/ai
mit
Converting our Keras Model to the Alternative High-Level Estimator Model
# https://cloud.google.com/blog/products/gcp/new-in-tensorflow-14-converting-a-keras-model-to-a-tensorflow-estimator estimator_model = tf.keras.estimator.model_to_estimator(keras_model=model) # it still works the same, with a different style of API, though x = {"hidden1_input": X} list(estimator_model.predict(input_fn...
notebooks/manning/U4-M1-Preparing TensorFlow models.ipynb
DJCordhose/ai
mit
Preparing our model for serving https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md https://www.tensorflow.org/serving/serving_basic
!rm -rf tf import os export_path_base = 'tf' version = 1 export_path = os.path.join( tf.compat.as_bytes(export_path_base), tf.compat.as_bytes(str(version))) tf.keras.backend.set_learning_phase(0) sess = tf.keras.backend.get_session() classification_inputs = tf.saved_model.utils.build_tensor_info(model.i...
notebooks/manning/U4-M1-Preparing TensorFlow models.ipynb
DJCordhose/ai
mit
<img src="https://duke.box.com/shared/static/p2eucjdttai08eeo7davbpfgqi3zrew0.jpg" width=600 alt="SELECT FROM WHERE" /> 1. Assess whether Dognition personality dimensions are related to the number of tests completed The first variable in the Dognition sPAP we want to investigate is Dognition personality dimensions. Re...
%%sql SELECT DISTINCT dimension FROM dogs
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
The results of the query above illustrate there are NULL values (indicated by the output value "none") in the dimension column. Keep that in mind in case it is relevant to future queries. We want a summary of the total number of tests completed by dogs with each personality dimension. In order to calculate those su...
%%sql SELECT d.dog_guid AS dogID, d.dimension AS dimension, count(c.created_at) AS numtests FROM dogs d, complete_tests c WHERE d.dog_guid=c.dog_guid GROUP BY dogID
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
Question 3: Re-write the query in Question 2 using traditional join syntax (described in MySQL Exercise 8).
%%sql SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid GROUP BY d.dog_guid
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
Now we need to summarize the total number of tests completed by each unique DogID within each Dognition personality dimension. To do this we will need to choose an appropriate aggregation function for the count column of the query we just wrote. Question 4: To start, write a query that will output the average number...
%%sql SELECT t.dimension AS dimension, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid GROUP BY d.dog_guid) AS t GROUP BY t.dimension
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
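The two-level aggregation in Question 4 (count per dog in a subquery, then average those counts per dimension) can be reproduced on toy data with Python's sqlite3 module (hypothetical rows, not the real Dognition tables):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE dogs (dog_guid TEXT, dimension TEXT);
CREATE TABLE complete_tests (dog_guid TEXT, created_at TEXT);
INSERT INTO dogs VALUES ('d1', 'ace'), ('d2', 'ace'), ('d3', 'charmer');
INSERT INTO complete_tests VALUES
  ('d1', 't1'), ('d1', 't2'), ('d2', 't1'),
  ('d3', 't1'), ('d3', 't2'), ('d3', 't3');
""")

# Inner query: tests per dog. Outer query: average of those counts per dimension.
rows = con.execute("""
SELECT t.dimension, AVG(t.numtests)
FROM (SELECT d.dog_guid, d.dimension, COUNT(c.created_at) AS numtests
      FROM dogs d JOIN complete_tests c ON d.dog_guid = c.dog_guid
      GROUP BY d.dog_guid) AS t
GROUP BY t.dimension
""").fetchall()
print(sorted(rows))
```

With these rows, 'ace' dogs completed 2 and 1 tests (average 1.5) and the 'charmer' dog completed 3, which is exactly the shape of result the real query returns.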
You should retrieve an output of 11 rows with one of the dimensions labeled "None" and another labeled "" (nothing is between the quotation marks). Question 5: How many unique DogIDs are summarized in the Dognition dimensions labeled "None" or ""? (You should retrieve values of 13,705 and 71)
%%sql SELECT t.dimension AS dimension, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid GROUP BY d.dog_guid) AS t GROUP BY t.d...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
It makes sense there would be many dogs with NULL values in the dimension column, because we learned from Dognition that personality dimensions can only be assigned after the initial "Dognition Assessment" is completed, which is comprised of the first 20 Dognition tests. If dogs did not complete the first 20 tests, th...
%%sql SELECT d.dog_guid AS dogID, d.dimension AS dimension, d.breed, d.weight, d.exclude, MIN(ct.created_at), MAX(ct.created_at), COUNT(ct.created_at) FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE dimension='' GROUP BY d.dog_guid
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
A quick inspection of the output from the last query illustrates that almost all of the entries that have non-NULL empty strings in the dimension column also have "exclude" flags of 1, meaning that the entries are meant to be excluded due to factors monitored by the Dognition team. This provides a good argument for ex...
%%sql SELECT t.dimension AS dimension, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE (d.dimension != '' AND d.dimension ...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
The results of Question 7 suggest there are not appreciable differences in the number of tests completed by dogs with different Dognition personality dimensions. Although these analyses are not definitive on their own, these results suggest focusing on Dognition personality dimensions will not likely lead to significa...
%%sql SELECT DISTINCT d.breed_group FROM dogs d
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
You can see that there are NULL values in the breed_group field. Let's examine the properties of these entries with NULL values to determine whether they should be excluded from our analysis. Question 9: Write a query that outputs the breed, weight, value in the "exclude" column, first or minimum time stamp in the com...
%%sql SELECT d.dog_guid AS dogID, d.dimension AS dimension, d.breed, d.weight, d.exclude, MIN(ct.created_at), MAX(ct.created_at), COUNT(ct.created_at) FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE d.breed_group IS NULL GROUP BY d.dog_guid
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
There are a lot of these entries and there is no obvious feature that is common to all of them, so at present, we do not have a good reason to exclude them from our analysis. Therefore, let's move on to question 10 now.... Question 10: Adapt the query in Question 7 to examine the relationship between breed_group and n...
%%sql SELECT t.breed_group AS breed_group, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.breed_group AS breed_group, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid GROUP BY d.dog_guid) AS t GROU...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
The results show there are non-NULL entries of empty strings in the breed_group column again. Ignoring them for now, the Herding and Sporting breed_groups complete the most tests, while the Toy breed group completes the fewest. This suggests that one avenue an analyst might want to explore further is whether it is worth it ...
%%sql SELECT t.breed_group AS breed_group, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.breed_group AS breed_group, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE d.breed_group IN ('Sporting...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
Next, let's examine the relationship between breed_type and number of completed tests. Question 12: Begin by writing a query that will output all of the distinct values in the breed_type field.
%%sql SELECT DISTINCT d.breed_type FROM dogs d
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
Question 13: Adapt the query in Question 7 to examine the relationship between breed_type and number of tests completed. Exclude DogIDs with values of "1" in the exclude column. Your results should return 8865 DogIDs in the Pure Breed group.
%%sql SELECT t.breed_type AS breed_type, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE d.exclude=0 OR d.exclude IS NUL...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
There does not appear to be an appreciable difference between number of tests completed by dogs of different breed types. 3. Assess whether dog breeds and neutering are related to the number of tests completed To explore the results we found above a little further, let's run some queries that relabel the breed_types ac...
%%sql SELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests, CASE d.breed_type WHEN 'Pure Breed' THEN 'Pure_Breed' ELSE 'Not_Pure_Breed' END AS pure_or_not FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid WHERE d.exclude=0 OR d.exclude IS NULL GROU...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
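The CASE relabeling is easy to check in isolation with sqlite3 (toy rows; sqlite's CASE syntax matches MySQL's here):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE dogs (dog_guid TEXT, breed_type TEXT)")
con.executemany("INSERT INTO dogs VALUES (?, ?)",
                [('d1', 'Pure Breed'),
                 ('d2', 'Mixed Breed/ Other'),
                 ('d3', 'Pure Breed')])

# Anything that is not exactly 'Pure Breed' falls through to the ELSE branch.
rows = con.execute("""
SELECT dog_guid,
       CASE breed_type WHEN 'Pure Breed' THEN 'Pure_Breed'
            ELSE 'Not_Pure_Breed' END AS pure_or_not
FROM dogs
""").fetchall()
print(sorted(rows))
```

The CASE expression collapses the many breed_type values into a binary label, which is what makes the later GROUP BY pure_or_not aggregation possible.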
Question 15: Adapt your queries from Questions 7 and 14 to examine the relationship between breed_type and number of tests completed by Pure_Breed dogs and non_Pure_Breed dogs. Your results should return 8336 DogIDs in the Not_Pure_Breed group.
%%sql SELECT t.pure_or_not AS pure_or_not, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, COUNT(ct.created_at) AS numtests, CASE d.breed_type WHEN 'Pure Breed' THEN 'Pure_Breed' ELSE 'Not_Pure_Breed' END AS pure_or_n...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
Question 16: Adapt your query from Question 15 to examine the relationship between breed_type, whether or not a dog was neutered (indicated in the dog_fixed field), and number of tests completed by Pure_Breed dogs and non_Pure_Breed dogs. There are DogIDs with null values in the dog_fixed column, so your results should...
%%sql SELECT t.pure_or_not AS pure_or_not, t.neutered_or_not AS neutered_or_not, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests FROM (SELECT d.dog_guid AS dogID, COUNT(ct.created_at) AS numtests, CASE d.breed_type WHEN 'Pure Breed' THEN 'Pure_Breed' ELSE...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
These results suggest that although a dog's breed_type doesn't seem to have a strong relationship with how many tests a dog completed, neutered dogs, on average, seem to finish 1-2 more tests than non-neutered dogs. It may be fruitful to explore further whether this effect is consistent across different segments of do...
%%sql SELECT t.dimension AS dimension, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests, STDDEV(t.numtests) AS std_numtests FROM (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid ...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
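Why the standard deviation matters here can be shown with Python's statistics module: two hypothetical dimensions with the same mean test count but very different spreads should be interpreted very differently.

```python
import statistics

# Hypothetical per-dog test counts for two dimensions: identical means,
# very different spreads -- exactly the situation STDDEV() is meant to flag.
counts = {'ace': [10, 10, 10, 10], 'charmer': [1, 19, 2, 18]}
for dim, vals in counts.items():
    print(dim, statistics.mean(vals), statistics.pstdev(vals))
```

Both groups average 10 tests, but only the first average is trustworthy as a summary of typical behavior.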
The standard deviations are all around 20-25% of the average values of each personality dimension, and they are not appreciably different across the personality dimensions, so the average values are likely fairly trustworthy. Let's try calculating the standard deviation of a different measurement. Question 18: Write a...
%%sql SELECT t.breed_type AS breed_type, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests, STDDEV(t.numtests) AS std_numtests FROM (SELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid ...
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
liufuyang/ManagingBigData_MySQL_DukeUniv
mit
The following run also produces the solution, but additionally shows the complete step-by-step workings of the problem: python SimplexSolver.py --input file.txt --expl The output is shown below (note that when running it yourself you should replace "%run" with python):
%run ..\PySimplex\SimplexSolver.py --input ..\Files\file1.txt --expl
Documentation/Tutorial SimplexSolver con Python.ipynb
carlosclavero/PySimplex
gpl-3.0
The following command lets us save the solution of the problem (either with the full workings or just the solution) to a file. The file name is given with --output, and the file will appear in the directory where SimplexSolver.py is stored unless we specify another location: python SimplexS...
%run ..\PySimplex\SimplexSolver.py --input ..\Files\file1.txt --output out.txt
Documentation/Tutorial SimplexSolver con Python.ipynb
carlosclavero/PySimplex
gpl-3.0
With the following command, in addition to what the previous runs provided, we can also obtain the graphical solution (only available when the problem has two variables): python SimplexSolver.py --input file.txt --expl --graphic The output is shown below (note that when running it ...
%matplotlib inline %run ..\PySimplex\SimplexSolver.py --input ..\Files\file2.txt --graphic
Documentation/Tutorial SimplexSolver con Python.ipynb
carlosclavero/PySimplex
gpl-3.0
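For comparison with SimplexSolver's output, the same kind of two-variable LP can be solved with SciPy (assuming scipy is available; linprog minimizes, so the objective is negated to maximize 3x + 2y):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, 2x + y <= 5, x >= 0, y >= 0.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [2, 1]], b_ub=[4, 5],
              bounds=[(0, None), (0, None)], method='highs')
print(res.x, -res.fun)
```

The optimum sits at the vertex where the two constraints intersect, x = 1, y = 3, with objective value 9, which is the kind of vertex-by-vertex reasoning the simplex method (and the --graphic plot) makes explicit.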