This could also take a while because we are doing a string operation which is slow.
entries_per_row = log_data[0:4].content.str.split("\n", expand=True)
log_rows = pd.DataFrame(entries_per_row.stack(), columns=["data"])
log_rows
log_status = log_rows[log_rows["data"].str.contains("Done.", regex=False)]
log_status.head()
%matplotlib inline
log_status["data"].value_counts().plot(kind='pie')
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
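The split/stack pattern above can be illustrated on toy data (a minimal sketch; the real `log_data` is loaded earlier in the notebook, and the column names here mirror the cell above):

```python
import pandas as pd

# Toy stand-in for the real log DataFrame: each row holds one multi-line log.
log_data = pd.DataFrame({"content": ["line one\nDone. Build ok", "Done. Build ok"]})

# Split each log into one column per line, then stack into one row per line.
entries_per_row = log_data["content"].str.split("\n", expand=True)
log_rows = pd.DataFrame(entries_per_row.stack(), columns=["data"])

# Keep only the lines containing the literal text "Done."
log_status = log_rows[log_rows["data"].str.contains("Done.", regex=False)]
len(log_status)
```

Note `regex=False`: without it, the `.` in `"Done."` would match any character.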
Browsing manually through the log, there are some interesting features, e.g. the start time of the build.
print(log_data.iloc[0]['content'][1400:1800])
So let's check for some errors and warnings. For this, we create a new DataFrame because it's another kind of information.
logs['finished'] = logs.content.str[-100:].str.extract("(.*)\n*$", expand=False)
pd.DataFrame(logs['finished'].value_counts().head(10))
mapping = {
    "Done. Your build exited with 0." : "SUCCESS",
    "Done. Build script exited with: 0" : "SUCCESS",
    "Done. Build script exited with 0" : "SUCCESS",
    "Your buil...
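The mapping idea can be checked on toy data (a hedged sketch: only the first entries of the mapping appear in the excerpt above, so the failure message and its "FAILURE" label here are assumptions):

```python
import pandas as pd

# Hypothetical last lines of three builds; the real values come from
# logs['finished'] extracted above.
finished = pd.Series([
    "Done. Your build exited with 0.",
    "Done. Build script exited with: 0",
    "Done. Your build exited with 1.",   # assumed failure message
])

mapping = {
    "Done. Your build exited with 0.": "SUCCESS",
    "Done. Build script exited with: 0": "SUCCESS",
    "Done. Your build exited with 1.": "FAILURE",  # assumed label
}

# Series.map translates each raw message into a status label.
status = finished.map(mapping)
status.value_counts()
```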
Visualization
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
successful_builds_over_time['duration_in_min'].plot(
    title="Build time in minutes")
Import section specific modules:
from IPython.display import Image
1_Radio_Science/1_8_astronomical_radio_sources.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
1.8 Astronomical radio sources<a id='science:sec:astronomical_radio_sources'></a> The field of radio astronomy is extremely broad with applications to many areas of science. However, when studying astronomical sources, it is important to have information at as large a frequency range as possible. Radio astronomy should...
Image(filename='figures/fornax_a_lo.jpg', width=300)
Figure 1.8.1 Fornax A radio galaxy (Image credit: Image courtesy of NRAO/AUI and J. M. Uson) 1.8.4 Star Forming Galaxies: Star forming galaxies are "normal" galaxies which are undergoing active star formation. The process of star formation produces radio emission (predominantly in spiral galaxies) which is a mix of sy...
Image(filename='figures/m82.png', width=300)
Implementing Custom Aggregations
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio

import nest_asyncio
nest_asyncio.apply()
docs/tutorials/custom_aggregators.ipynb
tensorflow/federated
apache-2.0
Design summary In TFF, "aggregation" refers to the movement of a set of values on tff.CLIENTS to produce an aggregate value of the same type on tff.SERVER. That is, each individual client value need not be available. For example in federated learning, client model updates are averaged to get an aggregate model update t...
import collections
import tensorflow as tf
import tensorflow_federated as tff
Instead of summing value, the example task is to sum value * 2.0 and then divide the sum by 2.0. The aggregation result is thus mathematically equivalent to directly summing the value, and could be thought of as consisting of three parts: (1) scaling at clients (2) summing across clients (3) unscaling at server. NOTE: ...
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):

  def create(self, value_type):

    @tff.federated_computation()
    def initialize_fn():
      return tff.federated_value((), tff.SERVER)

    @tff.federated_computation(initialize_fn.type_signature.result,
                               tff.type...
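Outside TFF, the arithmetic behind the three parts can be sanity-checked in plain Python (a trivial sketch, not TFF code; the client values match the `client_data` used later in this tutorial):

```python
client_values = [1.0, 2.0, 5.0]

scaled = [v * 2.0 for v in client_values]  # (1) scaling at clients
total = sum(scaled)                        # (2) summing across clients
result = total / 2.0                       # (3) unscaling at server

# The result is mathematically equivalent to directly summing the values.
result
```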
Whether everything works as expected can be verified with the following code:
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
      f'  - initialize: {aggregation_process.initialize.type_signature}\n'
      f'  - next: {aggregation_process.next.type_signatu...
Statefulness and measurements Statefulness is broadly used in TFF to represent computations that are expected to be executed iteratively and change with each iteration. For example, the state of a learning computation contains the weights of the model being learned. To illustrate how to use state in an aggregation comp...
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):

  def create(self, value_type):

    @tff.federated_computation()
    def initialize_fn():
      return tff.federated_value(0.0, tff.SERVER)

    @tff.federated_computation(initialize_fn.type_signature.result,
                               tff.typ...
Note that the state that comes into next_fn as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the tff.federated_broadcast operator. To verify all works as expected, we can now look at the reported measurements, which should be different with each rou...
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
      f'  - initialize: {aggregation_process.initialize.type_signature}\n'
      f'  - next: {aggregation_process.next.type_signatu...
Structured types The model weights of a model trained in federated learning are usually represented as a collection of tensors, rather than a single tensor. In TFF, this is represented as tff.StructType and generally useful aggregation factories need to be able to accept the structured types. However, in the above exam...
@tff.tf_computation()
def scale(value, factor):
  return tf.nest.map_structure(lambda x: x * factor, value)

@tff.tf_computation()
def unscale(value, factor):
  return tf.nest.map_structure(lambda x: x / factor, value)

@tff.tf_computation()
def add_one(value):
  return value + 1.0

class ExampleTaskFactory(tff.aggrega...
This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the tff.tf_computations that will be used as building blocks inside a tff.federated_computation are created in a separate place. Inside of the tff.f...
client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],
               [[1.0, 1.0], [3.0, 0.0, -5.0]]]
factory = ExampleTaskFactory()
aggregation_process = factory.create(
    tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))
print(f'Type signatures of the created aggregation process:\n'
      f'  - initialize: {aggregation...
Inner aggregations The final step is to optionally enable delegation of the actual aggregation to other factories, in order to allow easy composition of different aggregation techniques. This is achieved by creating an optional inner_factory argument in the constructor of our ExampleTaskFactory. If not specified, tff.a...
@tff.tf_computation()
def scale(value, factor):
  return tf.nest.map_structure(lambda x: x * factor, value)

@tff.tf_computation()
def unscale(value, factor):
  return tf.nest.map_structure(lambda x: x / factor, value)

@tff.tf_computation()
def add_one(value):
  return value + 1.0

class ExampleTaskFactory(tff.aggrega...
When delegating to the inner_process.next function, the return structure we get is a tff.templates.MeasuredProcessOutput, with the same three fields - state, result and measurements. When creating the overall return structure of the composed aggregation process, the state and measurements fields should be generally com...
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))

state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
pri...
... and with a different inner aggregation. For example, an ExampleTaskFactory:
client_data = [1.0, 2.0, 5.0]
# Note the inner delegation can be to any UnweightedAggregationFactory.
# In this case, each factory creates a process that multiplies by the iteration
# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).
factory = ExampleTaskFactory(ExampleTaskFactory())
aggregation_p...
The Sequential model is a linear stack of layers. You can create a Sequential model by passing a list of layer instances to the constructor:
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
NEU/Sai_Raghuram_Kothapalli_DL/Hands_on_Keras_Tutorial.ipynb
nikbearbrown/Deep_Learning
mit
You can also simply add layers via the .add() method:
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))
Specifying the input shape The model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. There are several possible ways to do this: Pas...
model = Sequential()
model.add(Dense(32, input_shape=(784,)))

model = Sequential()
model.add(Dense(32, input_dim=784))
Compilation Before training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. See: optimizers. A loss fu...
# For a multi-class classification problem
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# For a binary classification problem
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# For ...
Training Keras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function. Read its documentation here.
# For a single-input model with 2 classes (binary classification):
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Generate dummy dat...
Examples Here are a few examples to get you started! Multilayer Perceptron (MLP) for multi-class softmax classification:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

# Generate dummy data
import numpy as np
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.r...
MLP for binary classification:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))

model = Sequential()
model.add(De...
1. Defining the geometry Just as we have done in tutorial 1, we will use the wgs_creator module to define the geometry of the wing. First off, we initialize a LaWGS object.
from pyPanair.preprocess import wgs_creator

wgs = wgs_creator.LaWGS("tapered_wing")
examples/tutorial2/tutorial2.ipynb
SaTa999/pyPanair
mit
Next, we create a Line object that defines the coordinates of the airfoil at the root of the wing. To do so, we will read a csv file that contains the coordinates of the airfoil, using the read_airfoil function. Five csv files, eta0000.csv, eta0126.csv, eta0400.csv, eta0700.csv, and eta1000.csv have been prepared for...
import pandas as pd
pd.set_option("display.max_rows", 10)
pd.read_csv("eta0000.csv")
The first and second columns xup and zup represent the xz-coordinates of the upper surface of the airfoil. The third and fourth columns xlow and zlow represent the xz-coordinates of the lower surface of the airfoil. The csv file must follow four rules: 1. Data in the first row correspond to the xz-coordinates of the ...
wingsection1 = wgs_creator.read_airfoil("eta0000.csv", y_coordinate=0.)
The first variable specifies the name of the csv file. The y_coordinate variable defines the y-coordinate of the points included in the Line. Line objects for the remaining four wing sections can be created in the same way.
wingsection2 = wgs_creator.read_airfoil("eta0126.csv", y_coordinate=0.074211)
wingsection3 = wgs_creator.read_airfoil("eta0400.csv", y_coordinate=0.235051)
wingsection4 = wgs_creator.read_airfoil("eta0700.csv", y_coordinate=0.410350)
wingsection5 = wgs_creator.read_airfoil("eta1000.csv", y_coordinate=0.585650)
Next, we create four networks by linearly interpolating these wing sections.
wingnet1 = wingsection1.linspace(wingsection2, num=4)
wingnet2 = wingsection2.linspace(wingsection3, num=8)
wingnet3 = wingsection3.linspace(wingsection4, num=9)
wingnet4 = wingsection4.linspace(wingsection5, num=9)
Then, we concatenate the networks using the concat_row method.
wing = wingnet1.concat_row((wingnet2, wingnet3, wingnet4))
The concatenated network is displayed below.
wing.plot_wireframe()
After creating the Network for the wing, we create networks for the wingtip and wake.
wingtip_up, wingtip_low = wingsection5.split_half()
wingtip_low = wingtip_low.flip()
wingtip = wingtip_up.linspace(wingtip_low, num=5)

wake_length = 50 * 0.1412
wingwake = wing.make_wake(edge_number=3, wake_length=wake_length)
Next, the Networks will be registered to the wgs object.
wgs.append_network("wing", wing, 1)
wgs.append_network("wingtip", wingtip, 1)
wgs.append_network("wingwake", wingwake, 18)
Then, we create an STL file to check that there are no errors in the model.
wgs.create_stl()
Lastly, we create the input files for panin.
wgs.create_aux(alpha=(-2, 0, 2), mach=0.6, cbar=0.1412, span=1.1714,
               sref=0.1454, xref=0.5049, zref=0.)
wgs.create_wgs()
2. Analysis The analysis can be done in the same way as tutorial 1. Place panair, panin, tapered_wing.aux, and tapered_wing.wgs in the same directory, and run panin and panair.
$ ./panin
Prepare input for PanAir
Version 1.0 (4Jan2000)
Ralph L. Carmichael, Public Domain Aeronautical Software
Enter the name of...
from pyPanair.postprocess import write_vtk
write_vtk(n_wake=1)

from pyPanair.postprocess import calc_section_force
calc_section_force(aoa=2, mac=0.1412, rot_center=(0.5049, 0, 0), casenum=3, networknum=1)
section_force = pd.read_csv("section_force.csv")
section_force

plt.plot(section_force.pos / 0.5857, section_force....
The ffmf file can be parsed using the read_ffmf and write_ffmf methods.
from pyPanair.postprocess import write_ffmf, read_ffmf
read_ffmf()
write_ffmf()
Step 2: Instantiating Temperature Sensor Although this demo can also be done on PMODB, we use PMODA here. Set the log interval to 1 ms. This means the IO Processor (IOP) will read a temperature value every 1 ms.
tmp2 = Pmod_TMP2(PMODA)
tmp2.set_log_interval_ms(1)
Pynq-Z1/notebooks/examples/tracebuffer_i2c.ipynb
AEW2015/PYNQ_PR_Overlay
bsd-3-clause
Step 3: Tracking Transactions Instantiating the trace buffer with IIC protocol. The sample rate is set to 1MHz. Although the IIC clock is only 100kHz, we still have to use higher sample rate to keep track of IIC control signals from IOP. After starting the trace buffer DMA, also start to issue IIC reads for 1 second. T...
tr_buf = Trace_Buffer(PMODA, "i2c", samplerate=1000000)

# Start the trace buffer
tr_buf.start()

# Issue reads for 1 second
tmp2.start_log()
sleep(1)
tmp2_log = tmp2.get_log()

# Stop the trace buffer
tr_buf.stop()
Step 4: Parsing and Decoding Transactions The trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script). The input arguments for the parsing method are: * start : the starting sample number of the trace. * stop : the stopping sample number of the trace. ...
# Configuration for PMODA
start = 600
stop = 10000
tri_sel = [0x40000, 0x80000]
tri_0 = [0x4, 0x8]
tri_1 = [0x400, 0x800]
mask = 0x0

# Parsing and decoding
tr_buf.parse("i2c_trace.csv", start, stop, mask, tri_sel, tri_0, tri_1)
tr_buf.set_metadata(['SDA', 'SCL'])
tr_buf.decode("i2c_trace.pd")
Step 5: Displaying the Result The final waveform and decoded transactions are shown using the open-source wavedrom library. The two input arguments (s0 and s1) indicate the starting and stopping location where the waveform is shown. The valid range for s0 and s1 is: 0 < s0 < s1 < (stop-start), where start an...
s0 = 1
s1 = 5000
tr_buf.display(s0, s1)
Overview In this notebook, you will learn how to load, explore, visualize, and pre-process a time-series dataset. The output of this notebook is a processed dataset that will be used in following notebooks to build a machine learning model. Dataset CTA - Ridership - Daily Boarding Totals: This dataset shows systemwide ...
%pip install -U statsmodels scikit-learn --user
courses/ai-for-time-series/notebooks/01-explore.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note: To restart the Kernel, navigate to Kernel > Restart Kernel... on the Jupyter menu. Import libraries and define constants
from pandas.plotting import register_matplotlib_converters
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import grangercausalitytests
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Enter your proje...
Load data
# Import CSV file
df = pd.read_csv(raw_data_file, index_col=[ts_col], parse_dates=[ts_col])

# Model data prior to 2020
df = df[df.index < '2020-01-01']

# Drop duplicates
df = df.drop_duplicates()

# Sort by date
df = df.sort_index()
Explore data
# Print the top 5 rows
df.head()
TODO 1: Analyze the patterns Is ridership changing much over time? Is there a difference in ridership between the weekday and weekends? Is the mix of bus vs rail ridership changing over time?
# Initialize plotting
register_matplotlib_converters()  # Addresses a warning
sns.set(rc={'figure.figsize': (16, 4)})

# Explore total rides over time
sns.lineplot(data=df, x=df.index, y=df[target]).set_title('Total Rides')
fig = plt.show()

# Explore rides by day type: Weekday (W), Saturday (A), Sunday/Holiday (U)
sns...
TODO 2: Review summary statistics How many records are in the dataset? What is the average # of riders per day?
df[target].describe().apply(lambda x: round(x))
TODO 3: Explore seasonality Is there much difference between months? Can you extract the trend and seasonal pattern from the data?
# Show the distribution of values for each day of the week in a boxplot:
# Min, 25th percentile, median, 75th percentile, max
daysofweek = df.index.to_series().dt.dayofweek
fig = sns.boxplot(x=daysofweek, y=df[target])

# Show the distribution of values for each month in a boxplot:
months = df.index.to_series().dt....
Auto-correlation Next, we will create an auto-correlation plot, to show how correlated a time-series is with itself. Each point on the x-axis indicates the correlation at a given lag. The shaded area indicates the confidence interval. Note that the correlation gradually decreases over time, but reflects weekly seasonal...
plot_acf(df[target])
fig = plt.show()
Export data This will generate a CSV file, which you will use in the next labs of this quest. Inspect the CSV file to see what the data looks like.
df[[target]].to_csv(processed_file, index=True, index_label=ts_col)
Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    """Draw an SVG circle.

    Parameters
    ----------
    width : int
        The width of the svg drawing area in px.
    height : int
        The height of the svg drawing area in px.
    cx : int
        The x position of the center of th...
assignments/assignment06/InteractEx05.ipynb
aschaffn/phys202-2015-work
mit
Use interactive to build a user interface for exploring the draw_circle function: width: a fixed value of 300px height: a fixed value of 300px cx/cy: a slider in the range [0,300] r: a slider in the range [0,50] fill: a text area in which you can type a color's name Save the return value of interactive to a variable n...
from IPython.html.widgets import interact, interactive, fixed

w = interactive(draw_circle, width=fixed(300), height=fixed(300),
               cx=(0, 300), cy=(0, 300), r=(0, 50), fill="red")
c = w.children
assert c[0].min == 0 and c[0].max == 300
assert c[1].min == 0 and c[1].max == 300
assert c[2].min == 0 and c[2].max == 50
assert c[3].v...
Using battle_tested to feel out new libraries. battle_tested doesn't necessarily need to be used as a fuzzer. I like to use its testing functionality to literally "feel out" a library that is recommended to me, so I know what works and what will cause issues. Here is how I used battle_tested to "feel out" sqlitedict s...
from sqlitedict import SqliteDict

def harness(key, value):
    """ this tests what can be assigned in SqliteDict's keys and values """
    mydict = SqliteDict(":memory:")
    mydict[key] = value
tutorials/.ipynb_checkpoints/as_a_feeler-checkpoint.ipynb
CodyKochmann/battle_tested
mit
Now, we import the tools we need from battle_tested and fuzz it.
from battle_tested import fuzz, success_map, crash_map

fuzz(harness, keep_testing=True)  # keep_testing allows us to collect "all" crashes
Now we can call success_map() and crash_map() to start to get a feel for what is accepted and what isn't.
crash_map()
success_map()
Basic settings
# limit number of worker processes to 4
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)

# set canvas to open inline
graphlab.canvas.set_target('ipynb')
ml-foundations/week-2/Assignment - Week 2.ipynb
zomansud/coursera
mit
Load House Sales Data
sales = graphlab.SFrame('home_data.gl/')
Assignment begins 1. Selection and summary statistics In the notebook we covered in the module, we discovered which neighborhood (zip code) of Seattle had the highest average house sale price. Now, take the sales data, select only the houses with this zip code, and compute the average price. Save this result to answer ...
highest_avg_price_zipcode = '98039'
sales_zipcode = sales[sales['zipcode'] == highest_avg_price_zipcode]
avg_price_highest_zipcode = sales_zipcode['price'].mean()
print avg_price_highest_zipcode
2. Filtering data Using logical filters, first select the houses that have ‘sqft_living’ higher than 2000 sqft but no larger than 4000 sqft. What fraction of all the houses have ‘sqft_living’ in this range? Save this result to answer the quiz at the end. Total number of houses
total_houses = sales.num_rows()
print total_houses
Houses with the above criteria
filtered_houses = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] <= 4000)]
print filtered_houses.num_rows()

filtered_houses = sales[sales.apply(lambda x: (x['sqft_living'] > 2000) & (x['sqft_living'] <= 4000))]
print filtered_houses.num_rows()

total_filtered_houses = filtered_houses.num_rows()
print t...
Fraction of Houses
filtered_houses_fraction = total_filtered_houses / float(total_houses)
print filtered_houses_fraction
3. Building a regression model with several more features Build the feature set
advanced_features = [
    'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
    'condition',      # condition of house
    'grade',          # measure of quality of construction
    'waterfront',     # waterfront property
    'view',           # type of view
    'sqft_above',     # square feet above ground
    'sqft_basement',  # square...
Create train and test data
train_data, test_data = sales.random_split(.8, seed=0)
Compute the RMSE (root mean squared error) on the test_data for the model using just my_features, and for the one using advanced_features.
my_feature_model = graphlab.linear_regression.create(train_data, target='price',
                                                     features=my_features,
                                                     validation_set=None)
print my_feature_model.evaluate(test_data)
print test_data['price'].mean()

advanced_feature_model = graphlab.linear_regression.create(train_data, target='price',
                                                           features=advanced_features, val...
Difference in RMSE What is the difference in RMSE between the model trained with my_features and the one trained with advanced_features? Save this result to answer the quiz at the end.
print my_feature_model.evaluate(test_data)['rmse'] - advanced_feature_model.evaluate(test_data)['rmse']
Counting word frequency To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how t...
from collections import Counter

total_counts = # bag of words here
print("Total words in data set: ", len(total_counts))
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
udacity/deep-learning
mit
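One hedged way to fill in the bag-of-words stub, shown on toy data (the real notebook builds `total_counts` from the review dataset loaded earlier; `toy_reviews` here is a hypothetical stand-in):

```python
from collections import Counter

# Hypothetical stand-in for the real reviews data.
toy_reviews = ["the movie was great", "the movie was bad"]

# Count how often each word appears across all reviews (the bag of words).
total_counts = Counter(word for review in toy_reviews for word in review.split(' '))
total_counts.most_common(3)
```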
The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many...
word2idx = ## create the word-to-index dictionary here
Text to vector function Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. ...
def text_to_vector(text):
    pass
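Following the algorithm described above, one possible sketch of the stub (the vocabulary here is a hypothetical stand-in; the real `vocab` and `word2idx` come from the data earlier in the notebook):

```python
import numpy as np

# Hypothetical vocabulary and word-to-index mapping.
vocab = ['the', 'movie', 'was', 'great', 'bad']
word2idx = {word: i for i, word in enumerate(vocab)}

def text_to_vector(text):
    # Initialize the word vector with np.zeros, the length of the vocabulary.
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    # Count each known word; words outside the vocabulary are ignored.
    for word in text.split(' '):
        idx = word2idx.get(word)
        if idx is not None:
            word_vector[idx] += 1
    return word_vector

text_to_vector('the movie was great great')
```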
Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the ...
# Network building
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####

    model = tflearn.DNN(net)
    return model
Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You ca...
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True,
          batch_size=128, n_epoch=10)
Testing After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_) test_accuracy = np.mean(predictions == testY[:,0], axis=0) print("Test accuracy: ", test_accuracy)
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
udacity/deep-learning
mit
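The thresholding and accuracy computation above can be checked in isolation; the prediction values and one-hot targets here are made-up stand-ins, not the model's real output:

```python
import numpy as np

# Hypothetical model outputs: each row is [P(positive), P(negative)]
raw_predictions = np.array([[0.9, 0.1],
                            [0.2, 0.8],
                            [0.6, 0.4]])

# Threshold the positive-class probability at 0.5 to get hard 0/1 labels
predictions = (raw_predictions[:, 0] >= 0.5).astype(np.int_)

# Compare against one-hot targets (first column is the positive class)
testY = np.array([[1, 0],
                  [1, 0],
                  [1, 0]])
test_accuracy = np.mean(predictions == testY[:, 0])
print(predictions)    # [1 0 1]
print(test_accuracy)  # 2 of 3 correct
```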
In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
raw_numbers = numbers_str.split(",")
numbers_list = [int(x) for x in raw_numbers]
max(numbers_list)
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Great! We'll be using the numbers list you created above in the next few problems. In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output: [506, 528, 550, 581, 699, 721, 736, 804, 855, 985] (Hint: use a slice.)
sorted(numbers_list)[-10:]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output: [120, 171, 258, 279, 528, 699, 804, 855]
sorted(x for x in numbers_list if x%3==0)
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output: [2.6457513110645...
from math import sqrt [sqrt(x) for x in numbers_list if x < 100]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output: ['Jupiter', 'Saturn', 'Uranus']
[x['name'] for x in planets if x['diameter']>4]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
sum(x['mass'] for x in planets)
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output: ['Jupiter', 'Saturn', 'Uranus', 'Neptune']
[x['name'] for x in planets if 'giant' in x['type']]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Ex...
# A dictionary can't be sorted directly; sort the list of planet dicts by moon count
[x['name'] for x in sorted(planets, key=lambda x: x['moons'])]

# Equivalent version using a named key function instead of a lambda
def get_moon_count(d):
    return d['moons']

[x['name'] for x in sorted(planets, key=get_moon_count)]

# Sort the planets by reverse order of the diameter:
# [x['name'] for x in sorted(pla...
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
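The key parameter pattern generalizes to any list of dicts; here is a small self-contained example with made-up data (not the actual planets list):

```python
# Toy data standing in for the planets list
moons_data = [
    {'name': 'Ares', 'moons': 2},
    {'name': 'Zeta', 'moons': 0},
    {'name': 'Kolob', 'moons': 14},
]

# sorted() can't compare dicts directly; key= tells it which value to sort by
by_moons = [d['name'] for d in sorted(moons_data, key=lambda d: d['moons'])]
print(by_moons)  # ['Zeta', 'Ares', 'Kolob']
```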
In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library. In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (...
[line for line in poem_lines if re.search(r"\b\w{4}\s\w{4}\b",line)]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
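To see how the \b anchors and the \w{4} quantifier behave, here is a standalone check on made-up lines (not the poem itself):

```python
import re

# The first line has two adjacent four-letter words; the second does not
sample_lines = [
    "down long dark roads",
    "a short line",
]
pattern = r"\b\w{4}\s\w{4}\b"
matches = [line for line in sample_lines if re.search(pattern, line)]
print(matches)  # only the first line matches
```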
Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to w...
[line for line in poem_lines if re.search(r"\b\w{5}\b.?$",line)]
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
all_lines = " ".join(poem_lines)
all_lines
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output: ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
# re.search() returns a match object, while re.findall() returns a list of strings.
# The group () captures only the word that follows "I ", leaving the "I" itself out.
match = re.findall(r"I (\w+)", all_lines)
match
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
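The grouping behavior of re.findall() can be seen on a made-up sentence (not the poem text):

```python
import re

text = "I could not stop for Death because I kindly stopped for me"

# Without a group, findall returns the whole match, "I" included
whole = re.findall(r"I \w+", text)

# With a group, findall returns only what the parentheses capture
captured = re.findall(r"I (\w+)", text)

print(whole)     # ['I could', 'I kindly']
print(captured)  # ['could', 'kindly']
```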
You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop. Expected output: [{'name': ...
menu = []
for item in entrees:
    # A hedged sketch, not the official solution: this assumes each entree
    # looks like "Dish Name $10.95 - v", where the trailing "- v" is optional
    # and marks a vegetarian dish
    match = re.search(r"^(.*?) \$(\d+\.\d{2})( - v)?$", item)
    menu.append({
        'name': match.group(1),
        'price': float(match.group(2)),
        'vegetarian': match.group(3) is not None,
    })
menu
data and database/Homework_4_database_shengyingzhao.ipynb
sz2472/foundations-homework
mit
Load Image As Greyscale
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
machine-learning/cropping_images.ipynb
tpin3694/tpin3694.github.io
mit
Crop Image
# Select all rows and the first half of the columns
image_cropped = image[:, :126]
machine-learning/cropping_images.ipynb
tpin3694/tpin3694.github.io
mit
View Image
# View image
plt.imshow(image_cropped, cmap='gray')
plt.axis("off")
plt.show()
machine-learning/cropping_images.ipynb
tpin3694/tpin3694.github.io
mit
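The slicing behind the crop works on any 2-D array; a tiny numeric stand-in for the real photo makes the semantics explicit:

```python
import numpy as np

# A 4x6 toy "image" of increasing pixel values
img = np.arange(24).reshape(4, 6)

# Keep every row but only the first three columns, same pattern as image[:, :126]
cropped = img[:, :3]
print(cropped.shape)  # (4, 3)
print(cropped[0])     # [0 1 2]
```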
If you have downloaded the previous file and decompressed it in a folder named rossmann in the fastai data folder, you should see the following list of files with this instruction:
path = Config().data/'rossmann'
path.ls()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
The data that comes from Kaggle is in 'train.csv', 'test.csv', 'store.csv' and 'sample_submission.csv'. The other files are the additional data we were talking about. Let's start by loading everything using pandas.
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(path/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
To get an idea of the amount of data available, let's just look at the length of the training and test tables.
len(train), len(test)
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
So we have more than one million records available. Let's have a look at what's inside:
train.head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
The Store column contains the id of the stores, then we are given the id of the day of the week, the exact date, whether the store was open on that day, whether a promotion was running in that store on that day, and whether it was a state or school holiday. The Customers column is given as an indication, and the Sales column is ...
test.head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
The other table given by Kaggle contains some information specific to the stores: their type, what the competition looks like, whether they are engaged in a permanent promotion program, and if so, since when.
store.head().T
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
Now let's have a quick look at our four additional dataframes. store_states just gives us the abbreviated name of the state of each store.
store_states.head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
We can match them to their real names with state_names.
state_names.head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
Which is going to be necessary if we want to use the weather table:
weather.head().T
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
Lastly the googletrend table gives us the trend of the brand in each state and in the whole of Germany.
googletrend.head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
Before we apply the fastai preprocessing, we will need to join the store table and the additional ones with our training and test table. Then, as we saw in our first example in chapter 1, we will need to split our variables between categorical and continuous. Before we do that, though, there is one type of variable tha...
def join_df(left, right, left_on, right_on=None, suffix='_y'):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", suffix))
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0
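A toy illustration of the left-join behavior this helper wraps, using made-up frames rather than the Rossmann data:

```python
import pandas as pd

left = pd.DataFrame({'store': [1, 2, 3], 'sales': [10, 20, 30]})
right = pd.DataFrame({'store': [1, 2], 'state': ['BY', 'HE']})

# how='left' keeps every row of the left frame; rows with no match get NaN
joined = left.merge(right, how='left', left_on='store', right_on='store',
                    suffixes=("", "_y"))
print(joined)
```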
First, let's replace the state names in the weather table by the abbreviations, since that's what is used in the other tables.
weather = join_df(weather, state_names, "file", "StateName")
weather[['file', 'Date', 'State', 'StateName']].head()
dev_nbs/course/rossman_data_clean.ipynb
fastai/fastai
apache-2.0