Parameters
# modulation scheme and constellation points
M = 2
constellation_points = [ 0, 1 ]

# symbol time and number of symbols
t_symb = 1.0
n_symb = 100

# parameters of the RRC filter
beta = .33
n_up = 8  # samples per symbol
syms_per_filt = 4  # symbols per filter (plus minus in both directions)
K_filt = 2...
nt1/vorlesung/3_mod_demod/pulse_shaping.ipynb
kit-cel/wt
gpl-2.0
Signals and their spectra
# get RC pulse and rectangular pulse,
# both being normalized to energy 1
rc = get_rc_ir( K_filt, n_up, t_symb, beta )
rc /= np.linalg.norm( rc )

rect = np.append( np.ones( n_up ), np.zeros( len( rc ) - n_up ) )
rect /= np.linalg.norm( rect )

# get pulse spectra
RC_PSD = np.abs( np.fft.fftshift( np.fft.fft( rc, N_f...
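The unit-energy normalization applied to both pulses above can be sketched in plain Python (the notebook itself uses `np.linalg.norm`); a toy illustration, not the notebook's code:

```python
import math

# Minimal sketch of normalizing a pulse to energy 1, i.e. sum(x**2) == 1.
def normalize_energy(pulse):
    norm = math.sqrt(sum(x * x for x in pulse))
    return [x / norm for x in pulse]

# toy rectangular pulse, 8 samples per symbol
rect = normalize_energy([1.0] * 8 + [0.0] * 8)
print(sum(x * x for x in rect))  # energy is now 1
```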
Real data-modulated Tx-signal
# number of realizations along which to average the psd estimate
n_real = 10

# initialize two-dimensional field for collecting several realizations along which to average
S_rc = np.zeros( (n_real, N_fft ), dtype=complex )
S_rect = np.zeros( (n_real, N_fft ), dtype=complex )

# loop for multiple realizations in orde...
Plotting
plt.subplot(221)
plt.plot( np.arange( np.size( rc ) ) * t_symb / n_up, rc, linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( rect ) ) * t_symb / n_up, rect, linewidth=2.0, label='Rect' )
plt.ylim( (-.1, 1.1 ) )
plt.grid( True )
plt.legend( loc='upper right' )
#plt.title( '$g(t), s(t)$' )
plt.ylabel('$g(t...
Change the working directory to the location where the test data are stored.
cd ~/Documents/code_projects/OpticalRS/tests/data/
docs/notebooks/demos/KNNDepthDemo.ipynb
jkibele/OpticalRS
bsd-3-clause
Create a depth estimator object with the denoised WorldView-2 imagery sample and a raster of known depths.
de = DepthEstimator('eReefWV2_denoised.tif', 'BPS_adj_depth_raster_masked.tif')
Use the DepthEstimator object to estimate the depths:
depthest = de.knn_depth_estimation(k=5)
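A minimal pure-Python sketch of the k-nearest-neighbours regression idea behind `knn_depth_estimation(k=5)`: each pixel's depth is estimated as the mean depth of the k spectrally nearest training pixels. The helper name and toy data here are made up for illustration; OpticalRS's actual implementation may differ.

```python
# Illustrative sketch only, not the OpticalRS implementation.
def knn_depth(pixel, training_pixels, training_depths, k=5):
    """Estimate depth as the mean depth of the k spectrally nearest training pixels."""
    order = sorted(
        range(len(training_pixels)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(pixel, training_pixels[i])),
    )
    nearest = order[:k]
    return sum(training_depths[i] for i in nearest) / k

# toy two-band "spectra" with known depths
train_px = [(0.1, 0.2), (0.1, 0.3), (0.9, 0.8), (0.8, 0.9), (0.2, 0.2)]
train_z = [2.0, 2.5, 10.0, 11.0, 2.2]
print(knn_depth((0.12, 0.22), train_px, train_z, k=3))
```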
Display an RGB version of the multispectral imagery with the raster of known depths overlaid, then display the estimated depths. This step is for display only; it is not necessary for depth estimation.
fig, (ax1, ax2) = subplots(1, 2, figsize=(12,8))
rgbarr = MSExposure.equalize_adapthist(de.imarr[...,[3,2,1]], clip_limit=0.02)
ax1.imshow(rgbarr)
mapable = ax1.imshow(de.known_depth_arr)
ax1.set_axis_off()
ax1.set_title("RGB Image and Known Depths")
blah = colorbar(mapable, ax=ax1, label='Depth (m)', cmap=mpl.cm.Spect...
Calculate root mean square error (RMSE) and plot estimated depths against measured depths. Because the same measured depths are being used for both training and evaluation, the RMSE will be better (lower) than it should be. In practice, the training and evaluation (test) sets should be different.
err = de.known_depth_arr - depthest
rmse = np.sqrt((err**2).mean())

scatter(de.known_depth_arr, depthest, edgecolor='none', alpha=0.3)
ax = gca()
titletxt = "Estimate Self RMSE: {:.2}m".format(rmse)
ax.set_title(titletxt)
ax.set_xlabel("Measured Depths (m)")
ax.set_ylabel("KNN Estimated Depths (m)")
ax.set_aspect('equ...
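The RMSE calculation can be sketched on a handful of (measured, estimated) depth pairs rather than full rasters; the numbers below are made up for illustration:

```python
import math

# Toy (measured, estimated) depth pairs in metres.
measured = [2.0, 3.5, 5.0, 7.2]
estimated = [2.2, 3.1, 5.4, 7.0]

# RMSE: square-root of the mean squared error.
errors = [m - e for m, e in zip(measured, estimated)]
rmse = math.sqrt(sum(err ** 2 for err in errors) / len(errors))
print(rmse)
```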
Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)

# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
                  num_iters=1000, batch_size=200,
                  learning_rate=1e-3, learning_rate_decay=0.95,
                  reg=0.25, verbose=Tr...
assignment1/two_layer_net.ipynb
halimacc/CS231n-assignments
unlicense
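The exponential learning-rate schedule described above can be sketched in a few lines; `lr0` and `decay` stand in for `learning_rate` and `learning_rate_decay`:

```python
# Sketch of an exponential learning-rate schedule: after each epoch
# the rate is multiplied by a fixed decay factor.
def lr_schedule(lr0, decay, epochs):
    rates, lr = [], lr0
    for _ in range(epochs):
        rates.append(lr)
        lr *= decay
    return rates

rates = lr_schedule(1e-3, 0.95, 3)
print(rates)
```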
Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity...
best_net = None  # store the best model into this

#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net.                                                            #
# ...
Now some examples --- please read this code first.
x = np.linspace(-6, 6, 200)
y = np.minimum(np.sin(x), np.cos(x))

fig, ax = plt.subplots()
ax.plot(x, y)
plt.show()
John/plots.ipynb
QuantEcon/phd_workshops
bsd-3-clause
Here's a plot with two curves.
y1 = np.minimum(np.sin(x), np.cos(x))
y2 = np.maximum(np.sin(x), np.cos(x))

fig, ax = plt.subplots()
ax.plot(x, y1, label='y1')
ax.plot(x, y2, label='y2')
ax.legend()
plt.show()
Exercise 1
Plot the tent map
$$ f(x) = \mu \min\{x, 1 - x\} $$
on the interval $[0, 1]$ when $\mu = 1.8$.
for i in range(15):
    print("solution below")

μ = 1.8
x = np.linspace(0, 1, 200)
y = μ * np.minimum(x, 1 - x)

fig, ax = plt.subplots()
ax.plot(x, y)
plt.show()
Exercise 2
The following example makes an empty array of length 3 and then fills it with the numbers 0.5, 1.0, 0.5:
x = np.empty(3)
x[0] = 0.5
x[1] = 1.0
x[2] = 0.5
x
The next code shows what the range() function does:
for i in range(5):
    print(i)
Now compute the time series of length 500 given by $$ x_{t+1} = f(x_t) $$ where $f$ is as given above and $x_0 = 0.2$. Plot $x_t$ against $t$.
for i in range(15):
    print("solution below")

n = 500
x = np.empty(n)
x[0] = 0.2
for t in range(n-1):
    x[t+1] = μ * min(x[t], 1 - x[t])

fig, ax = plt.subplots()
ax.plot(range(n), x)
plt.show()
Exercise 3 The next code shows how to build a histogram.
z = np.random.randn(1000)

fig, ax = plt.subplots()
ax.hist(z, bins=40)
plt.show()
Now recompute the time series from the tent map, this time with $n=500,000$, and histogram it.
for i in range(15):
    print("solution below")

n = 500_000  # series length requested in the exercise
x = np.empty(n)
x[0] = 0.2
for t in range(n-1):
    x[t+1] = μ * min(x[t], 1 - x[t])

fig, ax = plt.subplots()
ax.hist(x, bins=40)
plt.show()
Create the Streaming context with the configuration: "TwitterTrend" as the program name, 10 seconds as the batch interval, and two execution threads. Likewise, create the Spark SQL context instance.
# Spark context
sc =

# Spark Streaming context
ssc =

# SQL context
sqlContext =
spark_streaming_class/TwitterTrends_Socket/Twitter_Trends_Ejercicio.ipynb
israelzuniga/spark_streaming_class
mit
Once the contexts are instantiated, we connect to the data source:
'''
Specify the data source:
https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.socketTextStream
Hostname: localhost
Port: 5555
'''
socket_stream =
Process the data stream
Each tweet/message received will be split into words (flatMap()). From each record generated, the words that start with the # symbol will be filtered. Those records will then be converted to lowercase, a map action to (word, 1) is applied, and then the (key,... pairs are reduced...
# Your code here
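The per-batch logic described above can be mimicked in plain Python; this is only a sketch of the transformation chain (in the exercise itself these steps are the DStream operations flatMap, filter, map and reduceByKey), not the PySpark solution:

```python
from collections import Counter

# Pure-Python sketch of the pipeline described above.
def count_hashtags(tweets):
    words = [w for tweet in tweets for w in tweet.split()]   # flatMap: tweets -> words
    tags = [w.lower() for w in words if w.startswith('#')]   # filter '#', then lowercase
    return Counter(tags)                                     # map to (word, 1) + reduceByKey

batch = ["Hola #Spark #STREAMING", "me gusta #spark"]
print(count_hashtags(batch))
```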
Plot the trends
We can change the time.sleep() values, together with the time-window operation, for shorter updates.
for i in range(0,1000):
    time.sleep( 20 )
    top_10_tweets = sqlContext.sql( 'Select tag, count from tweets' )
    top_10_df = top_10_tweets.toPandas()
    display.clear_output(wait=True)
    plt.figure( figsize = ( 10, 8 ) )
    sn.barplot( x="count", y="tag", data=top_10_df)
    plt.show()

ssc.stop()
Next, we plot a graph showing the ratio between consecutive Fibonacci numbers and the ratio between a distance measured in kilometers and the corresponding distance in miles.
def plot_fibo_ratio(series):
    ratios = []
    for i in range(len(series)-1):
        ratios.append(series[i+1]/series[i])
    plt.plot(ratios, 'b*')
    plt.ylabel('Ratio')
    plt.xlabel('No.')

def plot_km_miles_ratio(kms):
    miles_km = [1.6094*km/km for km in kms]
    plt.plot(miles_km, 'ro')

num = 100
series...
misc/fibonacci_miles_km/Fibonacci Miles and Kilometers.ipynb
doingmathwithpython/moremathwithpython
mit
The above graph shows that the ratio between consecutive Fibonacci numbers and a distance measurement in kilometer and mile is close to being the same (~ 1.6), popularly referred to as the Golden Ratio. Next, we plot a graph showing the approximated distance in kilometers using the idea in the Reddit thread above and t...
def estimate_kms(miles):
    approx_kms = []
    exact_kms = [1.6094*m for m in miles[1:]]
    for i in range(len(series)-1):
        approx_kms.append(series[i]+series[i+1])
    plt.figure(2)
    plt.plot(approx_kms, exact_kms, 'ro')
    plt.title('Approximating kilometers using fibonacci')

series = fibo(num)
estima...
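Why the trick works can be checked numerically: consecutive Fibonacci ratios converge to the golden ratio (~1.618), which happens to be close to the mile-to-kilometre factor 1.6094. A self-contained sketch with its own `fibo` helper (the notebook's helper of the same name may differ):

```python
# The ratio of consecutive Fibonacci numbers converges to the
# golden ratio (1 + sqrt(5)) / 2 ~ 1.618, close to 1.6094 km/mile.
def fibo(n):
    series = [1, 1]
    while len(series) < n:
        series.append(series[-1] + series[-2])
    return series

series = fibo(12)
ratios = [series[i + 1] / series[i] for i in range(len(series) - 1)]
print(ratios[-1])  # close to (1 + 5 ** 0.5) / 2
```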
Step 1: Record the video
Record the video on the robot.
Step 2: Analyse the images from the video
For now we just selected 4 images from the video.
import cv2
import matplotlib.pyplot as plt

img1 = cv2.imread('secchi/secchi1.png')
img2 = cv2.imread('secchi/secchi2.png')
img3 = cv2.imread('secchi/secchi3.png')
img4 = cv2.imread('secchi/secchi4.png')

figures = []
fig = plt.figure(figsize=(18, 16))
for i in range(1,13):
    figures.append(fig.add_subplot(4,3,i)...
notebooks/opencv/secchi.ipynb
cloudmesh/book
apache-2.0
Imports, logging, and data
On top of doing the things we already know, we now additionally import the CollaborativeFiltering algorithm, which is, as should be obvious by now, accessible through the bestPy.algorithms subpackage.
from bestPy import write_log_to
from bestPy.datastructures import Transactions
from bestPy.algorithms import Baseline, CollaborativeFiltering  # Additionally import CollaborativeFiltering

logfile = 'logfile.txt'
write_log_to(logfile, 20)

file = 'examples_data.csv'
data = Transactions.from_csv(file)
examples/04.2_AlgorithmsCollaborativeFiltering.ipynb
yedivanseven/bestPy
gpl-3.0
Creating a new CollaborativeFiltering object with data Again, this is as straightforward as you would expect. This time, we will attach the data to the algorithm right away.
recommendation = CollaborativeFiltering().operating_on(data)
recommendation.has_data
Parameters of the collaborative filtering algorithm Inspecting the new recommendation object with Tab completion again reveals binarize as a first attribute.
recommendation.binarize
It has the same meaning as in the baseline recommendation: True means we only care whether or not a customer bought an article and False means we also take into account how often a customer bought an article. Speaking about baseline, you will notice that the recommendation object we just created actually has an attribu...
recommendation.baseline
Indeed, collaborative filtering cannot necessarily provide recommendations for all customers. Specifically, it fails to do so if the customer in question only bought articles that no other customer has bought. For these cases, we need a fallback solution, which is provided by the algorithm specified through the baselin...
recommendation.baseline = Baseline()
recommendation.baseline
More about that later. There is one more parameter to be explored first.
recommendation.similarity
In short, collaborative filtering (as it is implemented in bestPy) works by recommending articles that are most similar to the articles the target customer has already bought. What exactly similar means, however, is not set in stone and quite a few similarity measures are available. + Dice (dice) + Jaccard (jaccard) + ...
from bestPy.algorithms.similarities import dice, jaccard, sokalsneath, russellrao, cosine, cosine_binary

recommendation.similarity = dice
recommendation.similarity
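For two articles, Dice and Jaccard compare the sets of customers who bought each of them. A set-based sketch of these two measures (bestPy itself computes them on sparse purchase matrices, so this is only the textbook formulation with made-up buyer sets):

```python
# Jaccard: |A & B| / |A | B|; Dice: 2|A & B| / (|A| + |B|).
def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

buyers_x = {'anna', 'bob', 'carol'}
buyers_y = {'bob', 'carol', 'dave', 'eve'}
print(jaccard(buyers_x, buyers_y))  # 2 shared of 5 distinct buyers -> 0.4
print(dice(buyers_x, buyers_y))
```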
And that's it for the parameters of the collaborative filtering algorithm. Making a recommendation for a target customer Now that everything is set up and we have data attached to the algorithm, its for_one() method is available and can be called with the internal integer index of the target customer as argument.
customer = data.user.index_of['5']
recommendation.for_one(customer)
And, voilà, your recommendation. Again, a higher number means that the article with the same index as that number is more highly recommended for the target customer. To appreciate the necessity for this fallback solution, we try to get a recommendation for the customer with ID '4' next.
customer = data.user.index_of['4']
recommendation.for_one(customer)
Dora R. is a great investigator with access to a large database that she is always checking; she can add to it and take from it at any time. Her database is called "A gazeta de Geringontzan" - or simply "agazeta.db".
conn = sqlite3.connect('agazeta.db') c = conn.cursor()
TobParser.ipynb
Scoppio/a-gazeta-de-geringontzan
mit
Dora R. has a setup: in case she is creating a new database, she enters those small keys to have some meat in her sandwich. Also, she is going to run her investigation skills to understand what is happening in people's histories, but first things have to work. Dora starts by seeing if she can get back the first 2 a...
try:
    c.execute('SELECT * FROM apikeys LIMIT 1')
except Exception as e:
    c.execute(
        'CREATE TABLE apikeys (id integer primary key, username text, token text, working integer, email text, subscribed integer)')

print('Look! It is working! I have the top two api usernames and tokens from a gazeta!')
#for row in c.exec...
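As an aside, SQLite's `IF NOT EXISTS` clause gives the same create-on-first-run behaviour without the try/except dance. A sketch against an in-memory database (separate names so it does not touch Dora's connection):

```python
import sqlite3

# CREATE TABLE IF NOT EXISTS silently skips creation when the table exists.
demo_conn = sqlite3.connect(':memory:')
demo_c = demo_conn.cursor()
demo_c.execute('CREATE TABLE IF NOT EXISTS apikeys '
               '(id integer primary key, username text, token text, '
               'working integer, email text, subscribed integer)')
# Running it again is a harmless no-op instead of an OperationalError.
demo_c.execute('CREATE TABLE IF NOT EXISTS apikeys (id integer primary key)')
demo_c.execute("INSERT INTO apikeys (username, token, working) VALUES ('dora', 'abc123', 1)")
print(demo_c.execute('SELECT username FROM apikeys').fetchone())
```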
She is very excited, it is working! First thing she grabs her pen and paper and starts to look for special things on the api's, maybe she would find something interesting.
APIkey = namedtuple('APIkey', ['username', 'token'])

# note: this sets class attributes, so only the last row is kept
for row in c.execute('SELECT username, token FROM apikeys WHERE working == 1'):
    APIkey.username = row[0]
    APIkey.token = row[1]
She gets everything ready with the last of the usernames and tokens.
get = 'https://trackobot.com/profile/history.json?page={page}&username={username}&token={token}'.format(page=1, username=APIkey.username, ...
The file has to be loaded from a JSON source online. Setting up the system to read JSON files from the web:
import urllib import json
Dora R. hates when her data comes with a "u" in front; it gets messy and hard to read, so what follows is pretty much manicure.
def json_load_byteified(file_handle):
    return _byteify(
        json.load(file_handle, object_hook = _byteify),
        ignore_dicts = True
    )

def json_loads_byteified(json_text):
    return _byteify(
        json.loads(json_text, object_hook = _byteify),
        ignore_dicts = True
    )

def _byteify( data, ignore_dic...
Dora R. now wants to see what is inside the 'data' json, its keys and values
data.keys()
data['meta'].keys()
data['meta']
This is the best thing she could find! Here Dora can see how many pages there are, so she can use it to 'spider' around the data. Her eyes shine when she sees all that many possibilities, all those secrets!
data['history'][0].keys()
With 'history' she sees that here lies what is really important: the games! Now she will run her small trick that returns the number of turns a game had.
def total_turns(data, j):
    a = 0
    for i in range(len(data['history'][j]['card_history'])):
        a = data['history'][j]['card_history'][i]['turn'] if data['history'][j]['card_history'][i]['turn'] > a else a
    return a
If she wanted to see who won and with which deck, she would need to do this...
print(data['history'][0]['id'], '-', data['history'][0]['added'], data['history'][0]['mode'])
print(data['history'][0]['hero'], data['history'][0]['hero_deck'], 'vs', data['history'][0]['opponent'], data['history'][0]['opponent_deck'])
print('First player' if not data['history'][0]['coin'] else 'Second player')
print('Turn...
Dora R. is also curious about the cards that were used during the game
data['history'][0]['card_history']
Now Dora is much more confident, she wants to get all the ranked games from this data and this page!
# Only ranked games
for i in range(len(data['history'])):
    if data['history'][i]['mode'] != 'ranked':
        continue
    print(data['history'][i]['id'], '-', data['history'][i]['added'])
    print(data['history'][i]['hero'], data['history'][i]['hero_deck'], 'vs', data['history'][i]['opponent'], data['history'][i]['o...
It all seems perfect, with just one problem: the datetime is weird... she will have to convert it before being able to add it to the database.
data['history'][0]['added']

from datetime import datetime
datetime.strptime('2016-12-02T02:29:09.000Z', '%Y-%m-%dT%H:%M:%S.000Z')
Dora R. prefers to work with POSIX dates; she says they are better for making searches in the tables... maybe she does not know of the Julian date calendar.
import time

d = datetime.strptime(data['history'][0]['added'], '%Y-%m-%dT%H:%M:%S.000Z')
print(str(int(time.mktime(d.timetuple()))) + '!!!',
      "POSIX seconds! This will help a lot when I try to find my data in time windows!")
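One caveat worth noting: `time.mktime()` interprets the time tuple as local time, while the trailing `Z` in these stamps marks UTC, so the stored POSIX seconds shift by the machine's UTC offset. A sketch of a UTC-correct conversion using only the standard library:

```python
import calendar
from datetime import datetime, timezone

stamp = '2016-12-02T02:29:09.000Z'
parsed = datetime.strptime(stamp, '%Y-%m-%dT%H:%M:%S.000Z')

# calendar.timegm() treats the tuple as UTC (unlike time.mktime(), which
# assumes local time); an aware datetime gives the same epoch value.
posix_utc = calendar.timegm(parsed.timetuple())
aware = parsed.replace(tzinfo=timezone.utc)
print(posix_utc, int(aware.timestamp()))  # identical values
```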
Hell yes! Her tricks do work perfectly; now she can enter the date and time in a more sane order in the database!
Creating the games database in A Gazeta
Dora R. has enough to be sure that it is possible to get the data she wants out of the api; now she needs to add it into the database. The API and Keys table is s...
try:
    c.execute('SELECT * FROM archive LIMIT 1')
    print('Yeap, working')
except Exception as e:
    c.execute('''CREATE TABLE archive
                 (id integer primary key, matchid integer, date_posix integer, rank integer,
                  hero text, hero_deck text, opponent_hero text, opponent_deck text, coin...
The setup is good; now Dora R. must take care that she does not add duplicated games to the archive. First she will make a process that runs through the apikeys, captures the games for each key, and pipelines them to another process that writes them into the archive.
querie = 'SELECT username, token FROM apikeys WHERE working == 1'

def api_getter (verbose=False, limit = 0):
    APIkey = namedtuple('APIkey', ['username', 'token'])
    querie = 'SELECT username, token FROM apikeys WHERE working == 1'
    if limit:
        querie = 'SELECT username, token FROM apikeys WHERE working =...
Maexxna is ready to run, Dora R. makes a last check to see if it can return stuff properly.
from datetime import timedelta

d = datetime.today() - timedelta(days=7)
for i in Maexxna(from_date=d):
    print(len(i))
print('Done')
'It is working perfectly!!!' Dora yells. Her function works and returns what she needs, although it may return an "empty" history, but that's something she can deal with without much problem. Now she must add that information to the archive.
import pandas as pd

def Starseeker(verbose = False, iterator = False):
    def _turns(data):
        a = 0
        for i in range(len(data)):
            a = data[i]['turn'] if data[i]['turn'] > a else a
        return a

    for files in Maexxna(from_date = datetime.today() - timedelta(days=7)):
        df = pd...
Note that time periods which are relatively stationary (e.g. times 0-4 in the above figure) will be compressed more heavily than time periods where the value changes often (e.g. times 7-12 in the above figure). Furthermore, sensor output resolution affects the compression rate. Imagine if the step size was 0.5 units in...
sig2 = pd.DataFrame(data={'Time':range(17),
                          'VarA':[1,None,None,None,1,2,None,2,3,2,3,3,2,None,None,None,2],
                          'VarB':[4,5,None,None,None,None,5,6,None,None,None,6,4,5,6,None,6]})

plt.figure(1)
#Plot 'raw' Variable A using blue dots
ax = sig2.plot(x='Time',y='VarA',c='b...
20161017_TimeAligning1.ipynb
drericstrong/Blog
agpl-3.0
<div class="alert alert-info" style="font-size:120%"> <b>REMEMBER</b>: <br><br> Advanced indexing with **loc** and **iloc** * **loc**: select by label: `df.loc[row_indexer, column_indexer]` * **iloc**: select by position: `df.iloc[row_indexer, column_indexer]` </div> <div class="alert alert-success"> <b>EXERCISE 1<...
# %load _solutions/pandas_03b_indexing1.py
notebooks/pandas_03b_indexing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> <b>EXERCISE 2</b>: <ul> <li>Select the capital and the population column of those countries where the density is larger than 300</li> </ul> </div>
# %load _solutions/pandas_03b_indexing2.py
<div class="alert alert-success"> <b>EXERCISE 3</b>: <ul> <li>Add a column 'density_ratio' with the ratio of the population density to the average population density for all countries.</li> </ul> </div>
# %load _solutions/pandas_03b_indexing3.py
<div class="alert alert-success"> <b>EXERCISE 4</b>: <ul> <li>Change the capital of the UK to Cambridge</li> </ul> </div>
# %load _solutions/pandas_03b_indexing4.py
<div class="alert alert-success"> <b>EXERCISE 5</b>: <ul> <li>Select all countries whose population density is between 100 and 300 people/km²</li> </ul> </div>
# %load _solutions/pandas_03b_indexing5.py
<div class="alert alert-success"> <b>EXERCISE 6</b>: <ul> <li>Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers. Do this now using `.loc`.</li> </ul> </div>
# %load _solutions/pandas_03b_indexing6.py # %load _solutions/pandas_03b_indexing7.py
x is a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be ...
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Implement Softmax Regression
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Implementing Cross entropy to calculate the loss/error
y_ = tf.placeholder(tf.float32, [None, 10])  # a placeholder to input the correct answers
cross_entropy = tf.reduce_mean...
classification/MNIST_tf.ipynb
rishuatgithub/MLPy
apache-2.0
Execution of the Model in Session
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Evaluating the model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

mnist.test.images
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
with open('anna.txt', 'r') as f:
    text = f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

# print('Vocab')
# print(vocab)
# print('enumeration')
# for i in enumerate(vocab):
#     print...
intro-to-rnns/Anna KaRNNa.ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
print(np.max(chars)+1) print((np.unique(chars)).size)
Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. Pick the number of bins for the histogram appropriately.
# YOUR CODE HERE
mass = data[:,2]
plt.hist(mass, range=(0,14), bins=30)
plt.xlabel('Mass of Planet')
plt.ylabel("Number of planets")
plt.title("Histogram Showing Planetary Masses")

assert True # leave for grading
assignments/assignment04/MatplotlibEx02.ipynb
JackDi/phys202-2015-work
mit
Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data.
plt.scatter(data[:,5], data[:,6])
plt.xscale('log')
plt.xlim(0.01,10)
plt.ylim(0.0,1.0)
plt.xlabel("Semimajor Axis (AU)")
plt.ylabel("Orbital Eccentricity")
plt.title("Orbital Eccentricity vs. Semimajor Axis")

assert True # leave for grading
Preliminary Analysis
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df...
hospital_readmit/sliderule_dsi_inferential_statistics_exercise_3.ipynb
zczapran/datascienceintensive
mit
Preliminary Report Read the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform. A. Initial observations based on the plot above + Overall, rate of readmissions...
df = clean_hospital_read_df
low_discharges = df[df['Number of Discharges'] < 100]['Excess Readmission Ratio'].dropna()
high_discharges = df[df['Number of Discharges'] > 1000]['Excess Readmission Ratio'].dropna()
(len(low_discharges.index), len(high_discharges.index))

low_mean = low_discharges.mean()
high_mean = high_d...
Z-score is 7.6 which gives us p-value << 0.001%. (There is <0.001% chance (two-tailed distribution) of sampling such difference or greater) Report statistical significance for α = .01. In order to report statistical significance for α = .01, I've retrieved expected Z-value for a two-tailed distribution and p = 0.995 ...
pd.DataFrame({'low': low_discharges, 'high': high_discharges}).plot.hist(alpha=0.5, bins=20)
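The two-tailed critical value used above for α = .01 can be reproduced with the standard library; it is the 0.995 quantile of the standard normal:

```python
from statistics import NormalDist

# Two-tailed critical z for alpha = 0.01: split alpha across both tails,
# so take the inverse CDF at 1 - alpha/2 = 0.995.
alpha = 0.01
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
print(round(z_crit, 3))  # 2.576
```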
Let us load the reconstructed temperature and $CO_2$ over the last 800,000 years, then plot them.
f1 = pd.read_csv('edc3deuttemp2007.txt',delim_whitespace=True,skiprows=91) f1
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Load the reconstructed carbon dioxide concentration in the last 800,000 years.
f2 = pd.read_csv('edc-co2-2008.txt',delim_whitespace=True,skiprows=773)
f2

plt.style.use('ggplot')
fig, ax = plt.subplots(2,sharex=True,figsize=(10,6))
ax[0].plot(f1['Age']/1000,f1['Temperature']-54.5)
# Make the y-axis label, ticks and tick labels match the line color.
ax[0].set_ylabel(r'Temperature ($^{\circ}$C)') ...
Question 1: What is the change in global air temperature between glacial and interglacial periods? Answer 1: Question 2: When did interglacial periods occur? What is their average tempo (aka cyclicity)? Answer 2: Question 3: Let us define interglacials as periods when the temperature difference was at or above 0, on a...
# interpolate co2 and temperature records to the same age to do correlation
# NOTE: Pandas has built-in tools for that
from scipy.interpolate import interp1d

f_t = interp1d(f1['Age']/1000,f1['Temperature']-54.5,bounds_error=False)
f_co2 = interp1d(f2['Age(yrBP)']/1000,f2['CO2(ppmv)'],bounds_error=False)

# age step: ...
We can first plot the scatterplot to see the relationship between temperature and CO$_2$ concentration.
# combine temperature and co2 to a dataframe first
d = {'temperature': temp_interp, 'CO2': co2_interp, 'age': age}
df = pd.DataFrame(data=d)

# plot scatterplot
sns.jointplot(x="temperature", y="CO2", data=df, kind = "hex")
Now calculate the correlation.
# lag-0 correlation (the column was created as 'CO2', so 'co2' would raise a KeyError)
df['temperature'].corr(df['CO2'])
You all know that correlation is not causation. However, in cases where one variable (A) causes changes in another (B), one often sees A lead the changes in B. Let us hunt for leads in the relationship between temperature and $CO_2$. We do this via lag-correlations, which are simply correlations between A and lagged c...
# lag correlations (CO2 leading up to 5 kyr through lagging up to 5 kyr, on the 100-yr age grid)
lags = np.arange(50,-51,-1)
lag_corr = np.zeros(len(lags))
for i in np.arange(len(lags)):
    lag_corr[i] = df['temperature'].corr(df['CO2'].shift(lags[i]))

plt.plot(lags*100,lag_corr)
plt.xlabel(r'CO$_2$ lags temperature (years)')
plt.ylabel('correlation')
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Note: positive x values mean CO$_2$ lags temperature, and negative x values mean CO$_2$ leads temperature. Question 6a: At which point does the correlation reach a maximum? What does that mean? On these timescales, is $CO_2$ a forcing or a feedback in climate change? Answer 6a: We can also graphically estima...
fig, ax1 = plt.subplots(figsize=(10,6))
ax1.plot(f1['Age']/1000,f1['Temperature'],color='C0')

# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_xlabel('1000 years before present')
ax1.set_ylabel(r'Temperature ($^{\circ}$C)',color='C0')

# zoom in 120 kyr - 140 kyr
ax1.set_xlim(120,140)
ax1.tic...
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Question 6b: Does this agree with what you obtained in 6a? Answer 6b: 2. Climate of the Common Era The Common Era refers to the past 2,000 years of Earth's history. As we will see in class, one can use data from all manner of sources (tree rings, corals, ice cores, lake and marine sediments, etc.) to piece together some...
ddict = pd.read_pickle('hakim_lmr_jgra_2016_figure_2.pckl')

years = ddict['recon_times']
lmr_nht = ddict['sanhmt']
lmc = ddict['lmc']
nhmt_max = ddict['nhmt_max']
nhmt_min = ddict['nhmt_min']
xvar = ddict['xvar']
lmr_trend = ddict['lmr_trend']
offset = ddict['offset']
allyears = ddict['allyears']
ipcc_mean = ddict['ip...
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
The code below chops the reconstruction into blocks of length "segment_length", stepping forward by "step" at a time, and finds the mean and trend over each interval.
def means_and_slopes(variable,segment_length,step,years):
    # This function calculates the means and slopes for the specified segments. Outputs:
    #   segment_means:      Means of every segment.
    #   segment_slopes:     Slopes of every segment.
    #   segment_intercepts: Y-intercepts of every segment, for p...
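Since the function body is truncated here, a simplified self-contained sketch of the same segmenting idea (hypothetical helper, not the notebook's exact code):

```python
import numpy as np

# Simplified sketch of the segmenting idea: mean and least-squares
# slope per sliding block of the series.
def segment_means_slopes(values, years, segment_length, step):
    means, slopes, starts = [], [], []
    for i0 in range(0, len(values) - segment_length + 1, step):
        seg_y = values[i0:i0 + segment_length]
        seg_t = years[i0:i0 + segment_length]
        slope, intercept = np.polyfit(seg_t, seg_y, 1)  # degree-1 fit
        means.append(seg_y.mean())
        slopes.append(slope)
        starts.append(seg_t[0])
    return np.array(means), np.array(slopes), np.array(starts)

# Toy check: a purely linear series has the same slope in every block.
years = np.arange(200)
vals = 0.01 * years
m, s, t0 = segment_means_slopes(vals, years, segment_length=50, step=25)
print(np.allclose(s, 0.01))  # → True
```

`np.polyfit(..., 1)` returns `(slope, intercept)` for a straight-line fit, which is all the block-trend analysis needs.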
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Let's plot the reconstruction and the fitted trend lines.
plt.figure(figsize=(10,6))
plt.plot(years,lmr_nht,'k-',linewidth=2,label='LMR '+lmc,alpha=.5)

for i in range(len(segment_idxs)):
    slope_all_idxs = np.arange(segment_idxs[i,0],segment_idxs[i,1]+1)
    slope_segment_years = years[slope_all_idxs]
    slope_segment_values = (slope_all_idxs*segment_slopes[i])+segment_int...
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Now we wish to see whether the latest trend is unprecedented.
plt.figure(figsize=(8,8))
ngrid = 100

gmt_blockTrend_older = segment_length*segment_slopes[0:-1].flatten()
gmt_blockTrend_newest = segment_length*segment_slopes[-1]
cutoff_date = str(years[-1]-segment_length)  # extract date string for cutoff

old_kws = {"color": 'DarkGray', "label": "pre-"+cutoff_date, "shade": True, "gri...
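Conceptually, this figure asks how the newest block trend ranks against the distribution of all earlier ones; a toy sketch of that comparison with made-up numbers (not the LMR slopes):

```python
import numpy as np

# Toy sketch: what fraction of earlier block trends is at least as large
# as the newest one? (synthetic stand-in numbers, not the LMR slopes)
rng = np.random.default_rng(2)
older_trends = rng.normal(0.0, 0.05, 500)   # stand-in for pre-cutoff trends
newest_trend = older_trends.max() + 0.1     # constructed to exceed them all

frac_as_large = (older_trends >= newest_trend).mean()
print(frac_as_large)  # → 0.0: no earlier block trend is as large
```

If that fraction is essentially zero, the newest trend lies outside the historical distribution, which is the visual message of the kernel-density plot.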
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Question 7: Does this suggest that the most recent temperature trend is unprecedented? Why? Answer 7: It turns out that there are many estimates of NHT over the past 1,000 to 2,000 years -- these estimates vary in the input data and statistical methods used. Now let's redo this analysis with other reconstructions as in n...
nh_dict = pd.read_pickle('NH1881-1980.pickle')
nh_dict
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Here we list the names of the different reconstructions. Each name represents one temperature reconstruction of the Northern Hemisphere, referenced to 1881-1980.
nh_dict.keys()
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Now try to reuse the previous code to redo the analysis with one reconstruction.
colors = plt.cm.tab20_r(np.linspace(0, 1, len(nh_dict)))

plt.figure(figsize=(10,6))
for key, col in zip(nh_dict, colors):
    plt.plot(nh_dict[key]['year'],nh_dict[key]['temp'],label=str(key),color=col,alpha=.5)
plt.legend()
plt.title("NHT Reconstructions as of 2013 (IPCC AR5, Chap 6)")
plt.xlabel('Year CE')
p...
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Question 8: Let's compare our previous result with one using one of these curves. Pick your favorite and apply the same analysis as before.
nht = nh_dict['Ju07cvm']['temp']
years = nh_dict['Ju07cvm']['year']
segment_means, segment_slopes, segment_intercepts, segment_idxs = means_and_slopes(nht,segment_length,step,years)

plt.figure(figsize=(10,6))
plt.plot(years,nht,'k-',linewidth=2,label='Ju07cvm',alpha=.5)
for i in range(len(segment_idxs)):
    slope_al...
GEOL157L/GEOL157_Lab5_paleclimate_records.ipynb
CommonClimate/teaching_notebooks
mit
Load the data from our JSON file. The data is stored as a dictionary of dictionaries in the json file. We store it that way because it's easy to add data to the existing master data file. Also, I haven't figured out how to get it in a database yet.
with open('../pipeline/data/ProcessedDay90ApartmentData.json') as g:
    my_dict2 = json.load(g)

dframe2 = DataFrame(my_dict2)
dframe2 = dframe2.T
dframe2 = dframe2[['content', 'laundry', 'price', 'dog', 'bed', 'bath', 'feet', 'long', 'parking', 'lat', 'smoking', 'getphotos', 'cat', 'hasmap', 'wheelchair', 'housing...
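The dict-of-dicts → DataFrame pattern can be sketched with a hypothetical miniature of the master file (made-up keys and fields):

```python
import json
import pandas as pd

# Hypothetical miniature of the master JSON file: outer keys are listing
# IDs, inner dicts hold the fields recorded for each listing.
raw = '{"101": {"price": 1200, "bed": 2}, "102": {"price": 950, "bed": 1}}'
my_dict = json.loads(raw)

df = pd.DataFrame(my_dict).T    # transpose: one row per listing
df = df[['price', 'bed']]       # fix the column order
print(df.shape)  # → (2, 2)
```

`pd.DataFrame` treats the outer keys as columns, so the transpose is what turns each listing into a row; selecting the column list afterwards pins down a stable column order.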
analysis/Munge-Copy1.ipynb
rileyrustad/pdxapartmentfinder
mit
Load data We know a person's age and experience; we want to be able to infer whether or not that person is a badass in their field.
df = pd.DataFrame({
    'Age':        [20,16.2,20.2,18.8,18.9,16.7,13.6,20.0,18.0,21.2,
                   25,31.2,25.2,23.8,23.9,21.7,18.6,25.0,23.0,26.2],
    'Experience': [2.3,2.2,1.8,1.4,3.2,3.9,1.4,1.4,3.6,4.3,
                   4.3,4.2,3.8,3.4,5.2,5.9,3.4,3.4,5.6,6.3],
    'Badass':     [0,0,0,0,0,0,0,0,0,0,
                   1,1,1,1,1,1,1,1,...
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
Using sklearn Fit
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(C=1e20, solver='liblinear', random_state=0)
%time model.fit(X, Y)

print(model.intercept_, model.coef_)
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
Plot Decision Boundary <details> <summary>Where does the equation come from? ↓</summary> <img src="https://i.imgur.com/YxSDJZA.png?1"> </details>
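The straight line drawn in the cell below comes from setting the model's log-odds to zero (the 0.5-probability threshold). With $x_1$ = Age and $x_2$ = Experience:

```latex
p = \sigma(b_0 + b_1 x_1 + b_2 x_2), \qquad
p = \tfrac{1}{2}
\;\Longleftrightarrow\; b_0 + b_1 x_1 + b_2 x_2 = 0
\;\Longleftrightarrow\; x_2 = -\frac{b_1}{b_2}\,x_1 - \frac{b_0}{b_2}
```

Solving the zero-log-odds condition for $x_2$ gives the slope $-b_1/b_2$ and intercept $-b_0/b_2$ used in the plotting code.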
b0 = model.intercept_[0]
b1 = model.coef_[0][0]
b2 = model.coef_[0][1]

plt.scatter(df['Age'], df['Experience'], color=colors)

# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')

# Plot using contour
_X1 = np.linspace(df['A...
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
Predict
print('Badass probability:', model.predict_proba([x])[0][1])
print('Prediction:', model.predict([x])[0])
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
From scratch Fit Source: https://github.com/martinpella/logistic-reg/blob/master/logistic_reg.ipynb
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(h, y):
    return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()

def gradientDescent(X, y, theta, alpha, epochs, verbose=True):
    m = len(y)

    for i in range(epochs):
        h = sigmoid(X.dot(theta))
        gradient = (X.T.dot(h - y)) / m
        the...
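Putting these building blocks together on a toy dataset (a sketch with made-up data mirroring the training loop, not the notebook's exact run):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy 1-D dataset: class 0 centered at -2, class 1 centered at +2.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])
X = np.column_stack([np.ones_like(x), x])   # prepend an intercept column

# Plain batch gradient descent on the logistic loss.
theta = np.zeros(2)
for _ in range(5000):
    h = sigmoid(X @ theta)
    theta -= 0.1 * (X.T @ (h - y)) / len(y)

acc = ((sigmoid(X @ theta) > 0.5) == y).mean()
print(acc > 0.9)  # → True on this easily separable toy data
```

The update `theta -= alpha * X.T @ (h - y) / m` is the same averaged gradient used in `gradientDescent` above, just inlined for a single feature plus intercept.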
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
Plot
b0 = theta[0]
b1 = theta[1]
b2 = theta[2]

plt.scatter(df['Age'], df['Experience'], color=colors)

# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
Predict
z = b0 + b1 * x[0] + b2 * x[1]
p = 1 / (1 + np.exp(-z))

print('Badass probability:', p)
print('Prediction:', (1 if p > 0.5 else 0))
docs/!ml/notebooks/Logistic Regression.ipynb
a-mt/dev-roadmap
mit
2.2 Adding Trainable Autonomous Vehicles The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward func...
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter

vehicles = VehicleParams()
vehicles.add("human",
             acceleration_controller=(IDMController, {}),
             routing_controller=(ContinuousRouter, {}),
             ...
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
2.3 Scenario Object We are finally ready to create the scenario object, as we had done in exercise 1.
scenario = LoopScenario(name="ring_example", vehicles=vehicles, net_params=net_params, initial_config=initial_config, traffic_lights=traffic_lights)
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
3. Setting up an Environment Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward simulation rollouts receive, as well as to specify the state/action ...
from flow.core.params import SumoParams

sumo_params = SumoParams(sim_step=0.1, render=False)
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
3.2 EnvParams EnvParams specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment "WaveAttenuationPOEnv", these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as ...
from flow.core.params import EnvParams

env_params = EnvParams(
    # length of one rollout
    horizon=100,
    additional_params={
        # maximum acceleration of autonomous vehicles
        "max_accel": 1,
        # maximum deceleration of autonomous vehicles
        "max_decel": 1,
        # bounds on the ranges...
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
3.3 Initializing a Gym Environment Now, we have to specify our Gym Environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments ca...
import flow.envs as flowenvs

print(flowenvs.__all__)
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined varia...
env_name = "WaveAttenuationPOEnv"

pass_params = (env_name, sumo_params, vehicles, env_params,
               net_params, initial_config, scenario)
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit
4. Setting up and Running an RL Experiment 4.1 run_task We begin by creating a run_task method, which defines various components of the RL algorithm within rllab, such as the environment, the type of policy, the policy training method, etc. We create the gym environment defined in section 3 using the GymEnv function. I...
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy
from rllab.envs.normalized_env import normalize
from rllab.envs.gym_env import GymEnv

def run_task(*_):
    env = GymEnv(
        env_name,
        ...
tutorials/tutorial04_rllab.ipynb
cathywu/flow
mit