Part D You discover that there are numbers in the text you'd like to be able to parse out for some math down the road. After stripping away the surrounding whitespace, convert the numbers to their proper numerical form. Assign them to the variables num1, num2, and num3 respectively.
line1 = ' 495.59863 \n' line2 = '\t134 ' line3 = '\n\t -5.4 \t' num1 = -1 num2 = -1 num3 = -1 ### BEGIN SOLUTION ### END SOLUTION assert num1 > 495 and num1 < 496 assert num2 == 134 assert num3 > -6 and num3 < -5
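A sketch of one possible solution: `strip()` removes the surrounding whitespace, and `float()`/`int()` perform the conversions (matching the assertions above):

```python
line1 = ' 495.59863 \n'
line2 = '\t134 '
line3 = '\n\t -5.4 \t'

# strip() removes leading and trailing whitespace; float()/int() do the casting
num1 = float(line1.strip())
num2 = int(line2.strip())
num3 = float(line3.strip())

print(num1, num2, num3)  # 495.59863 134 -5.4
```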
assignments/A1/A1_Q1.ipynb
eds-uga/csci1360e-su17
mit
Part E Take the number below, find its square root, convert it to a string, and then print it out. You must use the correct arithmetic operator for the square root, as well as the correct casting function for the string conversion. Put the result in the variable str_version and print that out.
number = 3.14159265359 str_version = "" ### BEGIN SOLUTION ### END SOLUTION assert str_version == "1.7724538509055743"
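One way to satisfy the assertion (a sketch): `** 0.5` is the arithmetic operator for the square root, and `str()` is the casting function:

```python
number = 3.14159265359

# take the square root with the ** operator, then cast the float to a string
str_version = str(number ** 0.5)
print(str_version)
```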
assignments/A1/A1_Q1.ipynb
eds-uga/csci1360e-su17
mit
Retrieving the Data
import sqlite3 db_filename = 'todo.db' with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() cursor.execute(""" select id, priority, details, status, deadline from task where project = 'pymotw' """) for row in cursor.fetchall(): task_id, priority, details, status, deadlin...
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Query Metadata The DB-API 2.0 specification says that after execute() has been called, the Cursor should set its description attribute to hold information about the data that will be returned by the fetch methods. The API specification says that the description value is a sequence of tuples containing the column name, t...
import sqlite3 db_filename = 'todo.db' with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() cursor.execute(""" select * from task where project = 'pymotw' """) print('Task table has these columns:') for colinfo in cursor.description: print(colinfo)
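The same idea against a throwaway in-memory database (the table layout here is a guess at the pymotw task schema, trimmed for brevity):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table task (id integer, priority integer, details text)')

cursor = conn.cursor()
cursor.execute('select * from task')

# description is a sequence of 7-tuples; sqlite3 fills in only the
# first item of each tuple (the column name) and leaves the rest None
columns = [colinfo[0] for colinfo in cursor.description]
print(columns)  # ['id', 'priority', 'details']
```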
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Row Object
import sqlite3 db_filename = 'todo.db' with sqlite3.connect(db_filename) as conn: # Change the row factory to use Row conn.row_factory = sqlite3.Row cursor = conn.cursor() cursor.execute(""" select name, description, deadline from project where name = 'pymotw' """) name, description,...
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Using Variables and Queries Positional Parameters
import sqlite3 import sys db_filename = 'todo.db' project_name = "pymotw" with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() query = """ select id, priority, details, status, deadline from task where project = ? """ cursor.execute(query, (project_name,)) for row in cursor...
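A self-contained version of the same pattern with an in-memory database (the one-row table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table task (id integer, details text, project text)')
conn.execute("insert into task values (1, 'write about select', 'pymotw')")

project_name = 'pymotw'
cursor = conn.cursor()
# the ? placeholder is filled, safely escaped, from the tuple passed to execute()
cursor.execute('select id, details from task where project = ?', (project_name,))
rows = cursor.fetchall()
print(rows)  # [(1, 'write about select')]
```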
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Named Parameters
import sqlite3 import sys db_filename = 'todo.db' project_name = "pymotw" with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() query = """ select id, priority, details, status, deadline from task where project = :project_name order by deadline, priority """ cursor.execute(qu...
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Bulk Loading
import csv import sqlite3 import sys db_filename = 'todo.db' data_filename = 'task.csv' SQL = """ insert into task (details, priority, status, deadline, project) values (:details, :priority, 'active', :deadline, :project) """ with open(data_filename, 'rt') as csv_file: csv_reader = csv.DictReader(csv_file) ...
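A runnable sketch of the bulk-load pattern, with in-memory stand-ins for todo.db and task.csv so it is self-contained (the csv rows are invented):

```python
import csv
import io
import sqlite3

# stand-in for task.csv; DictReader yields one dict per row
csv_text = (
    "details,priority,deadline,project\n"
    "write about select,1,2016-04-25,pymotw\n"
    "write about sqlite,2,2016-05-01,pymotw\n"
)
SQL = """
insert into task (details, priority, status, deadline, project)
values (:details, :priority, 'active', :deadline, :project)
"""

conn = sqlite3.connect(':memory:')
conn.execute("""create table task
    (details text, priority integer, status text, deadline text, project text)""")

csv_reader = csv.DictReader(io.StringIO(csv_text))
with conn:
    # executemany runs one insert per row, binding the named parameters
    # directly from each row dict
    conn.executemany(SQL, csv_reader)

count = conn.execute('select count(*) from task').fetchone()[0]
print(count)  # 2
```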
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Transactions
import sqlite3 db_filename = 'todo.db' def show_projects(conn): cursor = conn.cursor() cursor.execute('select name, description from project') for name, desc in cursor.fetchall(): print(' ', name) with sqlite3.connect(db_filename) as conn1: print('Before changes:') show_projects(conn1)...
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Discarding Changes
import sqlite3 db_filename = 'todo.db' def show_projects(conn): cursor = conn.cursor() cursor.execute('select name, description from project') for name, desc in cursor.fetchall(): print(' ', name) with sqlite3.connect(db_filename) as conn: print('Before changes:') show_projects(conn) ...
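The discard mechanism can be sketched in isolation with an in-memory database (the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table project (name text, description text)')
conn.execute("insert into project values ('pymotw', 'demo project')")
conn.commit()

cursor = conn.cursor()
cursor.execute('delete from project')  # not yet committed
conn.rollback()                        # discard the uncommitted delete

names = [name for (name, desc) in conn.execute('select name, description from project')]
print(names)  # ['pymotw'] -- the row survived the rolled-back delete
```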
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Custom Aggregation An aggregation function collects many pieces of individual data and summarizes them in some way. Examples of built-in aggregation functions are avg() (average), min(), max(), and count(). The API for aggregators used by sqlite3 is defined in terms of a class with two methods. The step() method is calle...
import sqlite3 import collections db_filename = 'todo.db' class Mode: def __init__(self): self.counter = collections.Counter() def step(self, value): print('step({!r})'.format(value)) self.counter[value] += 1 def finalize(self): result, count = self.counter.most_common(...
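Since the cell above is truncated, here is a complete minimal version of the same Mode aggregate, registered with create_aggregate() and run against an in-memory table (the sample values are invented):

```python
import collections
import sqlite3

class Mode:
    """Aggregate returning the most common value in a column."""
    def __init__(self):
        self.counter = collections.Counter()

    def step(self, value):
        # called once per row with the column value
        self.counter[value] += 1

    def finalize(self):
        # called once at the end; its return value is the query result
        result, count = self.counter.most_common(1)[0]
        return result

conn = sqlite3.connect(':memory:')
conn.execute('create table task (priority integer)')
conn.executemany('insert into task values (?)', [(1,), (2,), (2,), (3,)])

# register under the SQL name "mode", taking 1 argument
conn.create_aggregate('mode', 1, Mode)
most_common = conn.execute('select mode(priority) from task').fetchone()[0]
print(most_common)  # 2
```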
DataPersistence/sqllite.ipynb
gaufung/PythonStandardLibrary
mit
Implement the helper function which computes the probability $P_\ell^w$ that a received word $\boldsymbol{y}$ is exactly at Hamming distance $\ell$ from a codeword of weight $w$ after transmission of the zero codeword over a BSC with error probability $\delta$, with $$ P_{\ell}^w = \sum_{r=0}^{\ell}\binom{w}{\ell-r}\bi...
def Plw(n,l,w,delta): return np.sum([comb(w,l-r)*comb(n-w,r)*(delta**(w-l+2*r))*((1-delta)**(n-w+l-2*r)) for r in range(l+1)])
ccgbc/ch2_Codes_Basic_Concepts/Block_Code_Decoding_Performance.ipynb
kit-cel/lecture-examples
gpl-2.0
Show performance and some bounds illustrating the decoding performance over the BSC of a binary linear block code with generator matrix $$ \boldsymbol{G} = \left(\begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0...
# weight enumerator polynomial Aw = [0,0,0,4,6,8,8,4,1] n = 10 dmin = np.nonzero(Aw)[0][0] e = int(np.floor((dmin-1)/2)) delta_range = np.logspace(-6,-0.31,100) Pcw_range = [np.sum([Aw[w]*np.sum([Plw(n,l,w,delta) for l in range(e+1)]) for w in range(len(Aw))]) for delta in delta_range] Pcw_bound_range = [np.sum([co...
ccgbc/ch2_Codes_Basic_Concepts/Block_Code_Decoding_Performance.ipynb
kit-cel/lecture-examples
gpl-2.0
Finite-Element solution
# --------------------------------------------------------------- # Initialization of setup # --------------------------------------------------------------- nx = 20 # Number of boundary points u = np.zeros(nx) # Solution vector f = np.zeros(nx) # Source vector mu = 1 # Constant she...
06_finite_elements/fe_static_elasticity.ipynb
davofis/computational_seismology
gpl-3.0
Finite-Difference solution
# Poisson's equation with relaxation method # --------------------------------------------------------------- nt = 500 # Number of time steps iplot = 20 # Snapshot frequency # non-zero boundary conditions u = np.zeros(nx) # set u to zero du = np.zeros(nx) # du/dx f = np.zeros(nx) # forcing f[int(3*nx/4)...
06_finite_elements/fe_static_elasticity.ipynb
davofis/computational_seismology
gpl-3.0
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
# Set up code checking import os if not os.path.exists("../input/museum_visitors.csv"): os.symlink("../input/data-for-datavis/museum_visitors.csv", "../input/museum_visitors.csv") from learntools.core import binder binder.bind(globals()) from learntools.data_viz_to_coder.ex2 import * print("Setup Complete")
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Step 1: Load the data Your first assignment is to read the LA Museum Visitors data file into museum_data. Note that: - The filepath to the dataset is stored as museum_filepath. Please do not change the provided value of the filepath. - The name of the column to use as row labels is "Date". (This can be seen in cell ...
# Path of the file to read museum_filepath = "../input/museum_visitors.csv" # Fill in the line below to read the file into a variable museum_data museum_data = ____ # Run the line below with no changes to check that you've loaded the data correctly step_1.check() #%%RM_IF(PROD)%% museum_data = pd.read_csv(museum_fil...
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Step 2: Review the data Use a Python command to print the last 5 rows of the data.
# Print the last five rows of the data ____ # Your code here
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
The last row (for 2018-11-01) tracks the number of visitors to each museum in November 2018, the next-to-last row (for 2018-10-01) tracks the number of visitors to each museum in October 2018, and so on. Use the last 5 rows of the data to answer the questions below.
# Fill in the line below: How many visitors did the Chinese American Museum # receive in July 2018? ca_museum_jul18 = ____ # Fill in the line below: In October 2018, how many more visitors did Avila # Adobe receive than the Firehouse Museum? avila_oct18 = ____ # Check your answers step_2.check() #%%RM_IF(PROD)%% ...
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Step 3: Convince the museum board The Firehouse Museum claims they ran an event in 2014 that brought an incredible number of visitors, and that they should get extra budget to run a similar event again. The other museums think these types of events aren't that important, and budgets should be split purely based on rec...
# Line chart showing the number of visitors to each museum over time ____ # Your code here # Check your answer step_3.check() #%%RM_IF(PROD)%% plt.figure(figsize=(12,6)) sns.lineplot(data=museum_data) plt.title("Monthly Visitors to Los Angeles City Museums") step_3.assert_check_passed() #%%RM_IF(PROD)%% sns.lineplot...
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Step 4: Assess seasonality When meeting with the employees at Avila Adobe, you hear that one major pain point is that the number of museum visitors varies greatly with the seasons, with low seasons (when the employees are perfectly staffed and happy) and also high seasons (when the employees are understaffed and stress...
# Line plot showing the number of visitors to Avila Adobe over time ____ # Your code here # Check your answer step_4.a.check() #%%RM_IF(PROD)%% sns.lineplot(data=museum_data['Avila Adobe'], label='avila_adobe') step_4.a.assert_check_passed() #%%RM_IF(PROD)%% plt.figure(figsize=(12,6)) plt.title("Monthly Visitors to ...
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Part B Does Avila Adobe get more visitors: - in September-February (in LA, the fall and winter months), or - in March-August (in LA, the spring and summer)? Using this information, when should the museum hire additional seasonal employees?
#_COMMENT_IF(PROD)_ step_4.b.hint() # Check your answer (Run this code cell to receive credit!) step_4.b.solution()
notebooks/data_viz_to_coder/raw/ex2.ipynb
Kaggle/learntools
apache-2.0
Jupyter Notebooks Some things are specific to Jupyter Notebooks, like these: Keyboard Shortcuts Ctrl+Enter = run cell ESC = exit out of a cell Tab = autocomplete a = insert cell above b = insert cell below dd = delete cell The Jupyter Notebook files are saved on your computer, in your home directory. Read more at 28 ...
# Get help with a command by putting ? in front of it. ?print # Run file on your computer. %run 00-hello-world.py %%time # How long does the code take to execute? Put %%time at the top to find out. # Loop 10 million times. for i in range(10000000): pass
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Python as a calculator
10 + 5
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Variables Assign values to a variable. Variable names must start with a letter. No spaces allowed. Digits are OK.
Name = "John Doe" # String. Age = 40 # Integer. Height = 180.3 # Float. Married = True # Boolean (True/False). Children = ["Emma", "Thomas"] # List. # Print the contents of variable. Age # Print many at the same time. print(Name) print(Age) print(Hei...
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Change variables
# Change the value. Age = Age + 10 print(Age) # We can't add numbers to strings. We will get a "TypeError". Name + Age # We need to convert age to string. mytext = Name + str(Age) print(mytext)
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Strings and string manipulation
text = "The quick brown fox jumps over the lazy dog" # Get single characters from the text string. print(text[0]) # Get the first character print(text[4]) # Get the fifth character # Show the characters between 4th and 9th position (the word "quick"). print(text[4:9]) # Replace "dog" with "journalist". text.replace...
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
If-statements if, elif, and else Don't forget to indent Checks whether expression is True or False Comparisons age == 25 # Equal age != 25 # Does not equal age > 25 # Above 25 age < 25 # Below 25 age >= 25 # 25 or above age <= 25 # 25 or below Combine comparisons or means that a...
# First, we assign a value to the variable "Name" and "Age". Name = "John Doe" Age = 40 # Check if age equals 40. if Age == 40: print(Name + " is 40 years old.") else: print(Name + " is not 40 years old.") # Lets change the age. Age = 24 # Check many things at once: Is 40 years old? If not, is he above 40? I...
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
For loops range(to) range(from, to, step) can also use for with list
# Loop 5 times. for i in range(5): print(i) # Loop from 0 to 100, in steps of 25. for i in range(0, 100, 25): print(i) # We can use for loops on lists. Children = ["Emma", "Thomas", "Nicole"] # Make a list with 3 text strings. for child in Children: print(child) # We can use for loops on numbers. YearsOl...
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Functions Group things together with functions.
# A function that we name "calc" that multiplies two numbers together. # It takes two variables as input (x and y). # The function then returns the result of the multiplication. def calc(x, y): return(x * y) # Now we just use the name of the function. calc(10, 5)
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
Combine everything Now it's time to combine everything we learned so far. By combining these techniques we can write quite complex programs. Let's say we have a list of names. How many of the names start with the letter E? This is how you do it in principle: You go through the list, name by name, with a for loop. For e...
# Long list with names. names = ["Adelia", "Agustin", "Ahmed", "Alethea", "Aline", "Alton", "Annett", "Arielle", "Billie", "Blake", "Brianne", "Bronwyn", "Charlesetta", "Cleopatra", "Colene", "Corina", "Cruz", "Curt", "Dawn", "Delisa", "Dolores", "Doloris", "Dominic", "Donetta", "Dusti", "Edna", ...
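The recipe described above can be sketched as follows (using a short stand-in list rather than the full one):

```python
# a short stand-in for the long list of names above
names = ["Adelia", "Edna", "Agustin", "Emma", "Eric", "Blake"]

count = 0
for name in names:            # go through the list, name by name
    if name.startswith("E"):  # check whether the name starts with E
        count = count + 1     # keep a running tally

print(count)  # 3
```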
1 Introduction to Python/Introduction.ipynb
peterdalle/mij
gpl-3.0
The graph for z: <img src="files/pics/z_graph.png"> The graph for f: <img src="files/pics/f_graph.png"> Simple matrix multiplications The following types for input variables are typically used: byte: bscalar, bvector, bmatrix, btensor3, btensor4 16-bit integers: wscalar, wvector, wmatrix, wtensor3, wtensor4 32-bit integ...
import theano import theano.tensor as T import numpy as np x = T.fvector('x') W = T.fmatrix('W') b = T.fvector('b') activation = T.dot(x,W)+b z = T.tanh(activation) f = theano.function(inputs=[x,W,b], outputs=[activation,z])
2015-10_Lecture/Lecture2/code/1_Intro_Theano_solution.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Shared Variables We use theano.shared() to share a variable (i.e. make it internally available for Theano) Internal state variables are passed at compile time via the parameter givens. So to compute the output z, use the shared variable state for the input variable x For information on the borrow=True parameter see: ht...
# New accumulator function, now with an update inc = T.lscalar('inc') accumulator = theano.function(inputs=[inc], outputs=(state,z), givens={x: state}, updates=[(state,state+inc)]) print(accumulator(1)) print(accumulator(1)) print(accumulator(1))
2015-10_Lecture/Lecture2/code/1_Intro_Theano_solution.ipynb
nreimers/deeplearning4nlp-tutorial
apache-2.0
Naive Bayes From Bayes' Theorem, we have that $$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$ where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the feature...
#your turn
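A tiny numeric illustration of the theorem (the counts are invented): suppose 2 of 10 documents are "fresh" and contain the word "superb", while 3 of 10 documents contain "superb" overall. Then $P(c \vert f) = P(c \cap f) / P(f)$:

```python
# invented counts for illustration
n_docs = 10
n_fresh_and_superb = 2   # documents that are fresh AND contain "superb"
n_superb = 3             # documents that contain "superb" at all

p_joint = n_fresh_and_superb / n_docs   # P(c ∩ f)
p_feature = n_superb / n_docs           # P(f)
p_class_given_feature = p_joint / p_feature
print(p_class_given_feature)  # about 2/3
```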
Statistics_Exercises/Mini_Project_Naive_Bayes.ipynb
RyanAlberts/Springbaord-Capstone-Project
mit
Picking Hyperparameters for Naive Bayes and Text Maintenance We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition...
# Your turn.
Statistics_Exercises/Mini_Project_Naive_Bayes.ipynb
RyanAlberts/Springbaord-Capstone-Project
mit
<div class="span5 alert alert-info"> <h3>Exercise Set IV</h3> <p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p> <p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\alpha$ that is too high?</p...
from sklearn.naive_bayes import MultinomialNB #the grid of parameters to search over alphas = [.1, 1, 5, 10, 50] best_min_df = None # YOUR TURN: put your value of min_df here. #Find the best value for alpha and min_df, and the best classifier best_alpha = None maxscore=-np.inf for alpha in alphas: vectori...
Statistics_Exercises/Mini_Project_Naive_Bayes.ipynb
RyanAlberts/Springbaord-Capstone-Project
mit
<div class="span5 alert alert-info"> <h3>Exercise Set VII: Predicting the Freshness for a New Review</h3> <br/> <div> <b>Exercise:</b> <ul> <li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'* <li> Is the result what y...
#your turn
Statistics_Exercises/Mini_Project_Naive_Bayes.ipynb
RyanAlberts/Springbaord-Capstone-Project
mit
Model and parameters The model is multilayer, with two layers: electrolyte and pedot. In electrolyte, cations and anions are allowed. In pedot, cations, anions and holes are simulated.
electrolyte_mesh = fvm.mesh1d( 25e-9, boundary_names=[ 'electrode0', 'internal_boundary']) pedot_mesh = fvm.mesh1d( 175e-9, boundary_names=[ 'internal_boundary', 'electrode1']) mesh = fvm.multilayer1d( [('electrolyte', electrolyte_mesh), ('pedot', pedot_mesh)]) electr...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Doping of PEDOT:
pedot_model.species.append(models.FixedCharge(poisson=pedot_model.poisson))
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
BCs for Poisson's equation:
electrolyte_model.poisson.bc = [ models.AppliedVoltage('electrode0'), models.Equal( pedot_model.poisson, 'internal_boundary')] pedot_model.poisson.bc = [models.AppliedVoltage('electrode1')]
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
BCs for holes and ions:
hole_eqn.bc = [models.DirichletFromParams('electrode1')] for ion_electrolyte in ions_electrolyte: ion_electrolyte.bc = [models.DirichletFromParams('electrode0')] for ion_pedot,ion_electrolyte in zip(ions_pedot,ions_electrolyte): ion_pedot.bc = [models.Equal(ion_electrolyte,'internal_boundary')]
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Coupled model:
model = models.CompositeModel() model.sub = [electrolyte_model, pedot_model] model.other = [models.RamoShockleyCurrentCalculation([electrolyte_model.poisson,pedot_model.poisson])] model.setUp()
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Parameters:
params = { 'electrolyte.epsilon_r': 3., 'pedot.epsilon_r': 3, 'electrolyte.T': 300, 'pedot.T': 300, 'electrolyte.electrode0.voltage': 0, 'electrolyte.electrode0.workfunction': 0, 'pedot.electrode1.voltage': 0, 'pedot.electrode1.workfunction': 0, 'pedot.hole.N0': 1e27, 'pedot.hole...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Results Figure numbers given below correspond to Figures in the reference. Fig. 2
x = model.X.copy() x[hole_eqn.idx] = params['pedot.Na'] c = context(model, x=x) params['electrolyte.electrode0.voltage'] = 0. params['pedot.electrode1.voltage'] = -1 c.transient(params, 1e-5, 1e-9) from oedes import mpl fig, (ion_plot, hole_plot, potential_plot) = plt.subplots( nrows=3, figsize=(6, 8)) for time i...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Fig. 3
def length(out): store(out) c = out['pedot.hole.c'] i = np.argwhere(c > 0.5e25)[0] return pedot_mesh.cells['center'][i] - pedot_mesh.cells['center'][0] times = np.asarray([2e-6, 4e-6, 6e-6, 8e-6, 10e-6]) plt.plot(times * 1e6, [length(c.attime(t).output()) ** 2 * 1e18 for t in ti...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Fig. 4
x0 = model.X.copy() x0[hole_eqn.idx] = params['pedot.Na'] by_v = {} voltages = [0.01, 0.05, 0.1, 0.5, 1] for v in voltages: params['pedot.electrode1.voltage'] = -v c = context(model, x=x0) c.transient(params, 1e-5, 1e-9) by_v[v] = c fig, (ion_plot, potential_plot) = plt.subplots(ncols=2, figsize=(12, 4...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Fig. 5
x0 = model.X.copy() x0[hole_eqn.idx] = params['pedot.Na'] params['pedot.electrode1.voltage'] = -1. concentrations = [1e24, 1e25, 1e26] by_c = {} for conc in concentrations: params['electrolyte.cation.electrode0'] = conc params['electrolyte.anion.electrode0'] = conc c = context(model, x=x0) c.transient(p...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Fig. 6
x0 = model.X.copy() x0[hole_eqn.idx] = params['pedot.Na'] params['pedot.electrode1.voltage'] = -1. params['electrolyte.cation.electrode'] = 1e25 params['electrolyte.anion.electrode'] = 1e25 mu_values = [1e-10, 1e-9, 1e-8] by_mu = {} for mu in mu_values: params['pedot.cation.mu'] = mu params['pedot.anion.mu'] = ...
examples/ionic/electrochemical-doping-japplphys113.ipynb
mzszym/oedes
agpl-3.0
Quick introduction to chocolate bars and classes
from astropy import units as u, constants as const class SnickersBar(object): def __init__(self, w, h, l, weight, energy_density=2460 * u.kJ/ (100 * u.g)): self.w = u.Quantity(w, u.cm) self.h = u.Quantity(h, u.cm) self.l = u.Quantity(l, u.cm) self.weight = u.Quantity(weight, u.g) ...
notebooks/nov_2_2015.ipynb
ESO-python/ESOPythonTutorials
bsd-3-clause
Using cython
%load_ext Cython import numpy as np import numexpr as ne x1, y1 = np.random.random((2, 1000000)) x2, y2 = np.random.random((2, 1000000)) distance = [] def calculate_distances(x1, y1, x2, y2): distances = [] for i in range(len(x1)): distances.append(np.sqrt((x1[i] - x2[i])**2 + (y1[i] - y2[i])**2)) ...
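As an aside before reaching for Cython: the same computation vectorized in NumPy avoids the explicit Python loop entirely (smaller arrays here just to keep the sketch light):

```python
import numpy as np

x1, y1 = np.random.random((2, 1000))
x2, y2 = np.random.random((2, 1000))

# whole-array arithmetic replaces the element-by-element append loop
distances = np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
```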
notebooks/nov_2_2015.ipynb
ESO-python/ESOPythonTutorials
bsd-3-clause
The rest of the tutorial is organized as follows: sections 1 and 2 walk through the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while section 3 implements the new scenario in a simulation for visualization and testing purposes. 1. Specifying Traffic Network Fea...
ADDITIONAL_NET_PARAMS = { "radius": 40, "num_lanes": 1, "speed_limit": 30, }
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
All scenarios presented in Flow provide a unique ADDITIONAL_NET_PARAMS component containing the information needed to properly define the network parameters of the scenario. We assume that these values are always provided by the user, and accordingly can be called from net_params. For example, if we would like to call ...
class myScenario(myScenario): # update my scenario class def specify_nodes(self, net_params): # one of the elements net_params will need is a "radius" value r = net_params.additional_params["radius"] # specify the name and position (x,y) of each node nodes = [{"id": "bottom", "x":...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
1.3 specify_edges Once the nodes are specified, the nodes are linked together using directed edges. This is done through the specify_edges method which, similar to specify_nodes, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include: id: name of the...
# some mathematical operations that may be used from numpy import pi, sin, cos, linspace class myScenario(myScenario): # update my scenario class def specify_edges(self, net_params): r = net_params.additional_params["radius"] edgelen = r * pi / 2 # this will let us control the number of l...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
1.4 specify_routes The routes are the sequence of edges vehicles traverse given their current position. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, the edges "edge0", "edge1", "edge2", and "edge3", before restarting its path. In order to specify the routes a...
class myScenario(myScenario): # update my scenario class def specify_routes(self, net_params): rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"], "edge1": ["edge1", "edge2", "edge3", "edge0"], "edge2": ["edge2", "edge3", "edge0", "edge1"], "edge3": ["edge3",...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
2. Multiple routes per edge: Alternatively, if the routes are meant to be stochastic, each element can consist of a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that vehicles wil...
class myScenario(myScenario): # update my scenario class def specify_routes(self, net_params): rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)], "edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)], "edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)], ...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
3. Per-vehicle routes: Finally, if you would like to assign a specific starting route to a vehicle with a specific ID, you can do so by adding an element into the dictionary whose key is the name of the vehicle and whose content is the list of edges the vehicle is meant to traverse as soon as it is introduced to the net...
class myScenario(myScenario): # update my scenario class def specify_routes(self, net_params): rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"], "edge1": ["edge1", "edge2", "edge3", "edge0"], "edge2": ["edge2", "edge3", "edge0", "edge1"], "edge3": ["edge3",...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e. >>> print(scenario.rts) { "edge0": [ (["edge0", "edge1", "edge2", "edge3"], 1) ], "edge1": [ (["edge1", "edge2", "edge3", "edge0"], 1) ], "ed...
# import some math functions we may use from numpy import pi class myScenario(myScenario): # update my scenario class def specify_edge_starts(self): r = self.net_params.additional_params["radius"] edgestarts = [("edge0", 0), ("edge1", r * 1/2 * pi), ("...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
3. Testing the New Scenario In this section, we run a new sumo simulation using our newly generated scenario class. For information on running sumo experiments, see exercise01_sumo.ipynb. We begin by defining some of the components needed to run a sumo experiment.
from flow.core.params import VehicleParams from flow.controllers import IDMController, ContinuousRouter from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams vehicles = VehicleParams() vehicles.add(veh_id="human", acceleration_controller=(IDMController, {}), routing_con...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
For visualization purposes, we use the environment AccelEnv, as it works on any given scenario.
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
Next, using the ADDITIONAL_NET_PARAMS component we created in section 1.1, we prepare the NetParams component.
additional_net_params = ADDITIONAL_NET_PARAMS.copy() net_params = NetParams(additional_params=additional_net_params)
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
We are now ready to create and run our scenario. Using the newly defined scenario classes, we create a scenario object and feed it into an Experiment simulation. Finally, we are able to visually confirm that our network has been properly generated.
from flow.core.experiment import Experiment scenario = myScenario( # we use the newly defined scenario class name="test_scenario", vehicles=vehicles, net_params=net_params, initial_config=initial_config ) # AccelEnv allows us to test any newly generated scenario quickly env = AccelEnv(env_params, sum...
tutorials/tutorial06_scenarios.ipynb
cathywu/flow
mit
We generated randomly distributed points centered on the four points (1, 1), (1, 2), (2, 2), (2, 1); if our clustering algorithm is correct, the centers it finds should be close to these four points. First, the kmeans algorithm in plain language: Step 1 - randomly select K points as the cluster centers; this means we want to partition the data into K classes. Step 2 - iterate over all points P, compute the distance from P to each cluster center, and put P into the point set of the nearest center. After the pass we have K point sets. Step 3 - for each point set, compute its center position and use it as the new cluster center. Step 4 - repeat steps 2 and 3 until the cluster centers stop moving.
# Step 1: randomly select K points K = 4 p_list = np.stack([points_x, points_y], axis=1) index = np.random.choice(len(p_list), size=K) centeroid = p_list[index] # Plotting below for p in centeroid: plt.scatter(p[0], p[1], marker='^') plt.xlim(0, 3) plt.ylim(0, 3) plt.show() # Step 2: iterate over all points P and put each into the set of its nearest cluster center points_set = ...
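A compact, self-contained version of the loop described above (the synthetic data and the deterministic one-seed-per-cluster initialization are my own choices, so the run is reproducible; plotting is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
true_centers = np.array([[1, 1], [1, 2], [2, 2], [2, 1]], dtype=float)
# 50 noisy points around each of the four centers
p_list = np.concatenate(
    [c + 0.05 * rng.standard_normal((50, 2)) for c in true_centers])

K = 4
centeroid = p_list[[0, 50, 100, 150]].copy()  # one seed point per cluster
for _ in range(100):
    # step 2: assign every point to its nearest cluster center
    dists = np.linalg.norm(p_list[:, None, :] - centeroid[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # step 3: move each center to the mean of its point set
    new_centeroid = np.array(
        [p_list[labels == k].mean(axis=0) for k in range(K)])
    # step 4: stop once the centers no longer move
    if np.allclose(new_centeroid, centeroid):
        break
    centeroid = new_centeroid

print(np.round(centeroid, 1))  # close to the four true centers
```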
Kmeans.ipynb
hktxt/MachineLearning
gpl-3.0
Finding K The KMeans procedure was walked through above, but one problem remains: how to choose K. In the demonstration above the data was generated by us, so K was obvious, but in real settings we often do not know K up front. A common approach is to compute the average distance from each point to its cluster center. In theory this average distance shrinks as K grows, but when we plot it against K there is an elbow point: before the elbow, the average distance drops quickly as K increases; after it, the decrease becomes slow. Now we use the KMeans class from sklearn to run the clustering and plot how the average distance to the cluster centers changes.
from sklearn.cluster import KMeans loss = [] for i in range(1, 10): kmeans = KMeans(n_clusters=i, max_iter=100).fit(p_list) loss.append(kmeans.inertia_ / point_number / K) plt.plot(range(1, 10), loss) plt.show()
Kmeans.ipynb
hktxt/MachineLearning
gpl-3.0
NOTE: This is only necessary in the Jupyter Notebook. You should be able to import the necessary packages in a regular Python script without using findspark The SparkSession is the entry point for any Spark application.
from pyspark.sql import SparkSession
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Let's create a new SparkSession through the builder attribute and the getOrCreate() method.
spark = SparkSession.builder\ .master("local")\ .appName("LinearRegressionModel")\ .config("spark.executor.memory","1gb")\ .getOrCreate() sc = spark.sparkContext
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Initializing the master and appName attributes isn't actually critical in this introduction, nor is configuring the memory options for the executor. I've included them here for the sake of thoroughness. NOTE: If you find the Spark tutorial on the Spark documentation web page it includes the following line of co...
rdd = sc.textFile('data/CaliforniaHousing/cal_housing.data') header = sc.textFile('data/CaliforniaHousing/cal_housing.domain')
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Part of what allows for some of the speed-up in Spark applications is that Spark evaluations are mostly lazy. So executing the following line of code isn't very useful:
header
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Instead we have to take an action on the RDD, such as collect(), to materialize the data represented by the RDD abstraction.
header.collect()
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
NOTE: collect() is a pretty dangerous action: if the RDD is especially large then your executor may run out of RAM and your application will crash. If you're using especially large data and you just want a peek at it to try to suss out its structure, then try take() or first()
rdd.take(2)
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Since we read the data in with textFile() we just have a set of strings separated by commas as our data. Let's split the data into separate entries using the map() function.
rdd = rdd.map(lambda line: line.split(","))
rdd.take(2)
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Now we have something more closely resembling a collection of records. But notice that the data does not have a header and is mostly unstructured. We can fix that by converting the data to a DataFrame. I have been told that in general DataFrames perform better than RDDs, especially when using Python...
from pyspark.sql import Row

df = rdd.map(lambda line: Row(longitude=line[0],
                              latitude=line[1],
                              housingMedianAge=line[2],
                              totalRooms=line[3],
                              totalBedRooms=line[4],
                              pop...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
To examine the data types associated with the DataFrame, use the printSchema() method.
df.printSchema()
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Since we used the textFile() method to read our data in, the data types are all strings. The following would cast all of the columns to floats instead:
from pyspark.sql.types import *

df = df.withColumn("longitude", df["longitude"].cast(FloatType())) \
       .withColumn("latitude", df["latitude"].cast(FloatType())) \
       .withColumn("housingMedianAge", df["housingMedianAge"].cast(FloatType())) \
       .withColumn("totalRooms", df["totalRooms"].cast(FloatType())) \
       .withC...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
But that seems pretty inefficient, and it is. We can write a function to handle all of this for us:
from pyspark.sql.types import *

def convertCols(df, names, dataType):
    # df - a dataframe, names - a list of col names, dataType - the cast conversion type
    for name in names:
        df = df.withColumn(name, df[name].cast(dataType))
    return df

names = ['households', 'housingMedianAge', 'latitude', 'longitude', 'med...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
The pyspark.sql package has lots of convenient data exploration methods built in that support SQL query language execution. For example, we can select by columns:
df.select('population','totalBedrooms').show(10)
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
We can use the filter() method to perform a classic SELECT FROM WHERE query as below:
ndf = df.select('population', 'totalBedrooms').filter(df['totalBedrooms'] > 500)
ndf.show(10)
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
And we can get summary statistics pretty easily too...
df.describe().show()
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Let's do a quick bit of feature engineering and transformation to optimize a linear regression on our feature set...
# Import all from `sql.functions`
from pyspark.sql.functions import *

df.show()

# Adjust the values of `medianHouseValue`
df = df.withColumn("medianHouseValue", col("medianHouseValue")/100000)

df.show()
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
We can examine the column of medianHouseValue in the above outputs to make sure that we transformed the data correctly. Let's do some more feature engineering and standardization.
# Import all from `sql.functions` if you haven't yet
from pyspark.sql.functions import *

# Divide `totalRooms` by `households`
roomsPerHousehold = df.select(col("totalRooms")/col("households"))

# Divide `population` by `households`
populationPerHousehold = df.select(col("population")/col("households"))

# Divide `tot...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
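The same derived ratios can be sketched in plain pandas, which may make the element-wise column arithmetic easier to see. The column names mirror the Spark DataFrame, but the numbers here are made up:

```python
import pandas as pd

# toy rows mirroring a few of the housing columns (values are made up)
pdf = pd.DataFrame({
    "totalRooms":    [880.0, 7099.0],
    "totalBedRooms": [129.0, 1106.0],
    "households":    [126.0, 1138.0],
    "population":    [322.0, 2401.0],
})

# each ratio is computed element-wise, one value per row
pdf["roomsPerHousehold"]      = pdf["totalRooms"] / pdf["households"]
pdf["populationPerHousehold"] = pdf["population"] / pdf["households"]
pdf["bedroomsPerRoom"]        = pdf["totalBedRooms"] / pdf["totalRooms"]

print(pdf.round(3))
```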
Notice that we're using the col() function to specify that we're using columnar data in our calculations. The col("totalRooms")/col("households") is acting like a numpy array, element-wise dividing the results. Next we'll use the select() method to reorder the data so that our response variable, medianHouseValue, comes first:
# Re-order and select columns
df = df.select("medianHouseValue",
               "totalBedRooms",
               "population",
               "households",
               "medianIncome",
               "roomsPerHousehold",
               "populationPerHousehold",
               "bedroomsPerRoom")
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Now we're going to actually isolate the response variable (the labels) from the predictor variables using a DenseVector, which is essentially a numpy ndarray.
# Import `DenseVector`
from pyspark.ml.linalg import DenseVector

# Define the `input_data`
input_data = df.rdd.map(lambda x: (x[0], DenseVector(x[1:])))

# Replace `df` with the new DataFrame
df = spark.createDataFrame(input_data, ["label", "features"])
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
There are all kinds of great machine learning algorithms and functions already built into PySpark in the Spark ML library. If you're interested in more data pipelining, try visiting this page: https://spark.apache.org/docs/latest/ml-pipeline.html
# Import `StandardScaler`
from pyspark.ml.feature import StandardScaler

# Initialize the `standardScaler`
standardScaler = StandardScaler(inputCol="features", outputCol="features_scaled")

# Fit the DataFrame to the scaler
scaler = standardScaler.fit(df)

# Transform the data in `df` with the scaler
scaled_df = scale...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
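For intuition, Spark's StandardScaler defaults to withStd=True and withMean=False, i.e. each feature column is divided by its sample standard deviation without being centered. The same transform can be sketched with numpy on a made-up matrix:

```python
import numpy as np

# toy feature matrix (values are made up); columns are features, rows are records
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# divide each column by its sample standard deviation (ddof=1), no centering
X_scaled = X / X.std(axis=0, ddof=1)

print(X_scaled)
```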
We can divide the data into training and testing sets using the PySpark SQL randomSplit() method.
train_data, test_data = scaled_df.randomSplit([.8,.2],seed=1234)
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
Now we can create the regression model. The original tutorial directs you to the following URL for information on the linear regression model class: https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression
# Import `LinearRegression`
from pyspark.ml.regression import LinearRegression

# Initialize `lr`
lr = LinearRegression(labelCol="label", maxIter=10, regParam=0.3, elasticNetParam=0.8)

# Fit the data to the model
linearModel = lr.fit(train_data)

# Generate predictions
predicted = linearModel.transform(test_data)

# E...
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
To evaluate the model, we can inspect the model parameters.
# Coefficients for the model
linearModel.coefficients

# Intercept for the model
linearModel.intercept
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
And summary data for the model is available as well.
# Get the RMSE
linearModel.summary.rootMeanSquaredError

# Get the R2
linearModel.summary.r2
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
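For reference, both summary metrics can be computed by hand for any set of predictions. A numpy sketch on made-up true/predicted values:

```python
import numpy as np

# made-up true labels and model predictions
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

# RMSE: root of the mean squared error
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))

# R^2: 1 minus the ratio of residual to total sum of squares
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(rmse, r2)
```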
Stop the Spark session...
spark.stop()
workshops/w5/Spark_Dash_Pres.ipynb
eds-uga/csci4360-fa17
mit
No Spark Here

You'll notice that this tutorial doesn't use parallelization with Spark. This is to keep things simple and make the code generalizable to folks running this analysis on their local machines.
import ibmseti import os import zipfile
tutorials/Step_2_reading_SETI_code_challenge_data.ipynb
setiQuest/ML4SETI
apache-2.0
Assume you have the data in a local folder
!ls my_data_folder/basic4

mydatafolder = 'my_data_folder'  # adjust to wherever you downloaded the data

zz = zipfile.ZipFile(mydatafolder + '/' + 'basic4.zip')
basic4list = zz.namelist()

firstfile = basic4list[0]
print(firstfile)
tutorials/Step_2_reading_SETI_code_challenge_data.ipynb
setiQuest/ML4SETI
apache-2.0
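The zipfile calls used here can be tried without the SETI data at all, by building a tiny in-memory archive as a stand-in for basic4.zip (the file names and payloads below are made up):

```python
import io
import zipfile

# build a small archive in memory to stand in for basic4.zip
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zw:
    zw.writestr("sample_0.dat", b"payload-0")
    zw.writestr("sample_1.dat", b"payload-1")
buf.seek(0)

# same calls as in the cell above: list the members, read the first one
zz = zipfile.ZipFile(buf)
names = zz.namelist()
firstfile = names[0]
first_bytes = zz.open(firstfile).read()

print(firstfile, first_bytes)
```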
Use ibmseti for convenience While it's somewhat trivial to read these data, the ibmseti.compamp.SimCompamp class will extract the JSON header and the complex-value time-series data for you.
import ibmseti
aca = ibmseti.compamp.SimCompamp(zz.open(firstfile).read())

# This data file is classified as a 'squiggle'
aca.header()
tutorials/Step_2_reading_SETI_code_challenge_data.ipynb
setiQuest/ML4SETI
apache-2.0
The Goal

The goal is to take each simulation data file and
1. convert the time-series data into a 2D spectrogram
2. use the 2D spectrogram as an image to train an image classification model

There are multiple ways to improve your model's ability to classify signals. You can
* Modify the time-series data with some sign...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

## ibmseti.compamp.SimCompamp has a method to calculate the spectrogram for you
## (without any signal processing applied to the time-series data)
spectrogram = aca.get_spectrogram()

fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogra...
tutorials/Step_2_reading_SETI_code_challenge_data.ipynb
setiQuest/ML4SETI
apache-2.0
2. Build the spectrogram yourself

You don't need to use the ibmseti python package to calculate the spectrogram for you. This is especially important if you want to apply some signal processing to the time-series data before you create your spectrogram.
complex_data = aca.complex_data()  # complex valued time-series
complex_data

complex_data = complex_data.reshape(32, 6144)
complex_data

# Apply a Hanning Window
complex_data = complex_data * np.hanning(complex_data.shape[1])
complex_data

# Build Spectrogram & Plot
cpfft = np.fft.fftshift(np.fft.fft(complex_data),...
tutorials/Step_2_reading_SETI_code_challenge_data.ipynb
setiQuest/ML4SETI
apache-2.0
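The same pipeline can be run end-to-end on synthetic complex noise, with no SETI data or ibmseti needed — useful for checking shapes before touching the real files (the 32 x 6144 shape matches the cell above; the noise itself is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic complex time series with the same shape as the SETI cells: 32 rows x 6144 samples
complex_data = rng.standard_normal((32, 6144)) + 1j * rng.standard_normal((32, 6144))

# Hanning window applied along each row
complex_data = complex_data * np.hanning(complex_data.shape[1])

# FFT each row, shift so DC sits in the center of the frequency axis
cpfft = np.fft.fftshift(np.fft.fft(complex_data), axes=1)

# power spectrogram: squared magnitude of the spectrum
spectrogram = np.abs(cpfft) ** 2

print(spectrogram.shape)  # -> (32, 6144)
```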
Now let's load the data for analysis.
urls = ["ipython-dev"]

archives = [Archive(url, mbox=True) for url in urls]
activities = [arx.get_activity(resolved=False) for arx in archives]

archives[0].data
examples/activity/Plot Activity.ipynb
datactive/bigbang
mit
For each of the mailing lists we are looking at, plot the rolling average of number of emails sent per day.
plt.figure(figsize=(12.5, 7.5))

for i, activity in enumerate(activities):
    colors = 'rgbkm'
    ta = activity.sum(1)
    rmta = ta.rolling(window).mean()
    rmtadna = rmta.dropna()
    plt.plot_date(np.array(rmtadna.index),
                  np.array(rmtadna.values),
                  colors[i],
                  ...
examples/activity/Plot Activity.ipynb
datactive/bigbang
mit
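The rolling average itself is plain pandas; a minimal sketch on synthetic daily counts (the `window` variable above plays the same role — the counts and dates here are made up):

```python
import pandas as pd

# made-up daily email counts over ten days
idx = pd.date_range("2020-01-01", periods=10, freq="D")
daily_counts = pd.Series([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], index=idx)

# 3-day rolling mean; the first window-1 entries are NaN and get dropped
window = 3
rolling_mean = daily_counts.rolling(window).mean().dropna()

print(rolling_mean)
```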
Now, let's see: who are the authors of the most messages to one particular list?
a = activities[0]  # for the first mailing list
ta = a.sum(0)      # sum along the first axis
ta.sort_values(ascending=True)[-10:].plot(kind='barh')
examples/activity/Plot Activity.ipynb
datactive/bigbang
mit
This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to. Many mailing lists will have some duplicate senders: individuals who use multiple email addresses or are recorded as different senders when using the same email address. We want to i...
import Levenshtein

# calculate the edit distance between the two From titles
distancedf = process.matricize(a.columns[:100], process.from_header_distance)

# specify that the values in the matrix are integers
df = distancedf.astype(int)

fig = plt.figure(figsize=(18, 18))
plt.imshow(df)
#plt.yticks(np.arange(0.5, len(df...
examples/activity/Plot Activity.ipynb
datactive/bigbang
mit
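A rough, dependency-free illustration of the same idea: compare From headers pairwise with a string-similarity measure and flag near-duplicates. The notebook uses Levenshtein distance via bigbang's helpers; this sketch substitutes stdlib difflib, and the sender strings are made up:

```python
from difflib import SequenceMatcher

# made-up From headers; the first two are the same person with different casing
senders = [
    "Ada Lovelace <ada@example.org>",
    "ada lovelace <ada@example.org>",
    "Grace Hopper <grace@example.com>",
]

def similarity(a, b):
    # ratio in [0, 1]; 1.0 means the lowercased strings are identical
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# flag index pairs that are likely the same sender under a different spelling
pairs = [(i, j)
         for i in range(len(senders))
         for j in range(i + 1, len(senders))
         if similarity(senders[i], senders[j]) > 0.9]

print(pairs)  # -> [(0, 1)]
```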
Step 2: Setting basic parameters for the model
H = 7        # aquifer thickness in meters
zt = -18     # top boundary of aquifer (m)
zb = zt - H  # bottom boundary of aquifer (m)
Q = 788      # constant discharge (m^3/d)
pumpingtests_new/confined1_oude_korendijk.ipynb
mbakker7/ttim
mit
<a id='step_3'></a> Step 3: Creating a TTim conceptual model In this example, we are using the ModelMaq model to conceptualize our aquifer. ModelMaq defines the aquifer system as a stacked vertical sequence of aquifers and leaky layers (aquifer-leaky layer, aquifer-leaky layer, etc). Aquifers are conceptualized as havi...
# unknown parameters: kaq, Saq
ml = ModelMaq(kaq=60, z=[zt, zb], Saq=1e-4, tmin=1e-5, tmax=1)
w = Well(ml, xw=0, yw=0, rw=0.2, tsandQ=[(0, Q)], layers=0)
# Here we are setting everything in meters for length and days for time
pumpingtests_new/confined1_oude_korendijk.ipynb
mbakker7/ttim
mit
The last step in our model creation is to "solve" the model:
ml.solve(silent=True)
pumpingtests_new/confined1_oude_korendijk.ipynb
mbakker7/ttim
mit
Step 4: Load data of two observation wells: The preferred method of loading data into TTim is to use numpy arrays. The data is in a text file where the first column is the time data in minutes and the second column is the drawdown in meters. For each piezometer we will load the data as a numpy array and create separate ...
# time and drawdown of piezometer 30 m away from pumping well
data1 = np.loadtxt('data/piezometer_h30.txt', skiprows=1)
t1 = data1[:, 0] / 60 / 24  # convert minutes to days
h1 = data1[:, 1]
r1 = 30

# time and drawdown of piezometer 90 m away from pumping well
data2 = np.loadtxt('data/piezometer_h90.txt', skiprows=1)
t2 = da...
pumpingtests_new/confined1_oude_korendijk.ipynb
mbakker7/ttim
mit