markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
GPU accelerationMany TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example: | x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0')) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/eager/eager_basics.ipynb | SamuelMarks/tensorflow-docs |
Device NamesThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host. Explicit Device PlacementIn TensorFlow, *placement* refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides on which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager, for example: | import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/eager/eager_basics.ipynb | SamuelMarks/tensorflow-docs |
DatasetsThis section uses the [`tf.data.Dataset` API](https://www.tensorflow.org/guide/datasets) to build a pipeline for feeding data to your model. The `tf.data.Dataset` API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops. Create a source `Dataset`Create a *source* dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Dataset guide](https://www.tensorflow.org/guide/datasets#reading_input_data) for more information. | ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/eager/eager_basics.ipynb | SamuelMarks/tensorflow-docs |
Apply transformationsUse transformation functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), and [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) to apply transformations to dataset records. | ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/eager/eager_basics.ipynb | SamuelMarks/tensorflow-docs |
Iterate`tf.data.Dataset` objects support iteration to loop over records: | print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x) | _____no_output_____ | Apache-2.0 | site/en/r2/tutorials/eager/eager_basics.ipynb | SamuelMarks/tensorflow-docs |
Introduction to programming for Geoscientists through Python [Gerard Gorman](http://www.imperial.ac.uk/people/g.gorman), [Nicolas Barral](http://www.imperial.ac.uk/people/n.barral) Lecture 6: Files, strings, and dictionaries Learning objectives: You will learn how to:* Read data in from a file* Parse strings to extract specific data of interest.* Use dictionaries to index data using any type of key. | from client.api.notebook import Notebook
from client.api import assignment
from client.utils import auth
args = assignment.Settings(server='okpyic.azurewebsites.net')
ok = Notebook('./lecture6.ok', args)
var1 = 4
var2 = 3
var3 = 3
def funct1():
return 0
def funct2():
return 0
ok.grade('lect6-q0') | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 5
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Reading data from a plain text fileWe can read text from a [text file](http://en.wikipedia.org/wiki/Text_file) into strings in a program. This is a common (and simple) way for a program to get input data. The basic recipe is: | # Open text file
infile = open("myfile.dat", "r")
# Read next line:
line = infile.readline()
# Read the lines in a loop one by one:
for line in infile:
<process line>
# Load all lines into a list of strings:
lines = infile.readlines()
for line in lines:
<process line> | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Let's look at the file [data1.txt](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/data1.txt) (all of the data files in this lecture are stored in the sub-folder *data/* of this notebook library). The file has a column of numbers: | 21.8
18.1
19
23
26
17.8 | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
The goal is to read this file and calculate the mean: | # Open data file
infile = open("data/data1.txt", "r")
# Initialise values
mean = 0
n=0
# Loop to perform sum
for number in infile:
number = float(number)
mean = mean + number
n += 1
# It is good practice to close a file when you are finished.
infile.close()
# Calculate the mean.
mean = mean/n
print(mean) | 20.95
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
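The explicit loop above can be condensed into a list comprehension. A minimal, self-contained sketch (it recreates the six numbers from `data1.txt` in a temporary file, so the path here is made up rather than the real data file):

```python
import tempfile

# Hypothetical stand-in for data/data1.txt so this sketch runs anywhere.
_, path = tempfile.mkstemp()
with open(path, "w") as f:
    f.write("21.8\n18.1\n19\n23\n26\n17.8\n")

# Read all non-blank lines, convert to float, and average.
with open(path) as infile:
    values = [float(line) for line in infile if line.strip()]
mean = sum(values) / len(values)
print(mean)  # approximately 20.95
```

Using `with` also closes the file automatically, so no explicit `close()` is needed.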
Let's make this example more interesting. There is a **lot** of data out there for you to discover all kinds of interesting facts - you just need to be interested in learning a little analysis. For this case I have downloaded tidal gauge data for the port of Avonmouth from the [BODC](http://www.bodc.ac.uk/). If you look at the header of file [data/2012AVO.txt](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/2012AVO.txt) you will see the [metadata](http://en.wikipedia.org/wiki/Metadata): | Port: P060
Site: Avonmouth
Latitude: 51.51089
Longitude: -2.71497
Start Date: 01JAN2012-00.00.00
End Date: 30APR2012-23.45.00
Contributor: National Oceanography Centre, Liverpool
Datum information: The data refer to Admiralty Chart Datum (ACD)
Parameter code: ASLVTD02 = Surface elevation (unspecified datum) of the water body by fixed in-situ pressure sensor | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Let's read the column ASLVTD02 (the surface elevation) and plot it: | from pylab import *
tide_file = open("data/2012AVO.txt", "r")
# We know from inspecting the file that the first 11 lines are just
# header information so lets just skip those lines.
for i in range(11):
line = tide_file.readline()
# Initialise an empty list to store the elevation
elevation = []
days = []
# Now we start reading the interesting data
n=0
while True: # This will keep looping until we break out.
# Here we use a try/except block to try to read the data as normal
# and to break out if unsuccessful - ie when we reach the end of the file.
try:
# Read the next line
line = tide_file.readline()
# Split this line into words.
words = line.split()
# If we do not have 5 words then it must be blank lines at the end of the file.
if len(words)!=5:
break
except:
# If we failed to read a line then we must have got to the end.
break
n+=1 # Count number of data points
try:
# The elevation data is on the 4th column. However, the BODC
# appends a "M" when a value is improbable and an "N" when
# data is missing (maybe a ship dumped into it during rough weather!)
# Therefore, we put this conversion from a string into a float in a
# try/except block.
level = float(words[3])
elevation.append(level)
# There is a measurement every quarter hour.
days.append(n*0.25/24)
except:
continue
# For plotting lets convert the list to a NumPy array.
elevation = array(elevation)
days = array(days)
plot(days, elevation)
xlabel("Days")
ylabel("Elevation (meters)")
show() | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Quiz time:* What tidal constituents can you identify by looking at this plot?* Is this primarily a diurnal or semi-diurnal tidal region? (hint - change the x-axis range on the plot above).You will notice in the above example that we used the *split()* string member function. This is a very useful function for grabbing individual words on a line. When called without any arguments it assumes that the [delimiter](http://en.wikipedia.org/wiki/Delimiter) is a blank space. However, you can use this to split a string with any delimiter, *e.g.*, *line.split(';')*, *line.split(':')*. Exercise 6.1: Read a two-column data fileThe file [data/xy.dat](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/xy.dat) contains two columns of numbers, corresponding to *x* and *y* coordinates on a curve. The first five lines look like this: `-1.0000 -0.0000`, `-0.9933 -0.0087`, `-0.9867 -0.0179`, `-0.9800 -0.0274`, `-0.9733 -0.0374`. Make a program that reads the first column into a list `xlist_61` and the second column into a list `ylist_61`. Then convert the lists to arrays named `xarray_61` and `yarray_61`, and plot the curve. Store the maximum and minimum y coordinates in two variables named `ymin_61` and `ymax_61`. (Hint: Read the file line by line, split each line into words, convert to float, and append to `xlist_61` and `ylist_61`.) | # Open data file
infile = open("data/xy.dat", "r") # "r" is for read
# Initialise empty lists
xlist_61 = []
ylist_61 = []
# Loop through infile and write to x and y lists
for line in infile:
line = line.split() # convert to list by dropping spaces
xlist_61.append(float(line[0])) # take 0th element and convert to float
ylist_61.append(float(line[1])) # take 1st element and convert to float
# Close the filehandle
infile.close()
xarray_61 = array(xlist_61)
yarray_61 = array(ylist_61)
ymin_61 = yarray_61.min()
ymax_61 = yarray_61.max()
grade = ok.grade('lect6-q1')
print("===", grade, "===") | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 9
Failed: 0
[ooooooooook] 100.0% passed
=== {'passed': 9, 'failed': 0, 'locked': 0} ===
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
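The delimiter argument to `split()` described earlier is worth trying directly. A small sketch on a made-up line of city/temperature data:

```python
line = "Oslo: 13; London: 15.4; Paris: 17.5"

# Default split: any run of whitespace is the delimiter.
print(line.split())        # ['Oslo:', '13;', 'London:', '15.4;', 'Paris:', '17.5']

# Custom delimiter: split the records on '; '
records = line.split("; ")
print(records)             # ['Oslo: 13', 'London: 15.4', 'Paris: 17.5']

# Then split each record on ': ' to separate city from temperature.
pairs = [r.split(": ") for r in records]
print(pairs)               # [['Oslo', '13'], ['London', '15.4'], ['Paris', '17.5']]
```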
Exercise 6.2: Read a data fileThe files [data/density_water.dat](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/density_water.dat) and [data/density_air.dat](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/density_air.dat) contain data about the density of water and air (respectively) for different temperatures. The data files have some comment lines starting with `#` and some lines are blank. The rest of the lines contain density data: the temperature in the first column and the corresponding density in the second column. The goal of this exercise is to read the data in such a file, discard commented or blank lines, and plot the density versus the temperature as distinct (small) circles for each data point. Write a function `readTempDenFile` that takes a filename as argument and returns two lists containing respectively the temperature and the density. Call this function on both files, and store the temperature and density in lists called `temp_air_list`, `dens_air_list`, `temp_water_list` and `dens_water_list`. | def readTempDenFile(filename):
infile = open(filename, "r")
temp = []
dens = []
for line in infile:
try:
t, d = line.split()
t = float(t)
d = float(d)
except:
continue
temp.append(t) # N.B. we're now filling out temp and dens lists
dens.append(d)
infile.close()
plot(array(temp), array(dens), "o")  # small circles for each data point, as the exercise asks
xlabel("Temperature (C)")
ylabel("Density (kg/m^3)")
show()
return temp,dens
# run function
temp_air_list, dens_air_list = readTempDenFile("data/density_air.dat")
temp_water_list, dens_water_list = readTempDenFile("data/density_water.dat")
ok.grade("lect6-q2") | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Exercise 6.3: Read acceleration data and find velocitiesA file [data/acc.dat](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/acc.dat) contains measurements $a_0, a_1, \ldots, a_{n-1}$ of the acceleration of an object moving along a straight line. The measurement $a_k$ is taken at time point $t_k = k\Delta t$, where $\Delta t$ is the time spacing between the measurements. The purpose of the exercise is to load the acceleration data into a program and compute the velocity $v(t)$ of the object at some time $t$.In general, the acceleration $a(t)$ is related to the velocity $v(t)$ through $v^\prime(t) = a(t)$. This means that$$v(t) = v(0) + \int_0^t{a(\tau)d\tau}$$If $a(t)$ is only known at some discrete, equally spaced points in time, $a_0, \ldots, a_{n-1}$ (which is the case in this exercise), we must compute the integral above numerically, for example by the Trapezoidal rule:$$v(t_k) \approx v(0) + \Delta t \left(\frac{1}{2}a_0 + \frac{1}{2}a_k + \sum_{i=1}^{k-1}a_i \right), \ \ 1 \leq k \leq n-1. $$We assume $v(0) = 0$ so that also $v_0 = 0$.Read the values $a_0, \ldots, a_{n-1}$ from file into an array `acc_array_63` and plot the acceleration versus time for $\Delta_t = 0.5$. The time should be stored in an array named `time_array_63`.Then write a function `compute_velocity(dt, k, a)` that takes as arguments a time interval $\Delta_t$ `dt`, an index `k` and a list of accelerations `a`, uses the Trapezoidal rule to compute one $v(t_k)$ value and return this value. Experiment with different values of $\Delta t$ and $k$. | dt = 0.5
# read in acceleration
infile = open("data/acc.dat", "r")
alist = []
for line in infile:
alist.append(float(line))
infile.close()
acc_array_63 = array(alist)
time_array_63 = array([e*dt for e in range(len(alist))]) # time is specified by dt and the number of elements in acc.dat
#print(time_array_63, acc_array_63)
# plot
plot(time_array_63, acc_array_63)
xlabel("Time")
ylabel("Acceleration")
show()
def compute_velocity(dt, k, alist):
if not (1 <= k <= (len(alist) - 1)):
raise ValueError
return dt*(.5*alist[0] + .5*alist[k] + sum(alist[1:k]))  # sum over i = 1, ..., k-1 only
dt = 2
k = 4
print(compute_velocity(2, 4, alist))
print(compute_velocity(3, 5, alist))
print(compute_velocity(12, 21, alist))
ok.grade('lect6-q3') | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 15
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
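One way to gain confidence in the Trapezoidal rule code is to feed it a constant acceleration, for which the rule is exact: with $a(t) = 2$ and $v(0) = 0$ we expect $v(t_k) = 2t_k$. A self-contained sketch (the function is re-stated here, with the sum taken over indices 1 to k-1 as in the formula):

```python
def compute_velocity(dt, k, a):
    # Trapezoidal rule: v(t_k) = dt * (a_0/2 + a_k/2 + sum_{i=1}^{k-1} a_i)
    if not 1 <= k <= len(a) - 1:
        raise ValueError("k out of range")
    return dt * (0.5 * a[0] + 0.5 * a[k] + sum(a[1:k]))

dt = 0.5
a = [2.0] * 11  # constant acceleration a(t) = 2, measured at 11 time points
for k in (1, 5, 10):
    v = compute_velocity(dt, k, a)
    print(v, 2 * k * dt)  # the two columns should agree exactly
```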
Python dictionariesSuppose we need to store the temperatures in Oslo, London and Paris. The Python list solution might look like: | temps = [13, 15.4, 17.5]
# temps[0]: Oslo
# temps[1]: London
# temps[2]: Paris | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
In this case we need to remember the mapping between the index and the city name. It would be easier to specify name of city to get the temperature. Containers such as lists and arrays use a continuous series of integers to index elements. However, for many applications such an integer index is not useful.**Dictionaries** are containers where any Python object can be usedas an index. Let's rewrite the previous example using a Python dictionary: | temps = {"Oslo": 13, "London": 15.4, "Paris": 17.5}
print("The temperature in London is", temps["London"]) | The temperature in London is 15.4
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Add a new element to a dictionary: | temps["Madrid"] = 26.0
print(temps) | {'Oslo': 13, 'London': 15.4, 'Paris': 17.5, 'Madrid': 26.0}
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Loop (iterate) over a dictionary: | for city in temps:
print("The temperature in %s is %g" % (city, temps[city])) | The temperature in Oslo is 13
The temperature in London is 15.4
The temperature in Paris is 17.5
The temperature in Madrid is 26
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
The index in a dictionary is called the **key**. A dictionary is said to hold key–value pairs. So in general: | for key in dictionary:
value = dictionary[key]
print(value) | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Does the dictionary have a particular key (*i.e.* a particular data entry)? | if "Berlin" in temps:
print("We have Berlin and its temperature is ", temps["Berlin"])
else:
print("I don't know Berlin's temperature.")
print("Oslo" in temps) # i.e. standard boolean expression | True
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
The keys and values can be reached as list-like *view* objects (wrap them in `list()` if you need a real list): | print("Keys = ", temps.keys())
print("Values = ", temps.values()) | Keys = dict_keys(['Oslo', 'London', 'Paris', 'Madrid'])
Values = dict_values([13, 15.4, 17.5, 26.0])
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
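The difference between a view and a list is that a view tracks later changes to the dictionary, while `list()` takes a snapshot. A small sketch (a fresh dictionary is used so it stands alone; the printed orders assume Python 3.7+, where insertion order is preserved):

```python
temps = {"Oslo": 13, "London": 15.4}

keys = temps.keys()            # a live view, not a copy
print(list(keys))              # ['Oslo', 'London']

temps["Paris"] = 17.5          # the view sees the new key...
print(list(keys))              # ['Oslo', 'London', 'Paris']

snapshot = list(temps.keys())  # ...while list() takes a snapshot
temps["Rome"] = 26.0
print(snapshot)                # ['Oslo', 'London', 'Paris']
print(list(keys))              # ['Oslo', 'London', 'Paris', 'Rome']
```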
Note that the order of the keys should be treated as **arbitrary** (only from Python 3.7 is insertion order guaranteed). Never rely on it; if you need a specific order of the keys then you should explicitly sort: | for key in sorted(temps):
value = temps[key]
print(key, value) | London 15.4
Madrid 26.0
Oslo 13
Paris 17.5
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Remove Oslo key:value: | del temps["Oslo"] # remove Oslo key w/value
print(temps, len(temps)) | {'London': 15.4, 'Paris': 17.5, 'Madrid': 26.0} 3
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Similarly to what we saw for arrays, two variables can refer to the same dictionary: | t1 = temps
t1["Stockholm"] = 10.0
print(temps) | {'London': 15.4, 'Paris': 17.5, 'Madrid': 26.0, 'Stockholm': 10.0}
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
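If you want an independent dictionary rather than a second name for the same one, take a (shallow) copy. A minimal sketch:

```python
temps = {"London": 15.4, "Paris": 17.5}

alias = temps          # both names refer to the SAME dictionary
copy_ = temps.copy()   # a new, independent dictionary (dict(temps) also works)

alias["Oslo"] = 13     # visible through temps as well...
print("Oslo" in temps)   # True
print("Oslo" in copy_)   # False -- the copy is unaffected
```

Note that `copy()` is shallow: if the values are themselves mutable objects, those are still shared.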
So we can see that while we modified *t1*, the *temps* dictionary was also changed. Let's look at a simple example of reading the same data from a file and putting it into a dictionary. We will be reading the file *data/deg2.dat*. | infile = open("data/deg2.dat", "r")
# Start with empty dictionary
temps = {}
for line in infile:
# If you examine the file you will see a ':' after the city name,
# so let's use this as the delimiter for splitting the line.
city, temp = line.split(":")
temps[city] = float(temp)
infile.close()
print(temps) | {'Oslo': 21.8, 'London': 18.1, 'Berlin': 19.0, 'Paris': 23.0, 'Rome': 26.0}
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Exercise 6.4: Make a dictionary from a tableThe file [data/constants.txt](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/constants.txt) contains a table of the values and the dimensions of some fundamental constants from physics. We want to load this table into a dictionary *constants*, where the keys are the names of the constants. For example, *constants['gravitational constant']* holds the value of the gravitational constant (6.67259 $\times$ 10$^{-11}$) in Newton's law of gravitation. Make a function `read_constants(file_path)` that reads and interprets the text in the file passed as argument, and thereafter returns the dictionary. | def read_constants(file_path):
infile = open(file_path, "r")
constants = {} # An empty dictionary to store the constants that are read in from the file
infile.readline(); infile.readline() # Skip the first two lines of the file, since these just contain the column names and the separator.
for line in infile:
words = line.split() # Split each line up into individual words
dimension = words.pop() # pop is a list operation that removes the last element from a list and returns it
value = float(words.pop()) # Again, use pop to obtain the constant itself.
name = " ".join(words) # After the two 'pop' operations above, the words remaining in the 'words' list must be the name of the constant. Join the individual words together, with spaces inbetween, using .join.
constants[name] = value # Create a new key-value pair in the dictionary
return constants
print(read_constants('data/constants.txt'))
ok.grade('lect6-q4') | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 1
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Exercise 6.5: Explore syntax differences: lists vs. dictionariesConsider this code: | t1 = {}
t1[0] = -5
t1[1] = 10.5 | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Explain why the lines above work fine while the ones below do not: | t2 = []
#t2[0] = -5
#t2[1] = 10.5 | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
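The difference is that a dictionary creates a key on assignment, while a list index must already exist; to grow a list you need `append` (or pre-sizing). A sketch:

```python
t1 = {}
t1[0] = -5        # fine: assignment creates the key 0
t1[1] = 10.5      # fine: creates the key 1

t2 = []
try:
    t2[0] = -5    # fails: index 0 does not exist yet
except IndexError as e:
    print(e)      # list assignment index out of range

t2.append(-5)     # grow the list instead
t2.append(10.5)
print(t2)         # [-5, 10.5]

t3 = [None] * 2   # ...or pre-size it, then index assignment works
t3[0] = -5
t3[1] = 10.5
print(t3)         # [-5, 10.5]
```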
What must be done in the last code snippet to make it work properly? Exercise 6.6: Compute the area of a triangleAn arbitrary triangle can be described by the coordinates of its three vertices: $(x_1, y_1), (x_2, y_2), (x_3, y_3)$, numbered in a counterclockwise direction. The area of the triangle is given by the formula:$A = \frac{1}{2}|x_2y_3 - x_3y_2 - x_1y_3 + x_3y_1 + x_1y_2 - x_2y_1|.$Write a function `triangle_area(vertices)` that returns the area of a triangle whose vertices are specified by the argument vertices, which is a nested list of the vertex coordinates. For example, vertices can be [[0,0], [1,0], [0,2]] if the three corners of the triangle have coordinates (0, 0), (1, 0), and (0, 2).Then, assume that the vertices of the triangle are stored in a dictionary and not a list. The keys in the dictionary correspond to the vertex number (1, 2, or 3) while the values are 2-tuples with the x and y coordinates of the vertex. For example, in a triangle with vertices (0, 0), (1, 0), and (0, 2) the vertices argument becomes `{1: (0, 0), 2: (1, 0), 3: (0, 2)}`: | def triangle_area(vertices):
# nb. vertices = {v1: (x,y)}
x2y3 = vertices[2][0] * vertices[3][1]
x3y2 = vertices[3][0] * vertices[2][1]
x1y3 = vertices[1][0] * vertices[3][1]
x3y1 = vertices[3][0] * vertices[1][1]
x1y2 = vertices[1][0] * vertices[2][1]
x2y1 = vertices[2][0] * vertices[1][1]
return .5*abs(x2y3 - x3y2 - x1y3 + x3y1 + x1y2 - x2y1)  # absolute value, as in the formula
print(triangle_area({1: (0,0), 2: (1,0), 3: (0,1)}))
ok.grade('lect6-q6') | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 3
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
String manipulationText in Python is represented as **strings**. Programming with strings is therefore the key to interpreting text in files and constructing new text (*i.e.* **parsing**). First we show some common string operations and then we apply them to real examples. Our sample string used for illustration is: | s = "Berlin: 18.4 C at 4 pm"
Strings behave much like lists/tuples - they are simply a sequence of characters: | print("s[0] = ", s[0])
print("s[1] = ", s[1]) | s[0] = B
s[1] = e
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Substrings are just slices of lists and arrays: | # from index 8 to the end of the string
print(s[8:])
# index 8, 9, 10 and 11 (not 12!)
print(s[8:12])
# from index 8 to 8 from the end of the string
print(s[8:-8]) | 18.4 C
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
You can also find the start of a substring: | # where does "Berlin" start?
print(s.find("Berlin"))
print(s.find("pm"))
print (s.find("Oslo")) | -1
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
In this last example, Oslo does not exist in the string so the return value is -1. We can also check if a substring is contained in a string: | print ("Berlin" in s)
print ("Oslo" in s)
if "C" in s:
print("C found")
else:
print("C not found") | C found
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Search and replaceStrings also support substituting a substring by another string. In general this looks like *s.replace(s1, s2)*, which replaces string *s1* in *s* by string *s2*, *e.g.*: | s = s.replace(" ", "_")
print(s)
s = s.replace("Berlin", "Bonn")
print(s)
# Replace the text before the first colon by 'London'
s = s.replace(s[:s.find(":")], "London")
print(s) | London:_18.4_C_at_4_pm
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Notice that in all these examples we assign the new result back to *s*. One of the reasons we are doing this is strings are actually constant (*i.e* immutable) and therefore cannot be modified *inplace*. We **cannot** write for example: | s[18] = '5'
TypeError: 'str' object does not support item assignment | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
We also encountered examples above where we used the split function to break up a line into separate substrings for a given separator (where a space is the default delimiter). Sometimes we want to split a string into lines - *i.e.* the delimiter is the line ending, such as the [carriage return](http://en.wikipedia.org/wiki/Carriage_return). This can be surprisingly tricky because different computing platforms (*e.g.* Windows, Linux, Mac) use different characters to represent the end of a line. For example, Unix uses '\n'. Luckily Python provides a *cross platform* way of doing this so regardless of what platform created the data file, or what platform you are running Python on, it will do the *right thing*: | t = "1st line\n2nd line\n3rd line"
print ("""original t =
""", t)
# This works here but will give you problems if you are switching
# files between Windows and either Mac or Linux.
print (t.split("\n"))
# Cross platform (ie better) solution
print(t.splitlines()) | ['1st line', '2nd line', '3rd line']
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
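The cross-platform point is easy to demonstrate with a Windows-style string, where each line ends in '\r\n':

```python
unix_text = "1st line\n2nd line\n3rd line"
windows_text = "1st line\r\n2nd line\r\n3rd line"

# split('\n') leaves a stray '\r' on each Windows line...
print(windows_text.split("\n"))   # ['1st line\r', '2nd line\r', '3rd line']

# ...while splitlines() handles \n, \r\n and \r uniformly.
print(unix_text.splitlines())     # ['1st line', '2nd line', '3rd line']
print(windows_text.splitlines())  # ['1st line', '2nd line', '3rd line']
```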
Stripping off leading/trailing whitespaceWhen processing text from a file and composing new strings, we frequently need to trim leading and trailing whitespaces: | s = " text with leading and trailing spaces \n"
print("-->%s<--"%s.strip())
# left strip
print("-->%s<--"%s.lstrip())
# right strip
print("-->%s<--"%s.rstrip()) | --> text with leading and trailing spaces<--
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
join() (the opposite of split())We can join a list of substrings to form a new string. Similarly to *split()* we put strings together with a delimiter inbetween: | strings = ["Newton", "Secant", "Bisection"]
print(", ".join(strings)) | Newton, Secant, Bisection
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
You can prove to yourself that these are inverse operations: | t = delimiter.join(stringlist)
stringlist = t.split(delimiter) | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
As an example, let's split off the first two words on a line: | line = "This is a line of words separated by space"
words = line.split()
print("words = ", words)
line2 = " ".join(words[2:])
print("line2 = ", line2) | words = ['This', 'is', 'a', 'line', 'of', 'words', 'separated', 'by', 'space']
line2 = a line of words separated by space
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
Exercise 6.7: Improve a programThe file [data/densities.dat](https://raw.githubusercontent.com/ggorman/Introduction-to-programming-for-geoscientists/master/notebook/data/densities.dat) contains a table of densities of various substances measured in g/cm$^3$. The following program reads the data in this file and produces a dictionary whose keys are the names of substances, and the values are the corresponding densities. | def read_densities(filename):
infile = open(filename, 'r')
densities = {}
for line in infile:
words = line.split()
density = float(words[-1])
if len(words[:-1]) == 2:
substance = words[0] + ' ' + words[1]
else:
substance = words[0]
densities[substance] = density
infile.close()
return densities
densities = read_densities('data/densities.dat')
print(densities) | {'air': 0.0012, 'gasoline': 0.67, 'ice': 0.9, 'pure water': 1.0, 'seawater': 1.025, 'human body': 1.03, 'limestone': 2.6, 'granite': 2.7, 'iron': 7.8, 'silver': 10.5, 'mercury': 13.6, 'gold': 18.9, 'platinium': 21.4, 'Earth mean': 5.52, 'Earth core': 13.0, 'Moon': 3.3, 'Sun mean': 1.4, 'Sun core': 160.0, 'proton': 280000000000000.0}
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
One problem we face when implementing the program above is that the name of the substance can contain one or two words, and maybe more words in a more comprehensive table. The purpose of this exercise is to use string operations to shorten the code and make it more general. Implement the following two methods in separate functions `read_densities_join` and `read_densities_substrings`, and check that they give the same result.1. Let *substance* consist of all the words but the last, using the join method in string objects to combine the words.2. Observe that all the densities start in the same column of the file and use substrings to divide the line into two parts. (Hint: Remember to strip the first part such that, e.g., the density of ice is obtained as *densities['ice']* and not *densities['ice ']*.) | def read_densities_join(filename):
infile = open(filename, 'r')
densities = {}
for line in infile:
words = line.split()
density = float(words.pop()) # pop is a list operation that removes the last element from a list and returns it
substance = "_".join(words) # join the remaining words with _
densities[substance] = density
infile.close()
return densities
def read_densities_substrings(filename):
infile = open(filename, 'r')
densities = {}
for line in infile:
density = float(line[12:]) # column 13 onwards
substance = line[:12] # up to column 12
substance = substance.strip() # remove trailing spaces
substance = substance.replace(" ", "_") # replace spaces with _
densities[substance] = density
infile.close()
return densities
densities_join = read_densities_join('data/densities.dat')
densities_substrings = read_densities_substrings('data/densities.dat')
print(densities_join)
print(densities_substrings)
ok.grade('lect6-q7') | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running tests
---------------------------------------------------------------------
Test summary
Passed: 2
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
File writingWriting a file in Python is simple. You just collect the text you want to write in one or more strings and, for each string, use a statement along the lines of | outfile.write(string) | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
The write function does not add a newline character so you may have to do that explicitly: | outfile.write(string + '\n') | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
That’s it! Compose the strings and write! Let's do an example. Write a nested list (table) to a file: | # Let's define some table of data
data = [[ 0.75, 0.29619813, -0.29619813, -0.75 ],
[ 0.29619813, 0.11697778, -0.11697778, -0.29619813],
[-0.29619813, -0.11697778, 0.11697778, 0.29619813],
[-0.75, -0.29619813, 0.29619813, 0.75 ]]
# Open the file for writing. Notice the "w" indicates we are writing!
outfile = open("tmp_table.dat", "w")
for row in data:
for column in row:
outfile.write("%14.8f" % column)
outfile.write("\n") # ensure newline
outfile.close() | _____no_output_____ | CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
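As an aside, the same table-writing loop can be written with a `with` statement, which closes the file automatically even if an error occurs. A minimal stand-alone sketch (the file name `tmp_table2.dat` is just illustrative):

```python
# Same table-writing pattern as above, but with a context manager so
# outfile.close() is implicit; 'tmp_table2.dat' is an illustrative name.
data = [[0.75, 0.29619813],
        [-0.29619813, -0.75]]

with open("tmp_table2.dat", "w") as outfile:
    for row in data:
        for column in row:
            outfile.write("%14.8f" % column)
        outfile.write("\n")  # newline after each row

# Read the file back to check what was written
with open("tmp_table2.dat") as infile:
    lines = infile.read().splitlines()
```

The context manager guarantees the data is flushed to disk before the file is read back.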
And that's it - run the above cell and take a look at the file that was generated in your Azure library clone. Exercise 6.8: Write function data to a fileWe want to dump $x$ and $f(x)$ values to a file named function_data.dat, where the $x$ values appear in the first column and the $f(x)$ values appear in the second. Choose $n$ equally spaced $x$ values in the interval [-4, 4]. Here, the function $f(x)$ is given by:$f(x) = \frac{1}{\sqrt{2\pi}}\exp(-0.5x^2)$ | from math import pi
from numpy import sqrt, exp, linspace  # needed below: f is applied to a whole array
# define our function
def f(x):
return (1.0/sqrt(2.0*pi))*exp(-.5*x**2.0)
# let's make our x
xarray = linspace(-4.0, 4.0, 100)
fxs = f(xarray)
# let's zip them up for a simple for loop when writing out
data = zip(xarray, fxs) # this combines each element into a tuple e.g. [(xarray1, fxs1), (xarray2, fxs2) ...]
# write out
outfile = open("ex8_out.dat", "w") # w is for writing!
for x,y in data:
outfile.write("X = %.2f Y = %.2f" % (x, y))
outfile.write("\n") # ensure newline
outfile.close()
ok.grade('lect6-q8')
ok.score() | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Scoring tests
---------------------------------------------------------------------
question 0
Passed: 2
Failed: 0
[ooooooooook] 100.0% passed
---------------------------------------------------------------------
question 6.1
Passed: 3
Failed: 0
[ooooooooook] 100.0% passed
| CC-BY-3.0 | notebook/Lecture-6-Introduction-to-programming-for-geoscientists.ipynb | navjotk/test |
DemLin02: Ill-conditioning of the Vandermonde matrix* TODO: review this demo; the result is not the same as in Miranda's | import numpy as np
from numpy.linalg import norm, cond, solve
import time
import matplotlib.pyplot as plt
%matplotlib notebook
np.set_printoptions(precision=4) | _____no_output_____ | MIT | notebooks/old notebooks/demlin02.ipynb | snowdj/CompEcon-python |
Compute approximation error and matrix condition number | n = np.arange(6, 51)
nn = n.size
errv = np.zeros(nn)
conv = np.zeros(nn)
for i in range(nn):
v = np.vander(1 + np.arange(n[i]))
errv[i] = np.log10(norm(np.identity(n[i]) - solve(v, v)))
conv[i] = np.log10(cond(v))
print('errv =\n', errv) | errv =
[-11.0688 -14.6779 -12.5801 -6.8825 -5.5384 -5.9532 -7.6494 -5.9833
-5.6239 -6.3194 -5.651 -5.8029 -4.5616 -5.6639 -4.912 -5.0873
-4.958 -5.8492 -5.0541 -5.6499 -5.7562 -5.6496 -5.8851 -5.7686
-5.475 -5.3383 -5.4446 -5.0718 -5.4484 -5.3056 -5.3707 -5.7315
-5.7709 -6.0165 -5.7509 -5.0538 -5.838 -6.063 -6.0756 -2.9206
-5.0652 -5.759 -5.8286 -6.3859 -6.0894]
| MIT | notebooks/old notebooks/demlin02.ipynb | snowdj/CompEcon-python |
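As a quick stand-alone check of this ill-conditioning (not part of the original demo), the condition number of the Vandermonde matrix on the nodes $1,\dots,n$ explodes even for modest $n$:

```python
import numpy as np

# Condition number of the n-by-n Vandermonde matrix on the nodes 1..n
conds = [np.linalg.cond(np.vander(1 + np.arange(n))) for n in (4, 8, 12)]
print(["%.1e" % c for c in conds])
```

Each step in $n$ costs several orders of magnitude in conditioning, which is why the identity-residual `errv` above is so far from machine precision.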
Smooth using quadratic function | X = np.vstack([np.ones(nn), n]).T
b = np.linalg.lstsq(X, errv)[0]
errv = np.dot(X, b)
print('b = ', b)
b = np.linalg.lstsq(X, conv)[0]
conv = np.dot(X, b)
print('b = ', b) | b = [-8.003 0.0681]
b = [1.0590e+01 9.1579e-03]
| MIT | notebooks/old notebooks/demlin02.ipynb | snowdj/CompEcon-python |
Plot matrix condition numbers | plt.figure(figsize=[12, 5])
plt.subplot(1, 2, 1)
plt.plot(n, conv)
plt.xlabel('n')
plt.ylabel('Log_{10} Condition Number')
plt.title('Vandermonde Matrix Condition Numbers') | _____no_output_____ | MIT | notebooks/old notebooks/demlin02.ipynb | snowdj/CompEcon-python |
Confusion MatrixA confusion matrix shows the predicted values vs. the actual values by counting the true positives, true negatives, false positives, and false negatives. | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Generate some data | from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=1000, centers=2, cluster_std=3, random_state=42)
print(f"Labels: {y[:10]}")
print(f"Data: {X[:10]}")
# Visualizing both classes
plt.scatter(X[:, 0], X[:, 1], c=y) | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Split our data into training and testing data | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Create a logistic regression model | from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Fit (train) our model by using the training data | classifier.fit(X_train, y_train) | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Validate the model by using the test data | print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}") | Training Data Score: 0.9533333333333334
Testing Data Score: 0.956
| ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
Create a confusion matrix | from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = classifier.predict(X_test)
confusion_matrix(y_true, y_pred) | _____no_output_____ | ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
The accuracy of the model on the test data is (TP + TN) / (TP + FP + TN + FN) | tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + fp + tn + fn) # (111 + 128) / (111 + 5 + 128 + 6)
print(f"Accuracy: {accuracy}") | Accuracy: 0.956
| ADSL | 01-Lesson-Plans/19-Supervised-Machine-Learning/1/Activities/07-Ins_Confusion-Matrixes/Solved/Ins_Confusion_Matrix.ipynb | anirudhmungre/sneaky-lessons |
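The same four counts also give precision and recall; a small stand-alone illustration using the example counts from the comment above (tp = 111, fp = 5, tn = 128, fn = 6):

```python
# Hand-computed metrics from the confusion-matrix counts
tn, fp, fn, tp = 128, 5, 6, 111

precision = tp / (tp + fp)                  # of predicted positives, fraction correct
recall = tp / (tp + fn)                     # of actual positives, fraction found
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(precision, recall, accuracy)
```

These are the quantities `sklearn.metrics` computes for you with `precision_score`, `recall_score`, and `accuracy_score`.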
Reflect Tables into SQLAlchemy ORM | # Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
from sqlalchemy import desc
# These libraries are used in the analysis cells below
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
conn = engine.connect()
inspector = inspect(engine)
inspector.get_table_names()
# reflect an existing database into a new model
# reflect the tables
Base = automap_base()
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
ME = Base.classes.measurement
ST = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine) | _____no_output_____ | ADSL | climate_starter.ipynb | solivas89/sqlalchemy-challenge |
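For context, `inspector.get_table_names()` ultimately reads the database's own schema catalog. With the stdlib `sqlite3` module the equivalent lookup looks like this (a stand-alone sketch on an in-memory database, not on `hawaii.sqlite`):

```python
import sqlite3

# Throwaway in-memory database with two tables mimicking the
# measurement/station layout used in this notebook
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (id INTEGER PRIMARY KEY, date TEXT, prcp REAL, tobs REAL)")
conn.execute("CREATE TABLE station (id INTEGER PRIMARY KEY, station TEXT, name TEXT)")

# SQLite keeps the schema in the sqlite_master catalog table
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
conn.close()
```

SQLAlchemy's automap does the same kind of schema read, then generates mapped classes from it.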
Exploratory Climate Analysis | first_row = session.query(ME).first()
first_row.__dict__
first_row = session.query(ST).first()
first_row.__dict__
columns = inspector.get_columns('measurement')
for column in columns:
print(column["name"], column["type"])
columns = inspector.get_columns('station')
for column in columns:
print(column["name"], column["type"])
session.query(func.min(ME.date)).all()
session.query(func.max(ME.date)).all()
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
previous_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# print(previous_year)
year_query = session.query(ME.date, ME.prcp).\
filter(ME.date >= previous_year).\
order_by(ME.date).all()
# year_query
year_data = pd.DataFrame(year_query)
year_data.set_index('date', inplace = True)
year_data.plot()
plt.xticks(rotation = 'vertical')
# plt.title('Last 12 Months of Precipitation')
plt.xlabel('Date')
plt.ylabel('Inches')
plt.tight_layout()
plt.show()
# Use Pandas to calcualte the summary statistics for the precipitation data
year_data.describe()
# Design a query to show how many stations are available in this dataset?
sel = [func.count(ST.station)]
stations = session.query(*sel).all()
stations
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
sel = [ME.station, func.count(ME.station)]
most_active = session.query(*sel).\
group_by(ME.station).\
order_by(func.count(ME.station).desc()).all()
most_active
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
#once i find the station with the most count from the measurment table we can query the min and max for that station
sel = [func.min(ME.tobs), func.max(ME.tobs), func.avg(ME.tobs)]
session.query(*sel).\
filter(ME.station == 'USC00519281').all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
#can use the same query as above but do a date filter
sel = [ME.tobs]
tobs_data = pd.DataFrame(session.query(*sel).\
filter(ME.date >= previous_year).\
filter(ME.station == 'USC00519281').all())
# tobs_data
tobs_data.plot.hist(bins=12)
plt.xlabel('Temperature')
plt.tight_layout()
plt.show() | _____no_output_____ | ADSL | climate_starter.ipynb | solivas89/sqlalchemy-challenge |
Bonus Challenge Assignment | # This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(ME.tobs), func.avg(ME.tobs), func.max(ME.tobs)).\
filter(ME.date >= start_date).filter(ME.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
start_date = dt.date(2012, 2, 28) - dt.timedelta(days=365)
end_date = dt.date(2012, 3, 5) - dt.timedelta(days=365)
trip_temps = calc_temps(start_date, end_date)
trip_temps
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
fig, ax = plt.subplots(figsize=plt.figaspect(2.))
avg_temp = trip_temps[0][1]
error = trip_temps[0][2] - trip_temps[0][0]  # peak-to-peak (tmax - tmin) for the y error bar
xpos = 1
bar = ax.bar(xpos, avg_temp, yerr=error, alpha=0.5, color='red', align='center')
ax.set(xticks=range(xpos), title="Trip Avg Temp", ylabel="Temperature (F)")
ax.margins(.5, .5)
fig.tight_layout()
fig.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(ME.tobs), func.avg(ME.tobs), func.max(ME.tobs)]  # the measurement table was mapped to ME above
return session.query(*sel).filter(func.strftime("%m-%d", ME.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Stip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
| _____no_output_____ | ADSL | climate_starter.ipynb | solivas89/sqlalchemy-challenge |
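One hedged way to fill the date-range scaffold above with the stdlib `datetime` module (the trip window here is illustrative, not a prescribed answer):

```python
import datetime as dt

# Hypothetical trip window, for illustration only
start = dt.date(2018, 1, 1)
end = dt.date(2018, 1, 7)

# One '%m-%d' string per day, inclusive of both endpoints
n_days = (end - start).days + 1
month_days = [(start + dt.timedelta(days=i)).strftime("%m-%d") for i in range(n_days)]
print(month_days)
```

Each of these strings could then be passed to `daily_normals` in a loop to build the `normals` list.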
Random Clifford Circuit RandomCliffordGate `RandomCliffordGate(*qubits)` represents a random Clifford gate acting on a set of qubits. There is no further parameter to specify, as it is not any particular gate, but a placeholder for a generic random Clifford gate.**Parameters**- `*qubits`: indices of the set of qubits on which the gate acts.Example: | gate = vaeqst.RandomCliffordGate(0,1)
gate | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
`RandomCliffordGate.random_clifford_map()` invokes a random sampling of the Clifford unitary, returning it in the form of an operator mapping table $M$ and the corresponding sign indicator $h$, such that under the mapping, any Pauli operator $\sigma_g$ specified by the binary representation $g$ (and localized within the gate support) gets mapped to$$\sigma_g \to \prod_{i=1}^{2n} (-)^{h_i}\sigma_{M_i}^{g_i}.$$The binary representation is in the $g=(x_0,z_0,x_1,z_1,\cdots)$ basis. | gate.random_clifford_map() | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
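As an aside, the $g=(x_0,z_0,x_1,z_1,\cdots)$ convention can be illustrated without the library: on each qubit, I/X/Y/Z correspond to the bit pairs (0,0)/(1,0)/(1,1)/(0,1). A stand-alone sketch (not `vaeqst` code):

```python
# Map a Pauli string such as 'XZ' to its binary representation in the
# (x0, z0, x1, z1, ...) ordering described above.
PAULI_BITS = {'I': (0, 0), 'X': (1, 0), 'Y': (1, 1), 'Z': (0, 1)}

def pauli_to_binary(pauli_string):
    g = []
    for p in pauli_string:
        g.extend(PAULI_BITS[p])
    return g

print(pauli_to_binary('XZ'))  # x0 = 1 for the X, z1 = 1 for the Z
print(pauli_to_binary('IY'))
```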
RandomCliffordLayer `RandomCliffordLayer(*gates)` represents a layer of random Clifford gates. **Parameters:*** `*gates`: quantum gates contained in the layer.The gates in the same layer should not overlap with each other (all gates need to commute). To ensure this, we do not manually add gates to the layer, but use the higher-level function `.gate()` provided by `RandomCliffordCircuit` (see discussion later).Example: | layer = vaeqst.RandomCliffordLayer(vaeqst.RandomCliffordGate(0,1),vaeqst.RandomCliffordGate(3,5))
layer | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
It hosts a list of gates: | layer.gates | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Given the total number of qubits $N$, the layer can sample the Clifford unitary (as a product of the gates) $U=\prod_{a}U_a$, and represent it as a single operator mapping (because the gates do not overlap, they map operators in different supports independently). | layer.random_clifford_map(6) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
RandomCliffordCircuit `RandomCliffordCircuit()` represents a quantum circuit of random Clifford gates. Methods Construct the CircuitExample: create a random Clifford circuit. | circ = vaeqst.RandomCliffordCircuit() | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Use `.gate(*qubits)` to add random Clifford gates to the circuit. | circ.gate(0,1)
circ.gate(2,4)
circ.gate(1,4)
circ.gate(0,2)
circ.gate(3,5)
circ.gate(3,4)
circ | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Gates will automatically be arranged into layers. Each new gate added to the circuit will commute through the layers if it is not blocked by the existing gates. If the number of qubits `.N` is not explicitly defined, it will be dynamically inferred from the circuit width, as the largest qubit index of all gates + 1. | circ.N | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Navigate in the Circuit `.layers_forward()` and `.layers_backward()` provide two generators to iterate over layers in forward and backward order respectively. | list(circ.layers_forward())
list(circ.layers_backward()) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
`.first_layer` and `.last_layer` point to the first and the last layers. | circ.first_layer
circ.last_layer | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Use `.next_layer` and `.prev_layer` to move forward and backward. | circ.first_layer.next_layer, circ.last_layer.prev_layer | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Locate a gate in the circuit. | circ.first_layer.next_layer.next_layer.gates[0] | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Apply Circuit to State `.forward(state)` and `.backward(state)` apply the circuit to transform the state forward / backward. * Each call will sample a new random realization of the random Clifford circuit.* The transformation will create a new state; the original state remains untouched. | rho = vaeqst.StabilizerState(6, r=0)
rho
circ.forward(rho)
circ.backward(rho) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
POVM `.povm(nsample)` provides a generator to sample $n_\text{sample}$ from the prior POVM based on the circuit by back evolution. | list(circ.povm(3)) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
BrickWallRCC `BrickWallRCC(N, depth)` is a subclass of `RandomCliffordCircuit`. It represents the circuit with 2-qubit gates arranged following a brick wall pattern. | circ = vaeqst.BrickWallRCC(16,2)
circ | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Create an initial state as a computational basis state. | rho = vaeqst.StabilizerState(16, r=0)
rho | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Backward evolve the state to obtain the measurement operator. | circ.backward(rho) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
OnSiteRCC `OnSiteRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents the circuit of a single layer of on-site Clifford gates. It can be used to generate random Pauli states. | circ = vaeqst.OnSiteRCC(16)
circ
rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
GlobalRCC `GlobalRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents a circuit consisting of a single global Clifford gate. It can be used to generate Clifford states. | circ = vaeqst.GlobalRCC(16)
circ
rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho) | _____no_output_____ | MIT | circuit.ipynb | hongyehu/Sim-Clifford |
Connection | sim = SIM928('COM1','4')
sim.query('*IDN') | _____no_output_____ | MIT | TWPA/notebooks/testsim928.ipynb | biqute/QTLab2122 |
Commands | # power on
sim.set_output(1)
# set the voltage
sim.set_voltage(4e-3)
sim.ask_voltage() | Voltage = 0.004
V
| MIT | TWPA/notebooks/testsim928.ipynb | biqute/QTLab2122 |
Disconnection | sim.close_all() | Stanford_Research_Systems,SIM928,s/n030465,ver2.2
Stanford_Research_Systems,SIM900,s/n152741,ver3.6
| MIT | TWPA/notebooks/testsim928.ipynb | biqute/QTLab2122 |
Additional training functions [`train`](/train.htmltrain) provides a number of extension methods that are added to [`Learner`](/basic_train.htmlLearner) (see below for a list and details), along with three simple callbacks:- [`ShowGraph`](/train.htmlShowGraph)- [`GradientClipping`](/train.htmlGradientClipping)- [`BnFreeze`](/train.htmlBnFreeze) | from fastai.gen_doc.nbdoc import *
from fastai.train import *
from fastai.vision import *
| _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
[`Learner`](/basic_train.htmlLearner) extension methods These methods are automatically added to all [`Learner`](/basic_train.htmlLearner) objects created after importing this module. They provide convenient access to a number of callbacks, without requiring them to be manually created. | show_doc(fit_one_cycle)
show_doc(one_cycle_scheduler) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
See [`OneCycleScheduler`](/callbacks.one_cycle.htmlOneCycleScheduler) for details. | show_doc(lr_find) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
See [`LRFinder`](/callbacks.lr_finder.htmlLRFinder) for details. | show_doc(to_fp16) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
See [`MixedPrecision`](/callbacks.fp16.htmlMixedPrecision) for details. | show_doc(to_fp32)
show_doc(mixup) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
See [`MixUpCallback`](/callbacks.mixup.htmlMixUpCallback) for more details. Additional callbacks We'll show examples below using our MNIST sample. As usual the `on_something` methods are directly called by the fastai library, no need to call them yourself. | path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(ShowGraph, title_level=3) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
```python
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=ShowGraph)
learn.fit(3)
```  | show_doc(ShowGraph.on_epoch_end)
show_doc(GradientClipping)
learn = create_cnn(data, models.resnet18, metrics=accuracy,
callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)
show_doc(GradientClipping.on_backward_end)
show_doc(BnFreeze) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
For batchnorm layers where `requires_grad==False`, you generally don't want to update their moving average statistics, in order to avoid the model's statistics getting out of sync with its pre-trained weights. You can add this callback to automate this freezing of statistics (internally, it calls `eval` on these layers). | learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=BnFreeze)
learn.fit(1)
show_doc(BnFreeze.on_epoch_begin) | _____no_output_____ | Apache-2.0 | docs_src/train.ipynb | hiroaki-shishido/fastai |
Python Programming Course Internal training course, CIEMAT. Madrid, October 2021Antonio Delgado Perishttps://github.com/andelpe/curso-intro-python/ Tema 9 - The Python ecosystem: the standard library and other popular packages Objectives- Get to know some modules of the standard library - Interaction with the interpreter itself - Interaction with the operating system - File system management - Process management and concurrency - Development, debugging and profiling - Numbers and mathematics - Network access and functionality - Utilities for advanced handling of functions and iterators- Introduce the ecosystem of scientific Python libraries - The NumPy/SciPy stack - Graphics - Mathematics and statistics - Machine learning - Natural language processing - Biology - Physics The standard libraryOne of Python's slogans is _batteries included_. It refers to the amount of functionality available in the basic Python installation, with no need to resort to external packages.In this section we briefly review some of the available modules. For much more information: https://docs.python.org/3/library/ Interacting with the Python interpreter: `sys`Offers both information on, and the ability to manipulate, various aspects of the Python environment itself.- `sys.argv`: List of the arguments passed to the running program.- `sys.version`: String with the current Python version.- `sys.stdin/out/err`: File objects used by the interpreter for input, output and error.- `sys.exit`: Function to end the program. Interacting with the operating system: `os`A _portable_ interface to operating-system-dependent functionality.It contains very varied functionality, sometimes at a very low level.- `os.environ`: dictionary of environment variables (modifiable)- `os.getuid`, `os.getgid`, `os.getpid`...: Get the UID, GID, process ID, etc. 
(Unix)- `os.uname`: information about the operating system- `os.getcwd`, `os.chdir`, `os.mkdir`, `os.remove`, `os.stat`...: operations on the file system- `os.exec`, `os.fork`, `os.kill`...: process managementFor some of these operations it is more convenient to use more specific, or higher-level, modules. Operations on the file system- For _path_ manipulation, deletion, creation of directories, etc.: `pathlib` (modern), or `os.path` (classic)- Expansion of file-name _wildcards_ (Unix _globs_): `glob`- For high-level copy (and other) operations: `shutil`- For temporary (throwaway) files and directories: `tempfile` Process management- `threading`: high-level interface for _thread_ management. - It suffers from Python's _Global Interpreter Lock_ problem: a global _lock_ that ensures that only one thread is executing Python code at any given moment (except during I/O pauses). This prevents performance gains with multiple CPUs. - `queue`: implements multi-producer, multi-consumer queues for the safe exchange of information between multiple _threads_.- `multiprocessing`: an interface that mimics `threading`, but uses multiple processes instead of threads (avoiding the GIL problem). It supports Unix and Windows, and offers local and remote concurrency. - The `multiprocessing.shared_memory` module: eases the allocation and management of memory shared between several processes.- `subprocess`: Allows launching and managing subprocesses (external commands) from Python. - For Python >= 3.5, using the `run` function is recommended, except for complex cases. | from subprocess import run
def showRes(res):
print('\n------- ret code:', res.returncode, '; err:', res.stderr)
if res.stdout:
print('\n'.join(res.stdout.splitlines()[:3]))
print()
print('NO SHELL')
res = run(['ls', '-l'], capture_output=True, text=True)
showRes(res)
print('WITH SHELL')
res = run('ls -l', shell=True, capture_output=True, text=True)
showRes(res)
print('NO OUTPUT')
res = run(['ls', '-l'])
showRes(res)
print('ERROR NO-CHECK')
res = run(['ls', '-l', 'XXXX'], capture_output=True, text=True)
showRes(res)
print('ERROR CHECK')
try:
res = run(['ls', '-l', 'XXXX'], capture_output=True, check=True)
showRes(res)
except Exception as ex:
print(f'--- Error of type {type(ex)}:\n {ex}\n')
print('NO OUTPUT')
res = run(['ls', '-l', 'XXXX'])
showRes(res) | _____no_output_____ | Apache-2.0 | tema_9.ipynb | andelpe/curso-intro-python |
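The file-system modules listed above (`pathlib`, `glob`-style expansion, `shutil`, `tempfile`) can be combined as in this short stand-alone sketch; it works inside a throwaway temporary directory, so nothing is left behind:

```python
import pathlib
import shutil
import tempfile

# Disposable working directory
tmpdir = pathlib.Path(tempfile.mkdtemp())

# pathlib: path arithmetic with '/' and simple file I/O
(tmpdir / "a.txt").write_text("hello")
(tmpdir / "b.txt").write_text("world")
(tmpdir / "notes.md").write_text("misc")

# glob-style wildcard expansion, here via pathlib's own .glob()
txt_files = sorted(p.name for p in tmpdir.glob("*.txt"))
print(txt_files)

# shutil: high-level recursive removal
shutil.rmtree(tmpdir)
print(tmpdir.exists())
```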
Numbers and mathematics- `math`: mathematical operations defined by the C standard (`cmath`, for complex numbers)- `random`: pseudo-random number generators for several distributions- `statistics`: basic statistics Advanced handling of functions and iterators- `itertools`: utilities to create iterators efficiently.- `functools`: higher-order functions that manipulate other functions- `operator`: functions corresponding to Python's intrinsic operators | import operator
operator.add(3, 4) | _____no_output_____ | Apache-2.0 | tema_9.ipynb | andelpe/curso-intro-python |
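A few stand-alone one-liners with the modules just listed:

```python
import functools
import itertools
import operator
import statistics

# functools.reduce + operator.mul: product of a sequence
product = functools.reduce(operator.mul, [1, 2, 3, 4])

# itertools.islice: take the first items of an infinite iterator
squares = list(itertools.islice((n * n for n in itertools.count(1)), 5))

# statistics: basic descriptive statistics
mean = statistics.mean([1.0, 2.0, 3.0])

print(product, squares, mean)
```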
Acknowledgement**Origin:** This notebook was downloaded from https://github.com/justmarkham/scikit-learn-videos. Some modifications have been made. Agenda1. K-nearest neighbors (KNN) classification2. Logistic Regression3. Review of model evaluation4. Classification accuracy5. Confusion matrix6. Adjusting the classification threshold 1. K-nearest neighbors (KNN) classification 1. Pick a value for K.2. Search for the K observations in the training data that are "nearest" to the measurements of the unknown point.3. Use the most popular response value from the K nearest neighbors as the predicted response value for the unknown point. Example training data KNN classification map (K=1) KNN classification map (K=5) *Image Credits: [Data3classes](http://commons.wikimedia.org/wiki/File:Data3classes.png/media/File:Data3classes.png), [Map1NN](http://commons.wikimedia.org/wiki/File:Map1NN.png/media/File:Map1NN.png), [Map5NN](http://commons.wikimedia.org/wiki/File:Map5NN.png/media/File:Map5NN.png) by Agor153. Licensed under CC BY-SA 3.0* 2. Logistic Regression* Linear model for classification; assumes a linear relationship between features & target* Returns class probabilities* Hyperparameter: C - regularization coefficient* Fundamentally suited for binary (two-class) classification | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_blobs
X,y = make_blobs(n_features=2, n_samples=1000, cluster_std=2,centers=2)
plt.scatter(X[:,0],X[:,1],c=y,s=10)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(random_state=0, solver='lbfgs')
lr.fit(X,y)
h = .02
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = lr.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
plt.scatter(X[:,0],X[:,1],c=y,s=10) | _____no_output_____ | BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
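The three KNN steps listed above (pick K, find the K nearest training points, take the majority vote) can be sketched directly with NumPy. This is an illustration of the idea, not scikit-learn's implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    # Step 2: Euclidean distance from the unknown point to every training point
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]          # indices of the k nearest neighbors
    # Step 3: most popular label among those neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Tiny synthetic set: class 0 near the origin, class 1 near (5, 5)
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))
```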
3. Review of model evaluation- Need a way to choose between models: different model types, tuning parameters, and features- Use a **model evaluation procedure** to estimate how well a model will generalize to out-of-sample data- Requires a **model evaluation metric** to quantify the model performance 4. Classification accuracy[Pima Indians Diabetes dataset](https://www.kaggle.com/uciml/pima-indians-diabetes-database) originally from the UCI Machine Learning Repository | # read the data into a pandas DataFrame
path = 'pima-indians-diabetes.data'
col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv(path, header=None, names=col_names)
# print the first 5 rows of data
pima.head() | _____no_output_____ | BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
**Question:** Can we predict the diabetes status of a patient given their health measurements? | # define X and y
feature_cols = ['pregnant', 'insulin', 'bmi', 'age']
X = pima[feature_cols]
y = pima.label
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(random_state=0, solver='lbfgs')
logreg.fit(X_train, y_train)
# make class predictions for the testing set
y_pred_class = logreg.predict(X_test) | _____no_output_____ | BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
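As noted in the logistic regression bullets above, the model returns class probabilities, and these are what agenda item 6 (adjusting the classification threshold) operates on. A minimal, self-contained sketch on a synthetic dataset (so it does not depend on the local data file; the dataset parameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(solver='lbfgs').fit(X_tr, y_tr)

# column 1 of predict_proba is P(class = 1) for each test point
proba = clf.predict_proba(X_te)[:, 1]

y_pred_default = (proba >= 0.5).astype(int)  # same as clf.predict(X_te)
y_pred_low = (proba >= 0.3).astype(int)      # lower threshold -> more positives
```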
**Classification accuracy:** percentage of correct predictions | # calculate accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class)) | 0.6770833333333334
| BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
**Null accuracy:** accuracy that could be achieved by always predicting the most frequent class | # examine the class distribution of the testing set (using a Pandas Series method)
y_test.value_counts()
# calculate the percentage of ones
y_test.mean()
# calculate the percentage of zeros
1 - y_test.mean()
# calculate null accuracy (for binary classification problems coded as 0/1)
max(y_test.mean(), 1 - y_test.mean())
# calculate null accuracy (for multi-class classification problems)
y_test.value_counts().head(1) / len(y_test) | _____no_output_____ | BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
Comparing the **true** and **predicted** response values | # print the first 25 true and predicted responses
print('True:', y_test.values[0:25])
print('Pred:', y_pred_class[0:25]) | True: [1 0 0 1 0 0 1 1 0 0 1 1 0 0 0 0 1 0 0 0 1 1 0 0 0]
Pred: [0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
| BSD-3-Clause | Day04/1-classification/classificationV2.ipynb | kxu08/Bootcamp2019 |
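A confusion matrix (agenda item 5) summarizes this true-vs-predicted comparison in four counts. A minimal sketch using the 25 values printed above:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# the first 25 true and predicted responses printed above
y_true = np.array([1,0,0,1,0,0,1,1,0,0,1,1,0,0,0,0,1,0,0,0,1,1,0,0,0])
y_pred = np.array([0,0,0,0,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0])

# rows = true class, columns = predicted class:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred)
print(cm)
```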