markdown: string (length 0 to 37k)
code: string (length 1 to 33.3k)
path: string (length 8 to 215)
repo_name: string (length 6 to 77)
license: string (15 classes)
Nice! We made our object quack like a duck without defining it to be a duck. LengthyInteger/LengthDecimal does not inherit from a class that provides the needed functionality. You've seen an example of taking an object that does a thing and modifying it to conform to an interface. But before you go off and start think...
# copied directly from "Fluent Python", pp. 298-300
from array import array
import reprlib
import math
import numbers
import functools
import operator
import itertools

class Vector:
    typecode = 'd'

    def __init__(self, components):
        self._components = array(self.typecode, components)

    def __iter__(sel...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
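A minimal sketch of the same idea (a hypothetical class, not from the notebook): implementing `__len__` and `__getitem__` is enough for `len()`, indexing, and even iteration to work, without inheriting from any sequence class:

```python
class QuackList:
    """Not a subclass of list, but quacks like a sequence."""
    def __init__(self, *items):
        self._items = items

    def __len__(self):
        # len() simply calls __len__
        return len(self._items)

    def __getitem__(self, index):
        # indexing -- and, as a fallback, iteration -- use __getitem__
        return self._items[index]

q = QuackList('a', 'b', 'c')
print(len(q))    # 3
print(q[1])      # 'b'
print(list(q))   # ['a', 'b', 'c'] -- iteration falls back to __getitem__
```

Nothing here declares an interface; Python discovers it by calling the dunder methods.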
So what should I do? Don't...
- Don't implement every possible interface for every object you build. Just do enough that it works.
- Don't define new interfaces. The Python data model includes a very robust set of interface definitions.
- Don't define new abstract base classes (unless you're building a brand new framework)....
def object_to_str(obj):
    try:
        return str(obj)
    except TypeError:
        return "Don't know how to represent argument as a string"

object_to_str({'a': 1, 'b': [5]})
object_to_str(open('tmp.txt', 'w'))
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Type Tests II: Do test an object's interface with an Abstract Base Class
from collections import abc

my_dict = {}
isinstance(my_dict, abc.Mapping)
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
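Some ABCs also recognize conforming classes structurally: `collections.abc.Sized`, `Iterable`, and a few others implement `__subclasshook__`, so any class defining the right dunder method passes the `isinstance` check without subclassing or registering. A small sketch (hypothetical class):

```python
from collections import abc

class Bag:
    """Defines __len__ but never subclasses or registers with any ABC."""
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

b = Bag([1, 2, 3])
print(isinstance(b, abc.Sized))    # True -- __len__ alone satisfies Sized
print(isinstance(b, abc.Mapping))  # False -- Mapping is not duck-checked
```

This is why testing against an ABC is more flexible than testing against a concrete type like `dict`.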
This enables inline graphics in IPython (Jupyter) notebooks and imports the functions necessary for plotting as plt. In addition, we import numpy as np. Let's prepare some data:
x = np.linspace(0, 10, 20)
y = x ** 2
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Plot is as easy as this:
plt.plot(x,y);
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Line styles and labels are controlled in a way similar to MATLAB:
plt.plot(x, y, 'r--o')
plt.xlabel('x')
plt.ylabel('y')
plt.title('title');
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
You can plot several individual lines at once:
plt.plot(x, y, 'r--o',
         x, y ** 1.1, 'bs',
         x, y ** 1.2, 'g^-');
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
One more example:
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)

# the histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)

plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid...
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
If you feel a bit playful (only in matplotlib > 1.3):
with plt.xkcd():
    x = np.linspace(0, 1)
    y = np.sin(4 * np.pi * x) * np.exp(-5 * x)
    plt.fill(x, y, 'r')
    plt.grid(False)
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
The following example is from "matplotlib - 2D and 3D plotting in Python", a great place to start for people interested in matplotlib.
n = np.array([0, 1, 2, 3, 4, 5])
xx = np.linspace(-0.75, 1., 100)
x = np.linspace(0, 5, 10)

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
axes[0].scatter(xx, xx + 0.25 * np.random.randn(len(xx)))
axes[1].step(n, n**2, lw=2)
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[3].fill_between(x, x**2, x**3, co...
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
When you are going to plot something more or less complicated in Matplotlib, the first thing to do is open the Matplotlib example gallery and choose the example closest to your case. You can load Python code (or basically any text file) directly into the notebook. This time we download code from the Matplotlib example gallery:
# %load http://matplotlib.org/mpl_examples/pylab_examples/griddata_demo.py
from numpy.random import uniform, seed
from matplotlib.mlab import griddata
import matplotlib.pyplot as plt
import numpy as np

# make up data.
#npts = int(raw_input('enter # of random points to plot:'))
seed(0)
npts = 200
x = uniform(-2, 2, npts...
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Maps ... using Basemap In order to create a map, we first have to import some data. We are going to use the NCEP reanalysis file from the previous section:
from netCDF4 import Dataset
f = Dataset('air.sig995.2012.nc')
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Here we create a netCDF variable object for air (we would like to have access to some of its attributes), but from lat and lon we import only the data values:
air = f.variables['air']
lat = f.variables['lat'][:]
lon = f.variables['lon'][:]
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
The easiest way to look at the array is imshow:
plt.imshow(air[0, :, :])
plt.colorbar();
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
But we want a real map :) First, convert the air data from Kelvin to Celsius:
air_c = air[:] - 273.15
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Our coordinate variables are vectors:
lat.shape
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
For the map we need 2D coordinate arrays. Convert lon and lat to 2D:
lon2, lat2 = np.meshgrid(lon,lat)
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
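To see what `meshgrid` produces, here is a small self-contained check (standalone toy vectors, not the NCEP data): each output is 2D with shape `(len(lat), len(lon))`, so every grid point gets its own coordinate pair.

```python
import numpy as np

lon = np.array([0., 90., 180., 270.])   # 4 longitudes
lat = np.array([60., 75., 90.])         # 3 latitudes

lon2, lat2 = np.meshgrid(lon, lat)

print(lon2.shape)   # (3, 4): rows follow lat, columns follow lon
print(lon2[0])      # longitudes repeated along each row
print(lat2[:, 0])   # latitudes repeated down each column
```

With the real data, `lon2[i, j]` and `lat2[i, j]` give the coordinates of grid cell `(i, j)`, which is exactly what the map-projection call needs.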
Import Basemap, a library for plotting 2D data on maps:
from mpl_toolkits.basemap import Basemap
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Create a Basemap instance (with certain characteristics) and convert lon/lat to map coordinates:
m = Basemap(projection='npstere', boundinglat=60, lon_0=0, resolution='l')
x, y = m(lon2, lat2)
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
Creating the map now is only two lines:
m.drawcoastlines()
m.contourf(x, y, air_c[0, :, :])
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
We can make the map look prettier by adding a couple of lines:
fig = plt.figure(figsize=(15, 7))

m.fillcontinents(color='gray', lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80., 81., 20.))
m.drawmeridians(np.arange(-180., 181., 20.))
m.drawmapboundary(fill_color='white')
m.contourf(x, y, air_c[0, :, :], 40)
plt.title('Monthly mean SAT')
plt.colorbar()
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
You can change map characteristics by changing the Basemap instance:
m = Basemap(projection='ortho', lat_0=45, lon_0=-100, resolution='l')
x, y = m(lon2, lat2)
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
While the rest of the code might be the same:
fig = plt.figure(figsize=(15, 7))

#m.fillcontinents(color='gray',lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80., 81., 20.))
m.drawmeridians(np.arange(-180., 181., 20.))
m.drawmapboundary(fill_color='white')
cs = m.contourf(x, y, air_c[0, :, :], 20)
plt.title('Monthly mean SAT')
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
One more map example:
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90,
            llcrnrlon=0, urcrnrlon=360, resolution='c')
x, y = m(lon2, lat2)

fig = plt.figure(figsize=(15, 7))
#m.fillcontinents(color='gray',lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80., 81., 20.))
m.drawmeridians(np.arange(0., 360., 20.))
m.d...
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
geography-munich/sciprog
apache-2.0
The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost...
good_way_to_initialize = FermionOperator('3^ 1', -1.7)
print(good_way_to_initialize)

bad_way_to_initialize = -1.7 * FermionOperator('3^ 1')
print(bad_way_to_initialize)

identity = FermionOperator('')
print(identity)

zero_operator = FermionOperator()
print(zero_operator)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples.
my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)
print(my_operator)
print(my_operator.terms)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
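The way `.terms` drives arithmetic can be sketched with a toy stand-in (an illustration of the idea, not OpenFermion's implementation): terms map operator tuples to coefficients, `+` builds a new object, and `+=` updates the dictionary in place.

```python
class ToyOperator:
    """Toy stand-in for FermionOperator: a dict from term tuples to coefficients."""
    def __init__(self, term=None, coefficient=1.0):
        self.terms = {} if term is None else {term: coefficient}

    def __add__(self, other):
        # '+' copies both operands into a fresh object
        result = ToyOperator()
        result.terms = dict(self.terms)
        for term, coeff in other.terms.items():
            result.terms[term] = result.terms.get(term, 0) + coeff
        return result

    def __iadd__(self, other):
        # '+=' mutates self.terms in place -- no copy of self
        for term, coeff in other.terms.items():
            self.terms[term] = self.terms.get(term, 0) + coeff
        return self

a = ToyOperator(('3^', '1'), -1.7)
b = ToyOperator(('4^', '3^', '9', '1'), 1 + 2j)
c = a + b   # new object, both operands copied
a += b      # in-place update of a.terms
print(c.terms == a.terms)   # True -- same result, different cost
```

This is why, for large sums, the in-place form is noticeably cheaper: it never duplicates the growing dictionary.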
Manipulating the FermionOperator data structure So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just ad...
from openfermion.ops import FermionOperator

term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 + term_2
print(my_operator)

my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator += term_2
print('')
print(my_opera...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the inplace method +=, which is more efficient. This is especially important when trying to construct ...
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 - 33. * term_2
print(my_operator)

my_operator *= 3.17 * (term_2 + term_1) ** 2
print('')
print(my_operator)

print('')
print(term_2 ** 3)

print('')
print(term_1 == 2. * term_1 - term_1)
print(term_1 == my_operat...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
from openfermion.utils import commutator, count_qubits, hermitian_conjugated, normal_ordered

# Get the Hermitian conjugate of a FermionOperator,
# count its qubits, check if it is normal-ordered.
term_1 = FermionOperator('4^ 3 3^', 1. + 2.j)
print(hermitian_conjugated(term_1))
print(term_1.is_normal_ordered())
print(coun...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
The QubitOperator data structure The QubitOperator data structure is another essential part of openfermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The inte...
from openfermion.ops import QubitOperator

my_first_qubit_operator = QubitOperator('X1 Y2 Z3')
print(my_first_qubit_operator)
print(my_first_qubit_operator.terms)

operator_2 = QubitOperator('X3 Z4', 3.17)
operator_2 -= 77. * my_first_qubit_operator
print('')
print(operator_2)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Jordan-Wigner and Bravyi-Kitaev openfermion provides functions for mapping FermionOperators to QubitOperators.
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev
from openfermion.utils import eigenspectrum, hermitian_conjugated

# Initialize an operator.
fermion_operator = FermionOperator('2^ 0', 3.17)
fermion_operator += hermitian_conjugated(fermion_operator)
print(fermi...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero...
from openfermion.transforms import reverse_jordan_wigner

# Initialize QubitOperator.
my_operator = QubitOperator('X0 Y1 Z2', 88.)
my_operator += QubitOperator('Z1 Z4', 3.17)
print(my_operator)

# Map QubitOperator to a FermionOperator.
mapped_operator = reverse_jordan_wigner(my_operator)
print('')
print(mapped_operato...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Sparse matrices and the Hubbard model Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOpe...
from openfermion.hamiltonians import fermi_hubbard
from openfermion.transforms import get_sparse_operator, jordan_wigner
from openfermion.utils import get_ground_state

# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
coulomb = 1.
magnetic_field = 0.5
chemical_potential = 0.25
periodic = 1
spinless = 1

# Ge...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Hamiltonians in the plane wave basis A user can write plugins to openfermion which allow for the use of, e.g., a third-party electronic structure package to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc. using Gaussian basis sets. We may provide scripts which...
from openfermion.hamiltonians import jellium_model
from openfermion.utils import eigenspectrum, fourier_transform, Grid
from openfermion.transforms import jordan_wigner

# Let's look at a very small model of jellium in 1D.
grid = Grid(dimensions=1, length=3, scale=1.0)
spinless = True

# Get the momentum Hamiltonian.
m...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Basics of MolecularData class Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure...
from openfermion.hamiltonians import MolecularData

# Set parameters to make a simple molecule.
diatomic_bond_length = .7414
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = str(diatomic_bond_length)

# Make molecule and print out a few i...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenF...
# Set molecule parameters.
basis = 'sto-3g'
multiplicity = 1
bond_length_interval = 0.1
n_points = 25

# Generate molecule at different bond lengths.
hf_energies = []
fci_energies = []
bond_lengths = []
for point in range(3, n_points + 1):
    bond_length = bond_length_interval * point
    bond_lengths += [bond_length]...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
The geometry data needed to generate MolecularData can also be retrieved from the PubChem online database by inputting the molecule's name.
from openfermion.utils import geometry_from_pubchem

methane_geometry = geometry_from_pubchem('methane')
print(methane_geometry)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
InteractionOperator and InteractionRDM for efficient numerical representations Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{p...
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import get_fermion_operator, get_sparse_operator, jordan_wigner
from openfermion.utils import get_ground_state
import numpy
import scipy
import scipy.linalg

# Load saved file for LiH.
diatomic_bond_length = 1.45
geometry = [('Li', (0., 0., ...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Quadratic Hamiltonians and Slater determinants The general electronic structure Hamiltonian $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or is quartic in the fermionic creation and annihilation operators. H...
from openfermion.hamiltonians import mean_field_dwave
from openfermion.transforms import get_quadratic_hamiltonian

# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
sc_gap = 1.
periodic = True

# Get FermionOperator.
mean_field_model = mean_field_dwave(
    x_dimension, y_dimension, tunneling, sc_gap, period...
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Any quadratic Hamiltonian may be rewritten in the form $$H = \sum_p \varepsilon_p b^\dagger_p b_p + \text{constant},$$ where the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ m...
orbital_energies, constant = quadratic_hamiltonian.orbital_energies()
print(orbital_energies)
print()
print(constant)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
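For a number-conserving quadratic Hamiltonian $H = \sum_{pq} M_{pq} \, b^\dagger_p b_q$, the orbital energies $\varepsilon_p$ are just the eigenvalues of the Hermitian coefficient matrix $M$. A tiny numerical sketch (a made-up 2x2 matrix, not the mean-field d-wave model above):

```python
import numpy as np

# Hypothetical Hermitian coefficient matrix M of a quadratic Hamiltonian
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Orbital energies = eigenvalues of M (real, since M is Hermitian)
orbital_energies = np.linalg.eigvalsh(M)
print(orbital_energies)   # [1. 3.]
```

The pairing (non-number-conserving) case requires a Bogoliubov transformation instead of a plain eigendecomposition, which is what OpenFermion handles internally.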
Eigenstates of quadratic hamiltonians are also known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that c...
from openfermion.utils import gaussian_state_preparation_circuit

circuit_description, start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian)
for parallel_ops in circuit_description:
    print(parallel_ops)
print('')
print(start_orbitals)
examples/openfermion_tutorial.ipynb
jarrodmcc/OpenFermion
apache-2.0
Create a netCDF4.Dataset object. f is a Dataset object representing an open netCDF file. Printing the object gives you summary information, similar to ncdump -h.
f = netCDF4.Dataset('data/rtofs_glo_3dz_f006_6hrly_reg3.nc')
print(f)
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Access a netCDF variable. Variable objects are stored by name in the variables dict. Printing the variable yields summary info (including all the attributes). No actual data is read yet; we just have a reference to the variable object with metadata.
print(f.variables.keys())          # get all variable names
temp = f.variables['temperature']  # temperature variable
print(temp)
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
List the Dimensions All variables in a netCDF file have an associated shape, specified by a list of dimensions. Let's list all the dimensions in this netCDF file. Note that the MT dimension is special (unlimited), which means it can be appended to.
for d in f.dimensions.items():
    print(d)
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Each variable has a dimensions and a shape attribute.
temp.dimensions
temp.shape
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Each dimension typically has a variable associated with it (called a coordinate variable). Coordinate variables are 1D variables that have the same name as dimensions. Coordinate variables and auxiliary coordinate variables (named by the coordinates attribute) locate values in time and space.
mt = f.variables['MT']
depth = f.variables['Depth']
x, y = f.variables['X'], f.variables['Y']
print(mt)
print(x)
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Accessing data from a netCDF variable object. netCDF variable objects behave much like numpy arrays: slicing a netCDF variable object returns a numpy array with the data. Boolean-array and integer-sequence indexing behaves differently for netCDF variables than for numpy arrays: only 1-d boolean arrays and integer sequ...
time = mt[:]     # Reads the netCDF variable MT, array of one element
print(time)

dpth = depth[:]  # examine depth array
print(dpth)

xx, yy = x[:], y[:]
print('shape of temp variable: %s' % repr(temp.shape))
tempslice = temp[0, dpth > 400, yy > yy.max()/2, xx > xx.max()/2]
print('shape of temp slice: %s' % repr(tempslice...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
What is the sea surface temperature and salinity at 50N, 140W? Finding the latitude and longitude indices of 50N, 140W: the X and Y dimensions don't look like longitudes and latitudes, so use the auxiliary coordinate variables named in the coordinates variable attribute, Latitude and Longitude.
lat, lon = f.variables['Latitude'], f.variables['Longitude']
print(lat)
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Aha! So we need to find array indices iy and ix such that Latitude[iy, ix] is close to 50.0 and Longitude[iy, ix] is close to -140.0 ...
# extract lat/lon values (in degrees) to numpy arrays
latvals = lat[:]; lonvals = lon[:]

# a function to find the index of the point closest
# (in squared distance) to a given lat/lon value.
def getclosest_ij(lats, lons, latpt, lonpt):
    # find squared distance of every point on grid
    dist_sq = (lats - latpt)**2 + (lo...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
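The function above is cut off; a complete version along the same lines (squared lat/lon distance is good enough for a nearest-neighbour lookup) might look like this, demonstrated on a small synthetic grid rather than the RTOFS file:

```python
import numpy as np

def getclosest_ij(lats, lons, latpt, lonpt):
    """Return indices (iy, ix) of the grid point closest (in squared
    lat/lon distance) to the target point."""
    dist_sq = (lats - latpt)**2 + (lons - lonpt)**2
    minindex_flat = dist_sq.argmin()              # index into the flattened array
    return np.unravel_index(minindex_flat, lats.shape)

# Synthetic 2-D coordinate arrays standing in for Latitude/Longitude
lats2, lons2 = np.meshgrid(np.arange(40., 61., 5.),
                           np.arange(-150., -129., 5.),
                           indexing='ij')
iy_min, ix_min = getclosest_ij(lats2, lons2, 50.0, -140.0)
print(iy_min, ix_min)                              # indices of nearest grid point
print(lats2[iy_min, ix_min], lons2[iy_min, ix_min])
```

The same function applied to the real Latitude/Longitude arrays yields the iy_min, ix_min used below.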
Now we have all the information we need to find our answer.

| Variable | Index  |
|----------|--------|
| MT       | 0      |
| Depth    | 0      |
| Y        | iy_min |
| X        | ix_min |

What is the sea surface temperature and salinity at the specified point?
sal = f.variables['salinity']

# Read values out of the netCDF file for temperature and salinity
print('%7.4f %s' % (temp[0, 0, iy_min, ix_min], temp.units))
print('%7.4f %s' % (sal[0, 0, iy_min, ix_min], sal.units))
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Remote data access via OPeNDAP. Remote data can be accessed seamlessly with the netcdf4-python API. Access happens via the DAP protocol and DAP servers, such as TDS. Many formats, like GRIB, are supported "under the hood". The following example showcases some nice netCDF features: we are seamlessly accessing...
import datetime
date = datetime.datetime.now()

# build URL for latest synoptic analysis time
URL = 'http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_%04i%02i%02i_%02i%02i.grib2/GC' %\
    (date.year, date.month, date.day, 6*(date.hour//6), 0)

# keep moving back 6 hours until a valid URL found...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
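The trick worth noting in the URL construction is the rounding to the most recent synoptic hour, `6 * (date.hour // 6)`. A stdlib-only demonstration with a fixed (made-up) timestamp, so the result is reproducible without any network access:

```python
import datetime

date = datetime.datetime(2014, 3, 7, 14, 22)    # hypothetical "now"
synoptic_hour = 6 * (date.hour // 6)            # 14:22 UTC -> the 12 UTC analysis
url = ('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/'
       'GFS_Global_0p5deg_%04i%02i%02i_%02i%02i.grib2/GC' %
       (date.year, date.month, date.day, synoptic_hour, 0))
print(synoptic_hour)   # 12
print(url)             # ...GFS_Global_0p5deg_20140307_1200.grib2/GC
```

GFS analyses are issued at 00, 06, 12, and 18 UTC, so integer division by 6 snaps any wall-clock time to the latest available cycle.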
Missing values. When data == var.missing_value somewhere, a masked array is returned. We illustrate with soil moisture data (only defined over land); white areas on the plot are masked values over water.
soilmvar = gfs.variables['Volumetric_Soil_Moisture_Content_depth_below_surface_layer']

# flip the data in latitude so North Hemisphere is up on the plot
soilm = soilmvar[0, 0, ::-1, :]
print('shape=%s, type=%s, missing_value=%s' % \
      (soilm.shape, type(soilm), soilmvar.missing_value))

import matplotlib.pyplot as plt...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Packed integer data. There is a similar feature for variables with scale_factor and add_offset attributes: short integer data will automatically be returned as float data, with the scale and offset applied. Dealing with dates and times: time variables usually measure relative to a fixed date using a certain calendar...
from netCDF4 import num2date, date2num, date2index

timedim = sfctmp.dimensions[0]  # time dim name
print('name of time dimension = %s' % timedim)
times = gfs.variables[timedim]  # time coord var
print('units = %s, values = %s' % (times.units, times[:]))

dates = num2date(times[:], times.units)
print([date.strftime('%Y-%m...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
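The unpacking the library applies automatically is simply `unpacked = packed * scale_factor + add_offset`. A stdlib-only illustration with made-up values (not from any real file):

```python
# Hypothetical packed short-integer data with CF scale/offset attributes
scale_factor = 0.01       # precision of the packed values
add_offset = 273.15       # shift applied after scaling
packed = [-500, 0, 1250]  # values as stored on disk (short ints)

# What netcdf4-python does for you on read:
unpacked = [p * scale_factor + add_offset for p in packed]
print(unpacked)   # roughly [268.15, 273.15, 285.65]
```

Packing this way lets a file store 2-byte integers while presenting physically meaningful floats to the user.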
Get index associated with a specified date, extract forecast data for that date.
from datetime import datetime, timedelta

date = datetime.now() + timedelta(days=3)
print(date)

ntime = date2index(date, times, select='nearest')
print('index = %s, date = %s' % (ntime, dates[ntime]))
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Get the temperature forecast for Boulder (near 40N, -105W) using the function getclosest_ij we created before...
lats, lons = gfs.variables['lat'][:], gfs.variables['lon'][:]

# lats, lons are 1-d. Make them 2-d using numpy.meshgrid.
lons, lats = np.meshgrid(lons, lats)
j, i = getclosest_ij(lats, lons, 40, -105)
fcst_temp = sfctmp[ntime, j, i]
print('Boulder forecast valid at %s UTC = %5.1f %s' % \
      (dates[ntime], fcst_temp, sfctmp.u...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Simple multi-file aggregation What if you have a bunch of netcdf files, each with data for a different year, and you want to access all the data as if it were in one file?
!ls -l data/prmsl*nc
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
MFDataset uses file globbing to patch together all the files into one big Dataset. You can also pass it a list of specific files. Limitations: it can only aggregate the data along the leftmost dimension of each variable; it only works with NETCDF3 or NETCDF4_CLASSIC formatted files; and it is kind of slow.
mf = netCDF4.MFDataset('data/prmsl*nc')
times = mf.variables['time']
dates = num2date(times[:], times.units)
print('starting date = %s' % dates[0])
print('ending date = %s' % dates[-1])
prmsl = mf.variables['prmsl']
print('times shape = %s' % times.shape)
print('prmsl dimensions = %s, prmsl shape = %s' %\
      (prmsl.dim...
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
Closing your netCDF file. It's good practice to close netCDF files, but it is not actually necessary when the Dataset is open for read access only.
f.close()
gfs.close()
web-services/reading_netCDF.ipynb
rsignell-usgs/python-training
cc0-1.0
First: define the orbit
r = [-6045, -3490, 2500] * u.km
v = [-3.457, 6.618, 2.533] * u.km / u.s

ss = State.from_vectors(Earth, r, v)

with plt.style.context('pybonacci'):
    plot(ss)
Going to Mars with Python in 5 minutes.ipynb
Juanlu001/Charla-PyConES15-poliastro
mit
Second: locate the planets
epoch = time.Time("2015-06-21 16:35")
r_, v_ = ephem.planet_ephem(ephem.EARTH, epoch)
r_
v_.to(u.km / u.s)
Going to Mars with Python in 5 minutes.ipynb
Juanlu001/Charla-PyConES15-poliastro
mit
Third: compute the trajectory
date_launch = time.Time('2011-11-26 15:02', scale='utc')
date_arrival = time.Time('2012-08-06 05:17', scale='utc')
tof = date_arrival - date_launch

r0, _ = ephem.planet_ephem(ephem.EARTH, date_launch)
r, _ = ephem.planet_ephem(ephem.MARS, date_arrival)

(v0, v), = iod.lambert(Sun.k, r0, r, tof)
v0
v
Going to Mars with Python in 5 minutes.ipynb
Juanlu001/Charla-PyConES15-poliastro
mit
...and it's pure Python! The trick: numba. Fourth: let's go to Mars!
def go_to_mars(offset=500., tof_=6000.):
    # Initial data
    N = 50
    date_launch = time.Time('2016-03-14 09:31', scale='utc') + ((offset - 500.) * u.day)
    date_arrival = time.Time('2016-10-19 16:00', scale='utc') + ((offset - 500.) * u.day)
    tof = tof_ * u.h

    # Calculate vector of times from launch and...
Going to Mars with Python in 5 minutes.ipynb
Juanlu001/Charla-PyConES15-poliastro
mit
Fifth: let's make it interactive!
%matplotlib inline
from ipywidgets import interactive
from IPython.display import display

w = interactive(go_to_mars, offset=(0., 1000.), tof_=(100., 12000.))
display(w)
Going to Mars with Python in 5 minutes.ipynb
Juanlu001/Charla-PyConES15-poliastro
mit
Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
data = np.loadtxt('yearssn.dat')
years = data[:, 0]
ssc = data[:, 1]

assert len(years) == 315
assert years.dtype == np.dtype(float)
assert len(ssc) == 315
assert ssc.dtype == np.dtype(float)
assignments/assignment04/MatplotlibEx01.ipynb
CalPolyPat/phys202-2015-work
mit
Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
f = plt.figure(figsize=(20, 2))
plt.plot(years, ssc, "b-")
plt.box(False)
plt.xticks(np.linspace(1700, 2015, 5, dtype=int))
plt.yticks(np.linspace(0, 150, 3, dtype=int))
plt.xlabel("Year")
plt.ylabel("# of Sunspots")
plt.title("# of Sunspots Per Year")

assert True  # leave for grading
assignments/assignment04/MatplotlibEx01.ipynb
CalPolyPat/phys202-2015-work
mit
Describe the choices you have made in building this visualization and how they make it effective. I removed the box and grid because nobody cares exactly how many sunspots there were; all that matters is the oscillatory behavior, which is plainly shown here. The ticks give a good measure of time, and it is easy to estimate t...
f = plt.figure(figsize=(15, 8))

plt.subplot(4, 1, 1)
plt.plot(years[:100], ssc[:100], "b-")
plt.box(False)
plt.xticks(np.linspace(1700, 1800, 5, dtype=int))
plt.yticks(np.linspace(0, 150, 3, dtype=int))
plt.xlabel("Year")
plt.ylabel("# of Sunspots")
plt.title("# of Sunspots Per Year from 1700-1800")

plt.subplot(4, 1...
assignments/assignment04/MatplotlibEx01.ipynb
CalPolyPat/phys202-2015-work
mit
Introduction Let's directly start with importing some data: the titanic dataset about the passengers of the Titanic and their survival:
df = pd.read_csv("data/titanic.csv")
df.head()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Starting from reading such a tabular dataset, Pandas provides the functionalities to answer questions about this data in a few lines of code. Let's start with a few examples as illustration: <div class="alert alert-warning"> - What is the age distribution of the passengers? </div>
df['Age'].hist()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-warning"> <ul> <li>How does the survival rate of the passengers differ between sexes?</li> </ul> </div>
df.groupby('Sex')[['Survived']].mean()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-warning"> <ul> <li>Or how does the survival rate differ between the different classes of the Titanic?</li> </ul> </div>
df.groupby('Pclass')['Survived'].mean().plot.bar()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-warning"> <ul> <li>Are young people (e.g. < 25 years) likely to survive?</li> </ul> </div>
df['Survived'].mean()

df25 = df[df['Age'] <= 25]
df25['Survived'].mean()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
All the needed functionality for the above examples will be explained throughout the course, but as a start: the data types to work with. The pandas data structures: DataFrame and Series To load the pandas package and start working with it, we first import the package. The community agreed alias for pandas is pd, whic...
import pandas as pd
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Let's start with getting some data. In practice, most of the time you will import the data from some data source (text file, excel, database, ..), and Pandas provides functions for many different formats. But to start here, let's create a small dataset about a few countries manually from a dictionary of lists:
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
        'population': [11.3, 64.3, 81.3, 16.9, 64.9],
        'area': [30510, 671308, 357050, 41526, 244820],
        'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
The object created here is a DataFrame:
type(countries)
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
A DataFrame is a 2-dimensional, tabular data structure composed of rows and columns. It is similar to a spreadsheet, a database (SQL) table or the data.frame in R. <img align="center" width=50% src="../img/pandas/01_table_dataframe1.svg"> A DataFrame can store data of different types (including characters, integers, ...
countries.dtypes
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Each column in a DataFrame is a Series When selecting a single column of a pandas DataFrame, the result is a pandas Series, a 1-dimensional data structure. To select the column, use the column label in between square brackets [].
countries['population']

s = countries['population']
type(s)
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
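Besides selecting it from a DataFrame, a Series can also be built directly; a small sketch (values made up for illustration) showing its labels and underlying array:

```python
import pandas as pd

s = pd.Series([11.3, 64.3, 81.3], index=['Belgium', 'France', 'Germany'])
print(s.index)     # the row labels
print(s.values)    # the underlying numpy array
print(s['France']) # 64.3 -- label-based lookup
```

A Series is essentially one labeled column: the index carries the labels, the values carry the data.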
Pandas objects have attributes and methods Pandas provides a lot of functionalities for the DataFrame and Series. The .dtypes shown above is an attribute of the DataFrame. Another example is the .columns attribute, returning the column names of the DataFrame:
countries.columns
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
In addition, there are also functions that can be called on a DataFrame or Series, i.e. methods. As methods are functions, do not forget to use parentheses (). A few examples that can help exploring the data:
countries.head()  # Top rows
countries.tail()  # Bottom rows
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
The describe method computes summary statistics for each column:
countries['population'].describe()
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
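describe also works on a full DataFrame, computing the summary per numeric column; a sketch on made-up numbers:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [10.0, 20.0, 30.0, 40.0]})
stats = df.describe()   # count, mean, std, min, quartiles, max per column
print(stats.loc['mean'])  # mean of each column: x 2.5, y 25.0
```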
Sorting your data by a specific column is another important first-check:
countries.sort_values(by='population')
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
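sort_values sorts in ascending order by default; pass ascending=False to reverse. A sketch on a small made-up frame:

```python
import pandas as pd

df = pd.DataFrame({'country': ['Belgium', 'France', 'Germany'],
                   'population': [11.3, 64.3, 81.3]})
# largest population first
print(df.sort_values(by='population', ascending=False))
```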
However, for this dataset, it does not say that much:
countries['population'].plot.barh() # or .plot(kind='barh')
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 1**: * You can play with the `kind` keyword or accessor of the `plot` method in the figure above: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin', 'box' Note: doing `df.plot(kind="bar", ...)` or `df.plot.bar(...)` is exactly equivalent. You will see b...
# pd.read_
# countries.to_
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
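The equivalence of `df.plot(kind=...)` and the `df.plot.<kind>` accessor can be checked on a tiny made-up frame (the non-interactive backend is set here only so the sketch runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, no display needed
import pandas as pd

df = pd.DataFrame({'value': [3, 1, 2]})
ax1 = df.plot(kind='bar')
ax2 = df.plot.bar()
# both calls return a matplotlib Axes with the same bar plot
print(type(ax1) is type(ax2))
```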
<div class="alert alert-info"> **Note: I/O interface** * All readers are `pd.read_...` * All writers are `DataFrame.to_...` </div> Application on a real dataset Throughout the pandas notebooks, many of the exercises will use the titanic dataset. This dataset has records of all the passengers of the Titanic, with charac...
# %load _solutions/pandas_01_data_structures1.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
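The reader/writer naming pattern can be sketched with a CSV round trip (the temporary file and toy frame below are made up for illustration, not part of the original notebook):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
path = os.path.join(tempfile.mkdtemp(), 'demo.csv')

df.to_csv(path, index=False)   # writer: DataFrame.to_...
df2 = pd.read_csv(path)        # reader: pd.read_...
print(df2.equals(df))          # True: the data survived the round trip
```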
<div class="alert alert-success"> **EXERCISE 3**: * Quick exploration: show the first 5 rows of the DataFrame. </div>
# %load _solutions/pandas_01_data_structures2.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 4**: * How many records (i.e. rows) does the titanic dataset have? <details><summary>Hints</summary> * The length of a DataFrame gives the number of rows (`len(..)`). Alternatively, you can check the "shape" (number of rows, number of columns) of the DataFrame using the `shape...
# %load _solutions/pandas_01_data_structures3.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> <b>EXERCISE 5</b>: * Select the 'Age' column (remember: we can use the [] indexing notation and the column label). </div>
# %load _solutions/pandas_01_data_structures4.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> <b>EXERCISE 6</b>: * Make a box plot of the Fare column. </div>
# %load _solutions/pandas_01_data_structures5.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
<div class="alert alert-success"> **EXERCISE 7**: * Sort the rows of the DataFrame by 'Age' column, with the oldest passenger at the top. Check the help of the `sort_values` function and find out how to sort from the largest values to the lowest values </div>
# %load _solutions/pandas_01_data_structures6.py
notebooks/pandas_01_data_structures.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Simulate a gene tree with 8 tips and MRCA of 1M generations
import toytree

TREE = toytree.rtree.bdtree(ntips=8, b=0.8, d=0.2, seed=123)
TREE = TREE.mod.node_scale_root_height(1e6)
TREE.draw(ts='o', layout='d', scalebar=True);
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Simulate sequences on a single gene tree and write to NEXUS

When Ne is larger, the gene tree is more likely to deviate from the species tree topology and branch lengths. By setting the recombination rate to 0, there will be only one true underlying genealogy for the gene tree. We set nsamples=2 because we want to simulate dip...
# init simulator
model = ipcoal.Model(TREE, Ne=2e4, nsamples=2, recomb=0)

# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=1, nsites=20000)

# write results to database file
model.write_concat_to_nexus(name="mbtest-1", outdir='/tmp', diploid=True)

# the simulated genealogy of haploid alleles ge...
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
View an example locus This shows the 2 haploid samples simulated for each tip in the species tree.
model.draw_seqview(idx=0, start=0, end=50);
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
(1) Infer a tree under a relaxed molecular clock model
# init the mb object
mb = ipa.mrbayes(
    data="/tmp/mbtest-1.nex",
    name="itest-1",
    workdir="/tmp",
    clock_model=2,
    constraints=TREE,
    ngen=int(1e6),
    nruns=2,
)

# modify a parameter
mb.params.clockratepr = "normal(0.01,0.005)"
mb.params.samplefreq = 5000

# summary of priors/params
print(mb.para...
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
(2) Concatenated sequences from a species tree Here we use concatenated sequence data from 100 loci where each represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of estimated edge lengths will decrease since we are not adequ...
# init simulator
model = ipcoal.Model(TREE, Ne=1e5, nsamples=2, recomb=0)

# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=100, nsites=200)

# write results to database file
model.write_concat_to_nexus(name="mbtest-2", outdir='/tmp', diploid=True)

# the simulated genealogies of haploid alleles ...
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
To see the NEXUS file (data, parameters, priors):
mb.print_nexus_string()
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
(3) Tree inference (not fixed topology) and plotting support values Here we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space. Take note that the support ...
# init the mb object
mb = ipa.mrbayes(
    data="/tmp/mbtest-2.nex",
    name="itest-3",
    workdir="/tmp",
    clock_model=2,
    ngen=int(2e6),
    nruns=2,
)

# summary of priors/params
print(mb.params)

# start run
mb.run(force=True)
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
The tree topology was correctly inferred
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-3.nex.con.tre", 10)

# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)

# draw inferred tree
c, a, m = mbtre.draw(
    layout='d',
    scalebar=True,
    node_sizes=18,
    node_labels="prob{percent}",
);
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
The branch lengths are not very accurate in this case:
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-3.nex.con.tre", 10)

# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)

# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);

# draw true tree in orange on the same axes
TREE.draw(
    axes=a,
    ts=...
testdocs/analysis/cookbook-mb-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
10. Counting lines

Count the number of lines. Use the wc command to verify.
with open("hightemp.txt") as f:
    count = len(f.readlines())
print(count)
chapter2/UNIX command.ipynb
KUrushi/knocks
mit
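len(f.readlines()) loads the whole file into memory; a generator expression counts lines without doing so. A sketch on an in-memory sample (swap in open("hightemp.txt") for the real file):

```python
import io

# stand-in for the real file, values made up for illustration
sample = "2013-08-12\tTokyo\n2013-08-13\tOsaka\n2013-08-14\tKyoto\n"

# iterating a file object yields one line at a time, so no list is built
count = sum(1 for _ in io.StringIO(sample))
print(count)  # 3
```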
wc counts and displays the number of bytes, words, and lines in a file; words are delimited by whitespace.

Output: lines words bytes

wc [-clw] [--bytes] [--chars] [--lines] [--words] [file]

Options:
-c, --bytes, --chars: count and display bytes only
-w, --words: count and display words only
-l, --lines: count and display lines only
%%bash
wc -l hightemp.txt
chapter2/UNIX command.ipynb
KUrushi/knocks
mit
11. Replace tabs with spaces

Replace each tab character with a single space. Use the sed, tr, or expand command to verify.
def replace_tab2space(file):
    with open(file) as f:
        for i in f.readlines():
            print(i.strip('\n').replace('\t', ' '))

replace_tab2space('hightemp.txt')
chapter2/UNIX command.ipynb
KUrushi/knocks
mit
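Python's str.translate is the closest analogue of the tr command mentioned in the exercise; a sketch on one sample line (the data is made up):

```python
# map one character to another, like `tr '\t' ' '`
sample = "Tokyo\t40.9\t2013-08-12"
print(sample.translate(str.maketrans('\t', ' ')))  # Tokyo 40.9 2013-08-12
```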