markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Passive high-pass filtersIn this case the high-pass filter is realized using a resistor and an inductor connected in series, where the output is taken as the voltage across the inductor, $V_{out}$. Assuming that the input signal, $V_{in}$, is a sinusoidal voltage source, the analysis can be moved to the frequency domain using the impedance model. In this way we avoid the need for differential calculus, and the whole calculation reduces to a simple algebraic problem. The transfer function is obtained as the ratio of the output and input voltages. The output voltage, i.e. the voltage across the inductor, $V_{out}$, is defined through a voltage divider as follows:$$\begin{align} V_{out} &= \frac{Z_l}{Z_l + Z_r} \cdot V_{in} \\ H(\omega) = \frac{V_{out}}{V_{in}} &= \frac{Z_l}{Z_l + Z_r} = \frac{j\omega L}{j\omega L + R} = \frac{1}{1+R/(j\omega L)}\end{align}$$Since $H$ is a function of frequency, we have two edge cases:* for extremely low frequencies, when $\omega \sim 0$, it follows that $H(\omega) \rightarrow 0$;* for extremely high frequencies, when $\omega \rightarrow \infty$, it follows that $H(\omega) \rightarrow 1$.We additionally define the already mentioned *cut-off* frequency, $f_c$, at which the magnitude of the frequency response, $H$, drops by a factor of $\sqrt 2$, i.e. by $3$ dB:$$\begin{align} f_c &= \frac{R}{2 \pi L}\end{align}$$Link for interactive work with the passive high-pass filter: http://sim.okawa-denshi.jp/en/LRtool.php Task 1The first task is to implement a function `cutoff_frequency` that takes the resistance, `R`, and the inductance, `L`, as inputs and returns the *cutoff* frequency of the high-pass filter. | def cutoff_frequency(R, L):
"""Cutoff frequency of a high-pass RL filter.
Args:
R (number) : resistance of the resistor
L (number) : inductance of the coil
Returns:
number
"""
#######################################################
## TO-DO: implement the cutoff frequency calculation ##
# Then comment out the following line.
raise NotImplementedError('Implement the cutoff frequency calculation.')
#######################################################
# define the cutoff frequency
fc = ...
return fc | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
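For reference, the formula above can be checked directly (a hedged sketch, independent of the exercise skeleton; `cutoff_frequency_ref` is a name introduced here, not part of the lab code):

```python
import math

def cutoff_frequency_ref(R, L):
    """Reference implementation of f_c = R / (2 * pi * L) for an RL high-pass filter."""
    return R / (2 * math.pi * L)

# R = 200 ohm and L = 100 mH give roughly 318.31 Hz
fc = cutoff_frequency_ref(200, 100e-3)
print(round(fc, 2))
```

Note that increasing `R` (or decreasing `L`) raises the cutoff frequency, consistent with the expression for $f_c$.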
What is the *cutoff* frequency for a resistance of $200 \Omega$ and a coil inductance of $100 mH$? | R = ... # resistance
L = ... # inductance
fc = cutoff_frequency(...) # cutoff frequency
print(f'R = {R/1000} kΩ')
print(f'L = {L*1000} mH')
print(f'the cutoff frequency is {fc:.2f} Hz, '
'the expected value is 318.31 Hz') | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Task 2The second task is to implement a function `rl_highpass` that takes the resistance, `R`, the inductance, `L`, and the frequency, `f`, as inputs and returns the transfer function of the passive high-pass RL filter. | def rl_highpass(R, L, f):
"""Transfer function of an RL high-pass filter.
Args:
R (number) : resistance of the resistor
L (number) : inductance
f (number or numpy.ndarray) : frequency or frequencies
Returns:
float or numpy.ndarray
"""
######################################################
## TO-DO: implement the transfer function ##
# Then comment out the following line.
raise NotImplementedError('Implement the transfer function.')
######################################################
# define the transfer function, keeping in mind that `f` can be either
# a number (int, float) or a 1-D array (`numpy.ndarray`)
H = ...
return H | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
What is the value of the transfer function at the *cutoff* frequency for a resistance of $200 \Omega$ and a coil inductance of $100 mH$? | R = ... # resistance
L = ... # inductance
Hc = rl_highpass(...) # transfer function at the cutoff frequency
print(f'R = {R:.2f} Ω')
print(f'L = {L * 1000:.2f} mH')
print(f'the gain at the cutoff frequency is {abs(Hc):.4f}, '
'the expected value is 1/√2\n\n'
'check that the obtained result is correct')
# cell for checking the result
| _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Convert the value of the transfer function at the *cutoff* frequency to decibels and verify the claim that the magnitude of the frequency response, $H$, drops by $3$ dB at the *cutoff* frequency. | Hc_dB = ... # conversion of the transfer function at the cutoff frequency to the dB scale
print(Hc_dB) | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
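As a quick sanity check of the 3 dB claim (a sketch assuming the usual amplitude definition $H_{dB} = 20 \log_{10} |H|$; the names below are local to this snippet):

```python
import numpy as np

Hc = 1 / np.sqrt(2)        # gain magnitude at the cutoff frequency
Hc_dB = 20 * np.log10(Hc)  # amplitude ratio converted to decibels
print(round(Hc_dB, 2))     # approximately -3.01 dB
```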
For a range of $10000$ frequency values up to $10 kHz$, and for a resistance of $200 \Omega$ and a coil inductance of $100 mH$, compute the values of the transfer function. | f = np.linspace(..., num=10000)
H = rl_highpass(...) # transfer function | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Since the values of the transfer function are complex quantities, think about what needs to be done with them before we plot them. | Hm = ... # conversion to absolute values | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Visualize the dependence of the transfer function on frequency using `matplotlib` and the function `matplotlib.pyplot.plot`. | plt.plot(...)
plt.xlabel('f [Hz]')
plt.ylabel('H(f)')
plt.show() | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Now visualize the results using the already implemented function `plot_frequency_response`.Note: to check how to use this function, use the following command:```pythonhelp(plot_frequency_response)```or simply```pythonplot_frequency_response?``` | # check how the function is used
fig, ax = plot_frequency_response(...) # plot the obtained results | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Current-voltage characteristic of the RL high-pass filter | def time_constant(L, R):
"""Time constant of the RL high-pass filter.
Args:
R (number) : resistance of the resistor
L (number) : inductance
Returns:
float or numpy.ndarray
"""
###################################################################
## TO-DO: implement the function that computes the time constant ##
# Then comment out the following line.
raise NotImplementedError('Implement the time constant.')
###################################################################
# define the time constant
tau = ...
return tau
tau = time_constant(L, R) # time constant | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
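For reference (assuming the standard first-order RL time constant tau = L/R; the variable names below are local to this sketch), with R = 200 Ω and L = 100 mH:

```python
R_ohm = 200       # resistance in ohms
L_henry = 100e-3  # inductance in henries
tau_ref = L_henry / R_ohm
print(tau_ref)    # 0.0005 s, i.e. 0.5 ms
```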
Which physical quantity is associated with the time constant? Explain. | def rl_current(t, t_switch, V, R, L):
"""Current through the RL high-pass filter.
Args:
t (number or numpy.ndarray) : instant(s) at which the current
is evaluated
t_switch (number) : instant at which the current changes sign
V (number) : input voltage
R (number) : resistance of the resistor
L (number) : inductance
Returns:
float or numpy.ndarray
"""
I0 = V / R
i = np.where(t < t_switch,
I0 * (1 - np.exp((-R / L) * t)),
I0 * np.exp((-R / L) * (t - t_switch)))
return i
V = 5 # input voltage
tau = time_constant(L, R) # time constant of the filter
t_switch = tau * 4.4 # time at which the current changes sign
T = 2 * t_switch # period
t = np.linspace(0, T) # time instants at which the current is evaluated
i_rl = rl_current(t, t_switch, V, R, L) # RL current
i = V / R * np.sin(2 * np.pi * t / T) # sinusoidal current
# visualization of the RL current
plt.figure()
plt.plot(t, i_rl, label='current')
plt.plot(t, i, label='on-off cycle')
plt.plot([t.min(), t_switch, t.max()], [0, 0, 0], 'rx')
plt.hlines(0, t.min(), t.max(), 'k')
plt.vlines(t_switch, i.min(), i.max(), 'k')
plt.xlabel('t [s]')
plt.ylabel('i(t) [A]')
plt.legend()
plt.grid()
plt.show() | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
Band-pass filtersThe following code uses several different types of band-pass filters (Hamming, Kaiser, Remez) and compares them with the ideal transfer function. | def bandpass_firwin(ntaps, lowcut, highcut, fs, window='hamming'):
taps = ss.firwin(ntaps, [lowcut, highcut], nyq=0.5 * fs, pass_zero=False,
window=window, scale=False)
return taps
def bandpass_kaiser(ntaps, lowcut, highcut, fs, width):
atten = ss.kaiser_atten(ntaps, width / (0.5 * fs))
beta = ss.kaiser_beta(atten)
taps = ss.firwin(ntaps, [lowcut, highcut], nyq=0.5 * fs, pass_zero=False,
window=('kaiser', beta), scale=False)
return taps
def bandpass_remez(ntaps, lowcut, highcut, fs, width):
delta = 0.5 * width
edges = [0,
lowcut - delta,
lowcut + delta,
highcut - delta,
highcut + delta,
0.5 * fs,
]
taps = ss.remez(ntaps, edges, [0, 1, 0], Hz=fs)
return taps
fs = 63.0
lowcut = 0.7
highcut = 4.0
ntaps = 128
taps_hamming = bandpass_firwin(ntaps, lowcut, highcut, fs)
taps_kaiser16 = bandpass_kaiser(ntaps, lowcut, highcut, fs, width=1.6)
taps_kaiser10 = bandpass_kaiser(ntaps, lowcut, highcut, fs, width=1.0)
taps_remez = bandpass_remez(ntaps, lowcut, highcut, fs=fs, width=1.0)
plt.figure()
w, h = ss.freqz(taps_hamming, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Hamming window')
w, h = ss.freqz(taps_kaiser16, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Kaiser, width = 1.6')
w, h = ss.freqz(taps_kaiser10, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Kaiser, width = 1.0')
w, h = ss.freqz(taps_remez, 1, worN=2000)
plt.plot(fs * 0.5 / np.pi * w, abs(h), label='Remez, width = 1.0')
h = np.where((fs * 0.5 / np.pi * w < lowcut) | (fs * 0.5 / np.pi * w > highcut), 0, 1)
plt.plot(fs * 0.5 / np.pi * w, h, 'k-', label='ideal response')
plt.fill_between(fs * 0.5 / np.pi * w, h, color='gray', alpha=0.1)
plt.xlim(0, 8.0)
plt.grid()
plt.legend(loc='upper right')
plt.xlabel('f (Hz)')
plt.ylabel('H(f)')
plt.show() | _____no_output_____ | MIT | emc_512/lab/Python/03-lab-ex.ipynb | antelk/teaching |
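A note on the axis conversion used above: `ss.freqz` returns digital frequencies in radians per sample, and multiplying by `fs * 0.5 / np.pi` (i.e. by `fs / (2 * pi)`) converts them to Hz. A minimal sketch of the conversion (no scipy needed, names local to this snippet):

```python
import numpy as np

fs = 63.0
# digital frequencies in rad/sample over [0, pi), as freqz would return
w = np.linspace(0, np.pi, 2000, endpoint=False)
f_hz = fs * 0.5 / np.pi * w  # conversion to Hz; values span [0, fs/2)
print(f_hz[0], f_hz[-1])
```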
a2d = [[3,2,1],[6,4,8],[7,4,2]]
n=3
print(a2d)
total = 0 #1
print("Level 1")
for ren in range(n):
sumaRenglon = 0
print("Level 2")
for col in range(n):
sumaRenglon += a2d[ren][col]
total += a2d[ren][col]
print("Level 3")
print(total)
| _____no_output_____ | MIT | 19Octubre.ipynb | Erik-Silver/daa_2021_1 | |
**EXPERIMENT 1** Aim: Exploring Variables in a DatasetObjectives:Learn how to open and examine a dataset.Practice classifying variables by their type: quantitative or categorical.Learn how to handle categorical variables whose values are numerically coded.Link to experiment: https://upscfever.com/upsc-fever/en/data/en-exercises-1.html | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
depression = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/depression.csv')
friends = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/friends.csv')
actor_age = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/actor_age.csv')
grad_data = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/grad_data.csv')
ratings = pd.read_csv('https://raw.githubusercontent.com/kopalsharma19/J045-ML-Sem-V/master/Lab%20Experiments/Experiment-1%20060720/ratings.csv')
| _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
**Question 1**What are the categorical variables in depression dataset? | depression.head(10), depression.dtypes | _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
The categorical variables in the depression dataset are: 1. Hospt, 2. Treat, 3. Outcome, 4. Gender. **QUESTION 2**What are the quantitative variables in the depression dataset? | depression.head(10), depression.dtypes
| _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
The quantitative variables in the depression dataset are: 1. Time, 2. AcuteT, 3. Age. **QUESTION 3**Describe the distribution of the variable "friends" in the dataset - Survey that asked 1,200 U.S. college students about their body perception | print("Datatype\n", friends.dtypes)
print("\n")
print("Shape of Dataset - ", friends.shape)
friends.Friends.value_counts()
friends.Friends.value_counts().plot(kind='pie')
| _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
**QUESTION 4**Describe the distribution of the ages of the Best Actor Oscar winners. Be sure to address shape, center, spread and outliers (Dataset - Best Actor Oscar winners (1970-2013)) | actor_age.describe()
np.median(actor_age['Age'])
actor_age.boxplot(column='Age')
actor_age.shape
actor_age.hist(column='Age')
| _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
Shape: skewed to the right (the dataset has 44 rows and 1 column). Center (median): 43.5. Spread: the standard deviation is 9.749153. Outliers: 76 is a high outlier; there are no lower outliers. **QUESTION 5**Getting information from the output: a. How many observations are in this data set? b. What is the mean age of the actors who won the Oscar? c. What is the five-number summary of the distribution? (Dataset - Best Actor Oscar winners (1970-2013)) | actor_age.describe()
a) Number of observations (count): 44. b) Mean age of the actors (mean): 44.977273. c) Five-number summary of the distribution: min 29; first quartile (25%) 38; second quartile (median, 50%) 43.5; third quartile (75%) 50; max 76. **QUESTION 6**Get information from the five-number summary:a. Half of the actors won the Oscar before what age? b. What is the range covered by all the actors' ages? c. What is the range covered by the middle 50% of the ages? (Dataset - Best Actor Oscar winners (1970-2013)) | actor_age.describe()
a) Half of the actors won the Oscar before the age of 43.5. b) The range of ages covered by all actors is 29-76. c) The range covered by the middle 50% of the ages is 38-50.25. **QUESTION 7**What are the standard deviations of the three rating distributions? Was your intuition correct? (Dataset - 27 students in the class were asked to rate the instructor on a number scale of 1 to 9) | ratings.head(10)
ratings.describe() | _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
Standard deviation for Class. I: 1.568929; for Class. II: 4.0; for Class. III: 2.631174. No, my intuition wasn't correct. **QUESTION 8**Assume that the average rating in each of the three classes is 5 (which should be visually reasonably clear from the histograms), and recall the interpretation of the SD as a "typical" or "average" distance between the data points and their mean. Judging from the table and the histograms, which class would have the largest standard deviation, and which one would have the smallest standard deviation? Explain your reasoning (Dataset - 27 students in the class were asked to rate the instructor on a number scale of 1 to 9) | ratings.head()
ratings.describe()
ratings.hist(column='Class.I')
ratings.hist(column='Class.II')
ratings.hist(column='Class.III')
| _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
Looking at the tables and histograms, Class 1 has the smallest standard deviation, since most of its values lie near the center. Class 2 has the largest standard deviation, since most of its values lie at the two ends of the histogram and very few near the center. | _____no_output_____ | Apache-2.0 | Lab Experiments/Experiment-1 060720/ML_Experiment_1_060720.ipynb | rohitsmittal7/J045-ML-Sem-V |
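The reasoning above can be checked on small synthetic ratings (hypothetical values chosen for illustration, not the actual CSV data): three samples share a mean of 5, but the more the values sit at the extremes, the larger the standard deviation.

```python
import numpy as np

centered = np.array([4, 5, 5, 5, 6])  # values packed near the mean
uniform = np.array([1, 3, 5, 7, 9])   # values spread evenly
extreme = np.array([1, 1, 5, 9, 9])   # values piled at both ends

for name, x in [("centered", centered), ("uniform", uniform), ("extreme", extreme)]:
    # all three means are 5.0; the sample stds are 0.707, 3.162 and 4.0
    print(name, x.mean(), round(x.std(ddof=1), 3))
```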
Example Map Plotting At the start of a Jupyter notebook you need to import all modules that you will use | import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import griddata
import cartopy
import cartopy.crs as ccrs # For plotting maps
import cartopy.feature as cfeature # For plotting maps
from cartopy.util import add_cyclic_point # For plotting maps
import datetime | _____no_output_____ | Apache-2.0 | Python/maps/plot_map.ipynb | Duseong/CAM-chem |
Define the directories and file of interest for your results. This can be shortened to less lines as well. | #result_dir = "/home/buchholz/Documents/code_database/untracked/my-notebook/Janyl_plotting/"
result_dir = "../../data/"
file = "CAM_chem_merra2_FCSD_1deg_QFED_monthly_2019.nc"
#the netcdf file is now held in an xarray dataset named 'nc' and can be referenced later in the notebook
nc_load = xr.open_dataset(result_dir+file)
#to see what the netCDF file contains, just call the variable you read it into
nc_load | _____no_output_____ | Apache-2.0 | Python/maps/plot_map.ipynb | Duseong/CAM-chem |
Extract the variable of choice at the time and level of choice | #extract grid variables
lat = nc_load['lat']
lon = nc_load['lon']
#extract variable
var_sel = nc_load['PM25']
print(var_sel)
#print(var_sel[0][0][0][0])
#select the surface level at a specific time and convert to ppbv from vmr
#var_srf = var_sel.isel(time=0, lev=55)
#select the surface level for an average over three times and convert to ppbv from vmr
var_srf = var_sel.isel(time=[2,3,4], lev=55) # MAM chosen
var_srf = var_srf.mean('time')
var_srf = var_srf*1e09 # 10-9 to ppb
print(var_srf.shape)
# Add cyclic point to avoid white line over Africa
var_srf_cyc, lon_cyc = add_cyclic_point(var_srf, coord=lon) | _____no_output_____ | Apache-2.0 | Python/maps/plot_map.ipynb | Duseong/CAM-chem |
Plot the value over a specific region | plt.figure(figsize=(20,8))
#Define projection
ax = plt.axes(projection=ccrs.PlateCarree())
#define contour levels
clev = np.arange(0, 100, 1)
#plot the data
plt.contourf(lon_cyc,lat,var_srf_cyc,clev,cmap='Spectral_r',extend='both')
# add coastlines
#ax.coastlines()
ax.add_feature(cfeature.COASTLINE)
#add lat lon grids
ax.gridlines(draw_labels=True, color='grey', alpha=0.5, linestyle='--')
#longitude limits in degrees
ax.set_xlim(20,120)
#latitude limits in degrees
ax.set_ylim(5,60)
# Title
plt.title("CAM-chem 2019 O$_{3}$")
#axes
# y-axis
ax.text(-0.09, 0.55, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes)
# x-axis
ax.text(0.5, -0.10, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes)
# legend
ax.text(1.18, 0.5, 'O$_{3}$ (ppb)', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes)
plt.colorbar()
plt.show() | _____no_output_____ | Apache-2.0 | Python/maps/plot_map.ipynb | Duseong/CAM-chem |
Add location markers | ## Now let's look at the surface plot again, but this time add markers for observations at several points.
#first we need to define our observational data into an array
#this can also be imported from text files using various routines
# Kyzylorda, Urzhar, Almaty, Balkhash
obs_lat = np.array([44.8488,47.0870,43.2220,46.2161])
obs_lon = np.array([65.4823,81.6315,76.8512,74.3775])
obs_names = ["Kyzylorda", "Urzhar", "Almaty", "Balkhash"]
num_obs = obs_lat.shape[0]
plt.figure(figsize=(20,8))
#Define projection
ax = plt.axes(projection=ccrs.PlateCarree())
#define contour levels
clev = np.arange(0, 100, 1)
#plot the data
plt.contourf(lon_cyc,lat,var_srf_cyc,clev,cmap='Spectral_r')
# add coastlines
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS)
#add lat lon grids
ax.gridlines(draw_labels=True, color='grey', alpha=0.5, linestyle='--')
#longitude limits in degrees
ax.set_xlim(20,120)
#latitude limits in degrees
ax.set_ylim(5,60)
# Title
plt.title("CAM-chem 2019 O$_{3}$")
#axes
# y-axis
ax.text(-0.09, 0.55, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes)
# x-axis
ax.text(0.5, -0.10, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes)
# legend
ax.text(1.18, 0.5, 'O$_{3}$ (ppb)', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes)
#convert your observation lat/lon to Lambert-Conformal grid points
#xpt,ypt = m(obs_lon,obs_lat)
#to specify the color of each point it is easiest plot individual points in a loop
for i in range(num_obs):
plt.plot(obs_lon[i], obs_lat[i], linestyle='none', marker="o", markersize=8, alpha=0.8, c="black", markeredgecolor="black", markeredgewidth=1, transform=ccrs.PlateCarree())
plt.text(obs_lon[i] - 0.8, obs_lat[i] - 0.5, obs_names[i], fontsize=20, horizontalalignment='right', transform=ccrs.PlateCarree())
plt.colorbar()
plt.show()
cartopy.config['data_dir'] | _____no_output_____ | Apache-2.0 | Python/maps/plot_map.ipynb | Duseong/CAM-chem |
DescriptionThis notebook demonstrates the following on a group of seven LIFX Tile chains, each with 5 tiles laid out horizontally as followsT1 [0] [1] [2] [3] [4]T2 [0] [1] [2] [3] [4]T3 [0] [1] [2] [3] [4]T4 [0] [1] [2] [3] [4]T5 [0] [1] [2] [3] [4]T6 [0] [1] [2] [3] [4]T7 [0] [1] [2] [3] [4]Care should be taken to ensure that the LIFX Tiles are all facing up so that the 0,0 position is in the expected place. The program will perform the following: take a jpg or png located in the same folder as the notebook and create an image to display across all 7 tile chains, i.e. 35 tiles. The image will be reduced from its original size to a 40x56 matrix, so resolution will not be great. You've been warned. |
!pip install thread
#Main Program for Convert Single Image to Tiles
# Full running function with all dependencies
#imports RGB to HSBK conversion function from LIFX LAN library
import _thread as thread
from lifxlan import LifxLAN
from lifxlan.utils import RGBtoHSBK
from pylifxtiles import tiles
from pylifxtiles import actions
from matplotlib import image
from PIL import Image
# modify this variable to the name of the specific LIFX Tilechain as shown in the LIFX app
source_image = './images/meghan.jpg'
def main():
lan = LifxLAN()
tilechain_lights = lan.get_tilechain_lights()
print(len(tilechain_lights))
if len(tilechain_lights) != 0:
for tile in tilechain_lights:
if tile.get_label() == 'T1':
print(tile.get_label())
T1 = tile
if tile.get_label() =='T2':
print(tile.get_label())
T2 = tile
if tile.get_label() == 'T3':
print(tile.get_label())
T3 = tile
if tile.get_label() == 'T4':
print(tile.get_label())
T4 = tile
if tile.get_label() == 'T5':
print(tile.get_label())
T5 = tile
if tile.get_label() == 'T6':
print(tile.get_label())
T6 = tile
if tile.get_label() == 'T7':
print(tile.get_label())
T7 = tile
tc_list = [ T1, T2, T3, T4, T5, T6, T7]
try:
thread.start_new_thread(display_image,(source_image,(40,56), tc_list))
except KeyboardInterrupt:
print("Done.")
#combined function
# resize image and force a new shape and save to disk
def display_image(image_to_display,image_size, tilechain_list):
# load the image
my_image = Image.open(image_to_display)
# report the size of the image
#print(my_image.size)
# resize image and ignore original aspect ratio
img_resized = my_image.resize(image_size)
#changing the file extension from jpg to png changes output brightness. You might need to play with this.
img_resized.save('./images/resized_image.jpg')
data = image.imread('./images/resized_image.jpg')
target_tcs = []
for row in data:
temp_row = []
for pixel in row:
temp_row.append(RGBtoHSBK(pixel))
target_tcs.append(temp_row)
#print ("length of target_tcs is " + str(len(target_tcs)))
tcsplit = tiles.split_tilechains(target_tcs)
#print ("length of tcsplit is " + str(len(tcsplit)))
#print ("length of tilechain_list is " + str(len(tilechain_list)))
for tile in range(len(tilechain_list)):
print (tile)
tilechain_list[tile].set_tilechain_colors(tiles.split_combined_matrix(tcsplit[tile]),rapid=True)
if __name__ == "__main__":
main() | 23
T4
T5
T3
T6
T1
T7
T2
0
1
2
3
4
5
6
| Apache-2.0 | examples/_working_Convert Single Image to Seven Tilechain-MULTI-THREADS.ipynb | netmanchris/pylifxtiles |
test write to three tiles | #Main Program for Convert Single Image to Tiles
# Full running function with all dependencies
#imports RGB to HSBK conversion function from LIFX LAN library
from lifxlan import LifxLAN
from lifxlan.utils import RGBtoHSBK
from pylifxtiles import tiles
from pylifxtiles import actions
from matplotlib import image
from PIL import Image
# modify this variable to point at the image file you want to display
source_image = './images/Youtubelogo.jpg'
def main():
lan = LifxLAN()
tilechain_lights = lan.get_tilechain_lights()
print(len(tilechain_lights))
if len(tilechain_lights) != 0:
for tile in tilechain_lights:
if tile.get_label() == 'T1':
print(tile.get_label())
T1 = tile
if tile.get_label() =='T2':
print(tile.get_label())
T2 = tile
if tile.get_label() == 'T3':
print(tile.get_label())
T3 = tile
if tile.get_label() == 'T4':
print(tile.get_label())
T4 = tile
tc_list = [T2, T3, T4]
try:
display_image(source_image,(40,24), tc_list)
except KeyboardInterrupt:
print("Done.")
#combined function
# resize image and force a new shape and save to disk
def display_image(image_to_display,image_size, tilechain_list):
# load the image
my_image = Image.open(image_to_display)
# report the size of the image
#print(my_image.size)
# resize image and ignore original aspect ratio
img_resized = my_image.resize(image_size)
#changing the file extension from jpg to png changes output brightness. You might need to play with this.
img_resized.save('./images/resized_image.jpg')
data = image.imread('./images/resized_image.jpg')
target_tcs = []
for row in data:
temp_row = []
for pixel in row:
temp_row.append(RGBtoHSBK(pixel))
target_tcs.append(temp_row)
print ("length of target_tcs is " + str(len(target_tcs)))
tcsplit = tiles.split_tilechains(target_tcs)
print ("length of tcsplit is " + str(len(tcsplit)))
print ("length of tilechain_list is " + str(len(tilechain_list)))
for tile in range(len(tilechain_list)):
print (tile)
tilechain_list[tile].set_tilechain_colors(tiles.split_combined_matrix(tcsplit[tile]),rapid=True)
if __name__ == "__main__":
main()
import threading | _____no_output_____ | Apache-2.0 | examples/_working_Convert Single Image to Seven Tilechain-MULTI-THREADS.ipynb | netmanchris/pylifxtiles |
Import Risk INFORM index | path = "C:\\batch8_worldbank\\datasets\\tempetes\\INFORM_Risk_2021.xlsx"
xl = pd.ExcelFile(path)
xl.sheet_names
inform_df = xl.parse(xl.sheet_names[2])
inform_df.columns = inform_df.iloc[0]
inform_df = inform_df[2:]
inform_df.head() | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Import emdat | path = "C:\\batch8_worldbank\\datasets\\tempetes\\wb_disasters_bdd.xlsx"
disasters_df = pd.read_excel(path)
disasters_df.head()
disasters_df['ISO']
max(disasters_df['Year']) | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Filter on storms | storms_df = disasters_df[disasters_df["Disaster Type"]=="Storm"] | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Number of storms, nb people affected and total damages by country by decade | nb_storms_by_year_by_country = storms_df.groupby(["Start Year", "ISO"]).aggregate({"Disaster Type":"count", "No Affected": "sum", "Total Damages ('000 US$)":"sum"})
nb_storms_by_year_by_country = nb_storms_by_year_by_country.reset_index()
nb_storms_by_year_by_country = nb_storms_by_year_by_country.rename(columns={"Start Year": "year", "Disaster Type": "storms_count", "No Affected": "total_nb_affected", "Total Damages ('000 US$)": "total_damages"})
nb_storms_by_year_by_country["decade"] = nb_storms_by_year_by_country["year"].apply(lambda row: (row//10)*10)
nb_storms_by_decade_by_country = nb_storms_by_year_by_country.groupby(["decade", "ISO"]).aggregate({"storms_count":"sum", "total_nb_affected":"sum", "total_damages":"sum"})
nb_storms_by_decade_by_country = nb_storms_by_decade_by_country.reset_index()
nb_storms_by_decade_by_country.head()
max(nb_storms_by_decade_by_country["decade"]) | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Keep observations on decades 2000, 2010 and 2020 to increase nb of datapoints | nb_storms_by_decade_by_country_2020 = nb_storms_by_decade_by_country[nb_storms_by_decade_by_country["decade"]>=2000]
nb_storms_by_decade_by_country_2020.head()
nb_storms_by_decade_by_country_2020.shape
nb_storms_by_decade_by_country_2020.columns
inform_df.columns
# Merge on ISO
nb_storms_by_decade_by_country_2020_with_inform = pd.merge(nb_storms_by_decade_by_country_2020, inform_df, how="left", left_on="ISO", right_on="ISO3")
nb_storms_by_decade_by_country_2020_with_inform.head()
nb_storms_by_decade_by_country_2020_with_inform.shape
nb_storms_by_decade_by_country_2020_with_inform_filt_col = nb_storms_by_decade_by_country_2020_with_inform[["decade", "ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]]
nb_storms_by_decade_by_country_2020_with_inform_filt_col.dtypes
nb_storms_by_decade_by_country_2020_with_inform_filt_col["INFORM RISK"] = nb_storms_by_decade_by_country_2020_with_inform_filt_col["INFORM RISK"].astype("float")
nb_storms_by_decade_by_country_2020_with_inform_filt_col.head()
nb_storms_inform_by_country_cor = nb_storms_by_decade_by_country_2020_with_inform_filt_col[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]]
corr = nb_storms_inform_by_country_cor.corr()
sm.graphics.plot_corr(corr, xnames=list(corr.columns))
plt.show() | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Keep observations on decades 2010 and 2020 | nb_storms_inform_by_country_2010_2020 = nb_storms_by_decade_by_country_2020_with_inform_filt_col[nb_storms_by_decade_by_country_2020_with_inform_filt_col["decade"]>=2010]
nb_storms_inform_by_country_2010_2020_cor = nb_storms_inform_by_country_2010_2020[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]]
corr = nb_storms_inform_by_country_2010_2020_cor.corr()
sm.graphics.plot_corr(corr, xnames=list(corr.columns))
plt.show() | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Keep observations on decade 2020 (decade of INFORM index) | nb_storms_inform_by_country_2020_only = nb_storms_by_decade_by_country_2020_with_inform_filt_col[nb_storms_by_decade_by_country_2020_with_inform_filt_col["decade"]==2020]
nb_storms_inform_by_country_2020_only.head()
nb_storms_inform_by_country_2020_only_cor = nb_storms_inform_by_country_2020_only[["ISO", "storms_count", "total_nb_affected", "total_damages","INFORM RISK"]]
corr = nb_storms_inform_by_country_2020_only_cor.corr()
sm.graphics.plot_corr(corr, xnames=list(corr.columns))
plt.show() | _____no_output_____ | MIT | model_tempetes/notebooks/vulnerability_explo.ipynb | allezalex/batch8_worldbank |
Db2 Jupyter Notebook Extensions TutorialThe SQL code tutorials for Db2 rely on a Jupyter notebook extension, commonly refer to as a "magic" command. The beginning of all of the notebooks begin with the following command which will load the extension and allow the remainder of the notebook to use the %sql magic command.&37;run db2.ipynbThe cell below will load the Db2 extension. Note that it will take a few seconds for the extension to load, so you should generally wait until the "Db2 Extensions Loaded" message is displayed in your notebook. | %run db2.ipynb | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
OptionsThere are two options that can be set with the **`%sql`** command. These options are:- **`MAXROWS n`** - The maximum number of rows that you want to display as part of a SQL statement. Setting MAXROWS to -1 will return all output, while maxrows of 0 will suppress all output.- **`RUNTIME n`** - When using the timer option on a SQL statement, the statement will execute for **`n`** number of seconds. The result that is returned is the number of times the SQL statement executed rather than the execution time of the statement. The default value for runtime is one second, so if the SQL is very complex you will need to increase the run time.To set an option use the following syntax:```%sql option option_name value option_name value ....```The following example sets both options:```%sql option maxrows 100 runtime 2```The values will be saved between Jupyter notebook sessions. Connections to Db2Before any SQL commands can be issued, a connection needs to be made to the Db2 database that you will be using. The connection can be done manually (through the use of the CONNECT command), or automatically when the first `%sql` command is issued.The Db2 magic command tracks whether or not a connection has occurred in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. 
This information includes:- Database name (SAMPLE) - Hostname - localhost (enter an IP address if you need to connect to a remote server) - PORT - 50000 (this is the default but it could be different) - Userid - DB2INST1 - Password - No password is provided so you have to enter a value - Maximum Rows - 10 lines of output are displayed when a result set is returned There will be default values presented in the panels that you can accept, or enter your own values. All of the information will be stored in the directory that the notebooks are stored in. Once you have entered the information, the system will attempt to connect to the database for you and then you can run all of the SQL scripts. More details on the CONNECT syntax will be found in a section below.If you have credentials available from Db2 on Cloud or DSX, place the contents of the credentials into a variable and then use the `CONNECT CREDENTIALS ` syntax to connect to the database.```Pythondb2blu = { "uid" : "xyz123456", ...}%sql CONNECT CREDENTIALS db2blu```If the connection is successful using the credentials, the variable will be saved to disk so that you can connect from within another notebook using the same syntax.The next statement will force a CONNECT to occur with the default values. If you have not connected before, it will prompt you for the information. | %sql CONNECT | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Line versus Cell CommandThe Db2 extension is made up of one magic command that works either at the LINE level (`%sql`) or at the CELL level (`%%sql`). If you only want to execute a SQL command on one line in your script, use the `%sql` form of the command. If you want to run a larger block of SQL, then use the `%%sql` form. Note that when you use the `%%sql` form of the command, the entire contents of the cell is considered part of the command, so you cannot mix other commands in the cell.The following is an example of a line command: | %sql VALUES 'HELLO THERE' | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you have SQL that requires multiple lines, of if you need to execute many lines of SQL, then you should be using the CELL version of the `%sql` command. To start a block of SQL, start the cell with `%%sql` and do not place any SQL following the command. Subsequent lines can contain SQL code, with each SQL statement delimited with the semicolon (`;`). You can change the delimiter if required for procedures, etc... More details on this later. | %%sql
VALUES
1,
2,
3 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you are using a single statement then there is no need to use a delimiter. However, if you are combining a number of commands then you must use the semicolon. | %%sql
DROP TABLE STUFF;
CREATE TABLE STUFF (A INT);
INSERT INTO STUFF VALUES
1,2,3;
SELECT * FROM STUFF; | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The script will generate messages and output as it executes. Each SQL statement that generates results will have a table displayed with the result set. If a command is executed, the results of the execution get listed as well. The script you just ran probably generated an error on the DROP table command. OptionsBoth forms of the `%sql` command have options that can be used to change the behavior of the code. For both forms of the command (`%sql`, `%%sql`), the options must be on the same line as the command:%sql -t ...%%sql -tThe only difference is that the `%sql` command can have SQL following the parameters, while the `%%sql` requires the SQL to be placed on subsequent lines.There are a number of parameters that you can specify as part of the `%sql` statement. * `-d` - Use alternative delimiter* `-t` - Time the statement execution* `-q` - Suppress messages * `-j` - JSON formatting of a column* `-a` - Show all output* `-pb` - Bar chart of results* `-pp` - Pie chart of results * `-pl` - Line chart of results* `-i` - Interactive mode with Pixiedust* `-sampledata` Load the database with the sample EMPLOYEE and DEPARTMENT tables* `-r` - Return the results into a variable (list of rows)* `-e` - Echo macro substitutionMultiple parameters are allowed on a command line. Each option should be separated by a space:%sql -a -j ...A `SELECT` statement will return the results as a dataframe and display the results as a table in the notebook. If you use the assignment statement, the dataframe will be placed into the variable and the results will not be displayed:r = %sql SELECT * FROM EMPLOYEEThe sections below will explain the options in more detail. DelimitersThe default delimiter for all SQL statements is the semicolon. However, this becomes a problem when you try to create a trigger, function, or procedure that uses SQLPL (or PL/SQL). Use the `-d` option to turn the SQL delimiter into the at (`@`) sign and `-q` to suppress error messages. 
The semi-colon is then ignored as a delimiter.For example, the following SQL will use the `@` sign as the delimiter. | %%sql -d -q
DROP TABLE STUFF
@
CREATE TABLE STUFF (A INT)
@
INSERT INTO STUFF VALUES
1,2,3
@
SELECT * FROM STUFF
@ | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
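Internally, a cell magic like this can split the cell body on the active delimiter and execute each statement in turn. Here is a minimal stdlib sketch of that idea — an illustration only, not the actual db2.ipynb implementation:

```python
def split_statements(script, delimiter=";"):
    """Split a SQL script into individual statements on a delimiter,
    dropping empty fragments left by trailing delimiters."""
    return [stmt.strip() for stmt in script.split(delimiter) if stmt.strip()]

script = """
DROP TABLE STUFF
@
CREATE TABLE STUFF (A INT)
@
SELECT * FROM STUFF
@
"""

# With '@' as the delimiter, any semicolons inside a statement are just text.
print(split_statements(script, "@"))
```

With the default semicolon delimiter, a statement body that itself contains semicolons (as SQLPL routines do) would be split incorrectly — which is exactly why the `-d` option exists.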
The delimiter change will only take place for the statements following the `%%sql` command. Subsequent cells in the notebook will still use the semicolon. You must use the `-d` option for every cell that needs to use the semicolon in the script. Limiting Result SetsThe default number of rows displayed for any result set is 10. You have the option of changing this value when initially connecting to the database. If you want to override the number of rows displayed you can either update the control variable, or use the -a option. The `-a` option will display all of the rows in the answer set. For instance, the following SQL will only show 10 rows even though we inserted 15 values: | %sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
You will notice that the displayed result will split the visible rows into the first 5 rows and the last 5 rows.Using the `-a` option will display all values in a scrollable table. | %sql -a values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
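The truncated display described above can be sketched as simple list slicing: when a result set exceeds the limit, show the first half and the last half of the allowed rows. A rough stdlib approximation (the real rendering uses pandas dataframes):

```python
def truncate_rows(rows, maxrows=10):
    """Return the rows to display: all of them if under the limit,
    otherwise the first and last maxrows//2 rows."""
    if len(rows) <= maxrows:
        return rows
    half = maxrows // 2
    return rows[:half] + rows[-half:]

values = list(range(1, 16))      # 15 rows against a limit of 10
print(truncate_rows(values))     # first 5 and last 5 rows survive
```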
To change the default value of rows displayed, you can use the `%sql option maxrows` command to set the value to something else. A value of 0 or -1 means unlimited output. | %sql option maxrows 5
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
A special note regarding the output from a `SELECT` statement. If the SQL statement is the last line of a block, the results will be displayed by default (unless you assigned the results to a variable). If the SQL is in the middle of a block of statements, the results will not be displayed. To explicitly display the results you must use the display function (or pDisplay if you have imported another library like pixiedust which overrides the pandas display function). | # Set the maximum back
%sql option maxrows 10
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Quiet ModeEvery SQL statement will result in some output. You will either get an answer set (`SELECT`), or an indication of whether the command worked. For instance, the following set of SQL will generate some error messages since the tables will probably not exist: | %%sql
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG; | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you know that these errors may occur you can silence them with the -q option. | %%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG; | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
SQL output will not be suppressed, so the following command will still show the results. | %%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
VALUES 1,2,3; | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Variables in %sql Blocks Python variables can be passed to a `%sql` line command, and to a `%%sql` block. For both forms of the `%sql` command you can pass variables by placing a colon in front of the variable name.```python%sql SELECT * FROM EMPLOYEE WHERE EMPNO = :empno```The following example illustrates the use of a variable in the SQL. | empno = '000010'
%sql SELECT * FROM EMPLOYEE WHERE EMPNO = :empno | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
You can double-check that the substitution took place by using the `-e` option, which echoes the SQL command after substitution. | %sql -e SELECT * FROM EMPLOYEE WHERE EMPNO = :empno | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Note that the variable `:empno` did not have quotes around it, although it is a string value. The `%sql` call will examine the contents of the variable and add quotes around strings so you do not have to supply them in the SQL command.Variables can also be array types. Arrays are expanded into multiple values, each separated by commas. This is useful when building SQL `IN` lists. The following example searches for 3 employees based on their employee number. | empnos = ['000010','000020','000030']
%sql SELECT * FROM EMPLOYEE WHERE EMPNO IN (:empnos) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
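Behind the scenes, expanding an array variable into an `IN` list amounts to quoting each element and joining the results with commas. A hedged sketch of that substitution step (the real `%sql` logic may differ in detail):

```python
def expand_in_list(values):
    """Render a Python list as a comma-separated, single-quoted SQL IN list."""
    return ", ".join("'" + str(v) + "'" for v in values)

empnos = ['000010', '000020', '000030']
query = "SELECT * FROM EMPLOYEE WHERE EMPNO IN (" + expand_in_list(empnos) + ")"
print(query)
```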
You can reference individual array items using this technique as well. If you wanted to search for only the first value in the `empnos` array, use `:empnos[0]` instead. | %sql SELECT * FROM EMPLOYEE WHERE EMPNO IN (:empnos[0]) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
One final type of variable substitution that is allowed is for dictionaries. Python dictionaries resemble JSON objects and can be used to insert JSON values into Db2. For instance, the following variable contains company information in a JSON structure. | customer = {
"name" : "Aced Hardware Stores",
"city" : "Rockwood",
"employees" : 14
} | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
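Before a dictionary like this can land in a VARCHAR column it has to become a JSON string. A minimal sketch of that round trip using Python's standard json module — how the magic command actually serializes it is an implementation detail:

```python
import json

customer = {
    "name": "Aced Hardware Stores",
    "city": "Rockwood",
    "employees": 14
}

# Serialize the dictionary to a JSON string suitable for a VARCHAR column...
as_text = json.dumps(customer)
print(as_text)

# ...and parse it back into a dictionary after a SELECT.
round_trip = json.loads(as_text)
print(round_trip["name"])
```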
Db2 has builtin functions for dealing with JSON objects. There is another Jupyter notebook which goes through this in detail. Rather than using those functions, the following code will create a Db2 table with a string column that will contain the contents of this JSON record. | %%sql
DROP TABLE SHOWJSON;
CREATE TABLE SHOWJSON (INJSON VARCHAR(256)); | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
To insert the Dictionary (JSON Record) into this Db2 table, you only need to use the variable name as one of the fields being inserted. | %sql INSERT INTO SHOWJSON VALUES :customer | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Selecting from this table will show that the data has been inserted as a string. | %sql select * from showjson | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you want to retrieve the data from a column that contains JSON records, you must use the `-j` flag to return the contents into a variable. | v = %sql -j SELECT * FROM SHOWJSON | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The variable `v` now contains the original JSON record for you to use. | v | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
SQL Character StringsCharacter strings require special handling when dealing with Db2. The single quote character `'` is reserved for delimiting string constants, while the double quote `"` is used for naming columns that require special characters. You cannot use the double quote character to delimit strings that happen to contain the single quote character. What Db2 requires is that you place two quotes in a row to have them interpreted as a single quote character. For instance, the next statement will select one employee from the table who has a quote in their last name: `O'CONNELL`. | %sql SELECT * FROM EMPLOYEE WHERE LASTNAME = 'O''CONNELL' | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Python handles quotes differently! You can assign a string to a Python variable using single or double quotes. The following assignment statements are not identical! | lastname = "O'CONNELL"
print(lastname)
lastname = 'O''CONNELL'
print(lastname) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you use the same syntax as Db2, Python will remove the quote in the string! It interprets this as two strings (O and CONNELL) being concatenated together. That probably isn't what you want! So the safest approach is to use double quotes around your string when you assign it to a variable. Then you can use the variable in the SQL statement as shown in the following example. | lastname = "O'CONNELL"
%sql -e SELECT * FROM EMPLOYEE WHERE LASTNAME = :lastname | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
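The automatic quote handling shown here boils down to doubling any embedded single quotes before the value is spliced into the SQL text. A stdlib sketch of that escaping step — an assumption about how the magic command behaves, not a quote from its source:

```python
def sql_quote(value):
    """Wrap a string in single quotes, doubling any embedded quotes
    so Db2 reads them as literal characters."""
    return "'" + value.replace("'", "''") + "'"

lastname = "O'CONNELL"
print("SELECT * FROM EMPLOYEE WHERE LASTNAME = " + sql_quote(lastname))
```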
Notice how the string constant was updated to contain two quotes when inserted into the SQL statement. This is done automatically by the `%sql` magic command, so there is no need to use the two single quotes when assigning a string to a variable. However, you must use the two single quotes when using constants in a SQL statement. Builtin VariablesThere are 5 predefined variables defined in the program:- database - The name of the database you are connected to- uid - The userid that you connected with- hostname - The IP address of the host system- port - The port number of the host system- max - The maximum number of rows to return in an answer setThese variables are all part of a structure called _settings. To retrieve a value, use the syntax:```pythondb = _settings['database']```There are also 3 variables that contain information from the last SQL statement that was executed.- sqlcode - SQLCODE from the last statement executed- sqlstate - SQLSTATE from the last statement executed- sqlerror - Full error message returned on last statement executedYou can access these variables directly in your code. The following code segment illustrates the use of the SQLCODE variable. | empnos = ['000010','999999']
for empno in empnos:
ans1 = %sql -r SELECT SALARY FROM EMPLOYEE WHERE EMPNO = :empno
if (sqlcode != 0):
print("Employee "+ empno + " left the company!")
else:
print("Employee "+ empno + " salary is " + str(ans1[1][0])) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Timing SQL StatementsSometimes you want to see how the execution of a statement changes with the addition of indexes or other optimization changes. The `-t` option will run the statement on the LINE or one SQL statement in the CELL for exactly one second. The results will be displayed and optionally placed into a variable. The syntax of the command is:sql_time = %sql -t SELECT * FROM EMPLOYEEFor instance, the following SQL will time the VALUES clause. | %sql -t VALUES 1,2,3,4,5,6,7,8,9 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
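Conceptually, the `-t` option runs the statement in a loop and counts completed executions until the runtime budget expires. A rough sketch with the standard time module — the callable here is just a stand-in for executing SQL against the database:

```python
import time

def count_executions(statement, runtime=1.0):
    """Repeatedly run `statement` (a zero-argument callable) and return
    how many times it completed before `runtime` seconds elapsed."""
    count = 0
    deadline = time.monotonic() + runtime
    while time.monotonic() < deadline:
        statement()
        count += 1
    return count

# Stand-in for executing "VALUES 1,2,3,4,5,6,7,8,9" against the database.
runs = count_executions(lambda: sum(range(10)), runtime=0.1)
print(runs, "executions")
```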
When timing a statement, no output will be displayed. If your SQL statement takes longer than one second you will need to modify the runtime options. You can use the `%sql option runtime` command to change the duration the statement runs. | %sql option runtime 5
%sql -t VALUES 1,2,3,4,5,6,7,8,9
%sql option runtime 1 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
JSON FormattingDb2 supports querying JSON that is stored in a column within a table. Standard output would just display the JSON as a string. For instance, the following statement would just return a large string of output. | %%sql
VALUES
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}' | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Adding the -j option to the `%sql` (or `%%sql`) command will format the first column of a result set to better display the structure of the document. Note that if your answer set has additional columns associated with it, they will not be displayed in this format. | %%sql -j
VALUES
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}' | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
JSON fields can be inserted into Db2 columns using Python dictionaries. This makes the input and output of JSON fields much simpler. For instance, the following code will create a Python dictionary which is similar to a JSON record. | employee = {
"firstname" : "John",
"lastname" : "Williams",
"age" : 45
} | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The field can be inserted into a character column (or BSON if you use the JSON functions) by doing a direct variable insert. | %%sql -q
DROP TABLE SHOWJSON;
CREATE TABLE SHOWJSON(JSONIN VARCHAR(128)); | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
An insert would use a variable parameter (colon in front of the variable) instead of a character string. | %sql INSERT INTO SHOWJSON VALUES (:employee)
%sql SELECT * FROM SHOWJSON | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
An assignment statement to a variable will result in an equivalent Python dictionary type being created. Note that we must use the raw `-j` flag to make sure we only get the data and not a data frame. | x = %sql -j SELECT * FROM SHOWJSON
print("First Name is " + x[0]["firstname"] + " and the last name is " + x[0]['lastname']) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
PlottingSometimes it would be useful to display a result set as either a bar, pie, or line chart. The first one or two columns of a result set need to contain the values needed to plot the information.The three possible plot options are: * `-pb` - bar chart (x,y)* `-pp` - pie chart (y)* `-pl` - line chart (x,y)The following data will be used to demonstrate the different charting options. | %sql values 1,2,3,4,5 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Since the results only have one column, the pie, line, and bar charts will not have any labels associated with them. The first example is a bar chart. | %sql -pb values 1,2,3,4,5 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The same data as a pie chart. | %sql -pp values 1,2,3,4,5 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
And finally a line chart. | %sql -pl values 1,2,3,4,5 | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you retrieve two columns of information, the first column is used for the labels (X axis or pie slices) and the second column contains the data. | %sql -pb values ('A',1),('B',2),('C',3),('D',4),('E',5) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
For a pie chart, the first column is used to label the slices, while the data comes from the second column. | %sql -pp values ('A',1),('B',2),('C',3),('D',4),('E',5) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Finally, for a line chart, the x contains the labels and the y values are used. | %sql -pl values ('A',1),('B',2),('C',3),('D',4),('E',5) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The following SQL will plot the number of employees per department. | %%sql -pb
SELECT WORKDEPT, COUNT(*)
FROM EMPLOYEE
GROUP BY WORKDEPT | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The final option for plotting data is to use interactive mode `-i`. This will display the data using an open-source project called Pixiedust. You can view the results in a table and then interactively create a plot by dragging and dropping column names into the appropriate slot. The next command will place you into interactive mode. | %sql -i select * from employee | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Sample DataMany of the Db2 notebooks depend on two of the tables that are found in the `SAMPLE` database. Rather than having to create the entire `SAMPLE` database, this option will create and populate the `EMPLOYEE` and `DEPARTMENT` tables in your database. Note that if you already have these tables defined, they will not be dropped. | %sql -sampledata | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Result Sets By default, any `%sql` block will return the contents of a result set as a table that is displayed in the notebook. The results are displayed using a feature of pandas dataframes. The following select statement demonstrates a simple result set. | %sql select * from employee fetch first 3 rows only | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
You can assign the result set directly to a variable. | x = %sql select * from employee fetch first 3 rows only | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
The variable x contains the dataframe that was produced by the `%sql` statement, so you can access the result set through this variable or display the contents by just referring to it on a line by itself. | x | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
There is an additional way of capturing the data through the use of the `-r` flag.var = %sql -r select * from employeeRather than returning a dataframe result set, this option will produce a list of rows. Each row is a list itself. The column names are found in row zero (0) and the data rows start at 1. To access the first column of the first row, you would use var[1][0]. | rows = %sql -r select * from employee fetch first 3 rows only
print(rows[1][0]) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
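The `-r` layout is easy to mimic: row zero holds the column names and the data rows start at index one. A small sketch of that structure, using made-up sample values rather than real EMPLOYEE data:

```python
# Header row at index 0, data rows from index 1 — mirroring the -r layout.
rows = [
    ["EMPNO", "LASTNAME", "SALARY"],   # column names
    ["000010", "HAAS", 152750.00],     # first data row
    ["000020", "THOMPSON", 94250.00],
]

# First column of the first data row, as described above.
print(rows[1][0])
```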
The number of rows in the result set can be determined by using the length function and subtracting one for the header row. | print(len(rows)-1) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you want to iterate over all of the rows and columns, you could use the following Python syntax instead of creating a for loop that goes from 0 to 41. | for row in rows:
line = ""
for col in row:
line = line + str(col) + ","
print(line) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
If you don't want the header row, modify the first line to start at the first row instead of row zero. | for row in rows[1:]:
line = ""
for col in row:
line = line + str(col) + ","
print(line) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Since the data may be returned in different formats (like integers), you should use the str() function to convert the values to strings. Otherwise, the string concatenation used in the above example will fail. For instance, the 6th field is a birthdate field. If you retrieve it as an individual value and try to concatenate a string to it, you get the following error. | try:
print("Birth Date="+rows[1][6])
except Exception as err:
print("Oops... Something went wrong!")
print(err) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
You can fix this problem by adding the str function to convert the date. | print("Birth Date="+str(rows[1][6])) | _____no_output_____ | Apache-2.0 | Db2 Jupyter Extensions Tutorial.ipynb | DB2-Samples/db2jupyter |
Welcome to the matched filtering tutorial! Installation Make sure you have PyCBC and some basic lalsuite tools installed. You can do this in a terminal with pip: | ! pip install lalsuite pycbc | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
Jess notes: this notebook was made with a PyCBC 1.8.0 kernel. Learning goals With this tutorial, you learn how to:* Generate source waveforms detectable by LIGO, Virgo, KAGRA* Use PyCBC to run a matched filter search on gravitational wave detector data * Estimate the significance of a trigger given a background distribution* **Challenge**: Code up a trigger coincidence algorithm This tutorial borrows heavily from tutorials made for the [LIGO-Virgo Open Data Workshop](https://www.gw-openscience.org/static/workshop1/course.html) by Alex Nitz. You can find PyCBC documentation and additional examples [here](http://pycbc.org/pycbc/latest/html/py-modindex.html). Let's get started!___ Generate a gravitational wave signal waveformWe'll use a popular waveform approximant ([SOEBNRv4](https://arxiv.org/pdf/1611.03703.pdf)) to generate waveforms that would be detectable by LIGO, Virgo, or KAGRA. First we import the packages we'll need. | from pycbc.waveform import get_td_waveform
import pylab | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
Let's see what these waveforms look like for different component masses. We'll assume the two compact objects have masses equal to each other, and we'll set a lower frequency bound of 30 Hz (determined by the sensitivity of our detectors).We can also set a time sample rate with `get_td_waveform`. Let's try a rate of 4096 Hz. Let's make a plot of the plus polarization (`hp`) to get a feel for what the waveforms look like. | for m in [5, 10, 30, 100]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=m,
mass2=m,
delta_t=1.0/4096,
f_lower=30)
pylab.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)
pylab.legend(loc='upper left')
pylab.ylabel('GW strain (plus polarization)')
pylab.grid()
pylab.xlabel('Time (s)')
pylab.show() | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
Now let's see what happens if we decrease the lower frequency bound from 30 Hz to 15 Hz. | for m in [5, 10, 30, 100]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=m,
mass2=m,
delta_t=1.0/4096,
f_lower=15)
pylab.plot(hp.sample_times, hp, label='$M_{\odot 1,2}=%s$' % m)
pylab.legend(loc='upper left')
pylab.ylabel('GW strain (plus polarization)')
pylab.grid()
pylab.xlabel('Time (s)')
pylab.show() | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
--- Exercise 1What happens to the waveform when the total mass (let's say 20 Msol) stays the same, but the mass ratio between the component masses changes? Compare the waveforms for a m1 = m2 = 10 Msol system, and a m1 = 2 Msol, m2 = 18 Msol system. What do you notice? | # complete | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
Exercise 2 How much longer (in signal duration) would LIGO and Virgo (and KAGRA) be able to detect a 1.4-1.4 Msol binary neutron star system if our detectors were sensitive down to 10 Hz instead of 30 Hz? **Note: you'll need to use a different waveform approximant here. Try TaylorF2.** Jess notes: this would be a major benefit of next-generation ("3G") ground-based gravitational wave detectors. | # complete | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
--- Distance vs. signal amplitudeLet's see what happens when we scale the distance (in units of Megaparsecs) for a system with a total mass of 20 Msol. Note: redshift effects are not included here. | for d in [100, 500, 1000]:
hp, hc = get_td_waveform(approximant="SEOBNRv4_opt",
mass1=10,
mass2=10,
delta_t=1.0/4096,
f_lower=30,
distance=d)
pylab.plot(hp.sample_times, hp, label='Distance=%s Mpc' % d)
pylab.grid()
pylab.xlabel('Time (s)')
pylab.ylabel('GW strain (plus polarization)')
pylab.legend(loc='upper left')
pylab.show() | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
--- Run a matched filter search on gravitational wave detector dataPyCBC also maintains a catalog of open data as PyCBC time series objects, easy to manipulate with PyCBC tools. Let's try using that and importing the data around the first detection, GW150914. | import pylab
from pycbc.catalog import Merger
from pycbc.filter import resample_to_delta_t, highpass
merger = Merger("GW150914")
# Get the data from the Hanford detector
strain = merger.strain('H1') | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |
Data pre-conditioning Once we've imported the open data from this alternate source, the first thing we'll need to do is **pre-condition** the data. This serves a few purposes: * 1) reduces the dynamic range of the data* 2) suppresses high amplitudes at low frequencies, which can introduce numerical artifacts* 3) if we don't need high frequency information, downsampling allows us to compute our matched filter result fasterLet's try highpassing above 15 Hz and downsampling to 2048 Hz, and we'll make a plot to see what the result looks like: | # Remove the low frequency content and downsample the data to 2048Hz
strain = resample_to_delta_t(highpass(strain, 15.0), 1.0/2048)
pylab.plot(strain.sample_times, strain)
pylab.xlabel('Time (s)') | _____no_output_____ | MIT | Session9/Day4/Matched_filter_tutorial.ipynb | jlmciver/LSSTC-DSFP-Sessions |