Note that the asymmetry parameter becomes negative at the reflection peak (as expected, since light is preferentially backscattered), and, as a result, the transport length has a dip in the same wavelength region.

Calculating the reflection spectrum of a core-shell particle system, with either an absorbing or non-absorbing particle index

We can calculate a reflection spectrum of a system of core-shell particles, where the core and the shell(s) can have different radii and refractive indices. The syntax is mostly the same as for non-core-shell particles, except that the particle radius and the particle index are now Quantity arrays of values ordered from the innermost layer (the core) to the outermost layer of the particle. The volume fraction is that of the entire core-shell particle.
# Example calculation for a core-shell particle system
# (core is polystyrene and shell is silica, in a matrix of air)
from structcol import model

# parameters for our colloidal sample
volume_fraction = sc.Quantity(0.5, '')
radius = sc.Quantity(np.array([110, 120]), 'nm')

# wavelengths of interest
wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm')

# calculate refractive indices at wavelengths of interest
n_particle = sc.Quantity([ri.n('polystyrene', wavelength),
                          ri.n('fused silica', wavelength)])
# assume the shell absorbs
n_particle_abs = sc.Quantity([ri.n('polystyrene', wavelength),
                              ri.n('fused silica', wavelength) + 0.005j])
n_matrix = ri.n('vacuum', wavelength)
n_medium = n_matrix

# now calculate the reflection spectrum, asymmetry parameter (g), and
# transport length (lstar)
refl = np.zeros(wavelength.size)
g = np.zeros(wavelength.size)
lstar = np.zeros(wavelength.size) * sc.ureg('um')
refl_abs = np.zeros(wavelength.size)
g_abs = np.zeros(wavelength.size)
lstar_abs = np.zeros(wavelength.size) * sc.ureg('um')

for i in range(wavelength.size):
    # non-absorbing case
    refl[i], _, _, g[i], lstar[i] = model.reflection(n_particle[:, i], n_matrix[i],
                                                     n_medium[i], wavelength[i],
                                                     radius, volume_fraction,
                                                     thickness=sc.Quantity('15000.0 nm'),
                                                     theta_min=sc.Quantity('90 deg'))
    # absorbing case
    refl_abs[i], _, _, g_abs[i], lstar_abs[i] = model.reflection(n_particle_abs[:, i], n_matrix[i],
                                                                 n_medium[i], wavelength[i],
                                                                 radius, volume_fraction,
                                                                 thickness=sc.Quantity('15000.0 nm'),
                                                                 theta_min=sc.Quantity('90 deg'))

fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8, 8))
ax_a.plot(wavelength, refl, label='non-absorbing shell')
ax_a.plot(wavelength, refl_abs, label='absorbing shell')
ax_a.legend()
ax_a.set_ylabel('Reflected fraction (unpolarized)')
ax_b.plot(wavelength, g, label='non-absorbing shell')
ax_b.plot(wavelength, g_abs, '--', label='absorbing shell')
ax_b.legend()
ax_b.set_ylabel('Asymmetry parameter')
ax_c.semilogy(wavelength, lstar, label='non-absorbing shell')
ax_c.semilogy(wavelength, lstar_abs, '--', label='absorbing shell')
ax_c.legend()
ax_c.set_ylabel('Transport length (μm)')
ax_c.set_xlabel('wavelength (nm)')
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Calculating the reflection spectrum of a polydisperse system with either one or two species of particles

We can calculate the spectrum of a polydisperse system with either one or two species of particles, meaning that there are one or two mean radii, and each species has its own size distribution. We then need to specify the mean radius, the polydispersity index (pdi), and the concentration of each species. For example, consider a system of 90$\%$ 200 nm polystyrene particles and 10$\%$ 300 nm particles, with each species having a polydispersity index of 1$\%$. In this case, the mean radii are [200, 300] nm, the pdi are [0.01, 0.01], and the concentrations are [0.9, 0.1]. If the system is monospecies, we still need to specify the polydispersity parameters as 2-element arrays: the mean radii become [200, 200] nm, the pdi become [0.01, 0.01], and the concentrations become [1.0, 0.0]. To include absorption in the polydisperse calculation, we just use a complex refractive index for the particle and/or the matrix.

Note: the code takes longer (~1 min) in the polydisperse case than in the monodisperse case because it calculates the scattering for a distribution of particle sizes.

Note 2: the code currently does not handle polydispersity for systems of core-shell particles.
# Example calculation for a polydisperse system with two species of particles,
# each with its own size distribution
from structcol import model

# parameters for our colloidal sample
volume_fraction = sc.Quantity(0.5, '')
radius = sc.Quantity('100 nm')

# define the parameters for polydispersity
radius2 = sc.Quantity('150 nm')
concentration = sc.Quantity(np.array([0.9, 0.1]), '')
pdi = sc.Quantity(np.array([0.01, 0.01]), '')

# wavelengths of interest
wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm')

# calculate refractive indices at wavelengths of interest
n_particle = ri.n('polystyrene', wavelength)
n_matrix = ri.n('vacuum', wavelength)
n_medium = n_matrix

# now calculate the reflection spectrum, asymmetry parameter (g), and
# transport length (lstar)
refl_mono = np.zeros(wavelength.size)
g_mono = np.zeros(wavelength.size)
lstar_mono = np.zeros(wavelength.size) * sc.ureg('um')
refl_poly = np.zeros(wavelength.size)
g_poly = np.zeros(wavelength.size)
lstar_poly = np.zeros(wavelength.size) * sc.ureg('um')

for i in range(wavelength.size):
    # need to specify extra parameters for the polydisperse (and bispecies) case
    refl_poly[i], _, _, g_poly[i], lstar_poly[i] = model.reflection(n_particle[i], n_matrix[i],
                                                                    n_medium[i], wavelength[i],
                                                                    radius, volume_fraction,
                                                                    thickness=sc.Quantity('15000.0 nm'),
                                                                    theta_min=sc.Quantity('90 deg'),
                                                                    radius2=radius2,
                                                                    concentration=concentration,
                                                                    pdi=pdi,
                                                                    structure_type='polydisperse',
                                                                    form_type='polydisperse')
    # monodisperse (assuming the system is composed purely of the 100 nm particles)
    refl_mono[i], _, _, g_mono[i], lstar_mono[i] = model.reflection(n_particle[i], n_matrix[i],
                                                                    n_medium[i], wavelength[i],
                                                                    radius, volume_fraction,
                                                                    thickness=sc.Quantity('15000.0 nm'),
                                                                    theta_min=sc.Quantity('90 deg'))

fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8, 8))
ax_a.plot(wavelength, refl_mono, label='monodisperse')
ax_a.plot(wavelength, refl_poly, label='polydisperse, bispecies')
ax_a.legend()
ax_a.set_ylabel('Reflected fraction (unpolarized)')
ax_b.plot(wavelength, g_mono, label='monodisperse')
ax_b.plot(wavelength, g_poly, label='polydisperse, bispecies')
ax_b.legend()
ax_b.set_ylabel('Asymmetry parameter')
ax_c.semilogy(wavelength, lstar_mono, label='monodisperse')
ax_c.semilogy(wavelength, lstar_poly, label='polydisperse, bispecies')
ax_c.legend()
ax_c.set_ylabel('Transport length (μm)')
ax_c.set_xlabel('wavelength (nm)')
Note that the polydisperse case has a broader and red-shifted peak compared to the monodisperse case. This trend makes sense, since the polydisperse system contains 10$\%$ of particles that are larger than those in the monodisperse system.

Mie scattering module

Normally you won't need to use this module on its own, but if you want to, start with
from structcol import mie
Form factor calculation:
wavelen = sc.Quantity('450 nm')
n_matrix = ri.n('vacuum', wavelen)
n_particle = ri.n('polystyrene', wavelen)
radius = sc.Quantity('0.4 um')
m = sc.index_ratio(n_particle, n_matrix)
x = sc.size_parameter(wavelen, n_matrix, radius)

# must explicitly state whether angles are in radians or degrees
angles = sc.Quantity(np.linspace(0, np.pi, 1000), 'rad')
form_factor_par, form_factor_perp = mie.calc_ang_dist(m, x, angles)

plt.semilogy(angles.to('deg'), form_factor_par, label='parallel polarization')
plt.plot(angles.to('deg'), form_factor_perp, label='perpendicular polarization')
plt.legend()
plt.xlabel('angle ($\degree$)')
plt.ylabel('intensity')
Structure module

To use this module:
from structcol import structure
Here is an example of calculating structure factors with the Percus-Yevick approximation. The code is fully vectorized, so we can calculate structure factors for a variety of qd values and volume fractions in parallel:
qd = np.arange(0.1, 20, 0.01)
phi = np.array([0.15, 0.3, 0.45])

# this little trick allows us to calculate the structure factor on a 2d
# grid of points (turns qd into a column vector and phi into a row vector).
# Could also use np.ogrid
s = structure.factor_py(qd.reshape(-1, 1), phi.reshape(1, -1))

for i in range(len(phi)):
    plt.plot(qd, s[:, i], label='$\phi=$' + str(phi[i]))
plt.legend()
plt.xlabel('$qd$')
plt.ylabel('$S(qd)$')
Google Colab integration
%%bash
test ! -z $COLAB_GPU || exit 0   # Run only if running on Google Colab
test ! -f ampl.installed || exit 0   # Run only once
rm -rf ampl.linux-intel64
# You can install a demo bundle with all solvers:
# curl -O https://portal.ampl.com/dl/amplce/ampl.linux64.tgz && tar xzf ampl.linux64.tgz
# Or pick individual modules (recommended in order to reduce disk usage):
curl -O https://portal.ampl.com/dl/modules/ampl-module.linux64.tgz && tar xzf ampl-module.linux64.tgz
curl -O https://portal.ampl.com/dl/modules/coin-module.linux64.tgz && tar xzf coin-module.linux64.tgz
cp ampl.linux-intel64/ampl.lic ampl.linux-intel64/ampl.lic.demo
touch ampl.installed

%%bash
# If you have an AMPL Cloud License or an AMPL CE license, you can use it on Google Colab.
# Note: your license UUID should never be shared. Please make sure you delete the license UUID
# and rerun this cell before sharing the notebook with anyone.
LICENSE_UUID=
test ! -z $COLAB_GPU || exit 0   # Run only if running on Google Colab
cd ampl.linux-intel64 && pwd
test -z $LICENSE_UUID && cp ampl.lic.demo ampl.lic   # Restore demo license in case LICENSE_UUID is empty
test ! -z $LICENSE_UUID && curl -O https://portal.ampl.com/download/license/$LICENSE_UUID/ampl.lic
./ampl -vvq

import os
if 'COLAB_GPU' in os.environ:
    os.environ['PATH'] += os.pathsep + '/content/ampl.linux-intel64'
notebooks/colab_bash.ipynb
ampl/amplpy
bsd-3-clause
Import amplpy, instantiate an AMPL object, and register the Jupyter notebook magics
from amplpy import AMPL, register_magics
ampl = AMPL()
# Store %%ampl cells in the list _ampl_cells
# Evaluate %%ampl_eval cells with ampl.eval()
register_magics(store_name='_ampl_cells', ampl_object=ampl)
Import libraries
import os
from datetime import datetime

import apache_beam as beam
import numpy as np
import tensorflow.io as tf_io
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/02_export_bqml_mf_embeddings.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Configure GCP environment settings

Update the following variables to reflect the values for your GCP environment:

PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
REGION: The region to use for the Dataflow job.
PROJECT_ID = "yourProject"      # Change to your project.
BUCKET = "yourBucketName"       # Change to the bucket you created.
REGION = "yourDataflowRegion"   # Change to your Dataflow region.
BQ_DATASET_NAME = "recommendations"

!gcloud config set project $PROJECT_ID
Export the item embedding vector data

Export the item embedding data to Cloud Storage by using a Dataflow pipeline. This pipeline does the following:

1. Reads the item embedding records from the item_embeddings table in BigQuery.
2. Writes each item embedding record to a CSV file.
3. Writes the item embedding CSV files to a Cloud Storage bucket.

The pipeline is implemented in the embeddings_exporter/pipeline.py module.

Configure the pipeline variables

Configure the variables needed by the pipeline:
runner = "DataflowRunner"
timestamp = datetime.utcnow().strftime("%y%m%d%H%M%S")
job_name = f"ks-bqml-export-embeddings-{timestamp}"
bq_dataset_name = BQ_DATASET_NAME
embeddings_table_name = "item_embeddings"
output_dir = f"gs://{BUCKET}/bqml/item_embeddings"
project = PROJECT_ID
temp_location = os.path.join(output_dir, "tmp")
region = REGION

print(f"runner: {runner}")
print(f"job_name: {job_name}")
print(f"bq_dataset_name: {bq_dataset_name}")
print(f"embeddings_table_name: {embeddings_table_name}")
print(f"output_dir: {output_dir}")
print(f"project: {project}")
print(f"temp_location: {temp_location}")
print(f"region: {region}")

try:
    os.chdir(os.path.join(os.getcwd(), "embeddings_exporter"))
except:
    pass
Parameters
# parameters of the filter
beta = 0.33
n_sps = 8           # samples per symbol
syms_per_filt = 4   # symbols per filter (plus/minus in both directions)
K_filt = 2 * syms_per_filt * n_sps + 1   # length of the FIR filter

# set symbol time
t_symb = 1.0

# switch for normalizing all signals in order to only look at their shape and not their energy
normalize_signals = 1
nt2_ce2/vorlesung/ch_3_channels/multipath.ipynb
kit-cel/lecture-examples
gpl-2.0
Define Channel
# defining delays of the multi-path components and their weights
# NOTE: no sanity check that the lengths correspond, so errors may occur

# choose whether the multipath delays are
# (0) multiples of the symbol time t_symb or
# (1) multiples of the sampling time t_symb / n_sps
channel_factors = [1.0, .5, .1]
delay_type = 1

if delay_type == 0:
    # construction based on delays w.r.t. symbol time
    channel_delays_syms = np.array([1, 3, 5])
    channel_delays_samples = n_sps * channel_delays_syms
else:
    # "fractional delays", i.e. delays w.r.t. t_symb / n_sps
    channel_delays_samples = np.array([1, 12, 15])

h_channel = np.zeros(np.max(channel_delays_samples) + 1)
for k in np.arange(len(channel_delays_samples)):
    h_channel[channel_delays_samples[k]] = channel_factors[k]
Pulse Shape and Effect of Multi-Path on Pulse Shape
rrc = get_rrc_ir(K_filt, n_sps, t_symb, beta)
r_rrc = np.convolve(rrc, h_channel)

if normalize_signals:
    rrc /= np.linalg.norm(rrc)
    r_rrc /= np.linalg.norm(r_rrc)
show pulse and version after multi-path
plt.figure()
plt.plot(np.arange(len(rrc)), rrc, linewidth=2.0, label='$g_{\\mathrm{rrc}}(t)$')
plt.plot(np.arange(len(r_rrc)), r_rrc, linewidth=2.0, label='$r(t)$')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n/t$')
Show spectra. Note: both spectra are normalized to have equal energy in the frequency domain.
rrc_padded = np.hstack((rrc, np.zeros(9 * len(rrc))))
rrc_padded = np.roll(rrc_padded, -(K_filt - 1) // 2)
RRC = np.fft.fft(rrc_padded)

r_rrc_padded = np.hstack((r_rrc, np.zeros(10 * len(rrc) - len(r_rrc))))
r_rrc_padded = np.roll(r_rrc_padded, -(K_filt - 1) // 2)
R_rrc = np.fft.fft(r_rrc_padded)

if normalize_signals:
    RRC /= np.linalg.norm(RRC)
    R_rrc /= np.linalg.norm(R_rrc)

f = np.linspace(-n_sps / (2 * t_symb), n_sps / (2 * t_symb), len(rrc_padded))

plt.figure()
plt.plot(f, np.abs(RRC)**2, linewidth=2.0, label='$|G_{\\mathrm{rrc}}(f)|^2$')
plt.plot(f, np.abs(R_rrc)**2, linewidth=2.0, label='$|R(f)|^2$')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$f$')
Note: The spectrum is shown to emphasize that the pulse shape might be distorted.

Real Data

Modulated Tx-signal
# number of symbols
n_symb = 8

# modulation scheme and constellation points
constellation = np.array([1, -1])

# generate random binary vector and modulate with the specified modulation scheme
d = np.random.randint(2, size=n_symb)
s = constellation[d]

# prepare sequence to be filtered by upsampling
s_up = np.zeros(n_symb * n_sps, dtype=complex)
s_up[::n_sps] = s
s_up_delayed = np.hstack((np.zeros((K_filt - 1) // 2), s_up))
s_up_delayed_delayed = np.hstack((np.zeros(K_filt - 1), s_up))

# apply RRC filtering
s_Tx = np.convolve(s_up, rrc)
rc = np.convolve(rrc, rrc)
s_rc = np.convolve(s_up, rc)

# get received signal
r = np.convolve(s_Tx, h_channel)

# apply MF at the Rx
y_mf_rrc = np.convolve(r, rrc)
y_mf_no_channel = np.convolve(s_Tx, rrc)

# normalize signals if applicable
if normalize_signals:
    s_Tx /= np.linalg.norm(s_Tx)
    r /= np.linalg.norm(r)
    y_mf_rrc /= np.linalg.norm(y_mf_rrc)
    y_mf_no_channel /= np.linalg.norm(y_mf_no_channel)
show signals
plt.plot(np.arange(len(s_up)), np.max(np.abs(s_Tx)) * np.real(s_up), 'vr', ms=12, label='$A_n$')
plt.plot(np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_n g_{\\mathrm{rrc}}(t-nT)$')
plt.plot(np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed), 'xg', ms=18, label='$A_{n-\\tau_\mathrm{g}}$')
plt.title('Tx signal and symbols')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n$')
Note I: The symbol values (except for scaling) may be obtained by sampling the signal $s(t)$ at times delayed by $\tau_\mathrm{g}$, the group delay of the pulse-shaping filter.

Note II: In the upcoming plots the symbols $A_n$ (without delay) will therefore be omitted and only $A_{n-\tau_\mathrm{g}}$ will be shown.
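To make Note I concrete: for a symmetric FIR filter, the group delay is half the filter length in samples. A minimal sketch with the filter parameters from this notebook (sampling the shaped signal at that delay, with symbol spacing, recovers the symbols up to scaling):

```python
# group delay (in samples) of the symmetric length-K_filt FIR used above
n_sps = 8                                # samples per symbol, as in the parameter cell
syms_per_filt = 4
K_filt = 2 * syms_per_filt * n_sps + 1   # 65 taps
tau_g = (K_filt - 1) // 2                # group delay in samples

# sampling the shaped signal at n*n_sps + tau_g recovers the symbols (up to scaling):
# s_hat = s_Tx[tau_g::n_sps][:n_symb]
print(tau_g)   # → 32
```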
plt.plot(np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed), 'x', ms=18, label='$A_{n-\\tau_\mathrm{g}}$')
plt.plot(np.arange(len(s_up_delayed_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed_delayed), 'D', ms=12, label='$A_{n-2\\tau_\mathrm{g}}$')
plt.plot(np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_n g_{\\mathrm{rrc}}(t-nT)$')
plt.plot(np.arange(len(y_mf_no_channel)), np.real(y_mf_no_channel), linewidth=2.0, label='$y_{\\mathrm{mf}}(t)=s(t)* g_{\\mathrm{rrc}}(t)$')
plt.title('Rx after MF, without channel')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n$')
Note: If the channel is perfect, i.e., no channel is active at all, then the MF at the receiver--matched to the RRC--leads to a signal $y_\mathrm{rrc}(t)$ whose samples, taken with delay $2\tau_\mathrm{g}$, correspond to the transmitted symbols.
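A minimal numerical check of this note, using a stand-in Nyquist filter pair (a normalized rectangular pulse instead of the RRC used above, so no external filter helper is needed): pulse shaping, matched filtering, and sampling at the cascade's total delay recovers the symbols exactly.

```python
import numpy as np

n_sps, n_symb = 8, 8
g = np.ones(n_sps) / np.sqrt(n_sps)   # stand-in pulse; g matched-filtered with g is Nyquist

rng = np.random.default_rng(0)
s = 2 * rng.integers(0, 2, size=n_symb) - 1.0   # BPSK symbols

# upsample the symbol sequence
s_up = np.zeros(n_symb * n_sps)
s_up[::n_sps] = s

# pulse shaping + matched filter (no channel)
y = np.convolve(np.convolve(s_up, g), g)

# the total delay of the two length-n_sps filters is n_sps - 1 samples here
s_hat = y[n_sps - 1::n_sps][:n_symb]
print(np.allclose(s_hat, s))   # → True: symbols recovered exactly
```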
plt.plot(np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed), 'x', ms=18, label='$A_{n-\\tau_\mathrm{g}}$')
plt.plot(np.arange(len(s_up_delayed_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed_delayed), 'D', ms=12, label='$A_{n-2\\tau_\mathrm{g}}$')
plt.plot(np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_n g_{\\mathrm{rrc}}(t-nT)$')
#plt.plot(np.arange(len(r)), np.real(r), linewidth=2.0, label='$r(t)=s(t)*h(t)$')
plt.plot(np.arange(len(y_mf_rrc)), np.real(y_mf_rrc), linewidth=2.0, label='$y_{\\mathrm{mf}}(t)=r(t)* g_{\\mathrm{rrc}}(t)$')
plt.title('Transmission with channel')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n$')
TB Model

We pick the following parameters:
+ hopping constant $t = 250$ meV
+ $\Delta = 1.0\,t$, so that $T_c^{MF} = 0.5\,t$ and $\xi_0 \simeq a_0$
+ $g = -0.25$, unitless, so as to match the article's formalism, not the thesis'
+ $J = \dfrac{0.1 t}{0.89}$, so as to set $T_{KT} = 0.1 t$

This means that we have the following physical properties:
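The cell below calls `meV_to_K` and later `K_to_meV`, which are defined elsewhere in the notebook. A minimal sketch of the conversion they are assumed to implement (energy and temperature related through the Boltzmann constant):

```python
# assumed implementation of the notebook's meV_to_K / K_to_meV helpers
K_B_MEV_PER_K = 8.617333262e-2   # Boltzmann constant in meV/K

def meV_to_K(energy_mev):
    """Convert an energy in meV to the corresponding temperature in K."""
    return energy_mev / K_B_MEV_PER_K

def K_to_meV(temperature_k):
    """Convert a temperature in K to the corresponding energy in meV."""
    return temperature_k * K_B_MEV_PER_K

# T_c^MF = 0.5 t with t = 250 meV corresponds to roughly 1450 K
print(meV_to_K(0.5 * 250))
```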
Tc_mf = meV_to_K(0.5 * 250)
print('$T_c^{MF} = $', Tc_mf, 'K')
print(r'$T_{KT} = $', Tc_mf / 10.0, 'K')
Notebooks/test_DOS.ipynb
dombrno/PG
bsd-2-clause
Instantiation
T_CST = 0.25
MY_PARAMS = {"width": 10,
             "chem_potential": 0.0,
             "hopping_constant": T_CST,
             "J_constant": 0.1 * T_CST / 0.89,
             "delta": 1.0 * T_CST,
             "use_assaad": False,
             "broadening_delta": 0.01 * T_CST}
MY_MODEL = TbModel(MY_PARAMS)
Modification
TB_PARAMS = {"width": 20, "use_assaad": True}
MY_MODEL.set_params(TB_PARAMS)
print(MY_MODEL)
DOS Computation
fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_axis_ampl = 4.5
xmin = -x_axis_ampl
xmax = x_axis_ampl
ax.set_xlim([xmin, xmax])
#ax.set_ylim([0.0, 1.0])
l_xticks = np.linspace(int(xmin), int(xmax), 9, endpoint=True)
ax.set_xticks(l_xticks)
ax.set_xticklabels(['%1.1f' % elem for elem in l_xticks])
#ax.set_xticklabels([r'$\psi$' for elem in l_xticks])
#ax.set_yticks(np.linspace(xmin, xmax, 5, endpoint=True))
#ax.set_yticklabels(['%1.1f' % elem for elem in np.linspace(xmin, xmax, 5, endpoint=True)])
dos_values = np.real(MY_MODEL.get_dos())
ax.plot(MY_MODEL.lattice.omega_mesh / T_CST, dos_values)
Modification
BCS_PARAMS = {"width": 20,
              "use_assaad": True,
              "uniform_phase": True,
              "temperature": 1.75 * 145.0,
              "delta": 1.0 * T_CST}
MY_DWAVE_MODEL.set_params(BCS_PARAMS)
print(MY_DWAVE_MODEL)
print("temp: ", K_to_meV(MY_DWAVE_MODEL.temperature), "meV")
DOS Computation
dos_values = np.real(MY_DWAVE_MODEL.get_dos())

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_values = MY_DWAVE_MODEL.lattice.omega_mesh / T_CST
xmin = np.amin(x_values)
xmax = np.amax(x_values)
ax.set_xlim([xmin, xmax])
ax.set_xticks(np.linspace(int(xmin), int(xmax), 13, endpoint=True))
ax.plot(x_values, dos_values)
Modification
MC_PARAMS_MP = {"intervals": BCS_PARAMS["width"]**2 / 2,
                "target_snapshots": 25,
                "algorithm": "metropolis"}
MC_PARAMS_CLUSTER = {"intervals": 5,
                     "target_snapshots": 25,
                     "algorithm": "cluster"}
MY_DRIVER.set_params(MC_PARAMS_MP)
print(MY_DWAVE_MODEL._uniform_phase)
print(MY_DRIVER)
print(MY_DRIVER.params)

MY_DRIVER.mc_object.set_params({"temperature": 2.0 * 145.0})
MY_DRIVER.thermalize(20000)
#MY_DRIVER.mc_object.set_params({"temperature": 1.1 * 145.0})
#MY_DRIVER.thermalize(50)
MY_DRIVER.execute()
result = MY_DRIVER.result
data = result.observable_results["correlation_length"]
print(data["length_values"].size)
print(data["correlation_values"])
print(result)

x_data = np.sqrt(data["length_values"])
y_data = data["correlation_values"]
fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
ax.plot(x_data, y_data)
popt, pcov = curve_fit(func, x_data, y_data)
print(popt)
ax.plot(x_data, func(x_data, popt[0], popt[1], popt[2]))
print("corr length:", 1.0 / popt[1])

results = pickle.load(open("result_corr_dwave_new.txt", "rb"))
results = np.append(results, pickle.load(open("result_corr_dwave_new2.txt", "rb")))
#results = np.append(results, pickle.load(open("result_corr_dwave_4.txt", "rb")))
#results = np.append(results, pickle.load(open("result_corr_dwave_5.txt", "rb")))
#results = np.append(results, pickle.load(open("result_corr_dwave_6.txt", "rb")))
data = results[0].observable_results["correlation_length"]
l_values = []
for elem in data['correlation_values']:
    l_values.append(np.average(elem))
print(data["length_values"].size)
print(len(l_values))

datas = {}
temps = np.array([])
for elem in results:
    temps = np.append(temps, elem.bcs_params['temperature'])
temps = np.unique(temps)
for temp in temps:
    datas[temp] = np.array([elem for elem in results
                            if elem.bcs_params['temperature'] == temp])
print(temps)
print(datas[temps[0]].size)

x_datas = {}
y_datas = {}
for temp in temps:
    x_datas[temp] = np.sqrt(datas[temp][0].observable_results["correlation_length"]["length_values"])
    y_datas[temp] = np.zeros((x_datas[temp].size))
    for elem in datas[temp]:
        y_datas[temp] += elem.observable_results["correlation_length"]["correlation_values"]
    y_datas[temp] /= datas[temp].size
    #np.array([np.average(zob) for zob in elem.observable_results["correlation_length"]["correlation_values"]])

fig, ax = plt.subplots(figsize=(14, 12), dpi=100, frameon=False)
corr_lens = {}
for temp in temps:
    x_data = x_datas[temp]
    y_data = y_datas[temp]
    ax.plot(x_data, y_data, label=str(temp))
    popt, pcov = curve_fit(func, x_data, y_data)
    print("temp: ", temp, "params: ", popt, "length: ", 1.0 / popt[1])
    corr_lens[temp] = 1.0 / popt[1]
    ax.plot(x_data, func(x_data, popt[0], popt[1], popt[2]))
ax.legend()
plt.savefig("Notransition.pdf")

fig, ax = plt.subplots(figsize=(14, 12), dpi=100, frameon=False)
x_es = np.sort(np.array(list(corr_lens.keys())))
y_es = np.array([corr_lens[elem] for elem in x_es])
ax.plot(x_es, y_es)
ax.grid(True)
Questions:
* What is the temperature below which the correlation length should plateau?
* What should the value of that plateau be? Why would it be different from the size of the grid for low enough T?
a = np.array([1.125, 1.25, 1.5, 1.75, 2.0, 3.0, 4.0, 5.0])

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
for temp in temps:
    x_data = x_datas[temp]
    y_data = y_datas[temp]
    ax.plot(x_data, y_data, label=str(temp))
    popt, pcov = curve_fit(func, x_data, y_data)
    print("temp: ", temp, "params: ", popt, "length: ", 1.0 / popt[1])
    ax.plot(x_data, func(x_data, popt[0], popt[1], popt[2]))
ax.legend()
#print(results)
print("")

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
temp = 120.0
x_data = x_datas[temp]
y_data = y_datas[temp]
l_values = y_data
popt, pcov = curve_fit(func, x_data, y_data)
ax.plot(np.sqrt(data["length_values"]), np.log(l_values - popt[2]))
ax.plot(np.sqrt(data["length_values"]),
        np.log(popt[0]) - popt[1] * np.sqrt(data["length_values"]))

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_values = MY_DRIVER.mc_object.lattice.omega_mesh / T_CST
xmin = np.amin(x_values)
xmax = np.amax(x_values)
ax.set_xlim([xmin, xmax])
ax.set_xticks(np.linspace(int(xmin), int(xmax), 13, endpoint=True))
ax.plot(x_values, dos_values)

print(MY_DRIVER.result.observable_results)
print(MY_DRIVER.result.observable_list)
MC_PARAMS = {"observable_list": ["correlation_length", "DOS"]}
MY_DRIVER.set_params(MC_PARAMS)
MY_DRIVER.mc_object.temperature = 1.0 * 145.0
MY_DRIVER.thermalize(20000)
MY_DRIVER.execute()

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_axis_ampl = 4.5
xmin = -x_axis_ampl
xmax = x_axis_ampl
ax.set_xlim([xmin, xmax])
#ax.set_ylim([0.0, 1.0])
ax.set_xticks(np.linspace(int(xmin), int(xmax), 9, endpoint=True))
#ax.set_yticks(np.linspace(xmin, xmax, 5, endpoint=True))
dos_values = MY_DRIVER.result.observable_results["DOS"]
ax.plot(dos_values['omega_mesh'] / T_CST, dos_values['DOS_values'])

1500 / 450   # 300 K
temps = [15, 113, 150, 155, 300, 450, 600, 800, 1200]
names = ['./result_dwave_' + str(temp) + '.txt' for temp in temps]
input_files = [np.loadtxt(name) for name in names]
print(len(input_files[3]))

fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_axis_ampl = 6.0 * T_CST
nb_ticks = 1001
xvalues = np.linspace(-x_axis_ampl, x_axis_ampl, nb_ticks, endpoint=True)
xmin = -x_axis_ampl
xmax = x_axis_ampl
ax.set_xlim([xmin, xmax])
#ax.set_ylim([0.0, 2.0])
ax.set_xticks(np.linspace(int(xmin), int(xmax), 9, endpoint=True))
for i in range(len(input_files)):
    ax.plot(xvalues / T_CST, input_files[i], ls='-', label=str(temps[i]))
ax.legend(loc=2)
plt.savefig("DOS.pdf")

# 200 K
out = np.loadtxt('result_dwave_750.txt')
fig, ax = plt.subplots(figsize=(10, 8), dpi=100, frameon=False)
x_axis_ampl = 4.2
nb_ticks = 1001
xvalues = np.linspace(-x_axis_ampl, x_axis_ampl, nb_ticks, endpoint=True)
xmin = -x_axis_ampl
xmax = x_axis_ampl
ax.set_xlim([xmin, xmax])
#ax.set_ylim([0.0, 1.0])
ax.set_xticks(np.linspace(int(xmin), int(xmax), 9, endpoint=True))
ax.plot(xvalues / 0.25, out)
plt.savefig("DOS_dwave_T_KT.pdf")

150.0 * 1.75
print(cst.physical_constants["Boltzmann constant in eV/K"][0])

import pickle
data_new = pickle.load(open('result_dwave_alltemps.txt', 'rb'))
print(data)
print(data['length_values'])
Functions
def sample(df, n=1000, include_cats=[2, 3, 4, 5, 6, 7], random_state=1868):
    """Take a random sample of size `n` for categories in `include_cats`."""
    df = df.copy()
    subset = df[df.Category.isin(include_cats)]
    sample = subset.sample(n, random_state=random_state)
    return sample

def clean_text(df, col):
    """Keep only alphanumeric characters and replace all white space
    with a single space.
    """
    df = df.copy()
    porter_stemmer = PorterStemmer()
    return df[col].apply(lambda x: re.sub(';br&', ';&', x))\
                  .apply(lambda x: re.sub('&.+?;', '', x))\
                  .apply(lambda x: re.sub('[^A-Za-z0-9]+', ' ', x.lower()))\
                  .apply(lambda x: re.sub('\s+', ' ', x).strip())\
                  .apply(lambda x: ' '.join([porter_stemmer.stem(w) for w in x.split()]))

def count_pattern(df, col, pattern):
    """Count the occurrences of `pattern` in df[col]."""
    df = df.copy()
    return df[col].str.count(pattern)

def split_on_sentence(text):
    """Tokenize the text on sentences. Returns a list of strings (sentences)."""
    sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    return sent_tokenizer.tokenize(text)

def split_on_word(text):
    """Use a regular-expression tokenizer. Keep apostrophes.
    Returns a list of lists, one list for each sentence:
    [[word, word], [word, word, ..., word], ...].
    """
    if type(text) is list:
        return [regexp_tokenize(sentence, pattern="\w+(?:[-']\w+)*") for sentence in text]
    else:
        return regexp_tokenize(text, pattern="\w+(?:[-']\w+)*")

def features(df):
    """Create the features in the specified DataFrame."""
    stop_words = stopwords.words('english')
    df = df.copy()
    df['n_questionmarks'] = count_pattern(df, 'Text', '\?')
    df['n_periods'] = count_pattern(df, 'Text', '\.')
    df['n_apostrophes'] = count_pattern(df, 'Text', '\'')
    df['n_the'] = count_pattern(df, 'Text', 'the ')
    df['first_word'] = df.text_clean.apply(lambda x: split_on_word(x)[0])
    question_words = ['what', 'how', 'why', 'is']
    for w in question_words:
        col_wc = 'n_' + w
        col_fw = 'fw_' + w
        df[col_fw] = (df.first_word == w) * 1
    del df['first_word']
    df['n_words'] = df.text_clean.apply(lambda x: len(split_on_word(x)))
    df['n_stopwords'] = df.text_clean.apply(
        lambda x: len([w for w in split_on_word(x) if w not in stop_words]))
    df['n_first_person'] = df.text_clean.apply(
        lambda x: sum([w in person_first for w in x.split()]))
    df['n_second_person'] = df.text_clean.apply(
        lambda x: sum([w in person_second for w in x.split()]))
    df['n_third_person'] = df.text_clean.apply(
        lambda x: sum([w in person_third for w in x.split()]))
    return df

def flatten_words(list1d, get_unique=False):
    qa = [s.split() for s in list1d]
    if get_unique:
        return sorted(list(set([w for sent in qa for w in sent])))
    else:
        return [w for sent in qa for w in sent]

def tfidf_matrices(tr, te, col='text_clean'):
    """Return tf-idf matrices for both the training and test DataFrames.
    The matrices have the same number of columns, which represent unique
    words, but not the same number of rows, which represent samples.
    """
    tr = tr.copy()
    te = te.copy()
    text = tr[col].values.tolist() + te[col].values.tolist()
    vocab = flatten_words(text, get_unique=True)
    tfidf = TfidfVectorizer(stop_words='english', vocabulary=vocab)
    tr_matrix = tfidf.fit_transform(tr.text_clean)
    te_matrix = tfidf.fit_transform(te.text_clean)
    return tr_matrix, te_matrix

def concat_tfidf(df, matrix):
    df = df.copy()
    df = pd.concat([df, pd.DataFrame(matrix.todense())], axis=1)
    return df

def jitter(values, sd=0.25):
    """Jitter points for use in a scatterplot."""
    return [np.random.normal(v, sd) for v in values]

person_first = ['i', 'we', 'me', 'us', 'my', 'mine', 'our', 'ours']
person_second = ['you', 'your', 'yours']
person_third = ['he', 'she', 'it', 'him', 'her', 'his', 'hers', 'its']
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
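The cleaning chain in `clean_text` is a sequence of regular-expression substitutions followed by Porter stemming, and `count_pattern` is just `str.count` over a column. A stdlib-only sketch of the same substitution chain on a plain string (the function name is mine; the stemming step is omitted because it needs NLTK):

```python
import re

def clean_string(text):
    """Mirror the substitution chain in clean_text, minus the stemming step."""
    text = re.sub(r'&.+?;', '', text)                    # drop HTML entities like &amp;
    text = re.sub(r'[^A-Za-z0-9]+', ' ', text.lower())   # keep only alphanumerics
    return re.sub(r'\s+', ' ', text).strip()             # collapse whitespace

cleaned = clean_string("Why doesn't &amp; this work?!")

# pattern counting, as count_pattern does per row, on a plain string
n_questionmarks = "Why? Really?".count("?")
```

Note that the apostrophe is stripped by the alphanumeric filter, which is why `doesn't` becomes two tokens downstream.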
Data Load
training = pd.read_csv('../data/newtrain.csv') test = pd.read_csv('../data/newtest.csv')
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Clean
training['text_clean'] = clean_text(training, 'Text') test['text_clean'] = clean_text(test, 'Text')
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Features
training = features(training) test = features(test)
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Split the training data
train, dev = cross_validation.train_test_split(training, test_size=0.2, random_state=1868) train = train.append(sample(train, n=800)) train.reset_index(drop=True, inplace=True) dev.reset_index(drop=True, inplace=True)
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
tfidf
train_matrix, dev_matrix = tfidf_matrices(train, dev)
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Combine
train = concat_tfidf(train, train_matrix) dev = concat_tfidf(dev, dev_matrix)
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Training
svm = LinearSVC(dual=False, max_iter=5000) features = train.columns[3:] X = train[features].values y = train['Category'].values features_dev = dev[features].values
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Testing on dev
svm.fit(X, y) dev_predicted = svm.predict(features_dev) accuracy_score(dev.Category, dev_predicted) plt.figure(figsize=(6, 5)) plt.scatter(jitter(dev.Category, 0.15), jitter(dev_predicted, 0.15), color='#348ABD', alpha=0.25) plt.title('Support Vector Classifier\n') plt.xlabel('Ground Truth') plt.ylabel('Predicted')
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Test Data
training = training.append(sample(training, n=1200)) training.reset_index(drop=True, inplace=True) training_matrix, test_matrix = tfidf_matrices(training, test) training = concat_tfidf(training, training_matrix) test = concat_tfidf(test, test_matrix) features = training.columns[3:] X = training[features].values y = training['Category'].values features_test = test[features].values svm.fit(X, y) test_predicted = svm.predict(features_test) test['Category'] = test_predicted output = test[['Id', 'Category']]
notebooks/kaggle-writeup.ipynb
juanshishido/text-classification
mit
Practice Problem 2.14 Transform the Y network of Figure 2.51 into a delta network.
print("Problema Prático 2.14") def y_delta(R1,R2,R3): Ra = (R1*R2 + R2*R3 + R3*R1)/R1 Rb = (R1*R2 + R2*R3 + R3*R1)/R2 Rc = (R1*R2 + R2*R3 + R3*R1)/R3 return Ra,Rb,Rc ra,rb,rc = y_delta(10,20,40) print("Ra:",ra,"ohms") print("Rb:",rb,"ohms") print("Rc:",rc,"ohms")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
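The later cells call a `delta_y` function that is defined earlier in the notebook, outside this excerpt. A sketch of the standard Δ→Y transformation, the inverse of the `y_delta` above — the formulas are the textbook ones, but the exact variable names here are my assumption:

```python
def delta_y(Ra, Rb, Rc):
    """Delta -> Y: each Y resistor is the product of the two adjacent
    delta resistors divided by the sum of all three."""
    s = Ra + Rb + Rc
    return Rb * Rc / s, Rc * Ra / s, Ra * Rb / s

def y_delta(R1, R2, R3):
    """Y -> delta, as in the cell above."""
    p = R1 * R2 + R2 * R3 + R3 * R1
    return p / R1, p / R2, p / R3

# round trip: Y -> delta -> Y recovers the original resistors
ra, rb, rc = y_delta(10, 20, 40)
r1, r2, r3 = delta_y(ra, rb, rc)
```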
Example 2.15 Obtain the equivalent resistance Rab for the circuit of Figure 2.52 and use it to find the current i.
print("Exemplo 2.15") #analisando a triade 5ohms (rc) 15ohms (rb) e 20ohms (ra) como associacao em delta r1,r2,r3 = delta_y(20,15,5) Req1 = 12.5+r1 #em serie com resistor 12,5 ohm Req2 = 10+r2 #em serie com resistor 10 ohm Req3 = Req1*Req2/(Req1 + Req2) #paralelo entre Req1 e Req2 Req4 = Req3 + r3 Reqf = 30*Req4/(Req4 + 30) V = 120 i = V/Reqf print("Corrente resultante:",i,"A")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Practice Problem 2.15 For the bridge circuit of Figure 2.54, determine Rab and i.
print("Problema Prático 2.15") r1,r2,r3 = delta_y(50,30,20) Req1 = 24 + r1 Req2 = 10 + r2 Req3 = Req1*Req2/(Req1 + Req2) Req4 = Req3 + r3 Reqf = Req4 + 13 V = 240 i = V/Reqf print("Rab:",Reqf,"ohm") print("Corrente resultante:",i,"A")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Example 2.16 Three light bulbs are connected to a 9 V source, as shown in Figure 2.56a. Calculate: (a) the total current supplied by the source; (b) the current through each bulb; (c) the resistance of each bulb.
print("Exemplo 2.16") V = 9 p2 = 15 #potencia da primeira lampada p3 = 10 p1 = 20 pt = p1 + p2 + p3 i = pt/V i1 = p1/V i2 = i - i1 r1 = V/i1 r2 = p2/(i2**2) # da equacao p = r*(i^2) => r = p/(i^2) r3 = p3/(i2**2) print("Corrente total:",i,"A") print("Corrente na lampada 20W:",i1,"A") print("Corrente nas lampadas 15W e 10W:",i2,"A") print("Resistencia lampada 15W:",r2,"ohm") print("Resistencia lampada 10W:",r3,"ohm") print("Resistencia lampada 20W:",r1,"ohm")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Practice Problem 2.16 Refer to Figure 2.55 and consider ten light bulbs that can be connected in parallel and ten that can be connected in series, each with a power rating of 40 W. If the supply voltage is 110 V for both the series and the parallel connections, calculate the current through each bulb in both cases.
V = 110 p = 40 n = 10 pt = p*n #potencia serie i = pt/V print("Corrente em cada lampada em serie:",i,"A") i = p/V print("Corrente em cada lampada em paralelo:",i,"A")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Nodal Analysis Nodal analysis is also known as the node-voltage method. Steps to determine the node voltages: 1. Select a node as the reference node. 2. Assign voltages v1, v2, ..., vn–1 to the remaining n – 1 nodes, measured with respect to the reference node. 3. Apply KCL to each of the n – 1 non-reference nodes, using Ohm's law to express the branch currents in terms of node voltages. 4. Solve the resulting simultaneous equations to obtain the unknown node voltages. The number of non-reference nodes equals the number of independent equations we will derive. In a resistor, current flows from a higher potential to a lower potential. \begin{align} {\Large i = \frac{v_{higher} - v_{lower}}{R}} \end{align} Example 3.1 Calculate the node voltages in the circuit shown in Figure 3.3a.
print("Exemplo 3.1") import numpy as np #i2 + i3 = 5 #i2 + 10 = i5 + 5 => i5 = i2 + 5 #i2 = (v1-v2)/4 #i3 = v1/2 #i5 = v2/6 #(v1-v2)/4 + v1/2 = 5 => 3v1 - v2 = 20 # v2/6 = (v1-v2)/4 + 5 => 2v2 = 3v1 - 3v2 + 60 => 3v1 - 5v2 = -60 coef = np.matrix('3 -1; 3 -5') res = np.matrix('20;-60') V = np.linalg.inv(np.transpose(coef)*coef)*np.transpose(coef)*res #algebra linear B = (X'X)^(-1) * X'Y print("V1:",float(V[0]),"V") print("V2:",float(V[1]),"V")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
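The cell above solves the 2×2 nodal system through a least-squares identity; for a square, invertible system the same answer comes from Cramer's rule with plain arithmetic. A stdlib-only check of the result (the system is 3v1 − v2 = 20 and 3v1 − 5v2 = −60, as derived in the comments):

```python
# coefficients of: 3*v1 - 1*v2 = 20 ; 3*v1 - 5*v2 = -60
a, b, e = 3.0, -1.0, 20.0
c, d, f = 3.0, -5.0, -60.0

det = a * d - b * c            # determinant of the 2x2 coefficient matrix
v1 = (e * d - b * f) / det     # Cramer's rule, column 1 replaced by RHS
v2 = (a * f - e * c) / det     # Cramer's rule, column 2 replaced by RHS
```

This gives v1 = 40/3 ≈ 13.33 V and v2 = 20 V, matching the matrix solution above.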
Practice Problem 3.1 Obtain the node voltages in the circuit of Figure 3.4.
print("Problema Prático 3.1") #i1 + i3 = 3 #i3 = i2 + 12 #i1 = v1/2 #i2 = v2/7 #i3 = (v1-v2)/6 #v1/2 + (v1-v2)/6 = 3 => 4v1 - v2 = 18 #(v1-v2)/6 = v2/7 + 12 => 7v1 - 7v2 = 6v2 + 504 => 7v1 - 13v2 = 504 coef = np.matrix('4 -1; 7 -13') res = np.matrix('18; 504') V = np.linalg.inv(np.transpose(coef)*coef)*np.transpose(coef)*res print("Tensão v1:",float(V[0]),"V") print("Tensão v2:",float(V[1]),"V")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Example 3.2 Determine the voltages in Figure 3.5a.
print("Exemplo 3.2") #i1 + ix = 3 #ix = i2 + i3 #i1 + i2 = 2*ix #i1 = (v1-v3)/4 #i2 = (v2-v3)/8 #i3 = v2/4 #ix = (v1-v2)/2 #(v1-v3)/4 + (v1-v2)/2 = 3 => 9v1 - 6v2 - 3v3 = 36 #(v1-v2)/2 = (v2-v3)/8 + v2/4 => 4v1 - 7v2 + v3 = 0 #(v1-v3)/4 + (v2-v3)/8 = v1 - v2 => -6v1 + 9v2 - 3v3 = 0 coef = np.matrix('9 -6 -3;4 -7 1;-6 9 -3') res = np.matrix('36;0;0') V = np.linalg.inv(coef)*res print("Tensão V1:",float(V[0]),"V") print("Tensão V2:",float(V[1]),"V") print("Tensão V3:",float(V[2]),"V")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Practice Problem 3.2 Determine the voltages at the first three non-reference nodes in the circuit of Figure 3.6.
print("Problema Prático 3.2") #i2 = i1 + 4 #ix = i2 + 4*ix #i1 + i3 + 4*ix = 0 #i1 = (v3-v1)/2 #i2 = (v1-v2)/3 #i3 = v3/6 #ix = v2/4 #(v1-v2)/3 = (v3-v1)/2 + 4 => 10v1 - 4v2 -6v3 = 48 #v2/4 = (v1-v2)/3 + v2 => 4v1 + 5v2 = 0 #(v3-v1)/2 + v3/6 + v2 = 0 => -3v1 + 6v2 + 4v3= 0 coef = np.matrix("10 -4 -6;4 5 0;-3 6 4") res = np.matrix("48;0;0") V = np.linalg.inv(coef)*res print("Tensão V1:",float(V[0]),"V") print("Tensão V2:",float(V[1]),"V") print("Tensão V3:",float(V[2]),"V")
Aula 3 - Métodos de Análise - Nodal.ipynb
GSimas/EEL7045
mit
Using the spin chain model First, we simulate the quantum circuit using the Hamiltonian model LinearSpinChain. The control Hamiltonians are defined in SpinChainModel.
processor = LinearSpinChain(3) processor.load_circuit(qc);
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
To quickly visualize the pulse, Processor has a method called plot_pulses. In the figure below, each colour represents the pulse sequence of one control Hamiltonian in the system as a function of time. In each time interval, the pulse remains constant.
processor.plot_pulses(title="Control pulse of Spin chain", figsize=(8, 4), dpi=100);
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
Because interaction in the spin chain model exists only between neighbouring qubits, SWAP gates are added before and after the first CNOT gate to swap the first two qubits. Each SWAP gate is decomposed into three iSWAP gates, while the CNOT is decomposed into two iSWAP gates plus additional single-qubit corrections. Both the Hadamard gate and the two-qubit gates need to be decomposed into native gates (iSWAP and rotations around the $x$ and $z$ axes). The compiled coefficients are square pulses, and the control coefficients on $\sigma_z$ and $\sigma_x$ differ, resulting in different gate times. Without decoherence
basis00 = basis([2,2], [0,0]) psi0 = basis([2,2,2], [0,0,0]) result = processor.run_state(init_state=psi0) print("Probability of measuring state 00:") print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
With decoherence
processor.t1 = 100 processor.t2 = 30 psi0 = basis([2,2,2], [0,0,0]) result = processor.run_state(init_state=psi0) print("Probability of measuring state 00:") print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
Using the optimal control module This feature is integrated into the sub-class OptPulseProcessor, which uses methods from the optimal control module to find the optimal pulse sequence for the desired gates. It can find the optimal pulse either for the whole unitary evolution or for each gate. Here we choose the second option.
setting_args = {"SNOT": {"num_tslots": 6, "evo_time": 2}, "X": {"num_tslots": 1, "evo_time": 0.5}, "CNOT": {"num_tslots": 12, "evo_time": 5}} processor = OptPulseProcessor( # Use the control Hamiltonians of the spin chain model. num_qubits=3, model=SpinChainModel(3, setup="linear")) processor.load_circuit( # Provide parameters for the algorithm qc, setting_args=setting_args, merge_gates=False, verbose=True, amp_ubound=5, amp_lbound=0); processor.plot_pulses(title="Control pulse of OptPulseProcessor", figsize=(8, 4), dpi=100);
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
For the optimal control model, we use the GRAPE algorithm, where control pulses are piecewise-constant functions. We provide the algorithm with the same control Hamiltonian model used for the spin chain model. In the compiled optimal signals, all controls are active (non-zero pulse amplitude) during most of the execution time. We note that for identical gates on different qubits (e.g., Hadamard), each optimized pulse is different, demonstrating that the optimized solution is not unique; further constraints could be applied, such as adaptations to the specific hardware. Without decoherence
basis00 = basis([2,2], [0,0]) psi0 = basis([2,2,2], [0,0,0]) result = processor.run_state(init_state=psi0) print("Probability of measuring state 00:") print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
We can see that under noisy evolution there is a non-zero probability of measuring state 00. Using the superconducting qubits model Below, we simulate the same quantum circuit using the sub-class SCQubits. It finds the pulses based on the Hamiltonian available on a superconducting-qubit quantum computer. Please refer to the notebook of the spin chain model for more details.
processor = SCQubits(num_qubits=3) processor.load_circuit(qc); processor.plot_pulses(title="Control pulse of SCQubits", figsize=(8, 4), dpi=100);
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
For the superconducting-qubit processor, the compiled pulses have a Gaussian shape. This is crucial for superconducting qubits because the second excited level is only slightly detuned from the qubit transition energy; a smooth pulse usually prevents leakage into the non-computational subspace. As with the spin chain, SWAP gates are added to swap the zeroth and first qubits, and each SWAP gate is compiled into three CNOT gates. The control $ZX^{21}$ is not used because there is no CNOT gate controlled by the second qubit acting on the first one. Without decoherence
basis00 = basis([3, 3], [0, 0]) psi0 = basis([3, 3, 3], [0, 0, 0]) result = processor.run_state(init_state=psi0) print("Probability of measuring state 00:") print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0]))
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
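The claim that one SWAP gate compiles to three CNOT gates can be checked directly on the two-qubit matrices: CNOT(0→1) · CNOT(1→0) · CNOT(0→1) equals SWAP. A stdlib-only check, with the basis ordered |00⟩, |01⟩, |10⟩, |11⟩ (helper names are mine):

```python
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

def perm(pairs):
    """Permutation matrix that swaps the given basis-index pairs."""
    M = [row[:] for row in I4]
    for i, j in pairs:
        M[i], M[j] = M[j], M[i]
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# qubit 0 is the left bit of |q0 q1>
cnot_01 = perm([(2, 3)])   # control qubit 0, target qubit 1: |10> <-> |11>
cnot_10 = perm([(1, 3)])   # control qubit 1, target qubit 0: |01> <-> |11>
swap = perm([(1, 2)])      # |01> <-> |10>

product = matmul(cnot_01, matmul(cnot_10, cnot_01))
```

The three-CNOT product reproduces the SWAP permutation exactly.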
With decoherence
processor.t1 = 50.e3 processor.t2 = 20.e3 psi0 = basis([3, 3, 3], [0, 0, 0]) result = processor.run_state(init_state=psi0) print("Probability of measuring state 00:") print(np.real((basis00.dag() * ptrace(result.states[-1], [0,1]) * basis00)[0,0])) import qutip_qip print("qutip-qip version:", qutip_qip.version.version) from qutip.ipynbtools import version_table version_table()
examples/qip-processor-DJ-algorithm.ipynb
qutip/qutip-notebooks
lgpl-3.0
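Setting t1 and t2 on the processor attaches relaxation and dephasing noise to the simulation. As a rough stdlib sketch of what those two numbers mean physically: the excited-state population decays as exp(−t/T1) and the coherence as exp(−t/T2), with the physicality constraint T2 ≤ 2·T1 (the numbers below are the SCQubits values used above; the function names are mine):

```python
import math

t1, t2 = 50.0e3, 20.0e3   # relaxation and dephasing times used above

def excited_population(t, p0=1.0):
    """Amplitude damping: population decays with time constant T1."""
    return p0 * math.exp(-t / t1)

def coherence(t, c0=1.0):
    """Off-diagonal density-matrix element decays with time constant T2."""
    return c0 * math.exp(-t / t2)

assert t2 <= 2 * t1   # required of any physical T1/T2 pair
```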
EBM with surface and atm layers
ebm = climlab.GreyRadiationModel(num_lev=1, num_lat=90) insolation = climlab.radiation.AnnualMeanInsolation(domains=ebm.Ts.domain) ebm.add_subprocess('insolation', insolation) ebm.subprocess.SW.flux_from_space = ebm.subprocess.insolation.insolation print(ebm) # add a fixed relative humidity process # (will only affect surface evaporation) h2o = climlab.radiation.ManabeWaterVapor(state=ebm.state, **ebm.param) ebm.add_subprocess('H2O', h2o) # Add surface heat fluxes shf = climlab.surface.SensibleHeatFlux(state=ebm.state, Cd=3E-4) lhf = climlab.surface.LatentHeatFlux(state=ebm.state, Cd=3E-4) # couple water vapor to latent heat flux process lhf.q = h2o.q ebm.add_subprocess('SHF', shf) ebm.add_subprocess('LHF', lhf) ebm.integrate_years(1) plt.plot(ebm.lat, ebm.Ts) plt.plot(ebm.lat, ebm.Tatm) co2ebm = climlab.process_like(ebm) co2ebm.subprocess['LW'].absorptivity = ebm.subprocess['LW'].absorptivity*1.1 co2ebm.integrate_years(3.) # no heat transport but with evaporation -- no polar amplification plt.plot(ebm.lat, co2ebm.Ts - ebm.Ts) plt.plot(ebm.lat, co2ebm.Tatm - ebm.Tatm)
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Now with meridional heat transport
diffebm = climlab.process_like(ebm)
# thermal diffusivity in W/m**2/degC
D = 0.6
# meridional diffusivity in m**2/s
K = D / diffebm.Tatm.domain.heat_capacity * const.a**2
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffebm.Tatm}, **diffebm.param)
diffebm.add_subprocess('diffusion', d)
print(diffebm)
diffebm.integrate_years(3)
plt.plot(diffebm.lat, diffebm.Ts)
plt.plot(diffebm.lat, diffebm.Tatm)

def inferred_heat_transport( energy_in, lat_deg ):
    '''Returns the inferred heat transport (in PW) by integrating the net energy imbalance from pole to pole.'''
    from scipy import integrate
    from climlab import constants as const
    lat_rad = np.deg2rad( lat_deg )
    # np.pi (np.math.pi was removed in NumPy 1.25) and cumulative_trapezoid
    # (cumtrapz was removed in SciPy 1.14)
    return ( 1E-15 * 2 * np.pi * const.a**2 *
             integrate.cumulative_trapezoid( np.cos(lat_rad)*energy_in, x=lat_rad, initial=0. ) )

# Plot the northward heat transport in this model
Rtoa = np.squeeze(diffebm.timeave['ASR'] - diffebm.timeave['OLR'])
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa, diffebm.lat))
## Now warm it up!
co2diffebm = climlab.process_like(diffebm)
co2diffebm.subprocess['LW'].absorptivity = diffebm.subprocess['LW'].absorptivity*1.1
co2diffebm.integrate_years(5)
# with heat transport and evaporation
# Get some modest polar amplification of surface warming
# but larger equatorial amplification of atmospheric warming
# Increased atmospheric gradient = increased poleward flux.
plt.plot(diffebm.lat, co2diffebm.Ts - diffebm.Ts, label='Ts')
plt.plot(diffebm.lat, co2diffebm.Tatm - diffebm.Tatm, label='Tatm')
plt.legend()
Rtoa = np.squeeze(diffebm.timeave['ASR'] - diffebm.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffebm.timeave['ASR'] - co2diffebm.timeave['OLR'])
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa, diffebm.lat), label='1xCO2')
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa_co2, diffebm.lat), label='2xCO2')
plt.legend()
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
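`inferred_heat_transport` wraps a cumulative trapezoid integral of the cos-weighted top-of-atmosphere imbalance. A stdlib-only sketch of the same integral, assuming a uniform 1 W/m² imbalance and Earth radius a ≈ 6.373 × 10⁶ m (approximately climlab's `const.a`); on a coarse grid the trapezoid result should land near the analytic value 10⁻¹⁵ · 2π · a² · 2 ≈ 0.51 PW:

```python
import math

a = 6.373e6  # Earth radius in metres (assumed value)

def inferred_heat_transport(energy_in, lat_deg):
    """Cumulative trapezoid of 1e-15 * 2*pi*a^2 * cos(lat) * energy_in, in PW."""
    lat = [math.radians(d) for d in lat_deg]
    f = [math.cos(phi) * e for phi, e in zip(lat, energy_in)]
    out = [0.0]
    for i in range(1, len(lat)):
        out.append(out[-1] + 0.5 * (f[i] + f[i - 1]) * (lat[i] - lat[i - 1]))
    return [1e-15 * 2 * math.pi * a ** 2 * v for v in out]

lats = list(range(-90, 91, 5))   # 5-degree grid, pole to pole
transport = inferred_heat_transport([1.0] * len(lats), lats)
```

The transport starts at zero at the south pole and accumulates northward, exactly as in the model diagnostics above.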
Same thing but with NO EVAPORATION
diffebm2 = climlab.process_like(diffebm) diffebm2.remove_subprocess('LHF') diffebm2.integrate_years(3) co2diffebm2 = climlab.process_like(co2diffebm) co2diffebm2.remove_subprocess('LHF') co2diffebm2.integrate_years(3) # With transport and no evaporation... # No polar amplification, either of surface or air temperature! plt.plot(diffebm2.lat, co2diffebm2.Ts - diffebm2.Ts, label='Ts') plt.plot(diffebm2.lat, co2diffebm2.Tatm[:,0] - diffebm2.Tatm[:,0], label='Tatm') plt.legend() plt.figure() # And in this case, the lack of polar amplification is DESPITE an increase in the poleward heat transport. Rtoa = np.squeeze(diffebm2.timeave['ASR'] - diffebm2.timeave['OLR']) Rtoa_co2 = np.squeeze(co2diffebm2.timeave['ASR'] - co2diffebm2.timeave['OLR']) plt.plot(diffebm2.lat, inferred_heat_transport(Rtoa, diffebm2.lat), label='1xCO2') plt.plot(diffebm2.lat, inferred_heat_transport(Rtoa_co2, diffebm2.lat), label='2xCO2') plt.legend()
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
A column model approach
model = climlab.GreyRadiationModel(num_lev=30, num_lat=90, abs_coeff=1.6E-4) insolation = climlab.radiation.AnnualMeanInsolation(domains=model.Ts.domain) model.add_subprocess('insolation', insolation) model.subprocess.SW.flux_from_space = model.subprocess.insolation.insolation print(model) # Convective adjustment for atmosphere only conv = climlab.convection.ConvectiveAdjustment(state={'Tatm':model.Tatm}, adj_lapse_rate=6.5, **model.param) model.add_subprocess('convective adjustment', conv) # add a fixed relative humidity process # (will only affect surface evaporation) h2o = climlab.radiation.water_vapor.ManabeWaterVapor(state=model.state, **model.param) model.add_subprocess('H2O', h2o) # Add surface heat fluxes shf = climlab.surface.SensibleHeatFlux(state=model.state, Cd=1E-3) lhf = climlab.surface.LatentHeatFlux(state=model.state, Cd=1E-3) lhf.q = model.subprocess.H2O.q model.add_subprocess('SHF', shf) model.add_subprocess('LHF', lhf) model.integrate_years(3.) def plot_temp_section(model, timeave=True): fig = plt.figure() ax = fig.add_subplot(111) if timeave: field = model.timeave['Tatm'].transpose() else: field = model.Tatm.transpose() cax = ax.contourf(model.lat, model.lev, field) ax.invert_yaxis() ax.set_xlim(-90,90) ax.set_xticks([-90, -60, -30, 0, 30, 60, 90]) fig.colorbar(cax) plot_temp_section(model, timeave=False) co2model = climlab.process_like(model) co2model.subprocess['LW'].absorptivity = model.subprocess['LW'].absorptivity*1.1 co2model.integrate_years(3) plot_temp_section(co2model, timeave=False) # Without transport, get equatorial amplification plt.plot(model.lat, co2model.Ts - model.Ts, label='Ts') plt.plot(model.lat, co2model.Tatm[:,0] - model.Tatm[:,0], label='Tatm') plt.legend()
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Now with meridional heat transport!
diffmodel = climlab.process_like(model) # thermal diffusivity in W/m**2/degC D = 0.05 # meridional diffusivity in m**2/s K = D / diffmodel.Tatm.domain.heat_capacity[0] * const.a**2 print(K) d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm':diffmodel.Tatm}, **diffmodel.param) diffmodel.add_subprocess('diffusion', d) print(diffmodel) diffmodel.integrate_years(3) plot_temp_section(diffmodel) # Plot the northward heat transport in this model Rtoa = np.squeeze(diffmodel.timeave['ASR'] - diffmodel.timeave['OLR']) plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa, diffmodel.lat)) ## Now warm it up! co2diffmodel = climlab.process_like(diffmodel) co2diffmodel.subprocess['LW'].absorptivity = diffmodel.subprocess['LW'].absorptivity*1.1 co2diffmodel.integrate_years(3) # With transport, get polar amplification... # of surface temperature, but not of air temperature! plt.plot(diffmodel.lat, co2diffmodel.Ts - diffmodel.Ts, label='Ts') plt.plot(diffmodel.lat, co2diffmodel.Tatm[:,0] - diffmodel.Tatm[:,0], label='Tatm') plt.legend() Rtoa = np.squeeze(diffmodel.timeave['ASR'] - diffmodel.timeave['OLR']) Rtoa_co2 = np.squeeze(co2diffmodel.timeave['ASR'] - co2diffmodel.timeave['OLR']) plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa, diffmodel.lat), label='1xCO2') plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa_co2, diffmodel.lat), label='2xCO2')
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Same thing but with NO EVAPORATION
diffmodel2 = climlab.process_like(diffmodel) diffmodel2.remove_subprocess('LHF') print(diffmodel2) diffmodel2.integrate_years(3) co2diffmodel2 = climlab.process_like(co2diffmodel) co2diffmodel2.remove_subprocess('LHF') co2diffmodel2.integrate_years(3) # With transport and no evaporation... # No polar amplification, either of surface or air temperature! plt.plot(diffmodel2.lat, co2diffmodel2.Ts - diffmodel2.Ts, label='Ts') plt.plot(diffmodel2.lat, co2diffmodel2.Tatm[:,0] - diffmodel2.Tatm[:,0], label='Tatm') plt.legend() Rtoa = np.squeeze(diffmodel2.timeave['ASR'] - diffmodel2.timeave['OLR']) Rtoa_co2 = np.squeeze(co2diffmodel2.timeave['ASR'] - co2diffmodel2.timeave['OLR']) plt.plot(diffmodel2.lat, inferred_heat_transport(Rtoa, diffmodel2.lat), label='1xCO2') plt.plot(diffmodel2.lat, inferred_heat_transport(Rtoa_co2, diffmodel2.lat), label='2xCO2')
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Warming effect of a DECREASE IN EVAPORATION EFFICIENCY Take a column model that includes evaporation and heat transport, and reduce the drag coefficient by a factor of 2. How does the surface temperature change?
diffmodel3 = climlab.process_like(diffmodel) diffmodel3.subprocess['LHF'].Cd *= 0.5 diffmodel3.integrate_years(5.) # Reduced evaporation gives equatorially enhanced warming of surface # and cooling of near-surface air temperature plt.plot(diffmodel.lat, diffmodel3.Ts - diffmodel.Ts, label='Ts') plt.plot(diffmodel.lat, diffmodel3.Tatm[:,0] - diffmodel.Tatm[:,0], label='Tatm') plt.legend()
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Same calculation in a two-layer EBM
diffebm3 = climlab.process_like(diffebm) diffebm3.subprocess['LHF'].Cd *= 0.5 diffebm3.integrate_years(5.) # Reduced evaporation gives equatorially enhanced warming of surface # and cooling of near-surface air temperature plt.plot(diffebm.lat, diffebm3.Ts - diffebm.Ts, label='Ts') plt.plot(diffebm.lat, diffebm3.Tatm[:,0] - diffebm.Tatm[:,0], label='Tatm')
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Pretty much the same result. Some stuff with Band models
# Put in some ozone import xarray as xr ozonepath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/apeozone_cam3_5_54.nc" ozone = xr.open_dataset(ozonepath) # Dimensions of the ozone file lat = ozone.lat lon = ozone.lon lev = ozone.lev # Taking annual, zonal average of the ozone data O3_zon = ozone.OZONE.mean(dim=("time","lon")) # make a model on the same grid as the ozone model1 = climlab.BandRCModel(lev=lev, lat=lat) insolation = climlab.radiation.AnnualMeanInsolation(domains=model1.Ts.domain) model1.add_subprocess('insolation', insolation) model1.subprocess.SW.flux_from_space = model1.subprocess.insolation.insolation print(model1) # Set the ozone mixing ratio O3_trans = O3_zon.transpose() # Put in the ozone model1.absorber_vmr['O3'] = O3_trans model1.param # Convective adjustment for atmosphere only model1.remove_subprocess('convective adjustment') conv = climlab.convection.ConvectiveAdjustment(state={'Tatm':model1.Tatm}, **model1.param) model1.add_subprocess('convective adjustment', conv) # Add surface heat fluxes shf = climlab.surface.SensibleHeatFlux(state=model1.state, Cd=0.5E-3) lhf = climlab.surface.LatentHeatFlux(state=model1.state, Cd=0.5E-3) # set the water vapor input field for LHF process lhf.q = model1.q model1.add_subprocess('SHF', shf) model1.add_subprocess('LHF', lhf) model1.step_forward() model1.integrate_years(1.) model1.integrate_years(1.) plot_temp_section(model1, timeave=False) co2model1 = climlab.process_like(model1) co2model1.absorber_vmr['CO2'] *= 2 co2model1.integrate_years(3.) plot_temp_section(co2model1, timeave=False)
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Model gets very very hot near equator. Very large equator-to-pole gradient. Band model with heat transport and evaporation
diffmodel1 = climlab.process_like(model1) # thermal diffusivity in W/m**2/degC D = 0.01 # meridional diffusivity in m**2/s K = D / diffmodel1.Tatm.domain.heat_capacity[0] * const.a**2 d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffmodel1.Tatm}, **diffmodel1.param) diffmodel1.add_subprocess('diffusion', d) diffmodel1.absorber_vmr['CO2'] *= 4. print(diffmodel1) diffmodel1.integrate_years(3.) plot_temp_section(diffmodel1, timeave=False) Rtoa = np.squeeze(diffmodel1.timeave['ASR'] - diffmodel1.timeave['OLR']) plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa, diffmodel1.lat)) plt.plot(diffmodel1.lat, diffmodel1.Ts-273.15) # Now double CO2 co2diffmodel1 = climlab.process_like(diffmodel1) co2diffmodel1.absorber_vmr['CO2'] *= 2. co2diffmodel1.integrate_years(5) # No polar amplification in this model! plt.plot(diffmodel1.lat, co2diffmodel1.Ts - diffmodel1.Ts, label='Ts') plt.plot(diffmodel1.lat, co2diffmodel1.Tatm[:,0] - diffmodel1.Tatm[:,0], label='Tatm') plt.legend() plt.figure() Rtoa = np.squeeze(diffmodel1.timeave['ASR'] - diffmodel1.timeave['OLR']) Rtoa_co2 = np.squeeze(co2diffmodel1.timeave['ASR'] - co2diffmodel1.timeave['OLR']) plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa, diffmodel1.lat), label='1xCO2') plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa_co2, diffmodel1.lat), label='2xCO2') plt.legend()
docs/source/courseware/PolarAmplification.ipynb
cjcardinale/climlab
mit
Exercises Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3. Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
# Solution goes here # Solution goes here # Solution goes here # Solution goes here
code/chap12ex.ipynb
smorton2/think-stats
gpl-3.0
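Without giving away the full solution, the quadratic fit the exercise asks for amounts to regressing prices on [1, t, t²]. The book's RunLinearModel uses statsmodels; the sketch below just shows the underlying algebra with a stdlib-only normal-equations solve, on made-up, exactly quadratic data:

```python
def fit_quadratic(ts, ys):
    """Least-squares fit of y = b0 + b1*t + b2*t^2 via the normal equations."""
    X = [[1.0, t, t * t] for t in ts]
    # form X'X and X'y
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
         for i in range(3)]
    b = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    # back substitution
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coeffs

# exactly quadratic data should be recovered: y = 2 + 0.5*t - 0.1*t^2
ts = list(range(10))
ys = [2 + 0.5 * t - 0.1 * t * t for t in ts]
b0, b1, b2 = fit_quadratic(ts, ys)
```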
Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation. Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.
# Solution goes here # Solution goes here # Solution goes here # Solution goes here
code/chap12ex.ipynb
smorton2/think-stats
gpl-3.0
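The serial correlation the exercise asks about is just the Pearson correlation of a series with a lagged copy of itself, and the p-value comes from permutation. A stdlib-only sketch of the test statistic (the book's HypothesisTest scaffolding is omitted; the function names are mine):

```python
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def serial_corr(series, lag=1):
    return pearson(series[:-lag], series[lag:])

def p_value(series, lag=1, iters=1000, seed=17):
    """Permutation p-value: how often does a shuffled series show at
    least as much serial correlation as the observed one?"""
    rng = random.Random(seed)
    observed = abs(serial_corr(series, lag))
    count = 0
    for _ in range(iters):
        shuffled = series[:]
        rng.shuffle(shuffled)
        if abs(serial_corr(shuffled, lag)) >= observed:
            count += 1
    return count / iters

trend = [0.1 * t for t in range(50)]   # strongly autocorrelated series
rho = serial_corr(trend)
```

A pure trend has lag-1 correlation of essentially 1, and shuffling destroys it, so the permutation p-value is near zero.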
Scikit-Learn Model Deployment Use Case In this case, we build a very large ensemble model (here, a Random Forest with 512 trees) on a digits dataset (not very original !!!) and generate SQL code for deployment using the web service. We then execute the SQL code on a local database (PostgreSQL) and compare the SQL execution result with the scikit-learn predict/predict_proba/predict_log_proba results. Both results are stored in pandas dataframes. Build a scikit-learn model
from sklearn import datasets digits = datasets.load_digits() X = digits.data n_classes = len(digits.target_names) from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=512, max_depth=7, min_samples_leaf=30, random_state = 1960) clf.fit(digits.data, digits.target) #clf.__dict__
docs/WebService-RandomForest_512_Deploy.ipynb
antoinecarme/sklearn2sql_heroku
bsd-3-clause
Generate SQL Code from the Model
def test_ws_sql_gen(pickle_data): WS_URL="http://localhost:1888/model" b64_data = base64.b64encode(pickle_data).decode('utf-8') data={"Name":"model1", "PickleData":b64_data , "SQLDialect":"postgresql"} r = requests.post(WS_URL, json=data) #print(r.__dict__) content = r.json() # print(content) lSQL = content["model"]["SQLGenrationResult"][0]["SQL"] return lSQL; pickle_data = pickle.dumps(clf) lSQL = test_ws_sql_gen(pickle_data) N = len(lSQL) P = 4000 print(lSQL[0:P] + "..." + lSQL[N//2:(N//2 + P)] + "..." + lSQL[-P:])
docs/WebService-RandomForest_512_Deploy.ipynb
antoinecarme/sklearn2sql_heroku
bsd-3-clause
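The web-service payload carries the model as a base64-encoded pickle string, which survives JSON serialization. A stdlib roundtrip sketch of that encoding step (no HTTP call here, and a plain dict stands in for the fitted model):

```python
import base64
import json
import pickle

model_like = {"n_estimators": 512, "max_depth": 7}   # stand-in for the model
pickle_data = pickle.dumps(model_like)

# encode for the JSON payload, as test_ws_sql_gen does
b64_data = base64.b64encode(pickle_data).decode("utf-8")
payload = json.dumps({"Name": "model1", "PickleData": b64_data,
                      "SQLDialect": "postgresql"})

# the service decodes and unpickles on the other side
recovered = pickle.loads(base64.b64decode(json.loads(payload)["PickleData"]))
```

Base64 is needed because raw pickle bytes are not valid JSON string content.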
Execute the SQL Code
# save the dataset in a database table engine = sa.create_engine('postgresql://db:db@localhost/db?port=5432' , echo=False) conn = engine.connect() lTable = pd.DataFrame(digits.data); lTable.columns = ['Feature_' + str(c) for c in range(digits.data.shape[1])] lTable['KEY'] = range(lTable.shape[0]) lTable.to_sql("INPUT_DATA" , conn, if_exists='replace', index=False) sql_output = pd.read_sql(lSQL , conn); sql_output = sql_output.sort_values(by='KEY').reset_index(drop=True) conn.close() sql_output.sample(12, random_state=1960) sql_output.Decision.value_counts()
docs/WebService-RandomForest_512_Deploy.ipynb
antoinecarme/sklearn2sql_heroku
bsd-3-clause
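The cell above stages the features in a database table, runs the generated SQL, and reads the scores back sorted by KEY. The same save-then-query roundtrip can be sketched with the stdlib sqlite3 module (the notebook targets PostgreSQL; sqlite and the tiny table below are used purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE INPUT_DATA (KEY INTEGER, Feature_0 REAL)")
conn.executemany("INSERT INTO INPUT_DATA VALUES (?, ?)",
                 [(2, 0.5), (0, 0.1), (1, 0.3)])

# read back sorted by KEY, as the pandas code does with sort_values
rows = conn.execute(
    "SELECT KEY, Feature_0 FROM INPUT_DATA ORDER BY KEY").fetchall()
conn.close()
```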
Scikit-learn Prediction
skl_outputs = pd.DataFrame() skl_output_key = pd.DataFrame(list(range(X.shape[0])), columns=['KEY']); skl_output_score = pd.DataFrame(columns=['Score_' + str(c) for c in range(n_classes)]); skl_output_proba = pd.DataFrame(clf.predict_proba(X), columns=['Proba_' + str(c) for c in range(n_classes)]) skl_output_log_proba = pd.DataFrame(clf.predict_log_proba(X), columns=['LogProba_' + str(c) for c in range(n_classes)]) skl_output_decision = pd.DataFrame(clf.predict(X), columns=['Decision']) skl_output = pd.concat([skl_output_key, skl_output_score, skl_output_proba, skl_output_log_proba, skl_output_decision] , axis=1) skl_output.sample(12, random_state=1960)
docs/WebService-RandomForest_512_Deploy.ipynb
antoinecarme/sklearn2sql_heroku
bsd-3-clause
Data We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt.
import sys sys.path.append('../') import ndjson import glob from quillnlp.models.bert.preprocessing import preprocess, create_label_vocabulary train_file = f"../data/interim/{PREFIX}_train_withprompt_diverse200.ndjson" synth_files = glob.glob(f"../data/interim/{PREFIX}_train_withprompt_*.ndjson") dev_file = f"../data/interim/{PREFIX}_dev_withprompt.ndjson" test_file = f"../data/interim/{PREFIX}_test_withprompt.ndjson" with open(train_file) as i: train_data = ndjson.load(i) synth_data = [] for f in synth_files: if "allsynth" in f: continue with open(f) as i: synth_data += ndjson.load(i) with open(dev_file) as i: dev_data = ndjson.load(i) with open(test_file) as i: test_data = ndjson.load(i) label2idx = create_label_vocabulary(train_data) idx2label = {v:k for k,v in label2idx.items()} target_names = [idx2label[s] for s in range(len(idx2label))] train_dataloader = preprocess(train_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE) dev_dataloader = preprocess(dev_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE) test_dataloader = preprocess(test_data, BERT_MODEL, label2idx, MAX_SEQ_LENGTH, BATCH_SIZE, shuffle=False)
notebooks/BERT-1.1 Experiments-QuillNLP.ipynb
empirical-org/WikipediaSentences
agpl-3.0
Model
import torch

from quillnlp.models.bert.models import get_bert_classifier

device = "cuda" if torch.cuda.is_available() else "cpu"
model = get_bert_classifier(BERT_MODEL, len(label2idx), device=device)
notebooks/BERT-1.1 Experiments-QuillNLP.ipynb
empirical-org/WikipediaSentences
agpl-3.0
Training
from quillnlp.models.bert.train import train

output_model_file = train(model, train_dataloader, dev_dataloader,
                          BATCH_SIZE, GRADIENT_ACCUMULATION_STEPS, device)
notebooks/BERT-1.1 Experiments-QuillNLP.ipynb
empirical-org/WikipediaSentences
agpl-3.0
Evaluation
from quillnlp.models.bert.train import evaluate
from sklearn.metrics import precision_recall_fscore_support, classification_report

print("Loading model from", output_model_file)
device = "cpu"
model = get_bert_classifier(BERT_MODEL, len(label2idx), model_file=output_model_file, device=device)
model.eval()

_, test_correct, test_predicted = evaluate(model, test_dataloader, device)

print("Test performance:", precision_recall_fscore_support(test_correct, test_predicted, average="micro"))
print(classification_report(test_correct, test_predicted, target_names=target_names))

c = 0
for item, predicted, correct in zip(test_data, test_predicted, test_correct):
    assert item["label"] == idx2label[correct]
    c += (item["label"] == idx2label[predicted])
    print("{}#{}#{}".format(item["text"], idx2label[correct], idx2label[predicted]))

print(c)
print(c / len(test_data))
notebooks/BERT-1.1 Experiments-QuillNLP.ipynb
empirical-org/WikipediaSentences
agpl-3.0
Loading data

Load the three datasets into RDDs and name them artistData, artistAlias, and userArtistData. View the README, or the files themselves, to see how this data is formatted. Some of the files have tab delimiters while some have space delimiters. Make sure that your userArtistData RDD contains only the canonical artist IDs.
# absolute path to the dataset
path = "/home/rajat/courses/bi/01.Apache.Spark.Project-1.RecommenderSystems.FINAL/"

# read the datasets, parsing each line into the required types
artistData = sc.textFile(path + "artist_data_small.txt").map(lambda l: l.split("\t")).map(lambda l: (int(l[0]), l[1]))
artistAlias = sc.textFile(path + "artist_alias_small.txt").map(lambda l: l.split("\t")).map(lambda l: (int(l[0]), int(l[1])))
userArtistData = sc.textFile(path + "user_artist_data_small.txt").map(lambda l: l.split(" ")).map(lambda l: (int(l[0]), int(l[1]), int(l[2])))
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
Data Exploration

In the blank below, write some code that will find the users' total play counts. Find the three users with the highest number of total play counts (sum of all counters) and print the user ID, the total play count, and the mean play count (average number of times a user played an artist). Your output should look as follows:

User 1059637 has a total play count of 674412 and a mean play count of 1878.
User 2064012 has a total play count of 548427 and a mean play count of 9455.
User 2069337 has a total play count of 393515 and a mean play count of 1519.
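Outside of Spark, the same total/mean aggregation can be sketched in plain Python — the play-count tuples below are hypothetical, for illustration only:

```python
# Hypothetical (user, artist, playcount) records, standing in for userArtistData
plays = [(1, 101, 10), (1, 102, 20), (2, 101, 5), (2, 103, 15), (2, 104, 10)]

# Group play counts by user (the groupByKey step)
totals = {}
for user, _, count in plays:
    totals.setdefault(user, []).append(count)

# Compute (total, mean) per user, with an integer mean as in the expected output
stats = {u: (sum(c), sum(c) // len(c)) for u, c in totals.items()}
print(stats)  # {1: (30, 15), 2: (30, 10)}
```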
# group by userID to get the list of play counts for each user
userToPlaycount = userArtistData.map(lambda u: (u[0], u[2])).groupByKey().mapValues(list)

# add the total and average play count to each tuple
userToPlaycountAvg = userToPlaycount.map(lambda t: (t[0], sum(t[1]), sum(t[1]) // len(t[1])))

# sort the data by total play count, descending
sorteduserToPlaycountAvg = userToPlaycountAvg.sortBy(ascending=False, keyfunc=lambda x: x[1])

# retrieve and print the top 3
top3 = sorteduserToPlaycountAvg.take(3)
for t in top3:
    print "User " + str(t[0]) + " has a total play count of " + str(t[1]) + " and a mean play count of " + str(t[2]) + "."
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
Splitting Data for Testing

Use the randomSplit function to divide the data (userArtistData) into:

* A training set, trainData, that will be used to train the model. This set should constitute 40% of the data.
* A validation set, validationData, used to perform parameter tuning. This set should constitute 40% of the data.
* A test set, testData, used for a final evaluation of the model. This set should constitute 20% of the data.

Use a random seed value of 13. Since these datasets will be used repeatedly, you will probably want to persist them in memory using the cache function. In addition, print out the first 3 elements of each set as well as their sizes; if you created these sets correctly, your output should look as follows:

[(1059637, 1000049, 1), (1059637, 1000056, 1), (1059637, 1000113, 5)]
[(1059637, 1000010, 238), (1059637, 1000062, 11), (1059637, 1000112, 423)]
[(1059637, 1000094, 1), (1059637, 1000130, 19129), (1059637, 1000139, 4)]
19817
19633
10031
# build a dictionary from the alias dataset
artistAliasDict = dict(artistAlias.collect())

# replace each artist ID in userArtistData with its canonical ID
userArtistData = userArtistData.map(lambda a: (a[0], artistAliasDict[a[1]], a[2]) if a[1] in artistAliasDict else a)

# split the data into training, validation, and test sets
trainData, validationData, testData = userArtistData.randomSplit([40, 40, 20], 13)

print trainData.take(3)
print validationData.take(3)
print testData.take(3)
print trainData.count()
print validationData.count()
print testData.count()

trainData.cache()
validationData.cache()
testData.cache()
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
The Recommender Model

For this project, we will train the model with implicit feedback. You can read more about this on the collaborative filtering page: http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html. The function you will be using has a few tunable parameters that affect how the model is built. Therefore, to get the best model, we will do a small parameter sweep and choose the model that performs best on the validation set.

To do that, we must first devise a way to evaluate models. Once we have a method for evaluation, we can run a parameter sweep, evaluate each combination of parameters on the validation data, and choose the optimal set of parameters. The parameters can then be used to make predictions on the test data.

Model Evaluation

Although there may be several ways to evaluate a model, we will use a simple method here. Suppose we have a model and some dataset of true artist plays for a set of users. This model can be used to predict the top X artist recommendations for a user, and these recommendations can be compared to the artists that the user actually listened to (here, X will be the number of artists in the dataset of true artist plays). Then, the fraction of overlap between the top X predictions of the model and the X artists that the user actually listened to can be calculated. This process can be repeated for all users and an average value returned.

For example, suppose a model predicted [1,2,4,8] as the top X=4 artists for a user. Suppose that user actually listened to the artists [1,3,7,8]. Then, for this user, the model would have a score of 2/4=0.5. To get the overall score, this would be performed for all users, with the average returned.

NOTE: when using the model to predict the top-X artists for a user, do not include the artists listed with that user in the training data.

Name your function modelEval and have it take a model (the output of ALS.trainImplicit) and a dataset as input. For parameter tuning, the dataset parameter should be set to the validation data (validationData). After parameter tuning, the model can be evaluated on the test data (testData).
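The per-user overlap score from the worked example can be checked with a small, Spark-free sketch:

```python
def overlap_score(predicted, actual):
    """Fraction of the true artists that also appear in the top-X predictions."""
    return len(set(predicted) & set(actual)) / float(len(actual))

# The example from the text: predictions [1, 2, 4, 8] vs. true artists [1, 3, 7, 8].
# The overlap is {1, 8}, so the score is 2/4 = 0.5.
print(overlap_score([1, 2, 4, 8], [1, 3, 7, 8]))  # 0.5
```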
def modelEval(model, dataset):
    # set of all artists in the entire dataset
    allArtists = set(userArtistData.map(lambda a: a[1]).collect())

    # set of all users in the given dataset
    users = set(dataset.map(lambda u: u[0]).collect())

    # user -> {artist1, artist2, ...} for the current dataset
    datasetUserToArtists = dict(dataset.map(lambda u: (u[0], u[1])).groupByKey().mapValues(set).collect())

    # user -> {artist1, artist2, ...} for the training dataset
    userToArtists = dict(trainData.map(lambda u: (u[0], u[1])).groupByKey().mapValues(set).collect())

    score = 0
    # calculate the score for each user
    for user in users:
        # exclude artists that appear with this user in the training data
        nonTrainArtists = allArtists - userToArtists[user]
        trueArtists = datasetUserToArtists[user]
        X = len(trueArtists)
        test = map(lambda x: (user, x), nonTrainArtists)

        # get artist (referred to as product) predictions for the current user
        predictionsArtists = model.predictAll(sc.parallelize(test)) \
                                  .sortBy(ascending=False, keyfunc=lambda x: x.rating) \
                                  .map(lambda x: x.product).take(X)
        score += float(len(set(predictionsArtists).intersection(trueArtists))) / X

    return float(score) / len(users)
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
Model Construction

Now we can build the best possible model using the validation set of data and the modelEval function. Although there are a few parameters we could optimize, for the sake of time, we will just try a few different values for the rank parameter (leave everything else at its default value, except make seed=345). Loop through the values [2, 10, 20] and figure out which one produces the highest score based on your model evaluation function. Note: this procedure may take several minutes to run.

For each rank value, print out the output of the modelEval function for that model. Your output should look as follows:

The model score for rank 2 is 0.090431
The model score for rank 10 is 0.095294
The model score for rank 20 is 0.090248
model = ALS.trainImplicit(trainData, rank=2, seed=345)
print modelEval(model, validationData)

model = ALS.trainImplicit(trainData, rank=10, seed=345)
print modelEval(model, validationData)

model = ALS.trainImplicit(trainData, rank=20, seed=345)
print modelEval(model, validationData)
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
Trying Some Artist Recommendations

Using the best model above, predict the top 5 artists for user 1059637 using the recommendProducts function. Map the results (integer IDs) to the real artist names using artistData. Print the results. The output should look as follows:

Artist 0: Brand New
Artist 1: Taking Back Sunday
Artist 2: Evanescence
Artist 3: Elliott Smith
Artist 4: blink-182
ratings = bestModel.recommendProducts(1059637, 5)
ratingsArtists = map(lambda r: r.product, ratings)

# map artist IDs to artist names
artistDataMap = dict(artistData.collect())

i = 0
for artist in ratingsArtists:
    print "Artist " + str(i) + ":", artistDataMap[artist]
    i += 1
music-recommender/recommender.ipynb
shahrajat/DataMining
gpl-2.0
1. User-Specified Input

Python can also ask the user to specify input interactively, as shown below. Execute the cell to see how it works.
faren = input("enter a temperature (in Fahrenheit): ")
print(faren)
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
<div class=hw>
### Exercise 1
-----------------------

Write a function that has no required input, but one optional argument with a default value "C". The function should ask the user what the temperature is and should print that value in Celsius, Fahrenheit, and Kelvin. If the keyword is set to "F" or "K", it should assume the temperature has been given in Fahrenheit. Otherwise, it should assume Celsius.

## 2. More About Manipulating Strings

### 2.1 Special Characters

Backslashes (`\`) start special (escape) characters:

    \n = newline (\r = return)
    \t = tab
    \a = bell

String literals are defined with double quotes or single quotes. The outermost quote type cannot be used inside the string (unless it's escaped with a backslash).
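A few more of these escape sequences in action — a quick, self-contained illustration:

```python
print("name:\tspam")         # \t inserts a tab character
print("she said \"spam\"")   # escaped double quotes inside a double-quoted string
print('it\'s spam')          # escaped single quote inside a single-quoted string
print("a backslash: \\")     # a backslash itself must be escaped
```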
print("green eggs and\n spam")

# Triple quotes are another way to specify multi-line strings
y = """For score and seven minutes ago,
you folks all learned some basic mathy stuff with Python
and boy were you blown away!"""
print(y)
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
2.2 Concatenating strings
# note the ; allows us to do two calculations on the same line
s = "spam" ; e = "eggs"
print(s + e)
print(s + " and " + e)

# this one won't work
print('I want' + 3 + ' eggs and no ' + s)

# but this will
print('I want ' + str(3) + ' eggs and no ' + s)
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
2.3 Multiple Concatenations
print(s*3 + e)
print("*" * 50)
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
2.4 Comparing Strings
print("spam" == "good"); print("spam" == "spam")
"spam" < "zoo"
"s" < "spam"
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
<div class=hw>
## Exercise 2
------------------

Write a function that does the following:

1) Takes a required input ***string array*** containing the names of the people in your group.
2) Asks the person with the longer name for their order first
3) Asks each person to enter (using user input) how many eggs they want, and then whether or not they want spam
4) Prints the order in the form "*name* wants *x* eggs and *[spam/no spam]*" with a new line between the two orders
5) Prints "name:" + "egg" a number of times equal to the number of eggs ordered + "and" + "SPAM!" or "NO SPAM!"

## 3. More about Conditionals

### 3.1 One-Line Conditionals
x = 1

if x > 0:
    print("yo")
else:
    print("dude")

# the one line version
"yo" if x > 0 else "dude"

# conditionals can lie within function calls
print("yo" if x > 0 else "dude")

z = "no"
np.sin(np.pi if z == "yes" else 0)
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
3.2 While and Iteration
x = 1
y = 0
while y < 10:
    print("yo" if x > 0 else "dude")
    x *= -1
    y += 1
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
<div class=hw>
### Exercise 3
-------------------

Copy the function above into a code cell and add a print statement that prints the variables x and y so that you can see what they are at each step. Use the output to figure out what the statements x \*= -1 and y += 1 are doing. Type an explanation of what they do into a markdown cell.

### 3.3 Break Statements

Break statements allow you to terminate/exit a loop, as in the following example:
# can also do this with a break statement
while True:
    print("yo" if x > 0 else "dude")
    x *= -1
    y += 1
    if y >= 10:
        break
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
4. Writing and Reading Files with Magic Commands
%%file number_game.py
# The above "magic" command, denoted with the double %%, saves the contents
# of the current cell to file. We'll see more of these later

x = 0
max_tries = 10
count = 0

while True:
    x_new = int(input("Enter a new number: "))
    if x_new > x:
        print(" -> it's bigger than the last!")
    elif x_new < x:
        print(" -> it's smaller than the last!")
    else:
        print(" -> no change! I'll exit now")
        break
    x = x_new
    count += 1
    if count > max_tries:
        print("too many tries...")
        break

# this magic command runs the given file. It's like typing python number_game.py on the command line
%run number_game.py
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
<div class=hw>
### Exercise 4
-------------------

Create a program (and write it to a .py file) which repeatedly asks the user for a word. The program should append all the words together. When the user types a "!", "?", or a ".", the program should print the resulting sentence and exit. For example, a session might look like this:

    %run exercise2.py
    Enter a word (. ! or ? to end): My
    Enter a word (. ! or ? to end): name
    Enter a word (. ! or ? to end): is
    Enter a word (. ! or ? to end): Walter
    Enter a word (. ! or ? to end): White
    Enter a word (. ! or ? to end): !
    My name is Walter White!

<div class=hw>
### Exercise 5 (a classic)
--------------------

Write a program (and write it to a python file) that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz" instead of the number. A special python operator called modulo and represented with the % sign might help you. A couple of examples of its use are below. See if you can figure out what it does by experimenting.
4 % 2
4 % 3
6.28 % 3.14
6.28 % 3.1
25 % 5
25 % 7

from IPython.core.display import HTML

def css_styling():
    styles = open("../custom.css", "r").read()
    return HTML(styles)

css_styling()
Labs/Lab4/Lab4.ipynb
kfollette/ASTR200-Spring2017
mit
The protocol is not the same for XL320 servomotors; set the using_XL320 flag to True if you use them. Set my_baudrate to the baudrate you are using (1000000 for motors that are already configured, 57600 for new ones).
using_XL320 = False
my_baudrate = 1000000
debug/poppy-torso_poppy-humanoid_poppy-ergo__motor_scan.ipynb
poppy-project/community-notebooks
lgpl-3.0
If the code below gives you an exception, try to restart all other notebooks that may be running, wait 5 seconds and try again.
for port in ports:
    print port
    try:
        if using_XL320:
            dxl_io = pypot.dynamixel.Dxl320IO(port, baudrate=my_baudrate)
        else:
            dxl_io = pypot.dynamixel.DxlIO(port, baudrate=my_baudrate)
        print "scanning"
        found = dxl_io.scan(range(60))
        print found
        dxl_io.close()
    except Exception, e:
        print e
debug/poppy-torso_poppy-humanoid_poppy-ergo__motor_scan.ipynb
poppy-project/community-notebooks
lgpl-3.0
Kill whatever process is using the ports (use only as a last resort to free them).
import os

for port in ports:
    os.system('fuser -k ' + port)
debug/poppy-torso_poppy-humanoid_poppy-ergo__motor_scan.ipynb
poppy-project/community-notebooks
lgpl-3.0
Introduction to scikit-learn

Scikit-learn is a machine learning library in Python. Scikit-learn is the first of the several machine learning libraries we will explore in this course. It is relatively approachable, supports a wide variety of traditional machine learning models, and is ubiquitous in the world of data science.

Datasets

Scikit-learn contains methods for loading, fetching, and making (generating) data. The methods for doing this all fall under the datasets subpackage. Most of the functions in this package have load, fetch, or make in the name to let you know what the method is doing under the hood.

Loading functions bring static datasets into your program. The data comes pre-packaged with scikit-learn, so no network access is required.

Fetching functions also bring static datasets into your program. However, the data is pulled from the internet, so if you don't have network access, these functions might fail.

Generating functions make dynamic datasets based on some equation. The pre-packaged dataset functions exist for many popular datasets, such as the MNIST digits dataset and the Iris flower dataset. The generating functions reference classic dataset "shape" formations such as moons and swiss rolls. These datasets are perfect for getting familiar with machine learning.

Loading

Let us first look at an example of loading data. We will load the iris flowers dataset using the load_iris function.
from sklearn.datasets import load_iris

iris_data = load_iris()
iris_data
content/03_regression/01_introduction_to_sklearn/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
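Loading is shown above; for contrast, a generating function such as make_moons builds a synthetic dataset on the fly. A quick sketch (the parameter values here are illustrative):

```python
from sklearn.datasets import make_moons

# Generate a synthetic two-class "moons" dataset with 100 points in 2D
X, y = make_moons(n_samples=100, noise=0.1, random_state=42)

print(X.shape)  # (100, 2)
print(y.shape)  # (100,)
```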
That's a lot to take in. Let's examine this loaded data a little more closely. First, we'll see what data type this dataset is:
type(iris_data)
content/03_regression/01_introduction_to_sklearn/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
sklearn.utils.Bunch is a type that you'll see quite often when working with datasets built into scikit-learn. It is a dictionary-like container for feature and target data within a dataset. You won't find much documentation about Bunch objects because they are not really meant for usage beyond containing data native to scikit-learn. Let's look at the attributes of the iris dataset:
dir(iris_data)
content/03_regression/01_introduction_to_sklearn/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
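A Bunch supports both dictionary-style and attribute-style access to the same fields. The idea can be sketched with a minimal dict subclass — this is an illustration only, not scikit-learn's actual implementation:

```python
class MiniBunch(dict):
    """Dict subclass that also exposes its keys as attributes, like sklearn.utils.Bunch."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

# Hypothetical data, standing in for the iris dataset's fields
b = MiniBunch(data=[1, 2, 3], target=[0, 1, 0])

print(b["data"] == b.data)  # True: both access styles reach the same value
```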
DESCR is a description of the dataset.
print(iris_data['DESCR'])
content/03_regression/01_introduction_to_sklearn/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
filename is the name of the source file where the data is stored.
print(iris_data['filename'])
content/03_regression/01_introduction_to_sklearn/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0