Columns: markdown, code, path, repo_name, license
First, we examine the mass-radius, mass-Teff, and mass-luminosity relationships as a function of age.
fig, ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)
ax[2].set_xlabel('Mass ($M_{\\odot}$)', fontsize=20.)
ax[0].set_ylabel('Radius ($R_{\\odot}$)', fontsize=20.)
ax[1].set_ylabel('Temperature (K)', fontsize=20.)
ax[2].set_ylabel('Luminosity ($L_{\\odot}$)', fontsize=20.)
for axis in ax:
    axis.tick_params(whi...
Daily/20150729_young_magnetic_models.ipynb
gfeiden/Notebook
mit
Note that, in the figure above, standard stellar evolution models are shown as solid lines and magnetic stellar evolution models as dashed lines. Ages are indicated by color: grey = 5 Myr, blue = 12 Myr, red = 30 Myr. HR Diagram comparison:
fig, ax = plt.subplots(1, 1, figsize=(8.0, 8.0))
ax.set_xlabel('Effective Temperature (K)', fontsize=20.)
ax.set_ylabel('$\\log_{10} (L / L_{\\odot})$', fontsize=20.)
ax.set_xlim(5000., 2500.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)

# Standard models
ax.plot(10**std_iso_05[:, 1], std_iso...
Daily/20150729_young_magnetic_models.ipynb
gfeiden/Notebook
mit
Line styles and colors represent the same model combinations, as before. Lithium abundance as a function of mass, temperature, and luminosity:
fig, ax = plt.subplots(1, 3, figsize=(15, 5), sharey=True)
ax[0].set_xlabel('Mass ($M_{\\odot}$)', fontsize=20.)
ax[1].set_xlabel('Temperature (K)', fontsize=20.)
ax[2].set_xlabel('$\\log_{10}(L/L_{\\odot})$', fontsize=20.)
ax[0].set_ylabel('A(Li)', fontsize=20.)
for axis in ax:
    axis.set_ylim(1.5, 3.5)
    axis.ti...
Daily/20150729_young_magnetic_models.ipynb
gfeiden/Notebook
mit
Euler's method Euler's method is the simplest numerical approach for solving a first-order ODE. Given the differential equation $$ \frac{dy}{dx} = f(y(x), x) $$ with the initial condition $$ y(x_0)=y_0, $$ Euler's method performs updates using the equations: $$ y_{n+1} = y_n + h f(y_n,x_n) $$ $$ h = x_{n+1} - x_n $$ ...
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list...
assignments/assignment10/ODEsEx01.ipynb
joshnsolomon/phys202-2015-work
mit
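Since the docstring above is cut off, here is a minimal complete sketch of what such a solver can look like; the signature follows the docstring, while the example ODE at the bottom is my own choice:

```python
import numpy as np

def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    derivs(y, x) returns dy/dx; y0 is y(x[0]); x is an array of grid points.
    """
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]          # step size (need not be uniform)
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

# Example: dy/dx = y, y(0) = 1  ->  y(1) should approximate e
x = np.linspace(0, 1, 101)
y = solve_euler(lambda y, x: y, 1.0, x)
```

With h = 0.01 the endpoint lands within about 0.014 of e, consistent with Euler's first-order global error.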
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation: $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$ Write a function solve_midpoint that implements the midpoint met...
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarr...
assignments/assignment10/ODEsEx01.ipynb
joshnsolomon/phys202-2015-work
mit
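The midpoint update above can likewise be sketched in full (again, the test ODE is my own, not part of the assignment):

```python
import numpy as np

def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the midpoint method."""
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        # Slope is evaluated at the midpoint predicted by a half Euler step
        y_mid = y[n] + 0.5 * h * derivs(y[n], x[n])
        y[n + 1] = y[n] + h * derivs(y_mid, x[n] + 0.5 * h)
    return y

# Same test problem as before: dy/dx = y, y(0) = 1
x = np.linspace(0, 1, 101)
y = solve_midpoint(lambda y, x: y, 1.0, x)
```

Being second order, the midpoint method's endpoint error here is roughly two orders of magnitude smaller than Euler's on the same grid.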
You are now going to solve the following differential equation: $$ \frac{dy}{dx} = x + 2y $$ which has the analytical solution: $$ y(x) = 0.25 e^{2x} - 0.5 x - 0.25 $$ First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
    y = (.25)*np.exp(2*x) - (.5*x)-(....
assignments/assignment10/ODEsEx01.ipynb
joshnsolomon/phys202-2015-work
mit
In the following cell you are going to solve the above ODE using four different algorithms: Euler's method Midpoint method odeint Exact Here are the details: Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$). Define the derivs function for the above differential equation. Using the...
x = np.linspace(0, 1, 11)

def derivs(y, x):
    return x + 2*y

y1 = solve_euler(derivs, 0, x)
y2 = solve_midpoint(derivs, 0, x)
y3 = odeint(derivs, 0, x)
y4 = solve_exact(x)

plt.subplot(2, 2, 1)  # 2 rows x 2 cols, plot 1
plt.plot(x, y1)
plt.ylabel('Euler\'s Method')
plt.subplot(2, 2, 2)
plt.plot(x, y2)
plt.ylabel('Midpo...
assignments/assignment10/ODEsEx01.ipynb
joshnsolomon/phys202-2015-work
mit
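To see the two solvers side by side, a self-contained sketch (re-implementing the helpers, since only fragments of them are shown above) can compare their maximum errors against the exact solution on the same h = 0.1 grid:

```python
import numpy as np

def solve_euler(derivs, y0, x):
    y = np.empty(len(x)); y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

def solve_midpoint(derivs, y0, x):
    y = np.empty(len(x)); y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y_mid = y[n] + 0.5 * h * derivs(y[n], x[n])
        y[n + 1] = y[n] + h * derivs(y_mid, x[n] + 0.5 * h)
    return y

def solve_exact(x):
    return 0.25 * np.exp(2 * x) - 0.5 * x - 0.25

derivs = lambda y, x: x + 2 * y
x = np.linspace(0, 1, 11)            # N = 11 points, h = 0.1

err_euler = np.abs(solve_euler(derivs, 0, x) - solve_exact(x)).max()
err_mid = np.abs(solve_midpoint(derivs, 0, x) - solve_exact(x)).max()
```

On this stiff-ish problem the midpoint error is roughly an order of magnitude below Euler's, as the order of accuracy predicts.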
Load Data Multispot Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analysis - Leakage coefficient fit):
leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'
leakageM = float(np.loadtxt(leakage_coeff_fname, ndmin=1))
print('Multispot Leakage Coefficient:', leakageM)
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'
dir_ex_t = float(np.loadtxt(dir_ex_coeff_fname, ndmin=1))
print('Direct excitation coefficient (dir_ex_t):', dir_ex_t)
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
Multispot PR for FRET population:
mspot_filename = 'results/Multi-spot - dsDNA - PR - all_samples all_ch.csv'
E_pr_fret = pd.read_csv(mspot_filename, index_col=0)
E_pr_fret
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
usALEX Corrected $E$ from ฮผs-ALEX data:
data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv'
data_alex = pd.read_csv(data_file).set_index('sample')  # [['E_pr_fret_kde']]
data_alex.round(6)
E_alex = data_alex.E_gauss_w
E_alex
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
Multi-spot gamma fitting
import lmfit

def residuals(params, E_raw, E_ref):
    gamma = params['gamma'].value
    # NOTE: leakageM and dir_ex_t are globals
    return E_ref - fretmath.correct_E_gamma_leak_dir(E_raw, leakage=leakageM,
                                                     gamma=gamma,
                                                     dir_ex_t=dir_ex_t)

params = lmfit.Parameters()
params.add('gamma', value=0.5)

E_pr_fret_mean = ...
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
Plot FRET vs distance
sns.set_style('whitegrid')
CH = np.arange(8)
CH_labels = ['CH%d' % i for i in CH]
dist_s_bp = [7, 12, 17, 22, 27]
fontsize = 16

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(dist_s_bp, E_fret_mch, '+', lw=2, mew=1.2, ms=10, zorder=4)
ax.plot(dist_s_bp, E_alex, '-', lw=3, mew=0, alpha=0.5, color='k', zorder=3)
plt...
Multi-spot Gamma Fitting.ipynb
tritemio/multispot_paper
mit
Simulate data I am going to simulate data using the various density functions available in scipy. During QC, we typically are trying to identify either samples or values (e.g. genes, exons, compounds) that do not behave as expected. We use various plots to help identify outliers and remove them from the dataset. For th...
# Simulate $\theta$
sp.random.seed(42)
theta1 = sp.random.normal(loc=0.5, scale=0.1, size=1000)
theta2 = sp.random.normal(loc=0.2, scale=0.1, size=360)

# Simulate coverage
cvg1 = sp.random.poisson(20, size=1000)
cvg2 = sp.random.poisson(4, size=360)

## I can't have a coverage of 0, so replace 0's with 1
cvg1[cvg1 == ...
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
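The same simulation can be sketched with the modern numpy.random.Generator API instead of the sp.random aliases (seed and distribution parameters as in the cell above; the variable names mirror it):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two subpopulations: well-behaved (theta ~ 0.5) and outlier-prone (theta ~ 0.2)
theta1 = rng.normal(loc=0.5, scale=0.1, size=1000)
theta2 = rng.normal(loc=0.2, scale=0.1, size=360)

# Matching coverage counts: high coverage for group 1, low for group 2
cvg1 = rng.poisson(20, size=1000)
cvg2 = rng.poisson(4, size=360)

# A coverage of 0 is not meaningful here, so replace 0's with 1
cvg1[cvg1 == 0] = 1
cvg2[cvg2 == 0] = 1

# Stack both groups into the combined arrays used later
theta = np.concatenate([theta1, theta2])
cvg = np.concatenate([cvg1, cvg2])
```

The Generator API is the recommended replacement for the legacy global-state functions, and seeding it keeps the QC examples reproducible.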
Now let's look at the distribution of our coverage counts.
# Plot Distribution of Coverage
## Figure out the x limits
xs = np.linspace(0, cvg.max(), num=100)

## Get Density functions
density1 = stats.gaussian_kde(cvg1)
density2 = stats.gaussian_kde(cvg2)

## Plot
plt.plot(xs, density1(xs), label='High Coverage')
plt.plot(xs, density2(xs), label='Low Coverage')
plt.title('Dist...
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
Combine everything into a single dataset.
# Create Data Frame
dat = pd.DataFrame({'theta': theta, 'cvg': cvg})
dat.head(3)

# Plotting densities is a lot easier with data frames
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
dat['theta'].plot(kind='kde', ax=ax1, title=r'Distribution of $\theta$')
dat['cvg'].plot(kind='kde', ax=ax2, title='Distribution ...
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
QC Time Now that we have our simulated data, let's do some QC. Let's see what happens if we filter low-coverage reads. First we will create a plotting function that takes a cutoff value.
def pltLow(dat, cutoff):
    """Function to plot density after filtering."""
    clean = dat[dat['cvg'] >= cutoff]
    clean['theta'].plot(kind='kde',
                        title=r'Distribution of $\theta${}Coverage Count Cutoff $\geq$ {}'.format('\n', cutoff),
                        xlim=(-0.2, 1.2))

# Test plot function
pltLow(dat, 1)
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
Interactive Plotting IPython offers a simple way to create interactive plots. You import a function called interact, and use that to call your plotting function.
from IPython.html.widgets import interact, interact_manual, IntSlider, fixed

interact(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
If you have a lot of data, then interact can be slow because at each step along the slider it recomputes the filter. There is another interactive widget, interact_manual, that only runs calculations when you hit the run button.
interact_manual(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
Other types of interactivity While there are a number of IPython widgets that may be useful, there are other packages that offer interactivity. One I have been playing with is a module that translates matplotlib plots into D3.js plots. I will demonstrate that here.
# Import the mpld3 library
import mpld3

# Plain Scatter plot showing relationship between coverage and theta
dat.plot(kind='scatter', x='cvg', y='theta', figsize=(10, 10))

# Plot figure with mpld3
fig, ax = plt.subplots(figsize=(10, 10))
scatter = ax.scatter(dat['cvg'], dat['theta'])
labels = ['row {}'.format(i) for ...
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
Now let's mess with a point and see if it changes.
dat.loc[262, 'theta'] = -0.1

# Plot figure with mpld3
fig, ax = plt.subplots(figsize=(10, 10))
scatter = ax.scatter(dat['cvg'], dat['theta'])
labels = ['row {}'.format(i) for i in dat.index.tolist()]
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
mpld3.display()
interactive_plotting.ipynb
McIntyre-Lab/ipython-demo
gpl-2.0
This is the basic idea of a list comprehension. If you're familiar with mathematical notation, this format should feel familiar, for example: $\{x^2 : x \in \{0, 1, 2, \ldots, 10\}\}$. Let's see a few more examples of list comprehensions in Python: Example 2
# Square numbers in range and turn into list
lst = [x**2 for x in range(0, 11)]
lst
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
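The comprehension above is just a compact spelling of an explicit for loop that appends to a list; writing both forms side by side makes the equivalence concrete:

```python
# Explicit loop form
squares_loop = []
for x in range(0, 11):
    squares_loop.append(x**2)

# Equivalent list comprehension
squares_comp = [x**2 for x in range(0, 11)]
```

Both produce the same list of squares from 0 to 100; the comprehension is usually preferred because the intent fits on one line.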
Example 3 Let's see how to add in an if statement:
# Check for even numbers in a range
lst = [x for x in range(11) if x % 2 == 0]
lst
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
Example 4 We can also do more complicated arithmetic:
# Convert Celsius to Fahrenheit
celsius = [0, 10, 20.1, 34.5]
fahrenheit = [((float(9) / 5) * temp + 32) for temp in celsius]
fahrenheit
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
Example 5 We can also perform nested list comprehensions, for example:
lst = [x**2 for x in [x**2 for x in range(11)]]
lst
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
yashdeeph709/Algorithms
apache-2.0
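Nested comprehensions read inside out; unrolling the one above into two steps makes clear that each value is ultimately raised to the fourth power:

```python
# Inner comprehension: square 0..10
inner = [x**2 for x in range(11)]     # [0, 1, 4, 9, ..., 100]

# Outer comprehension: square each of those results again (x -> x**4 overall)
nested = [x**2 for x in inner]        # [0, 1, 16, 81, ..., 10000]
```

The two x variables are independent; each comprehension has its own scope, so reusing the name is legal, if a little confusing.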
Compute MNE-dSPM inverse solution on evoked data in volume source space Compute the dSPM inverse solution on the MNE evoked dataset in a volume source space and store the solution in a NIfTI file for visualisation.
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause

from nilearn.plotting import plot_stat_map
from nilearn.image import index_img

from mne.datasets import sample
from mne import read_evokeds
from mne.minimum_norm import apply_inverse, read_inverse_operator

print(__doc__)

data_path ...
stable/_downloads/8b7a85d4b98927c93b7d9ca1da8d2ab2/compute_mne_inverse_volume.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot with nilearn:
plot_stat_map(index_img(img, 61), str(t1_fname), threshold=8., title='%s (t=%.1f s.)' % (method, stc.times[61]))
stable/_downloads/8b7a85d4b98927c93b7d9ca1da8d2ab2/compute_mne_inverse_volume.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
First, let's prep to install Pysam and HTSlib Pysam is a Python wrapper around samtools, and samtools uses HTSlib (http://www.htslib.org/). So we need to make sure we have the libraries required to compile HTSlib and samtools. Compiling from source is needed to enable reading from Google Cloud Storage buckets.
import os

os.environ['HTSLIB_CONFIGURE_OPTIONS'] = "--enable-gcs"
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
We can invoke bash commands to see what was downloaded into our current working directory. Bash commands can be invoked by putting an exclamation point (!) before the command.
!ls -lha
!sudo apt-get install autoconf automake make gcc perl zlib1g-dev libbz2-dev liblzma-dev libcurl4-openssl-dev libssl-dev
!pip3 install pysam -v --force-reinstall --no-binary :all:
# Without forcing the compilation, we get error
# [Errno 93] could not open alignment file '...': Protocol not supported

impor...
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
First, we need to set our project. Replace the assignment below with your project ID.
# First, we need to set our project. Replace the assignment below
# with your project ID.
# project_id = 'isb-cgc-02-0001'
#!gcloud config set project {project_id}

#import os
#os.environ['GCS_OAUTH_TOKEN'] = "gcloud auth application-default print-access-token"
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Now that we have Pysam installed, let's write an SQL query to locate BAM files in Google Cloud Storage buckets. In the query below, we are looking to identify the Google Cloud Storage bucket locations for TCGA Ovarian Cancer BAMs obtained via whole genome sequencing (WGS) generated using the SOLiD sequencing system.
%%bigquery --project isb-cgc-02-0001 df
SELECT *
FROM `isb-cgc.TCGA_hg19_data_v0.tcga_metadata_data_hg19_18jul`
WHERE data_format = 'BAM'
  AND disease_code = 'OV'
  AND experimental_strategy = "WGS"
  AND platform = 'ABI SOLiD'
LIMIT 5
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Now using the following Pysam command, let's read a BAM file from GCS and slice out a section of the BAM using the fetch function. For the purposes of the BAM slicing exercise, we will use an open-access CCLE BAM file. CCLE open-access BAM files are stored here
samfile = pysam.AlignmentFile('gs://isb-ccle-open/gdc/0a109993-2d5b-4251-bcab-9da4a611f2b1/C836.Calu-3.2.bam', "rb")
for read in samfile.fetch('7', 140453130, 140453135):
    print(read)
samfile.close()
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
The output from the above command is a tab-delimited human readable table of a slice of the BAM file. This table gives us information on reads that mapped to the region that we "extracted" from chromosome 7 between the coordinates of 140453130 and 140453135. Now, let's suppose you would like to save those reads to your...
samfile = pysam.AlignmentFile('gs://isb-ccle-open/gdc/0a109993-2d5b-4251-bcab-9da4a611f2b1/C836.Calu-3.2.bam', "rb")
fetchedreads = pysam.AlignmentFile("test.bam", "wb", template=samfile)
for read in samfile.fetch('7', 140453130, 140453135):
    fetchedreads.write(read)
fetchedreads.close()
samfile.close()
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Let's see if we saved it.
!ls -lha

#if you don't already have a google cloud storage bucket, you can make one using the following command:
#The mb command creates a new bucket.
#gsutil mb gs://your_bucket

#to see what's in the bucket..
#!gsutil ls gs://your_bucket/

# then we can copy over the file
!gsutil cp gs://bam_bucket_1/test.bam te...
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Now, can we read it back?!?
newsamfile = pysam.AlignmentFile('gs://bam_bucket_1/test.bam', 'rb')
for r in newsamfile.fetch(until_eof=True):
    print(r)

# No. But maybe soon.
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Let's move our slice back to this instance.
!gsutil ls gs://bam_bucket_1/
!gsutil cp gs://bam_bucket_1/test.bam test_dl.bam
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
Now we're ready to work with our bam-slice! Very brief examples of working with reads. Each read from the alignment file is a pysam.AlignedSegment, which is a Python class. The methods and class variables can be found here: https://pysam.readthedocs.io/en/latest/api.html#pysam.AlignedSegment
import numpy as np

# first we'll open our bam-slice
dlsamfile = pysam.AlignmentFile('test_dl.bam', 'rb')

# and we'll save the read quality scores in a list
quality = []
for read in dlsamfile:
    quality.append(read.mapping_quality)

# then we can compute statistics on them
print("Average quality score")
print(np.mea...
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
isb-cgc/examples-Python
apache-2.0
<p style="text-align:right;direction:rtl;">Run the function like this: <code>password_generator('stam')</code> <p style="text-align:right;direction:rtl;">Now we will take a username and password, and check whether the combination is correct. Using the password generator from the previous section, check whether the user's password matches the password the generator produces. Print "Welcome" if the password...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align:right;direction:rtl;">Write a function that returns <code>True</code> if the login succeeded, and otherwise returns <code>False</code>.<br> This function is very similar to the previous function you wrote, except that it does not print anything.<br> Instead of printing, a matching boolean value is returned.<br>For example:<br></p> <br><code>login('stam', 'stamSTAMXXXX')</code>...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
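One possible solution sketch for the boolean login exercise. The password_generator rule below is hypothetical, chosen only so that the example call login('stam', 'stamSTAMXXXX') from the text succeeds; the real generator is whatever you wrote in the earlier exercise:

```python
def password_generator(username):
    # Hypothetical stand-in rule for the generator from the previous exercise:
    # the username, then the username uppercased, then 'XXXX'.
    return username + username.upper() + 'XXXX'

def login(username, password):
    # Return True when the supplied password matches the generated one
    return password == password_generator(username)
```

The printing variant from the previous exercise can then simply wrap this function in an if.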
<p style="text-align:right;direction:rtl;"> Now answer the previous question using the function you wrote in this section; that is, write a function that uses the boolean-returning function and prints according to the instructions from the previous section.<br>Hint: <span style="direction: rtl; background: #000; text: #000">use the return value of the function from the previous section inside an if.</span...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align:right;direction:rtl;">Now we will extend our bank system.<br> Assume that every customer has 500 NIS in their bank account.<br> Using the previous functions we wrote, we will implement the following program:<br> <ul style="text-align:right; direction:rtl;"> <li>Ask the user for a username and password.</li> <li>Verify the username and password using...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Recently a requirement came up to upgrade our bank, so that only a limited number of customers can access it.<br> Define a list of customer names for whom login will be allowed.<br> For customers who are not on the list, the bank will print <samp>You are not a customer of the bank</samp>. </p...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align:right;direction:rtl;">String methods</p> <p style="text-align:right;direction:rtl;"> Let's recall a few string operations:<br>For each of the following exercises, run the example and write 3 more examples yourselves. Explain to yourselves what each method does to the string it receives.<br> If you want to recall what a particular method does, you can run it...
str.split?
"abcdef:ghijk:xyz".split(":")
# Write an example of this method
# Write an example of this method
# Write an example of this method
"543".zfill(4)
# Write an example of this method
# Write an example of this method
# Write an example of this method
"now i am a lowercase string, one day i will be upper".upper()
# Write an example of this method
# Write an example of this method
# Writ...
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align:right;direction:rtl;">World clock</p> <p style="text-align:right;direction:rtl;">In this question we will write a version of a world clock that supports 4 time zones:<br> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>Tel Aviv – TLV</li> <li>London – LDN</li> <li>New York – NYC</li> ...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align:right;direction:rtl;">Hints</p> <p style="text-align:right;direction:rtl;">Useful functions: <span style="direction: rtl; background: #000; text: #000"><br><em>split</em> – a method of <em>string</em>.<br> The % (modulo) operator – think about which number you need to take the modulo with.<br> <em>zfill<...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
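A sketch of how the hinted pieces (split, the % operator, zfill) can fit together. The city offsets below are assumed fixed values relative to Tel Aviv, ignoring daylight saving time, and the function name is my own:

```python
# Assumed offsets (hours) relative to Tel Aviv; real offsets vary with DST.
OFFSETS = {'TLV': 0, 'LDN': -2, 'NYC': -7}

def world_clock(tlv_time, city):
    """Given 'HH:MM' in Tel Aviv, return the time string in `city`."""
    hours, minutes = tlv_time.split(':')          # split -- a string method
    shifted = (int(hours) + OFFSETS[city]) % 24   # wrap around midnight with %
    return str(shifted).zfill(2) + ':' + minutes  # zfill pads to two digits
```

Note how % 24 handles times that cross midnight (01:15 in Tel Aviv is 18:15 the previous day in New York), and zfill restores the leading zero that int() drops.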
<span style="text-align: right; direction:rtl; float: right; clear: both;">List lengths</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Write a program that takes 2 different lists, and prints: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>"<samp>Same lengt...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
<span style="text-align: right; direction:rtl; float: right; clear: both;">Positions</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Write a function that receives a list of lists.<br> If the outer list does not have length 6, the function prints <samp>Only lists of length 6 are allowed</samp>.<br> The function prints "...
# Write your function here
week02/7_Summary.ipynb
PythonFreeCourse/Notebooks
mit
2. Authentication You only need your Algorithmia API Key to run the following commands.
API_KEY = 'YOUR_API_KEY'

# Create a client instance
client = Algorithmia.client(API_KEY)
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
steinam/teacher
mit
3. Face Detection Uses a pretrained model to detect faces in a given image. Read more about Face Detection here
from IPython.display import Image

face_url = 'https://s3.amazonaws.com/algorithmia-assets/data-science-ipython-notebooks/face.jpg'  # Sample Face Image
Image(url=face_url)

Algorithmia.apiKey = 'Simple ' + API_KEY
input = [face_url, "data://.algo/temp/face_result.jpg"]
algo = client.algo('opencv/FaceDetection/0.1.8'...
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
steinam/teacher
mit
4. Content Summarizer SummarAI is an advanced content summarizer with the option of generating context-controlled summaries. It is based on award-winning patented methods related to artificial intelligence and vector space developed at Lawrence Berkeley National Laboratory.
# Get a Wikipedia article as content
wiki_article_name = 'Technological Singularity'
client = Algorithmia.client(API_KEY)
algo = client.algo('web/WikipediaParser/0.1.0')
wiki_page_content = algo.pipe(wiki_article_name)['content']
print('Wikipedia article length: ' + str(len(wiki_page_content)))

# Summarize the Wikipedi...
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
steinam/teacher
mit
5. Latent Dirichlet Allocation This algorithm takes a group of documents (anything that is made of up text), and returns a number of topics (which are made up of a number of words) most relevant to these documents. Read more about Latent Dirichlet Allocation here
# Get up to 20 random Wikipedia articles
client = Algorithmia.client(API_KEY)
algo = client.algo('web/WikipediaParser/0.1.0')
random_wiki_article_names = algo.pipe({"random": 20})
random_wiki_articles = []
for article_name in random_wiki_article_names:
    try:
        article_content = algo.pipe(article_name)['conten...
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
steinam/teacher
mit
6. Optical Character Recognition Recognize text in your images. Read more about Optical Character Recognition here
from IPython.display import Image

businesscard_url = 'https://s3.amazonaws.com/algorithmia-assets/data-science-ipython-notebooks/businesscard.jpg'  # Sample Image
Image(url=businesscard_url)

input = {"src": businesscard_url,
         "hocr": {
             "tessedit_create_hocr": 1,
             "tessedit_pageseg_mode": 1,
             "tessedit_char_whitelist": "abcd...
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
steinam/teacher
mit
Gender Estimation for First Names This notebook gives a simple example of a naive Bayes classifier. We try to predict the gender of a first name. In order to train our classifier, we need a training set of names that are marked as being either male or female. We happen to have two text files, names-female.txt and names-male....
def read_names(file_name):
    Result = []
    with open(file_name, 'r') as file:
        for name in file:
            Result.append(name[:-1])  # discard newline
    return Result

FemaleNames = read_names('names-female.txt')
MaleNames = read_names('names-male.txt')
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us compute the prior probabilities $P(\texttt{Female})$ and $P(\texttt{Male})$ for the classes $\texttt{Female}$ and $\texttt{Male}$. In the lecture it was shown that the prior probability of a class $C$ in a training set $T$ is given as: $$ P(C) \approx \frac{\mathtt{card}\bigl(\{\, t \in T \;|\; \mathtt{class}(t) = C \,\}\bigr)}{\mathtt{card}(T)} $$
pFemale = len(FemaleNames) / (len(FemaleNames) + len(MaleNames))
pMale = len(MaleNames) / (len(FemaleNames) + len(MaleNames))
pFemale
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
As a first attempt to solve the problem we will use the last character of a name as its feature. We have to compute the conditional probability for every possible letter that occurs as the last letter of a name. The general formula to compute the conditional probability of a feature $f$ given a class $C$ is the follo...
def conditional_prop(c, g):
    if g == 'f':
        return len([n for n in FemaleNames if n[-1] == c]) / len(FemaleNames)
    else:
        return len([n for n in MaleNames if n[-1] == c]) / len(MaleNames)
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Next, we define a dictionary Conditional_Probability. For every character $c$ and every gender $g \in \{\texttt{'f'}, \texttt{'m'}\}$, the entry $\texttt{Conditional_Probability}[(c,g)]$ is the conditional probability of observing the last character $c$ if the gender is known to be $g$.
Conditional_Probability = {}
for c in 'abcdefghijklmnopqrstuvwxyz':
    for g in ['f', 'm']:
        Conditional_Probability[c, g] = conditional_prop(c, g)
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Now that we have both the prior probabilities $P(\texttt{'f'})$ and $P(\texttt{'m'})$ and also all the conditional probabilities $P(c|g)$, we are ready to implement our naive Bayes classifier.
def classify(name):
    last = name[-1]
    female = Conditional_Probability[(last, 'f')] * pFemale
    male = Conditional_Probability[(last, 'm')] * pMale
    if female >= male:
        return 'f'
    else:
        return 'm'
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
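The same pipeline can be exercised end to end on a tiny invented name list, so each piece can be checked without the course's data files (these names are illustrative only):

```python
# Tiny invented training set, just to exercise the same pipeline.
FemaleNames = ['anna', 'maria', 'julia', 'sarah']
MaleNames = ['john', 'peter', 'marcus', 'david']

# Prior probabilities from class frequencies
pFemale = len(FemaleNames) / (len(FemaleNames) + len(MaleNames))
pMale = 1 - pFemale

def conditional_prop(c, g):
    # P(last letter = c | gender = g), estimated from the training lists
    names = FemaleNames if g == 'f' else MaleNames
    return len([n for n in names if n[-1] == c]) / len(names)

def classify(name):
    # Compare P(c|f) * P(f) with P(c|m) * P(m) for the last letter c
    last = name[-1]
    female = conditional_prop(last, 'f') * pFemale
    male = conditional_prop(last, 'm') * pMale
    return 'f' if female >= male else 'm'
```

With such a small sample many letters have zero counts for one class, which is exactly the situation where Laplace smoothing would normally be added.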
We test our classifier with two common names.
classify('Christian')
classify('Elena')
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us check the overall accuracy of our classifier with respect to the training set.
total = 0
correct = 0
for n in FemaleNames:
    if classify(n) == 'f':
        correct += 1
    total += 1
for n in MaleNames:
    if classify(n) == 'm':
        correct += 1
    total += 1
accuracy = correct / total
accuracy
Python/6 Classification/Gender-Estimation.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Eigendecomposition of a symmetric matrix If the matrix $A$ is symmetric, the eigenvector matrix $V$ is orthogonal, i.e. its transpose equals its inverse: $$ V^T V = V V^T = I$$ In this case the eigendecomposition can be written as: $$ A = V\Lambda V^T = \sum_{i=1}^{M} {\lambda_i} v_i v_i^T$$ $$ A^{-1} = V \Lambda^{-1} V^T = \sum_{i=1}^{M} \dfrac{1}{\lambda_i} v_i v_i^T$$ Coordinate transformation of a random variable Since the covariance matrix $\Sigma$ of a random variable is symmetric, the above ...
mu = [2, 3]
cov = [[2, 3], [3, 7]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
x1 = np.array([0, 2])
x1_mu = x1 - mu
x2 = np.array([3, 4])
x2_mu = x2 - mu
plt.plot(x1_mu[...
24. PCA/01. ๊ณ ์œ ๋ถ„ํ•ด์™€ ํŠน์ด๊ฐ’ ๋ถ„ํ•ด.ipynb
zzsza/Datascience_School
mit
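The orthogonality and reconstruction identities above can be checked numerically with numpy, using a small symmetric matrix of my own choosing:

```python
import numpy as np

# A small symmetric (positive-definite) matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is the eigensolver for symmetric/Hermitian matrices;
# it returns real eigenvalues and orthonormal eigenvectors.
w, V = np.linalg.eigh(A)

# V^T V = V V^T = I
assert np.allclose(V.T @ V, np.eye(2))

# A = sum_i lambda_i v_i v_i^T
A_rebuilt = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in range(2))

# A^{-1} = sum_i (1 / lambda_i) v_i v_i^T
A_inv = sum((1.0 / w[i]) * np.outer(V[:, i], V[:, i]) for i in range(2))
```

Using eigh rather than eig guarantees the returned eigenvector matrix is orthogonal, which is exactly the property the identities rely on.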
ํŠน์ด๊ฐ’ ๋ถ„ํ•ด ์ •๋ฐฉ ํ–‰๋ ฌ์ด ์•„๋‹Œ ํ–‰๋ ฌ $M$์— ๋Œ€ํ•ด์„œ๋„ ๊ณ ์œ  ๋ถ„ํ•ด์™€ ์œ ์‚ฌํ•œ ๋ถ„ํ•ด๊ฐ€ ๊ฐ€๋Šฅํ•˜๋‹ค. ์ด๋ฅผ ํŠน์ด๊ฐ’ ๋ถ„ํ•ด(singular value decomposition)์ด๋ผ๊ณ  ํ•œ๋‹ค. $M \in \mathbf{R}^{m \times n}$ $$M = U \Sigma V^T$$ ์—ฌ๊ธฐ์—์„œ * $U \in \mathbf{R}^{m \times m}$ * $\Sigma \in \mathbf{R}^{m \times n}$ * $V \in \mathbf{R}^{n \times n}$ ์ด๊ณ  ํ–‰๋ ฌ $U$์™€ $V$๋Š” ๋‹ค์Œ ๊ด€๊ณ„๋ฅผ ๋งŒ์กฑํ•œ๋‹ค. $$ U^T U = UU^T = I ...
from pprint import pprint M = np.array([[1,0,0,0,0],[0,0,2,0,3],[0,0,0,0,0],[0,2,0,0,0]]) print("\nM:"); pprint(M) U, S0, V0 = np.linalg.svd(M, full_matrices=True) print("\nU:"); pprint(U) S = np.hstack([np.diag(S0), np.zeros(M.shape[0])[:, np.newaxis]]) print("\nS:"); pprint(S) print("\nV:"); pprint(V) V = V0.T print(...
24. PCA/01. ๊ณ ์œ ๋ถ„ํ•ด์™€ ํŠน์ด๊ฐ’ ๋ถ„ํ•ด.ipynb
zzsza/Datascience_School
mit
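The shapes and the identity $M = U \Sigma V^T$ can be verified with numpy on the same matrix:

```python
import numpy as np

M = np.array([[1, 0, 0, 0, 0],
              [0, 0, 2, 0, 3],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

# np.linalg.svd returns V already transposed (Vt), and the singular
# values as a 1-d array rather than the full (m x n) Sigma.
U, s, Vt = np.linalg.svd(M, full_matrices=True)

# Rebuild the rectangular Sigma and check the reconstruction
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
M_rebuilt = U @ Sigma @ Vt
```

U is 4x4, Sigma is 4x5, and Vt is 5x5, matching the dimensions stated above.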
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
!ls -l ../data/*.csv
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the previous notebook. For this lab we will also include some additional engineered features in our model. In particular, we will compute the difference in latitude and longitude, as well as the Euclidean distance be...
CSV_COLUMNS = [
    "fare_amount",
    "pickup_datetime",
    "pickup_longitude",
    "pickup_latitude",
    "dropoff_longitude",
    "dropoff_latitude",
    "passenger_count",
    "key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_dateti...
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Feature columns for Wide and Deep model For the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the tf.feature_column.crossed_column constructor. The Deep columns will consist of numeric columns and the embedding columns we...
# 1. Bucketize latitudes and longitudes NBUCKETS = 16 latbuckets = np.linspace(start=38.0, stop=42.0, num=NBUCKETS).tolist() lonbuckets = np.linspace(start=-76.0, stop=-72.0, num=NBUCKETS).tolist() fc_bucketized_plat = # TODO: Your code goes here. fc_bucketized_plon = # TODO: Your code goes here. fc_bucketized_dlat = ...
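To build intuition for what `tf.feature_column.bucketized_column` does with these boundaries, here is a NumPy sketch (not the TensorFlow API): each latitude is mapped to the index of the bucket it falls into, which the feature column then one-hot encodes.

```python
import numpy as np

NBUCKETS = 16
latbuckets = np.linspace(start=38.0, stop=42.0, num=NBUCKETS)

# np.digitize assigns each value the index of the bucket it falls into;
# values below the first boundary get 0, above the last get NBUCKETS
lats = np.array([37.5, 40.7, 41.9, 43.0])
bucket_ids = np.digitize(lats, latbuckets)
print(bucket_ids)
```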
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Gather list of feature columns Next we gather the list of wide and deep feature columns we'll pass to our Wide & Deep model in Tensorflow. Recall, wide columns are sparse, have linear relationship with the output while continuous columns are deep, have a complex relationship with the output. We will use our previously ...
# TODO 2 wide_columns = [ # One-hot encoded feature crosses # TODO: Your code goes here. ] deep_columns = [ # Embedding_columns # TODO: Your code goes here. # Numeric columns # TODO: Your code goes here. ]
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Build a Wide and Deep model in Keras To build a wide-and-deep network, we connect the sparse (i.e. wide) features directly to the output node, but pass the dense (i.e. deep) features through a set of fully connected layers. Here's what that model architecture looks like using the Functional API. First, we'll create our input col...
INPUT_COLS = [ "pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude", "passenger_count", ] inputs = { colname: Input(name=colname, shape=(), dtype="float32") for colname in INPUT_COLS }
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Then, we'll define our custom RMSE evaluation metric and build our wide and deep model. Exercise. Complete the code in the function build_model below so that it returns a compiled Keras model. The argument dnn_hidden_units should represent the number of units in each layer of your network. Use the Functional API to bui...
def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_model(dnn_hidden_units): # Create the deep part of model deep = # TODO: Your code goes here. # Create the wide part of model wide = # TODO: Your code goes here. # Combine deep and wide parts of...
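The wide-and-deep combination itself is just function composition: the deep features pass through hidden layers, the wide (sparse) features feed the output directly, and both are concatenated before a final linear node. A framework-free NumPy sketch of one forward pass, with made-up placeholder weights and feature counts, may help before writing the Keras version:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical inputs: 3 dense (deep) features, 5 sparse (wide) features
deep_in = rng.normal(size=(4, 3))                       # batch of 4 examples
wide_in = rng.integers(0, 2, size=(4, 5)).astype(float)  # one-hot-like

# Deep part: two hidden layers of 10 units, as in HIDDEN_UNITS = [10, 10]
W1, W2 = rng.normal(size=(3, 10)), rng.normal(size=(10, 10))
deep = relu(relu(deep_in @ W1) @ W2)

# Concatenate deep output with the untransformed wide features,
# then a single linear output node predicts the fare
W_out = rng.normal(size=(10 + 5, 1))
output = np.concatenate([deep, wide_in], axis=1) @ W_out
print(output.shape)  # (4, 1)
```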
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, we can call the build_model to create the model. Here we'll have two hidden layers, each with 10 neurons, for the deep part of our model. We can also use plot_model to see a diagram of the model we've created.
HIDDEN_UNITS = [10, 10] model = build_model(dnn_hidden_units=HIDDEN_UNITS) tf.keras.utils.plot_model(model, show_shapes=False, rankdir="LR")
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, we'll set up our training variables, create our datasets for training and validation, and train our model. (We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of...
BATCH_SIZE = 1000 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around NUM_EVALS = 50 # how many times to evaluate NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample trainds = create_dataset( pattern="../data/taxi-train*", batch_size=BATCH_SIZE, mode="train" ) evalds = create_d...
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Just as before, we can examine the history to see how the RMSE changes through training on the train set and validation set.
RMSE_COLS = ["rmse", "val_rmse"] pd.DataFrame(history.history)[RMSE_COLS].plot()
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
<h3>Example: Age</h3>
alter = {'Peter':45,'Julia':23,'Mathias':36} # create a dictionary print(alter) alter['Julia']=27 # change Julia's age alter['Monika']=33 # add Monika - the order of the keys does not matter print(alter) if 'Monika' in alter: print(alter['Monika'])
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h3>Example: Temperatures in Cities</h3>
temperatur={'stuttgart':32.9,'muenchen':29.8,'hamburg':24.4} # create a dictionary of temperatures in various cities temperatur['koeln']=29.7 # add the temperature in koeln print(temperatur) # print the temperatures for stadt in temperatur: print('The temperature in %s is %g °C' % (stadt,temp...
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h2>Example: Students - with a dictionary</h2>
st={} # create an empty dictionary st['100100'] = {'Mathe':1.0, 'Bwl':2.5} st['100200'] = {'Mathe':2.3, 'Bwl':1.8} print(st.items()) print(type(st)) print(st.values()) print(st.keys())
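Individual grades can then be looked up by chaining the two keys, using the same data as the cell above:

```python
st = {}
st['100100'] = {'Mathe': 1.0, 'Bwl': 2.5}
st['100200'] = {'Mathe': 2.3, 'Bwl': 1.8}

# Chained indexing: the outer key is the matriculation number,
# the inner key is the subject
print(st['100100']['Mathe'])  # 1.0
```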
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h2>Step-by-step construction of a student directory</h2>
def stud_verz():     stud={} # create an empty dictionary     student=input('Enter matriculation number as a string:')     while student:         Mathe = input('Enter Mathe grade:')         Bwl = input('Enter Bwl grade:')         stud[student]={"Mathematik":Mathe,"BWL":Bwl}         student=input('Enter matriculation number as a str...
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h2>Combining one dictionary with another <li>d2.update(d1)
d1={'hans':1.8,'peter':1.73,'rainer':1.74} d2={'petra':1.8,'hannes':1.73,'rainer':1.78} d1.update(d2) print(d1)
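Note that `update` overwrites values for keys present in both dictionaries. In the example above, 'rainer' appears in both, so it ends up with the value from d2:

```python
d1 = {'hans': 1.8, 'peter': 1.73, 'rainer': 1.74}
d2 = {'petra': 1.8, 'hannes': 1.73, 'rainer': 1.78}
d1.update(d2)

# 'rainer' now carries the value from d2, not the original 1.74
print(d1['rainer'])  # 1.78
print(len(d1))       # 5 keys in total
```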
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h2>Data access in a dictionary
deutsch = {'key':['Schluessel','Taste'],'slice':['Scheibe','Schnitte','Stueck'],'value':['Wert']} print(deutsch) ###### Catching lookup errors def uebersetze(wort,d):     if wort in d:         return d[wort]     else:         return 'unbekannt' print(uebersetze('slice',deutsch)) uebersetze('search',deutsch)
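The same guard can be written more compactly with `dict.get`, which returns a default value when the key is missing:

```python
deutsch = {'key': ['Schluessel', 'Taste'],
           'slice': ['Scheibe', 'Schnitte', 'Stueck'],
           'value': ['Wert']}

# dict.get(key, default) replaces the explicit membership test
print(deutsch.get('slice', 'unbekannt'))   # ['Scheibe', 'Schnitte', 'Stueck']
print(deutsch.get('search', 'unbekannt'))  # unbekannt
```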
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
<h1>Developing a vocabulary trainer
# Develop a vocabulary trainer import random # Function definitions def dict_laden(pfad):     d={}     try:         datei = open(pfad)         liste = datei.readlines()         for eintrag in liste:             l_eintrag = eintrag.split()             d[l_eintrag[0]]=l_eintrag[1:]         datei.close()     except:         ...
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
mediagit2016/workcamp-maschinelles-lernen-grundlagen
gpl-3.0
Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main ...
# TODO: Minimum price of the data minimum_price = np.min(prices) # TODO: Maximum price of the data maximum_price = np.max(prices) # TODO: Mean price of the data mean_price = np.mean(prices) # TODO: Median price of the data median_price = np.median(prices) # TODO: Standard deviation of prices of the data std_price =...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "...
import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LinearRegression reg = LinearRegression() rm = data["RM"].reshape(-1,1) reg.fit(rm, prices) # Create the figure window plt.plot(rm, reg.predict(rm), color='red', lw=1) plt.scatter(rm, prices, alpha=0.5, c=...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
LSTAT
import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LinearRegression reg = LinearRegression() lstat = data["LSTAT"].reshape(-1,1) reg.fit(lstat, prices) # Create the figure window plt.plot(lstat, reg.predict(lstat), color='red', lw=1) plt.scatter(lstat, prices, alpha=0.5,...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
PTRATIO
import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LinearRegression reg = LinearRegression() pt_ratio = data["PTRATIO"].reshape(-1,1) reg.fit(pt_ratio, prices) # Create the figure window plt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1) plt.scatter(pt_ratio, prices, alpha=0....
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions....
# TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): """ Calculates and returns the performance score between true and predicted values based on the metric chosen. """ # TODO: Calculate the performance score between 'y_true' and 'y_predict' ...
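As a sanity check of the formula that `r2_score` implements, $R^2 = 1 - SS_{res}/SS_{tot}$, here is a small NumPy computation on a set of hypothetical true/predicted values:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))  # 0.923
```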
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Answer: - Yes, I would consider this model to have successfully captured the variation of the target variable. - R² is 0.923, which is very close to 1, meaning the model explains about 92.3% of the variance in the target variable. - As shown below, the values can be plotted to get a visual representation of this scenario
import numpy as np import matplotlib.pyplot as plt %matplotlib inline true, pred = [3.0, -0.5, 2.0, 7.0, 4.2],[2.5, 0.0, 2.1, 7.8, 5.3] #plot true values true_handle = plt.scatter(true, true, alpha=0.6, color='blue', label = 'True' ) #reference line fit = np.poly1d(np.polyfit(true, true, 1)) lims = np.linspace(min(tr...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the datase...
# TODO: Import 'train_test_split' from sklearn.cross_validation import train_test_split # TODO: Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0) # Success print "Training and testing split was successful."
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Question 3 - Training and Testing What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: - A learning algorithm is used for prediction or inference on datasets. We do not need learning a...
# Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices)
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to parti...
vs.ModelComplexity(X_train, y_train)
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering fro...
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import make_scorer from sklearn.grid_search import GridSearchCV def fit_model(X, y): """ Performs grid search over the 'max_depth' parameter for a decision tree reg...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Answer: - 4, which is the same as my guess in Question 6 Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of ...
# Produce a matrix for client data client_data = [[5, 17, 15], # Client 1 [4, 32, 22], # Client 2 [8, 3, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Answer: The predicted selling prices are \$391,183.33, \$189,123.53 and \$942,666.67 for Client 1's home, Client 2's home and Client 3's home respectively. Facts from the descriptive statistics: - Distribution: Statistics for Boston housing dataset: Minimum price: \$105,000.00 Maximum price: \$1,024,800.00 ...
from matplotlib import pyplot as plt clients = np.transpose(client_data) pred = reg.predict(client_data) for i, feat in enumerate(['RM', 'LSTAT', 'PTRATIO']): plt.scatter(features[feat], prices, alpha=0.25, c=prices) plt.scatter(clients[i], pred, color='black', marker='x', linewidths=2) plt.xlabel(feat) ...
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or...
vs.PredictTrials(features, prices, fit_model, client_data)
boston_housing/boston_housing.ipynb
sriharshams/mlnd
apache-2.0
CartesianCoords and PolarCoords are classes that were designed to be used in-house for the conversion between Cartesian and Polar coordinates. You just need to initialise the object with some coordinates, and then it is easy to extract the relevant information. 3D coordinates are possible, but the z-coordinate has a de...
cc = pmt.CartesianCoords(5,5) print("2D\n") print("x-coordinate: {}".format(cc.x)) print("y-coordinate: {}".format(cc.y)) print("radial: {}".format(cc.r)) print("azimuth: {}".format(cc.a)) cc3D = pmt.CartesianCoords(1,2,3) print("\n3D\n") print("x-coordinate: {}".format(cc3D.x)) print("y-coordinate: {}"....
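The conversion behind the `r` and `a` attributes is presumably the standard Cartesian-to-polar one; a minimal sketch using only the `math` module (an illustration, not pmt's actual implementation):

```python
import math

def to_polar(x, y):
    """Return (radius, azimuth) for a 2D Cartesian point."""
    r = math.hypot(x, y)      # radius: sqrt(x^2 + y^2)
    a = math.atan2(y, x)      # azimuth in radians, quadrant-aware
    return r, a

r, a = to_polar(5, 5)
print(round(r, 4))  # 7.0711
print(round(a, 4))  # 0.7854 (= pi/4)
```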
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
pmt.PolarCoords works in exactly the same way, but instead you initialise it with polar coordinates (radius, azimuth and height (optional), respectively) and the cartesian ones can be extracted as above. Function 1: in_poly
print(pmt.in_poly.__doc__)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Takes three required arguments: x, specifying the x-coordinate of the point you would like to test y, specifying the y-coordinate of the point you would like to test n, the number of sides of the polygon Optional arguments are: r, the radius of the circumscribed circle (equal to the distance from the circumcentre ...
pmt.in_poly(x=5, y=30, n=3, r=40, plot=True) pmt.in_poly(x=5, y=30, n=3, r=40) # No graph will be generated, more useful for use within other functions pmt.in_poly(x=0, y=10, n=6, r=20, plot=True) # Dot changes colour to green when inside the polygon import numpy as np pmt.in_poly(x=-10, y=-25, n=6, r=20, rotatio...
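For intuition about the geometry underlying a test like `in_poly` (this is a generic sketch, not pmt's implementation): a point lies inside a convex polygon if and only if it is on the same side of every edge, which a cross-product sign test captures.

```python
import math

def regular_polygon_vertices(n, r, rotation=0.0):
    """Vertices of a regular n-gon with circumradius r, centred at the origin."""
    return [(r * math.cos(2 * math.pi * k / n + rotation),
             r * math.sin(2 * math.pi * k / n + rotation)) for k in range(n)]

def point_in_convex_polygon(x, y, vertices):
    """Half-plane test: inside iff the cross product has the same sign
    for every edge (counter-clockwise vertex order assumed)."""
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

hexagon = regular_polygon_vertices(n=6, r=20)
print(point_in_convex_polygon(0, 10, hexagon))   # True
print(point_in_convex_polygon(40, 0, hexagon))   # False
```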
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
And of course, as n becomes large, the polygon tends to a circle:
pmt.in_poly(x=3, y=5, n=100, r=10, plot=True)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Function 2: plot_circular_fidi_mesh
print(pmt.plot_circular_fidi_mesh.__doc__)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Has only one required argument: diameter, the diameter of the circle you would like to plot Optional arguments: x_spacing, the width of the mesh elements. Default x_spacing=2 y_spacing, the height of the mesh elements. Default y_spacing=2 (only integers are currently supported for x- and y-spacing.) centre_mesh, o...
pmt.plot_circular_fidi_mesh(diameter=60) pmt.plot_circular_fidi_mesh(diameter=60, x_spacing=2, y_spacing=2, centre_mesh=True) # Note the effect of centre_mesh=True. In the previous plot, the element boundaries are aligned with 0 on the x- and y-axes. # In this case, centring the mesh has the effect of producing a m...
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Function 3: plot_poly_fidi_mesh
print(pmt.plot_poly_fidi_mesh.__doc__)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Requires two arguments: diameter, the diameter of the circumscribed circle n, the number of sides the polygon should have Optional arguments: x_spacing y_spacing centre_mesh show_axes show_title (All of the above have the same function as in plot_circular_fidi_mesh, and below, like in_poly) rotation translate
pmt.plot_poly_fidi_mesh(diameter=50, n=5, x_spacing=1, y_spacing=1, rotation=np.pi/10)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
Function 4: find_circumradius
print(pmt.find_circumradius.__doc__)
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
If you need to specify the side length, or the distance from the circumcentre to the middle of one of the faces, this function will convert that value to the circumradius (not diameter!) that would give the correct side length or apothem.
pmt.find_circumradius(n=3, side=10)
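The relationship presumably applied here is the standard one, $R = s / (2\sin(\pi/n))$ for side length $s$. Checking it by hand for the triangle above (a sketch of the formula, not pmt's code):

```python
import math

def circumradius_from_side(n, side):
    """Circumradius of a regular n-gon with the given side length."""
    return side / (2 * math.sin(math.pi / n))

# For n=3, side=10: R = 10 / (2 sin 60°) = 10 / sqrt(3)
print(round(circumradius_from_side(n=3, side=10), 4))  # 5.7735
```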
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause