Dataset schema: markdown (string, 0 – 1.02M chars) · code (string, 0 – 832k chars) · output (string, 0 – 1.02M chars) · license (string, 3 – 36 chars) · path (string, 6 – 265 chars) · repo_name (string, 6 – 127 chars)
4. All together
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m...
_____no_output_____
Apache-2.0
examples/tutorials/07_sampler.ipynb
abrikoseg/batchflow
5. Notes * parallelism: `Sampler` objects allow for parallelism with `multiprocessing`. Just make sure to use explicitly defined functions (not `lambda`s) when running `Sampler.apply` or `Sampler.truncate`:
def transform(m):
    return np.sum(np.abs(m), axis=1)

ns = NS('n', dim=2).truncate(2.0, 0.8, expr=transform) + 4

from multiprocessing import Pool

def test_func(s):
    return s.sample(2)

p = Pool(5)
p.map(test_func, [ns, ns, ns])
_____no_output_____
Apache-2.0
examples/tutorials/07_sampler.ipynb
abrikoseg/batchflow
Example usage. To use `pyspark_delta_utility` in a project:
import pyspark_delta_utility
print(pyspark_delta_utility.__version__)
_____no_output_____
MIT
docs/example.ipynb
AraiYuno/pyspark-delta-utility
MLP train on K=2,3,4. Train a generic MLP as a binary classifier of protein-coding/non-coding RNA. Set aside a 20% test set, stratified shuffled by length. On the non-test portion, use a random shuffle to partition the train and validation sets. Train on the 80% and evaluate on the 20% validation set.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

tf.keras.backend.set_floatx('float64')
_____no_output_____
MIT
Length_Study/MLP_01.ipynb
ShepherdCode/ShepherdML
K-mer frequency, K=2
def read_features(nc_file, pc_file):
    nc = pd.read_csv(nc_file)
    pc = pd.read_csv(pc_file)
    nc['class'] = 0
    pc['class'] = 1
    rna_mer = pd.concat((nc, pc), axis=0)
    return rna_mer

rna_mer = read_features('ncRNA.2mer.features.csv', 'pcRNA.2mer.features.csv')
rna_mer
# Split into train/test stratified by sequen...
_____no_output_____
MIT
Length_Study/MLP_01.ipynb
ShepherdCode/ShepherdML
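The helper functions `make_train_test` and `prepare_train_set` are called in the next cells but their definitions are truncated here. A minimal sketch of what they might look like, assuming scikit-learn splits (the split fractions and the stratification column are assumptions; the text says stratification is by length, which is not shown here):

```python
from sklearn.model_selection import train_test_split

def make_train_test(rna_mer, test_frac=0.2):
    # Hold out a test set; assumed stratified on 'class' for this sketch.
    train_set, test_set = train_test_split(
        rna_mer, test_size=test_frac, stratify=rna_mer['class'], random_state=42)
    return train_set, test_set

def prepare_train_set(train_set, valid_frac=0.2):
    # Random shuffle split of the non-test data into train/validation.
    tr, va = train_test_split(train_set, test_size=valid_frac, random_state=42)
    X_train, y_train = tr.drop(columns='class').values, tr['class'].values
    X_valid, y_valid = va.drop(columns='class').values, va['class'].values
    return X_train, y_train, X_valid, y_valid
```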
K-mer frequency, K=3
rna_mer = read_features('ncRNA.3mer.features.csv', 'pcRNA.3mer.features.csv')
(train_set, test_set) = make_train_test(rna_mer)
(X_train, y_train, X_valid, y_valid) = prepare_train_set(train_set)
act = "sigmoid"
mlp3mer = keras.models.Sequential([
    keras.layers.LayerNormalization(trainable=False),
    keras.layers.Dense(32, act...
_____no_output_____
MIT
Length_Study/MLP_01.ipynb
ShepherdCode/ShepherdML
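The model definition above is truncated. A hypothetical completion, assuming a small MLP with a sigmoid output head for binary classification (layer sizes beyond the visible `Dense(32, ...)` are guesses):

```python
mlp3mer = keras.models.Sequential([
    keras.layers.LayerNormalization(trainable=False),
    keras.layers.Dense(32, activation=act),     # visible in the truncated cell
    keras.layers.Dense(1, activation='sigmoid') # assumed binary-classification head
])
mlp3mer.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
mlp3mer.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
```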
K-mer frequency, K=4
rna_mer = read_features('ncRNA.4mer.features.csv', 'pcRNA.4mer.features.csv')
(train_set, test_set) = make_train_test(rna_mer)
(X_train, y_train, X_valid, y_valid) = prepare_train_set(train_set)
act = "sigmoid"
mlp4mer = keras.models.Sequential([
    keras.layers.LayerNormalization(trainable=False),
    keras.layers.Dense(32, act...
_____no_output_____
MIT
Length_Study/MLP_01.ipynb
ShepherdCode/ShepherdML
Analysis of Games from the Apple Store > This dataset was taken from [Kaggle](https://www.kaggle.com/tristan581/17k-apple-app-store-strategy-games) and it was collected on the 3rd of August 2019 using the [iTunes API](https://affiliate.itunes.apple.com/resources/documentation/itunes-store-web-service-search-api/) and ...
# Important imports for the analysis of the dataset
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")

# Show the plot in the same window as the notebook
%matplotlib inline

# Create the dataframe and check the first 8 rows
app_df = pd.read_csv("appsto...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
2. Exploratory Analysis
app_df_cut.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 17007 entries, 0 to 17006 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 17007 non-null int64 1 Name ...
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
***From the above cell I understand that I should take a closer look at the columns listed below because they have some missing values:
- Average User Rating
- User Rating Count
- Price
- Languages

Another important thing to check is whether there are any **duplicate IDs** and, if so, remove them. Also, the last two colu...
# Most reviewed app
#app_df_cut.iloc[app_df_cut["User Rating Count"].idxmax()]

# A better way of seeing the most reviewed apps
app_df_cut = app_df_cut.sort_values(by="User Rating Count", ascending=False)
app_df_cut.head(5)
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Rating columns > I'm going to consider that all the NaN values in the "User Rating Count" column mean that the game received no ratings, and the count is therefore 0. If the app received no ratings, then the "Average User Rating" will also be zero for these games.
# Get the columns "User Rating Count" and "Average User Rating" where they are both equal to NaN and set the
# values to 0.
app_df_cut.loc[(app_df_cut["User Rating Count"].isnull()) |
               (app_df_cut["Average User Rating"].isnull()),
               ["Average User Rating", "User Rating Count"]] = 0
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
In-app Purchases column > I'm considering that the null values within the "In-app Purchases" column mean that there are no in-app purchases available. **Different assumptions could have been made, but I will continue with this one for now.**
# Get the column "In-app Purchases" where the value is NaN and set it to zero
app_df_cut.loc[app_df_cut["In-app Purchases"].isnull(), "In-app Purchases"] = 0
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
ID column > Let's check if there are missing or duplicate IDs in the dataset:
# Check if there are missing or 0 IDs
app_df_cut.loc[(app_df_cut["ID"] == 0) | (app_df_cut["ID"].isnull()), "ID"]

# Check for duplicates in the ID column
len(app_df_cut["ID"]) - len(app_df_cut["ID"].unique())

# The number of unique values is lower than the total amount of IDs, therefore there are dupli...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Size column > I will check if there are any missing or 0 values in the Size column. If so, they will be removed from the data since we cannot know their value.
# Check if there are null values in the Size column
app_df_cut[(app_df_cut["Size"].isnull()) | (app_df_cut['Size'] == 0)]

# Drop the only row in which the game has no size
app_df_cut.drop([16782], axis=0, inplace=True)

# Convert the size to MB
app_df_cut["Size"] = round(app_df_cut["Size"]/1000000)
app_df_cut.head(5)
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Price column > Games with a missing value in the price column will be dropped
# Drop the row with NaN values in the "Price" column
app_df_cut = app_df_cut.drop(app_df_cut.loc[app_df_cut["Price"].isnull()].index)
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Languages column> Games with a missing value in the "Languages" column will be dropped
# Drop the rows with NaN values in the "Languages" column
app_df_cut = app_df_cut.drop(app_df_cut.loc[app_df_cut["Languages"].isnull()].index)
app_df_cut.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 16763 entries, 1378 to 17006 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 16763 non-null int64 1 Name ...
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Age Rating column > I will pad the Age Rating column with a 0 to make it easier to sort the values later
# Put a 0 in front of every value in the 'Age Rating' column
app_df_cut['Age Rating'] = app_df_cut['Age Rating'].str.pad(width=3, fillchar='0')
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Now that the dataset is organized, let's save it into a CSV file so that we do not have to redo all the steps above.
app_df_cut.to_csv("app_df_clean.csv", index=False)
app_df_clean = pd.read_csv("app_df_clean.csv")
app_df_clean.head()

# Transform the string dates into datetime objects
app_df_clean["Original Release Date"] = pd.to_datetime(app_df_clean["Original Release Date"])
app_df_clean["Current Version Release Date"] = pd.to_date...
<class 'pandas.core.frame.DataFrame'> RangeIndex: 16763 entries, 0 to 16762 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ID 16763 non-null int64 1 Name ...
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
3. Graphics and Insights. Evolution of the Apps' Size > Do the apps get bigger with time?
# Make the figure
plt.figure(figsize=(16,10))

# Variables
years = app_df_clean["Original Release Date"].apply(lambda date: date.year)
size = app_df_clean["Size"]

# Plot a swarmplot
palette = sns.color_palette("muted")
size = sns.swarmplot(x=years, y=size, palette=palette)
size.set_ylabel("Size (in MB)", fontsize=16)
...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **With the advance of technology and the internet becoming cheaper and cheaper, more people have access to faster networks. The graph above shows that, as the years go by, the games' sizes get bigger. Some games with more than 2GB can be noted, reaching a maximum value of 4GB, but they are not the most...
# Make the figure
plt.figure(figsize=(16,10))

# Plot a countplot
palette1 = sns.color_palette("inferno_r")
apps_per_year = sns.countplot(x=years, data=app_df_clean, palette=palette1)
apps_per_year.set_xlabel("Year of Release", fontsize=16)
apps_per_year.set_ylabel("Amount of Games", fontsize=16)
apps_per_year.set_titl...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **From 2008 to 2016 we can identify a drastic increase in the number of games released each year, with the largest jump between 2015 and 2016. After 2016 the number of games released per year drops almost linearly for two years (2019 cannot be considered yet because the data was co...
# Make a list of years from 2014 to 2018
years_lst = [year for year in range(2014, 2019)]

# For loop to get a picture of the amount of games produced from August to December
for year in years_lst:
    from_August = app_df_clean["Original Release Date"].apply(
        lambda date: (date.year == year) & (date.month >= 8)).sum()
    ...
In 2014, 44.1% games were produced from August to December. In 2015, 42.2% games were produced from August to December. In 2016, 39.9% games were produced from August to December. In 2017, 40.8% games were produced from August to December. In 2018, 42.4% games were produced from August to December.
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **Having checked the previous five years we can see that the amount of games released from August to December represents a significant portion of the whole and that it can be considered roughly constant at 42%. Nevertheless, the last two years show a tendency for a linear decrease in the quantity of games released pe...
# Make the figure
plt.figure(figsize=(16,10))

# Variables. Sort by age to put the legend in order.
data = app_df_clean.sort_values(by='Age Rating')

# Plot a countplot
palette1 = sns.color_palette("viridis")
apps_per_year = sns.countplot(x=years, data=data, palette=palette1, hue='Age Rating')
apps_per_year.set_xlabel(...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **As shown above, most apps tend to target all ages.** The number of apps has increased considerably in the past years, indicating that producing an app has been a trend and possibly a lucrative market. That being said, it is important to analyse if there is a preference for free or paid games and the range of price...
# Make the figure
plt.figure(figsize=(16,10))

# Variables
price = app_df_clean["Price"]

# Plot a countplot
palette2 = sns.light_palette("green", reverse=True)
price_vis = sns.countplot(x=price, palette=palette2)
price_vis.set_xlabel("Price (in US dollars)", fontsize=16)
price_vis.set_xticklabels(price_vis.get_xtickla...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **We can see that the majority of the games are free. That leads me to analyse whether the free apps have more in-app purchases than the paid ones, meaning that this might be their source of income.**
# Make the figure
plt.figure(figsize=(16,10))

# Variables
in_app_purchases = app_df_clean["In-app Purchases"].str.split(",").apply(lambda lst: len(lst))

# Plot a stripplot
palette3 = sns.color_palette("BuGn_r", 23)
in_app_purchases_vis = sns.stripplot(x=price, y=in_app_purchases, palette=palette3)
in_app_purchases_vi...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **As expected, free and lower-priced apps provide more items to be purchased than expensive games. Two reasons can be named:**
>
> **1. The developers have to invest money into making the games and updating them, therefore they need a source of income. In the case of free games, this comes from the in-app purchases ava...
# Plot a distribution of the top 200 apps by their price

# Make the figure
plt.figure(figsize=(16,10))

# Plot a countplot
palette4 = sns.color_palette("BuPu_r")
top_prices = sns.countplot(app_df_clean.iloc[:200]["Price"], palette=palette4)
top_prices.set_xlabel("Price (in US dollars)", fontsize=16)
top_prices.set_xti...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **The graph above shows that among the top 200 games, the vast majority are free. This result makes sense considering you don't have to invest any money to start playing and can spend afterward if you would like to invest in it.** Even though most games are free, we should check whether one type of app (paid or free) ...
# Create the DataFrames needed
paid = app_df_clean[app_df_clean["Price"] > 0]
total_paid = len(paid)
free = app_df_clean[app_df_clean["Price"] == 0]
total_free = len(free)

# Make the figure and the axes (1 row, 2 columns)
fig, axes = plt.subplots(1, 2, figsize=(16,10))
palette5 = sns.color_palette("gist_yarg", 10)
# ...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **There are no indications of whether a paid or a free game is better. Actually, the pattern of user ratings is pretty much the same for both types of games. The graph above shows that both categories seem to deliver a good service and mostly satisfy their customers, as most of the ratings are between 4-5 stars. We can a...
# Make the figure
plt.figure(figsize=(16,10))

# Make a countplot
palette6 = sns.color_palette("BuGn_r")
age_vis = sns.countplot(x=app_df_clean["Age Rating"], order=["04+", "09+", "12+", "17+"], palette=palette6)
age_vis.set_xlabel("Age Rating", fontsize=16)
age_vis.set_ylabel("Amount of Games", fontsize=16)
age_vis.se...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
> **Most of the apps are in the 04+ age category, which can be translated as "everyone can play". This ensures that the developers are targeting a much broader audience with their games.** Languages > Do most games have various choices of languages?
# Create a new column that contains the amount of languages the app has available
app_df_clean["numLang"] = app_df_clean["Languages"].apply(lambda x: len(x.split(",")))

# Make the figure
plt.figure(figsize=(16,10))

# Variables
lang = app_df_clean.loc[app_df_clean["numLang"] <= 25, "numLang"]

# Plot a countplot
palette7...
_____no_output_____
MIT
App Store Strategy Game Analysis.ipynb
MarceloFischer/App-Store-Dataset-Analysis
Logistic Regression 3-class Classifier. Shown below are the decision boundaries of a logistic-regression classifier on the `iris` dataset. The data points are colored according to their labels.
print(__doc__)

# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
import pandas as pd

mydata = pd.read_csv("dataset.csv")
dt = mydata.values
X = dt[:, :2]
Y = dt[:, 3]
Y...
Automatically created module for IPython interactive environment
CC-BY-3.0
Assignments/hw3/HW3_Generalized_Linear_Model_finished/plot_iris_logistic1.ipynb
Leon23N/Leon23N.github.io
ga_sim. This Jupyter notebook is intended to create simulations of dwarf galaxies and globular clusters, using the DES catalog as field stars. These simulations will later be copied to the gawa notebook, a pipeline to detect stellar systems among field stars. In principle this pipeline reads a table in a database with g and r magnitudes, sub...
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units as u
import healpy as hp
import astropy.io.fits as fits
from astropy.table import Table
from astropy.io.fits import getdata
import sqlalchemy
import json
from pathlib import Path
import os
import sys
import parsl
from parsl.app.app im...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Below are the items of the configuration for field stars and simulations. A small description follows as a comment.
# Main settings:
confg = "ga_sim.json"

# read config file
with open(confg) as fstream:
    param = json.load(fstream)

age_simulation = 1.0e10  # in years
Z_simulation = 0.001  # Assuming Z_sun = 0.0152

# Directory for the results
os.system("mkdir -p " + param['results_path'])

# Reading reddening files
hdu_ngp = ...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Downloading the isochrone table with the latest improvements from Padova, and printing the age and metallicity of the downloaded isochrone. Try one more time in case of problems; sometimes there is a problem with the connection to Padova.
download_iso(param['padova_version_code'], param['survey'], Z_simulation, age_simulation, param['av_simulation'], param['file_iso'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Checking the age and metallicity of the isochrone:
# Reading [M/H], log_age, mini, g
iso_info = np.loadtxt(param['file_iso'], usecols=(1, 2, 3, 26), unpack=True)
FeH_iso = iso_info[0][0]
logAge_iso = iso_info[1][0]
m_ini_iso = iso_info[2]
g_iso = iso_info[3]
print('[Fe/H]={:.2f}, Age={:.2f} Gyr'.format(FeH_iso, 10**(logAge_iso-9)))
mM_mean = (param['mM_max'] + param[...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Reading the catalog and writing it as a FITS file (to avoid reading from the DB many times in case the same catalog is used multiple times).
RA, DEC, MAG_G, MAGERR_G, MAG_R, MAGERR_R = read_cat(
    param['vac_ga'], param['ra_min'], param['ra_max'], param['dec_min'],
    param['dec_max'], param['mmin'], param['mmax'], param['cmin'], param['cmax'],
    "DES_Y6_Gold_v1_derred.fits", 1.19863, 0.83734, ngp, sgp, param['results_path'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
The cells below read the positions, calculate the extinction using the previous function, correct the apparent magnitudes (top of the Galaxy), filter the stars by magnitude and color ranges, and write a file with the original positions of the stars and the corrected magnitudes. Simulation of dwarf galaxies and globula...
RA_pix, DEC_pix, r_exp, ell, pa, dist, mass, mM, hp_sample_un = gen_clus_file(
    param['ra_min'], param['ra_max'], param['dec_min'], param['dec_max'],
    param['nside_ini'], param['border_extract'],
    param['mM_min'], param['mM_max'],
    param['log10_rexp_min'], param['log10_rexp_max'],
    ...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Dist stars. Reading the magnitude and error data.
mag1_, err1_, err2_ = read_error(param['file_error'], 0.015, 0.015)
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Now simulating the clusters using 'faker' function.
@python_app
def faker_app(N_stars_cmd, frac_bin, IMF_author, x0, y0, rexp, ell_, pa, dist, hpx, output_path):
    global param
    faker(
        N_stars_cmd, frac_bin, IMF_author, x0, y0, rexp, ell_, pa, dist, hpx,
        param['cmin'],
        ...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Now functions to join catalogs of simulated clusters and field stars, and to estimate signal-to-noise ratio.
# Reads the _clus.dat files from the "result/fake_clus" directory
# Generates the file "result/<survey>_mockcat_for_detection.fits"
mockcat = join_cat(
    param['ra_min'], param['ra_max'], param['dec_min'], param['dec_max'],
    hp_sample_un, param['survey'], RA, DEC, MAG_G, MAG_R, MAGERR_G,
    ...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
If necessary, split the catalog with simulated clusters into many files according to the HEALPix schema.
os.makedirs(param['hpx_cats_path'], exist_ok=True)
ipix_cats = split_files(mockcat, 'ra', 'dec', param['nside_ini'], param['hpx_cats_path'])
sim_clus_feat = write_sim_clus_features(
    mockcat, hp_sample_un, param['nside_ini'], mM, output_path=param['results_path'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Merge both files into a single file.
clus_file_results(param['results_path'], "star_clusters_simulated.dat", sim_clus_feat, 'results/objects.dat')
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Plots. A few plots to characterize the simulated clusters.
from ga_sim.plot import (
    general_plots, plot_ftp, plots_ang_size, plots_ref, plot_err, plot_clusters_clean
)

general_plots(param['star_clusters_simulated'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Plot footprint map to check area.
hpx_ftp = param['results_path'] + "/ftp_4096_nest.fits"
plot_ftp(hpx_ftp, param['star_clusters_simulated'], mockcat,
         param['ra_max'], param['ra_min'], param['dec_min'], param['dec_max'])

# Directory where the _clus.dat files are
plots_ang_size(param['star_clusters_simulated'], param['results_path'],
               ...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Plotting errors in main magnitude band.
# Plots to analyze the simulated clusters.
plot_err(mockcat, param['output_plots'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Removing stars close to each other. Now we have to remove stars that would not be detected by the survey's detection pipeline. In principle, the software used to detect sources is SExtractor, whose deblend parameter is set to blend sources very close to each other. To remove sources close to each other, the approach ...
@python_app
def clean_input_cat_dist_app(file_name, ra_str, dec_str, min_dist_arcsec):
    clean_input_cat_dist(
        file_name, ra_str, dec_str, min_dist_arcsec
    )

futures = list()

# Create a progress bar (optional)
with tqdm(total=len(ipix_cats), file=sys.stdout) as pbar:
    pbar.set...
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
After filtering stars in HEALPix pixels, join all of them into a single catalog, called the final cat.
ipix_clean_cats = [i.split('.')[0] + '_clean_dist.fits' for i in ipix_cats]
join_cats_clean(ipix_clean_cats, param['final_cat'], param['ra_str'], param['dec_str'])
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Plot clusters comparing filtered and unfiltered stars in each cluster. The region sampled is the center of the cluster, where the crowding is most intense. Below, the clusters' stars were filtered by maximum distance.
plot_clusters_clean(ipix_cats, ipix_clean_cats, param['nside_ini'], param['ra_str'], param['dec_str'], 0.01)
_____no_output_____
MIT
ga_sim.ipynb
linea-it/ga_sim
Let's see the simple code for Linear Regression. We will create a model to predict a person's weight based on the independent variable height using simple linear regression. The weight-height dataset is downloaded from Kaggle: https://www.kaggle.com/sonalisingh1411/linear-regression-using-weight-height/data
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive
%cd 'Colab Notebooks'
/content/drive/My Drive/Colab Notebooks
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
1. Simple Linear Regression with one independent variable We will read the data file and do some data exploration.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('weight-height.csv')
df.head()
df.shape
df.columns
df['Gender'].unique()
df.corr()
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
We can see that there is a high correlation between the height and weight columns. We will use the LinearRegression model from the sklearn library.
x = df['Height']
y = df['Weight']
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
We will split the data into train and test datasets using sklearn's model_selection library.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
X_train.shape
X_train = X_train.to_numpy()
X_train = X_train.reshape(-1,1)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
reshape() is called to make X_train 2-dimensional, that is, in row-and-column format.
X_train.shape
X_test = X_test.to_numpy()
X_test = X_test.reshape(-1,1)

from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X_train, y_train)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
model is created as an instance of LinearRegression. With the .fit() method, the optimal values of the coefficients (b0, b1) are calculated from the existing inputs X_train and y_train.
model.score(X_train,y_train)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The arguments to .score() are also X_train and y_train, and it returns R², the coefficient of determination.
Intercept, coef = model.intercept_, model.coef_
print("Intercept is :", Intercept, sep='\n')
print("Coefficient/slope is :", coef, sep='\n')
Intercept is : -349.7878205824451 Coefficient/slope is : [7.70218561]
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
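For reference, .score() above computes the coefficient of determination. A minimal hand-rolled sketch of the same quantity (the helper name `r_squared` is hypothetical, not part of the notebook):

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot
```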
The model attributes model.intercept_ and model.coef_ give the values of (b0, b1). Now we will use the trained model to predict on the test data.
y_pred = model.predict(X_test)
y_pred
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
We can also use the slope-intercept form of the line, y = y-intercept + slope * x, to predict the values on the test data. We will use the model.intercept_ and model.coef_ values for the prediction.
y_pred1 = Intercept + coef * X_test
y_pred1
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
We can see that the outputs of y_pred and y_pred1 are the same. We will plot the graph of predicted and actual weight values using the seaborn and matplotlib libraries.
import seaborn as sns

ax = sns.regplot(x=y_pred, y=y_test, x_estimator=np.mean)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
To see the plot clearly, let's draw 20 samples from the training dataset with actual weight values and plot them against the predicted weight values for the training dataset. The red dots represent the actual weight values (20 samples drawn) and the green line represents the predicted weight values from the model. The vertical distance ...
plt.scatter(X_train[0:20], y_train[0:20], color="red")
plt.plot(X_train, model.predict(X_train), color="green")
plt.title("Weight vs Height")
plt.xlabel("Height")
plt.ylabel("Weight")
plt.show()
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
2. Multiple Linear Regression. A regression with 2 or more independent variables is multiple linear regression. We will use the same dataset to implement multiple linear regression. The 2 independent variables, gender and height, will be used to predict the weight.
x = df.drop(columns='Weight')
y = df['Weight']
x.columns
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The Gender column is categorical. We cannot use it directly, as the model can work only with numbers. We have to convert it with one-hot encoding using pandas' get_dummies() method. A new column is created and the earlier column dropped. The new column contains the values 1 and 0 for male and female respectively.
x = pd.get_dummies(x, columns=['Gender'], drop_first=True)
x
print(x.shape)
print(y.shape)
(10000, 2) (10000,)
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
Rest of the steps will be same as simple linear regression.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
X_train = X_train.to_numpy()
X_train = X_train.reshape(-1,2)
X_test = X_test.to_numpy()
X_test = X_test.reshape(-1,2)
X_train.shape
mulLR = LinearRegression()
mulLR.fit(X_train,...
Intercept is : -244.69356793639193 Coefficient/slope is : [ 5.97314123 19.34720343]
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The coefficient array has 2 values, for height and gender respectively (after get_dummies, the encoded Gender column is placed last).
y_pred = mulLR.predict(X_test)
y_pred
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
Alternate method: predicting the weight using the coefficient and intercept values in the equation.
y_pred1 = Intercept + np.sum(coef * X_test, axis=1)
y_pred1
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
y_pred and y_pred1 both have the same predicted values.
import seaborn as sns

ax = sns.regplot(x=y_pred, y=y_test, x_estimator=np.mean)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The above plot shows predicted versus actual weight values on the test dataset. 3. Polynomial Regression. We will use polynomial regression to predict the weight using the same dataset. Note that polynomial regression is a special case of linear regression. Import the class PolynomialFeatures from sklearn.preprocessing.
from sklearn.preprocessing import PolynomialFeatures

x = df['Height']
y = df['Weight']
transformer = PolynomialFeatures(degree=2, include_bias=False)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
We have to include terms like x² (x squared) as additional features when using polynomial regression. We have to transform the input; for that, a transformer is defined with degree (the degree of the polynomial regression function) and include_bias (which decides whether or not to include the bias column).
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
X_train = X_train.to_numpy()
X_train = X_train.reshape(-1,1)
X_test = X_test.to_numpy()
X_test = X_test.reshape(-1,1)
transformer.fit(X_train)
X_trans = transformer.transform(X...
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The above two lines of code can be combined into one line as below; both give the same output.
x_trans = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X_train)
X_transtest = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X_test)
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
Each value in the first column is squared and stored in the second column as a feature.
print(x_trans)
[[ 61.39164365 3768.93390949] [ 74.6976372 5579.7370037 ] [ 68.50781491 4693.32070353] ... [ 64.3254058 4137.75783102] [ 69.07449203 4771.28544943] [ 67.58883983 4568.25126988]]
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
Create and fit the model
poly_LR = LinearRegression().fit(x_trans, y_train)
poly_LR.score(x_trans, y_train)
y_pred = poly_LR.predict(X_transtest)
y_pred

# Note: these attributes come from mulLR (the multiple-regression model above),
# not from poly_LR.
Intercept, coef = mulLR.intercept_, mulLR.coef_
print("Intercept is :", Intercept, sep='\n')
print("Coefficient/slope is :", coef, sep='\n')
Intercept is : -244.69356793639193 Coefficient/slope is : [ 5.97314123 19.34720343]
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
The score of polynomial regression can be slightly better than linear regression due to the added complexity, but a high R² score does not always mean a good model. Sometimes polynomial regression can lead to overfitting due to the complexity of its regression equation.
_____no_output_____
MIT
Linear_Regression.ipynb
tejashrigadre/Linear_Regression
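One quick way to check for the overfitting mentioned above is to compare the train and test R² scores; a large gap suggests the model memorized the training data. A minimal sketch using the variables defined in the cells above:

```python
# Train vs. test R^2 for the polynomial model; a much higher train
# score than test score is a symptom of overfitting.
train_r2 = poly_LR.score(x_trans, y_train)
test_r2 = poly_LR.score(X_transtest, y_test)
print("train R^2: {:.3f}, test R^2: {:.3f}".format(train_r2, test_r2))
```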
Cleaning
beer.head()
beer.shape
beer = beer[beer['r_text'].str.contains('No Review') == False]
beer.shape
beer[beer['breakdown'].str.contains('look:') != True]['name'].value_counts()
beer = beer[beer['breakdown'].str.contains('look:') == True]
beer.shape
beer.isnull().sum()
beer['username'].fillna('Missing Username', inplace=Tr...
_____no_output_____
MIT
Model.ipynb
markorland/markorland
Modeling
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer  # needed for the vectorizer below

vect = CountVectorizer(ngram_range=(2,2), stop_words=stop, min_df=2)
X = vect.fit_transform(all_reviews['review'])
X.shape
svd = TruncatedSVD(n_components=5, n_iter=7, random_state=42)
X_svd = svd.fit_transform(X)
# im...
_____no_output_____
MIT
Model.ipynb
markorland/markorland
Model with grouped reviews
grouped_reviews = all_reviews.groupby('name')['review'].sum()
grouped_reviews.head()

from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import TruncatedSVD

vect = CountVectorizer(ngram_range=(2,2), stop_words=stop, min_df=2)
X = vect.fit_transform(grouped_reviews)
X.shape
svd = TruncatedS...
_____no_output_____
MIT
Model.ipynb
markorland/markorland
Recommender
all_reviews_cosine = pd.read_csv('./Scraping/Scraped_Data/Data/all_reviews_cosine.csv')
beer.head()
whiskey.head()
all_reviews_cosine.head()
_____no_output_____
MIT
Model.ipynb
markorland/markorland
1) aisles.csv: aisle_id, aisle - subcategory
2) departments.csv: department_id, department - top-level category
3) order_products.csv: order_id, product_id, add_to_cart_order, reordered (train, prior) - order id, product id, position added to cart, whether reordered
4) orders.csv: order_id, user_id, eval_set, order_number, order_do...
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns

color = sns.color_palette()
%matplotlib inline

# data
train = pd.read_csv("order_products__train.csv")
prior = pd.read_csv("order_products__prior.csv")
order...
_____no_output_____
MIT
src/recommendation/data_analysis/data_analysis_1.ipynb
odobenuskr/2019_capstone_FlexAds
TensorFlow Neural Network Lab. In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, notMNIST, consists of images of letters from A to J in different fonts. The images above are a few examples of the data you'll be training on. Aft...
import hashlib
import os
import pickle
from urllib.request import urlretrieve

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile

print('All m...
All modules imported.
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
    if not os.path.isfile(file):
        print('Downloading ' + file + '...')
        urlretrieve(url, file)
        print('Download Finished')

# Download the training and test dataset.
do...
100%|██████████| 210001/210001 [00:21<00:00, 9650.17files/s] 100%|██████████| 10001/10001 [00:01<00:00, 9583.57files/s]
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
Problem 1The first problem involves normalizing the features for your training and test data.Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.Since the raw notMNIST image data is i...
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # Implement Min-Max scaling for g...
Data cached in pickle file.
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
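The body of `normalize_grayscale` is truncated above. One possible completion, assuming the standard Min-Max formula X' = a + (X - X_min)(b - a)/(X_max - X_min) with a=0.1, b=0.9 and 8-bit grayscale input in [0, 255], as the problem statement describes:

```python
def normalize_grayscale(image_data):
    """Scale grayscale pixel values from [0, 255] to [0.1, 0.9]."""
    a, b = 0.1, 0.9
    x_min, x_max = 0, 255
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)
```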
Checkpoint. All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
%matplotlib inline

# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt

# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
    pickle_data = pickle.load(f)
    train_features = pickle_data['train_da...
Data and modules loaded.
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
Problem 2. Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. For the input, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, since we're trying to predict the letter, there are 10 output units, one ...
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10

# Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)

# Set the weights and biases tensors
weights = tf.Variable(tf.random_normal([features_count, labels_coun...
Accuracy function created.
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
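The cell above is truncated before the loss is defined. A minimal sketch of the remaining single-layer graph in the same TF1 style, assuming a softmax output with cross-entropy loss (a standard choice for this lab, not confirmed by the visible code):

```python
# Linear layer: logits = xW + b
logits = tf.add(tf.matmul(features, weights), biases)
prediction = tf.nn.softmax(logits)

# Cross entropy, averaged over the batch
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), axis=1)
loss = tf.reduce_mean(cross_entropy)
```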
Problem 3. Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy. Parameter configurations: Configuration 1 * **Epochs:** 1 * **Learning Rate:** * 0.8 * 0.5 * 0.1 * 0...
# Change if you have memory restrictions
batch_size = 256

# Find the best parameters for each configuration
# When epochs = 1, the best learning_rate is 0.5 with an accuracy of 0.7526666522026062
# When multiple epochs:
#   2 = 0.7515999674797058
#   3 = 0.7605332732200623
#   4 = 0.771733283996582
#   5 = 0.7671999335289001
epoc...
Epoch 1/4: 100%|██████████| 557/557 [00:03<00:00, 175.65batches/s] Epoch 2/4: 100%|██████████| 557/557 [00:03<00:00, 180.07batches/s] Epoch 3/4: 100%|██████████| 557/557 [00:03<00:00, 179.54batches/s] Epoch 4/4: 100%|██████████| 557/557 [00:03<00:00, 179.55batches/s]
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
TestYou're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
### DON'T MODIFY ANYTHING BELOW ###

# The accuracy measured against the test set
test_accuracy = 0.0

with tf.Session() as session:
    session.run(init)
    batch_count = int(math.ceil(len(train_features)/batch_size))

    for epoch_i in range(epochs):
        # Progress bar
        batches_pbar = tqdm(r...
Epoch 1/4: 100%|██████████| 557/557 [00:00<00:00, 1230.28batches/s] Epoch 2/4: 100%|██████████| 557/557 [00:00<00:00, 1277.16batches/s] Epoch 3/4: 100%|██████████| 557/557 [00:00<00:00, 1209.55batches/s] Epoch 4/4: 100%|██████████| 557/557 [00:00<00:00, 1254.66batches/s]
Apache-2.0
intro-to-tensorflow/intro_to_tensorflow.ipynb
ajmaradiaga/tf-examples
Dependencies
import numpy as np
import pandas as pd
import pingouin as pg
import seaborn as sns
import scipy.stats
import sklearn
import matplotlib.pyplot as plt
from tqdm import tqdm
_____no_output_____
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
Loading dataframes containing variables
# Loading the dataframes we'll be using

# Contains the DEPENDENT variables relating to language PAIRS
lang_pair_dv = pd.read_csv('/Data/Bible experimental vars/bible_dependent_vars_LANGUAGE_PAIR.csv')

# Contains the INDEPENDENT variables relating to language PAIRS
lang_pair_iv = pd.read_csv('/Data/bible_predictors_LA...
_____no_output_____
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
Experimenting with sklearn models for feature selection
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from itertools import chain, combinations  # Used for exhaustive feature search

# The model we'll use to choose the best features for predicting F1-score for LaBSE
labse_f1_model = LinearRegression()

# All the possible ...
Getting best features for LaBSE, GH Score: -0.0413396886380951 Features: ['Combined sentences (LaBSE)'] Score: -0.021350223866934324 Features: ['Combined in-family sentences (LaBSE)'] Score: -0.01679224278785668 Features: ['Combined sentences (LaBSE)', 'Combined in-family sentences (LaBSE)'] Score: -0.00241493533479657...
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
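The exhaustive search above is truncated, but `chain` and `combinations` point to the standard powerset recipe: score the model on every non-empty feature subset. A minimal sketch (the helper and variable names `all_feature_subsets`, `feature_names`, `X`, and `y` are hypothetical stand-ins for the notebook's actual variables):

```python
def all_feature_subsets(features):
    # Every non-empty subset of the feature list
    return chain.from_iterable(
        combinations(features, r) for r in range(1, len(features) + 1))

best = max(
    all_feature_subsets(feature_names),
    key=lambda subset: cross_val_score(
        labse_f1_model, X[list(subset)], y, cv=5).mean())
```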
Applying PCA as an additional feature selection tool
pca = sklearn.decomposition.PCA(n_components=5)
labse_pair_pca = pca.fit_transform(X_pair_labse)
labse_pair_pca.shape
_____no_output_____
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
PCR
# Implement principal component regression (PCR)
def PCR(model, X, y, n_components, score_method):
    FOLDS = 10
    pca = sklearn.decomposition.PCA(n_components=n_components)
    X_pca = pca.fit_transform(X)
    score_by_fold = sklearn.model_selection.cross_validate(model,
                                                           ...
_____no_output_____
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
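The `PCR` cell is truncated after the `cross_validate` call. A plausible completion, under the assumption that the function cross-validates over the 10 folds and returns the mean test score:

```python
def PCR(model, X, y, n_components, score_method):
    # Reduce the features to n_components principal components,
    # then cross-validate the model on the reduced data.
    FOLDS = 10
    pca = sklearn.decomposition.PCA(n_components=n_components)
    X_pca = pca.fit_transform(X)
    score_by_fold = sklearn.model_selection.cross_validate(
        model, X_pca, y, cv=FOLDS, scoring=score_method)
    return score_by_fold['test_score'].mean()
```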
Taking a look at the loadings of the first principal components
pca = sklearn.decomposition.PCA(n_components=7)
X_pair_labse_pca = pca.fit_transform(X_pair_labse)
pca_labse_loadings = pd.DataFrame(pca.components_.T,
                                  columns=['PC1', 'PC2', 'PC3', 'PC4', 'PC5', 'PC6', 'PC7'],
                                  index=X_pair_labse.columns)
pca_labse_loadings

pca = sklearn.decomposition.PCA(n_components=6)
X_pair_laser_p...
_____no_output_____
MIT
src/Analysis/bible_bitexts_analysis.ipynb
AlexJonesNLP/crosslingual-analysis-101
Generalized ufuncs. We've just seen how to make our own ufuncs using `vectorize`, but what if we need something that can operate on an input array in any way that is not element-wise? Enter `guvectorize`. There are several important differences between `vectorize` and `guvectorize` that bear close examination. Let's t...
import numpy
from numba import guvectorize

@guvectorize('int64[:], int64, int64[:]', '(n),()->(n)')
def g(x, y, result):
    for i in range(x.shape[0]):
        result[i] = x[i] + y
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
* Declaration of input/output layouts
* No return statements
x = numpy.arange(10)
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
In the cell below we call the function `g` with a preallocated array for the result.
result = numpy.zeros_like(x)
result = g(x, 5, result)
print(result)
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
But wait! We can still call `g` as if it were defined as `def g(x, y)`:
```python
res = g(x, 5)
print(res)
```
We don't recommend this as it can have unintended consequences if some of the elements of the `results` array are not operated on by the function `g`. (The advantage is that you can preserve existing interfaces to...
@guvectorize('float64[:,:], float64[:,:], float64[:,:]', '(m,n),(n,p)->(m,p)')
def matmul(A, B, C):
    m, n = A.shape
    n, p = B.shape
    for i in range(m):
        for j in range(p):
            C[i, j] = 0
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]

a = numpy.random.ra...
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
And it also supports the `target` keyword argument
def g(x, y, res):
    for i in range(x.shape[0]):
        res[i] = x[i] + numpy.exp(y)

g_serial = guvectorize('float64[:], float64, float64[:]', '(n),()->(n)')(g)
g_par = guvectorize('float64[:], float64, float64[:]', '(n),()->(n)', target='parallel')(g)

%timeit res...
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
[Exercise: Writing signatures](./exercises/08.GUVectorize.Exercises.ipynb#Exercise:-2D-Heat-Transfer-signature) What's up with these boundary conditions?
```python
for i in range(I):
    Tn[i, 0] = T[i, 0]
    Tn[i, J - 1] = Tn[i, J - 2]

for j in range(J):
    Tn[0, j] = T[0, j]
    Tn[I - 1, j] = Tn[I - 2...
```
@guvectorize('float64[:,:], float64[:,:]', '(n,n)->(n,n)')
def gucopy(a, b):
    I, J = a.shape
    for i in range(I):
        for j in range(J):
            b[i, j] = a[i, j]

from numba import jit

@jit
def make_a_copy():
    a = numpy.random.random((25,25))
    b = gucopy(a)
    return a, b

a, b = make_a_copy()
a...
_____no_output_____
CC-BY-4.0
notebooks/08.Make.generalized.ufuncs.ipynb
IsabelAverill/Scipy-2017---Numba
Git_Utils Instructions: To clone a repository, first copy the URL from GitHub or GitLab and paste it into the `REMOTE` field. The format must be, as appropriate: ```https://github.com/<user>/<project>.git``` or ```https://gitlab.com/<group>/<subgroup>/<project_name>.git``` Next, check that the `GIT_CONFIG_PATH` and `PROJECTS_PATH` fields correspond...
REPO_HTTPS_URL = 'https://gitlab.com/liaa-3r/sinapses/ia-dispositivos-legais.git'
GIT_CONFIG_PATH = 'C:\\Users\\cmlima\\Desenvolvimento\\LIAA-3R\\config'
PROJECTS_PATH = 'C:\\Users\\cmlima\\Desenvolvimento\\LIAA-3R\\projetos'
ACTION = "pull"
BRANCH = 'master'
COMMIT_MESSAGE = ""

import os, re
import ipywidgets as wid...
_____no_output_____
MIT
git_utils.ipynb
liaa-3r/utilidades-colab
1. Zipping Lists
import string

first_example_list = [c for c in string.ascii_lowercase]
second_example_list = [i for i in range(len(string.ascii_lowercase))]

def zip_lists(first_list, second_list):
    new_list = []
    for i in range(min(len(first_list), len(second_list))):
        new_list.append(first_list[i])
        new_list.app...
['a', 0, 'b', 1, 'c', 2, 'd', 3, 'e', 4, 'f', 5, 'g', 6, 'h', 7, 'i', 8, 'j', 9, 'k', 10, 'l', 11, 'm', 12, 'n', 13, 'o', 14, 'p', 15, 'q', 16, 'r', 17, 's', 18, 't', 19, 'u', 20, 'v', 21, 'w', 22, 'x', 23, 'y', 24, 'z', 25]
Apache-2.0
Assignments/answers/Lab_3-answers.ipynb
unmeshvrije/python-for-beginners
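The `zip_lists` body is truncated above, but the printed output (letters and indices interleaved) pins down the missing line. A likely completion:

```python
def zip_lists(first_list, second_list):
    # Interleave the two lists element by element, as in the output above.
    new_list = []
    for i in range(min(len(first_list), len(second_list))):
        new_list.append(first_list[i])
        new_list.append(second_list[i])
    return new_list

print(zip_lists(first_example_list, second_example_list))
```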
2. Age Differences
example_people = [(16, "Brian"), (12, "Lucy"), (18, "Harold")]

def age_differences(people):
    for i in range(len(people) - 1):
        first_name = people[i][1]
        first_age = people[i][0]
        second_name = people[i + 1][1]
        second_age = people[i + 1][0]
        if first_age > seco...
Brian is 4 years older than Lucy.
Apache-2.0
Assignments/answers/Lab_3-answers.ipynb
unmeshvrije/python-for-beginners
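The comparison branch above is cut off, but the sample output ("Brian is 4 years older than Lucy.") suggests the rest. A plausible completion; the "younger" branch is an assumption, since only the "older" message appears in the visible output:

```python
def age_differences(people):
    # people: list of (age, name) tuples, as in example_people above
    for i in range(len(people) - 1):
        first_age, first_name = people[i]
        second_age, second_name = people[i + 1]
        if first_age > second_age:
            print(f"{first_name} is {first_age - second_age} years older than {second_name}.")
        else:
            print(f"{first_name} is {second_age - first_age} years younger than {second_name}.")

age_differences([(16, "Brian"), (12, "Lucy"), (18, "Harold")])
```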
3. Remove the Duplicates
example_doubled_list = [1, 1, 2, 3, 3, 4, 3]

def remove_doubles(doubled_list):
    no_doubles = []
    for number in doubled_list:
        if number not in no_doubles:
            no_doubles.append(number)
    return no_doubles

print(remove_doubles(example_doubled_list))
[1, 2, 3, 4]
Apache-2.0
Assignments/answers/Lab_3-answers.ipynb
unmeshvrije/python-for-beginners
4. Only the Duplicates
first_example_list = [1, 2, 3, 4]
second_example_list = [1, 4, 5, 6]

def get_duplicates(first_list, second_list):
    duplicates = []
    for number in first_list:
        if number in second_list:
            duplicates.append(number)
    return duplicates

print(get_duplicates(first_example_list, second_example_list...
[1, 4]
Apache-2.0
Assignments/answers/Lab_3-answers.ipynb
unmeshvrije/python-for-beginners
SAAVEDRA QUESTION 1
import numpy as np

C = np.eye(4)
print(C)
[[1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 0. 1.]]
Apache-2.0
PRELIM_EXAM.ipynb
Singko25/Linear-Algebra-58020
QUESTION 2
import numpy as np

C = np.eye(4)
print('C = ')
print(C)
array = np.multiply(2, C)
print('Doubled = ')
print(array)
C = [[1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 0. 1.]] Doubled = [[2. 0. 0. 0.] [0. 2. 0. 0.] [0. 0. 2. 0.] [0. 0. 0. 2.]]
Apache-2.0
PRELIM_EXAM.ipynb
Singko25/Linear-Algebra-58020