markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Lab 2: Data Structures ~ Advanced Applications. Wow, look at you! Congratulations on making it to the second part of the lab! These assignments are *absolutely not required*! Even if you're here, you shouldn't try to solve all of the problems in this file. Our suggestion is that you should skim through these problems to ... | def generate_pascal_row(row):
"""Generate the next row of Pascal's triangle."""
if not row:
return [1]
row1, row2 = row + [0], [0] + row
return list(map(sum, zip(row1, row2)))
def print_pascal_triangle(n):
"""Print the first n rows of Pascal's triangle."""
total_spaces = n + n - 1
p... | 1
1 1
1 2 1
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1
| BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
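The `print_pascal_triangle` cell is truncated above; a plausible completion, reusing `generate_pascal_row` (the centering scheme is an assumption suggested by the original's `total_spaces = n + n - 1` variable):

```python
def generate_pascal_row(row):
    """Generate the next row of Pascal's triangle."""
    if not row:
        return [1]
    return list(map(sum, zip(row + [0], [0] + row)))

def print_pascal_triangle(n):
    """Print the first n rows of Pascal's triangle, roughly centered."""
    rows, row = [], []
    for _ in range(n):
        row = generate_pascal_row(row)
        rows.append(' '.join(map(str, row)))
    width = len(rows[-1])  # the widest (last) row sets the centering width
    for line in rows:
        print(line.center(width))

print_pascal_triangle(3)
```

Called with `n=3`, this prints the three-row triangle shown in the output column above.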
Special Phrases. For the next few problems, just like cyclone phrases, we'll describe a criterion that makes a word or phrase special. Let's load up the dictionary file again. Remember, if you are using macOS or Linux, you should have a dictionary file available at `/usr/share/dict/words` and we've mirrored the file at `... | # If you downloaded words from the course website,
# change me to the path to the downloaded file.
DICTIONARY_FILE = '/usr/share/dict/words'
def load_english():
"""Load and return a collection of english words from a file."""
pass
english = load_english()
print(len(english)) | _____no_output_____ | BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
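The `load_english` stub is left for the student; one plausible completion (returning a `set` is an assumption, chosen so the later membership tests are O(1), and the `path` parameter is added so the function can also be pointed at a downloaded copy of the file):

```python
def load_english(path='/usr/share/dict/words'):
    """Load and return a set of lowercase English words from a file."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}
```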
Triad Phrases. Triad words are English words for which the two smaller strings you make by extracting alternating letters both form valid words. For example: Write a function to determine whether an entire phrase passed into a function is made of triad words. You can assume ... | def is_triad_word(word, english):
"""Return whether a word is a triad word."""
pass
def is_triad_phrase(phrase, english):
"""Return whether a phrase is composed of only triad words."""
pass | _____no_output_____ | BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
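One way the triad stubs could be filled in: `word[::2]` and `word[1::2]` are exactly the two alternating-letter substrings, so each just needs a membership test against the loaded dictionary (the tiny `english` set used below for illustration is made up, not real dictionary data):

```python
def is_triad_word(word, english):
    """Return whether a word is a triad word."""
    return word[::2] in english and word[1::2] in english

def is_triad_phrase(phrase, english):
    """Return whether a phrase is composed of only triad words."""
    return all(is_triad_word(word, english) for word in phrase.split())

# Illustrative toy dictionary: 'abte' splits into 'at' and 'be'.
english = {'at', 'be'}
print(is_triad_word('abte', english))
```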
Surpassing Phrases (challenge). Surpassing words are English words for which the gap between each adjacent pair of letters strictly increases. These gaps are computed without "wrapping around" from Z to A. For example: Write a function to determine whether an entire phr... | def character_gap(ch1, ch2):
"""Return the absolute gap between two characters."""
return abs(ord(ch1) - ord(ch2))
def is_surpassing_word(word):
"""Return whether a word is surpassing."""
pass
def is_surpassing_phrase(phrase):
"""Return whether a phrase is composed of only surpassing words.""" | _____no_output_____ | BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
Triangle Words. The nth term of the sequence of triangle numbers is given by $1 + 2 + ... + n = \frac{n(n+1)}{2}$. For example, the first ten triangle numbers are: `1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...`. By converting each letter in a word to a number corresponding to its alphabetical position (`A=1`, `B=2`, etc) and ... | def is_triangle_word(word):
"""Return whether a word is a triangle word."""
pass
print(is_triangle_word("SKY")) # => True | _____no_output_____ | BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
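A possible completion: sum the alphabetical positions, then invert $n(n+1)/2 = \text{total}$ with the quadratic formula and check that the recovered $n$ is integral. For "SKY", $19 + 11 + 25 = 55 = T_{10}$:

```python
def is_triangle_word(word):
    """Return whether a word's letter-value sum is a triangular number."""
    total = sum(ord(ch) - ord('A') + 1 for ch in word.upper())
    # n(n+1)/2 = total  =>  n = (-1 + sqrt(1 + 8*total)) / 2
    n = int((-1 + (1 + 8 * total) ** 0.5) / 2)
    return n * (n + 1) // 2 == total

print(is_triangle_word("SKY"))  # => True
```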
Polygon Collision. Given two polygons in the form of lists of 2-tuples, determine whether the two polygons intersect. Formally, a polygon is represented by a list of (x, y) tuples, where each (x, y) tuple is a vertex of the polygon. Edges are assumed to be between adjacent vertices in the list, and the last vertex is con... | # compare each edge of poly1 with each edge of poly2
# how do two line segments intersect? say line1 has endpoints (x1a, y1a) and (x1b, y1b), and line2 has (x2a, y2a) and (x2b, y2b).
# they intersect when each segment's endpoints lie on opposite sides of the line through the other segment
def polygon_collision(poly1, poly2):
pass
unit_square = [(0,0), (0,1), (1,1), (1,0)]
triangle = [(0,0), (0.5,2), (1,0)]
pr... | _____no_output_____ | BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
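One way to finish the half-written plan above: two segments cross when each segment's endpoints lie on opposite sides of the other's supporting line (a cross-product orientation test), and the containment case (one polygon entirely inside the other) is caught with a ray-casting point-in-polygon check. This sketch deliberately ignores degenerate collinear touching:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def _segments_intersect(a, b, c, d):
    """Return whether segment ab strictly crosses segment cd."""
    return ((_orient(c, d, a) > 0) != (_orient(c, d, b) > 0)
            and (_orient(a, b, c) > 0) != (_orient(a, b, d) > 0))

def _point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def polygon_collision(poly1, poly2):
    """Return whether two polygons intersect (edge crossing or containment)."""
    edges1 = list(zip(poly1, poly1[1:] + poly1[:1]))
    edges2 = list(zip(poly2, poly2[1:] + poly2[:1]))
    if any(_segments_intersect(a, b, c, d)
           for a, b in edges1 for c, d in edges2):
        return True
    # no crossings: one polygon may still lie entirely inside the other
    return _point_in_polygon(poly1[0], poly2) or _point_in_polygon(poly2[0], poly1)

unit_square = [(0, 0), (0, 1), (1, 1), (1, 0)]
triangle = [(0, 0), (0.5, 2), (1, 0)]
print(polygon_collision(unit_square, triangle))  # => True
```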
Comprehensions. We haven't talked about data comprehensions yet, but if you're interested, you can read about them [here](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) and then tackle the problems below. Read: Predict the output of each of the following list comprehensions. After you have writ... | # Predict the output of the following comprehensions. Does the output match what you expect?
print([x for x in [1, 2, 3, 4]])
# [1, 2, 3, 4]
print([n - 2 for n in range(10)])
# -2, -1 ... 7
print([k % 10 for k in range(41) if k % 3 == 0])
# 0, 3, 6, 9, 2, 5, 8, 1, 4, 7, 0, 3, 6, 9
'P' < 'n'
print([s.lower() for s in ['PythOn', 'iS', 'cOoL'] if s... | {8, 2, 3, 5}
| BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
Write. Write comprehensions to transform the input data structure into the output data structure:```python[0, 1, 2, 3] -> [1, 3, 5, 7] Double and add one['apple', 'orange', 'pear'] -> ['A', 'O', 'P'] Capitalize first letter['apple', 'orange', 'pear'] -> ['apple', 'pear'] Contains a 'p'["TA_parth", "student_poohbea... | nums = [0, 1, 2, 3]
fruits = ['apple', 'orange', 'pear']
people = ["TA_parth", "student_poohbear", "TA_michael", "TA_guido", "student_htiek"]
# Add your comprehensions here!
print([2 * n + 1 for n in nums])
print([c[0].upper() for c in fruits])
print([w for w in fruits if 'p' in w])
print('-'*20)
print([name[3:] for n... | [1, 3, 5, 7]
['A', 'O', 'P']
['apple', 'pear']
--------------------
['parth', 'michael', 'guido']
[('apple', 5), ('orange', 6), ('pear', 4)]
--------------------
{'apple': 5, 'orange': 6, 'pear': 4}
| BSD-2-Clause-FreeBSD | notebooks/lab-2/data-structures-part-2.ipynb | samuelcheang0419/python-labs |
# import the necessary packages
import numpy as np
import imutils
import cv2
def align_images(image, template, maxFeatures=500, keepPercent=0.2,
debug=False):
# convert both the input image and template to grayscale
imageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
templateGray = cv2.cvtColor(template, cv2.COLOR_... | _____no_output_____ | BSD-3-Clause | ImageAlign.ipynb | emilswan/stockstats | |
Install Earth Engine API. Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](http... | # %%capture
# !pip install earthengine-api
# !pip install geehydro | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
Import libraries | import ee
import folium
import geehydro | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error. | # ee.Authenticate()
ee.Initialize() | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `... | Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID') | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
Add Earth Engine Python script | dataset = ee.FeatureCollection('TIGER/2010/Tracts_DP1')
visParams = {
'min': 0,
'max': 4000,
'opacity': 0.8,
}
# Turn the strings into numbers
dataset = dataset.map(lambda f: f.set('shape_area', ee.Number.parse(f.get('dp0010001'))))
# Map.setCenter(-103.882, 43.036, 8)
image = ee.Image().float().paint(dataset, ... | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
Display Earth Engine data layers | Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map | _____no_output_____ | MIT | Datasets/Vectors/us_census_tracts.ipynb | dmendelo/earthengine-py-notebooks |
This notebook functionizes the 'Array to ASPA'. The goal is to convert any input dictionary to a usable ASPA for analysis. IMPORTANT: During the visualisation of the images, each cmap per individual image is scaled depending on its contents. Therefore the images array has to be saved and used... Saving the PNG's will give fau... | import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tqdm import tqdm
from keijzer_exogan import *
%matplotlib inline
%config InlineBackend.print_figure_kwargs={'facecolor' : "w"} # Make sure the axis background of plots is... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Load chunk. X[0] is a dict from a regular chunk; X[0][0] is a dict from the .npy selection | dir_ = '/datb/16011015/ExoGAN_data//'
X = np.load(dir_+'selection/last_chunks_25_percent.npy')
X = X.flatten()
np.random.seed(23) # Set seed for the np.random functions
# Shuffle X along the first axis to make the order of simulations random
np.random.shuffle(X) # note that X = np.rand.... isn't required
len(X)
X[0... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
New ASPA Load data, combine $(R_p/R_s)^2$ with the wavelength | i = np.random.randint(0,len(X))
x = X[i] # select a dict from X
wavelengths = pd.read_csv(dir_+'wnw_grid.txt', header=None).values
spectrum = x['data']['spectrum']
spectrum = np.expand_dims(spectrum, axis=1) # change shape from (515,) to (515,1)
params = x['param']
for param in params:
if 'mixratio' in param:
... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Normalize params | # Min max values from training set, in the same order as params above: planet mass, temp, .... co mixratio.
min_values = [1.518e26, 1e3, -18.42, 5.593e7, -18.42, -18.42, -18.42]
max_values = [3.796e27, 2e3, -2.303, 1.049e8, -2.306, -2.306, -2.306]
for i,param in enumerate(params):
params[param] = scale_param(param... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
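`scale_param` comes from the author's `keijzer_exogan` helper module, which is not shown here; a plausible min-max form, consistent with how it is called with a value and the training-set bounds, would be (this is an assumed reconstruction, the real helper may clip or differ):

```python
def scale_param(value, min_value, max_value):
    """Min-max scale value into [0, 1] given training-set bounds.

    Assumed form of the keijzer_exogan helper, for illustration only.
    """
    return (value - min_value) / (max_value - min_value)
```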
Original ExoGAN simulationFrom 0.3 to 50 micron | plt.figure(figsize=(10,5))
plt.plot(data.x, data.y, '.-', color='r')
plt.xlabel(r'Wavelength [µm]')
plt.ylabel(r'$(R_P / R_S)^2$')
plt.xscale('log')
len(data) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Select 0.3 to 16 micron | data = data[(data.x >= 0.3) & (data.x <= 16)] # select data between 0.3 and 16 micron
plt.figure(figsize=(10,5))
plt.plot(data.x, data.y, '.-', color='r')
plt.xlabel(r'Wavelength [µm]')
plt.ylabel(r'$(R_P / R_S)^2$')
#plt.xscale('log')
plt.xlim((2, 16))
len(data) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Important! Notice how $(R_p/R_s)^2$ by index goes from a high to a low wavelength. Apart from that, I'm assuming the spatial difference between peaks is due to plotting against the index instead of the wavelength. The spectrum (below) will remain unchanged and is encoded this way into an ASPA, the wavelength values f... | #spectrum = np.flipud(data.y)
plt.figure(figsize=(10,5))
plt.plot(data.y, '.-', color='r')
plt.xlabel(r'Index')
plt.ylabel(r'$(R_P / R_S)^2$') | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Split the spectrum in bins | # Could loop this, but right now this is more visual
bin1 = data[data.x <= 0.8]
bin2 = data[(data.x > 0.8) & (data.x <= 1.3)] # select data between 0.8 and 1.3 micron
bin3 = data[(data.x > 1.3) & (data.x <= 2)]
bin4 = data[(data.x > 2) & (data.x <= 4)]
bin5 = data[(data.x > 4) & (data.x <= 6)]
bin6 = data[(data.x > 6) & (d... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
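The hard-coded bin selections above can be expressed generically. A pure-Python sketch mirroring the half-open `(low, high]` intervals used by the pandas boolean indexing (with the first bin also closed at its lower edge, matching `data.x <= 0.8` for bin1):

```python
def split_into_bins(wavelengths, values, edges):
    """Group (wavelength, value) pairs into len(edges) - 1 bins.

    Bin i holds points with edges[i] < wavelength <= edges[i + 1];
    the first bin also includes wavelength == edges[0].
    """
    bins = [[] for _ in range(len(edges) - 1)]
    for wl, v in zip(wavelengths, values):
        for i in range(len(edges) - 1):
            if edges[i] < wl <= edges[i + 1] or (i == 0 and wl == edges[0]):
                bins[i].append((wl, v))
                break
    return bins
```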
Bins against wavelength | """
Visualize the bins
"""
bins = [bin8, bin7, bin6, bin5, bin4, bin3, bin2, bin1]
plt.figure(figsize=(10,5))
for b in bins:
plt.plot(b.iloc[:,0], b.iloc[:,1], '.-')
plt.xlabel(r'Wavelength [µm]')
plt.ylabel(r'$(R_P / R_S)^2$')
#plt.xlim((0.3, 9)) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Bins against index. Notice how bin1 (0-2 micron) has way more datapoints than bin 8 (14-16 micron) | plt.figure(figsize=(10,5))
for b in bins:
plt.plot(b.iloc[:,1], '.-')
plt.xlabel(r'Index [-]')
plt.ylabel(r'$(R_P / R_S)^2$') | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Normalize the spectrum in bins | scalers = [MinMaxScaler(feature_range=(0,1)).fit(b) for b in bins] # list of 8 scalers for the 8 bins
mins = [ b.iloc[:,1].min() for b in bins] # .iloc[:,1] selects the R/R (y) only
maxs = [ b.iloc[:,1].max() for b in bins]
stds = [ b.iloc[:,1].std() for b in bins]
bins_scaled = []
for i,b in enumerate(bins):
bins_... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
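Per bin, `MinMaxScaler` maps each column onto [0, 1]. Its effect on the $(R_P/R_S)^2$ column alone can be sketched in plain Python (the all-zeros result for constant input is a simplifying assumption of this sketch):

```python
def minmax_scale(values):
    """Map a sequence onto [0, 1]; constant input maps to all zeros here."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]
```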
Scaled spectrum in bins | spectrum_scaled = np.concatenate(bins_scaled, axis=0)
spectrum_scaled = spectrum_scaled[:,1]
plt.plot(spectrum_scaled, '.-')
len(spectrum_scaled) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Start creating the ASPA | import math
aspa = np.zeros((32,32))
row_length = 25 # amount of pixels used per row
n_rows = math.ceil(len(spectrum_scaled) / row_length) # number of rows the spectrum needs in the ASPA; with row_length = 25 this comes out to 16 rows here
print('Using %s rows' % n_rows)
for i in range(n_rows): # for i in
start = i... | Using 16 rows
| MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
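The truncated loop above packs the scaled spectrum into fixed-width rows of the 32 x 32 ASPA; a standalone sketch of that chunk-and-pad step (zero-padding the final partial row is an assumption about the elided code):

```python
import math

def spectrum_to_rows(values, row_length=25):
    """Split a flat spectrum into rows of row_length, zero-padding the last."""
    n_rows = math.ceil(len(values) / row_length)
    rows = []
    for i in range(n_rows):
        chunk = list(values[i * row_length:(i + 1) * row_length])
        chunk += [0.0] * (row_length - len(chunk))
        rows.append(chunk)
    return rows
```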
Fill in the 7 ExoGAN params | params
for i,param in enumerate(params):
aspa[:16, 25+i:32+i] = params[param]
plt.imshow(aspa, cmap='gray') | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Fill in the min, max, std values for the bins. TODO: Normalize these properly | mins, maxs, stds
for i in range(len(mins)):
min_ = scale_param(mins[i], 0.005, 0.03)
max_ = scale_param(maxs[i], 0.005, 0.03)
std_ = scale_param(stds[i], 1e-7, 1e-4)
aspa[16:17, i*4:i*4+4] = min_
aspa[17:18, i*4:i*4+4] = std_
aspa[18:19, i*4:i*4+4] = max_
print(min_, max_, std... | 0.18304209407699484 0.18501794207820269 0.12168214443077108
0.1795876155164152 0.18350703182382339 0.3251502352023359
0.17866701946658978 0.1806349121948885 0.1224142192152229
0.17703549815458045 0.18609035470782206 0.6717569588276018
0.17673002557975623 0.18317877235056546 0.4599526393679086
0.17562721815244248 0.1796... | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Fill in unused space with noise | for i in range(13):
noise = np.random.rand(32) # random noise betweem 0 and 1 for each row
aspa[19+i:20+i*1, :] = noise
plt.imshow(aspa, cmap='gray') | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Functionize ASPA v2 | def ASPA_v2(x, wavelengths):
spectrum = x['data']['spectrum']
spectrum = np.expand_dims(spectrum, axis=1) # change shape from (515,) to (515,1)
params = x['param']
for param in params:
if 'mixratio' in param:
params[param] = np.log(np.abs(params[param])) # transform mixratio's beca... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Test ASPA v2 function | ## Load data
i = np.random.randint(0,len(X))
dict_ = X[i] # select a dict from X
wavelengths = pd.read_csv(dir_+'wnw_grid.txt', header=None).values
dict_['param']
aspa = ASPA_v2(dict_, wavelengths)
plt.imshow(aspa, cmap='gray')
np.random.shuffle(X)
plt.figure(figsize=(10,20))
for i in tqdm(range(8*4)):
image = A... | 100%|██████████| 32/32 [00:06<00:00, 2.92it/s]
| MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Creating images from all simulations in the chunk | images = []
for i in tqdm(range(len(X))):
image = ASPA_v2(X[i], wavelengths)
image = image.reshape(1, 32, 32) # [images, channel, width, height]
images.append(image)
images = np.array(images)
images.shape | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Saving this array to disk | %%time
np.save(dir_+'selection/last_chunks_25_percent_images.npy', images) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Test loading and visualization | print('DONE')
print("DONE")
print("DONE")
images = np.load('/datb/16011015/ExoGAN_data/selection/first_chunks_25_percent_images.npy')
images.shape
plt.imshow(images[0,0,:,:])
plt.figure(figsize=(10,20))
for i in range(8*4):
plt.subplot(8, 4, i+1)
plt.imshow(images[i,0,:,:], cmap='gnuplot2')
plt.tight_layou... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Randomly mask pixels from the encoded spectrum | image = images[0, 0, :, :]
plt.imshow(image)
# image[:23, :23] is the encoded spectrum.
t = image.copy()
print(t.shape)
#t[:23, :23] = 0
plt.imshow(t) | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Random uniform dropout | t = image.copy()
dropout = 0.9
for i in range(24): # loop over rows
for j in range(24): # loop over cols
a = np.random.random() # random uniform dist 0 - 1
if a < dropout:
t[i, j] = 0
plt.figure(figsize=(10,10))
plt.imshow(t)
# image[:23, :23] is t... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
Range dropout | # TODO: Mask everything but the visible spectrum
def mask_image(image, visible_length, random_visible_spectrum=True):
"""
Masks everything in an input image, apart from the start to visible_length.
start = start wavelength/index value of the visible (non masked) spectrum
visible_length = length o... | _____no_output_____ | MIT | notebooks/old notebooks/dict to ASPA v2.ipynb | deKeijzer/SRON-DCGAN |
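The `mask_image` cell is cut off above; a pure-Python sketch of the stated idea, zeroing every pixel of the top-left 24 x 24 spectrum region except the first `visible_length` pixels in row-major order (nested lists stand in for the numpy array, and the row-major ordering is an assumption about the elided code):

```python
def mask_spectrum(image, visible_length, size=24):
    """Return a copy with only the first visible_length spectrum pixels kept."""
    out = [row[:] for row in image]  # copy each row so the input is untouched
    for i in range(size):
        for j in range(size):
            if i * size + j >= visible_length:
                out[i][j] = 0
    return out
```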
Python - Writing Your First Python Code! Welcome! This notebook will teach you the basics of the Python programming language. Although the information presented here is quite basic, it is an important foundation that will help you read and write Python code. By the end of this notebook, you'll kn... | # Try your first Python output
print("Hello, Python!") | Hello, Python!
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
After executing the cell above, you should see that Python prints Hello, Python!. Congratulations on running your first Python code! [Tip:] print() is a function. You passed the string 'Hello, Python!' as an argument to instruct Python on what to print. What version of Python are we using? There are two popular... | # Check version runing on Jupyter notebook
from platform import python_version
print(python_version())
# Check version inside your Python program
import sys
print(sys.version) | 3.7.10
3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
[Tip:] sys is a built-in module that contains many system-specific parameters and functions, including the Python version in use. Before using it, we must explictly import it. Writing comments in Python In addition to writing code, note that it's always a good idea to add comments to your code. It will help oth... | # Practice on writing comments
print('Hello, Python!') # This line prints a string
# print('Hi') | Hello, Python!
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
After executing the cell above, you should notice that This line prints a string did not appear in the output, because it was a comment (and thus ignored by Python). The second line was also not executed because print('Hi') was preceded by the number sign (#) as well! Since this isn't an explanatory comment from ... | # Print string as error message
frint("Hello, Python!") | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
The error message tells you: where the error occurred (more useful in large notebook cells or scripts), and what kind of error it was (NameError). Here, Python attempted to run the function frint, but could not determine what frint is since it's not a built-in function and it has not been previously defined by u... | # Try to see the built-in error message
print("Hello, Python!) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Does Python know about your error before it runs your code? Python is what is called an interpreted language. Compiled languages examine your entire program at compile time, and are able to warn you about a whole class of errors prior to execution. In contrast, Python interprets your script line by line as it executes ... | # Print string and error to see the running order
print("This will be printed")
frint("This will cause an error")
print("This will NOT be printed") | This will be printed
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Exercise: Your First Program Generations of programmers have started their coding careers by simply printing "Hello, world!". You will be following in their footsteps.In the code cell below, use the print() function to print out the phrase: Hello, world! | # Write your code below and press Shift+Enter to execute
print("Hello World!") | Hello World!
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:print("Hello, world!")--> Now, let's enhance your code with a comment. In the code cell below, print out the phrase: Hello, world! and comment it with the phrase Print the traditional hello world all in one line of code. | # Write your code below and press Shift+Enter to execute
#print the traditional Hello World
print("Hello World!") | Hello World!
| MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:print("Hello, world!") Print the traditional hello world--> Types of objects in Python Python is an object-oriented language. There are many different types of objects in Python. Let's start with the most common object types: strings, integers and float... | # Integer
11
# Float
2.14
# String
"Hello, Python 101!" | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
You can get Python to tell you the type of an expression by using the built-in type() function. You'll notice that Python refers to integers as int, floats as float, and character strings as str. | # Type of 12
type(12)
# Type of 2.14
type(2.14)
# Type of "Hello, Python 101!"
type("Hello, Python 101!") | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
In the code cell below, use the type() function to check the object type of 12.0. | # Write your code below. Don't forget to press Shift+Enter to execute the cell
type(12.0) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:type(12.0)--> Integers Here are some examples of integers. Integers can be negative or positive numbers: We can verify this is the case by using, you guessed it, the type() function: | # Print the type of -1
type(-1)
# Print the type of 4
type(4)
# Print the type of 0
type(0) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Floats Floats represent real numbers; they are a superset of integer numbers but also include "numbers with decimals". There are some limitations when it comes to machines representing real numbers, but floating point numbers are a good representation in most cases. You can learn more about the specifics of floats for... | # Print the type of 1.0
type(1.0) # Notice that 1 is an int, and 1.0 is a float
# Print the type of 0.5
type(0.5)
# Print the type of 0.56
type(0.56)
# System settings about float type
sys.float_info | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Converting from one object type to a different object type You can change the type of the object in Python; this is called typecasting. For example, you can convert an integer into a float (e.g. 2 to 2.0).Let's try it: | # Verify that this is an integer
type(2) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Converting integers to floatsLet's cast integer 2 to float: | # Convert 2 to a float
float(2)
# Convert integer 2 to a float and check its type
type(float(2)) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
When we convert an integer into a float, we don't really change the value (i.e., the significand) of the number. However, if we cast a float into an integer, we could potentially lose some information. For example, if we cast the float 1.1 to integer we will get 1 and lose the decimal information (i.e., 0.1): | # Casting 1.1 to integer will result in loss of information
int(1.1) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Converting from strings to integers or floats Sometimes, we can have a string that contains a number within it. If this is the case, we can cast that string that represents a number into an integer using int(): | # Convert a string into an integer
int('1') | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
But if you try to do so with a string that is not a perfect match for a number, you'll get an error. Try the following: | # Convert a string into an integer with error
int('1 or 2 people') | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
You can also convert strings containing floating point numbers into float objects: | # Convert the string "1.2" into a float
float('1.2') | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
[Tip:] Note that strings can be represented with single quotes ('1.2') or double quotes ("1.2"), but you can't mix both (e.g., "1.2'). Converting numbers to strings If we can convert strings to numbers, it is only natural to assume that we can convert numbers to strings, right? | # Convert an integer to a string
str(1) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
And there is no reason why we shouldn't be able to make floats into strings as well: | # Convert a float to a string
str(1.2) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Boolean data type Boolean is another important type in Python. An object of type Boolean can take on one of two values: True or False: | # Value true
True | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Notice that the value True has an uppercase "T". The same is true for False (i.e. you must use the uppercase "F"). | # Value false
False | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
When you ask Python to display the type of a boolean object it will show bool which stands for boolean: | # Type of True
type(True)
# Type of False
type(False) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
We can cast boolean objects to other data types. If we cast a boolean with a value of True to an integer or float we will get a one. If we cast a boolean with a value of False to an integer or float we will get a zero. Similarly, if we cast a 1 to a Boolean, you get a True. And if we cast a 0 to a Boolean we will get a... | # Convert True to int
int(True)
# Convert 1 to boolean
bool(1)
# Convert 0 to boolean
bool(0)
# Convert True to float
float(True) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Exercise: Types What is the data type of the result of: 6 / 2? | # Write your code below. Don't forget to press Shift+Enter to execute the cell
type (6/2) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:type(6/2) float--> What is the type of the result of: 6 // 2? (Note the double slash //.) | # Write your code below. Don't forget to press Shift+Enter to execute the cell
type (6//2) | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:type(6//2) int, as the double slashes stand for integer division --> Expression and Variables Expressions Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like addi... | # Addition operation expression
43 + 60 + 16 + 41 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
We can perform subtraction operations using the minus operator. In this case the result is a negative number: | # Subtraction operation expression
50 - 60 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
We can do multiplication using an asterisk: | # Multiplication operation expression
5 * 5 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
We can also perform division with the forward slash: | # Division operation expression
25 / 5
# Division operation expression
25 / 6 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
As seen in the quiz above, we can use the double slash for integer division, where the result is rounded to the nearest integer: | # Integer division operation expression
25 // 5
# Integer division operation expression
25 // 6 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Exercise: Expression Let's write an expression that calculates how many hours there are in 160 minutes: | # Write your code below. Don't forget to press Shift+Enter to execute the cell
160 / 60 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:160/60 Or 160//60--> Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120). | # Mathematical expression
30 + 2 * 60 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60. | # Mathematical expression
(30 + 2) * 60 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Variables Just like with most programming languages, we can store values in variables, so we can use them later on. For example: | # Store value into variable
x = 43 + 60 + 16 + 41 | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
To see the value of x in a Notebook, we can simply place it on the last line of a cell: | # Print out the value in variable
x | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
We can also perform operations on x and save the result to a new variable: | # Use another variable to store the result of the operation between variable and value
y = x / 60
y | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
If we save a value to an existing variable, the new value will overwrite the previous value: | # Overwrite variable with new value
x = x / 60
x | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily: | # Name the variables meaningfully
total_min = 43 + 42 + 57 # Total length of albums in minutes
total_min
# Name the variables meaningfully
total_hours = total_min / 60 # Total length of albums in hours
total_hours | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
In the cells above we added the lengths of three albums in minutes and stored the result in total_min. We then divided it by 60 to calculate the total length in hours, total_hours. You can also do it all at once in a single expression, as long as you use parentheses to add the album lengths before you divide, as shown below. | # Complicated expression
total_hours = (43 + 42 + 57) / 60 # Total hours in a single expression
total_hours | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., //). Exercise: Expression and Variables in Python What is the value of x where x = 3 + 2 * 2 | # Write your code below. Don't forget to press Shift+Enter to execute the cell
x = 3 + 2 * 2
x | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:7--> What is the value of y where y = (3 + 2) * 2? | # Write your code below. Don't forget to press Shift+Enter to execute the cell
y = (3 + 2) * 2
y | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
Double-click __here__ for the solution.<!-- Your answer is below:10--> What is the value of z where z = x + y? | # Write your code below. Don't forget to press Shift+Enter to execute the cell
z = x + y
z | _____no_output_____ | MIT | 1.1-Types.ipynb | mohamedsuhaib/Python_Study |
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
work_path = "/content/drive/My Drive/Colab Notebooks"
free_x = pd.read_csv(f"{work_path}/data/free_x.csv")
free_y = pd.read_csv(f"{work_path}/data/free_y.csv")
step_x = pd.read_csv(f"{... | _____no_output_____ | MIT | check_correlation.ipynb | heros-lab/colaboratory | |
Planning Challenge As a data scientist at a hotel chain, I'm trying to find out what customers are happy and unhappy with, based on reviews. I'd like to know the topics in each review and a score for the topic. Approach - Use standard NLP techniques (tokenization, TF-IDF, etc.) to process the reviews- Use LDA to ide... | import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import plotly.express as px
import plotly.io as pio
import pyLDAvis # Has a warning on import
import pyLDAvis.sklearn
import pyLDAvis.gensim
import seaborn as sns
from gensim.corpora.dictionary import Dictionary
from gensi... | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Read in and clean the data Before reading in all of the files I downloaded from the GDrive, I used `diff` to compare the files because they looked like they might be duplicates. ```diff hotel_happy_reviews\ -\ hotel_happy_reviews.csv hotel_happy_reviews\ -\ hotel_happy_reviews.csv.csvdiff hotel_happy_reviews\ -\ hotel... | happy_reviews = pd.read_csv(
os.path.join(os.path.expanduser(data_dir), 'hotel_happy_reviews - hotel_happy_reviews.csv'),
)
display(happy_reviews.info())
display(happy_reviews)
# Name this bad_reviews so it's easier to distinguish
bad_reviews = pd.read_csv(
os.path.join(os.path.expanduser(data_dir), 'hotel_not... | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 26521 entries, 0 to 26520
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User_ID 26521 non-null object
1 Description 26521 non-null object
2 Is_Response 26521 non-null object
3 ho... | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Check that the two dfs are formatted the same | assert happy_reviews.columns.to_list() == bad_reviews.columns.to_list()
assert happy_reviews.dtypes.to_list() == bad_reviews.dtypes.to_list() | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Look at the data in detail | display(happy_reviews['hotel_ID'].value_counts())
display(happy_reviews['User_ID'].describe())
display(bad_reviews['hotel_ID'].value_counts())
display(bad_reviews['User_ID'].describe()) | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Process review text Tokenize Split the reviews up into individual words | def tokenize(review):
'''Split review string into tokens; remove stop words.
Returns: list of strings, one for each word in the review
'''
s = review.lower() # Make lowercase
s = regex_tokenizer.tokenize(s) # Split into words and remove punctuation.
s = [t for t in s if not t.isnumeric()] # ... | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
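The `tokenize` function above is truncated in this dump; a minimal pure-Python approximation of the same steps (lowercase, regex word split, drop numbers and stop words) can be sketched as follows — the regex and the tiny stop-word list are illustrative stand-ins, not the notebook's actual tokenizer or stop-word set:

```python
import re

# Illustrative subset of a stop-word list (assumption, not the notebook's list).
STOP_WORDS = {"the", "a", "an", "and", "was", "were", "is"}

def tokenize(review):
    """Lowercase, split into word tokens, drop numeric tokens and stop words."""
    tokens = re.findall(r"[a-z']+", review.lower())  # alphabetic tokens only
    return [t for t in tokens if t not in STOP_WORDS]

print(tokenize("The room was clean and the staff were friendly!"))
# -> ['room', 'clean', 'staff', 'friendly']
```

The letters-only regex also drops purely numeric tokens, mirroring the `isnumeric()` filter in the original.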
Find bigrams and trigrams Identify word pairs and triplets that are above a given count threshold across all reviews. | # Add bigrams to single tokens
bigrammer = Phrases(all_tokens, min_count=20)
trigrammer = Phrases(bigrammer[all_tokens], min_count=20)
# For bigrams and trigrams meeting the min and threshold, add them to the token lists.
for idx in range(len(all_tokens)):
all_tokens.iloc[idx].extend([token for token in trigrammer... | 2020-04-09 15:31:27,829 : INFO : collecting all words and their counts
| MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
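Gensim's `Phrases` joins frequent adjacent word pairs into single tokens. A pure-Python sketch of the same idea (count adjacent pairs across all documents, then greedily merge pairs above a threshold) might look like this — the count threshold and underscore-joining are illustrative, not gensim's actual scoring formula:

```python
from collections import Counter

def merge_bigrams(token_lists, min_count=2):
    """Join adjacent word pairs seen at least min_count times into w1_w2 tokens."""
    pair_counts = Counter(
        (a, b) for tokens in token_lists for a, b in zip(tokens, tokens[1:])
    )
    frequent = {pair for pair, c in pair_counts.items() if c >= min_count}
    merged = []
    for tokens in token_lists:
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in frequent:
                out.append(tokens[i] + "_" + tokens[i + 1])
                i += 2  # greedy merge: consume both words
            else:
                out.append(tokens[i])
                i += 1
        merged.append(out)
    return merged

docs = [["front", "desk", "staff"], ["front", "desk", "slow"], ["nice", "staff"]]
print(merge_bigrams(docs))
# -> [['front_desk', 'staff'], ['front_desk', 'slow'], ['nice', 'staff']]
```

Running the merged output through the same function again would produce trigrams, mirroring the `trigrammer[bigrammer[...]]` chaining above.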
Remove rare and common tokens, and limit vocabulary | dictionary = Dictionary(all_tokens)
dictionary.filter_extremes(no_below=30, no_above=0.5, keep_n=20000)
# Look at the top 100 and bottom 100 tokens
temp = dictionary[0] # Initialize the dict
token_counts = pd.DataFrame(np.array(
[[token_id, dictionary.id2token[token_id], dictionary.cfs[token_id]]
for token... | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
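`Dictionary.filter_extremes` drops tokens that are too rare (document frequency below `no_below`) or too common (document frequency above the `no_above` fraction). A hedged pure-Python sketch of that filtering rule, with small illustrative parameters rather than the notebook's `no_below=30, no_above=0.5`:

```python
from collections import Counter

def filter_extremes(token_lists, no_below=2, no_above=0.5):
    """Keep tokens appearing in >= no_below docs and <= no_above fraction of docs."""
    doc_freq = Counter(t for tokens in token_lists for t in set(tokens))
    n_docs = len(token_lists)
    keep = {t for t, df in doc_freq.items()
            if df >= no_below and df / n_docs <= no_above}
    return [[t for t in tokens if t in keep] for tokens in token_lists]

docs = [["room", "clean"], ["room", "dirty"], ["room", "clean"], ["pool"]]
print(filter_extremes(docs))
# -> [['clean'], [], ['clean'], []]  ("room" too common, "dirty"/"pool" too rare)
```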
Look at two examples before and after preprocessing | happy_idx = np.random.randint(1, len(happy_tokens))
bad_idx = np.random.randint(1, len(bad_tokens))
print('HAPPY before:')
display(happy_reviews['Description'].iloc[happy_idx])
print('HAPPY after:')
display(happy_tokens.iloc[happy_idx])
print('NOT HAPPY before:')
display(bad_reviews['Description'].iloc[bad_idx])
prin... | HAPPY before:
| MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Vectorize with Bag of Words and TF-IDF | bow_corpus = [dictionary.doc2bow(review) for review in all_tokens]
tfidf_model = TfidfModel(bow_corpus)
tfidf_corpus = tfidf_model[bow_corpus]
print('Number of unique tokens: {}'.format(len(dictionary)))
print('Number of documents: {}'.format(len(bow_corpus)))
len(tfidf_corpus) | 2020-04-09 15:32:02,623 : INFO : collecting document frequencies
| MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
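TF-IDF reweights bag-of-words counts by how distinctive a term is across the corpus: the term frequency in a document is multiplied by the inverse document frequency. A minimal sketch using the plain `idf = log(N / df)` variant — gensim's default weighting differs slightly in normalization:

```python
import math
from collections import Counter

def tfidf(docs):
    """Return, per document, {term: tf * log(N / df)} weights."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequencies
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["great", "location"], ["great", "breakfast"], ["noisy", "location"]]
w = tfidf(docs)
# "great" appears in 2 of 3 docs, so it is weighted lower than "breakfast".
print(round(w[1]["great"], 3), round(w[1]["breakfast"], 3))
```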
LDA topic modeling | # Fit a single version of the LDA model.
num_topics = 10
chunksize = 5000
passes = 4
iterations = 200
eval_every = 1 # Evaluate convergence at the end
id2word = dictionary.id2token
lda_model = LdaMulticore(
corpus=tfidf_corpus,
id2word=id2word,
chunksize=chunksize,
alpha='symmetric',
eta='auto',
... | 2020-04-09 15:32:02,990 : INFO : using symmetric alpha at 0.1
| MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Gensim calculates the [intrinsic coherence score](http://qpleple.com/topic-coherence-to-evaluate-topic-models/) for each topic. By averaging across all of the topics in the model you can get an average coherence score. Coherence is a measure of the strength of the association between words in a topic cluster. It is suppo... | # Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence) | Average topic coherence: -1.2838.
| MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
References:- https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.htmlsphx-glr-auto-examples-tutorials-run-lda-py- https://towardsdatascience.com/topic-modeling-and-latent-dirichlet-allocation-in-python-9bf156893c24 | # This code is used to run the .py script from beginning to end in the python interpreter
# with open('python/happy_hotel.py', 'r') as f:
# exec(f.read())
# plt.close('all') | _____no_output_____ | MIT | 07-happy-hotel/python/happy_hotel.ipynb | leslem/insight-data-challenges |
Recommendations via Dimensionality Reduction. All the content discovery approaches we have explored in previous notebooks can be used to do content recommendations. Here we explore yet another approach to do that, but instead of considering a single article as input, we will look at situations where we know that a user ... | from sklearn.decomposition import NMF
import joblib
import json
import numpy as np
import os
import requests
import urllib
DATA_DIR = "../data"
MODEL_DIR = "../models"
SOLR_URL = "http://localhost:8983/solr/nips2index"
FEATURES_DUMP_FILE = os.path.join(DATA_DIR, "comb-features.tsv")
NMF_MODEL_FILE = os.path.join(MODE... | _____no_output_____ | Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |
Extract features from index | query_string = "*:*"
field_list = "id,keywords,authors,orgs"
cursor_mark = "*"
num_docs, num_keywords = 0, 0
doc_keyword_pairs = []
fdump = open(FEATURES_DUMP_FILE, "w")
all_keywords, all_authors, all_orgs = set(), set(), set()
while True:
if num_docs % 1000 == 0:
print("{:d} documents ({:d} keywords, {:d... | 0 documents (0 keywords, 0 authors, 0 orgs) retrieved
1000 documents (1628 keywords, 1347 authors, 159 orgs) retrieved
2000 documents (1756 keywords, 2601 authors, 214 orgs) retrieved
3000 documents (1814 keywords, 3948 authors, 269 orgs) retrieved
4000 documents (1833 keywords, 5210 authors, 311 orgs) retrieved
5000 d... | Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |
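The truncated loop above pages through Solr results with a cursor mark; the control flow can be sketched generically. Here `fetch` is a hypothetical stand-in for the HTTP request (Solr signals the last page by returning the same cursor it was given):

```python
def paginate(fetch, cursor="*"):
    """Cursor-style pagination: fetch(cursor) -> (docs, next_cursor)."""
    while True:
        docs, next_cursor = fetch(cursor)
        yield from docs
        if next_cursor == cursor:  # end is signalled by a repeated cursor
            break
        cursor = next_cursor

# Fake page store standing in for a Solr server (hypothetical data):
pages = {"*": ([1, 2], "a"), "a": ([3], "b"), "b": ([], "b")}
print(list(paginate(pages.get)))  # -> [1, 2, 3]
```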
Build sparse feature vector for documentsThe feature vector for each document will consist of a sparse vector of size 11992 (1847+9719+426). An entry is 1 if the item occurs in the document, 0 otherwise. | def build_lookup_table(item_set):
item2idx = {}
for idx, item in enumerate(item_set):
item2idx[item] = idx
return item2idx
keyword2idx = build_lookup_table(all_keywords)
author2idx = build_lookup_table(all_authors)
org2idx = build_lookup_table(all_orgs)
print(len(keyword2idx), len(author2idx), len(... | (7238, 11992)
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
| Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |
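Given lookup tables like the ones built above, each document's sparse vector is a concatenated one-hot encoding over keywords, authors, and orgs. A self-contained sketch with tiny hypothetical vocabularies (sorted for determinism, unlike the notebook's set iteration):

```python
def build_lookup_table(item_set):
    """Map each item to a stable column index."""
    return {item: idx for idx, item in enumerate(sorted(item_set))}

# Hypothetical tiny vocabularies, not the notebook's 11,992 features.
keyword2idx = build_lookup_table({"lstm", "gan", "svm"})
author2idx = build_lookup_table({"smith", "zhou"})

def featurize(keywords, authors):
    """Concatenated one-hot vector: [keyword block | author block]."""
    vec = [0.0] * (len(keyword2idx) + len(author2idx))
    for k in keywords:
        vec[keyword2idx[k]] = 1.0
    offset = len(keyword2idx)
    for a in authors:
        vec[offset + author2idx[a]] = 1.0
    return vec

print(featurize(["gan"], ["zhou"]))  # -> [1.0, 0.0, 0.0, 0.0, 1.0]
```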
Reduce dimensionality. We reduce the sparse feature vector to a lower-dimensional dense vector, which effectively maps the original vector to a new "taste" vector space. Topic modeling has the same effect. We will use non-negative matrix factorization. The idea here is to factorize the input matrix X into two smaller matrices... | if os.path.exists(NMF_MODEL_FILE):
print("model already generated, loading")
model = joblib.load(NMF_MODEL_FILE)
W = model.transform(X)
H = model.components_
else:
model = NMF(n_components=150, init='random', solver="cd",
verbose=True, random_state=42)
W = model.fit_transfor... | model already generated, loading
violation: 1.0
violation: 0.2411207712867099
violation: 0.0225518954481444
violation: 0.00395945567371017
violation: 0.0004979448419219516
violation: 8.176770536033433e-05
Converged at iteration 6
(7238, 150) (150, 11992)
| Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |