We can also convert to a different time scale, for instance from UTC to TT. This uses the same attribute mechanism as above but now returns a new `Time` object:
t2 = t.tt
t2
t2.jd
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
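The size of the scale offset is easy to check by hand: TT runs ahead of UTC by a fixed 32.184 s (TT − TAI) plus the accumulated leap seconds (TAI − UTC), which is 37 s for any date after 2017-01-01. A quick pure-Python sanity check, independent of astropy:

```python
# TT - TAI is a fixed 32.184 s; TAI - UTC is the leap-second count,
# 37 s for any date after 2017-01-01.
TT_MINUS_TAI = 32.184
TAI_MINUS_UTC = 37  # leap seconds, valid post-2017

tt_minus_utc = TT_MINUS_TAI + TAI_MINUS_UTC   # about 69.18 s
jd_difference = tt_minus_utc / 86400.0        # the shift you see between t.jd and t2.jd
```

So for recent dates the JD of `t2` exceeds that of `t` by roughly 0.0008 day.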
Note that both the ISO (ISOT) and JD representations of t2 are different than for t because they are expressed relative to the TT time scale. Of course, from the numbers or strings one could not tell; one format in which this information is kept is the `fits` format:
print(t2.fits)
Sidereal Time

Apparent or mean sidereal time can be calculated using `sidereal_time()`. The method returns a `Longitude` with units of hourangle, which by default is for the longitude corresponding to the location with which the `Time` object is initialized. Like the scale transformations, ERFA C-library routines are u...
t = Time('2006-01-15 21:24:37.5', scale='utc', location=('120d', '45d'))
t.sidereal_time('mean')
t.sidereal_time('apparent')
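Under the hood astropy evaluates the full ERFA series, but the leading behaviour of mean sidereal time is captured by the short USNO linear approximation below — a sketch, not what astropy actually calls, and only good to a fraction of a second over a few decades around J2000:

```python
def gmst_hours(jd_ut1):
    """Approximate Greenwich mean sidereal time in hours (USNO linear formula)."""
    d = jd_ut1 - 2451545.0  # days since J2000.0
    gmst = 18.697374558 + 24.06570982441908 * d
    return gmst % 24.0

# At J2000.0 the approximation gives ~18.697 h
print(round(gmst_hours(2451545.0), 3))
```

For the *local* mean sidereal time at longitude 120° E you would then add 120/15 = 8 h (mod 24), which is what the `location` argument handles for you.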
Time Deltas

Simple time arithmetic is supported using the TimeDelta class. The following operations are available:

* Create a TimeDelta explicitly by instantiating a class object
* Create a TimeDelta by subtracting two Times
* Add a TimeDelta to a Time object to get a new Time
* Subtract a TimeDelta from a Time object to g...
t1 = Time('2010-01-01 00:00:00')
t2 = Time('2010-02-01 00:00:00')
dt = t2 - t1  # Difference between two Times
dt
dt.sec

from astropy.time import TimeDelta
dt2 = TimeDelta(50.0, format='sec')
t3 = t2 + dt2  # Add a TimeDelta to a Time
t3
Timezones

When a Time object is constructed from a timezone-aware `datetime`, no timezone information is saved in the `Time` object. However, `Time` objects can be converted to timezone-aware datetime objects:
from datetime import datetime
from astropy.time import Time, TimezoneInfo
import astropy.units as u

utc_plus_one_hour = TimezoneInfo(utc_offset=1*u.hour)
dt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one_hour)
t = Time(dt_aware)  # Loses timezone info, converts to UTC
print(t)  # will return UTC pr...
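The ingestion step — converting an aware `datetime` to UTC — can be reproduced with only the standard library, which is effectively what `Time` does on construction:

```python
from datetime import datetime, timezone, timedelta

utc_plus_one = timezone(timedelta(hours=1))
dt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one)

# Converting to UTC drops one hour:
# 2000-01-01 00:00 +01:00 is the same instant as 1999-12-31 23:00 UTC
dt_utc = dt_aware.astimezone(timezone.utc)
print(dt_utc)  # 1999-12-31 23:00:00+00:00
```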
Model parameters
# Model parameters
BATCH_SIZE = 64
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 64
WIDTH = 64
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 5
DECAY_DROP = 0.5
DECAY_EPOCHS = 10
classes = list(map(str, range(N_CLASSES)))

def f2_score_thr(threshold=0.5):
    def f2_score(y_true, y_pred):
        beta = 2
        y_p...
Found 81928 images belonging to 1103 classes. Found 27309 images belonging to 1103 classes. Found 7443 images.
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
Model
print(os.listdir("../input/nasnetlarge"))

def create_model(input_shape, n_out):
    input_tensor = Input(shape=input_shape)
    base_model = applications.NASNetLarge(weights=None, include_top=False, input_tensor=input_tensor)
    base_model.load_weights('../input/nasnetlarge/NASNet-large-no-top.h5')
    ...
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From /o...
Train top layers
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=STEP_SIZE_TRAIN,
                              validation_data=valid_generator,
                              ...
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. Epoch 1/30 - 579s - loss: 0.0435 - acc: 0.9903 - categorical_accur...
Complete model graph loss
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')...
Find best threshold value
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
    im, lbl = next(valid_generator)
    scores = model.predict(im, batch_size=valid_generator.batch_size)
    lastFullValPred = np.append(lastFullValPred, scores, axis=0)
    lastFullValLabels = n...
thr=0.050 F2=0.211
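The sweep above searches for the threshold that maximizes the F-beta score with beta = 2 (recall weighted four times as heavily as precision). A self-contained toy version of the same search, with made-up scores and labels and plain Python instead of the notebook's Keras helpers:

```python
def f2_score(y_true, y_pred):
    """F-beta with beta=2 for binary 0/1 sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    beta2 = 4  # beta ** 2
    denom = (1 + beta2) * tp + beta2 * fn + fp
    return (1 + beta2) * tp / denom if denom else 0.0

# made-up per-class scores and ground truth, just to show the sweep
scores = [0.9, 0.4, 0.2, 0.7]
labels = [1, 1, 0, 0]
best_f2, best_thr = max(
    (f2_score(labels, [s > thr for s in scores]), thr)
    for thr in [0.05, 0.3, 0.5, 0.8]
)
```

The notebook does the same thing over the full validation predictions, one threshold candidate at a time.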
Apply model to test set and output predictions
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = []
for pred_ar in preds:
    valid = []
    for idx, pred in enumerate(pred_ar):
        if pred > threshold:
            valid.append(idx)
    if len(v...
Civil Air Patrol and Georegistration

In the previous session, we saw how a series of 2D images taken of a 3D scene can be used to recover the 3D information, by exploiting geometric constraints of the cameras. Now the question is, how do we take this technique and apply it in a disaster response scenario?

We are going t...
import sys
import open3d as o3d
import json
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import cv2
import os

# Take initial guess of intrinsic parameters through metadata
!opensfm extract_metadata CAP_sample_1
# Detect feature points
!opensfm detect_features CAP_sample_1
# Match feat...
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
So what are we seeing? We see two collections of points, both mostly coplanar internally (which we expect, given that this is a mostly planar scene), but the two sets are not aligned with each other! Let's look a bit more closely...
# here, we're just going to plot the z (altitude) values of the reconstructed points
point_coord = np.asarray(pcd.points)
plt.hist(point_coord[:, 2].ravel())
plt.show()
So not only are the points misaligned, but we're getting wild altitude values! **What's going on?**

Exercise

Let's make a critical assumption: the image coordinates (the GPS coordinates of the camera as it takes each image) all lie on a plane (in the mathematical sense). Answer the following questions:

- How many p...
# This creates "tracks" for the features. That is to say, if a feature in image 1 is matched with one in image 2,
# and in turn that one is matched with one in image 3, then it links the matches between 1 and 3.
!opensfm create_tracks CAP_sample_1
# Calculates the essential matrix, the camera pose and the reconstruct...
2020-07-17 15:50:42,846 INFO: reading features 2020-07-17 15:50:42,900 DEBUG: Merging features onto tracks 2020-07-17 15:50:42,943 DEBUG: Good tracks: 3429 2020-07-17 15:50:45,033 INFO: Starting incremental reconstruction 2020-07-17 15:50:45,081 INFO: Starting reconstruction with image_url_pr_10_13_sample_11.jpg and im...
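For the exercise above, a useful fact: three non-collinear points are enough to determine a plane. A minimal sketch of recovering the plane through three points with a cross product (toy coordinates, not the real camera positions):

```python
def plane_from_points(p1, p2, p3):
    """Return (a, b, c, d) with a*x + b*y + c*z + d = 0 through three 3D points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    # the plane normal is the cross product u x v
    a = uy * vz - uz * vy
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

# three camera positions assumed to lie at the same altitude -> horizontal plane z = 100
print(plane_from_points((0, 0, 100), (1, 0, 100), (0, 1, 100)))  # (0, 0, 1, -100)
```

With more than three (noisy) points you would fit the plane in a least-squares sense instead, but the degrees-of-freedom count is the same.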
Georegistration

The process of assigning GPS coordinates to individual pixels is called _georegistration_ or _georeferencing_. This requires us to perform a final transformation from pixel coordinates *for each image* to the 3D reconstructed coordinates. Before doing so, it is worthwhile talking a bit about what exactl...
# Origin of our reconstruction, as given by the reference_lla.json (made from the reconstruction)
with open("CAP_sample_1/reference_lla.json", "r") as f:
    reference_lla = json.load(f)
latitude = reference_lla["latitude"]
longitude = reference_lla["longitude"]
altitude = reference_lla["altitude"]
# This is the...
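The reconstruction's coordinates are ENU (east-north-up) offsets in meters about this reference origin. `pymap3d.enu2geodetic` performs the conversion exactly on the WGS84 ellipsoid; for intuition, here is a flat-earth approximation of the same step (an assumed simplification — the 111,320 m-per-degree constant is approximate and only reasonable for offsets of a few km):

```python
import math

def enu_to_geodetic_approx(e, n, u, lat0, lon0, alt0):
    """Flat-earth ENU offset (meters) -> lat/lon/alt around (lat0, lon0, alt0)."""
    m_per_deg_lat = 111_320.0  # meters per degree of latitude
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))  # shrinks with latitude
    return lat0 + n / m_per_deg_lat, lon0 + e / m_per_deg_lon, alt0 + u

# 1113.2 m north is about 0.01 degree of latitude
lat, lon, alt = enu_to_geodetic_approx(0.0, 1113.2, 10.0, 30.0, -90.0, 5.0)
```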
There is a bit of work we need to go through to finalize the georegistration. First, we need to match the reconstructed features with the features on an image; the tracks.csv file and the reconstruction.json can help us do that. The columns of tracks are as follows: image name, track ID (ID of the reconstructed point), ...
from opensfm.features import denormalized_image_coordinates

# reading the csv
tracks = pd.read_csv("CAP_sample_1/tracks.csv", sep="\t", skiprows=1,
                     names=["image_name", "track_id", "feature_id", "x", "y", "s", "R", "G", "B"])
# we need to denormalize the coordinates to turn them into regular pixel coordinates
normali...
We're going to store the georegistration by creating a new .tif file for every CAP image. As you can recall, .tif files save not just the pixel data but also the projection that allows it to be displayed on top of other map data. There are two parts to doing this:

- First, we need to create an _orthorectified_ image. Si...
import shutil
import gdal, osr
try:
    from pymap3d import enu2geodetic
except:
    !pip install pymap3d
    from pymap3d import enu2geodetic
import random
from skimage import transform

if not os.path.isdir("CAP_sample_1/geotiff/"):
    os.mkdir("CAP_sample_1/geotiff/")
if not os.path.isdir("CAP_sample_1/ortho/"):
    ...
Pre-process input data for coastal variable extraction

Author: Emily Sturdivant; esturdivant@usgs.gov

***

Pre-process files to be used in extractor.ipynb (Extract barrier island metrics along transects). See the project [README](https://github.com/esturdivant-usgs/BI-geomorph-extraction/blob/master/README.md) and the Met...
import os
import sys
import pandas as pd
import numpy as np
import arcpy
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

try:
    import core.functions_warcpy as fwa
    import core.functions as fun
except ImportError as err:
    print("Looks like you need to install the module to your ...
No module named 'CoastalVarExtractor' Looks like you need to install the module to your ArcGIS environment. To do so: pip install git+https://github.com/esturdivant-usgs/BI-geomorph-extraction.git
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
If you don't want to formally install the module, you'll need to add the path to the package to your system path:

```python
mod_path = r"path\to\dir\BI-geomorph-extraction"  # replace with path to module
sys.path.insert(0, mod_path)
import CoastalVarExtractor.functions_warcpy as fwa
```

Initialize variables

Based on the proj...
from core.setvars import *

# Inputs - vector
orig_trans = os.path.join(arcpy.env.workspace, 'DelmarvaS_SVA_LT')
ShorelinePts = os.path.join(home, 'SLpts')
dlPts = os.path.join(home, 'DLpts')
dhPts = os.path.join(home, 'DHpts')

# Inputs - raster
elevGrid = os.path.join(home, 'DEM')
elevGrid_5m = os.path.join(home, 'DEM...
Dunes and armoring

Display the points and the DEM in a GIS to check for irregularities. For example, if shoreline points representing a distance less than X m are visually offset from the general shoreline, they should likely be removed. Another red flag is when the positions of dlows and dhighs in relation to the sho...
fwa.ReplaceValueInFC(dhPts, oldvalue=fill, newvalue=None, fields=["dhigh_z"])  # Dhighs
fwa.ReplaceValueInFC(dlPts, oldvalue=fill, newvalue=None, fields=["dlow_z"])  # Dlows
fwa.ReplaceValueInFC(ShorelinePts, oldvalue=fill, newvalue=None, fields=["slope"])  # Shoreline
Project to UTM if not already projected. If this happens, we need to change the file name for future processing. If desired, delete dune points with missing Z values; this is not strictly necessary, because you can choose to exclude those points from the beach width calculation.
# Delete points with fill elevation value from dune crests
fmapdict = fwa.find_similar_fields('DH', dhPts, fields=['_z'])
arcpy.CopyFeatures_management(dhPts, dhPts+'_orig')
fwa.DeleteFeaturesByValue(dhPts, [fmapdict['_z']['src']], deletevalue=-99999)
print('Deleted dune crest points with fill elevation values. Or...
Armoring

If the dlows do not capture the entire top-of-beach due to atypical formations caused by anthropogenic modification, you may need to digitize the beachfront armoring. The next code block will generate an empty feature class. Refer to the DEM and orthoimagery. If there is no armoring in the study area, continue...
arcpy.CreateFeatureclass_management(home, os.path.basename(armorLines), 'POLYLINE', spatial_reference=utmSR)
print("{} created. Now manually digitize the shorefront armoring.".format(armorLines))
Inlets

We also need to manually digitize inlets if an inlet delineation does not already exist. To do so, the code below will produce the feature class. After that, use the Editing toolbar to create a line where the oceanside shore meets a tidal inlet. If the study area includes both sides of an inlet, that inlet will be...
# manually create lines that correspond to end of land and cross the MHW line (use bndpoly/DEM)
arcpy.CreateFeatureclass_management(home, os.path.basename(inletLines), 'POLYLINE', spatial_reference=utmSR)
print("{} created. Now we'll stop for you to manually create lines at each inlet.".format(inletLines))
Shoreline

The shoreline is produced through a combination of the DEM and the shoreline points. The first step converts the DEM to both MTL and MHW contour polygons. Those polygons are combined to produce the full shoreline, which is considered to fall at MHW on the oceanside and MTL on the bayside (to include partially...
SA_bounds = 'SA_bounds'
bndpoly = fwa.DEMtoFullShorelinePoly(elevGrid_5m, sitevals['MTL'], sitevals['MHW'], inletLines, ShorelinePts)
print('Select features from {} that should not be included in the final shoreline polygon. '.format(bndpoly))
Creating the MTL contour polgon from the DEM... Creating the MHW contour polgon from the DEM... Combining the two polygons... Isolating the above-MTL portion of the polygon to the bayside... User input required! Select extra features in bndpoly for deletion. Recommended technique: select the polygon/s to keep...
*__Requires display in GIS__*

User input is required to identify only the areas within the study area and eliminate isolated landmasses that are not. Once the features to delete are selected, either delete in the GIS or run the code below. Make sure the bndpoly variable matches the layer name in the GIS.

__Do not...__ se...
bndpoly = 'bndpoly'
barrierBoundary = fwa.NewBNDpoly(bndpoly, ShorelinePts, barrierBoundary, '25 METERS', '50 METERS')
shoreline = fwa.CreateShoreBetweenInlets(barrierBoundary, inletLines, shoreline, ShorelinePts, proj_code)
Splitting \\Mac\stor\Projects\TransectExtraction\FireIsland2010\FireIsland2010.gdb\bndpoly_2sl_edited at inlets... Preserving only those line segments that intersect shoreline points... Dissolving the line to create \\Mac\stor\Projects\TransectExtraction\FireIsland2010\FireIsland2010.gdb\ShoreBetweenInlets...
After this step, you'll want to make sure the shoreline looks okay. There should be only one line segment for each stretch of shore between two inlets. Segments may be incorrectly deleted if the shoreline points are missing in the area. Segments may be incorrectly preserved if they intersect a shoreline point. To r...
# Delete transects over 200 m outside of the study area.
if input("Need to remove extra transects? 'y' if barrierBoundary should be used to select. ") == 'y':
    fwa.RemoveTransectsOutsideBounds(orig_trans, barrierBoundary)
trans_extended = fwa.ExtendLine(orig_trans, os.path.join(arcpy.env.scratchGDB, 'trans_ext_temp'...
Need to remove extra transects? 'y' if barrierBoundary exists and should be used to select. y \\Mac\stor\Projects\TransectExtraction\Fisherman2014\Fisherman2014.gdb\DelmarvaS_SVA_LT is already projected in UTM. MANUALLY: use groups of existing transects in new FC '\\Mac\stor\Projects\TransectExtraction\Fisherman2014\sc...
*__Requires manipulation in GIS__*

1. Edit the trans_presort_temp feature class. __Move and rotate__ groups of transects to fill in gaps that are greater than 50 m alongshore. There is no need to preserve the original transects, but avoid overlapping the transects with each other and with the originals. Do not move any ...
fwa.RemoveDuplicates(trans_presort, trans_extended, barrierBoundary)
2. Sort the transects along the shore

Usually if the shoreline curves, we need to identify different groups of transects for sorting. This is because the GIS will not correctly establish the alongshore order by simple ordering from the identified sort_corner. If this is the case, answer __yes__ to the next prompt.
sort_lines = fwa.SortTransectPrep(spatialref=utmSR)
Do we need to sort the transects in batches to preserve the order? (y/n) y MANUALLY: Add features to sort_lines. Indicate the order of use in 'sort' and the sort corner in 'sort_corn'.
*__Requires manipulation in GIS__*

The last step generated an empty sort lines feature class if you indicated that transects need to be sorted in batches to preserve the order. Now, the user creates lines that will be used to spatially sort transects in groups.

For each group of transects:

1. __Create a new line__ in 'so...
fwa.SortTransectsFromSortLines(trans_presort, extendedTrans, sort_lines, tID_fld)
Creating new feature class \\Mac\stor\Projects\TransectExtraction\Fisherman2014\Fisherman2014.gdb\extTrans to hold sorted transects... Sorting sort lines by field sort... For each line, creating subset of transects and adding them in order to the new FC... Copying the generated OID values to the transect ID field (sort...
3. Tidy the extended (and sorted) transects to remove overlap

*__Requires manipulation in GIS__*

Overlapping transects cause problems during conversion to 5-m points and to rasters. We create a separate feature class with the 'tidied' transects, in which the lines don't overlap. This is largely a manual process with t...
overlapTrans_lines = os.path.join(arcpy.env.scratchGDB, 'overlapTrans_lines_temp')
if not arcpy.Exists(overlapTrans_lines):
    overlapTrans_lines = input("Filename of the feature class of only 'boundary' transects: ")
trans_x = arcpy.Intersect_analysis([extendedTrans, overlapTrans_lines],
                              ...
Importing all the required libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import BaggingClassifier
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
Reading the training dataset to Pandas DataFrame
data = pd.read_csv('train.csv')
data.head()
Storing the target variable in Y
Y = data['Severity']
Y.shape
Dropping the irrelevant columns from the training data
data = data.drop(columns=['Severity','Accident_ID','Accident_Type_Code','Adverse_Weather_Metric'], axis=1)
data.head()
Creating the LabelEncoder object, which will encode the target severities in numerical form
label_encode = LabelEncoder()
y = label_encode.fit_transform(Y)
x_train, x_test, y_train, y_test = train_test_split(data, y, test_size=0.3)
bag = BaggingClassifier(n_estimators=100)
bag.fit(x_train, y_train)  # fit on the training split only; fitting on all of `data` would leak the test set
predictions = bag.predict(x_test)
accuracy_score(y_test, predictions)
test_data = pd.read_csv('test.csv')
accident_id = test_da...
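What `LabelEncoder` does can be written in a few lines: assign each distinct class an integer (in sorted order), then map every label through that table. A toy sketch with made-up severities, not the real dataset:

```python
severities = ['Minor', 'Severe', 'Minor', 'Fatal', 'Severe']

# "fit": assign an integer to each distinct class, in sorted order like LabelEncoder
classes = sorted(set(severities))
to_int = {c: i for i, c in enumerate(classes)}

# "transform": map each label to its integer
encoded = [to_int[s] for s in severities]
print(encoded)  # [1, 2, 1, 0, 2]
```

`label_encode.inverse_transform` is then just the reverse lookup `classes[i]`.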
"Chapter 1: Data Model"> Introduction about what "Pythonic" means.- toc:true- badges: true- author: JJmachan Pythonic Card DeckTo undertant how python works as a framework it is crutial that you get the Python Data Model. Python is very consistent and by that I mean that once you have some experince with the language...
import collections

# namedtuple - tuples with names for each value in it (much like a class)
Card = collections.namedtuple('Card', ['rank', 'suit'])
c = Card('7', 'diamonds')  # individual card object
print(c)
print(c.rank, c.suit)

# class to represent the deck of cards
class FrenchDeck:
    ranks = [str(n) for n in r...
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
Now we have created a class FrenchDeck that is short but still packs a punch. All the basic operations are supported. Now imagine we have another use case: picking a random card. Normally we would add another method, but in this case we can use Python's existing library function `random.choice()`.
from random import choice

choice(deck)
> We've just seen two advantages of using special methods to leverage the Python data model:
> 1. The users of your classes don't have to memorize arbitrary method names for standard operations ("How to get the number of items? Is it .size(), .length(), or what?").
> 2. It's easier to benefit from the rich Python standa...
# because of __getitem__, our deck is now sliceable
deck[1:5]

# because of __getitem__, it is iterable
for card in deck:
    if card.rank == 'K':
        print(card)

# iteration is often implicit, hence if the collection has no __contains__ method
# the in operator does a sequential scan.
Card('Q', 'spades') in deck
Card('...
We can also make use of the built-in `sorted()` function. We just need to provide a function that supplies the value of each card. Here the logic is provided in `spades_high`.
suit_value = dict(spades=3, hearts=2, diamonds=1, clubs=0)

def spades_high(card):
    rank_value = FrenchDeck.ranks.index(card.rank)
    return rank_value*len(suit_value) + suit_value[card.suit]

for card in sorted(deck, key=spades_high)[:10]:
    print(card)
Card(rank='2', suit='clubs') Card(rank='2', suit='diamonds') Card(rank='2', suit='hearts') Card(rank='2', suit='spades') Card(rank='3', suit='clubs') Card(rank='3', suit='diamonds') Card(rank='3', suit='hearts') Card(rank='3', suit='spades') Card(rank='4', suit='clubs') Card(rank='4', suit='diamonds')
> Although FrenchDeck implicitly inherits from object, its functionality is not inherited, but comes from leveraging the data model and composition. By implementing the special methods `__len__` and `__getitem__`, our FrenchDeck behaves like a standard Python sequence, allowing it to benefit from core language features (...
from math import hypot

class Vector:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __repr__(self):
        return 'Vector(%d, %d)' % (self.x, self.y)

    def __abs__(self):
        return hypot(self.x, self.y)

    def __bool__(self):
        return bool(self.x or...
Vector(3, 4) 5.0 Vector(6, 8) Vector(3, 4)
As you can see, we implemented many special methods, but we don't directly invoke them. The special methods are meant to be invoked by the interpreter most of the time, unless you are doing a lot of metaprogramming.
bool(a)
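A minimal illustration of that dispatch (a toy `Bag` class, not from the book): we implement `__len__` but never call it ourselves — the `len()` built-in invokes it for us:

```python
class Bag:
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        # invoked by the interpreter when len(bag) is evaluated
        return len(self._items)

bag = Bag(['a', 'b', 'c'])
print(len(bag))  # 3
```

The same goes for `bool(a)` above: the interpreter looks for `__bool__` (falling back to `__len__`) rather than us calling either directly.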
String Representation

We use the `__repr__` special method to get the built-in string representation of the object for inspection (note the usage in the `Vector` object). Contrast `__repr__` with `__str__`, which is called by `str()` and is used to return a string for display to ...
class Test:
    def __init__(self, x):
        self.x = x

t = Test(0)
t, bool(t)

class Test:
    def __init__(self, x):
        self.x = x

    def __bool__(self):
        return bool(self.x)

t = Test(0)
t, bool(t)
!rm -r sample_data/

import pandas as pd
pd.set_option('display.max_columns', None)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.cluster.hi...
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
UTILS
# PCA class derived from sklearn standard PCA package
# code adapted from: https://github.com/A-Jyad/NBAPlayerClustering
class PCA_adv:
    def __init__(self, data, var_per):
        self.data = data
        self.pca = PCA(var_per, random_state = 0)
        self.PCA = self.pca.fit(self.Standard_Scaler_Preprocess().drop...
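The `PCA_adv` wrapper ultimately delegates to sklearn's `PCA`, whose core step is an SVD of the centered data. A minimal numpy sketch of just that step (a toy matrix, not the NBA data):

```python
import numpy as np

def pca_transform(X, n_components):
    """Project X onto its first n_components principal components via SVD."""
    Xc = X - X.mean(axis=0)                  # center each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained_var = S**2 / (len(X) - 1)      # eigenvalues of the covariance matrix
    return Xc @ Vt[:n_components].T, explained_var

X = np.array([[2.0, 0.0], [0.0, 1.0], [4.0, 1.0], [2.0, 2.0]])
scores, var = pca_transform(X, 1)          # scores: (4, 1) projection onto PC1
```

The explained variances sum to the total variance of the (centered) columns, which is the quantity the `var_per` argument thresholds.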
DATA LOADING
data = pd.read_csv('Data/career.csv')  # csv file with the career averages of all players who played more than 10 seasons
data.drop(['Unnamed: 0'], axis=1, inplace=True)  # csv conversion automatically creates an index column which is not needed
data.head()
PCA Analysis
pca = PCA_adv(data, 0.89)  # create PCA object that covers 89% of the variance
pca.PCA_variance()
pca_df = pca.PCA_transform(4)  # run PCA for the first 4 components
pca.Heatmap()  # heatmap of the PCs and variables
pca.PCA_sorted_eigen('PC1')[:10]  # eigenvalues for PC1
pc1 = pca_df[['PLAYER','POSITION','PC1']].copy()
pc1...
K-MEANS
# elbow test for K-means to predict appropriate number of clusters
from sklearn.cluster import KMeans

Sum_of_squared_distances = []
K = range(1,20)
for k in K:
    km = KMeans(n_clusters=k)
    km = km.fit(num_data_scaled)
    Sum_of_squared_distances.append(km.inertia_)
plt.plot(K, Sum_of_squared_distances, 'bx-')
plt...
Complete Hierarchy
data_scaled_c = data_scaled.copy()
# run complete linkage clustering
complete = Cluster(num_data_scaled, 'complete')
complete.dendrogram_truncated(15, 5, 6.2)  # plot dendrogram
complete.elbow_plot(15)  # elbow and silhouette plot
# Calculate Complete Clusters
data_scaled_c['complete_cluster'] = complete.create_cluster(6...
SINGLE
data_scaled_s = data_scaled.copy()
# run single linkage clustering
single = Cluster(num_data_scaled, 'single')
single.dendrogram_truncated(15)  # plot dendrogram
single.elbow_plot(15)  # elbow and silhouette plot
# Inadequate for the given data (all players fall in one cluster)
data_scaled_s['single_cluster'] = single.cr...
Average
data_scaled_a = data_scaled.copy()
# run average linkage clustering
average = Cluster(num_data_scaled, 'average')
average.dendrogram_truncated(15, 3, 4)  # plot dendrogram
average.elbow_plot(15)  # silhouette and elbow plot
# Inadequate for the given data
data_scaled_a['average_cluster'] = average.create_cluster(3.5)
dat...
WARD method
# calculate ward linkage
data_scaled_w = data_scaled.copy()
ward = Cluster(num_data_scaled, 'ward')
ward.dendrogram_truncated(15, 5, 11)
# calculate elbow and silhouette plots
ward.elbow_plot(15)
# Cluster the data
data_scaled_w['ward_cluster'] = ward.create_cluster(10)
data_scaled_w['ward_cluster'].value_counts().sort...
Practice Notebook - Putting It All Together

Hello, coders! Below we have code similar to what we wrote in the last video. Go ahead and run the following cell that defines our `get_event_date`, `current_users` and `generate_report` methods.
def get_event_date(event):
    return event.date

def current_users(events):
    events.sort(key=get_event_date)
    machines = {}
    for event in events:
        if event.machine not in machines:
            machines[event.machine] = set()
        if event.type == "login":
            machines[event.machine].add(event.use...
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
No output should be generated from running the custom function definitions above. To check that our code is doing everything it's supposed to do, we need an `Event` class. The code in the next cell below initializes our `Event` class. Go ahead and run this cell next.
class Event:
    def __init__(self, event_date, event_type, machine_name, user):
        self.date = event_date
        self.type = event_type
        self.machine = machine_name
        self.user = user
Ok, we have an `Event` class that has a constructor and sets the necessary attributes. Next let's create some events and add them to a list by running the following cell.
events = [
    Event('2020-01-21 12:45:56', 'login', 'myworkstation.local', 'jordan'),
    Event('2020-01-22 15:53:42', 'logout', 'webserver.local', 'jordan'),
    Event('2020-01-21 18:53:21', 'login', 'webserver.local', 'lane'),
    Event('2020-01-22 10:25:34', 'logout', 'myworkstation.local', 'jordan'),
    Event('20...
_____no_output_____
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Now we've got a bunch of events. Let's feed these events into our `current_users` function and see what happens.
users = current_users(events) print(users)
{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Uh oh. The code in the previous cell produces an error message. This is because we have a user in our `events` list that was logged out of a machine he was not logged into. Do you see which user this is? Make edits to the first cell containing our custom function definitions to see if you can fix this error message. ...
generate_report(users)
{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan {'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
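One likely cause of the error discussed above is calling `set.remove` for a user who logs out without a matching login, which raises `KeyError`. A minimal sketch of a tolerant variant, using a `namedtuple` stand-in for the `Event` class (the function name here is illustrative, not from the course):

```python
import collections

# Stand-in for the notebook's Event class (same attribute names)
Event = collections.namedtuple("Event", ["date", "type", "machine", "user"])

def current_users_safe(events):
    """Like current_users, but tolerant of a logout with no prior login."""
    events.sort(key=lambda event: event.date)
    machines = {}
    for event in events:
        machines.setdefault(event.machine, set())
        if event.type == "login":
            machines[event.machine].add(event.user)
        elif event.type == "logout":
            # discard() is a no-op when the user is absent; remove() would raise KeyError
            machines[event.machine].discard(event.user)
    return machines

demo = current_users_safe([
    Event("2020-01-21 18:53:21", "logout", "webserver.local", "lane"),
    Event("2020-01-21 12:45:56", "login", "myworkstation.local", "jordan"),
])
```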
Master Data Science for Business - Data Science Consulting - Session 2 Notebook 2: Web Scraping with Scrapy: Getting reviews from TripAdvisorTo Do (note for Cap): -Remove the parts of the code that the students should complete on their own 1. Importing packages
import scrapy from scrapy.crawler import CrawlerProcess from scrapy.spiders import CrawlSpider, Rule from scrapy.selector import Selector import sys from scrapy.http import Request from scrapy.linkextractors import LinkExtractor import json import logging import pandas as pd
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
2. Some classes and functions
# -*- coding: utf-8 -*- # Define here the models for your scraped items # # See documentation in: # https://doc.scrapy.org/en/latest/topics/items.html class HotelreviewsItem(scrapy.Item): # define the fields for your item here like: rating = scrapy.Field() review = scrapy.Field() title = scrapy.Field(...
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
2. Creating the JSON pipeline
#JSon pipeline, you can rename the "trust.jl" to the name of your choice class JsonWriterPipeline(object): def open_spider(self, spider): self.file = open('tripadvisor.jl', 'w') def close_spider(self, spider): self.file.close() def process_item(self, item, spider): line = json.dum...
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
3. SpiderWhen you visit a TripAdvisor page, you will see 5 reviews per page. Reviews are not fully displayed on the page, so you have to open them (i.e. follow the link of each review to tell Scrapy to scrape that page). This means we will use 2 parsing functions: -The first one will go on the page of th...
class MySpider(CrawlSpider): name = 'BasicSpider' domain_url = "https://www.tripadvisor.com" # allowed_domains = ["https://www.tripadvisor.com"] start_urls = [ "https://www.tripadvisor.fr/ShowUserReviews-g1573379-d1573383-r629218790-Center_Parcs_Les_Trois_Forets-Hattigny_Moselle_Grand_Est.html"...
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
4. Crawling
process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' }) process.crawl(MySpider) process.start()
2019-01-14 16:37:44 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot) 2019-01-14 16:37:44 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.2 (default, Jan 2 2019, 17:07:39) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 ...
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
5. Importing and reading the scraped dataIf you've succeeded, you should see a dataframe here with 2 entries corresponding to the first 2 reviews of the parc, and 11 columns for each item scraped.
dfjson = pd.read_json('tripadvisor.json') #Previewing DF dfjson.head() dfjson.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2 entries, 0 to 1 Data columns (total 11 columns): hotel_name 2 non-null object hotel_type 2 non-null object price_range 2 non-null object published_date 2 non-null object rating 2 non-null int64 review 2 non-null objec...
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
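Note that `JsonWriterPipeline` above writes one JSON object per line (a `.jl`, JSON-lines file). A stdlib sketch of parsing such a stream without pandas (the in-memory stream and field names below are illustrative):

```python
import io
import json

def read_json_lines(fp):
    """Parse a JSON-lines stream (one JSON object per line) into a list of dicts."""
    return [json.loads(line) for line in fp if line.strip()]

# In-memory stand-in for the scraped .jl file
sample = io.StringIO('{"rating": 5, "title": "Great"}\n{"rating": 2, "title": "Meh"}\n')
records = read_json_lines(sample)
```

With pandas available, `pd.read_json(path, lines=True)` achieves the same in one call.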
Deep Reinforcement Learning for Stock Trading from Scratch: Single Stock TradingTutorials to use OpenAI DRL to trade single stock in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantita...
## install finrl library !pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-gpm5bcb4 Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-gpm5bcb4 Requirement already satisfied (use --upgrade to u...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.2. Check that the needed additional packages are present; if not, install them. * Yahoo Finance API* pandas* numpy* matplotlib* stockstats* OpenAI gym* stable-baselines* tensorflow* pyfolio
import pkg_resources import pip installedPackages = {pkg.key for pkg in pkg_resources.working_set} required = {'yfinance', 'pandas', 'matplotlib', 'stockstats','stable-baselines','gym','tensorflow'} missing = required - installedPackages if missing: !pip install yfinance !pip install pandas !pip install mat...
Requirement already satisfied: yfinance in /usr/local/lib/python3.6/dist-packages (0.1.55) Requirement already satisfied: lxml>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from yfinance) (4.6.1) Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from yfinance) (2.23.0) Require...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.3. Import Packages
import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.use('Agg') import datetime from finrl.config import config from finrl.marketdata.yahoodownloader import YahooDownloader from finrl.preprocessing.preprocessors import FeatureEngineer from finrl.preprocessing.data import ...
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.4. Create Folders
import os if not os.path.exists("./" + config.DATA_SAVE_DIR): os.makedirs("./" + config.DATA_SAVE_DIR) if not os.path.exists("./" + config.TRAINED_MODEL_DIR): os.makedirs("./" + config.TRAINED_MODEL_DIR) if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR): os.makedirs("./" + config.TENSORBOARD_LOG_DIR)...
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 3. Download DataYahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API* Call Limit: Using the Public API (without authentication), you are limited to 2...
# from config.py start_date is a string config.START_DATE # from config.py end_date is a string config.END_DATE
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
`ticker_list` is a list of stock tickers; in the single-stock trading case, the list contains only 1 ticker
# Download and save the data in a pandas DataFrame: data_df = YahooDownloader(start_date = config.START_DATE, end_date = config.END_DATE, ticker_list = ['AAPL']).fetch_data() data_df.shape data_df.head()
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 4. Preprocess DataData preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.* FinRL uses a class **FeatureEngineer** to preprocess the data* Add **technical indicators*...
## we store the stockstats technical indicator column names in config.py tech_indicator_list=config.TECHNICAL_INDICATORS_LIST print(tech_indicator_list) ## user can add more technical indicators ## check https://github.com/jealous/stockstats for different names tech_indicator_list=tech_indicator_list+['kdjk','open_2_sm...
['macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix']
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
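The technical indicators above (e.g. `open_2_sma`) are rolling statistics of the price series. As an illustrative stdlib sketch of what a simple moving average indicator computes (this is not stockstats' or FinRL's implementation):

```python
def simple_moving_average(values, window):
    """Rolling mean; entries before a full window average over what is available."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

prices = [10.0, 12.0, 11.0, 13.0]
sma2 = simple_moving_average(prices, 2)  # 2-period SMA, like open_2_sma
```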
4.2 Perform Feature Engineering
data_df = FeatureEngineer(data_df.copy(), use_technical_indicator=True, tech_indicator_list = tech_indicator_list, use_turbulence=False, user_defined_feature = True).preprocess_data() data_df.head()
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 5. Build EnvironmentConsidering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward so that the agent adjusts ...
train = data_split(data_df, start = config.START_DATE, end = config.START_TRADE_DATE) trade = data_split(data_df, start = config.START_TRADE_DATE, end = config.END_DATE) #train = data_split(data_df, start = '2009-01-01', end = '2019-01-01') #trade = data_split(data_df, start = '2019-01-01', end = '2020-09-30') ## data...
['open', 'high', 'low', 'volume', 'macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix', 'daily_return']
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
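`data_split` above carves the data into train and trade windows by date. A hedged stdlib sketch of the idea, using a list-of-dicts stand-in rather than FinRL's implementation (ISO date strings compare correctly lexicographically):

```python
def date_split(rows, start, end):
    """Keep rows whose ISO date satisfies start <= date < end."""
    return [row for row in rows if start <= row["date"] < end]

rows = [
    {"date": "2018-12-31", "close": 39.0},
    {"date": "2019-01-02", "close": 39.5},
    {"date": "2020-09-30", "close": 115.8},
]
train = date_split(rows, "2009-01-01", "2019-01-01")
trade = date_split(rows, "2019-01-01", "2020-10-01")
```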
5.2 User-defined Environment: a simulation environment class
import numpy as np import pandas as pd from gym.utils import seeding import gym from gym import spaces import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt class SingleStockEnv(gym.Env): """A single stock trading environment for OpenAI gym Attributes ---------- df: DataFrame ...
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
5.3 Initialize Environment* **stock dimension**: the number of unique stock tickers we use* **hmax**: the maximum number of shares to buy or sell* **initial amount**: the amount of money we use to trade in the beginning* **transaction cost percentage**: a per-share rate for every share traded* **tech_indicator_list**: a...
## we store the stockstats technical indicator column names in config.py ## check https://github.com/jealous/stockstats for different names tech_indicator_list # the stock dimension is 1, because we only use the price data of AAPL. len(train.tic.unique()) # account balance + close price + shares + technical indicators ...
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
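The state described above (account balance + close price + shares + technical indicators) implies an observation size. A sketch under the assumption that the state holds one balance entry, plus price and share count per ticker, plus one entry per indicator per ticker (the formula is illustrative, not read from FinRL's source):

```python
def observation_dimension(n_stocks, tech_indicator_list):
    """balance (1) + close price and held shares per stock (2*n) + indicators per stock."""
    return 1 + 2 * n_stocks + len(tech_indicator_list) * n_stocks

tech_indicator_list = ['macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma',
                       'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix']
dim = observation_dimension(1, tech_indicator_list)  # single-stock AAPL case
```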
Part 6: Implement DRL Algorithms* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring and code cleanups.* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Mu...
agent = DRLAgent(env = env_train)
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model Training: 5 models: A2C, DDPG, PPO, TD3, SAC Model 1: A2C
## default hyperparameters in config file config.A2C_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') a2c_params_tuning = {'n_steps':5, 'ent_coef':0.005, 'learning_rate':0.0007, 'verbose':0, 'timesteps':100000} model_a2c = agent.train_A...
==============Model Training=========== begin_total_asset:100000 end_total_asset:176934.7576968735 total_reward:76934.75769687351 total_cost: 5882.835153967686 total_trades: 2484 Sharpe: 0.46981434691347806 ================================= begin_total_asset:100000 end_total_asset:595867.5745766863 total_reward:4958...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 2: DDPG
## default hyperparameters in config file config.DDPG_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') ddpg_params_tuning = { 'batch_size': 128, 'buffer_size':100000, 'verbose':0, 't...
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/policies.py:136: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.Dense instead. WARNING:tensorflow:Fr...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 3: PPO
config.PPO_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') ppo_params_tuning = {'n_steps':128, 'nminibatches': 4, 'ent_coef':0.005, 'learning_rate':0.00025, 'verbose':0, ...
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:191: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_ut...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 4: TD3
## default hyperparameters in config file config.TD3_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') td3_params_tuning = { 'batch_size': 128, 'buffer_size':200000, 'learning_rate': 0.0002, ...
==============Model Training=========== begin_total_asset:100000 end_total_asset:766882.06486716 total_reward:666882.06486716 total_cost: 122.06275547719093 total_trades: 2502 Sharpe: 0.9471484567377753 ================================= begin_total_asset:100000 end_total_asset:1064261.8436314124 total_reward:964261....
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 5: SAC
## default hyperparameters in config file config.SAC_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') sac_params_tuning={ 'batch_size': 64, 'buffer_size': 100000, 'ent_coef':'auto_0.1', 'learning_rate': 0.0001, 'learning_starts':200, 'timesteps': 500...
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/sac/policies.py:63: The name tf.log is deprecated. Please use tf.math.log instead. begin_total_asset:100000 end_total_asset:628197.7965312647 total_reward:528197.7965312647 total_cost: 161.175515315...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Trading* we use the environment class we initialized in 5.3 to create a stock trading environment* Assume that we have $100,000 initial capital at 2019-01-01. * We use the trained TD3 model (`model_td3`) to trade AAPL.
trade.head() # create trading env env_trade, obs_trade = env_setup.create_env_trading(data = trade, env_class = SingleStockEnv) ## make a prediction and get the account value change df_account_value, df_actions = DRLAgent.DRL_prediction(model=model_td3, ...
begin_total_asset:100000 end_total_asset:308768.3018266945 total_reward:208768.30182669451 total_cost: 99.89708306503296 total_trades: 439 Sharpe: 1.9188345294206783 =================================
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 7: Backtesting PerformanceBacktesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various indivi...
print("==============Get Backtest Results===========") perf_stats_all = BackTestStats(account_value=df_account_value) perf_stats_all = pd.DataFrame(perf_stats_all) perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
==============Get Backtest Results=========== annual return: 104.80443553947256 sharpe ratio: 1.9188345294206783 Annual return 0.907331 Cumulative returns 2.087683 Annual volatility 0.374136 Sharpe ratio 1.918835 Calmar ratio 2.887121 Stability 0.909127 Max drawdown ...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
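Among the stats above, the Sharpe ratio is worth unpacking. A stdlib sketch of the common annualization convention, assuming 252 trading days per year and a zero risk-free rate (pyfolio's exact computation may differ in details):

```python
import math
import statistics

def annualized_sharpe(daily_returns, periods=252):
    """sqrt(periods) * mean(r) / std(r); sample std, risk-free rate assumed zero."""
    mean = statistics.mean(daily_returns)
    std = statistics.stdev(daily_returns)
    return math.sqrt(periods) * mean / std

returns = [0.01, -0.005, 0.007, 0.002, -0.001]  # illustrative daily returns
sharpe = annualized_sharpe(returns)
```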
7.2 BackTestPlot
print("==============Compare to AAPL itself buy-and-hold===========") %matplotlib inline BackTestPlot(account_value=df_account_value, baseline_ticker = 'AAPL')
==============Compare to AAPL itself buy-and-hold=========== annual return: 104.80443553947256 sharpe ratio: 1.9188345294206783 [*********************100%***********************] 1 of 1 completed Shape of DataFrame: (440, 7)
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
7.3 Baseline Stats
print("==============Get Baseline Stats===========") baseline_perf_stats=BaselineStats('AAPL') print("==============Get Baseline Stats===========") baseline_perf_stats=BaselineStats('^GSPC')
==============Get Baseline Stats=========== [*********************100%***********************] 1 of 1 completed Shape of DataFrame: (440, 7) Annual return 0.176845 Cumulative returns 0.328857 Annual volatility 0.270644 Sharpe ratio 0.739474 Calmar ratio 0.521283 Stability ...
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
7.4 Compare to Stock Market Index
print("==============Compare to S&P 500===========") %matplotlib inline # S&P 500: ^GSPC # Dow Jones Index: ^DJI # NASDAQ 100: ^NDX BackTestPlot(df_account_value, baseline_ticker = '^GSPC')
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Multi-Layer Perceptron, MNIST---In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.The process will be broken down into the following steps:>1. Load and visualize the data2. Define a neural network3. Train the model4. Evalu...
# import libraries import torch import numpy as np
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.This cell will create DataLoaders for each of our...
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection # Run this script to enable the datasets download # Reference: https://github.com/pytorch/vision/issues/1938 from six.moves import urllib opener = urllib.request.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0'...
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz Processing... Done!
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Visualize a Batch of Training DataThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
import matplotlib.pyplot as plt %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = next(dataiter) images = images.numpy() # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = f...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
View an Image in More Detail
img = np.squeeze(images[1]) fig = plt.figure(figsize = (12,12)) ax = fig.add_subplot(111) ax.imshow(img, cmap='gray') width, height = img.shape thresh = img.max()/2.5 for x in range(width): for y in range(height): val = round(img[x][y],2) if img[x][y] !=0 else 0 ax.annotate(str(val), xy=(y,x), ...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example u...
import torch.nn as nn import torch.nn.functional as F # define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # number of hidden nodes in each layer (512) hidden_1 = 512 hidden_2 = 512 # linear layer (784 -> hidden_1) self.f...
Net( (fc1): Linear(in_features=784, out_features=512, bias=True) (fc2): Linear(in_features=512, out_features=512, bias=True) (fc3): Linear(in_features=512, out_features=10, bias=True) (dropout): Dropout(p=0.2) )
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a soft...
# specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer (stochastic gradient descent) and learning rate = 0.01 optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
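The note above says PyTorch's cross-entropy expects raw output scores (logits) and applies the softmax internally. A pure-Python illustration of that computation (not PyTorch's implementation), using a numerically stable log-sum-exp:

```python
import math

def cross_entropy_from_logits(logits, target):
    """-log(softmax(logits)[target]), via log-sum-exp to avoid overflow."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[target]

# The loss is small when the target class already has the largest logit
loss = cross_entropy_from_logits([2.0, 1.0, 0.1], target=0)
```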
--- Train the NetworkThe steps for training/learning from a batch of data are described in the comments below:1. Clear the gradients of all optimized variables2. Forward pass: compute predicted outputs by passing inputs to the model3. Calculate the loss4. Backward pass: compute gradient of the loss with respect to mode...
# number of epochs to train the model n_epochs = 50 model.train() # prep model for training for epoch in range(n_epochs): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data, target in train_loader: # clear the gradients of...
Epoch: 1 Training Loss: 0.833544 Epoch: 2 Training Loss: 0.321996 Epoch: 3 Training Loss: 0.247905 Epoch: 4 Training Loss: 0.201408 Epoch: 5 Training Loss: 0.169627 Epoch: 6 Training Loss: 0.147488 Epoch: 7 Training Loss: 0.129424 Epoch: 8 Training Loss: 0.116433 Epoch: 9 Training Loss: 0.104333 Epoch: 10 Tra...
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Test the Trained NetworkFinally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as w...
# initialize lists to monitor test loss and accuracy test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # prep model for training for data, target in test_loader: # forward pass: compute predicted outputs by passing inputs to the model output = m...
Test Loss: 0.052876 Test Accuracy of 0: 99% (972/980) Test Accuracy of 1: 99% (1127/1135) Test Accuracy of 2: 98% (1012/1032) Test Accuracy of 3: 98% (992/1010) Test Accuracy of 4: 98% (968/982) Test Accuracy of 5: 98% (875/892) Test Accuracy of 6: 98% (946/958) Test Accuracy of 7: 98% ...
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Visualize Sample Test ResultsThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
# obtain one batch of test images dataiter = iter(test_loader) images, labels = next(dataiter) # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds = torch.max(output, 1) # prep images for display images = images.numpy() # plot the images in the batch, along with pre...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Exercise 1 1.1. Which sigs are valid?```P = (887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c, 61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34)z, r, s = ec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60, ac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac...
# Exercise 1.1 from ecc import S256Point, G, N px = 0x887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c py = 0x61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34 signatures = ( # (z, r, s) (0xec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60, 0xac8d1c87e51d0d4...
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 2 2.1. Verify the DER signature for the hash of "ECDSA is awesome!" for the given SEC pubkey`z = int.from_bytes(double_sha256('ECDSA is awesome!'), 'big')`Public Key in SEC Format: 0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574Signature in DER Format: 304402201f62993ee03fca342fcb45929993fa...
# Exercise 2.1 from ecc import S256Point, Signature from helper import double_sha256 der = bytes.fromhex('304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1') sec = bytes.fromhex('0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e...
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
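The DER blob above decomposes as a 0x30 marker, a total length, and two 0x02-tagged big-endian integers. A minimal stdlib sketch of pulling out (r, s) from the exercise's signature (illustrative only; it skips full ASN.1 validation such as high-bit padding checks):

```python
def parse_der_signature(der):
    """Minimal DER parse: 0x30 <len> 0x02 <rlen> r 0x02 <slen> s -> (r, s) ints."""
    assert der[0] == 0x30 and der[2] == 0x02
    rlen = der[3]
    r = int.from_bytes(der[4:4 + rlen], 'big')  # a leading 0x00 pad is harmless here
    s_start = 4 + rlen
    assert der[s_start] == 0x02
    slen = der[s_start + 1]
    s = int.from_bytes(der[s_start + 2:s_start + 2 + slen], 'big')
    return r, s

der = bytes.fromhex('304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1')
r, s = parse_der_signature(der)
```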
Exercise 3WIF is the serialization of a Private Key. 3.1. Find the WIF Format of the following:* \\(2^{256}-2^{199}\\), mainnet, compressed* \\(2^{256}-2^{201}\\), testnet, uncompressed* 0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, mainnet, uncompressed* 1cca23de92fd1862fb5b76e5f4f50eb082165e5191e...
# Exercise 3.1 from helper import encode_base58_checksum components = ( # (secret, testnet, compressed) (2**256-2**199, False, True), (2**256-2**201, True, False), (0x0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, False, False), (0x1cca23de92fd1862fb5b76e5f4f50eb082165e5191e116c1...
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
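WIF wraps the secret with a network prefix (0x80 mainnet, 0xef testnet), an optional 0x01 suffix for compressed keys, and a base58check encoding (4-byte double-SHA256 checksum). A stdlib-only sketch of those steps, independent of the course's `helper.encode_base58_checksum`, sanity-checked against the well-known WIF for secret 1:

```python
import hashlib

BASE58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def encode_base58_checksum(payload):
    """Append a 4-byte double-SHA256 checksum, then base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    num = int.from_bytes(data, 'big')
    out = ''
    while num > 0:
        num, rem = divmod(num, 58)
        out = BASE58_ALPHABET[rem] + out
    # each leading zero byte is encoded as '1'
    n_leading = len(data) - len(data.lstrip(b'\x00'))
    return '1' * n_leading + out

def wif(secret, testnet=False, compressed=True):
    """Serialize a private key integer in Wallet Import Format."""
    prefix = b'\xef' if testnet else b'\x80'
    suffix = b'\x01' if compressed else b''
    return encode_base58_checksum(prefix + secret.to_bytes(32, 'big') + suffix)

example = wif(1, testnet=False, compressed=False)
```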
Exercise 4 4.1. Make [this test](/edit/session3/tx.py) pass```tx.py:TxTest:test_parse_version```
# Exercise 4.1 reload(tx) run_test(tx.TxTest('test_parse_version'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 5 5.1. Make [this test](/edit/session3/tx.py) pass```tx.py:TxTest:test_parse_inputs```
# Exercise 5.1 reload(tx) run_test(tx.TxTest('test_parse_inputs'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 6 6.1. Make [this test](/edit/session3/tx.py) pass```tx.py:TxTest:test_parse_outputs```
# Exercise 6.1 reload(tx) run_test(tx.TxTest('test_parse_outputs'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 7 7.1. Make [this test](/edit/session3/tx.py) pass```tx.py:TxTest:test_parse_locktime```
# Exercise 7.1 reload(tx) run_test(tx.TxTest('test_parse_locktime'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises