Spread of the robot's pose after 10 steps
path = []
for j in range(100):
    x = np.array([0, 0, 0])
    u = np.array([0.1, 10/180*math.pi])  # each step: move forward 0.1 and turn 10 [deg]
    for i in range(10):
        x = f(x, u)
    path.append(x)

fig = plt.figure(i, figsize=(8, 8))
sp = fig.add_subplot(111, aspect='equal')
sp.set_xlim(-1.0, 1.0)
sp.set_ylim(-0.5, 1.5)

xs = [e[0] for e in path]
ys = [e[1] for e in path]
vxs = [math.cos(e[2]) for e in path]
vys = [math.sin(e[2]) for e in path]
plt.quiver(xs, ys, vxs, vys, color="red", label="actual robot motion")
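The cell above calls a state-transition function `f(x, u)` that is defined in an earlier cell of the notebook. A minimal sketch of such a function, assuming a unicycle motion model with multiplicative Gaussian noise on the commanded motion (the noise level `stddev` is an assumption for illustration, not the notebook's actual value):

```python
import math
import numpy as np

def f(x, u, stddev=0.1):
    # Hypothetical noisy state transition -- the notebook defines its own f.
    # x = [x, y, theta] is the pose; u = [distance, dtheta] is the command.
    dist = u[0] * (1.0 + np.random.normal(0.0, stddev))
    dtheta = u[1] * (1.0 + np.random.normal(0.0, stddev))
    theta = x[2] + dtheta
    return np.array([x[0] + dist * math.cos(theta),
                     x[1] + dist * math.sin(theta),
                     theta])
```

Running the loop above with a noisy `f` like this is what produces the spread of final poses visualized by `quiver`.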
_____no_output_____
MIT
state_equations/with_noise.ipynb
ryuichiueda/probrobo_practice
Create a Spark session for the application
spark = SparkSession.\
    builder.\
    master("local[*]").\
    appName("Demo-app").\
    config("spark.serializer", KryoSerializer.getName).\
    config("spark.kryo.registrator", SedonaKryoRegistrator.getName).\
    config("spark.jars.packages", "org.apache.sedona:sedona-python-adapter-3.0_2.12:1.1.0-incubating,org.datasyslab:geotools-wrapper:1.1.0-25.2").\
    getOrCreate()

SedonaRegistrator.registerAll(spark)
sc = spark.sparkContext
21/10/08 19:55:41 WARN UDTRegistration: Cannot register UDT for org.locationtech.jts.geom.Geometry, which is already registered.
21/10/08 19:55:41 WARN UDTRegistration: Cannot register UDT for org.locationtech.jts.index.SpatialIndex, which is already registered.
21/10/08 19:55:41 WARN SimpleFunctionRegistry: The function st_pointfromtext replaced a previously registered function.
[... the same SimpleFunctionRegistry warning repeats for every remaining Sedona SQL function: the st_* constructors and predicates, the rs_* raster functions, and the st_*_aggr aggregates ...]
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Geotiff Loader
1. The loader takes as input a path to a directory containing GeoTIFF files, or a path to a particular GeoTIFF file.
2. The loader reads each GeoTIFF image into a struct named image, whose fields (shown in the schema below) can be extracted using Spark SQL.
# Path to directory of geotiff images
DATA_DIR = "./data/raster/"

df = spark.read.format("geotiff").option("dropInvalid", True).load(DATA_DIR)
df.printSchema()

df = df.selectExpr("image.origin as origin", "ST_GeomFromWkt(image.wkt) as Geom",
                   "image.height as height", "image.width as width",
                   "image.data as data", "image.nBands as bands")
df.show(5)
+--------------------+--------------------+------+-----+--------------------+-----+
|              origin|                Geom|height|width|                data|bands|
+--------------------+--------------------+------+-----+--------------------+-----+
|file:///Users/jia...|POLYGON ((-58.702...|    32|   32|[1081.0, 1068.0, ...|    4|
|file:///Users/jia...|POLYGON ((-58.286...|    32|   32|[1151.0, 1141.0, ...|    4|
+--------------------+--------------------+------+-----+--------------------+-----+
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Extract a particular band from the GeoTIFF dataframe using RS_GetBand()
''' RS_GetBand() fetches a particular band from the given data array, which is the
concatenation of all the bands '''
df = df.selectExpr("Geom",
                   "RS_GetBand(data, 1, bands) as Band1",
                   "RS_GetBand(data, 2, bands) as Band2",
                   "RS_GetBand(data, 3, bands) as Band3",
                   "RS_GetBand(data, 4, bands) as Band4")
df.createOrReplaceTempView("allbands")
df.show(5)
+--------------------+--------------------+--------------------+--------------------+--------------------+
|                Geom|               Band1|               Band2|               Band3|               Band4|
+--------------------+--------------------+--------------------+--------------------+--------------------+
|POLYGON ((-58.702...|[1081.0, 1068.0, ...|[865.0, 1084.0, 1...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|
|POLYGON ((-58.286...|[1151.0, 1141.0, ...|[1197.0, 1163.0, ...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|
+--------------------+--------------------+--------------------+--------------------+--------------------+
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Map Algebra operations on band values
''' RS_NormalizedDifference() can be used to calculate NDVI for a GeoTIFF image,
since it uses the same computational formula as NDVI '''
normalizedDifference = df.selectExpr("RS_NormalizedDifference(Band1, Band2) as normDiff")
normalizedDifference.show(5)

''' RS_Mean() can be used to calculate the mean of the pixel values in a particular band '''
meanDF = df.selectExpr("RS_Mean(Band1) as mean")
meanDF.show(5)

''' RS_Mode() calculates the mode of an array of pixels; it returns an array of
doubles of size 1 when the mode is unique '''
modeDF = df.selectExpr("RS_Mode(Band1) as mode")
modeDF.show(5)

''' RS_GreaterThan() masks with 1 all the values greater than a given threshold '''
greaterthanDF = spark.sql("Select RS_GreaterThan(Band1, 1000.0) as greaterthan from allbands")
greaterthanDF.show()

''' RS_GreaterThanEqual() masks with 1 all the values greater than or equal to a given threshold '''
greaterthanEqualDF = spark.sql("Select RS_GreaterThanEqual(Band1, 360.0) as greaterthanEqual from allbands")
greaterthanEqualDF.show()

''' RS_LessThan() masks with 1 all the values less than a given threshold '''
lessthanDF = spark.sql("Select RS_LessThan(Band1, 1000.0) as lessthan from allbands")
lessthanDF.show()

''' RS_LessThanEqual() masks with 1 all the values less than or equal to a given threshold '''
lessthanEqualDF = spark.sql("Select RS_LessThanEqual(Band1, 2890.0) as lessthanequal from allbands")
lessthanEqualDF.show()

''' RS_AddBands() adds two bands together '''
sumDF = df.selectExpr("RS_AddBands(Band1, Band2) as sumOfBand")
sumDF.show(5)

''' RS_SubtractBands() subtracts one band from another '''
subtractDF = df.selectExpr("RS_SubtractBands(Band1, Band2) as diffOfBand")
subtractDF.show(5)

''' RS_MultiplyBands() multiplies two bands together '''
multiplyDF = df.selectExpr("RS_MultiplyBands(Band1, Band2) as productOfBand")
multiplyDF.show(5)

''' RS_DivideBands() divides one band by another '''
divideDF = df.selectExpr("RS_DivideBands(Band1, Band2) as divisionOfBand")
divideDF.show(5)

''' RS_MultiplyFactor() multiplies a band by a scalar factor '''
mulfacDF = df.selectExpr("RS_MultiplyFactor(Band2, 2) as target")
mulfacDF.show(5)

''' RS_BitwiseAND() returns the bitwise AND of the values of two bands '''
bitwiseAND = df.selectExpr("RS_BitwiseAND(Band1, Band2) as AND")
bitwiseAND.show(5)

''' RS_BitwiseOR() returns the bitwise OR of the values of two bands '''
bitwiseOR = df.selectExpr("RS_BitwiseOR(Band1, Band2) as OR")
bitwiseOR.show(5)

''' RS_Count() calculates the total number of occurrences of a target value '''
countDF = df.selectExpr("RS_Count(RS_GreaterThan(Band1, 1000.0), 1.0) as count")
countDF.show(5)

''' RS_Modulo() calculates the band values modulo a given number '''
moduloDF = df.selectExpr("RS_Modulo(Band1, 21.0) as modulo")
moduloDF.show(5)

''' RS_SquareRoot() calculates the square root of all the band values, to two decimal places '''
rootDF = df.selectExpr("RS_SquareRoot(Band1) as root")
rootDF.show(5)

''' RS_LogicalDifference() returns the value from Band1 where it differs from Band2,
and 0 where the two are equal '''
logDiff = df.selectExpr("RS_LogicalDifference(Band1, Band2) as logDifference")
logDiff.show(5)

''' RS_LogicalOver() iterates over two bands and returns the value of the first band
where it is non-zero, otherwise the value of the second band '''
logOver = df.selectExpr("RS_LogicalOver(Band3, Band2) as logicalOver")
logOver.show(5)
+--------------------+
|         logicalOver|
+--------------------+
|[865.0, 1084.0, 1...|
|[1197.0, 1163.0, ...|
+--------------------+
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Visualising GeoTIFF Images
1. Normalize the bands to the range [0, 255] if any values are greater than 255.
2. Process the image using RS_Base64(), which converts it into a base64 string.
3. Embed the results of RS_Base64() in RS_HTML() for embedding into an IPython notebook.
4. Process the results of RS_HTML() as below:
''' Plotting images from a GeoTIFF dataframe '''
df = spark.read.format("geotiff").option("dropInvalid", True).load(DATA_DIR)
df = df.selectExpr("image.origin as origin", "ST_GeomFromWkt(image.wkt) as Geom",
                   "image.height as height", "image.width as width",
                   "image.data as data", "image.nBands as bands")
df = df.selectExpr("RS_GetBand(data, 1, bands) as targetband", "height", "width", "bands", "Geom")

df_base64 = df.selectExpr(
    "Geom",
    "RS_Base64(height, width, RS_Normalize(targetband), RS_Array(height*width, 0.0), RS_Array(height*width, 0.0)) as red",
    "RS_Base64(height, width, RS_Array(height*width, 0.0), RS_Normalize(targetband), RS_Array(height*width, 0.0)) as green",
    "RS_Base64(height, width, RS_Array(height*width, 0.0), RS_Array(height*width, 0.0), RS_Normalize(targetband)) as blue",
    "RS_Base64(height, width, RS_Normalize(targetband), RS_Normalize(targetband), RS_Normalize(targetband)) as RGB")

df_HTML = df_base64.selectExpr("Geom", "RS_HTML(red) as RedBand", "RS_HTML(blue) as BlueBand",
                               "RS_HTML(green) as GreenBand", "RS_HTML(RGB) as CombinedBand")
df_HTML.show(5)
display(HTML(df_HTML.limit(2).toPandas().to_html(escape=False)))
_____no_output_____
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Users can also create UDFs manually to manipulate GeoTIFF dataframes
''' Sample UDF that counts the values in a band which are greater than 1000.0 '''
def SumOfValues(band):
    total = 0.0
    for num in band:
        if num > 1000.0:
            total += 1
    return total

calculateSum = udf(SumOfValues, DoubleType())
spark.udf.register("RS_Sum", calculateSum)

sumDF = df.selectExpr("RS_Sum(targetband) as sum")
sumDF.show()

''' Sample UDF to visualize a particular region of a GeoTIFF image '''
def generatemask(band, width, height):
    for (i, val) in enumerate(band):
        if (i % width >= 12 and i % width < 26) and (i % height >= 12 and i % height < 26):
            band[i] = 255.0
        else:
            band[i] = 0.0
    return band

maskValues = udf(generatemask, ArrayType(DoubleType()))
spark.udf.register("RS_MaskValues", maskValues)

df_base64 = df.selectExpr(
    "Geom",
    "RS_Base64(height, width, RS_Normalize(targetband), RS_Array(height*width, 0.0), RS_Array(height*width, 0.0), RS_MaskValues(targetband, width, height)) as region")
df_HTML = df_base64.selectExpr("Geom", "RS_HTML(region) as selectedregion")
display(HTML(df_HTML.limit(2).toPandas().to_html(escape=False)))
21/10/08 19:55:44 WARN SimpleFunctionRegistry: The function rs_maskvalues replaced a previously registered function.
Apache-2.0
binder/ApacheSedonaRaster.ipynb
aggunr/incubator-sedona
Stepper Motors
* [How to use a stepper motor with the Raspberry Pi Pico](https://www.youngwonks.com/blog/How-to-use-a-stepper-motor-with-the-Raspberry-Pi-Pico)
* [Control 28BYJ-48 Stepper Motor with ULN2003 Driver & Arduino](https://lastminuteengineers.com/28byj48-stepper-motor-arduino-tutorial/) Description of the 28BYJ-48 stepper motor, ULN2003 driver, and Arduino code.
* [28BYJ-48 stepper motor and ULN2003 Arduino (Quick tutorial for beginners)](https://www.youtube.com/watch?v=avrdDZD7qEQ) Video description.
* [Stepper Motor - Wikipedia](https://en.wikipedia.org/wiki/Stepper_motor)

Stepper Motors
![](https://cdn-learn.adafruit.com/assets/assets/000/016/234/original/components_IMG_4810_crop.jpg?1398735192)
[Adafruit](https://learn.adafruit.com/all-about-stepper-motors/types-of-steppers)
![](https://cdn-learn.adafruit.com/assets/assets/000/016/342/original/components_IMG_4837.jpg?1399130432)
[Adafruit](https://learn.adafruit.com/all-about-stepper-motors/types-of-steppers)
![](https://cdn-learn.adafruit.com/assets/assets/000/016/343/large1024/components_winding_types_2.png?1399130808)

Unipolar Stepper Motors
The ubiquitous 28BYJ-48 stepper motor with reduction gears is manufactured by the millions and widely available at very low cost. [Elegoo, for example, sells kits of 5 motors with ULN2003 5V driver boards](https://www.elegoo.com/products/elegoo-uln2003-5v-stepper-motor-uln2003-driver-board) for less than $15/kit. The [ULN2003](https://en.wikipedia.org/wiki/ULN2003A) is a package of seven NPN Darlington transistors capable of 500 mA output at 50 volts, with flyback diodes for driving inductive loads.
![](https://cdn-learn.adafruit.com/assets/assets/000/016/349/medium640/components_unipolar_5.png?1399131989)
![](https://m.media-amazon.com/images/S/aplus-seller-content-images-us-east-1/ATVPDKIKX0DER/A2WWHQ25ENKVJ1/B01CP18J4A/cZgPvVZSJSP._UX970_TTW__.jpg)
The 28BYJ-48 has 32 teeth, so each full step corresponds to 360/32 = 11.25 degrees of rotation of the motor shaft. A set of four reduction gears yields a 63.68395:1 gear reduction, or 2037.8864 steps per rotation of the output shaft. The maximum speed is 500 steps per second. If half steps are used, there are 4075.7728 half steps per revolution at a maximum speed of 1000 half steps per second. (See https://youtu.be/15K9N1yVnhc for a teardown of the 28BYJ-48 motor.)

Driving the 28BYJ-48 Stepper Motor
(Also see https://www.youtube.com/watch?v=UJ4JjeCLuaI&ab_channel=TinkerTechTrove)
The following code assigns four GPIO pins to the four coils. For this code the pins don't need to be contiguous or in order, but keeping that discipline may help later when we attempt to implement a driver using the PIO state machines of the Raspberry Pi Pico. Note that the Stepper class maintains an internal parameter corresponding to the current rotor position, which is used to index into the step sequence using modular arithmetic. See []() for ideas on a Stepper class.
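The gear arithmetic above can be reproduced in a few lines (the ratio and step counts are the figures quoted in the text):

```python
# 28BYJ-48 step arithmetic, reproducing the figures quoted above
steps_per_motor_rev = 32          # full steps per revolution of the motor shaft
gear_ratio = 63.68395             # reduction ratio of the internal gear train
full_steps_per_rev = steps_per_motor_rev * gear_ratio   # ~2037.8864
half_steps_per_rev = 2 * full_steps_per_rev             # ~4075.7728

# degrees of output-shaft rotation per half step, as used by rotate() below
deg_per_half_step = 360 / half_steps_per_rev
```

This is why the Stepper class below converts degrees to steps with the constant 4075.7728.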
%serialconnect

from machine import Pin
import time

class Stepper(object):

    # half-step sequence for the four coils
    step_seq = [[1, 0, 0, 0],
                [1, 1, 0, 0],
                [0, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 1]]

    def __init__(self, gpio_pins):
        self.pins = [Pin(pin, Pin.OUT) for pin in gpio_pins]
        self.motor_position = 0

    def rotate(self, degrees=360):
        n_steps = abs(int(4075.7728*degrees/360))
        d = 1 if degrees > 0 else -1
        for _ in range(n_steps):
            self.motor_position += d
            phase = self.motor_position % len(self.step_seq)
            for i, value in enumerate(self.step_seq[phase]):
                self.pins[i].value(value)
            time.sleep(0.001)

stepper = Stepper([2, 3, 4, 5])
stepper.rotate(360)
stepper.rotate(-360)
print(stepper.motor_position)
Found serial ports: /dev/cu.usbmodem14401, /dev/cu.Bluetooth-Incoming-Port Connecting to --port=/dev/cu.usbmodem14401 --baud=115200  Ready. .0
MIT
Raspberry_Pi_Pico/notebooks/C.08-Stepper-Motors.ipynb
jckantor/cbe61622
Discussion:
* What class methods should we build to support the syringe pump project?
* Should we simplify and stick with the half-step sequence?
* How will we integrate motor operation with UI buttons and other controls?

Programmable Input/Output (PIO)
* MicroPython (https://datasheets.raspberrypi.org/pico/raspberry-pi-pico-python-sdk.pdf)
* TinkerTechTrove [[github]](https://github.com/tinkertechtrove/pico-pi-playing) [[youtube]](https://www.youtube.com/channel/UCnoBIijHK7NnCBVpUojYFTA/videos)
* [Raspberry Pi Pico PIO - Ep. 1 - Overview with Pull, Out, and Parallel Port](https://youtu.be/YafifJLNr6I)
%serialconnect

from machine import Pin
from rp2 import PIO, StateMachine, asm_pio
from time import sleep
import sys

@asm_pio(set_init=(PIO.OUT_LOW,) * 4)
def prog():
    wrap_target()
    set(pins, 8) [31]    # 8
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    set(pins, 4) [31]    # 4
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    set(pins, 2) [31]    # 2
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    set(pins, 1) [31]    # 1
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    wrap()

sm = StateMachine(0, prog, freq=100000, set_base=Pin(14))
sm.active(1)
sleep(10)
sm.active(0)
sm.exec("set(pins,0)")

%serialconnect

from machine import Pin
from rp2 import PIO, StateMachine, asm_pio
from time import sleep
import sys

@asm_pio(set_init=(PIO.OUT_LOW,) * 4, out_init=(PIO.OUT_HIGH,) * 4, out_shiftdir=PIO.SHIFT_LEFT)
def prog():
    pull()
    mov(y, osr)            # step pattern
    pull()
    mov(x, osr)            # num steps
    jmp(not_x, "end")
    label("loop")
    jmp(not_osre, "step")  # reload pattern if exhausted
    mov(osr, y)
    label("step")
    out(pins, 4) [31]
    nop() [31]
    nop() [31]
    nop() [31]
    jmp(x_dec, "loop")
    label("end")
    set(pins, 8) [31]      # 8

sm = StateMachine(0, prog, freq=10000, set_base=Pin(14), out_base=Pin(14))
sm.active(1)
sm.put(2216789025)   # 1000 0100 0010 0001 1000010000100001
sm.put(1000)
sleep(10)
sm.active(0)
sm.exec("set(pins,0)")

%serialconnect

from machine import Pin
from rp2 import PIO, StateMachine, asm_pio
from time import sleep
import sys

@asm_pio(set_init=(PIO.OUT_LOW,) * 4, out_init=(PIO.OUT_LOW,) * 4,
         out_shiftdir=PIO.SHIFT_RIGHT, in_shiftdir=PIO.SHIFT_LEFT)
def prog():
    pull()
    mov(x, osr)            # num steps
    pull()
    mov(y, osr)            # step pattern
    jmp(not_x, "end")
    label("loop")
    jmp(not_osre, "step")  # reload pattern if exhausted
    mov(osr, y)
    label("step")
    out(pins, 4) [31]
    jmp(x_dec, "loop")
    label("end")
    irq(rel(0))

sm = StateMachine(0, prog, freq=10000, set_base=Pin(14), out_base=Pin(14))

data = [(1, 2, 4, 8), (2, 4, 8, 1), (4, 8, 1, 2), (8, 1, 2, 4)]
steps = 0

def turn(sm):
    global steps
    global data
    idx = steps % 4
    a = data[idx][0] | (data[idx][1] << 4) | (data[idx][2] << 8) | (data[idx][3] << 12)
    a = a << 16 | a
    # print("{0:b}".format(a))
    sleep(1)
    sm.put(500)
    sm.put(a)
    steps += 500

sm.irq(turn)
sm.active(1)
turn(sm)
sleep(50)
print("done")
sm.active(0)
sm.exec("set(pins,0)")

%serialconnect

import time
import rp2

@rp2.asm_pio()
def irq_test():
    wrap_target()
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    irq(0)
    nop() [31]
    nop() [31]
    nop() [31]
    nop() [31]
    irq(1)
    wrap()

rp2.PIO(0).irq(lambda pio: print(pio.irq().flags()))
# rp2.PIO(1).irq(lambda pio: print("1"))

sm = rp2.StateMachine(0, irq_test, freq=2000)
sm1 = rp2.StateMachine(1, irq_test, freq=2000)
sm.active(1)
# sm1.active(1)
time.sleep(1)
sm.active(0)
sm1.active(0)
Found serial ports: /dev/cu.usbmodem14201, /dev/cu.Bluetooth-Incoming-Port Connecting to --port=/dev/cu.usbmodem14201 --baud=115200  Ready. 256 512 256 512 256 512 256 512 256 512 256 512 256 512 256
MIT
Raspberry_Pi_Pico/notebooks/C.08-Stepper-Motors.ipynb
jckantor/cbe61622
Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks whether the geehydro package has been installed; if not, it installs geehydro, which automatically installs its dependencies, including earthengine-api and folium.
import subprocess

try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Import libraries
import ee
import folium
import geehydro
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be set using the `Map.setOptions()` function. The optional basemaps are `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, and `ESRI`.
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Add Earth Engine Python script
image = ee.Image('srtm90_v4')
smoothed = image.reduceNeighborhood(**{
    'reducer': ee.Reducer.mean(),
    'kernel': ee.Kernel.square(3),
})

# vis_params = {'min': 0, 'max': 3000}
# Map.addLayer(image, vis_params, 'SRTM original')
# Map.addLayer(smoothed, vis_params, 'SRTM smoothed')
Map.setCenter(-112.40, 42.53, 12)
Map.addLayer(ee.Terrain.hillshade(image), {}, 'Original hillshade')
Map.addLayer(ee.Terrain.hillshade(smoothed), {}, 'Smoothed hillshade')
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Display Earth Engine data layers
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
_____no_output_____
MIT
Image/image_smoothing.ipynb
pberezina/earthengine-py-notebooks
Run Sampling Experiment (Land Surface Temperature)
%%time
start_time = '2010-06-06T00:00:00.000000000'   # '2010-06'
end_time = '2010-06-06T00:00:00.000000000'     # '2010-08'
num_training = 2000
variables = ['land_surface_temperature']
window_sizes = [3]   # [3, 5, 7, 9, 11, 13, 15]
save_names = 'test'

sampling_exp = SamplingExp(
    variables=variables,
    window_sizes=window_sizes,
    save_names=save_names,
    start_time=start_time,
    end_time=end_time,
    num_training=num_training)

sampling_exp.run_experiment(True);
Extracting minicubes...
3
No data fitted...extracting datacube
/media/disk/databases/BACI-CABLAB/low_res_data/land_surface_temperature
Variable: land_surface_temperature
Window Size: 3
Initialize data class
Load minicubes
Extract datacube for plots...
/media/disk/databases/BACI-CABLAB/low_res_data/land_surface_temperature
Time Stamp: 2010-06-06T00:00:00.000000000
MIT
notebooks/regression/esdc_application/old/app_sample_3_experiment.ipynb
IPL-UV/sakame
Run Sampling Experiment (Gross Primary Productivity)

Load Results
## Run Sampling Experiment (Land Surface Temperature)
_____no_output_____
MIT
notebooks/regression/esdc_application/old/app_sample_3_experiment.ipynb
IPL-UV/sakame
Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a PDF document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional; if you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.
> **Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode.
import pickle
import numpy as np
import pandas as pd
import random
from sklearn.utils import shuffle
import tensorflow as tf
from tensorflow.contrib.layers import flatten
import cv2
import glob
import os
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
---

## Step 0: Load The Data
# TODO: Fill this in based on where you saved the training and testing data

training_file = "../data/train.p"
validation_file = "../data/valid.p"
testing_file = "../data/test.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
_____no_output_____
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
---

## Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.

### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
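As a quick sanity check of the dictionary layout described above, the sketch below round-trips a toy stand-in through pickle and inspects the keys and array shapes. The toy arrays here are assumptions for illustration; the real GTSRB files hold tens of thousands of 32x32x3 uint8 images.

```python
import io
import pickle

import numpy as np

# Toy stand-in for one pickled data file, using the same key layout described above
toy = {
    'features': np.zeros((4, 32, 32, 3), dtype=np.uint8),  # (num examples, width, height, channels)
    'labels': np.array([0, 1, 1, 2]),                      # class id per example
}

# Round-trip through an in-memory buffer, mimicking pickle.load(f) on a real file
buf = io.BytesIO()
pickle.dump(toy, buf)
buf.seek(0)
data = pickle.load(buf)

print(sorted(data.keys()))     # ['features', 'labels']
print(data['features'].shape)  # (4, 32, 32, 3)
```

The same `.keys()` / `.shape` inspection works on the real `train`, `valid`, and `test` dictionaries loaded above.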
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels are there in the dataset?
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 34799 Number of testing examples = 12630 Image data shape = (32, 32, 3) Number of classes = 43
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
### Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.

**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
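One quick way to look at the class distribution mentioned above is `np.unique` with `return_counts=True`. A minimal sketch on a toy label array (the real call would use `y_train`):

```python
import numpy as np

# Toy label array standing in for y_train
y = np.array([0, 1, 1, 2, 2, 2])

# One row per class: which classes occur, and how many examples of each
classes, counts = np.unique(y, return_counts=True)
print(classes)  # [0 1 2]
print(counts)   # [1 2 3]
```

The resulting `classes`/`counts` pair can then be drawn with `plt.bar(classes, counts)` to compare the train, validation, and test distributions.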
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.

# random.randint is inclusive on both ends, so use len(X_train) - 1 to avoid an IndexError
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()

plt.figure(figsize=(1, 1))
plt.imshow(image)
print(y_train[index])
10
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
---

## Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).

The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

- Neural network architecture (is the network over or underfitting?)
- Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
- Number of examples per label (some have more than others).
- Generate fake data.

Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

### Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
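A minimal sketch of the `(pixel - 128)/128` rule suggested above, on a toy uint8 pixel row (an assumption for illustration; the cell below instead converts to grayscale and scales by 255):

```python
import numpy as np

# Toy uint8 pixel values standing in for an image batch
pixels = np.array([[0, 128, 255]], dtype=np.uint8)

# Quick approximate normalization: maps [0, 255] roughly onto [-1, 1]
# Cast to float first, so the uint8 subtraction does not wrap around
norm = (pixels.astype(np.float32) - 128) / 128

print(norm)  # [[-1.  0.  0.9921875]]
```

Note the explicit cast: computing `pixels - 128` directly on a uint8 array would overflow for values below 128.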
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.

# Convert to grayscale
X_train_gry = np.sum(X_train/3, axis=3, keepdims=True)
X_valid_gry = np.sum(X_valid/3, axis=3, keepdims=True)
X_test_gry = np.sum(X_test/3, axis=3, keepdims=True)

# Normalize images
X_train_norm = X_train_gry/255
X_valid_norm = X_valid_gry/255
X_test_norm = X_test_gry/255

# Shuffle the dataset (shuffle the normalized arrays, not the raw grayscale ones,
# otherwise the normalization above is silently discarded)
X_train_norm, y_train = shuffle(X_train_norm, y_train)
X_valid_norm, y_valid = shuffle(X_valid_norm, y_valid)
_____no_output_____
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
Model Architecture
### Define your architecture here.
### Feel free to use as many code cells as needed.

def LeNet(x):
    # Hyperparameters
    mu = 0
    sigma = 0.1
    keep_prob = 0.5  # only used if the dropout lines below are uncommented

    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    conv1 = tf.nn.relu(conv1)
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    conv2 = tf.nn.relu(conv2)
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)

    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_b
    fc1 = tf.nn.relu(fc1)
    #fc1 = tf.nn.dropout(fc1, keep_prob)

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_b
    fc2 = tf.nn.relu(fc2)
    #fc2 = tf.nn.dropout(fc2, keep_prob)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43 (one logit per sign class).
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean=mu, stddev=sigma))
    fc3_b = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits
_____no_output_____
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
### Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)

# Training pipeline
rate = 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=rate)
training_operation = optimizer.minimize(loss_operation)

# Model evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

EPOCHS = 30
BATCH_SIZE = 128

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train_norm)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train_norm, y_train = shuffle(X_train_norm, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train_norm[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})

        validation_accuracy = evaluate(X_valid_norm, y_valid)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './lenet')
    print("Model saved")

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # Evaluate on the normalized test set to match the training preprocessing
    test_accuracy = evaluate(X_test_norm, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
INFO:tensorflow:Restoring parameters from ./lenet Test Accuracy = 0.921
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
---

## Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.

### Load and Output the Images
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

#fig, axs = plt.subplots(2,4, figsize=(4, 2))
#fig.subplots_adjust(hspace = .2, wspace=.001)
#axs = axs.ravel()

fig, axes = plt.subplots(1, 5, figsize=(18, 4))
new_images = []

for i, img in enumerate(sorted(glob.glob('../data/Images/traffic*.png'))):
    image = mpimg.imread(img)
    axes[i].imshow(image)
    axes[i].axis('off')
    image = cv2.resize(image, (32, 32))
    image = np.sum(image/3, axis=2, keepdims=True)
    # Note: this per-image standardization differs slightly from the /255 scaling used on the training set
    image = (image - image.mean())/np.std(image)
    new_images.append(image)
    print(image.shape)

X_new_images = np.asarray(new_images)
y_new_images = np.array([12, 18, 38, 18, 38])
print(X_new_images.shape)
(32, 32, 1) (32, 32, 1) (32, 32, 1) (32, 32, 1) (32, 32, 1) (5, 32, 32, 1)
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
Predict the Sign Type for Each Image
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.

#my_labels = [35, 12, 11, 24, 16, 14, 1, 4]

keep_prob = tf.placeholder(tf.float32)

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    sess = tf.get_default_session()
    new_images_accuracy = sess.run(accuracy_operation,
                                   feed_dict={x: X_new_images, y: y_new_images, keep_prob: 1.0})
    print("Test Accuracy = {:.3f}".format(new_images_accuracy))
INFO:tensorflow:Restoring parameters from ./lenet Test Accuracy = 0.600
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
Analyze Performance
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    sess = tf.get_default_session()
    prediction = sess.run(logits, feed_dict={x: X_new_images, y: y_new_images, keep_prob: 1.0})
    print(np.argmax(prediction, 1))
INFO:tensorflow:Restoring parameters from ./lenet [12 18 12 18 3]
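The 0.600 accuracy reported earlier can be reproduced directly from the predictions printed above and the ground-truth labels, without a TensorFlow session:

```python
import numpy as np

# Ground-truth labels for the five web images, and the predictions printed above
y_true = np.array([12, 18, 38, 18, 38])
y_pred = np.array([12, 18, 12, 18, 3])

# Element-wise comparison gives a boolean array; its mean is the accuracy
acc = np.mean(y_pred == y_true)
print("Accuracy = {:.3f}".format(acc))  # Accuracy = 0.600
```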
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
### Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:

```
# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,  0.12789202],
              [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,  0.15899337],
              [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,  0.23892179],
              [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,  0.16505091],
              [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,  0.09155967]])
```

Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:

```
TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
                     [ 0.28086119,  0.27569815,  0.18063401],
                     [ 0.26076848,  0.23892179,  0.23664738],
                     [ 0.29198961,  0.26234032,  0.16505091],
                     [ 0.34396535,  0.24206137,  0.16240774]]),
       indices=array([[3, 0, 5],
                      [0, 1, 4],
                      [0, 5, 1],
                      [1, 3, 5],
                      [1, 4, 3]], dtype=int32))
```

Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
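The same top-k selection can be reproduced in plain NumPy, which is a handy cross-check on the `tf.nn.top_k` output above (a sketch using `np.argsort` and `np.take_along_axis`):

```python
import numpy as np

# The (5, 6) example array from the explanation above
a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337],
              [0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179],
              [0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091],
              [0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])

k = 3
# Indices of the k largest entries per row, largest first
indices = np.argsort(a, axis=1)[:, ::-1][:, :k]
# Gather the matching probabilities row by row
values = np.take_along_axis(a, indices, axis=1)

print(indices[0])  # [3 0 5], matching the first row of the TopKV2 indices above
```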
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.

with tf.Session() as sess:
    print(sess.run(tf.nn.top_k(tf.nn.softmax(prediction), k=5)))
TopKV2(values=array([[ 0.13296957, 0.09634399, 0.08997886, 0.06783284, 0.06598242], [ 0.13137311, 0.07682533, 0.07135586, 0.06071901, 0.05829126], [ 0.16518918, 0.12350544, 0.0783048 , 0.07044089, 0.06533803], [ 0.18929006, 0.11967038, 0.07411727, 0.06745335, 0.04894404], [ 0.16643417, 0.11010567, 0.09781482, 0.0726616 , 0.05540863]], dtype=float32), indices=array([[12, 9, 5, 10, 38], [18, 8, 5, 7, 4], [12, 38, 5, 13, 32], [18, 4, 26, 8, 7], [ 3, 5, 38, 32, 8]], dtype=int32))
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
### Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.

> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.

---

## Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want.
The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Your output should look something like this (above)
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15, 15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap+1)  # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap))  # displays the feature map number
        # Use boolean `and` with explicit comparisons; a bare `!=` combined with `&` parses
        # incorrectly in Python because `&` binds tighter than `!=`
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", cmap="gray")
_____no_output_____
MIT
Traffic_Sign_Classifier.ipynb
Ansheel9/Traffic-Sign-Classifier
## Advanced Lane Finding Project

The goals / steps of this project are the following:

* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

---

First, I'll compute the camera calibration using chessboard images
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane.

# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')

# Step through the list and search for chessboard corners
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)

    # If found, add object points, image points
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9, 6), corners, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()
_____no_output_____
MIT
examples/example.ipynb
VamshiK-Kasula/CarND-Advanced-Lane-Lines
### Broadcasting the Value

**NumPy arrays differ from normal Python lists due to their ability to broadcast.**
arr[0:5] = 100  # Broadcasts the value 100 to the first 5 elements.
arr

# Reset the array
arr = np.arange(0, 11)
arr

slice_of_arr = arr[0:6]
slice_of_arr

# To grab everything in the slice
slice_of_arr[:]

# Broadcasting after grabbing everything in the array
slice_of_arr[:] = 99
slice_of_arr
arr

# Notice above how not only slice_of_arr got changed due to the broadcast but the array arr was also changed.
# Slice and the original array both got changed in terms of values.
# Data is not copied; the slice is just a view pointing into the original array.
# The reason behind such behaviour is to prevent memory issues while dealing with large arrays.
# It basically means numpy prefers not making copies of arrays and would rather point slices to their original parent arrays.

# Use the copy() method, i.e. array_name.copy()
arr_copy = arr.copy()
arr_copy

arr_copy[0:5] = 23
arr
arr_copy

# Since we have copied, arr and arr_copy are independent even after broadcasting.
# The original array remains unaffected despite changes on the copied array.
# Main idea: if you grab a slice of the array and assign it to a variable without calling copy(),
# then you are just holding a view of the original array, and changes on the slice
# reflect on the original/parent array.
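The view-vs-copy behaviour described in the comments above can be verified directly with `np.shares_memory` — a small sketch:

```python
import numpy as np

arr = np.arange(0, 11)

slice_view = arr[0:6]        # a view: shares memory with arr
slice_copy = arr[0:6].copy() # an independent copy

print(np.shares_memory(arr, slice_view))  # True
print(np.shares_memory(arr, slice_copy))  # False

# Broadcasting into the view reaches the parent array; the copy is untouched
slice_view[:] = 99
print(arr[:6])      # [99 99 99 99 99 99]
print(slice_copy)   # [0 1 2 3 4 5]
```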
_____no_output_____
BSD-3-Clause
05. Python for Data Analysis - NumPy/5.19 indexing_selection_np_array.ipynb
shrey-c/DSC-ML-numpy-pandas
2D Array/Matrix
arr_2d = np.array([[5,10,15],[20,25,30],[35,40,45]])
arr_2d

# REMEMBER: if having confusion regarding dimensions of the matrix, just call shape.
arr_2d.shape  # 3 rows, 3 columns

# Two general formats for grabbing elements from a 2D array or matrix:
# (i) Double Bracket Format  (ii) Single Bracket Format with comma (Recommended)

# (i) Double Bracket Format
arr_2d[0][:]  # Gives all the elements in row 0 of arr_2d.
# arr_2d[0] also works

arr_2d[1][2]  # Gives the element at index 2 of row 1 of arr_2d, i.e. 30

# (ii) Single Bracket Format with comma (Recommended): replaces the two square brackets
# with a (row, col) tuple-like format
# To print 30 we do the following: row 1, column 2
arr_2d[1,2]

# Say we want sub-matrices from the matrix arr_2d
arr_2d[:3,1:]  # Everything up to the third row, and anything from column 1 onwards.
arr_2d[1:,:]
_____no_output_____
BSD-3-Clause
05. Python for Data Analysis - NumPy/5.19 indexing_selection_np_array.ipynb
shrey-c/DSC-ML-numpy-pandas
Conditional Selection
arr = np.arange(1,11)
arr

# Taking the array arr and comparing it with comparison operators gives a full boolean array.
bool_arr = arr > 5

'''
1. Using a comparison operator on the array actually returns a boolean array.
2. An array with boolean values in response to our condition.
3. We can then use the boolean array to index, or conditionally select, elements
   from the original array where the boolean array is True.
'''

bool_arr

arr[bool_arr]  # Gives us only the results which are True.

# Doing what's described above in one line:
arr[arr<3]
# arr[comparison condition] -- get used to this notation, we use it a lot, especially in Pandas!
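Conditions can also be combined inside the brackets with `&` (and) and `|` (or) — a short sketch; note the parentheses, which are required because `&` and `|` bind tighter than the comparisons:

```python
import numpy as np

arr = np.arange(1, 11)

# Parentheses are required: (arr > 3) & (arr < 8), not arr > 3 & arr < 8
mask = (arr > 3) & (arr < 8)
print(arr[mask])                   # [4 5 6 7]

# | works the same way for "or"
print(arr[(arr < 3) | (arr > 9)])  # [ 1  2 10]
```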
_____no_output_____
BSD-3-Clause
05. Python for Data Analysis - NumPy/5.19 indexing_selection_np_array.ipynb
shrey-c/DSC-ML-numpy-pandas
### Exercise

1. Create a new 2d array: np.arange(50).reshape(5,10).
2. Grab any 2 sub-matrices from the 5x10 chunk.
arr_2d = np.arange(50).reshape(5,10)
arr_2d

# Selecting 11 to 35
arr_2d[1:4,1:6]  # Keep in mind it is exclusive for the end value in the start:end format of indexing.

# Selecting 5-49
arr_2d[0:,5:]
_____no_output_____
BSD-3-Clause
05. Python for Data Analysis - NumPy/5.19 indexing_selection_np_array.ipynb
shrey-c/DSC-ML-numpy-pandas
# NumPy, Pandas and Matplotlib with ICESat

## UW Geospatial Data Analysis
## CEE498/CEWA599
## David Shean

### Objectives

1. Solidify basic skills with NumPy, Pandas, and Matplotlib
2. Learn basic data manipulation, exploration, and visualization with a relatively small, clean point dataset (65K points)
3. Learn a bit more about the ICESat mission, the GLAS instrument, and satellite laser altimetry
4. Explore outlier removal, grouping and clustering

### ICESat GLAS Background

The NASA Ice Cloud and land Elevation Satellite ([ICESat](https://icesat.gsfc.nasa.gov/icesat/)) was a NASA mission carrying the Geosciences Laser Altimeter System (GLAS) instrument: a space laser, pointed down at the Earth (and unsuspecting Earthlings). It measured surface elevations by precisely tracking laser pulses emitted from the spacecraft at a rate of 40 Hz (a new pulse every 0.025 seconds). These pulses traveled through the atmosphere, reflected off the surface, back up through the atmosphere, and into space, where some small fraction of that original energy was received by a telescope on the spacecraft. The instrument electronics precisely recorded the time when these intrepid photons left the instrument and when they returned. The position and orientation of the spacecraft was precisely known, so the two-way traveltime (and assumptions about the speed of light and propagation through the atmosphere) allowed for precise forward determination of the spot on the Earth's surface (or cloud tops, as was often the case) where the reflection occurred. The laser spot size varied during the mission, but was ~70 m in diameter.

ICESat collected billions of measurements from 2003 to 2009, and was operating in a "repeat-track" mode that sacrificed spatial coverage for more observations along the same ground tracks over time. One primary science focus involved elevation change over the Earth's ice sheets.
It allowed for early measurements of full Antarctic and Greenland ice sheet elevation change, which offered a detailed look at spatial distribution and rates of mass loss, and total ice sheet contributions to sea level rise. There were problems with the lasers during the mission, so it operated in short campaigns lasting only a few months to prolong the full mission lifetime. While the primary measurements focused on the polar regions, many measurements were also collected over lower latitudes, to meet other important science objectives (e.g., estimating biomass in the Earth's forests, observing sea surface height/thickness over time).

### Sample GLAS dataset for CONUS

A few years ago, I wanted to evaluate ICESat coverage of the Continental United States (CONUS). The primary application was to extract a set of accurate control points to co-register a large set of high-resolution digital elevation models (DEMs) derived from satellite stereo imagery. I wrote some Python/shell scripts to download, filter, and process all of the [GLAH14 L2 Global Land Surface Altimetry Data](https://nsidc.org/data/GLAH14/versions/34) granules in parallel ([https://github.com/dshean/icesat_tools](https://github.com/dshean/icesat_tools)). The high-level workflow is here: https://github.com/dshean/icesat_tools/blob/master/glas_proc.py#L24. These tools processed each HDF5 (H5) file and wrote out csv files containing "good" points. These csv files were concatenated to prepare the single input csv (`GLAH14_tllz_conus_lulcfilt_demfilt.csv`) that we will use for this tutorial.
The csv contains ICESat GLAS shots that passed the following filters:

* Within some buffer (~110 km) of mapped glacier polygons from the [Randolph Glacier Inventory (RGI)](https://www.glims.org/RGI/)
* Returns from exposed bare ground (landcover class 31) or snow/ice (12) according to a 30-m Land-use/Land-cover dataset (2011 NLCD, https://www.mrlc.gov/data?f%5B0%5D=category%3Aland%20cover)
* Elevation values within some threshold (200 m) of elevations sampled from an external reference DEM (void-filled 1/3-arcsec [30-m] SRTM-GL1, https://lpdaac.usgs.gov/products/srtmgl1v003/), used to remove spurious points and returns from clouds
* Various other ICESat-specific quality flags (see comments in `glas_proc.py` for details)

The final file contains a relatively small subset (~65K) of the total shots in the original GLAH14 data granules from the full mission timeline (2003-2009). The remaining points should represent returns from the Earth's surface with reasonably high quality, and can be used for subsequent analysis.

### Lab Exercises

Let's use this dataset to explore some of the NumPy and Pandas functionality, and practice some basic plotting with Matplotlib.

I've provided instructions and hints, and you will need to fill in the code to generate the output results and plots.

#### Import necessary modules
#Use shorter names (np, pd, plt) instead of full (numpy, pandas, matplotlib.pyplot) for convenience import numpy as np import pandas as pd import matplotlib.pyplot as plt #Magic function to enable interactive plotting (zoom/pan) in Jupyter notebook #If running locally, this would be `%matplotlib notebook`, but since we're using JupyterLab, we use widget #%matplotlib widget #Use matplotlib inline to render/embed figures in the notebook for upload to github %matplotlib inline #%matplotlib widget
_____no_output_____
MIT
modules/03_NumPy_Pandas_Matplotlib/03_NumPy_Pandas_Matplotlib_ICESat_exercises.ipynb
UW-GDA/gda_course_2021
Define relative path to the GLAS data csv from week 01
glas_fn = '../01_Shell_Github/data/GLAH14_tllz_conus_lulcfilt_demfilt.csv'
_____no_output_____
MIT
modules/03_NumPy_Pandas_Matplotlib/03_NumPy_Pandas_Matplotlib_ICESat_exercises.ipynb
UW-GDA/gda_course_2021
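As a warm-up for the exercises, the reference-DEM filter described above can be reproduced with pandas on a toy table. The column names below are hypothetical — the real schema of the GLAH14 csv comes from `glas_proc.py`:

```python
import pandas as pd

# Hypothetical toy columns -- the real GLAH14 csv schema comes from glas_proc.py
df = pd.DataFrame({
    "lat": [47.6, 47.7, 47.8, 47.9],
    "lon": [-121.1, -121.2, -121.3, -121.4],
    "glas_z": [1500.0, 1620.0, 2100.0, 1580.0],   # ICESat elevations
    "dem_z": [1498.0, 1610.0, 1300.0, 1579.0],    # sampled reference DEM
})

# The +/-200 m reference-DEM filter described above: drop shots whose
# elevation departs too far from the reference surface (clouds, blunders)
max_z_diff = 200.0
df["z_diff"] = df["glas_z"] - df["dem_z"]
filtered = df[df["z_diff"].abs() < max_z_diff]
print(len(filtered))  # 3 of the 4 toy shots survive
```

The same pattern (compute a difference column, then boolean-index on its absolute value) applies directly once the real csv is loaded.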
This notebook just defines `bar`
def bar(x): return "bar" * x
_____no_output_____
BSD-3-Clause
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/nbpackage/nbs/other.ipynb
willirath/jupyter-jsc-notebooks
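A quick usage sketch — Python string repetition means `bar(n)` is just `"bar"` repeated `n` times:

```python
def bar(x):
    return "bar" * x

# "bar" * 2 == "barbar"; repetition by zero (or a negative int) gives ""
print(bar(2))
print(bar(0) == "")
```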
Data Preprocessing
# drop the following columns as they are only available in training data # click_time, clicked, open_time, unsubscribe_time, unsubscribed training_data.drop(['click_time','clicked', 'open_time', 'unsubscribe_time', 'unsubscribed'], axis=1, inplace=True)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
Missing Values
training_data.isnull().sum() training_data['mail_category'].fillna(training_data['mail_category'].value_counts().index[0], inplace=True) training_data['mail_type'].fillna(training_data['mail_type'].value_counts().index[0],inplace=True) training_data['hacker_timezone'].fillna(training_data['hacker_timezone'].value_counts().index[0], inplace=True) training_data['last_online'].fillna(training_data['last_online'].mean(), inplace=True) testing_data.isnull().sum() testing_data['mail_category'].fillna(testing_data['mail_category'].value_counts().index[0], inplace=True) testing_data['mail_type'].fillna(testing_data['mail_type'].value_counts().index[0],inplace=True) testing_data['hacker_timezone'].fillna(testing_data['hacker_timezone'].value_counts().index[0], inplace=True) testing_data['last_online'].fillna(testing_data['last_online'].mean(), inplace=True)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
Outliers
training_data.describe().T min_threshold, max_threshold = training_data.sent_time.quantile([0.001, 0.999]) training_data = training_data[(training_data['sent_time'] > min_threshold) & (training_data['sent_time'] < max_threshold)] min_threshold, max_threshold = training_data.last_online.quantile([0.001, 0.999]) training_data = training_data[(training_data['last_online'] > min_threshold) & (training_data['last_online'] < max_threshold)] training_data.shape
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
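The same 0.1%/99.9% trim is written out twice above; a small helper (a sketch, not part of the original notebook) makes it reusable for any column:

```python
import pandas as pd

def clip_quantiles(df, col, lo=0.001, hi=0.999):
    """Keep rows whose `col` value lies strictly between the lo/hi quantiles."""
    lo_v, hi_v = df[col].quantile([lo, hi])
    return df[(df[col] > lo_v) & (df[col] < hi_v)]

# Toy demonstration on 1000 evenly spaced values
demo = pd.DataFrame({"sent_time": list(range(1000))})
trimmed = clip_quantiles(demo, "sent_time")
print(len(trimmed))  # the strict inequalities drop the extreme rows
```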
Encoding Categorical Attributes
from sklearn.preprocessing import LabelEncoder encode = LabelEncoder() # Extract Categorical Attributes #cat_training_data = training_data.select_dtypes(include=['object']).copy() # encode the categorical attributes of training_data training_data = training_data.apply(encode.fit_transform) # encode the categorical attribute of testing_data testing_data = testing_data.apply(encode.fit_transform) training_data.head() testing_data.head()
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
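One caveat with the encoding above: calling `fit_transform` separately on the training and testing frames can assign different integer codes to the same category. A sketch of one fix (the `mail_type` values here are made up) is to fit each column's encoder on the union of both frames, then transform each frame with the shared encoder:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Made-up categorical values standing in for the real mail_type column
train = pd.DataFrame({"mail_type": ["promo", "digest", "promo"]})
test = pd.DataFrame({"mail_type": ["digest", "alert", "promo"]})

# Fit one encoder per column on the union of train+test values so the
# same category always maps to the same integer in both frames
for col in train.columns:
    enc = LabelEncoder().fit(pd.concat([train[col], test[col]]))
    train[col] = enc.transform(train[col])
    test[col] = enc.transform(test[col])

print(train["mail_type"].tolist(), test["mail_type"].tolist())
```

`LabelEncoder` sorts the classes, so here `alert=0`, `digest=1`, `promo=2` in both frames.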
Separate the Label Column from training_data
label = training_data['opened'] training_data.drop('opened', inplace=True, axis=1)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
Scaling Numerical Features
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # extract numerical attributes and scale it to have zero mean and unit variance train_cols = training_data.select_dtypes(include=['float64', 'int64']).columns training_data = scaler.fit_transform(training_data.select_dtypes(include=['float64','int64'])) # extract numerical attributes and scale it to have zero mean and unit variance test_cols = testing_data.select_dtypes(include=['float64', 'int64']).columns testing_data = scaler.fit_transform(testing_data.select_dtypes(include=['float64','int64'])) training_data = pd.DataFrame(training_data, columns=train_cols) testing_data = pd.DataFrame(testing_data, columns=test_cols) training_data.shape testing_data.shape
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
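One caveat on the cell above: it fits a second `StandardScaler` on the testing data, so train and test end up standardized with different statistics. The more common pattern (a sketch on stand-in numeric data) is to fit on the training set only and reuse those means and standard deviations:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(10.0, 2.0, size=(100, 3))   # stand-in numeric features
X_test = rng.normal(10.0, 2.0, size=(20, 3))

scaler = StandardScaler().fit(X_train)   # learn means/stds on training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)      # reuse the same statistics

print(X_train_s.mean(axis=0).round(6), X_train_s.std(axis=0).round(6))
```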
Split the training_data into training and validation data
from sklearn.model_selection import train_test_split x_train, x_val, y_train, y_val = train_test_split(training_data, label, test_size = 0.2, random_state=2)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
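If the `opened` labels are imbalanced, adding `stratify=label` to the call above keeps the class ratio identical in both splits; a sketch on toy 80/20 labels:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy imbalanced labels standing in for `opened` (80% negative, 20% positive)
X = pd.DataFrame({"feature": range(100)})
y = pd.Series([0] * 80 + [1] * 20)

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, random_state=2, stratify=y)

# Both splits keep the original 20% positive rate
print(y_tr.mean(), y_val.mean())
```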
Model Training

Decision Tree
from sklearn import tree from sklearn import metrics DT_Classifier = tree.DecisionTreeClassifier(criterion='entropy', random_state=0) DT_Classifier.fit(x_train, y_train)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
Accuracy and Confusion Matrix
accuracy = metrics.accuracy_score(y_val, DT_Classifier.predict(x_val)) confusion_matrix = metrics.confusion_matrix(y_val, DT_Classifier.predict(x_val)) accuracy confusion_matrix
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
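The four numbers in the confusion matrix above can also be summarized as precision and recall; a sketch on toy labels shows how they fall out of the `tn, fp, fn, tp` unpacking:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy labels standing in for y_val and the classifier's predictions
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)   # of predicted positives, the fraction correct
recall = tp / (tp + fn)      # of actual positives, the fraction found
print(tn, fp, fn, tp, precision, recall)
```

These match `precision_score` and `recall_score` directly, so either route works.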
K-Nearest Neighbour
from sklearn.neighbors import KNeighborsClassifier KNN_Classifier = KNeighborsClassifier(n_jobs=-1) KNN_Classifier.fit(x_train, y_train) accuracy = metrics.accuracy_score(y_val, KNN_Classifier.predict(x_val)) confusion_matrix = metrics.confusion_matrix(y_val, KNN_Classifier.predict(x_val)) accuracy confusion_matrix
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
Prediction on test data
predicted = DT_Classifier.predict(testing_data) predicted.shape prediction_df = pd.DataFrame(predicted, columns=['Prediction']) prediction_df.to_csv('prediction.csv') import pickle pkl_filename = "Decision_Tree_model.pkl" with open(pkl_filename, 'wb') as file: pickle.dump(DT_Classifier, file)
_____no_output_____
CC0-1.0
Source Code.ipynb
ericblanco/fictional-happiness
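To confirm the pickled model can be restored later, a round-trip sketch using `pickle.dumps`/`pickle.loads` with a tiny stand-in classifier (the notebook's real model is `DT_Classifier`, saved with `pickle.dump` above):

```python
import pickle
from sklearn.tree import DecisionTreeClassifier

# Tiny stand-in model; the notebook's real model is DT_Classifier above
X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
model = DecisionTreeClassifier(random_state=0).fit(X, y)

blob = pickle.dumps(model)     # same bytes that pickle.dump writes to the file
restored = pickle.loads(blob)

print(restored.predict([[0], [3]]))  # the restored model predicts as before
```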
Implement Sliding Windows and Fit a Polynomial

This notebook shows how to run sliding windows over an image using the histogram we built in an earlier notebook. We can use the two highest peaks from our histogram as a starting point for determining where the lane lines are, and then use sliding windows moving upward in the image (further along the road) to determine where the lane lines go. The output should look something like this:

Steps:
1. Split the histogram for the two lines.
2. Set up windows and window hyperparameters.
3. Iterate through the number of sliding windows to track curvature.
4. Fit the polynomial.
5. Plot the image.

1. Split the histogram for the two lines

The first step we'll take is to split the histogram into two sides, one for each lane line.

NOTE: You will need an image from the previous notebook: warped-example.jpg

Below is the pseudocode:

**Do not run the below cell; it is for explanation only**
# Assuming you have created a warped binary image called "binary_warped" # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255 # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint
_____no_output_____
MIT
04-Advanced-Computer-Vision/02-Implement-Sliding-Windows-and-Fit-a-Polynomial.ipynb
vyasparthm/AutonomousDriving
2. Set up windows and window hyperparameters

Our next step is to set a few hyperparameters related to our sliding windows, and set them up to iterate across the binary activations in the image. I have some base hyperparameters below, but don't forget to try out different values in your own implementation to see what works best!

Below is the pseudocode:

**Do not run the below cell; it is for explanation only**
# HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 50 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero (i.e. activated) pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = []
_____no_output_____
MIT
04-Advanced-Computer-Vision/02-Implement-Sliding-Windows-and-Fit-a-Polynomial.ipynb
vyasparthm/AutonomousDriving
3. Iterate through the number of sliding windows to track curvature

Now that we've set up what the windows look like and have a starting point, we'll want to loop for `nwindows`, with the given window sliding left or right if it finds the mean position of activated pixels within the window to have shifted.

Let's approach this like below:
1. Loop through each window in `nwindows`.
2. Find the boundaries of our current window. This is based on a combination of the current window's starting point (`leftx_current` and `rightx_current`), as well as the margin you set in the hyperparameters.
3. Use `cv2.rectangle` to draw these window boundaries onto our visualization image `out_img`.
4. Now that we know the boundaries of our window, find out which activated pixels from `nonzeroy` and `nonzerox` above actually fall into the window.
5. Append these to our lists `left_lane_inds` and `right_lane_inds`.
6. If the number of pixels you found in Step 4 is greater than your hyperparameter `minpix`, re-center our window (i.e. `leftx_current` or `rightx_current`) based on the mean position of these pixels.

4. Fit the polynomial

Now that we have found all our pixels belonging to each line through the sliding window method, it's time to fit a polynomial to the line. First, we have a couple small steps to ready our pixels.

Below is the pseudocode:

**Do not run the below cell; it is for explanation only**
# Concatenate the arrays of indices (previously was a list of lists of pixels) left_lane_inds = np.concatenate(left_lane_inds) right_lane_inds = np.concatenate(right_lane_inds) # Extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] # Assuming we have `left_fit` and `right_fit` from `np.polyfit` before # Generate x and y values for plotting ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0]) left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
_____no_output_____
MIT
04-Advanced-Computer-Vision/02-Implement-Sliding-Windows-and-Fit-a-Polynomial.ipynb
vyasparthm/AutonomousDriving
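The window loop from steps 1-6 can be sketched end-to-end on a tiny synthetic binary image containing a single lane line that drifts sideways. All numbers below are toy values for illustration, not the notebook's real image or hyperparameters, and only one (left-style) line is tracked:

```python
import numpy as np

# Synthetic 90x60 binary image: one "lane line" of hot pixels that sits at
# column 30 at the bottom and drifts to column 38 at the top
binary = np.zeros((90, 60), dtype=np.uint8)
rows = np.arange(90)
cols = 30 + (89 - rows) // 10
binary[rows, cols] = 1

nwindows, margin, minpix = 9, 5, 3       # toy hyperparameters
window_height = binary.shape[0] // nwindows
nonzeroy, nonzerox = binary.nonzero()

x_current = 30                           # would come from the histogram peak
lane_inds = []
for window in range(nwindows):
    # Step 2: window boundaries, sliding from the bottom of the image upward
    win_y_low = binary.shape[0] - (window + 1) * window_height
    win_y_high = binary.shape[0] - window * window_height
    # Step 4: activated pixels that fall inside the current window
    good = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
            (nonzerox >= x_current - margin) &
            (nonzerox < x_current + margin)).nonzero()[0]
    lane_inds.append(good)               # step 5
    if len(good) > minpix:               # step 6: recenter on the mean x
        x_current = int(np.mean(nonzerox[good]))

lane_inds = np.concatenate(lane_inds)
print(len(lane_inds), x_current)         # all 90 line pixels found, ends at x=38
```

Because each window re-centers on the mean x of the pixels it captured, the tracker follows the drifting line even though it starts 8 pixels away from the line's final position.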
5. Visualize

We will use subplots to visualize the output. Let's get to coding then.
import numpy as np import matplotlib.image as mpimg import matplotlib.pyplot as plt import cv2 # Load our image binary_warped = mpimg.imread('./img/warped-example.jpg') def find_lane_pixels(binary_warped): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 50 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xleft_low = leftx_current - margin win_xleft_high = leftx_current + margin win_xright_low = rightx_current - margin win_xright_high = rightx_current + margin # Draw the windows on the visualization image cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2) 
cv2.rectangle(out_img,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2) # Identify the nonzero pixels in x and y within the window good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0] good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0] # Append these indices to the lists left_lane_inds.append(good_left_inds) right_lane_inds.append(good_right_inds) # If you found > minpix pixels, recenter next window on their mean position if len(good_left_inds) > minpix: leftx_current = np.int(np.mean(nonzerox[good_left_inds])) if len(good_right_inds) > minpix: rightx_current = np.int(np.mean(nonzerox[good_right_inds])) # Concatenate the arrays of indices (previously was a list of lists of pixels) try: left_lane_inds = np.concatenate(left_lane_inds) right_lane_inds = np.concatenate(right_lane_inds) except ValueError: # Avoids an error if the above is not implemented fully pass # Extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] return leftx, lefty, rightx, righty, out_img def fit_polynomial(binary_warped): # Find our lane pixels first leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped) # Fit a second order polynomial to each using `np.polyfit` left_fit = np.polyfit(lefty, leftx, 2) right_fit = np.polyfit(righty, rightx, 2) # Generate x and y values for plotting ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] ) try: left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] except TypeError: # Avoids an error if `left` and `right_fit` are still none or incorrect print('The function failed to fit a line!') left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty ## Visualization ## # Colors in the left and right lane regions out_img[lefty, leftx] = [255, 0, 0] out_img[righty, rightx] = [0, 0, 255] # Plots the left and right polynomials on the lane lines plt.plot(left_fitx, ploty, color='white') plt.plot(right_fitx, ploty, color='white') print(left_fit) print(right_fit) return out_img out_img = fit_polynomial(binary_warped) plt.imshow(out_img)
[ 2.23090058e-04 -3.90812851e-01 4.78139852e+02] [ 4.19709859e-04 -4.79568379e-01 1.11522544e+03]
MIT
04-Advanced-Computer-Vision/02-Implement-Sliding-Windows-and-Fit-a-Polynomial.ipynb
vyasparthm/AutonomousDriving
Loading Files

Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load_sample``` convenience function. yt will autodetect that you want a tipsy snapshot and download it from the yt hub.
import yt
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
We will be looking at a fairly low resolution dataset. >This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
ds = yt.load_sample("TipsyGalaxy")
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
ds.field_list
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
ds.derived_field_list
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
%matplotlib inline import matplotlib.pyplot as plt import numpy as np ad = ds.all_data() xcoord = ad["Gas", "Coordinates"][:, 0].v ycoord = ad["Gas", "Coordinates"][:, 1].v logT = np.log10(ad["Gas", "Temperature"]) plt.scatter( xcoord, ycoord, c=logT, s=2 * logT, marker="o", edgecolor="none", vmin=2, vmax=6 ) plt.xlim(-20, 20) plt.ylim(-20, 20) cb = plt.colorbar() cb.set_label(r"$\log_{10}$ Temperature") plt.gcf().set_size_inches(15, 10)
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
Making Smoothed Images

yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
yt.SlicePlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m") yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m")
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
Not only are the values in the tipsy snapshot read and automatically smoothed, but the auxiliary files that have physical significance are also smoothed. Let's look at a slice of iron mass fraction.
yt.SlicePlot(ds, "z", ("gas", "Fe_fraction"), width=(40, "kpc"), center="m")
_____no_output_____
BSD-3-Clause-Clear
doc/source/cookbook/tipsy_and_yt.ipynb
Kiradorn/yt
Natural Language Processing, a look at distinguishing subreddit categories by analyzing the text of the comments and posts

**Matt Paterson, hello@hiremattpaterson.com**

General Assembly Data Science Immersive, July 2020

Abstract

**HireMattPaterson.com has been (fictionally) contracted by Virgin Galactic's marketing team to build a Natural Language Processing model that will efficiently predict whether reddit posts are being made for the SpaceX subreddit or the Boeing subreddit, as a proof of concept for segmenting the targeted markets.**

We’ve created a model that predicts the silo of the post with nearly 80% accuracy (with a top score of 79.9%). To get there we tried over 2,000 different iterations on a total of 5 different classification modeling algorithms, including two versions of Multinomial Naïve Bayes, Random Cut Forest, Extra Trees, and a simple Logistic Regression classifier. We’d like to use Support Vector Machines as well as Gradient Boosting and a K-Nearest Neighbors model in our follow-up to this presentation.

If you like our proof of concept, the next iteration of our model will take into account the trend or frequency in the comments of each user; what other subreddits these users can be found to post to (are they commenting on the Rolex and Gulfstream and Maserati or are they part of the Venture Capital and AI crowd?); and whether their comments appear to be professional in nature (are they looking to someday work in aerospace, or maybe they already do). These trends will help the marketing team tune their tone, choose words that are trending, and speak directly to each cohort in a narrowcast fashion, thus allowing VG to spend less money on ads and on people over time.

This notebook shows how we got there.
Problem Statement:

* Virgin Galactic wants to charge customers USD 250K per voyage to bring customers into outer space on a pleasure cruise in null G
* The potential customers range from more traditional HNWI who have more conservative values, to the Nouveau Riche, and various levels of tech millionaires in between
* Large teams of many Marketing Analysts and Marketing Managers are expensive
* If you can keep your current headcount or only add a few you are better off, since as headcount grows, overall ROI tends to shrink (VG HC ~ 200 ppl)

Solution:

* Create a machine learning model to identify what type of interests each user has based on their social media and reddit posts
* Narrowcast to each smaller cohort with the language, tone, and vocabulary that will push each to purchase the quarter-million dollar flight

Import libraries
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import nltk import lebowski as dude from sklearn.linear_model import LogisticRegression from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score from sklearn.pipeline import Pipeline from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import confusion_matrix, plot_confusion_matrix from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, VotingClassifier from sklearn.preprocessing import StandardScaler from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Read in the data. In the data_file_creation.ipynb found in this directory, we have already gone to the 'https://api.pushshift.io/reddit/search/' api and pulled subreddit posts and comments from SpaceX, Boeing, BlueOrigin, and VirginGalactic; four specific companies venturing into the outer space exploration business with distinct differences therein. It is the theory of this research team that each subreddit will also have a distinct group of main users, or possible customers, that are engaging on each platform. While there will be overlap in the usership, there will also be a clear lexicon that each subreddit thread has. In this particular study, we will look specifically at the differences between SpaceX and Boeing, and will create a classification model that predicts whether a post is in the SpaceX subreddit or not. Finally, we will test the model against a testing set that is made up of posts from all four companies and measure its ability to predict which posts are SpaceX and which are not.
spacex = pd.read_csv('./data/spacex.csv') boeing = pd.read_csv('./data/boeing.csv') spacex.head()
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
We have already done a lot of cleaning up, but as we see there are still many NaN values and other meaningless values in our data. We'll create a function to remove these values using mapping in our dataframe. Before we get there, let's convert our target column into a binary selector.
spacex['subreddit'] = spacex['subreddit'].map({'spacex': 1, 'boeing': 0}) boeing['subreddit'] = boeing['subreddit'].map({'spacex': 1, 'boeing': 0})
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
And drop the null values right off too.
print(f"spacex df has {spacex.isna().sum()} null values not including extraneous words") print(f"boeing df has {boeing.isna().sum()} null values not including extraneous words")
spacex df has subreddit 0 body 37 permalink 0 dtype: int64 null values not including extraneous words boeing df has subreddit 0 body 24 permalink 0 dtype: int64 null values not including extraneous words
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
We can remove these 61 rows right off.
spacex = spacex.dropna() boeing = boeing.dropna() spacex.shape boeing.shape
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Merge into one dataframe
space_wars = pd.concat([spacex, boeing]) space_wars.shape
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Use TF-IDF to convert the text into numeric features and then drop the unneeded words
tvec = TfidfVectorizer(stop_words = 'english')
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
We will only put the 'body' column into the TF-IDF vectorizer
X_list = space_wars.body nums_df = pd.DataFrame(tvec.fit_transform(X_list).toarray(), columns=tvec.get_feature_names()) nums_df.head()
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
And with credit to Noelle Brown, let's graph the resulting top words:
# get count of top-occurring words top_words_tf = {} for i in nums_df.columns: top_words_tf[i] = nums_df[i].sum() # top_words to dataframe sorted by highest occurrence most_freq_tf = pd.DataFrame(sorted(top_words_tf.items(), key = lambda x: x[1], reverse = True)) plt.figure(figsize = (10, 5)) # visualize top 10 words plt.bar(most_freq_tf[0][:10], most_freq_tf[1][:10]);
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
We can see that if we remove 'replace_me', 'removed', and 'deleted', then we'll be dealing with a much more useful dataset. For the words dataframe, we can just add these words to our stop_words library. For the numeric dataframe we'll drop them here, as well as a few more.
dropwords = ['replace_me', 'removed', 'deleted', 'https', 'com', 'don', 'www'] nums_df = nums_df.drop(columns=dropwords)
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
And we can re-run the graph above for a better look.
# get count of top-occurring words top_words_tf = {} for i in nums_df.columns: top_words_tf[i] = nums_df[i].sum() # top_words to dataframe sorted by highest occurrence most_freq_tf = pd.DataFrame(sorted(top_words_tf.items(), key = lambda x: x[1], reverse = True)) plt.figure(figsize = (18, 6)) dude.graph_words('black') # visualize top 15 words plt.bar(most_freq_tf[0][:15], most_freq_tf[1][:15]);
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
If I had more time, I'd like to graph the words used most by each company. For now, I can display which company is more verbose (wordier) and which one uses longer words (credit to Hovanes Gasparian).
nums_df = pd.concat([space_wars['subreddit'].reset_index(drop=True), nums_df], axis=1) # axis=1 adds the label as a column instead of stacking rows space_wars['word_count'] = space_wars['body'].apply(dude.word_count) space_wars['post_length'] = space_wars['body'].apply(dude.count_chars) space_wars[['word_count', 'post_length']].describe().T space_wars.groupby(['word_count']).size().sort_values(ascending=False)#.head() space_wars[space_wars['word_count'] > 1000] #space_wars.groupby(['subreddit', 'word_count']).size().sort_values(ascending=False).head() space_wars.subreddit.value_counts() plt.figure(figsize=(18,6)) dude.graph_words('black') plt.hist([space_wars[space_wars['subreddit']==0]['word_count'], space_wars[space_wars['subreddit']==1]['word_count']], bins=3, color=['blue', 'red'], ec='k') plt.title('Word Count by Company', fontsize=30) plt.legend(['Boeing', 'SpaceX']);
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Trouble in parsing-dise

It appears that I'm having some issues with manipulating this portion of the data. I will clean this up before the final pull request.

Create train_test_split with word data

Find the baseline:
baseline = space_wars.subreddit.value_counts(normalize=True)[1] all_scores = {} all_scores['baseline'] = baseline all_scores['baseline'] X_words = space_wars['body'] y_words = space_wars['subreddit'] X_train_w, X_test_w, y_train_w, y_test_w = train_test_split(X_words, y_words, random_state=42, test_size=.1, stratify=y_words)
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Now it's time to train some models!
# Modify our stopwords list from nltk's 'english' set stopwords = nltk.corpus.stopwords.words('english') # Above we created a list called dropwords for i in dropwords: stopwords.append(i) param_cv = { 'stop_words' : stopwords, 'ngram_range' : (1, 2), 'analyzer' : 'word', 'max_df' : 0.8, 'min_df' : 0.02, } cntv = CountVectorizer(**param_cv) # unpack the params dict as keyword arguments # Print y_test for a sanity check y_test_w # credit Noelle from lecture train_data_features = cntv.fit_transform(X_train_w, y_train_w) test_data_features = cntv.transform(X_test_w)
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Logistic Regression
lr = LogisticRegression( max_iter = 10_000) lr.fit(train_data_features, y_train_w) lr.score(train_data_features, y_train_w) all_scores['Logistic Regression'] = lr.score(test_data_features, y_test_w) all_scores['Logistic Regression']
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
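A confusion matrix for the logistic regression above can be sketched with `sklearn.metrics.confusion_matrix`; the corpus below is a toy stand-in (the notebook's real inputs are `train_data_features`/`y_train_w` and `test_data_features`/`y_test_w`):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Toy stand-in corpus and labels; not the real reddit data
docs = ["falcon launch booster", "starship booster landing",
        "737 max wing", "dreamliner wing fuselage",
        "falcon starship launch", "737 fuselage assembly"]
labels = [1, 1, 0, 0, 1, 0]          # 1 = spacex, 0 = boeing

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression(max_iter=10_000).fit(X, labels)

# Rows are the true class, columns the predicted class
cm = confusion_matrix(labels, clf.predict(X))
print(cm)
```

The same two lines (`predict`, then `confusion_matrix`) apply unchanged to the fitted `lr` model and the held-out features above.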
***Using a simple logistic regression with very little tweaking and a set of stopwords, we created a model that, while slightly overfit, is more than 22 points more accurate than the baseline.***

What does the confusion matrix look like? Is 80% accuracy even good? Perhaps I can get some help making a confusion matrix with this data?

Multinomial Naive Bayes using CountVectorizer

In this section we will create a Pipeline that starts with the CountVectorizer and ends with the Multinomial Naive Bayes algorithm. We'll run through 270 possible configurations of this model, and run it in parallel on 3 of the 4 cores on my machine.
pipe = Pipeline([ ('count_v', CountVectorizer()), ('nb', MultinomialNB()) ]) pipe_params = { 'count_v__max_features': [2000, 5000, 9000], 'count_v__stop_words': [stopwords], 'count_v__min_df': [2, 3, 10], 'count_v__max_df': [.9, .8, .7], 'count_v__ngram_range': [(1, 1), (1, 2)] } gs = GridSearchCV(pipe, pipe_params, cv = 5, n_jobs=6 ) %%time gs.fit(X_train_w, y_train_w) gs.best_params_ all_scores['Naive Bayes'] = gs.best_score_ all_scores['Naive Bayes'] gs.best_index_ # index of the best-scoring parameter combination in gs.cv_results_
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
We see that our Naive Bayes model yields an accuracy score just shy of our Logistic Regression model, 79.7%.

**What does the confusion matrix look like?**
# Get predictions and true/false pos/neg preds = gs.predict(X_test_w) tn, fp, fn, tp = confusion_matrix(y_test_w, preds).ravel() # View confusion matrix dude.graph_words('black') plot_confusion_matrix(gs, X_test_w, y_test_w, cmap='Blues', values_format='d'); sensitivity = tp / (tp + fn) sensitivity specificity = tn / (tn + fp) specificity
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Naive Bayes using the TF-IDF Vectorizer
pipe_tvec = Pipeline([ ('tvec', TfidfVectorizer()), ('nb', MultinomialNB()) ]) pipe_params_tvec = { 'tvec__max_features': [2000, 9000], 'tvec__stop_words' : [None, stopwords], 'tvec__ngram_range': [(1, 1), (1, 2)] } gs_tvec = GridSearchCV(pipe_tvec, pipe_params_tvec, cv = 5) %%time gs_tvec.fit(X_train_w, y_train_w) all_scores['Naive Bayes TFID'] = gs_tvec.best_score_ all_scores['Naive Bayes TFID'] all_scores # Confusion Matrix for tvec preds = gs_tvec.predict(X_test_w) tn, fp, fn, tp = confusion_matrix(y_test_w, preds).ravel() # View confusion matrix dude.graph_words('black') plot_confusion_matrix(gs_tvec, X_test_w, y_test_w, cmap='Blues', values_format='d'); specificity = tn / (tn+fp) specificity sensitivity = tp / (tp+fn) sensitivity
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Here, the specificity is 4 points higher than the NB using the Count Vectorizer, but the sensitivity and overall accuracy are about the same. Random Forest and Extra Trees
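For reference, the textbook definitions of these rates can be checked with toy counts (the numbers below are illustrative only, not taken from this model):

```python
# Toy confusion-matrix counts: tn, fp, fn, tp (illustrative only).
tn, fp, fn, tp = 400, 50, 60, 490

# Sensitivity (recall / true positive rate): share of actual positives caught.
sensitivity = tp / (tp + fn)

# Specificity (true negative rate): share of actual negatives caught.
specificity = tn / (tn + fp)

# Precision: share of predicted positives that were correct.
precision = tp / (tp + fp)

print(round(sensitivity, 3), round(specificity, 3), round(precision, 3))
# → 0.891 0.889 0.907
```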
pipe_rf = Pipeline([ ('count_v', CountVectorizer()), ('rf', RandomForestClassifier()), ]) pipe_ef = Pipeline([ ('count_v', CountVectorizer()), ('ef', ExtraTreesClassifier()), ]) pipe_params = { 'count_v__max_features': [2000, 5000, 9000], 'count_v__stop_words': [stopwords], 'count_v__min_df': [2, 3, 10], 'count_v__max_df': [.9, .8, .7], 'count_v__ngram_range': [(1, 1), (1, 2)] } %%time gs_rf = GridSearchCV(pipe_rf, pipe_params, cv = 5, n_jobs=6) gs_rf.fit(X_train_w, y_train_w) print(gs_rf.best_score_) gs_rf.best_params_ gs_rf.best_estimator_ all_scores['Random Forest'] = gs_rf.best_score_ all_scores # Confusion Matrix for Random Forest preds = gs_rf.predict(X_test_w) tn, fp, fn, tp = confusion_matrix(y_test_w, preds).ravel() # View confusion matrix dude.graph_words('black') plot_confusion_matrix(gs_rf, X_test_w, y_test_w, cmap='Blues', values_format='d'); specificity = tn / (tn + fp) specificity sensitivity = tp / (tp + fn) sensitivity
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Our original Logistic Regression model is still the winner. What does the matchup look like?
score_df = pd.DataFrame([all_scores]) score_df.shape score_df.head()
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Create a Count Vectorized dataset. Since the cells below have been troublesome, we'll create a dataset using only the count vectorizer and then use that data in the model as we did above.
# Re-establish the subsets using Noelle's starter script again train_data_features = cntv.fit_transform(X_train_w, y_train_w) test_data_features = cntv.transform(X_test_w) pipe_params_tvec = { 'tvec__max_features': [2000, 9000], 'tvec__stop_words' : [None, stopwords], 'tvec__ngram_range': [(1, 1), (1, 2)] } # StandardScaler cannot center sparse count features, so scale only (with_mean=False) knn_pipe = Pipeline([ ('ss', StandardScaler(with_mean=False)), ('knn', KNeighborsClassifier()) ]) tree_pipe = Pipeline([ ('tvec', TfidfVectorizer()), ('tree', DecisionTreeClassifier()) ]) ada_pipe = Pipeline([ ('tvec', TfidfVectorizer()), ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())), ]) grad_pipe = Pipeline([ ('tvec', TfidfVectorizer()), ('grad_boost', GradientBoostingClassifier()), ])
_____no_output_____
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
Irreconcilable Error: At this time, this final block of code does not complete. The FitFailedWarning output below shows the cause: `StandardScaler` cannot center sparse matrices, so every fold that includes the KNN pipeline fails; passing `with_mean=False` to the scaler resolves this. ***Prior to publication, this notebook will be revamped so that this final cell executes.***
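The traceback points directly at the cause: `StandardScaler` tries to subtract the column means, which would densify a sparse matrix. A minimal reproduction of the fix, assuming scikit-learn and scipy are available (toy matrix, not this notebook's data):

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import StandardScaler

# A tiny sparse matrix of the kind CountVectorizer produces.
X = sparse.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 0.0]]))

# StandardScaler() would raise "ValueError: Cannot center sparse matrices";
# with_mean=False divides by the standard deviation only and keeps sparsity.
scaler = StandardScaler(with_mean=False)
X_scaled = scaler.fit_transform(X)

print(sparse.issparse(X_scaled))  # the result is still sparse
```

After scaling, each column has unit standard deviation but its (nonzero) mean is left in place, which is the trade-off `with_mean=False` makes to preserve sparsity.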
%%time vote = VotingClassifier([ ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())), ('grad_boost', GradientBoostingClassifier()), ('tree', DecisionTreeClassifier()), ('knn_pipe', knn_pipe) ]) params = { 'ada__n_estimators': [50, 51], # hyperparameter names can collide across estimators, so prefix each with its estimator name and a double underscore 'grad_boost__n_estimators': [10, 11], 'knn_pipe__knn__n_neighbors': [3, 5], 'ada__base_estimator__max_depth': [1, 2], 'tree__max_depth': [1, 2], 'weights':[[.25] * 4, [.3, .3, .3, .1]] } gs = GridSearchCV(vote, param_grid=params, cv=3) gs.fit(train_data_features, y_train_w) print(gs.best_score_) gs.best_params_
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details: ValueError: Cannot center sparse matrices: pass `with_mean=False` instead. See docstring for motivation and alternatives. FitFailedWarning)
CC0-1.0
NLP_subreddits_Spacex_v_Boeing.ipynb
MattPat1981/new_space_race_nlp
%matplotlib inline import os import re import urllib.request import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import matplotlib.pyplot as plt import itertools from torch.utils.data import Dataset, DataLoader from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
_____no_output_____
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
In this notebook you will work with a deep generative language model that generates words from a discrete (bit-vector-valued) latent space. We will work with Spanish text data at the character level, using PyTorch. The first section concerns data manipulation and data loading classes necessary for our implementation. You do not need to modify anything in this part of the code. Let's first download the SIGMORPHON dataset that we will be using for this notebook: these are inflected Spanish words together with some morphosyntactic descriptors. For this notebook we will ignore the morphosyntactic descriptors.
url = "https://raw.githubusercontent.com/ryancotterell/sigmorphon2016/master/data/" train_file = "spanish-task1-train" val_file = "spanish-task1-dev" test_file = "spanish-task1-test" print("Downloading data files...") if not os.path.isfile(train_file): urllib.request.urlretrieve(url + train_file, filename=train_file) if not os.path.isfile(val_file): urllib.request.urlretrieve(url + val_file, filename=val_file) if not os.path.isfile(test_file): urllib.request.urlretrieve(url + test_file, filename=test_file) print("Download complete.")
Downloading data files... Download complete.
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
Data. In order to work with text data, we need to transform the text into something that our algorithms can work with. The first step of this process is converting characters into ids. We do this by constructing a vocabulary from the data, assigning a new id to each new character it encounters.
UNK_TOKEN = "?" PAD_TOKEN = "_" SOW_TOKEN = ">" EOW_TOKEN = "." def extract_inflected_word(s): """ Extracts the inflected words in the SIGMORPHON dataset. """ return s.split()[-1] class Vocabulary: def __init__(self): self.idx_to_char = {0: UNK_TOKEN, 1: PAD_TOKEN, 2: SOW_TOKEN, 3: EOW_TOKEN} self.char_to_idx = {UNK_TOKEN: 0, PAD_TOKEN: 1, SOW_TOKEN: 2, EOW_TOKEN: 3} self.word_freqs = {} def __getitem__(self, key): return self.char_to_idx[key] if key in self.char_to_idx else self.char_to_idx[UNK_TOKEN] def word(self, idx): return self.idx_to_char[idx] def size(self): return len(self.char_to_idx) @staticmethod def from_data(filenames): """ Creates a vocabulary from a list of data files. It assumes that the data files have been tokenized and pre-processed beforehand. """ vocab = Vocabulary() for filename in filenames: with open(filename) as f: for line in f: # Strip whitespace and the newline symbol. word = extract_inflected_word(line.strip()) # Split the words into characters and assign ids to each # new character it encounters. for char in list(word): if char not in vocab.char_to_idx: idx = len(vocab.char_to_idx) vocab.char_to_idx[char] = idx vocab.idx_to_char[idx] = char return vocab # Construct a vocabulary from the training and validation data. print("Constructing vocabulary...") vocab = Vocabulary.from_data([train_file, val_file]) print("Constructed a vocabulary of %d types" % vocab.size()) # some examples print('e', vocab['e']) print('é', vocab['é']) print('ș', vocab['ș']) # something UNKNOWN
e 8 é 24 ș 0
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
We also need to load the data files into memory. We create a simple class `TextDataset` that stores the data as a list of words:
class TextDataset(Dataset): """ A simple class that loads a list of words into memory from a text file, split by newlines. This does not do any memory optimisation, so if your dataset is very large, you might want to use an alternative class. """ def __init__(self, text_file, max_len=30): self.data = [] with open(text_file) as f: for line in f: word = extract_inflected_word(line.strip()) if len(list(word)) <= max_len: self.data.append(word) def __len__(self): return len(self.data) def __getitem__(self, idx): return self.data[idx] # Load the training, validation, and test datasets into memory. train_dataset = TextDataset(train_file) val_dataset = TextDataset(val_file) test_dataset = TextDataset(test_file) # Print some samples from the data: print("Sample from training data: \"%s\"" % train_dataset[np.random.choice(len(train_dataset))]) print("Sample from validation data: \"%s\"" % val_dataset[np.random.choice(len(val_dataset))]) print("Sample from test data: \"%s\"" % test_dataset[np.random.choice(len(test_dataset))])
Sample from training data: "compiláramos" Sample from validation data: "debutara" Sample from test data: "paginabas"
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
Now it's time to write a function that converts a word into a list of character ids using the vocabulary we created before. This function is `create_batch` in the code cell below. This function creates a batch from a list of words, and makes sure that each word starts with a start-of-word symbol and ends with an end-of-word symbol. Because not all words are of equal length in a certain batch, words are padded with padding symbols so that they match the length of the largest word in the batch. The function returns an input batch, an output batch, a mask of 1s for words and 0s for padding symbols, and the sequence lengths of each word in the batch. The output batch is shifted by one character, to reflect the predictions that the model is expected to make. For example, for a word\begin{align} \text{e s p e s e m o s}\end{align}the input sequence is\begin{align} \text{SOW e s p e s e m o s}\end{align}and the output sequence is\begin{align} \text{e s p e s e m o s EOW}\end{align}You can see the output is shifted wrt the input, that's because we will be computing a distribution for the next character in context of its prefix, and that's why we need to shift the sequence this way.Lastly, we create an inverse function `batch_to_words` that recovers the list of words from a padded batch of character ids to use during test time.
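The input/output shift described above can be illustrated with a toy example, using a hypothetical character-to-id mapping (not this notebook's `Vocabulary` object):

```python
# Hypothetical character-to-id mapping: ">" is SOW, "." is EOW.
toy_ids = {">": 2, ".": 3, "e": 4, "s": 5, "p": 6}

word = "espese"
tokens = [">"] + list(word) + ["."]   # > e s p e s e .
ids = [toy_ids[c] for c in tokens]

inputs = ids[:-1]    # starts with SOW:  > e s p e s e
outputs = ids[1:]    # shifted by one:   e s p e s e .

print(inputs)   # [2, 4, 5, 6, 4, 5, 4]
print(outputs)  # [4, 5, 6, 4, 5, 4, 3]
```

At each position the model sees the prefix ending at `inputs[t]` and is trained to predict `outputs[t]`, the next character.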
def create_batch(words, vocab, device, word_dropout=0.): """ Converts a list of words to a padded batch of word ids. Returns an input batch, an output batch shifted by one, a sequence mask over the input batch, and a tensor containing the sequence length of each batch element. :param words: a list of words (strings) :param vocab: a Vocabulary object for this dataset :param device: :param word_dropout: rate at which we omit words from the context (input) :returns: a batch of padded inputs, a batch of padded outputs, mask, lengths """ # Keep tok as a plain (ragged) Python list; np.array fails on ragged lists. tok = [[SOW_TOKEN] + list(w) + [EOW_TOKEN] for w in words] seq_lengths = [len(w)-1 for w in tok] max_len = max(seq_lengths) pad_id = vocab[PAD_TOKEN] pad_id_input = [ [vocab[w[t]] if t < seq_lengths[idx] else pad_id for t in range(max_len)] for idx, w in enumerate(tok)] # Replace words of the input with <unk> with p = word_dropout. if word_dropout > 0.: unk_id = vocab[UNK_TOKEN] pad_id_input = [ [unk_id if (np.random.random() < word_dropout and t < seq_lengths[idx]) else word_ids[t] for t in range(max_len)] for idx, word_ids in enumerate(pad_id_input)] # The output batch is shifted by 1. pad_id_output = [ [vocab[w[t+1]] if t < seq_lengths[idx] else pad_id for t in range(max_len)] for idx, w in enumerate(tok)] # Convert everything to PyTorch tensors. batch_input = torch.tensor(pad_id_input) batch_output = torch.tensor(pad_id_output) seq_mask = (batch_input != vocab[PAD_TOKEN]) seq_length = torch.tensor(seq_lengths) # Move all tensors to the given device. batch_input = batch_input.to(device) batch_output = batch_output.to(device) seq_mask = seq_mask.to(device) seq_length = seq_length.to(device) return batch_input, batch_output, seq_mask, seq_length def batch_to_words(tensors, vocab: Vocabulary): """ Converts a batch of word ids back to words. :param tensors: [B, T] word ids :param vocab: a Vocabulary object for this dataset :returns: an array of strings (each a word).
""" words = [] batch_size = tensors.size(0) for idx in range(batch_size): word = [vocab.word(t.item()) for t in tensors[idx,:]] # Filter out the start-of-word and padding tokens. word = list(filter(lambda t: t != PAD_TOKEN and t != SOW_TOKEN, word)) # Remove the end-of-word token and all tokens following it. if EOW_TOKEN in word: word = word[:word.index(EOW_TOKEN)] words.append("".join(word)) return np.array(words)
_____no_output_____
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
In PyTorch the RNN functions expect inputs to be sorted from long words to shorter ones. Therefore we create a simple wrapper class for the DataLoader class that sorts words from long to short:
class SortingTextDataLoader: """ A wrapper for the DataLoader class that sorts a list of words by their lengths in descending order. """ def __init__(self, dataloader): self.dataloader = dataloader self.it = iter(dataloader) def __iter__(self): return self def __next__(self): words = None for s in self.it: words = s break if words is None: self.it = iter(self.dataloader) raise StopIteration words = np.array(words) sort_keys = sorted(range(len(words)), key=lambda idx: len(list(words[idx])), reverse=True) sorted_words = words[sort_keys] return sorted_words
_____no_output_____
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
Model Deterministic language modelIn language modelling, we model a word $x = \langle x_1, \ldots, x_n \rangle$ of length $n = |x|$ as a sequence of categorical draws:\begin{align}X_i|x_{<i} & \sim \text{Cat}(f(x_{<i}; \theta)) & i = 1, \ldots, n \\\end{align}where we use $x_{<i}$ to denote a (possibly empty) prefix string, and thus the model makes no Markov assumption. We map from the conditioning context, the prefix $x_{<i}$, to the categorical parameters (a $v$-dimensional probability vector, where $v$ denotes the size of the vocabulary, in this case, the size of the character set) using a fixed neural network architecture whose parameters we collectively denote by $\theta$.This assigns the following likelihood to the word\begin{align} P(x|\theta) &= \prod_{i=1}^n P(x_i|x_{<i}, \theta) \\ &= \prod_{i=1}^n \text{Cat}(x_i|f(x_{<i}; \theta)) \end{align}where the categorical pmf is $\text{Cat}(k|\pi) = \prod_{j=1}^v \pi_j^{[k=j]} = \pi_k$. Suppose we have a dataset $\mathcal D = \{x^{(1)}, \ldots, x^{(N)}\}$ containing $N$ i.i.d. observations. Then we can use the log-likelihood function \begin{align}\mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{N} \log P(x^{(k)}| \theta) \\&= \sum_{k=1}^{N} \sum_{i=1}^{|x^{(k)}|} \log \text{Cat}(x^{(k)}_i|f(x^{(k)}_{<i}; \theta))\end{align} to estimate $\theta$ by maximisation: \begin{align} \theta^\star = \arg\max_{\theta \in \Theta} \mathcal L(\theta|\mathcal D) ~ . 
\end{align} We can use stochastic gradient-ascent to find a local optimum of $\mathcal L(\theta|\mathcal D)$, which only requires a gradient estimate:\begin{align}\nabla_\theta \mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{|\mathcal D|} \nabla_\theta \log P(x^{(k)}|\theta) \\ &= \sum_{k=1}^{|\mathcal D|} \frac{1}{N} N \nabla_\theta \log P(x^{(k)}| \theta) \\&= \mathbb E_{\mathcal U(1/N)} \left[ N \nabla_\theta \log P(x^{(K)}| \theta) \right] \\&\overset{\text{MC}}{\approx} \frac{N}{M} \sum_{m=1}^M \nabla_\theta \log P(x^{(k_m)}|\theta) \\&\text{where }K_m \sim \mathcal U(1/N)\end{align}This is a Monte Carlo (MC) estimate of the gradient computed on $M$ data points selected uniformly at random from $\mathcal D$. For as long as $f$ remains differentiable wrt its inputs and parameters, we can rely on automatic differentiation to obtain gradient estimates. An example design for $f$ is:\begin{align}\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\\mathbf h_0 &= \mathbf 0 \\\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\f(x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))\end{align}where * $\text{emb}$ is a fixed embedding layer with parameters $\theta_{\text{emb}}$;* $\text{rnn}$ is a recurrent architecture with parameters $\theta_{\text{rnn}}$, e.g. an LSTM or GRU, and $\mathbf h_0$ is part of the architecture's parameters;* $\text{dense}_v$ is a dense layer with $v$ outputs (vocabulary size) and parameters $\theta_{\text{out}}$. In what follows we show how to extend this model with a discrete latent word representation. Deep generative language model. We want to model a word $x$ as a draw from the marginal of a deep generative model $P(z, x|\theta, \alpha) = P(z|\alpha)P(x|z, \theta)$.
Generative model. The generative story is:\begin{align} Z_k & \sim \text{Bernoulli}(\alpha_k) & k=1,\ldots, K \\ X_i | z, x_{<i} &\sim \text{Cat}(f(z, x_{<i}; \theta)) & i=1, \ldots, n\end{align}where $z \in \{0,1\}^K$ and we impose a product of independent Bernoulli distributions prior. Other choices of prior can induce interesting properties in latent space, for example, the Bernoullis could be correlated; in this notebook, however, we use independent distributions. **About the prior parameter** The parameter of the $k$th Bernoulli distribution is the probability that the $k$th bit in $z$ is set to $1$, and therefore, if we have reasons to believe some bits are more frequent than others (for example, because we expect some bits to capture verb attributes and others to capture noun attributes, and we know nouns are more frequent than verbs) we may be able to have a good guess at $\alpha_k$ for different $k$, otherwise, we may simply say that bits are about as likely to be on or off a priori, thus setting $\alpha_k = 0.5$ for every $k$. In this lab, we will treat the prior parameter ($\alpha$) as *fixed*. **Architecture** It is easy to design $f$ by a simple modification of the deterministic design shown before:\begin{align}\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\\mathbf h_0 &= \tanh(\text{dense}(z; \theta_{\text{init}})) \\\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\f(z, x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))\end{align}where we just initialise the recurrent cell using $z$. Note we could also use $z$ in other places, for example, as additional input to every update of the recurrent cell $\mathbf h_i = \text{rnn}(\mathbf h_{i-1}, [\mathbf x_{i-1}, z])$. This is an architecture choice which, like many others, can only be judged empirically or on the basis of practical convenience.
Parameter estimation. The marginal likelihood, necessary for parameter estimation, is now no longer tractable:\begin{align}P(x|\theta, \alpha) &= \sum_{z \in \{0,1\}^K} P(z|\alpha)P(x|z, \theta) \\&= \sum_{z \in \{0,1\}^K} \prod_{k=1}^K \text{Bernoulli}(z_k|\alpha_k)\prod_{i=1}^n \text{Cat}(x_i|f(z,x_{<i}; \theta) ) \end{align}The intractability is clear, as there is an exponential number of assignments to $z$, namely, $2^K$. We turn to variational inference and derive a lowerbound $\mathcal E(\theta, \lambda|\mathcal D)$ on the log-likelihood function\begin{align} \mathcal E(\theta, \lambda|\mathcal D) &= \sum_{s=1}^{|\mathcal D|} \mathcal E_s(\theta, \lambda|x^{(s)}) \end{align}which for a single datapoint $x$ is\begin{align} \mathcal E(\theta, \lambda|x) &= \mathbb{E}_{Q(z|x, \lambda)}\left[\log P(x|z, \theta)\right] - \text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right)\\\end{align}where we have introduced an independently parameterised auxiliary distribution $Q(z|x, \lambda)$. The distribution $Q$ which maximises this *evidence lowerbound* (ELBO) is also the distribution that minimises \begin{align}\text{KL}(Q(z|x, \lambda)||P(z|x, \theta, \alpha)) = \mathbb E_{Q(z|x, \lambda)}\left[\log \frac{Q(z|x, \lambda)}{P(z|x, \theta, \alpha)}\right]\end{align} where $P(z|x, \theta, \alpha) = \frac{P(x, z|\theta, \alpha)}{P(x|\theta, \alpha)}$ is our intractable true posterior. For that reason, we think of $Q(z|x, \lambda)$ as an *approximate posterior*. The approximate posterior is an independent model of the latent variable given the data; for that reason we also call it an *inference model*. In this notebook, our inference model will be a product of independent Bernoulli distributions, to make sure that we cover the sample space of our latent variable. We leave modelling correlations (thus achieving *structured* inference, rather than mean field inference) as an optional exercise at the end of the notebook.
Such mean field (MF) approximation takes $K$ Bernoulli variational factors whose parameters we predict with a neural network: \begin{align} Q(z|x, \lambda) &= \prod_{k=1}^K \text{Bernoulli}(z_k|\beta_k(x; \lambda))\end{align} Note we compute a *fixed* number, namely, $K$, of Bernoulli parameters. This can be done with a neural network that outputs $K$ values and employs a sigmoid activation for the outputs. For this choice, the KL term in the ELBO is tractable:\begin{align}\text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right) &= \sum_{k=1}^K \text{KL}\left(Q(z_k|x, \lambda)||P(z_k|\alpha_k)\right) \\&= \sum_{k=1}^K \text{KL}\left(\text{Bernoulli}(\beta_k(x;\lambda))|| \text{Bernoulli}(\alpha_k)\right) \\&= \sum_{k=1}^K \beta_k(x;\lambda) \log \frac{\beta_k(x;\lambda)}{\alpha_k} + (1-\beta_k(x;\lambda)) \log \frac{1-\beta_k(x;\lambda)}{1-\alpha_k}\end{align} Here's an example design for our inference model:\begin{align}\mathbf x_i &= \text{emb}(x_i; \lambda_{\text{emb}}) \\\mathbf f_i &= \text{rnn}(\mathbf f_{i-1}, \mathbf x_{i}; \lambda_{\text{fwd}}) \\\mathbf b_i &= \text{rnn}(\mathbf b_{i+1}, \mathbf x_{i}; \lambda_{\text{bwd}}) \\\mathbf h &= \text{dense}([\mathbf f_{n}, \mathbf b_1]; \lambda_{\text{hid}}) \\\beta(x; \lambda) &= \text{sigmoid}(\text{dense}_K(\mathbf h; \lambda_{\text{out}}))\end{align}where we use the $\text{sigmoid}$ activation to make sure our probabilities are independently set between $0$ and $1$. Because we have neural networks compute the Bernoulli variational factors for us, we call this *amortised* mean field inference. Gradient estimationWe have to obtain gradients of the ELBO with respect to $\theta$ (generative model) and $\lambda$ (inference model). 
Recall we will leave $\alpha$ fixed.For the **generative model**\begin{align}\nabla_\theta \mathcal E(\theta, \lambda|x) &=\nabla_\theta\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \underbrace{\nabla_\theta \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{\color{blue}{0}} \\&=\sum_{z} Q(z|x, \lambda)\nabla_\theta\log P(x|z,\theta) \\&= \mathbb E_{Q(z|x, \lambda)}\left[\nabla_\theta\log P(x|z,\theta) \right] \\&\overset{\text{MC}}{\approx} \frac{1}{S} \sum_{s=1}^S \nabla_\theta \log P(x|z^{(s)}, \theta) \end{align}where $z^{(s)} \sim Q(z|x,\lambda)$.Note there is no difficulty in obtaining gradient estimates precisely because the samples come from the inference model and therefore do not interfere with backpropagation for updates to $\theta$.For the **inference model** the story is less straightforward, and we have to use the *score function estimator* (a.k.a. REINFORCE):\begin{align}\nabla_\lambda \mathcal E(\theta, \lambda|x) &=\nabla_\lambda\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \nabla_\lambda \underbrace{\sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{ \color{blue}{\text{tractable} }} \\&=\sum_{z} \nabla_\lambda Q(z|x, \lambda)\log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\&=\sum_{z} \underbrace{Q(z|x, \lambda) \nabla_\lambda \log Q(z|x, \lambda)}_{\nabla_\lambda Q(z|x, \lambda)} \log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\&= \mathbb E_{Q(z|x, \lambda)}\left[ \log P(x|z,\theta) \nabla_\lambda \log Q(z|x, \lambda) \right] - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\&\overset{\text{MC}}{\approx} \left(\frac{1}{S} \sum_{s=1}^S \log P(x|z^{(s)}, \theta) \nabla_\lambda \log Q(z^{(s)}|x, \lambda) \right) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \end{align}where $z^{(s)} \sim Q(z|x,\lambda)$. 
## Implementation

Let's implement the model and the loss (negative ELBO). We work with the notion of a *surrogate loss*, that is, a computation node whose gradients wrt the parameters are equivalent to the gradients we need.

For a given sample $z \sim Q(z|x, \lambda)$, the following is a single-sample surrogate loss:
\begin{align}
\mathcal S(\theta, \lambda|x) = \log P(x|z, \theta) + \color{red}{\text{detach}(\log P(x|z, \theta) )}\log Q(z|x, \lambda) - \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
Check the documentation of pytorch's `detach` method.

Show that its gradients wrt $\theta$ and $\lambda$ are exactly what we need:
\begin{align}
\nabla_\theta \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}
\begin{align}
\nabla_\lambda \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}

Let's now turn to the actual implementation in pytorch of the inference model as well as the generative model. Here and there we will provide helper code for you.
def bernoulli_log_probs_from_logits(logits):
    """
    Let p be the Bernoulli parameter and q = 1 - p.
    This function is a stable computation of log p and log q from logits = log(p/q).
    :param logits: log (p/q)
    :return: log_p, log_q
    """
    return - F.softplus(-logits), - F.softplus(logits)
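The helper above exploits the identities $\log \sigma(\ell) = -\text{softplus}(-\ell)$ and $\log(1 - \sigma(\ell)) = -\text{softplus}(\ell)$, which avoid computing the sigmoid and then taking its log. A quick throwaway check of those identities in plain Python (our own code, not part of the model):

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)) = max(x, 0) + log1p(exp(-|x|))
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for logit in [-5.0, -0.5, 0.0, 2.0, 8.0]:
    log_p = -softplus(-logit)   # log sigmoid(logit)
    log_q = -softplus(logit)    # log (1 - sigmoid(logit))
    assert abs(log_p - math.log(sigmoid(logit))) < 1e-9
    assert abs(log_q - math.log(1.0 - sigmoid(logit))) < 1e-9
```

For large positive or negative logits the naive route would underflow to `log(0.0)`, while the softplus form stays finite.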
MIT
LatentFactorModel/LatentFactorModel.ipynb
vitutorial/exercises
We start with the implementation of a product of Bernoulli distributions where the parameters are *given* at construction time. That is, for some vector $b_1, \ldots, b_K$ we have\begin{equation} Z_k \sim \text{Bernoulli}(b_k)\end{equation}and thus the joint probability of $z_1, \ldots, z_K$ is given by $\prod_{k=1}^K \text{Bernoulli}(z_k|b_k)$.
class ProductOfBernoullis:
    """
    This class models a product of independent Bernoulli distributions.
    Each product of Bernoullis is defined by a D-dimensional vector of logits,
    one for each independent Bernoulli variable.
    """

    def __init__(self, logits):
        """
        :param logits: a tensor of D Bernoulli parameters (logits) for each batch element. [B, D]
        """
        pass

    def mean(self):
        """For Bernoulli variables this is the probability of each Bernoulli being 1."""
        return None

    def std(self):
        """For Bernoulli variables the variance is p*(1-p), where p is the probability
        of the Bernoulli being 1; this returns its square root."""
        return (self.probs * (1.0 - self.probs)).sqrt()

    def sample(self):
        """
        Returns a sample with the shape of the Bernoulli parameter. # [B, D]
        """
        return None

    def log_prob(self, x):
        """
        Assess the log probability mass of x.
        :param x: a tensor of Bernoulli samples (same shape as the Bernoulli parameter) [B, D]
        :returns: tensor of log probability masses [B]
        """
        return None

    def unstable_kl(self, other: 'ProductOfBernoullis'):
        """
        The straightforward implementation of the KL between two Bernoullis.
        This implementation is unstable, a stable implementation is provided in
        ProductOfBernoullis.kl(self, q)
        :returns: a tensor of KL values with the same shape as the parameters of self.
        """
        return None

    def kl(self, other: 'ProductOfBernoullis'):
        """
        A stable implementation of the KL divergence between two Bernoulli variables.
        :returns: a tensor of KL values with the same shape as the parameters of self.
        """
        return None
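The `unstable_kl`/`kl` split exists because the direct KL formula breaks down once probabilities saturate at 0 or 1. One standard fix is to work in logit space throughout. Below is a rough plain-Python sketch of the idea (helper names are our own; the torch versions inside the class are still yours to write):

```python
import math

def softplus(x):
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def naive_kl(p, a):
    """Direct formula: fails when p or a saturates at 0 or 1."""
    return p * math.log(p / a) + (1 - p) * math.log((1 - p) / (1 - a))

def stable_kl(logit_p, logit_a):
    """KL(Bernoulli(sigmoid(logit_p)) || Bernoulli(sigmoid(logit_a))) in logit space."""
    return (sigmoid(logit_p) * (logit_p - logit_a)
            + softplus(logit_a) - softplus(logit_p))

# Moderate logits: the two implementations agree.
assert abs(naive_kl(sigmoid(1.0), sigmoid(-0.5)) - stable_kl(1.0, -0.5)) < 1e-9

# Saturated logits: the naive version hits log(0.0), the stable one is fine.
try:
    naive_kl(sigmoid(40.0), 0.5)      # sigmoid(40.0) rounds to exactly 1.0
    naive_survived = True
except ValueError:
    naive_survived = False
assert not naive_survived
assert abs(stable_kl(40.0, 0.0) - math.log(2.0)) < 1e-6
```

The logit-space form follows by substituting $\log p = -\text{softplus}(-\ell_p)$ and $\log(1-p) = -\text{softplus}(\ell_p)$ into the KL and simplifying.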
Then we should implement the inference model $Q(z | x, \lambda)$, that is, a module that uses a neural network to map from a data point $x$ to the parameters of a product of Bernoullis.

You might want to consult the documentation of
* `torch.nn.Embedding`
* `torch.nn.LSTM`
* `torch.nn.Linear`
* and of our own `ProductOfBernoullis` distribution (see above).
class InferenceModel(nn.Module): def __init__(self, vocab_size, embedder, hidden_size, latent_size, pad_idx, bidirectional=False): """ Implement the layers in the inference model. :param vocab_size: size of the vocabulary of the language :param embedder: embedding layer :param hidden_size: size of recurrent cell :param latent_size: size K of the latent variable :param pad_idx: id of the -PAD- token :param bidirectional: whether we condition on x via a bidirectional or unidirectional encoder """ super().__init__() # pytorch modules should always start with this pass # Construct your NN blocks here # and make sure every block is an attribute of self # or they won't get initialised properly # for example, self.my_linear_layer = torch.nn.Linear(...) def forward(self, x, seq_mask, seq_len) -> ProductOfBernoullis: """ Return an inference product of Bernoullis per instance in the mini-batch :param x: words [B, T] as token ids :param seq_mask: indicates valid positions vs padding positions [B, T] :param seq_len: the length of the sequences [B] :return: a collection of B ProductOfBernoullis approximate posterior, each a distribution over K-dimensional bit vectors """ pass # tests for inference model pad_idx = vocab.char_to_idx[PAD_TOKEN] dummy_inference_model = InferenceModel( vocab_size=vocab.size(), embedder=nn.Embedding(vocab.size(), 64, padding_idx=pad_idx), hidden_size=128, latent_size=16, pad_idx=pad_idx, bidirectional=True ).to(device=device) dummy_batch_size = 32 dummy_dataloader = SortingTextDataLoader(DataLoader(train_dataset, batch_size=dummy_batch_size)) dummy_words = next(dummy_dataloader) x_in, _, seq_mask, seq_len = create_batch(dummy_words, vocab, device) q_z_given_x = dummy_inference_model.forward(x_in, seq_mask, seq_len)
Then we should implement the generative latent factor model. The decoder is a sequence of correlated Categorical draws that condition on a latent factor assignment. We will be parameterising categorical distributions, so you might want to check the documentation of `torch.distributions.categorical.Categorical`.
from torch.distributions import Categorical class LatentFactorModel(nn.Module): def __init__(self, vocab_size, emb_size, hidden_size, latent_size, pad_idx, dropout=0.): """ :param vocab_size: size of the vocabulary of the language :param emb_size: dimensionality of embeddings :param hidden_size: dimensionality of recurrent cell :param latent_size: this is D the dimensionality of the latent variable z :param pad_idx: the id reserved to the -PAD- token :param dropout: a dropout rate (you can ignore this for now) """ super().__init__() # Construct your NN blocks here, # remember to assign them to attributes of self pass def init_hidden(self, z): """ Returns the hidden state of the LSTM initialized with a projection of a given z. :param z: [B, K] :returns: [num_layers, B, H] hidden state, [num_layers, B, H] cell state """ pass def step(self, prev_x, z, hidden): """ Performs a single LSTM step for a given previous word and hidden state. Returns the unnormalized log probabilities (logits) over the vocabulary for this time step. :param prev_x: [B, 1] id of the previous token :param z: [B, K] latent variable :param hidden: hidden ([num_layers, B, H] state, [num_layers, B, H] cell) :returns: [B, V] logits, ([num_layers, B, H] updated state, [num_layers, B, H] updated cell) """ pass def forward(self, x, z) -> Categorical: """ Performs an entire forward pass given a sequence of words x and a z. This returns a collection of [B, T] categorical distributions, each with support over V events. 
        :param x: [B, T] token ids
        :param z: [B, K] a latent sample
        :returns: Categorical object with shape [B,T,V]
        """
        hidden = self.init_hidden(z)
        outputs = []
        for t in range(x.size(1)):

            # [B, 1]
            prev_x = x[:, t].unsqueeze(-1)

            # logits: [B, V]
            logits, hidden = self.step(prev_x, z, hidden)
            outputs.append(logits)
        outputs = torch.cat(outputs, dim=1)
        return Categorical(logits=outputs)

    def loss(self, output_distributions, observations, pz, qz, z, free_nats=0., evaluation=False):
        """
        Computes the terms in the loss (negative ELBO) given the output Categorical distributions,
        observations, the prior distribution p(z), the approximate posterior distribution q(z|x),
        and the posterior sample z used for the reconstruction term (the score-function surrogate
        needs its log-probability under q).
        If free_nats is nonzero it will clamp the KL divergence between the posterior and prior
        to that value, preventing gradient propagation via the KL if it's below that value.
        If evaluation is set to true, the loss will be summed instead of averaged over the batch.
        Returns the (surrogate) loss, the ELBO, and the KL.
        :returns: surrogate loss (scalar), ELBO (scalar), KL (scalar)
        """
        pass
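When implementing `loss`, the reconstruction term $\log P(x|z,\theta)$ must sum per-token log-probabilities over real tokens only, not over padding. A tiny plain-Python illustration of that masking (all numbers invented):

```python
# log_probs: [B, T] per-token log-probabilities; mask: 1 for real tokens, 0 for padding.
log_probs = [[-0.1, -0.2, -0.3], [-0.4, -0.5, -9.9]]
mask      = [[1, 1, 1], [1, 1, 0]]   # second sequence has length 2

# Multiply by the mask before summing so padded positions contribute nothing.
seq_log_prob = [sum(lp * m for lp, m in zip(row_lp, row_m))
                for row_lp, row_m in zip(log_probs, mask)]
assert abs(seq_log_prob[0] - (-0.6)) < 1e-9
assert abs(seq_log_prob[1] - (-0.9)) < 1e-9   # the padded -9.9 is ignored
```

In torch the same effect is typically achieved with `torch.where(seq_mask, ...)` or an elementwise multiply, as the evaluation code further below does.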
The code below is used to assess the model and also to investigate what it learned. We implemented it for you, so that you can focus on the VAE part. It's still useful to learn from this example: we do interesting things like computing perplexity and sampling novel words!

## Evaluation metrics

During training we'd like to compute some evaluation metrics on the validation data, both to monitor how our model is doing and to perform early stopping. One simple metric is the ELBO on all the validation or test data, estimated with a single sample from the approximate posterior $Q(z|x, \lambda)$:
def eval_elbo(model, inference_model, eval_dataset, vocab, device, batch_size=128):
    """
    Computes a single sample estimate of the ELBO on a given dataset.
    This returns both the average ELBO and the average KL (for inspection).
    """
    dl = DataLoader(eval_dataset, batch_size=batch_size)
    sorted_dl = SortingTextDataLoader(dl)

    # Make sure the model is in evaluation mode (i.e. disable dropout).
    model.eval()

    total_ELBO = 0.
    total_KL = 0.
    num_words = 0

    # We don't need to compute gradients for this.
    with torch.no_grad():
        for words in sorted_dl:
            x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device)

            # Infer the approximate posterior and construct the prior.
            qz = inference_model(x_in, seq_mask, seq_len)
            pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5)

            # Compute the unnormalized probabilities using a single sample from the
            # approximate posterior.
            z = qz.sample()
            # Compute distributions X_i|z, x_{<i}
            px_z = model(x_in, z)

            # Compute the loss terms (surrogate loss, ELBO and KL divergence).
            loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z,
                                        free_nats=0.,
                                        evaluation=True)

            total_ELBO += ELBO
            total_KL += KL
            num_words += x_in.size(0)

    # Return the average ELBO and KL.
    avg_ELBO = total_ELBO / num_words
    avg_KL = total_KL / num_words
    return avg_ELBO, avg_KL

dummy_lm = LatentFactorModel(
    vocab.size(),
    emb_size=64,
    hidden_size=128,
    latent_size=16,
    pad_idx=pad_idx).to(device=device)

!head -n 128 {val_file} > ./dummy_dataset
dummy_data = TextDataset('./dummy_dataset')

dummy_ELBO, dummy_kl = eval_elbo(dummy_lm, dummy_inference_model, dummy_data, vocab, device)
print(dummy_ELBO, dummy_kl)
assert dummy_kl.item() > 0
tensor(-37.6747, device='cuda:0') tensor(0.5302, device='cuda:0')
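As a sanity check on the quantity being estimated: the ELBO can never exceed the log-marginal likelihood. On a toy enumerable model (all numbers invented, plain Python rather than the notebook's torch code) this is easy to verify:

```python
import math

# Toy model with K = 2 bits: uniform prior, made-up likelihood table, and a
# made-up mean-field approximate posterior q with q(z_k = 1) = q_beta[k].
latent = [(0, 0), (0, 1), (1, 0), (1, 1)]
p_x_given_z = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.05, (1, 1): 0.20}
prior = 0.25
q_beta = [0.6, 0.8]

def q_prob(z):
    return math.prod(b if zk else 1.0 - b for zk, b in zip(z, q_beta))

log_marginal = math.log(sum(p_x_given_z[z] * prior for z in latent))
elbo = sum(q_prob(z) * (math.log(p_x_given_z[z] * prior) - math.log(q_prob(z)))
           for z in latent)

# Jensen's inequality: the ELBO never exceeds the log-marginal.
assert elbo <= log_marginal
```

The gap between the two is exactly $\text{KL}(Q(z|x)\,||\,P(z|x))$, which vanishes only when the approximate posterior matches the true one.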
A common metric to evaluate language models is the perplexity per word. The perplexity per word for a dataset is defined as:
\begin{align} \text{ppl}(\mathcal{D}|\theta, \lambda) = \exp\left(-\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal{D}|} \log P(x^{(k)}|\theta, \lambda)\right) \end{align}
where $n^{(k)} = |x^{(k)}|$ is the number of tokens in a word and $P(x^{(k)}|\theta, \lambda)$ is the probability that our model assigns to the datapoint $x^{(k)}$.

In order to compute $\log P(x|\theta, \lambda)$ for our model we need to evaluate the marginal:
\begin{align} P(x|\theta, \lambda) = \sum_{z \in \{0, 1\}^K} P(x|z,\theta) P(z|\alpha)\end{align}
As this summation cannot be computed in a reasonable amount of time (it is exponential in $K$), we have two options: we can use the lower bound on the log-likelihood derived earlier, which gives an upper bound on the perplexity, or we can make an importance sampling estimate using our approximate posterior distribution. The importance sampling (IS) estimate is:
\begin{align}\hat P(x|\theta, \lambda) &\overset{\text{IS}}{\approx} \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x|z^{(s)}, \theta)}{Q(z^{(s)}|x)} & \text{where }z^{(s)} \sim Q(z|x)\end{align}
where $S$ is the number of samples.

Then the average log-likelihood inside the perplexity becomes:
\begin{align} &\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log P(x^{(k)}|\theta, \lambda) \\ &\approx \frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x^{(k)}|z^{(s)}, \theta)}{Q(z^{(s)}|x^{(k)})} \end{align}
We define the function `eval_perplexity` below that implements this importance sampling estimate:
def eval_perplexity(model, inference_model, eval_dataset, vocab, device, n_samples, batch_size=128): """ Estimates the per-word perplexity using importance sampling with the given number of samples. """ dl = DataLoader(eval_dataset, batch_size=batch_size) sorted_dl = SortingTextDataLoader(dl) # Make sure the model is in evaluation mode (i.e. disable dropout). model.eval() log_px = 0. num_predictions = 0 num_words = 0 # We don't need to compute gradients for this. with torch.no_grad(): for words in sorted_dl: x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device) # Infer the approximate posterior and construct the prior. qz = inference_model(x_in, seq_mask, seq_len) pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5) # TODO different prior # Create an array to hold all samples for this batch. batch_size = x_in.size(0) log_px_samples = torch.zeros(n_samples, batch_size) # Sample log P(x) n_samples times. for s in range(n_samples): # Sample a z^s from the posterior. z = qz.sample() # Compute log P(x^k|z^s) px_z = model(x_in, z) # [B, T] cond_log_prob = px_z.log_prob(x_out) cond_log_prob = torch.where(seq_mask, cond_log_prob, torch.zeros_like(cond_log_prob)) # [B] cond_log_prob = cond_log_prob.sum(-1) # Compute log p(z^s) and log q(z^s|x^k) prior_log_prob = pz.log_prob(z) # B posterior_log_prob = qz.log_prob(z) # B # Store the sample for log P(x^k) importance weighted with p(z^s)/q(z^s|x^k). log_px_sample = cond_log_prob + prior_log_prob - posterior_log_prob log_px_samples[s] = log_px_sample # Average over the number of samples and count the number of predictions made this batch. log_px_batch = torch.logsumexp(log_px_samples, dim=0) - \ torch.log(torch.Tensor([n_samples])) log_px += log_px_batch.sum() num_predictions += seq_len.sum() num_words += seq_len.size(0) # Compute and return the perplexity per word. perplexity = torch.exp(-log_px / num_predictions) NLL = -log_px / num_words return perplexity, NLL
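On a toy model where the marginal is enumerable, the importance-sampling estimate can be checked against the exact value. The sketch below (plain Python; all names and numbers are our own) works directly in probability space because the toy numbers are benign; the notebook code above works in log space with `logsumexp` for numerical stability:

```python
import math
import random

rng = random.Random(0)

# Toy setup: K = 2 bits, uniform prior, made-up likelihood table and proposal q.
latent = [(0, 0), (0, 1), (1, 0), (1, 1)]
p_x_given_z = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.05, (1, 1): 0.20}
prior = 0.25
q_beta = [0.6, 0.8]

def q_prob(z):
    return math.prod(b if zk else 1.0 - b for zk, b in zip(z, q_beta))

def sample_q():
    return tuple(1 if rng.random() < b else 0 for b in q_beta)

# Exact marginal by enumeration vs the importance-sampling estimate.
exact = sum(p_x_given_z[z] * prior for z in latent)

S = 100_000
estimate = sum(prior * p_x_given_z[z] / q_prob(z)
               for z in (sample_q() for _ in range(S))) / S
assert abs(estimate - exact) < 0.01
```

The estimator is unbiased for any proposal with full support, but its variance grows the further $q$ is from the true posterior, which is why the learned approximate posterior is a sensible proposal.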
Lastly, we occasionally want a qualitative view of the model's performance during training, obtained by letting it reconstruct a given word from the latent space. This gives us an idea of whether the model is using the latent space to encode some semantics about the data. For this we use a deterministic greedy decoding algorithm that picks the most probable token at every time step and feeds it into the next time step.
def greedy_decode(model, z, vocab, max_len=50):
    """
    Greedily decodes a word from a given z, picking the token with maximum
    probability at each time step.
    """

    # Disable dropout.
    model.eval()

    # Don't compute gradients.
    with torch.no_grad():
        batch_size = z.size(0)

        # We feed the model the start-of-word symbol at the first time step.
        prev_x = torch.ones(batch_size, 1, dtype=torch.long).fill_(vocab[SOW_TOKEN]).to(z.device)

        # Initialize the hidden state from z.
        hidden = model.init_hidden(z)

        predictions = []
        for t in range(max_len):
            logits, hidden = model.step(prev_x, z, hidden)

            # Choose the argmax of the unnormalized probabilities as the
            # prediction for this time step.
            prediction = torch.argmax(logits, dim=-1)
            predictions.append(prediction)
            prev_x = prediction.view(batch_size, 1)

    return torch.cat(predictions, dim=1)
## Training

Now it's time to train the model. We use early stopping on the validation perplexity for model selection.
# Define the model hyperparameters.
emb_size = 256
hidden_size = 256
latent_size = 16
bidirectional_encoder = True
free_nats = 0 # 5.
annealing_steps = 0 # 11400
dropout = 0.6
word_dropout = 0 # 0.75
batch_size = 64
learning_rate = 0.001
num_epochs = 20
n_importance_samples = 3 # 50

# Create the training data loader.
dl = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
sorted_dl = SortingTextDataLoader(dl)

# Create the generative model.
model = LatentFactorModel(vocab_size=vocab.size(),
                          emb_size=emb_size,
                          hidden_size=hidden_size,
                          latent_size=latent_size,
                          pad_idx=vocab[PAD_TOKEN],
                          dropout=dropout)
model = model.to(device)

# Create the inference model.
inference_model = InferenceModel(vocab_size=vocab.size(),
                                 embedder=model.embedder,
                                 hidden_size=hidden_size,
                                 latent_size=latent_size,
                                 pad_idx=vocab[PAD_TOKEN],
                                 bidirectional=bidirectional_encoder)
inference_model = inference_model.to(device)

# Create the optimizer.
optimizer = optim.Adam(itertools.chain(model.parameters(),
                                       inference_model.parameters()),
                       lr=learning_rate)

# Save the best model (early stopping).
best_model = "./best_model.pt"
best_val_ppl = float("inf")
best_epoch = 0

# Keep track of some statistics to plot later.
train_ELBOs = []
train_KLs = []
val_ELBOs = []
val_KLs = []
val_perplexities = []
val_NLLs = []

step = 0
training_ELBO = 0.
training_KL = 0.
num_batches = 0
for epoch_num in range(1, num_epochs+1):
    for words in sorted_dl:

        # Make sure the model is in training mode (for dropout).
        model.train()

        # Transform the words to input, output, seq_len, seq_mask batches.
        x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device,
                                                      word_dropout=word_dropout)

        # Compute the multiplier for the KL term if we do annealing.
        if annealing_steps > 0:
            KL_weight = min(1., (1.0 / annealing_steps) * step)
        else:
            KL_weight = 1.

        # Do a forward pass through the model and compute the training loss. We use
        # a single sample from the approximate posterior during training; Bernoulli
        # samples cannot be reparameterized, so the inference model is trained via
        # the score-function surrogate inside the loss.
qz = inference_model(x_in, seq_mask, seq_len) pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5) z = qz.sample() px_z = model(x_in, z) loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z, free_nats=free_nats) # Backpropagate and update the model weights. loss.backward() optimizer.step() optimizer.zero_grad() # Update some statistics to track for the training loss. training_ELBO += ELBO training_KL += KL num_batches += 1 # Every 100 steps we evaluate the model and report progress. if step % 100 == 0: val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device) print("(%d) step %d: training ELBO (KL) = %.2f (%.2f) --" " KL weight = %.2f --" " validation ELBO (KL) = %.2f (%.2f)" % (epoch_num, step, training_ELBO/num_batches, training_KL/num_batches, KL_weight, val_ELBO, val_KL)) # Update some statistics for plotting later. train_ELBOs.append((step, (training_ELBO/num_batches).item())) train_KLs.append((step, (training_KL/num_batches).item())) val_ELBOs.append((step, val_ELBO.item())) val_KLs.append((step, val_KL.item())) # Reset the training statistics. training_ELBO = 0. training_KL = 0. num_batches = 0 step += 1 # After an epoch we'll compute validation perplexity and save the model # for early stopping if it's better than previous models. print("Finished epoch %d" % (epoch_num)) val_perplexity, val_NLL = eval_perplexity(model, inference_model, val_dataset, vocab, device, n_importance_samples) val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device) # Keep track of the validation perplexities / NLL. val_perplexities.append((epoch_num, val_perplexity.item())) val_NLLs.append((epoch_num, val_NLL.item())) # If validation perplexity is better, store this model for early stopping. if val_perplexity < best_val_ppl: best_val_ppl = val_perplexity best_epoch = epoch_num torch.save(model.state_dict(), best_model) # Print epoch statistics. 
print("Evaluation epoch %d:\n" " - validation perplexity: %.2f\n" " - validation NLL: %.2f\n" " - validation ELBO (KL) = %.2f (%.2f)" % (epoch_num, val_perplexity, val_NLL, val_ELBO, val_KL)) # Also show some qualitative results by reconstructing a word from the # validation data. Use the mean of the approximate posterior and greedy # decoding. random_word = val_dataset[np.random.choice(len(val_dataset))] x_in, _, seq_mask, seq_len = create_batch([random_word], vocab, device) qz = inference_model(x_in, seq_mask, seq_len) z = qz.mean() reconstruction = greedy_decode(model, z, vocab) reconstruction = batch_to_words(reconstruction, vocab)[0] print("-- Original word: \"%s\"" % random_word) print("-- Model reconstruction: \"%s\"" % reconstruction)
(1) step 0: training ELBO (KL) = -39.02 (0.43) -- KL weight = 1.00 -- validation ELBO (KL) = -38.29 (0.43) (1) step 100: training ELBO (KL) = -27.68 (1.20) -- KL weight = 1.00 -- validation ELBO (KL) = -23.76 (1.28) Finished epoch 1 Evaluation epoch 1: - validation perplexity: 7.88 - validation NLL: 21.97 - validation ELBO (KL) = -22.52 (1.25) -- Original word: "interpretarían" -- Model reconstruction: "acontaren" (2) step 200: training ELBO (KL) = -24.03 (1.33) -- KL weight = 1.00 -- validation ELBO (KL) = -22.47 (1.23) (2) step 300: training ELBO (KL) = -23.19 (1.33) -- KL weight = 1.00 -- validation ELBO (KL) = -22.19 (1.47) Finished epoch 2 Evaluation epoch 2: - validation perplexity: 7.41 - validation NLL: 21.32 - validation ELBO (KL) = -21.99 (1.57) -- Original word: "subtítulos" -- Model reconstruction: "acarrarían" (3) step 400: training ELBO (KL) = -23.07 (1.66) -- KL weight = 1.00 -- validation ELBO (KL) = -22.02 (1.65) (3) step 500: training ELBO (KL) = -23.00 (1.85) -- KL weight = 1.00 -- validation ELBO (KL) = -22.06 (1.91) Finished epoch 3 Evaluation epoch 3: - validation perplexity: 7.34 - validation NLL: 21.22 - validation ELBO (KL) = -22.09 (2.12) -- Original word: "antojó" -- Model reconstruction: "acontaran" (4) step 600: training ELBO (KL) = -22.87 (1.95) -- KL weight = 1.00 -- validation ELBO (KL) = -22.17 (2.22) (4) step 700: training ELBO (KL) = -23.29 (2.55) -- KL weight = 1.00 -- validation ELBO (KL) = -22.70 (2.85) Finished epoch 4 Evaluation epoch 4: - validation perplexity: 7.77 - validation NLL: 21.83 - validation ELBO (KL) = -22.74 (3.02) -- Original word: "cosquillearé" -- Model reconstruction: "acontaran" (5) step 800: training ELBO (KL) = -23.54 (2.97) -- KL weight = 1.00 -- validation ELBO (KL) = -22.73 (3.01) (5) step 900: training ELBO (KL) = -23.41 (2.98) -- KL weight = 1.00 -- validation ELBO (KL) = -22.54 (2.93) Finished epoch 5 Evaluation epoch 5: - validation perplexity: 7.69 - validation NLL: 21.71 - validation ELBO (KL) = 
-22.73 (3.19) -- Original word: "chutases" -- Model reconstruction: "acalaran" (6) step 1000: training ELBO (KL) = -23.44 (3.05) -- KL weight = 1.00 -- validation ELBO (KL) = -22.70 (3.17) (6) step 1100: training ELBO (KL) = -23.34 (3.12) -- KL weight = 1.00 -- validation ELBO (KL) = -22.49 (3.04) Finished epoch 6 Evaluation epoch 6: - validation perplexity: 7.44 - validation NLL: 21.37 - validation ELBO (KL) = -22.37 (3.00) -- Original word: "diversificaciones" -- Model reconstruction: "acarraría" (7) step 1200: training ELBO (KL) = -23.31 (3.03) -- KL weight = 1.00 -- validation ELBO (KL) = -22.49 (3.12) (7) step 1300: training ELBO (KL) = -23.24 (3.18) -- KL weight = 1.00 -- validation ELBO (KL) = -22.34 (3.03) Finished epoch 7 Evaluation epoch 7: - validation perplexity: 7.37 - validation NLL: 21.27 - validation ELBO (KL) = -22.33 (3.08) -- Original word: "entrelazado" -- Model reconstruction: "acontaran" (8) step 1400: training ELBO (KL) = -23.08 (3.04) -- KL weight = 1.00 -- validation ELBO (KL) = -22.40 (3.16) (8) step 1500: training ELBO (KL) = -23.23 (3.27) -- KL weight = 1.00 -- validation ELBO (KL) = -22.68 (3.49) Finished epoch 8 Evaluation epoch 8: - validation perplexity: 7.65 - validation NLL: 21.66 - validation ELBO (KL) = -22.78 (3.63) -- Original word: "comulgaríamos" -- Model reconstruction: "abarraría" (9) step 1600: training ELBO (KL) = -23.46 (3.54) -- KL weight = 1.00 -- validation ELBO (KL) = -22.72 (3.58) (9) step 1700: training ELBO (KL) = -23.47 (3.69) -- KL weight = 1.00 -- validation ELBO (KL) = -22.88 (3.83) Finished epoch 9 Evaluation epoch 9: - validation perplexity: 7.98 - validation NLL: 22.11 - validation ELBO (KL) = -23.19 (4.19) -- Original word: "coleccionarás" -- Model reconstruction: "acontaran" (10) step 1800: training ELBO (KL) = -23.82 (4.08) -- KL weight = 1.00 -- validation ELBO (KL) = -23.18 (4.15) (10) step 1900: training ELBO (KL) = -23.79 (4.08) -- KL weight = 1.00 -- validation ELBO (KL) = -23.11 (4.15) Finished 
epoch 10 Evaluation epoch 10: - validation perplexity: 7.87 - validation NLL: 21.96 - validation ELBO (KL) = -23.06 (4.13) -- Original word: "conmemoraran" -- Model reconstruction: "acarraría" (11) step 2000: training ELBO (KL) = -23.79 (4.17) -- KL weight = 1.00 -- validation ELBO (KL) = -23.03 (4.10) (11) step 2100: training ELBO (KL) = -23.59 (3.99) -- KL weight = 1.00 -- validation ELBO (KL) = -22.82 (3.94) Finished epoch 11 Evaluation epoch 11: - validation perplexity: 7.71 - validation NLL: 21.74 - validation ELBO (KL) = -22.98 (4.14) -- Original word: "esculpieren" -- Model reconstruction: "acontaran" (12) step 2200: training ELBO (KL) = -23.73 (4.10) -- KL weight = 1.00 -- validation ELBO (KL) = -22.90 (4.07) (12) step 2300: training ELBO (KL) = -23.64 (4.15) -- KL weight = 1.00 -- validation ELBO (KL) = -22.97 (4.22) Finished epoch 12 Evaluation epoch 12: - validation perplexity: 7.60 - validation NLL: 21.59 - validation ELBO (KL) = -22.87 (4.16) -- Original word: "cansándose" -- Model reconstruction: "acontrastaría" (13) step 2400: training ELBO (KL) = -23.68 (4.25) -- KL weight = 1.00 -- validation ELBO (KL) = -23.02 (4.29) (13) step 2500: training ELBO (KL) = -23.56 (4.16) -- KL weight = 1.00 -- validation ELBO (KL) = -22.87 (4.16) Finished epoch 13 Evaluation epoch 13: - validation perplexity: 7.78 - validation NLL: 21.84 - validation ELBO (KL) = -22.99 (4.33) -- Original word: "desmoldasen" -- Model reconstruction: "acontermaría" (14) step 2600: training ELBO (KL) = -23.61 (4.29) -- KL weight = 1.00 -- validation ELBO (KL) = -23.00 (4.37) (14) step 2700: training ELBO (KL) = -23.76 (4.42) -- KL weight = 1.00 -- validation ELBO (KL) = -23.24 (4.63) Finished epoch 14 Evaluation epoch 14: - validation perplexity: 7.79 - validation NLL: 21.85 - validation ELBO (KL) = -23.09 (4.50) -- Original word: "homenajearemos" -- Model reconstruction: "aconterraría" (15) step 2800: training ELBO (KL) = -23.89 (4.59) -- KL weight = 1.00 -- validation ELBO (KL) = 
-23.20 (4.63) (15) step 2900: training ELBO (KL) = -23.97 (4.77) -- KL weight = 1.00 -- validation ELBO (KL) = -23.48 (4.97) Finished epoch 15 Evaluation epoch 15: - validation perplexity: 7.99 - validation NLL: 22.12 - validation ELBO (KL) = -23.23 (4.75) -- Original word: "pisotearan" -- Model reconstruction: "acontaran" (16) step 3000: training ELBO (KL) = -23.90 (4.76) -- KL weight = 1.00 -- validation ELBO (KL) = -23.19 (4.70) (16) step 3100: training ELBO (KL) = -24.00 (4.88) -- KL weight = 1.00 -- validation ELBO (KL) = -23.60 (5.16) Finished epoch 16 Evaluation epoch 16: - validation perplexity: 8.32 - validation NLL: 22.56 - validation ELBO (KL) = -23.78 (5.34) -- Original word: "coexistid" -- Model reconstruction: "acondiciaren" (17) step 3200: training ELBO (KL) = -24.46 (5.36) -- KL weight = 1.00 -- validation ELBO (KL) = -24.00 (5.60) (17) step 3300: training ELBO (KL) = -24.72 (5.64) -- KL weight = 1.00 -- validation ELBO (KL) = -23.92 (5.55) Finished epoch 17 Evaluation epoch 17: - validation perplexity: 8.33 - validation NLL: 22.57 - validation ELBO (KL) = -23.87 (5.53) -- Original word: "ensamblamos" -- Model reconstruction: "aconderaría" (18) step 3400: training ELBO (KL) = -24.50 (5.45) -- KL weight = 1.00 -- validation ELBO (KL) = -23.74 (5.41) (18) step 3500: training ELBO (KL) = -23.51 (4.53) -- KL weight = 1.00 -- validation ELBO (KL) = -23.71 (5.42) Finished epoch 18 Evaluation epoch 18: - validation perplexity: 8.29 - validation NLL: 22.52 - validation ELBO (KL) = -23.95 (5.68) -- Original word: "caro" -- Model reconstruction: "aconternaría" (19) step 3600: training ELBO (KL) = -24.57 (5.59) -- KL weight = 1.00 -- validation ELBO (KL) = -23.96 (5.73) (19) step 3700: training ELBO (KL) = -24.55 (5.68) -- KL weight = 1.00 -- validation ELBO (KL) = -23.59 (5.36) Finished epoch 19 Evaluation epoch 19: - validation perplexity: 8.22 - validation NLL: 22.43 - validation ELBO (KL) = -23.70 (5.50) -- Original word: "captáremos" -- Model 
reconstruction: "aconderaría" (20) step 3800: training ELBO (KL) = -24.24 (5.44) -- KL weight = 1.00 -- validation ELBO (KL) = -23.66 (5.45) (20) step 3900: training ELBO (KL) = -24.25 (5.42) -- KL weight = 1.00 -- validation ELBO (KL) = -23.49 (5.34) Finished epoch 20 Evaluation epoch 20: - validation perplexity: 8.11 - validation NLL: 22.28 - validation ELBO (KL) = -23.60 (5.46) -- Original word: "endeudado" -- Model reconstruction: "acondiciaría"
Let's plot the training and validation statistics:
steps, training_ELBO = list(zip(*train_ELBOs)) _, training_KL = list(zip(*train_KLs)) _, val_ELBO = list(zip(*val_ELBOs)) _, val_KL = list(zip(*val_KLs)) epochs, val_ppl = list(zip(*val_perplexities)) _, val_NLL = list(zip(*val_NLLs)) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5)) # Plot training ELBO and KL ax1.set_title("Training ELBO") ax1.plot(steps, training_ELBO, "-o") ax2.set_title("Training KL") ax2.plot(steps, training_KL, "-o") plt.show() # Plot validation ELBO and KL fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5)) ax1.set_title("Validation ELBO") ax1.plot(steps, val_ELBO, "-o", color="orange") ax2.set_title("Validation KL") ax2.plot(steps, val_KL, "-o", color="orange") plt.show() # Plot validation perplexities. fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5)) ax1.set_title("Validation perplexity") ax1.plot(epochs, val_ppl, "-o", color="orange") ax2.set_title("Validation NLL") ax2.plot(epochs, val_NLL, "-o", color="orange") plt.show() print()
Let's load the best model according to validation perplexity and compute its perplexity on the test data:
# Load the best model from disk. model = LatentFactorModel(vocab_size=vocab.size(), emb_size=emb_size, hidden_size=hidden_size, latent_size=latent_size, pad_idx=vocab[PAD_TOKEN], dropout=dropout) model.load_state_dict(torch.load(best_model)) model = model.to(device) # Compute test perplexity and ELBO. test_perplexity, test_NLL = eval_perplexity(model, inference_model, test_dataset, vocab, device, n_importance_samples) test_ELBO, test_KL = eval_elbo(model, inference_model, test_dataset, vocab, device) print("test ELBO (KL) = %.2f (%.2f) -- test perplexity = %.2f -- test NLL = %.2f" % (test_ELBO, test_KL, test_perplexity, test_NLL))
test ELBO (KL) = -25.34 (5.46) -- test perplexity = 9.56 -- test NLL = 24.05
## Qualitative analysis

Let's have a look at how our trained model interacts with the learned latent space. First let's greedily decode some samples from the prior to assess the diversity of the model:
# Generate 10 samples from the uniform Bernoulli prior.
num_prior_samples = 10
pz = ProductOfBernoullis(torch.ones(num_prior_samples, latent_size) * 0.5)
z = pz.sample()
z = z.to(device)

# Use the greedy decoding algorithm to generate words.
predictions = greedy_decode(model, z, vocab)
predictions = batch_to_words(predictions, vocab)
for num, prediction in enumerate(predictions):
    print("%d: %s" % (num+1, prediction))
Let's now have a look at how good the model is at reconstructing words from the test dataset, using the approximate posterior mean and a couple of samples:
# Pick a random test word.
test_word = test_dataset[np.random.choice(len(test_dataset))]

# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)

# Decode using the mean.
z_mean = qz.mean()
mean_reconstruction = greedy_decode(model, z_mean, vocab)
mean_reconstruction = batch_to_words(mean_reconstruction, vocab)[0]
print("Original: \"%s\"" % test_word)
print("Posterior mean reconstruction: \"%s\"" % mean_reconstruction)

# Decode a couple of samples from the approximate posterior.
for s in range(3):
    z = qz.sample()
    sample_reconstruction = greedy_decode(model, z, vocab)
    sample_reconstruction = batch_to_words(sample_reconstruction, vocab)[0]
    print("Posterior sample reconstruction (%d): \"%s\"" % (s+1, sample_reconstruction))
_____no_output_____
We can also qualitatively assess the smoothness of the learned latent space by interpolating between two words in the test set:
# Pick a random test word.
test_word_1 = test_dataset[np.random.choice(len(test_dataset))]

# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word_1], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_1 = qz.mean()

# Pick a random second test word.
test_word_2 = test_dataset[np.random.choice(len(test_dataset))]

# Infer q(z|x) again.
x_in, _, seq_mask, seq_len = create_batch([test_word_2], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_2 = qz.mean()

# Now interpolate between the two means and generate words along the path.
num_words = 5
print("Word 1: \"%s\"" % test_word_1)
for alpha in np.linspace(start=0., stop=1., num=num_words):
    z = (1-alpha) * qz_1 + alpha * qz_2
    reconstruction = greedy_decode(model, z, vocab)
    reconstruction = batch_to_words(reconstruction, vocab)[0]
    print("(1-%.2f) * qz1.mean + %.2f * qz2.mean: \"%s\"" % (alpha, alpha, reconstruction))
print("Word 2: \"%s\"" % test_word_2)
_____no_output_____
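The interpolation itself is just a convex combination of the two posterior means. Note that since the latent space here is a product of Bernoullis, each mean is a vector of probabilities in [0, 1], so every interpolated point is also a valid probability vector. A minimal standalone sketch of the path construction (with an illustrative function name):

```python
import numpy as np

def interpolate_latents(z1, z2, num_points=5):
    """Return the convex path (1-a)*z1 + a*z2 for a evenly
    spaced in [0, 1], including both endpoints."""
    alphas = np.linspace(0.0, 1.0, num_points)
    return [(1 - a) * z1 + a * z2 for a in alphas]

# Midpoint of a 2-d path from the origin to (1, 1).
path = interpolate_latents(np.zeros(2), np.ones(2), num_points=3)
print(path[1])  # [0.5 0.5]
```

If the learned latent space is smooth, words decoded along this path should morph gradually from Word 1 into Word 2 rather than jumping abruptly.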