Variable Type & Conversion

Every variable has a type (int, float, string, list, etc.), and many values can be converted from one type into another.
```python
# Finding out the type of a variable
type(my_float)

# Printing the types of some other variables
print(type(my_num), type(simple_dict), type(truth), type(mixed_list))

# Converting anything to string
str(my_float)
str(simple_dict)
str(mixed_list)

# Converting string to number
three = "3"
int(three)
float(three)

# Converting a tuple to a list
list(aTuple)

# Converting a list to a tuple
tuple(same_type_list)
```
Source: rafia37/DSA5113-TA-class-repo, path .ipynb_checkpoints/Python Crash Course-checkpoint.ipynb (license: MIT)
Lists

A versatile datatype that can be thought of as a collection of comma-separated values. Each item in a list has an index, and indices start at 0. The items in a list don't need to be of the same type.
```python
# Defining some lists
l1 = [1, 2, 3, 4, 5, 6]
l2 = ["a", "b", "c", "d"]
l3 = list(range(2, 50, 2))  # Creates a list going from 2 up to (but not including) 50 in increments of 2
print(l3)  # Displaying l3

# Length of a list
# The len command gives the size of the list, i.e. the total number of items
len(l1)
len(l2)
```
**Accessing list items** List items can be accessed using their index. The first item has an index of 0, the next one 1, and so on.
```python
# First item of l2 is "a" and third item of l1 is 3
print("First item of l2: {}".format(l2[0]))  # l2[0] accesses the item at index 0 of l2
print("Third item of l1: {}".format(l1[2]))  # l1[2] accesses the item at index 2 of l1
```
First item of l2: a
Third item of l1: 3
**Indexing in reverse** List items can be accessed in reverse order using negative indices. The last item can be accessed with -1, the second from last with -2, and so on.
```python
print("Last item of l3: {}".format(l3[-1]))
print("Third to last item of l1: {}".format(l1[-3]))
```
Last item of l3: 48
Third to last item of l1: 4
**Slicing** Portions of a list can be chosen using some or all of 3 numbers: starting index, stopping index, and increment. The syntax is `list_name[start:stop:increment]`.
```python
# If I want 2, 3, 4 from list l1, I start at index 1 and end at index 3
# The stopping index is not included, so we choose 3+1 = 4 as the stopping index
l1[1:4]

# Here we choose items from index 1 up to index 5, skipping an item every time (increment of 2)
l1[1:6:2]

# If we just indicate a starting index, everything after that is kept
l1[2:]

# If we just indicate a stopping index, everything up to that is kept
l1[:4]

# Using reverse indices
l1[:-2]  # Everything except the last 2 items
```
List operations
```python
# "Adding" two lists results in concatenation
l4 = l1 + l2
l4

# Multiplying a list by a scalar results in repetition
["hello"] * 5
l2 * 3
[2] * 7
```
Some other popular list manipulation functions
```python
# Appending to the end of an existing list
l2.append("e")
l2

# Insert an item at a particular index - list_name.insert(index, value)
l2.insert(2, "f")
l2

# Sorting a list
l2.sort()
l2

# Remove an item by index and return it
l4.pop(3)  # Removes the item at index 3
l4

# Remove an item by matching value
l4.remove("a")
l4

# Maximum or minimum value of a list
max(l3)  # min(l3) for the minimum
```
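One pitfall worth noting with these functions: `list.sort()` sorts in place and returns `None`, while the built-in `sorted()` returns a new sorted list and leaves the original untouched. A minimal sketch:

```python
nums = [3, 1, 2]

# sort() modifies the list in place and returns None
result = nums.sort()
print(result)  # None
print(nums)    # [1, 2, 3]

# sorted() returns a new list and leaves the original unchanged
letters = ["c", "a", "b"]
print(sorted(letters))  # ['a', 'b', 'c']
print(letters)          # ['c', 'a', 'b']
```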
String Manipulation

Strings are values enclosed in single quotes (' ') or double quotes (" "). They are characters or series of characters and can be manipulated in a very similar way to lists, though they also have their own special functions.
```python
# Defining some strings
str1 = "I hear Rafia is a harsh grader"
str2 = "NO NEED TO SHOUT"
str3 = "fine, no caps lock"
```
**Accessing & Slicing**
```python
# Very similar to lists
print(str1[:12])   # Takes the first 12 characters
print(str1[0])     # Accesses the first character
print(str2[-5:])   # Takes the last 5 characters
print(str3[6:13])  # Takes characters 6 through 12
```
I hear Rafia
I
SHOUT
no caps
**Other popular string manipulation functions**
```python
# Splitting a string based on a separator - str_name.split(separator)
print(str1.split(" "))  # Separating based on space
print(str2.split())     # If no argument is given to split, it defaults to whitespace
print(str3.split(","))  # Separating based on comma

# Changing case
print(str2.lower())       # All lower case
print(str3.upper())       # All upper case
print(str3.capitalize())  # Only the first letter upper case
print("Red".swapcase())   # Swaps cases

# Replace characters with a given string
str1.replace("harsh", "good")

# Find a given pattern in a string
str1.find("Rafia")  # Returns the index where the pattern is found

# Concatenating and formatting strings
print(str2 + " -- " + str3)  # Adding strings concatenates them
str4 = "Strings can be anything, like {} is a string".format(12345)
print(str4)

# Like lists, we can multiply to repeat
"Hi" * 4

# Like lists, we can use the len command to find the size of a string
len("apples")
```
**Special Characters**
```python
# \n makes a new line
print("This is making \n a new line")

# \t inserts a tab
print("This just inserts \t a tab")
```
This is making 
 a new line
This just inserts 	 a tab
If Statement

Executing blocks of code based on whether or not a given condition is true. The syntax is:

```python
if (condition):
    # Do something
elif (condition):
    # Do some other thing
else:
    # Do something else
```

Only one block will execute: the one whose condition returns true first. You can use as many elif blocks as needed.
```python
if ("c" in l2):
    print("Yes c is in l2")
    l2.remove("c")
    print("But now it's removed. Here's the new list")
    print(l2)

a = 5  # Defining a variable
if (a > 10):
    print("a is greater than 10")
else:
    print("a is less than 10")

if (a > 5):
    print("a is greater than 5")
elif (a < 5):
    print("a is less than 5")
else:
    print("a is equal to 5")

# Assigning a value to a variable using an if statement
str5 = "This is a great class"
b = "yes" if "great" in str5 else "no"  # If "great" is in str5, b will be "yes", otherwise "no"
c = 1 if a > 10 else 0                  # If a is greater than 10, c will be 1, otherwise 0
print("b = {}, c = {}".format(b, c))
```
b = yes, c = 0
Loops

Loops are an essential tool in Python that let you repeatedly execute a block of code while a condition holds, or while iterating over a given list or array. There are two main types of loops in Python: `for` and `while`. (Python has no built-in `do..while` loop, though one can be emulated with `while True` and a `break`; I won't discuss that here.)

For Loop

For loops are useful when you want to iterate a certain number of times or when you want to iterate over a list or array-type object:

```python
for i in list_name:
    # do something
```
```python
# Looping a certain number of times
for i in range(10):  # Iterating over a range going from 0 to 9
    a = i * 5
    print("Multiply {} by 5 gives {}".format(i, a))

# Looping over a list
for item in l4:
    str_item = str(item)
    print("{} - {}".format(str_item, type(str_item)))
```
1 - <class 'str'>
2 - <class 'str'>
3 - <class 'str'>
5 - <class 'str'>
6 - <class 'str'>
b - <class 'str'>
c - <class 'str'>
d - <class 'str'>
**Loop Control Statements** You can control the execution of a loop using 3 statements:

- `break` : breaks out of the loop and moves on to the next segment of your code
- `continue` : skips any code below it (inside the loop) and moves on to the next iteration
- `pass` : used when a statement is required syntactically but you don't want any code to execute

Demonstrating `break`
```python
# l4 is a list that contains both integers and strings
l4
```
So if you try to add numbers to the string elements, you'll get an error. To avoid it when iterating over this list, you can insert a break statement in your loop so that your code breaks out of the loop when it encounters a string.
```python
for i in l4:
    if type(i) == str:
        print("Encountered a string, breaking out of the loop")
        break
    tmp = i + 10
    print("Added 10 to list item {} to get {}".format(i, tmp))
```
Added 10 to list item 1 to get 11
Added 10 to list item 2 to get 12
Added 10 to list item 3 to get 13
Added 10 to list item 5 to get 15
Added 10 to list item 6 to get 16
Encountered a string, breaking out of the loop
Demonstrating `continue`

With the `break` statement, the loop ends the first time it encounters a string element, so if the next element after a string is an integer, we miss it. That is where the `continue` statement comes in. If you use `continue` instead of `break`, then instead of breaking out of the loop you just skip the current iteration and move on to the next one, i.e. you move on to the next element and check again whether it's a string or not, and so on.
```python
for i in l4:
    if type(i) == str:
        print("Encountered a string, moving on to the next element")
        continue
    tmp = i + 10
    print("Added 10 to list item {} to get {}".format(i, tmp))
```
Added 10 to list item 1 to get 11
Added 10 to list item 2 to get 12
Added 10 to list item 3 to get 13
Added 10 to list item 5 to get 15
Added 10 to list item 6 to get 16
Encountered a string, moving on to the next element
Encountered a string, moving on to the next element
Encountered a string, moving on to the next element
Demonstrating `pass`

`pass` is more of a placeholder. If you start a loop, the syntax requires at least one statement inside it. If you don't want to write anything yet, you can use a `pass` statement to avoid getting an error.
```python
for i in l4:
    pass
```
**Popular functions related to loops** There are a lot of useful functions in Python that work well with loops, e.g. range, unpacking (*), tuple, split, etc. But two very important ones go hand-in-hand with loops - `zip` & `enumerate` - so these are the ones I'm discussing here.

- `zip` : Used when you want to iterate over two lists of equal length (if the lengths are not equal, it only iterates up to the length of the shorter list)
- `enumerate` : Used when you want the index of the list item you're iterating over
```python
print(len(l1), len(l3))

for a, b in zip(l1, l3):
    print("list 1 item is {}, corresponding list 3 item is {}".format(a, b))

for i, (a, b) in enumerate(zip(l1, l3)):
    print("At index {}, list 1 item is {}, corresponding list 3 item is {}".format(i, a, b))
```
At index 0, list 1 item is 1, corresponding list 3 item is 2
At index 1, list 1 item is 2, corresponding list 3 item is 4
At index 2, list 1 item is 3, corresponding list 3 item is 6
At index 3, list 1 item is 4, corresponding list 3 item is 8
At index 4, list 1 item is 5, corresponding list 3 item is 10
At index 5, list 1 item is 6, corresponding list 3 item is 12
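If you do need to iterate past the end of the shorter list, the standard library offers `itertools.zip_longest`, which pads the shorter iterable with a fill value instead of stopping early. A small sketch:

```python
from itertools import zip_longest

short = [1, 2]
long = ["a", "b", "c", "d"]

# zip stops at the shorter list; zip_longest pads with fillvalue
print(list(zip(short, long)))                       # [(1, 'a'), (2, 'b')]
print(list(zip_longest(short, long, fillvalue=0)))  # [(1, 'a'), (2, 'b'), (0, 'c'), (0, 'd')]
```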
While Loop

While loops are useful when you want to repeat a code block **as long as** a certain condition holds. While loops often need a counter variable that updates as the loop goes on.

```python
while (condition):
    # do something
```
```python
counter = 10
while counter > 0:
    print("The counter is still positive and right now, it's {}".format(counter))
    counter -= 1  # Decrementing the counter, reducing it by 1 in every iteration
```
The counter is still positive and right now, it's 10
The counter is still positive and right now, it's 9
The counter is still positive and right now, it's 8
The counter is still positive and right now, it's 7
The counter is still positive and right now, it's 6
The counter is still positive and right now, it's 5
The counter is still positive and right now, it's 4
The counter is still positive and right now, it's 3
The counter is still positive and right now, it's 2
The counter is still positive and right now, it's 1
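Python has no built-in do..while loop, but the common idiom of a `while True` loop with a `break` at the end runs the body at least once before the condition is checked. A minimal sketch:

```python
# Emulating a do..while loop: the body always runs at least once
counter = 0
while True:
    print("Body ran with counter = {}".format(counter))
    counter += 1
    if counter >= 3:  # The "while" condition, checked after the body
        break
print("Loop finished, counter = {}".format(counter))
```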
`pass`, `break` and `continue` statements all work well with `while` loops too. `zip` and `enumerate` don't usually pair with `while`, since it doesn't iterate over list-type objects.

Function

In Python, apart from using the built-in functions, you can define your own customized functions using the following syntax:

```python
def function_name(arg1, arg2):
    value = ...  # do something using arg1 & arg2
    return value

# Calling your function
function_name(value1, value2)
```

This is useful when you find yourself repeating a block of code often.
```python
# Defining the function
def arithmatic_operations(num1, num2):
    """
    A function to perform a series of arithmetic operations on num1 and num2
    Returns the final result as an integer rounded up/down
    """
    add = num1 + num2
    mltply = add * num2
    sbtrct = mltply - num2
    divide = sbtrct / num2
    result = round(divide)
    return result

# Anything put inside a multi-line string (""" """) at the top of a function is called a docstring.
# You can describe your function inside """ """ and then retrieve this information with help(function_name)
help(arithmatic_operations)

# Calling the function
resA = arithmatic_operations(10, 5)
resA
arithmatic_operations(10, 15)
```
**Setting default values**

You can use default arguments in your parameter list to set default values or optional arguments. Default arguments are optional parameters for a function, i.e. you can call the function without them:

```python
def new_func(arg1, arg2, arg3=5):
    result = arg1 + arg2 + arg3
    return result
```

Here, arg3 is the optional argument because you've set it to a default value of 5. If you don't provide arg3 when you call this function, arg3 will assume a value of 5. If you don't provide arg1 or arg2, you'll get an error because they are required/positional arguments.

Now imagine someone calling the `arithmatic_operations` function with string arguments: they'd get an error, because you can't perform arithmetic operations on a string. In that case, we want to be able to convert the input to a number. Let's introduce a keyword argument `convert` to handle such cases.
```python
# Defining the function
def new_arith(num1, num2, convert=False):
    """
    A new function that can handle even string arguments
    """
    if convert != False:
        num1 = float(num1)
        num2 = float(num2)
    add = num1 + num2
    mltply = add * num2
    sbtrct = mltply - num2
    divide = sbtrct / num2
    result = round(divide)
    return result

# Handles numbers as usual
# The function works fine even if we don't specify convert
new_arith(10, 5)  # Since we didn't specify convert, it's assumed to be False

# Strings are not converted and we get an error
new_arith("10", "5")

new_arith("10", "5", convert=True)
```
Scope

The variables in a program are not accessible from every part of the program. Based on accessibility, there are two types of variables: global and local. Global variables can be accessed from any part of the program; examples from this notebook are `str1`, `str2`, `truth`, `l1`, etc., which can be accessed throughout this entire notebook. Local variables can only be accessed in certain parts of the program, e.g. variables defined inside functions. Examples from this notebook are `mltply`, `sbtrct`, `add`, `convert`, `result`, etc.; these are only defined inside their functions and can only be accessed by the respective functions.
```python
# These were defined inside functions, so accessing them here raises a NameError
result
mltply
```
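A minimal sketch of the same idea, using a hypothetical helper function and a try/except so the NameError doesn't stop the program:

```python
greeting = "hello"  # Global: visible everywhere in this module

def make_message():
    local_part = " world"         # Local: only visible inside this function
    return greeting + local_part  # Globals are readable from inside functions

print(make_message())  # hello world

# Accessing the local variable outside the function fails
try:
    print(local_part)
except NameError as err:
    print("NameError:", err)
```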
Miscellaneous

Dictionary

Dictionaries are another iterable data type, made up of comma-separated key-value pairs.
```python
# Defining some dictionaries
dict1 = {}      # One way to define an empty dictionary
dict2 = dict()  # Another way to define an empty dictionary, or to convert another data type into a dictionary
ou_mascots = {"Name": "Boomer", "Species": "Horse", "Partner": "Sooner", "Represents": "Oklahoma Sooners"}
dict3 = {1: "uno", 2: 34, "three": [1, 2, 3], 4: (4, 5), 5: ou_mascots}
ou_mascots
dict3  # Dictionary values can be of any type - string, number, list, even another dictionary
```
Accessing elements
```python
ou_mascots["Name"]
ou_mascots.get("Partner")
```
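The two access styles behave differently for missing keys: square brackets raise a `KeyError`, while `.get` returns `None` (or a default you supply). A small sketch with an illustrative dictionary:

```python
mascot = {"Name": "Boomer", "Species": "Horse"}

# .get never raises for a missing key
print(mascot.get("Color"))           # None
print(mascot.get("Color", "brown"))  # brown

# Square brackets raise a KeyError for a missing key
try:
    mascot["Color"]
except KeyError:
    print("KeyError for missing key")
```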
Updating Dictionary
```python
# Adding a new element
dict1["new_element"] = 5113
dict1

# Deleting
del dict3[1]   # Removes the entry with key 1
dict1.clear()  # Removes all entries
del dict2      # Deletes the entire dictionary
dict3
dict1
dict2  # Raises a NameError, since dict2 no longer exists
```
Useful Dictionary Functions
```python
ou_mascots.keys()          # Returns the keys
ou_mascots.items()         # Returns key-value pairs as tuples
ou_mascots.values()        # Returns the values
ou_mascots.pop("Species")  # Removes the given key and returns its value
len(dict1)
```
Tuples

Tuples are another iterable sequence data type. Almost everything discussed in the list section - operations, functions, etc. - applies to tuples and works the same way.
```python
# Defining some tuples
tup1 = (20,)  # If your tuple has only one element, you still have to use a comma
tup2 = (1, 3, 4, 6, 7)
tup3 = ("a", "b", "c")
tup4 = (5, 6, 7)

# The key difference from lists: you can't change tuple items
tup2[3] = 4  # Raises a TypeError

# You can use tuples to define dictionaries
dict(zip(tup3, tup4))
```
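If you do need a modified version of a tuple, the usual pattern is to convert it to a list, change that, and convert back; this builds a new tuple, since the original can't be changed. A minimal sketch:

```python
tup = (1, 3, 4, 6, 7)

# Tuples can't be modified in place, so build a new one via a list
tmp = list(tup)
tmp[3] = 4
new_tup = tuple(tmp)
print(new_tup)  # (1, 3, 4, 4, 7)
print(tup)      # (1, 3, 4, 6, 7) - the original is unchanged
```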
List Comprehension

List comprehension is a quick way to create a new list from an existing list (or any other iterable, like tuples or dictionaries). The syntax is as follows:

```python
new_list = [(x + 5) for x in existing_list]
```

The above one-line code is the same as writing the following lengthier code block:

```python
new_list = []
for x in existing_list:
    value = x + 5
    new_list.append(value)
```
```python
print(l3)

# We need a new list of numbers that are an even multiple of 5
# We already have a list of even numbers up to 48 - l3
# Time to create the new list
l5 = [2 * i for i in l3]
print(l5)
```
[4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96]
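List comprehensions can also filter with an `if` clause, combining the loop and an `if` statement in one line. A small sketch (the variable names are illustrative):

```python
existing_list = [1, 2, 3, 4, 5, 6]

# Keep only the even items, adding 5 to each
new_list = [(x + 5) for x in existing_list if x % 2 == 0]
print(new_list)  # [7, 9, 11]
```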
Error Handling

Sometimes we have a code block, especially in a loop or a function, that might not work for all kinds of values. In that case, error handling is something to consider in order to avoid the error and continue with the rest of the program. Errors can be handled in many ways depending on your needs, but here I'm showing the `try .. except` method.
```python
# Inserting another string in l4
l4.insert(2, "a")
l4

# Let's try running the arithmatic_operations function on the elements of l4
for item in l4:
    try:
        res = arithmatic_operations(item, 5)
        print("list item {}, result {}".format(item, res))
    except:
        print("Could not perform arithmatic operations for list item {}".format(item))
```
list item 1, result 5
list item 2, result 6
Could not perform arithmatic operations for list item a
list item 3, result 7
list item 5, result 9
list item 6, result 10
Could not perform arithmatic operations for list item b
Could not perform arithmatic operations for list item c
Could not perform arithmatic operations for list item d
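A bare `except:` catches every exception, which can hide unrelated bugs. Catching the specific exception type you expect (here a `TypeError` from adding a number to a string) is usually safer. A small sketch of the same pattern:

```python
items = [1, 2, "a", 3]

for item in items:
    try:
        print(item + 10)
    except TypeError as err:  # Only catch the error we expect
        print("Skipping {!r}: {}".format(item, err))
```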
Lambda Expression

A quick way to define short anonymous functions - one-liner functions. Handy when you keep repeating an expression that's too small to justify a formal function.

```python
# Defining
x = lambda arg: expression

# Calling
x(value)
```

This is equivalent to:

```python
# Defining
def x(arg):
    result = expression
    return result

# Calling
x(value)
```
```python
# Small function with 1 argument
x = lambda a: a + 10
x(5)

# Small function with multiple arguments
x = lambda a, b, c: ((a + 10) * b) / c
x(5, 10, 2)
```
Mapping Function

The `map` function is a quick way to apply a function to many values via an iterable (list, tuple, etc.). The function to apply can be a built-in function, a user-defined function, or even a lambda expression; in fact, mapping and lambda expressions work really well together. The syntax is as follows:

```python
map(function_name, list_name)
```

The above one-line code is equivalent to the lengthier code block below:

```python
for item in list_name:
    function_name(item)
```

**Applying the built-in `type` function to the dictionary values**
```python
dict3
result = map(type, dict3.values())
list(result)
```
**Applying the user-defined `arithmatic_operations` function to two lists**
```python
print(l1, l3)
result = map(arithmatic_operations, l1, l3)  # Maps up to the shorter of the two lists
list(result)
```
**Combining lambda expressions and the mapping function**
```python
numbers1 = [1, 2, 3]
numbers2 = [4, 5, 6]
result = map(lambda x, y: x + y, numbers1, numbers2)
list(result)
```
User Input

Sometimes it is necessary to take user input, and you can do that in Python using the `input` command. The `input` command returns the user input as a string, so always remember to convert the input to the data type you need.

```python
input("Your customized prompt goes here")
```
```python
inp = input("please input two integers separated by a comma")
inp

# Let's apply the arithmatic_operations function to this user input
a, b = inp.split(",")
a
arithmatic_operations(int(a), int(b))  # Need to convert to integers since this one doesn't handle strings
new_arith(a, b, convert=True)
```
Run all corpora
```python
As.testSet()
As.test(basic)
```
Source: sethbam9/tutorials, path zz_test/100-slots.ipynb (license: MIT)
Run specific corpora
```python
As.testSet("uruk")
As.test(basic, refresh=True)
```
Web grabber for lists from Wikipedia
```python
# Pastry list
import requests
from bs4 import BeautifulSoup

# The list needs a designated last item, otherwise further lists below the intended one get scraped as well.

def grab_list(url, last_item):  # For when Wikipedia shows a table
    grabbed_list = []
    r = requests.get(url)
    text = r.text
    soup = BeautifulSoup(text, 'lxml')
    soup.prettify()
    matches = soup.find_all('tr')
    for index, row in enumerate(matches):
        try:
            obj = row.find('td').a.get('title')
            if obj.endswith(' (page does not exist)'):
                obj = obj.replace(' (page does not exist)', '')
            grabbed_list.append(obj)
            if obj == last_item:
                break
        except AttributeError:
            continue
    return grabbed_list

def grab_list2(url, last_item):  # For when Wikipedia shows a bullet-point list
    grabbed_list = []
    r = requests.get(url)
    text = r.text
    soup = BeautifulSoup(text, 'lxml')
    soup.prettify()
    matches = soup.find_all('li')
    for index, row in enumerate(matches):
        try:
            obj = row.a.get('title')
            if obj.endswith(' (page does not exist)'):
                obj = obj.replace(' (page does not exist)', '')
            grabbed_list.append(obj)
            if obj == last_item:
                break
        except AttributeError:
            continue
    return grabbed_list

# Pastries
url_gebaeck = r'https://en.wikipedia.org/wiki/List_of_pastries'
gebaeckliste = grab_list(url_gebaeck, 'Zlebia')
print(gebaeckliste)

# German desserts
url_deutschedesserts = r'https://en.wikipedia.org/wiki/List_of_German_desserts'
germanpastrylist = grab_list(url_deutschedesserts, 'Zwetschgenkuchen')
print(germanpastrylist)

# Dairy products
url_dairy = r'https://en.wikipedia.org/wiki/List_of_dairy_products'
dairyproductlist = grab_list(url_dairy, 'Yogurt')
print(dairyproductlist)

# Cheeses
url_cheese = r'https://en.wikipedia.org/wiki/List_of_cheeses'
cheeselist = grab_list(url_cheese, 'Rice cheese')
print(cheeselist)

url_fruit = r'https://en.wikipedia.org/wiki/List_of_culinary_fruits'
fruits = grab_list(url_fruit, 'Yantok')
print(fruits)

url_vegetables = r'https://en.wikipedia.org/wiki/List_of_vegetables'
vegetables = grab_list(url_vegetables, 'Wakame')
print(vegetables)

url_seafood = r'https://en.wikipedia.org/wiki/List_of_types_of_seafood'
seafood = grab_list2(url_seafood, 'Nautilus')
print(seafood)

url_seafood = r'https://en.wikipedia.org/wiki/List_of_seafood_dishes'
seafood = grab_list2(url_seafood, 'Cuttlefish')
print(seafood)
```
['Baik kut kyee kaik', 'Balchão', 'Bánh canh', 'Bisque (food)', 'Bún mắm', 'Bún riêu', 'Chowder', 'Cioppino', 'Crawfish pie', 'Curanto', 'Fideuà', 'Halabos', 'Hoe (dish)', 'Hoedeopbap', 'Kaeng som', 'Kedgeree', 'Maeuntang', 'Moules-frites', 'Namasu', 'New England clam bake', 'Paella', 'Paelya', 'Paila marina', 'Piaparan', 'Plateau de fruits de mer', 'Seafood basket', 'Seafood birdsnest', 'Seafood boil', 'Seafood cocktail', 'Seafood pizza', 'Stroganina', 'Sundubu jjigae', 'Surf and turf', 'Tinumok', 'Clam cake', 'Clam chowder', 'Clams casino', 'Clams oreganata', 'Fabes con almejas', 'Fried clams', 'Jaecheopguk', 'New England clam bake', 'Steamed clams', 'Stuffed clam', 'Crab puff', 'Fish heads', "'Ota 'ika", 'Ginataang sugpo', 'Bisque (food)', 'Lobster Newberg', 'Lobster roll', 'Lobster stew', 'Scampi', 'Miruhulee boava', 'Nakji-bokkeum', 'Nakji-yeonpo-tang', 'Polbo á feira', 'Pulpo a la campechana', 'Akashiyaki', 'San-nakji', 'Takoyaki', 'Takomeshi', 'Angels on horseback', 'Hangtown fry', 'Oyster omelette', 'Oyster sauce', 'Oyster vermicelli', 'Oysters Bienville', 'Oysters en brochette', 'Oysters Kirkpatrick', 'Oysters Rockefeller', 'Steak and oyster pie', 'Balao-balao', 'Biyaring', 'Bobó de camarão', 'Bún mắm', 'Camaron rebosado', 'Chakkoli', 'Chạo tôm', 'Coconut shrimp', 'Drunken shrimp', 'Ebi chili', 'Fried prawn', 'Ginataang hipon', 'Ginataang kalabasa', 'Halabos na hipon', 'Har gow', 'Nilasing na hipon', 'Okoy', 'Pininyahang hipon', 'Potted shrimps', 'Prawn cracker', 'Prawn cocktail', 'Shrimp ball', 'Shrimp DeJonghe', 'White boiled shrimp', 'Adobong pusit', 'Arròs negre', 'Dried shredded squid', 'Squid as food', 'Gising-gising', 'Ikameshi', 'Orange cuttlefish', 'Paella negra', 'Pancit choca', 'Squid cocktail', 'Cuttlefish']
Source: TechLabs-Dortmund/nutritional-value-determination, path webgrabber_wikilisten.ipynb (license: MIT)
My first notebook
```python
print('my first notebook')
1 + 2
int(1 + 2)
a = 3
print(a)
```
3
Source: peralegh/480, path Labs/Lab1.ipynb (license: MIT)
Read Data from a file
```python
import xlrd

book = xlrd.open_workbook("Diamonds.xls")
sheet = book.sheet_by_name("Diamonds")
for row_index in range(1, 5):  # Read the first 4 data rows, skipping the header row
    id_, weight, color, _, _, price = sheet.row_values(row_index)
    print(id_, weight, color, price)
```
1.0 0.3 D 1302.0
2.0 0.3 E 1510.0
3.0 0.3 G 1510.0
4.0 0.3 G 1260.0
Question 1

Given the following jumbled word, OBANWRI, guess the correct English word.

A. RANIBOW
B. RAINBOW
C. BOWRANI
D. ROBWANI
```python
import random

def shuffling(given):
    given = str(given)
    words = ['RAINBOW', 'RANIBOW', 'BOWRANI', 'ROBWANI']
    shuffled = ''.join(random.sample(given, len(given)))
    if shuffled == 'RAINBOW':
        print("The correct option is: RAINBOW")
        return shuffled
    else:
        # shuffling(given)
        print(shuffled, "is incorrect")
        print("The correct option is: RAINBOW")

shuffling('OBANWRI')
```
BOIAWNR is incorrect
The correct option is: RAINBOW
Source: anjumrohra/LetsUpgrade_DataScience_Essentials, path Day-1/Day1_assignment.ipynb (license: Apache-2.0)
Question 2

Write a program which prints "LETS UPGRADE". (Please note that you have to print in ALL CAPS as given)
```python
string = "Lets upgrade"
print(string.upper())
```
LETS UPGRADE
Question 3

Write a program that takes Cost Price and Selling Price as input and displays whether the transaction is a Profit, a Loss, or neither.

INPUT FORMAT:
1. The first line contains the cost price.
2. The second line contains the selling price.

OUTPUT FORMAT:
1. Print "Profit" if the transaction is a profit or "Loss" if it is a loss.
2. If it is neither profit nor loss, print "Neither". (You must not have quotes in your output)
```python
CP = float(input())
SP = float(input())
if CP < SP:
    print("Profit")
elif CP > SP:
    print("Loss")
else:
    print("Neither")
```
20
20
Neither
Question 4

Write a program that takes an amount in Euros as input. You need to find its equivalent in Rupees and display it. Assume 1 Euro equals Rs. 80. Please note that you are expected to stick to the given input and output format as in the sample test cases. Please don't add any extra lines such as 'Enter a number', etc. Your program should take only one number as input and display the output.
```python
Euro = float(input())
Rupees = Euro * 80
print(Rupees)
```
20
1600.0
Introduction

Now that I have removed the RNA/DNA node and we have fixed many pathways, I will revisit the things that were raised in issue 37: 'Reaction reversibility'. There were reactions that we couldn't reverse or remove without killing the biomass. I will try to see if these problems have been resolved now. If not, I will dig into the underlying cause in a manner similar to what was done in notebook 20.
```python
import cameo
import pandas as pd
import cobra.io
import escher
from escher import Builder
from cobra import Reaction

model = cobra.io.read_sbml_model('../model/p-thermo.xml')
model_e_coli = cameo.load_model('iML1515')
model_b_sub = cameo.load_model('iYO844')
```
Source: biosustain/p-thermo, path notebooks/28. Resolve issue 37-Reaction reversibility.ipynb (license: Apache-2.0)
__ALDD2x__ should be irreversible, but doing so kills the biomass growth completely at this moment. It needs to be changed, as right now we have an erroneous energy-generating cycle going from aad_c --> ac_c (+atp) --> acald --> accoa_c --> aad_c. Apparently I had already fixed this problem in notebook 20, so this is fine now.

__GLYO1__ This reaction has already been removed in notebook 20 to fix the glycine pathway.

__DHORDfum__ Has been renamed to DHORD6 in notebook 20 in the first check of fixing dCMP, and the reversibility has been fixed too.

__OMPDC__ This has by chance also been fixed in notebook 20 in the first pass to fix dCMP biosynthesis.

__NADK__ The reaction is currently reversible, but should be irreversible, producing nadp and adp. Still, when I try to fix the flux in the direction it should go, it kills the biomass production. I will try to figure out why; it likely has to do with cofactor balance.
model.reactions.NADK.bounds = (0, 1000)
model.reactions.ALAD_L.bounds = (-1000, 0)
model.optimize().objective_value

cofactors = ['nad_c', 'nadh_c', '', '', '', '']

with model:
    # model.add_boundary(model.metabolites.glc__D_c, type='sink', reaction_id='test')
    # model.add_boundary(model.metabolites.r5p_c, type='sink', reaction_id='test2')
    # model.add_boundary(model.metabolites.hco3_c, type='sink', reaction_id='test3')
    for met in model.reactions.biomass.metabolites:
        if met.id in cofactors:
            coeff = model.reactions.biomass.metabolites[met]
            model.reactions.biomass.add_metabolites({met: -coeff})
        else:
            continue
    solution = model.optimize()
    # print(model.metabolites.glu__D_c.summary())
    # print('test flux:', solution['test'])
    # print('test2 flux:', solution['test2'])
    print(solution.objective_value)
1.8496633304871162
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
It seems that NAD and NADH are the blocking metabolites for biomass generation. Now let's try to figure out where this problem lies. I think the problem lies in regenerating NAD. The model uses this reaction together with other strange reactions to regenerate NAD, where normally, in oxygen-containing conditions, I would expect respiration to do this. So let me see how the Bacillus and E. coli models do this, and check whether some form of ETC is missing in our model. This would also explain why adding the ATP synthase didn't influence our biomass prediction at all.

__Flavin reductase__ In E. coli we observed that there is a flavin reductase in the genome, contributing to NADH regeneration. We've looked into the genome annotation for our strain and found that a flavin reductase is annotated there as well (https://www.genome.jp/dbget-bin/www_bget?ptl:AOT13_02085), but not in Bacillus (fitting the model). Therefore, I will add this reaction to our model, named FADRx.

__NADH dehydrogenase__ The NADH dehydrogenase, transferring reducing equivalents from NADH to quinone, is the first part of the electron transport chain. The quinones can then transfer the electrons to pump out protons, which allows ATP synthase to generate additional energy. In iML1515 this reaction is captured by NADH16pp, NADH17pp and NADH18pp. In B. subtilis, NADH4 reflects this reaction. Our model currently has nothing that resembles this reaction. However, in Beata's thesis (and the genome) we can find EC 1.6.5.3, which performs a reaction similar to NADH16pp. Therefore, I will add this reaction to our model. Our model also has the reactions QH2OR and NADHQOR, which somewhat resemble the NADHDH reaction, but they do not include proton translocation and are reversible. To prevent these reactions from forming a cycle, and to avoid incorrect duplicate reactions in the model, I will remove them.
__CYOR__ The last step in the model's electron transport chain is the transfer of electrons from the quinone to oxygen, pumping protons out of the cell. E. coli has a CYTBO3_4pp reaction, performed by a cytochrome oxidase, that represents this. Our model doesn't have this reaction, but from Beata's thesis and the genome annotation one would expect it to be present. We found the reaction in a way similar to the E. coli model. Therefore I will add the CYTBO3 reaction to our model, as indicated in Beata's thesis.
# Add flavin reductase
model.add_reaction(Reaction(id='FADRx'))
model.reactions.FADRx.name = 'Flavin reductase'
model.reactions.FADRx.annotation = model_e_coli.reactions.FADRx.annotation
model.reactions.FADRx.add_metabolites({
    model.metabolites.fad_c: -1,
    model.metabolites.h_c: -1,
    model.metabolites.nadh_c: -1,
    model.metabolites.fadh2_c: 1,
    model.metabolites.nad_c: 1,
})

# Add NADH dehydrogenase reaction
model.add_reaction(Reaction(id='NADHDH'))
model.reactions.NADHDH.name = 'NADH Dehydrogenase (ubiquinone & 3.5 protons)'
model.reactions.NADHDH.annotation['ec-code'] = '1.6.5.3'
model.reactions.NADHDH.annotation['kegg.reaction'] = 'R11945'
model.reactions.NADHDH.add_metabolites({
    model.metabolites.nadh_c: -1,
    model.metabolites.h_c: -4.5,
    model.metabolites.ubiquin_c: -1,
    model.metabolites.nad_c: 1,
    model.metabolites.h_e: 3.5,
    model.metabolites.qh2_c: 1,
})

# Remove the duplicate, cycle-forming reactions (pass lists to avoid the cobra UserWarning)
model.remove_reactions([model.reactions.NADHQOR])
model.remove_reactions([model.reactions.QH2OR])

# Add cytochrome oxidase
model.add_reaction(Reaction(id='CYTBO3'))
model.reactions.CYTBO3.name = 'Cytochrome oxidase bo3 (ubiquinol: 2.5 protons)'
model.reactions.CYTBO3.add_metabolites({
    model.metabolites.o2_c: -0.5,
    model.metabolites.h_c: -2.5,
    model.metabolites.qh2_c: -1,
    model.metabolites.h2o_c: 1,
    model.metabolites.h_e: 2.5,
    model.metabolites.ubiquin_c: 1,
})
_____no_output_____
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
While looking at the above, I also observed some other reactions that needed to be looked at and modified.
model.reactions.MALQOR.id = 'MDH2'
model.reactions.MDH2.bounds = (0, 1000)
model.metabolites.hexcoa_c.id = 'hxcoa_c'
model.reactions.HEXOT.id = 'ACOAD2f'
model.metabolites.dccoa_c.id = 'dcacoa_c'
model.reactions.DECOT.id = 'ACOAD4f'

# In the wrong direction and with the wrong id
model.reactions.GLYCDH_1.id = 'HPYRRx'
model.reactions.HPYRRx.bounds = (-1000, 0)

# In the wrong direction
model.reactions.FMNRx.bounds = (-1000, 0)

model.metabolites.get_by_id('3hbycoa_c').id = '3hbcoa_c'
_____no_output_____
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
Even with the changes above, growth is still not restored. Supplying nmn_c restores growth, but supplying aspartate (the beginning of the pathway) doesn't solve the problem. So maybe the problem lies with the NAD biosynthesis pathway rather than with regeneration after all?
model.metabolites.nicrnt_c.name = 'Nicotinate ribonucleotide'
model.metabolites.ncam_c.name = 'Niacinamide'

# QULNS was in the wrong direction; fixing it rescued biomass accumulation
# (it is connected to aspartate)
model.reactions.QULNS.bounds = (-1000, 0)

model.optimize().objective_value

# Save & commit
cobra.io.write_sbml_model(model, '../model/p-thermo.xml')
_____no_output_____
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
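The diagnostic used above — adding a supply for a suspect metabolite and checking whether growth returns — can be illustrated without cobra on a toy stoichiometric model. This is my own sketch with entirely made-up reactions, not part of the notebook; it just shows why a sink/supply pinpoints where a pathway is broken:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA model with metabolites A, B and reactions
#   R1: -> A (uptake), R2: A -> B (blocked), R3: B -> (biomass), R4: -> B (supply)
S = np.array([[1., -1.,  0., 0.],   # mass balance for A
              [0.,  1., -1., 1.]])  # mass balance for B
c = [0., 0., -1., 0.]               # linprog minimizes, so maximize v3 via -v3

def max_growth(supply_ub):
    bounds = [(0, 10), (0, 0), (0, 1000), (0, supply_ub)]  # R2 is blocked
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(max_growth(0.0))  # biosynthesis of B is blocked: growth stays at zero
print(max_growth(5.0))  # supplying B directly restores growth
```

If growth returns only when the metabolite itself is supplied (here B, analogous to nmn_c) but not when an upstream precursor is supplied (analogous to aspartate), the break must lie between the two.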
Still, it is strange that flux is not carried through the ETC but is carried through the ATP synthase, which one would expect in the presence of oxygen. Therefore I will investigate where the extracellular protons come from. It seems all the extracellular protons come from the export of phosphate (pi_c), which is proton-symport coupled. We are producing a lot of phosphate from the biomass reaction. In theory, phosphate should not be produced in such quantity, as it is also used for the generation of ATP from ADP. Right now I don't really see how to solve this problem. I've made an issue of it and will look into it at another time.
model.optimize()['ATPS4r']
model.metabolites.pi_c.summary()
_____no_output_____
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
I also noticed that most ATP now comes from dGTP. The production of dGDP should only play a role in supplying nucleotides for biomass, so the flux it carries should be low. I will check where the majority of the dGTP comes from. What is happening is the following: dGTP is converted to dGDP and ATP (reaction ATDGDm). The dGDP then reacts with PEP to form dGTP again. PEP formation is roughly energy neutral, but it is weird that the metabolism decides to do this instead of sending the PEP into pyruvate via normal glycolysis and on into the TCA cycle.
# Reaction to be removed; pass a list to avoid the cobra UserWarning
model.remove_reactions([model.reactions.PYRPT])
C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\model.py:716: UserWarning: need to pass in a list C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\group.py:110: UserWarning: need to pass in a list
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
Removing these reactions triggers normal ATP production via ETC and ATP synthase again. So this may be solved now.
model.metabolites.pi_c.summary()

# Save & commit
cobra.io.write_sbml_model(model, '../model/p-thermo.xml')
_____no_output_____
Apache-2.0
notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
biosustain/p-thermo
Gradient boosting by hand

**Attention:** the text of this assignment has changed — the number of trees is now 50, the step-size rule in task 3 has changed, and the decision tree got a `random_state` parameter. The correct answers have not changed, but they are now easier to obtain. A typo in the `gbm_predict` function has also been fixed.

This assignment uses the `boston` dataset from `sklearn.datasets`. Hold out the last 25% of the objects for quality control, splitting `X` and `y` into `X_train`, `y_train` and `X_test`, `y_test`.

The goal of the assignment is to implement a simple variant of gradient boosting over regression trees for the case of a quadratic loss function.
from sklearn import datasets, model_selection
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
import numpy as np

boston = datasets.load_boston()
# Hold out the last 25% of the 506 objects; splitting both at index 380
# (the original 381 silently dropped sample 380 from both sets).
X_train, X_test = boston.data[:380, :], boston.data[380:, :]
y_train, y_test = boston.target[:380], boston.target[380:]
_____no_output_____
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
Task 1

As you already know from the lectures, **boosting** is a method of building compositions of base algorithms by sequentially adding a new algorithm to the current composition with some coefficient. Gradient boosting trains each new algorithm to approximate the antigradient of the error with respect to the composition's predictions on the training set. Analogously to minimizing a function by gradient descent, in gradient boosting we adjust the composition by changing the algorithm in the direction of the error's antigradient.

Use the formula from the lectures that defines the targets on the training set on which the new algorithm should be trained (in fact it is just a slightly more detailed expansion of the error gradient), and derive its special case when the loss function `L` is the squared deviation of the composition's prediction `a(x)` from the correct answer `y` on a given `x`.

If you haven't computed a derivative by hand in a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2 — since we will choose the coefficient with which the new base algorithm is added anyway, ignore this factor when building the rest of the algorithm.
def accent_l(z, y):
    # Antigradient of the squared loss L = (z - y)^2 with respect to the
    # composition's prediction z, with the factor of 2 dropped: y - z.
    return -1.0 * (z - y)
_____no_output_____
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
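As a quick sanity check (my own addition, not part of the assignment), the analytic antigradient derived above can be compared against a finite-difference estimate of the squared-loss gradient:

```python
import numpy as np

def antigrad(z, y):
    # Antigradient of L = (z - y)^2 w.r.t. z, with the factor of 2 dropped.
    return -(z - y)

rng = np.random.RandomState(0)
z = rng.normal(size=5)
y = rng.normal(size=5)

# Forward-difference gradient of sum((z - y)^2) w.r.t. each z_i.
eps = 1e-6
num_grad = np.array([
    (np.sum((z + eps * np.eye(5)[i] - y) ** 2) - np.sum((z - y) ** 2)) / eps
    for i in range(5)
])
# The antigradient should equal minus half the numeric gradient.
print(np.allclose(antigrad(z, y), -num_grad / 2, atol=1e-4))  # → True
```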
Task 2

Create a list for `DecisionTreeRegressor` objects (we will use them as base algorithms) and one for real numbers (these will be the coefficients in front of the base algorithms). In a loop, sequentially train 50 decision trees with parameters `max_depth=5` and `random_state=42` (other parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (the goal of the assignment is to understand how the method works). Each tree should be trained on the same set of objects, but the targets the tree learns to predict will change according to the rule obtained in task 1. To begin with, always take the coefficient equal to 0.9. Usually a much smaller coefficient is justified — around 0.05 or 0.1 — but since our toy example on a standard dataset has only 50 trees, we take a larger step for now.

While implementing the training you will need a function that computes the prediction of the composition of trees built so far on a sample `X`:

```
def gbm_predict(X):
    return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
# (here base_algorithms_list is the list of base algorithms, and coefficients_list the list of coefficients in front of them)
```

The same function will let you get predictions on the hold-out sample and evaluate the quality of your algorithm with `mean_squared_error` from `sklearn.metrics`. Raise the result to the power 0.5 to obtain `RMSE`. The resulting `RMSE` value is the **answer for part 2**.
base_algorithms_list = list()
coefficients_list = list()

algorithm = DecisionTreeRegressor(max_depth=5, random_state=42)

def gbm_predict(X):
    return [sum([coeff * algo.predict([x])[0]
                 for algo, coeff in zip(base_algorithms_list, coefficients_list)])
            for x in X]

# The first tree is fit directly on the targets.
b_0 = algorithm.fit(X_train, y_train)
base_algorithms_list.append(b_0)
coefficients_list.append(0.9)

# Each subsequent tree is fit on the antigradient of the current composition.
for i in range(1, 50):
    algorithm_i = DecisionTreeRegressor(max_depth=5, random_state=42)
    s_i = accent_l(gbm_predict(X_train), y_train)
    b_i = algorithm_i.fit(X_train, s_i)
    base_algorithms_list.append(b_i)
    coefficients_list.append(0.9)

print(mean_squared_error(y_test, gbm_predict(X_test)) ** 0.5)
5.448710743655589
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
Task 3

You may also worry that, moving with a constant step near the error minimum, the predictions on the training set change too sharply and overshoot the minimum. Try decreasing the weight in front of each algorithm at each subsequent iteration using the formula `0.9 / (1.0 + i)`, where `i` is the iteration number (from 0 to 49). Use the resulting quality as the **answer for part 3**.

In practice a different step-selection strategy is often used: once an algorithm is chosen, the coefficient in front of it is tuned by a numerical optimization method so that the deviation from the correct answers is minimal. We will not ask you to implement this for the assignment, but we recommend looking into this strategy and implementing it yourself when you get the chance.
base_algorithms_list = list()
coefficients_list = list()

b_0 = algorithm.fit(X_train, y_train)
base_algorithms_list.append(b_0)
coefficients_list.append(0.9)

for i in range(1, 50):
    algorithm_i = DecisionTreeRegressor(max_depth=5, random_state=42)
    s_i = accent_l(gbm_predict(X_train), y_train)
    b_i = algorithm_i.fit(X_train, s_i)
    base_algorithms_list.append(b_i)
    # Shrink the step with the iteration number.
    coefficients_list.append(0.9 / (1.0 + i))
    # coefficients_list.append(0.05)

print(mean_squared_error(y_test, gbm_predict(X_test)) ** 0.5)
5.241288806316885
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
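The step-selection strategy mentioned at the end of the task — tuning each coefficient by numerical optimization once the tree is fit — can be sketched as follows. The use of `minimize_scalar` and the synthetic data here are my own choices, not part of the assignment:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X_tr = rng.uniform(-3, 3, size=(200, 1))
y_tr = np.sin(X_tr[:, 0]) + rng.normal(0, 0.1, 200)

current = np.zeros(200)          # prediction of the composition so far
trees, coeffs = [], []
for i in range(10):
    residual = y_tr - current    # antigradient of the squared loss
    tree = DecisionTreeRegressor(max_depth=3, random_state=42).fit(X_tr, residual)
    h = tree.predict(X_tr)
    # Line search: pick the coefficient minimizing the training loss.
    gamma = minimize_scalar(lambda g: np.sum((current + g * h - y_tr) ** 2)).x
    trees.append(tree)
    coeffs.append(gamma)
    current = current + gamma * h

print(np.mean((current - y_tr) ** 2))  # training MSE shrinks with each tree
```

Because the loss is quadratic in the coefficient, the line search here has a closed-form optimum; the scalar optimizer just keeps the sketch generic for other losses.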
Task 4

The method you implemented — gradient boosting over trees — is very popular in machine learning. It is available both in the `sklearn` library itself and in the third-party library `XGBoost`, which has its own Python interface. In practice `XGBoost` works noticeably better than `GradientBoostingRegressor` from `sklearn`, but for this assignment you may use either implementation.

Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as tree depth grows. Based on your observations, write out, separated by spaces and in increasing order, the numbers of the correct statements below (this is the **answer for part 4**):

1. As the number of trees increases, beyond some point the quality of gradient boosting does not change substantially.
2. As the number of trees increases, beyond some point gradient boosting starts to overfit.
3. As tree depth increases, beyond some point the quality of gradient boosting on the test set starts to degrade.
4. As tree depth increases, beyond some point the quality of gradient boosting stops changing substantially.
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
%pylab inline

n_trees = [1] + list(range(10, 105, 5))
X = boston.data
y = boston.target

estimator = GradientBoostingRegressor(learning_rate=0.1, max_depth=5, n_estimators=100)
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test))**0.5)

estimator = XGBClassifier(learning_rate=0.25, max_depth=5, n_estimators=50, min_child_weight=3)
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test))**0.5)

%%time
xgb_scoring = []
for n_tree in n_trees:
    estimator = XGBClassifier(learning_rate=0.1, max_depth=5, n_estimators=n_tree, min_child_weight=3)
    estimator.fit(X_train, y_train)
    #estimator = GradientBoostingRegressor(learning_rate=0.25, max_depth=5, n_estimators=n_tree)
    #score = cross_val_score(estimator, X, y, scoring='accuracy', cv=3)
    score = mean_squared_error(y_test, estimator.predict(X_test))**0.5
    xgb_scoring.append(score)

xgb_scoring = np.asmatrix(xgb_scoring)
print(xgb_scoring.reshape(xgb_scoring.shape[1]))

pylab.plot(n_trees, xgb_scoring.reshape(20, 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('score')
pylab.title('RMSE vs number of trees')
pylab.legend(loc='lower right')

%%time
xgb_scoring = []
depths = range(1, 21)
for depth in depths:
    estimator = XGBClassifier(learning_rate=0.1, max_depth=depth, n_estimators=50, min_child_weight=3)
    estimator.fit(X_train, y_train)
    #estimator = GradientBoostingRegressor(learning_rate=0.25, max_depth=5, n_estimators=n_tree)
    #score = cross_val_score(estimator, X, y, scoring='accuracy', cv=3)
    score = mean_squared_error(y_test, estimator.predict(X_test))**0.5
    xgb_scoring.append(score)

xgb_scoring = np.asmatrix(xgb_scoring)
# Plot against the depths (the original mistakenly reused n_trees on the x-axis).
pylab.plot(depths, xgb_scoring.reshape(20, 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('max_depth')
pylab.ylabel('score')
pylab.title('RMSE vs tree depth')
pylab.legend(loc='lower right')
_____no_output_____
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
Task 5

Compare the quality obtained with gradient boosting to that of linear regression. To do this, train `LinearRegression` from `sklearn.linear_model` (with default parameters) on the training sample and evaluate `RMSE` for the resulting algorithm's predictions on the test sample. The resulting quality is the answer for **part 5**.

In this example the simple model's quality turns out to be worse, but keep in mind that this is not always the case. In the assignments for this course you will also encounter an example of the opposite situation.
from sklearn.linear_model import LinearRegression

estimator = LinearRegression()
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test)) ** 0.5)
7.87339775956158
MIT
LearningOnMarkedData/week4/c02_w04_ex02.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
HSV Color Space, Balloons

Import resources and display image
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline

# Read in the image
image = cv2.imread('images/water_balloons.jpg')

# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
_____no_output_____
MIT
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
m-emad/computer-vision-exercises
Plot color channels
# RGB channels
r = image[:,:,0]
g = image[:,:,1]
b = image[:,:,2]

f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('Red')
ax1.imshow(r, cmap='gray')
ax2.set_title('Green')
ax2.imshow(g, cmap='gray')
ax3.set_title('Blue')
ax3.imshow(b, cmap='gray')

# Convert from RGB to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)

# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]

f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('Hue')
ax1.imshow(h, cmap='gray')
ax2.set_title('Saturation')
ax2.imshow(s, cmap='gray')
ax3.set_title('Value')
ax3.imshow(v, cmap='gray')
_____no_output_____
MIT
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
m-emad/computer-vision-exercises
Define pink and hue selection thresholds
# Define our color selection criteria in HSV values
lower_hue = np.array([150, 0, 0])
upper_hue = np.array([180, 255, 255])

# Define our color selection criteria in RGB values
lower_pink = np.array([180, 0, 100])
upper_pink = np.array([255, 255, 230])
_____no_output_____
MIT
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
m-emad/computer-vision-exercises
Mask the image
# Define the masked area in RGB space
mask_rgb = cv2.inRange(image, lower_pink, upper_pink)

# Mask the image
masked_image = np.copy(image)
masked_image[mask_rgb == 0] = [0, 0, 0]

# Visualize the mask
plt.imshow(masked_image)

# Now try HSV!
# Define the masked area in HSV space
mask_hsv = cv2.inRange(hsv, lower_hue, upper_hue)

# Mask the image
masked_image = np.copy(image)
masked_image[mask_hsv == 0] = [0, 0, 0]

# Visualize the mask
plt.imshow(masked_image)
_____no_output_____
MIT
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
m-emad/computer-vision-exercises
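`cv2.inRange` as used above is essentially a per-channel band threshold; the same mask can be reproduced in plain NumPy. This is my own sketch, independent of OpenCV, using a tiny made-up 1×2 image:

```python
import numpy as np

def in_range(img, lower, upper):
    # 255 where every channel lies inside [lower, upper], else 0 — like cv2.inRange.
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

img = np.array([[[200, 10, 150], [10, 10, 10]]], dtype=np.uint8)
lower, upper = np.array([180, 0, 100]), np.array([255, 255, 230])
print(in_range(img, lower, upper))  # first pixel passes all three bands, second fails
```

Only the first pixel satisfies all three channel ranges, so the mask is `[[255, 0]]`.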
Modeling and Simulation in Python

Project 1 example

Copyright 2018 Allen Downey

License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim library
from modsim import *

from pandas import read_html

filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison', 'hyde', 'tanton',
                  'biraben', 'mj', 'thomlinson', 'durand', 'clark']

def plot_results(census, un, timeseries, title):
    """Plot the estimates and the model.

    census: TimeSeries of population estimates
    un: TimeSeries of population estimates
    timeseries: TimeSeries of simulation results
    title: string
    """
    plot(census, ':', label='US Census')
    plot(un, '--', label='UN DESA')
    if len(timeseries):
        plot(timeseries, color='gray', label='model')
    decorate(xlabel='Year', ylabel='World population (billion)', title=title)

un = table2.un / 1e9
census = table2.census / 1e9
empty = TimeSeries()
plot_results(census, un, empty, 'World population estimates')

half = get_first_value(census) / 2
init = State(young=half, old=half)

system = System(birth_rate1 = 1/18,
                birth_rate2 = 1/26,
                mature_rate = 1/40,
                death_rate = 1/40,
                t_0 = 1950,
                t_end = 2016,
                init=init)

def update_func1(state, t, system):
    if t < 1970:
        births = system.birth_rate1 * state.young
    else:
        births = system.birth_rate2 * state.young
    maturings = system.mature_rate * state.young
    deaths = system.death_rate * state.old
    young = state.young + births - maturings
    old = state.old + maturings - deaths
    return State(young=young, old=old)

state = update_func1(init, system.t_0, system)
state = update_func1(state, system.t_0, system)

def run_simulation(system, update_func):
    """Simulate the system using any update function.

    init: initial State object
    system: System object
    update_func: function that computes the population next year

    returns: TimeSeries
    """
    results = TimeSeries()
    state = system.init
    results[system.t_0] = state.young + state.old
    for t in linrange(system.t_0, system.t_end):
        state = update_func(state, t, system)
        results[t+1] = state.young + state.old
    return results

results = run_simulation(system, update_func1)
plot_results(census, un, results, 'World population estimates')
_____no_output_____
MIT
code/world_pop_transition_from_allendowney_github.ipynb
sdaitzman/ModSimPy
Trial 2: classification with learned graph filters

We want to classify data by first extracting meaningful features from learned filters.
import time
import numpy as np
import scipy.sparse, scipy.sparse.linalg, scipy.spatial.distance
from sklearn import datasets, linear_model
import matplotlib.pyplot as plt
%matplotlib inline

import os
import sys
sys.path.append('..')
from lib import graph
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Parameters

Dataset

* Two-class version of MNIST (two digits) with N samples of each class.
* Distinguishing 4 from 9 is the hardest.
def mnist(a, b, N): """Prepare data for binary classification of MNIST.""" folder = os.path.join('..', 'data') mnist = datasets.fetch_mldata('MNIST original', data_home=folder) assert N < min(sum(mnist.target==a), sum(mnist.target==b)) M = mnist.data.shape[1] X = np.empty((M, 2, N)) X[:,0,:] = mnist.data[mnist.target==a,:][:N,:].T X[:,1,:] = mnist.data[mnist.target==b,:][:N,:].T y = np.empty((2, N)) y[0,:] = -1 y[1,:] = +1 X.shape = M, 2*N y.shape = 2*N, 1 return X, y X, y = mnist(4, 9, 1000) print('Dimensionality: N={} samples, M={} features'.format(X.shape[1], X.shape[0])) X -= 127.5 print('X in [{}, {}]'.format(np.min(X), np.max(X))) def plot_digit(nn): M, N = X.shape m = int(np.sqrt(M)) fig, axes = plt.subplots(1,len(nn), figsize=(15,5)) for i, n in enumerate(nn): n = int(n) img = X[:,n] axes[i].imshow(img.reshape((m,m))) axes[i].set_title('Label: y = {:.0f}'.format(y[n,0])) plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Regularized least-squares

Reference: sklearn ridge regression

* With normalized (zero-mean) data, the objective is essentially the same with or without a bias term.
def test_sklearn(tauR):
    def L(w, b=0):
        return np.linalg.norm(X.T @ w + b - y)**2 + tauR * np.linalg.norm(w)**2
    def dL(w):
        return 2 * X @ (X.T @ w - y) + 2 * tauR * w
    clf = linear_model.Ridge(alpha=tauR, fit_intercept=False)
    clf.fit(X.T, y)
    w = clf.coef_.T
    print('L = {}'.format(L(w, clf.intercept_)))
    print('|dLw| = {}'.format(np.linalg.norm(dL(w))))
    # Normalized data: intercept should be small.
    print('bias: {}'.format(abs(np.mean(y - X.T @ w))))

test_sklearn(1e-3)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
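The check above relies on `sklearn`'s Ridge matching the closed-form ridge solution. That equivalence can be verified directly on small synthetic data (my own sketch, not part of the notebook; it uses the notebook's features-by-samples layout):

```python
import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
M, N = 20, 100                      # M features, N samples
X_s = rng.normal(size=(M, N))
y_s = rng.normal(size=(N, 1))
tauR = 1e-3

# Closed form: w = (X X^T + tauR I)^{-1} X y
w_closed = np.linalg.solve(X_s @ X_s.T + tauR * np.eye(M), X_s @ y_s)

clf = linear_model.Ridge(alpha=tauR, fit_intercept=False)
clf.fit(X_s.T, y_s)
print(np.allclose(w_closed.ravel(), clf.coef_.ravel(), atol=1e-6))  # → True
```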
Linear classifier
def test_optim(clf, X, y, ax=None): """Test optimization on full dataset.""" tstart = time.process_time() ret = clf.fit(X, y) print('Processing time: {}'.format(time.process_time()-tstart)) print('L = {}'.format(clf.L(*ret, y))) if hasattr(clf, 'dLc'): print('|dLc| = {}'.format(np.linalg.norm(clf.dLc(*ret, y)))) if hasattr(clf, 'dLw'): print('|dLw| = {}'.format(np.linalg.norm(clf.dLw(*ret, y)))) if hasattr(clf, 'loss'): if not ax: fig = plt.figure() ax = fig.add_subplot(111) ax.semilogy(clf.loss) ax.set_title('Convergence') ax.set_xlabel('Iteration number') ax.set_ylabel('Loss') if hasattr(clf, 'Lsplit'): print('Lsplit = {}'.format(clf.Lsplit(*ret, y))) print('|dLz| = {}'.format(np.linalg.norm(clf.dLz(*ret, y)))) ax.semilogy(clf.loss_split) class rls: def __init__(s, tauR, algo='solve'): s.tauR = tauR if algo is 'solve': s.fit = s.solve elif algo is 'inv': s.fit = s.inv def L(s, X, y): return np.linalg.norm(X.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2 def dLw(s, X, y): return 2 * X @ (X.T @ s.w - y) + 2 * s.tauR * s.w def inv(s, X, y): s.w = np.linalg.inv(X @ X.T + s.tauR * np.identity(X.shape[0])) @ X @ y return (X,) def solve(s, X, y): s.w = np.linalg.solve(X @ X.T + s.tauR * np.identity(X.shape[0]), X @ y) return (X,) def predict(s, X): return X.T @ s.w test_optim(rls(1e-3, 'solve'), X, y) test_optim(rls(1e-3, 'inv'), X, y)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Feature graph
t_start = time.process_time()
z = graph.grid(int(np.sqrt(X.shape[0])))
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A, True)
lmax = graph.lmax(L)
print('Execution time: {:.2f}s'.format(time.process_time() - t_start))
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Lanczos basis
def lanczos(L, X, K):
    M, N = X.shape
    a = np.empty((K, N))
    b = np.zeros((K, N))
    V = np.empty((K, M, N))
    V[0,...] = X / np.linalg.norm(X, axis=0)
    for k in range(K-1):
        W = L.dot(V[k,...])
        a[k,:] = np.sum(W * V[k,...], axis=0)
        W = W - a[k,:] * V[k,...] - (b[k,:] * V[k-1,...] if k > 0 else 0)
        b[k+1,:] = np.linalg.norm(W, axis=0)
        V[k+1,...] = W / b[k+1,:]
    a[K-1,:] = np.sum(L.dot(V[K-1,...]) * V[K-1,...], axis=0)
    return V, a, b

def lanczos_H_diag(a, b):
    K, N = a.shape
    H = np.zeros((K*K, N))
    H[:K**2:K+1, :] = a
    H[1:(K-1)*K:K+1, :] = b[1:,:]
    H.shape = (K, K, N)
    Q = np.linalg.eigh(H.T, UPLO='L')[1]
    Q = np.swapaxes(Q, 1, 2).T
    return Q

def lanczos_basis_eval(L, X, K):
    V, a, b = lanczos(L, X, K)
    Q = lanczos_H_diag(a, b)
    M, N = X.shape
    Xt = np.empty((K, M, N))
    for n in range(N):
        Xt[...,n] = Q[...,n].T @ V[...,n]
    Xt *= Q[0,:,np.newaxis,:]
    Xt *= np.linalg.norm(X, axis=0)
    return Xt, Q[0,...]
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
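As a sanity check of the Lanczos recurrence above (my own addition), a single-signal version of it should produce an orthonormal basis of the Krylov subspace; the random symmetric matrix here merely stands in for the graph Laplacian:

```python
import numpy as np

def lanczos_single(L, x, K):
    # Same recurrence as above, for a single signal x: builds an
    # orthonormal basis of span{x, Lx, ..., L^(K-1)x}.
    M = x.shape[0]
    V = np.empty((K, M))
    a = np.empty(K)
    b = np.zeros(K)
    V[0] = x / np.linalg.norm(x)
    for k in range(K - 1):
        w = L @ V[k]
        a[k] = w @ V[k]
        w = w - a[k] * V[k] - (b[k] * V[k - 1] if k > 0 else 0)
        b[k + 1] = np.linalg.norm(w)
        V[k + 1] = w / b[k + 1]
    a[K - 1] = (L @ V[K - 1]) @ V[K - 1]
    return V, a, b

rs = np.random.RandomState(0)
A = rs.normal(size=(20, 20))
L_sym = A + A.T                     # symmetric stand-in for the Laplacian
x = rs.normal(size=20)
V, a, b = lanczos_single(L_sym, x, 5)
print(np.allclose(V @ V.T, np.eye(5), atol=1e-6))  # → True
```

For small K the classic loss of orthogonality of the Lanczos iteration is negligible, so no reorthogonalization is needed here.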
Tests

* Memory arrangement for fastest computations: largest dimensions on the outside, i.e. the fastest-varying indices.
* The einsum seems to be efficient for three operands.
def test():
    """Test the speed of filtering and weighting."""
    def mult(impl=3):
        if impl is 0:
            Xb = Xt.view()
            Xb.shape = (K, M*N)
            XCb = Xb.T @ C  # in MN x F
            XCb = XCb.T.reshape((F*M, N))
            return (XCb.T @ w).squeeze()
        elif impl is 1:
            tmp = np.tensordot(Xt, C, (0,0))
            return np.tensordot(tmp, W, ((0,2),(1,0)))
        elif impl is 2:
            tmp = np.tensordot(Xt, C, (0,0))
            return np.einsum('ijk,ki->j', tmp, W)
        elif impl is 3:
            return np.einsum('kmn,fm,kf->n', Xt, W, C)
    C = np.random.normal(0, 1, (K, F))
    W = np.random.normal(0, 1, (F, M))
    w = W.reshape((F*M, 1))
    a = mult(impl=0)
    for impl in range(4):
        tstart = time.process_time()
        for k in range(1000):
            b = mult(impl)
        print('Execution time (impl={}): {}'.format(impl, time.process_time() - tstart))
        np.testing.assert_allclose(a, b)

#test()
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
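A small correctness check for the contractions compared above (my own addition): on random tensors of the notebook's shapes, the einsum and tensordot implementations must agree:

```python
import numpy as np

rng = np.random.RandomState(0)
K, M, N, F = 3, 4, 5, 2
Xt = rng.normal(size=(K, M, N))
C = rng.normal(size=(K, F))
W = rng.normal(size=(F, M))

a = np.einsum('kmn,fm,kf->n', Xt, W, C)
tmp = np.tensordot(Xt, C, (0, 0))           # contract K -> (M, N, F)
b = np.tensordot(tmp, W, ((0, 2), (1, 0)))  # contract M and F -> (N,)
print(np.allclose(a, b))  # → True
```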
GFL classification without weights

* The matrix is singular and thus not invertible.
class gflc_noweights: def __init__(s, F, K, niter, algo='direct'): """Model hyper-parameters""" s.F = F s.K = K s.niter = niter if algo is 'direct': s.fit = s.direct elif algo is 'sgd': s.fit = s.sgd def L(s, Xt, y): #tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, np.ones((s.F,M))) - y.squeeze() #tmp = np.einsum('kmn,kf->mnf', Xt, s.C).sum((0,2)) - y.squeeze() #tmp = (C.T @ Xt.reshape((K,M*N))).reshape((F,M,N)).sum((0,2)) - y.squeeze() tmp = np.tensordot(s.C, Xt, (0,0)).sum((0,1)) - y.squeeze() return np.linalg.norm(tmp)**2 def dLc(s, Xt, y): tmp = np.tensordot(s.C, Xt, (0,0)).sum(axis=(0,1)) - y.squeeze() return np.dot(Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1) #return np.einsum('kmn,n->km', Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1) def sgd(s, X, y): Xt, q = lanczos_basis_eval(L, X, s.K) s.C = np.random.normal(0, 1, (s.K, s.F)) s.loss = [s.L(Xt, y)] for t in range(s.niter): s.C -= 1e-13 * s.dLc(Xt, y) s.loss.append(s.L(Xt, y)) return (Xt,) def direct(s, X, y): M, N = X.shape Xt, q = lanczos_basis_eval(L, X, s.K) s.C = np.random.normal(0, 1, (s.K, s.F)) W = np.ones((s.F, M)) c = s.C.reshape((s.K*s.F, 1)) s.loss = [s.L(Xt, y)] Xw = np.einsum('kmn,fm->kfn', Xt, W) #Xw = np.tensordot(Xt, W, (1,1)) Xw.shape = (s.K*s.F, N) #np.linalg.inv(Xw @ Xw.T) c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y) s.loss.append(s.L(Xt, y)) return (Xt,) #test_optim(gflc_noweights(1, 4, 100, 'sgd'), X, y) #test_optim(gflc_noweights(1, 4, 0, 'direct'), X, y)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
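One workaround for the singular system noted above (my suggestion, not from the notebook) is to replace `np.linalg.solve`/`inv` with a least-squares or pseudo-inverse solve, which returns the minimum-norm solution even for a rank-deficient matrix:

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 4))  # rank-2, so singular 4x4
b = rng.normal(size=4)

# A is rank-deficient, so a plain solve is ill-posed; lstsq returns
# the minimum-norm least-squares solution, matching pinv(A) @ b.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, np.linalg.pinv(A) @ b))  # → True
```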
GFL classification with weights
class gflc_weights(): def __init__(s, F, K, tauR, niter, algo='direct'): """Model hyper-parameters""" s.F = F s.K = K s.tauR = tauR s.niter = niter if algo is 'direct': s.fit = s.direct elif algo is 'sgd': s.fit = s.sgd def L(s, Xt, y): tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze() return np.linalg.norm(tmp)**2 + s.tauR * np.linalg.norm(s.W)**2 def dLw(s, Xt, y): tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze() return 2 * np.einsum('kmn,kf,n->fm', Xt, s.C, tmp) + 2 * s.tauR * s.W def dLc(s, Xt, y): tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze() return 2 * np.einsum('kmn,n,fm->kf', Xt, tmp, s.W) def sgd(s, X, y): M, N = X.shape Xt, q = lanczos_basis_eval(L, X, s.K) s.C = np.random.normal(0, 1, (s.K, s.F)) s.W = np.random.normal(0, 1, (s.F, M)) s.loss = [s.L(Xt, y)] for t in range(s.niter): s.C -= 1e-12 * s.dLc(Xt, y) s.W -= 1e-12 * s.dLw(Xt, y) s.loss.append(s.L(Xt, y)) return (Xt,) def direct(s, X, y): M, N = X.shape Xt, q = lanczos_basis_eval(L, X, s.K) s.C = np.random.normal(0, 1, (s.K, s.F)) s.W = np.random.normal(0, 1, (s.F, M)) #c = s.C.reshape((s.K*s.F, 1)) #w = s.W.reshape((s.F*M, 1)) c = s.C.view() c.shape = (s.K*s.F, 1) w = s.W.view() w.shape = (s.F*M, 1) s.loss = [s.L(Xt, y)] for t in range(s.niter): Xw = np.einsum('kmn,fm->kfn', Xt, s.W) #Xw = np.tensordot(Xt, s.W, (1,1)) Xw.shape = (s.K*s.F, N) c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y) Z = np.einsum('kmn,kf->fmn', Xt, s.C) #Z = np.tensordot(Xt, s.C, (0,0)) #Z = s.C.T @ Xt.reshape((K,M*N)) Z.shape = (s.F*M, N) w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y) s.loss.append(s.L(Xt, y)) return (Xt,) def predict(s, X): Xt, q = lanczos_basis_eval(L, X, s.K) return np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) #test_optim(gflc_weights(3, 4, 1e-3, 50, 'sgd'), X, y) clf_weights = gflc_weights(F=3, K=50, tauR=1e4, niter=5, algo='direct') test_optim(clf_weights, X, y)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
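The loss, gradients, and predictions above all rely on the triple tensor contraction `np.einsum('kmn,kf,fm->n', Xt, C, W)`. A minimal sketch of what that contraction computes, using small made-up dimensions (the shapes `K, F, M, N` mirror the Lanczos order, filter count, node count, and sample count assumed in the class):

```python
import numpy as np

# Hypothetical small dimensions: K Lanczos terms, F filters, M nodes, N samples.
K, F, M, N = 3, 2, 4, 5
rng = np.random.default_rng(0)
Xt = rng.normal(size=(K, M, N))
C = rng.normal(size=(K, F))
W = rng.normal(size=(F, M))

# The contraction used in L(), dLw() and predict().
pred = np.einsum('kmn,kf,fm->n', Xt, C, W)

# Same contraction written as explicit loops, summing over k, f and m.
ref = np.zeros(N)
for k in range(K):
    for f in range(F):
        for m in range(M):
            ref += Xt[k, m, :] * C[k, f] * W[f, m]

assert np.allclose(pred, ref)
```

The einsum form avoids materializing any intermediate `(K, F, M, N)` tensor, which matters once `M` and `N` are image-sized.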
GFL classification with splitting. Solvers: * Closed-form solution. * Stochastic gradient descent.
class gflc_split():
    def __init__(s, F, K, tauR, tauF, niter, algo='direct'):
        """Model hyper-parameters"""
        s.F = F
        s.K = K
        s.tauR = tauR
        s.tauF = tauF
        s.niter = niter
        # Note: comparing strings with 'is' relies on interning; '==' is the correct test.
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
            s.fit = s.sgd

    def L(s, Xt, XCb, Z, y):
        return np.linalg.norm(XCb.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2

    def Lsplit(s, Xt, XCb, Z, y):
        return np.linalg.norm(Z.T @ s.w - y)**2 + s.tauF * np.linalg.norm(XCb - Z)**2 + s.tauR * np.linalg.norm(s.w)**2

    def dLw(s, Xt, XCb, Z, y):
        return 2 * Z @ (Z.T @ s.w - y) + 2 * s.tauR * s.w

    def dLc(s, Xt, XCb, Z, y):
        Xb = Xt.reshape((s.K, -1)).T
        Zb = Z.reshape((s.F, -1)).T
        return 2 * s.tauF * Xb.T @ (Xb @ s.C - Zb)

    def dLz(s, Xt, XCb, Z, y):
        return 2 * s.w @ (s.w.T @ Z - y.T) + 2 * s.tauF * (Z - XCb)

    def lanczos_filter(s, Xt):
        M, N = Xt.shape[1:]
        Xb = Xt.reshape((s.K, M*N)).T
        #XCb = np.tensordot(Xb, C, (2,1))
        XCb = Xb @ s.C  # in MN x F
        XCb = XCb.T.reshape((s.F*M, N))  # Needs to copy data.
        return XCb

    def sgd(s, X, y):
        M, N = X.shape
        Xt, q = lanczos_basis_eval(L, X, s.K)
        s.C = np.zeros((s.K, s.F))
        s.w = np.zeros((s.F*M, 1))
        Z = np.random.normal(0, 1, (s.F*M, N))
        XCb = np.empty((s.F*M, N))
        s.loss = [s.L(Xt, XCb, Z, y)]
        s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
        for t in range(s.niter):
            s.C -= 1e-7 * s.dLc(Xt, XCb, Z, y)
            XCb[:] = s.lanczos_filter(Xt)
            Z -= 1e-4 * s.dLz(Xt, XCb, Z, y)
            s.w -= 1e-4 * s.dLw(Xt, XCb, Z, y)
            s.loss.append(s.L(Xt, XCb, Z, y))
            s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
        return Xt, XCb, Z

    def direct(s, X, y):
        M, N = X.shape
        Xt, q = lanczos_basis_eval(L, X, s.K)
        s.C = np.zeros((s.K, s.F))
        s.w = np.zeros((s.F*M, 1))
        Z = np.random.normal(0, 1, (s.F*M, N))
        XCb = np.empty((s.F*M, N))
        Xb = Xt.reshape((s.K, M*N)).T
        Zb = Z.reshape((s.F, M*N)).T
        s.loss = [s.L(Xt, XCb, Z, y)]
        s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
        for t in range(s.niter):
            s.C[:] = Xb.T @ Zb / np.sum((np.linalg.norm(X, axis=0) * q)**2, axis=1)[:,np.newaxis]
            XCb[:] = s.lanczos_filter(Xt)
            #Z[:] = np.linalg.inv(s.tauF * np.identity(s.F*M) + s.w @ s.w.T) @ (s.tauF * XCb + s.w @ y.T)
            Z[:] = np.linalg.solve(s.tauF * np.identity(s.F*M) + s.w @ s.w.T, s.tauF * XCb + s.w @ y.T)
            #s.w[:] = np.linalg.inv(Z @ Z.T + s.tauR * np.identity(s.F*M)) @ Z @ y
            s.w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y)
            s.loss.append(s.L(Xt, XCb, Z, y))
            s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
        return Xt, XCb, Z

    def predict(s, X):
        Xt, q = lanczos_basis_eval(L, X, s.K)
        XCb = s.lanczos_filter(Xt)
        return XCb.T @ s.w

#test_optim(gflc_split(3, 4, 1e-3, 1e-3, 50, 'sgd'), X, y)
clf_split = gflc_split(3, 4, 1e4, 1e-3, 8, 'direct')
test_optim(clf_split, X, y)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Filters visualization. Observations: * Filters learned with the splitting scheme have much smaller amplitudes. * Maybe the energy sometimes goes into W? * Why are the filters so different?
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))

def plot_filters(C, spectrum=False):
    K, F = C.shape
    M, M = L.shape
    m = int(np.sqrt(M))
    X = np.zeros((M,1))
    X[int(m/2*(m+1))] = 1  # Kronecker delta
    Xt, q = lanczos_basis_eval(L, X, K)
    Z = np.einsum('kmn,kf->mnf', Xt, C)
    Xh = U.T @ X
    Zh = np.tensordot(U.T, Z, (1,0))
    pmin = int(m/2) - K
    pmax = int(m/2) + K + 1
    fig, axes = plt.subplots(2, int(np.ceil(F/2)), figsize=(15,5))
    for f in range(F):
        img = Z[:,0,f].reshape((m,m))[pmin:pmax,pmin:pmax]
        im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
        axes.flat[f].set_title('Filter {}'.format(f))
    fig.subplots_adjust(right=0.8)
    cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
    fig.colorbar(im, cax=cax)
    if spectrum:
        ax = plt.figure(figsize=(15,5)).add_subplot(111)
        for f in range(F):
            ax.plot(lamb, Zh[...,f] / Xh, '.-', label='Filter {}'.format(f))
        ax.legend(loc='best')
        ax.set_title('Spectrum of learned filters')
        ax.set_xlabel('Frequency')
        ax.set_ylabel('Amplitude')
        ax.set_xlim(0, lmax)

plot_filters(clf_weights.C, True)
plot_filters(clf_split.C, True)
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Extracted features
def plot_features(C, x):
    K, F = C.shape
    m = int(np.sqrt(x.shape[0]))
    xt, q = lanczos_basis_eval(L, x, K)
    Z = np.einsum('kmn,kf->mnf', xt, C)
    fig, axes = plt.subplots(2, int(np.ceil(F/2)), figsize=(15,5))
    for f in range(F):
        img = Z[:,0,f].reshape((m,m))
        #im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
        im = axes.flat[f].imshow(img, interpolation='none')
        axes.flat[f].set_title('Filter {}'.format(f))
    fig.subplots_adjust(right=0.8)
    cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
    fig.colorbar(im, cax=cax)

plot_features(clf_weights.C, X[:,[0]])
plot_features(clf_weights.C, X[:,[1000]])
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Performance w.r.t. hyper-parameters. * F plays a big role, both for performance and training time; larger values lead to over-fitting! * Order $K \in [3,5]$ seems sufficient. * $\tau_R$ does not have much influence.
def scorer(clf, X, y):
    yest = clf.predict(X).round().squeeze()
    y = y.squeeze()
    yy = np.ones(len(y))
    yy[yest < 0] = -1
    nerrs = np.count_nonzero(y - yy)
    return 1 - nerrs / len(y)

def perf(clf, nfolds=3):
    """Test training accuracy."""
    N = X.shape[1]
    inds = np.arange(N)
    np.random.shuffle(inds)
    inds.resize((nfolds, int(N/nfolds)))
    folds = np.arange(nfolds)
    test = inds[0,:]
    train = inds[folds != 0, :].reshape(-1)
    fig, axes = plt.subplots(1, 3, figsize=(15,5))
    test_optim(clf, X[:,train], y[train], axes[2])
    axes[0].plot(train, clf.predict(X[:,train]), '.')
    axes[0].plot(train, y[train].squeeze(), '.')
    axes[0].set_ylim([-3,3])
    axes[0].set_title('Training set accuracy: {:.2f}'.format(scorer(clf, X[:,train], y[train])))
    axes[1].plot(test, clf.predict(X[:,test]), '.')
    axes[1].plot(test, y[test].squeeze(), '.')
    axes[1].set_ylim([-3,3])
    axes[1].set_title('Testing set accuracy: {:.2f}'.format(scorer(clf, X[:,test], y[test])))
    if hasattr(clf, 'C'):
        plot_filters(clf.C)

perf(rls(tauR=1e6))
for F in [1,3,5]:
    perf(gflc_weights(F=F, K=50, tauR=1e4, niter=5, algo='direct'))
#perf(rls(tauR=1e-3))
#for K in [2,3,5,7]:
#    perf(gflc_weights(F=3, K=K, tauR=1e-3, niter=5, algo='direct'))
#for tauR in [1e-3, 1e-1, 1e1]:
#    perf(rls(tauR=tauR))
#    perf(gflc_weights(F=3, K=3, tauR=tauR, niter=5, algo='direct'))
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Classification. * The greater $F$ is, the greater $K$ should be.
def cross_validation(clf, nfolds, nvalidations):
    M, N = X.shape
    scores = np.empty((nvalidations, nfolds))
    for nval in range(nvalidations):
        inds = np.arange(N)
        np.random.shuffle(inds)
        inds.resize((nfolds, int(N/nfolds)))
        folds = np.arange(nfolds)
        for n in folds:
            test = inds[n,:]
            train = inds[folds != n, :].reshape(-1)
            clf.fit(X[:,train], y[train])
            scores[nval, n] = scorer(clf, X[:,test], y[test])
    return scores.mean()*100, scores.std()*100
    #print('Accuracy: {:.2f} +- {:.2f}'.format(scores.mean()*100, scores.std()*100))
    #print(scores)

def test_classification(clf, params, param, values, nfolds=10, nvalidations=1):
    means = []
    stds = []
    fig, ax = plt.subplots(1, 1, figsize=(15,5))
    for i, val in enumerate(values):
        params[param] = val
        mean, std = cross_validation(clf(**params), nfolds, nvalidations)
        means.append(mean)
        stds.append(std)
        ax.annotate('{:.2f} +- {:.2f}'.format(mean, std), xy=(i,mean), xytext=(10,10), textcoords='offset points')
    ax.errorbar(np.arange(len(values)), means, stds, fmt='.', markersize=10)
    ax.set_xlim(-.8, len(values)-.2)
    ax.set_xticks(np.arange(len(values)))
    ax.set_xticklabels(values)
    ax.set_xlabel(param)
    ax.set_ylim(50, 100)
    ax.set_ylabel('Accuracy')
    ax.set_title('Parameters: {}'.format(params))

test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':1, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e6,1e5,1e4,1e3,1e2,1e-3,1e-8], 10, 10)
params = {'F':2, 'K':10, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,5])
params = {'F':2, 'K':4, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,8,10,20,30,50,70])
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Sampled MNIST
Xfull = X

def sample(X, p, seed=None):
    M, N = X.shape
    z = graph.grid(int(np.sqrt(M)))
    # Select random pixels.
    np.random.seed(seed)
    mask = np.arange(M)
    np.random.shuffle(mask)
    mask = mask[:int(p*M)]
    return z[mask,:], X[mask,:]

X = Xfull
z, X = sample(X, .5)
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A)
lmax = graph.lmax(L)
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
print(L.shape)

def plot(n):
    M, N = X.shape
    m = int(np.sqrt(M))
    x = X[:,n]
    #print(x+127.5)
    plt.scatter(z[:,0], -z[:,1], s=20, c=x+127.5)

plot(10)

def plot_digit(nn):
    M, N = X.shape
    m = int(np.sqrt(M))
    fig, axes = plt.subplots(1, len(nn), figsize=(15,5))
    for i, n in enumerate(nn):
        n = int(n)
        img = X[:,n]
        axes[i].imshow(img.reshape((m,m)))
        axes[i].set_title('Label: y = {:.0f}'.format(y[n,0]))

#plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
#clf_weights = gflc_weights(F=3, K=4, tauR=1e-3, niter=5, algo='direct')
#test_optim(clf_weights, X, y)
#plot_filters(clf_weights.C, True)
#test_classification(rls, {}, 'tauR', [1e1,1e0])
#params = {'F':2, 'K':5, 'tauR':1e-3, 'niter':5, 'algo':'direct'}
#test_classification(gflc_weights, params, 'F', [1,2,3])
test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':2, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e5,1e4,1e3,1e2,1e1,1e-3,1e-8], 10, 1)
params = {'F':2, 'K':10, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,4,5,10])
params = {'F':2, 'K':4, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,6,7,8,10,20,30])
_____no_output_____
MIT
trials/2_classification.ipynb
Gxqiang/cnn_graph
Housing Market Introduction: This time we will create our own dataset with fictional numbers to describe a housing market. Since we are going to create random data, don't try to reason about the numbers. Step 1. Import the necessary libraries
import pandas as pd import numpy as np
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Step 2. Create 3 different Series, each of length 100, as follows: 1. The first a random number from 1 to 4 2. The second a random number from 1 to 3 3. The third a random number from 10,000 to 30,000
rand1 = pd.Series(np.random.randint(1, 5, 100))
rand2 = pd.Series(np.random.randint(1, 4, 100))
rand3 = pd.Series(np.random.randint(10000, 30001, 100))
print(rand1, rand2, rand3)
0 2 1 1 2 3 3 2 4 2 .. 95 3 96 4 97 3 98 2 99 2 Length: 100, dtype: int32 0 1 1 1 2 1 3 2 4 3 .. 95 2 96 1 97 3 98 3 99 2 Length: 100, dtype: int32 0 23816 1 22299 2 13516 3 25975 4 22916 ... 95 11050 96 16246 97 11288 98 25346 99 26681 Length: 100, dtype: int32
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Step 3. Let's create a DataFrame by joinning the Series by column
df = pd.concat([rand1, rand2, rand3], axis=1)
df
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Step 4. Change the name of the columns to bedrs, bathrs, price_sqr_meter
df.rename(columns={0: 'bedrs', 1: 'bathrs', 2: 'price_sqr_meter'}, inplace=True)
df
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Step 5. Create a one column DataFrame with the values of the 3 Series and assign it to 'bigcolumn'
bigcolumn = pd.DataFrame(pd.concat([rand1, rand2, rand3], axis = 0))
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Step 6. Oops, it seems it is going only until index 99. Is it true?
len(bigcolumn)
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
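The duplicated index seen in Step 6 comes from `pd.concat` preserving each Series' own 0–99 index when stacking along axis 0. A minimal sketch with two short series shows the behaviour, and that `ignore_index=True` renumbers directly:

```python
import pandas as pd

a = pd.Series([1, 2])
b = pd.Series([3, 4])

# concat along axis 0 keeps each Series' own index, so labels repeat.
stacked = pd.concat([a, b], axis=0)
print(list(stacked.index))  # [0, 1, 0, 1]

# ignore_index=True produces a fresh 0..n-1 index instead.
renumbered = pd.concat([a, b], axis=0, ignore_index=True)
print(list(renumbered.index))  # [0, 1, 2, 3]
```

Passing `ignore_index=True` in Step 5 would make the `reset_index` of Step 7 unnecessary.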
Step 7. Reindex the DataFrame so it goes from 0 to 299
bigcolumn.reset_index(drop=True, inplace=True)
bigcolumn
_____no_output_____
BSD-3-Clause
05_Merge/Housing Market/Exercises.ipynb
LouisNodskov/pandas_exercises
Import libraries
# generic tools
import numpy as np
import datetime

# tools from sklearn
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# tools from tensorflow
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import mnist
from tensorflow.keras import backend as K
from tensorflow.keras.utils import plot_model

# matplotlib
import matplotlib.pyplot as plt

# Load the TensorBoard notebook extension
%load_ext tensorboard

# delete logs from previous runs - not always safe!
!rm -rf ./logs/
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Download data, train-test split, binarize labels
data, labels = fetch_openml('mnist_784', version=1, return_X_y=True)

# scale pixel values to [0, 1]
data = data.astype("float")/255.0

# split data
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.2)

# convert labels to one-hot encoding
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.fit_transform(testY)
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
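`LabelBinarizer` above turns each digit label into a one-hot row. The same encoding can be sketched in plain numpy (the class count 10 is MNIST's):

```python
import numpy as np

labels = np.array([3, 0, 9])             # example digit labels
one_hot = np.eye(10, dtype=int)[labels]  # row i is all zeros except at position labels[i]

print(one_hot[0])  # [0 0 0 1 0 0 0 0 0 0]
```

Decoding is just `one_hot.argmax(axis=1)`, which is exactly what the evaluation cell at the end of the notebook does with the model's predictions.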
Define neural network architecture using ```tf.keras```
# define architecture 784x256x128x10
model = Sequential()
model.add(Dense(256, input_shape=(784,), activation="sigmoid"))
model.add(Dense(128, activation="sigmoid"))
model.add(Dense(10, activation="softmax"))  # generalisation of logistic regression for multiclass task
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Show summary of model architecture
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 256) 200960 _________________________________________________________________ dense_1 (Dense) (None, 128) 32896 _________________________________________________________________ dense_2 (Dense) (None, 10) 1290 ================================================================= Total params: 235,146 Trainable params: 235,146 Non-trainable params: 0 _________________________________________________________________
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
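The parameter counts in the summary can be checked by hand: a `Dense` layer has `inputs × units` weights plus `units` biases.

```python
# Parameter count of a Dense layer: inputs * units (weights) + units (biases).
def dense_params(inputs, units):
    return inputs * units + units

print(dense_params(784, 256))  # 200960, first hidden layer
print(dense_params(256, 128))  # 32896, second hidden layer
print(dense_params(128, 10))   # 1290, output layer
```

The three counts sum to the 235,146 trainable parameters reported by `model.summary()`.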
Visualise model layers
plot_model(model, show_shapes=True, show_layer_names=True)
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Compile model loss function, optimizer, and preferred metrics
# train model using SGD
sgd = SGD(1e-2)
model.compile(loss="categorical_crossentropy",
              optimizer=sgd,
              metrics=["accuracy"])
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
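Categorical cross-entropy compares the softmax output with the one-hot target. A minimal numpy sketch of the loss for a single example (not the Keras implementation, which also handles batches and clipping):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def categorical_crossentropy(y_true, y_pred):
    # -sum over classes of y_true * log(y_pred); only the true class contributes.
    return -np.sum(y_true * np.log(y_pred))

# With all-zero logits the softmax is uniform over 10 classes,
# so the loss against any one-hot target is log(10) ~ 2.30.
y_true = np.eye(10)[3]
y_pred = softmax(np.zeros(10))
loss = categorical_crossentropy(y_true, y_pred)
print(loss)
```

That log(10) ≈ 2.30 is why the first epochs above start near a loss of 2.3: an untrained network predicts roughly uniformly over the ten digits.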
Set ```tensorboard``` parameters - not compulsory!
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Train model and save history
history = model.fit(trainX, trainY,
                    validation_data=(testX, testY),
                    epochs=100,
                    batch_size=128,
                    callbacks=[tensorboard_callback])
Epoch 1/100 438/438 [==============================] - 2s 4ms/step - loss: 2.3059 - accuracy: 0.1420 - val_loss: 2.2460 - val_accuracy: 0.3663 Epoch 2/100 438/438 [==============================] - 1s 3ms/step - loss: 2.2309 - accuracy: 0.3536 - val_loss: 2.1785 - val_accuracy: 0.4581 Epoch 3/100 438/438 [==============================] - 1s 3ms/step - loss: 2.1597 - accuracy: 0.4864 - val_loss: 2.0883 - val_accuracy: 0.4735 Epoch 4/100 438/438 [==============================] - 1s 3ms/step - loss: 2.0618 - accuracy: 0.5459 - val_loss: 1.9589 - val_accuracy: 0.6129 Epoch 5/100 438/438 [==============================] - 1s 3ms/step - loss: 1.9203 - accuracy: 0.6039 - val_loss: 1.7823 - val_accuracy: 0.6325 Epoch 6/100 438/438 [==============================] - 1s 3ms/step - loss: 1.7367 - accuracy: 0.6555 - val_loss: 1.5729 - val_accuracy: 0.6772 Epoch 7/100 438/438 [==============================] - 1s 2ms/step - loss: 1.5242 - accuracy: 0.6950 - val_loss: 1.3654 - val_accuracy: 0.7379 Epoch 8/100 438/438 [==============================] - 1s 3ms/step - loss: 1.3285 - accuracy: 0.7239 - val_loss: 1.1888 - val_accuracy: 0.7531 Epoch 9/100 438/438 [==============================] - 1s 3ms/step - loss: 1.1601 - accuracy: 0.7500 - val_loss: 1.0491 - val_accuracy: 0.7741 Epoch 10/100 438/438 [==============================] - 1s 3ms/step - loss: 1.0293 - accuracy: 0.7740 - val_loss: 0.9410 - val_accuracy: 0.7899 Epoch 11/100 438/438 [==============================] - 2s 4ms/step - loss: 0.9307 - accuracy: 0.7887 - val_loss: 0.8562 - val_accuracy: 0.8013 Epoch 12/100 438/438 [==============================] - 1s 3ms/step - loss: 0.8491 - accuracy: 0.8005 - val_loss: 0.7878 - val_accuracy: 0.8117 Epoch 13/100 438/438 [==============================] - 1s 3ms/step - loss: 0.7830 - accuracy: 0.8110 - val_loss: 0.7328 - val_accuracy: 0.8237 Epoch 14/100 438/438 [==============================] - 1s 3ms/step - loss: 0.7308 - accuracy: 0.8213 - val_loss: 0.6864 - val_accuracy: 
0.8292 Epoch 15/100 438/438 [==============================] - 1s 3ms/step - loss: 0.6892 - accuracy: 0.8293 - val_loss: 0.6481 - val_accuracy: 0.8349 Epoch 16/100 438/438 [==============================] - 1s 3ms/step - loss: 0.6520 - accuracy: 0.8356 - val_loss: 0.6147 - val_accuracy: 0.8422 Epoch 17/100 438/438 [==============================] - 1s 3ms/step - loss: 0.6138 - accuracy: 0.8454 - val_loss: 0.5861 - val_accuracy: 0.8466 Epoch 18/100 438/438 [==============================] - 1s 3ms/step - loss: 0.5918 - accuracy: 0.8481 - val_loss: 0.5617 - val_accuracy: 0.8525 Epoch 19/100 438/438 [==============================] - 1s 3ms/step - loss: 0.5632 - accuracy: 0.8550 - val_loss: 0.5407 - val_accuracy: 0.8581 Epoch 20/100 438/438 [==============================] - 2s 4ms/step - loss: 0.5438 - accuracy: 0.8596 - val_loss: 0.5212 - val_accuracy: 0.8606 Epoch 21/100 438/438 [==============================] - 1s 3ms/step - loss: 0.5276 - accuracy: 0.8622 - val_loss: 0.5046 - val_accuracy: 0.8634 Epoch 22/100 438/438 [==============================] - 1s 2ms/step - loss: 0.5057 - accuracy: 0.8671 - val_loss: 0.4891 - val_accuracy: 0.8677 Epoch 23/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4918 - accuracy: 0.8709 - val_loss: 0.4757 - val_accuracy: 0.8714 Epoch 24/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4787 - accuracy: 0.8727 - val_loss: 0.4635 - val_accuracy: 0.8735 Epoch 25/100 438/438 [==============================] - 1s 2ms/step - loss: 0.4670 - accuracy: 0.8761 - val_loss: 0.4527 - val_accuracy: 0.8746 Epoch 26/100 438/438 [==============================] - 2s 4ms/step - loss: 0.4549 - accuracy: 0.8788 - val_loss: 0.4430 - val_accuracy: 0.8761 Epoch 27/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4499 - accuracy: 0.8805 - val_loss: 0.4336 - val_accuracy: 0.8788 Epoch 28/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4442 - accuracy: 0.8816 - val_loss: 0.4255 
- val_accuracy: 0.8806 Epoch 29/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4314 - accuracy: 0.8834 - val_loss: 0.4179 - val_accuracy: 0.8821 Epoch 30/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4168 - accuracy: 0.8863 - val_loss: 0.4106 - val_accuracy: 0.8843 Epoch 31/100 438/438 [==============================] - 1s 3ms/step - loss: 0.4141 - accuracy: 0.8878 - val_loss: 0.4049 - val_accuracy: 0.8856 Epoch 32/100 438/438 [==============================] - 2s 4ms/step - loss: 0.4038 - accuracy: 0.8899 - val_loss: 0.3981 - val_accuracy: 0.8874 Epoch 33/100 438/438 [==============================] - 2s 4ms/step - loss: 0.3989 - accuracy: 0.8903 - val_loss: 0.3928 - val_accuracy: 0.8884 Epoch 34/100 438/438 [==============================] - 2s 4ms/step - loss: 0.3977 - accuracy: 0.8923 - val_loss: 0.3879 - val_accuracy: 0.8879 Epoch 35/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3916 - accuracy: 0.8916 - val_loss: 0.3828 - val_accuracy: 0.8907 Epoch 36/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3867 - accuracy: 0.8917 - val_loss: 0.3783 - val_accuracy: 0.8917 Epoch 37/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3820 - accuracy: 0.8936 - val_loss: 0.3739 - val_accuracy: 0.8914 Epoch 38/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3729 - accuracy: 0.8975 - val_loss: 0.3702 - val_accuracy: 0.8917 Epoch 39/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3687 - accuracy: 0.8989 - val_loss: 0.3663 - val_accuracy: 0.8938 Epoch 40/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3693 - accuracy: 0.8984 - val_loss: 0.3628 - val_accuracy: 0.8941 Epoch 41/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3597 - accuracy: 0.9011 - val_loss: 0.3597 - val_accuracy: 0.8944 Epoch 42/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3630 - accuracy: 0.8988 - 
val_loss: 0.3561 - val_accuracy: 0.8955 Epoch 43/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3583 - accuracy: 0.9004 - val_loss: 0.3531 - val_accuracy: 0.8958 Epoch 44/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3557 - accuracy: 0.8998 - val_loss: 0.3505 - val_accuracy: 0.8958 Epoch 45/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3534 - accuracy: 0.9014 - val_loss: 0.3474 - val_accuracy: 0.8969 Epoch 46/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3484 - accuracy: 0.9018 - val_loss: 0.3450 - val_accuracy: 0.8984 Epoch 47/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3480 - accuracy: 0.9019 - val_loss: 0.3424 - val_accuracy: 0.8989 Epoch 48/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3455 - accuracy: 0.9022 - val_loss: 0.3400 - val_accuracy: 0.8993 Epoch 49/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3338 - accuracy: 0.9050 - val_loss: 0.3378 - val_accuracy: 0.8984 Epoch 50/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3384 - accuracy: 0.9060 - val_loss: 0.3353 - val_accuracy: 0.8999 Epoch 51/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3325 - accuracy: 0.9069 - val_loss: 0.3333 - val_accuracy: 0.8999 Epoch 52/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3361 - accuracy: 0.9058 - val_loss: 0.3313 - val_accuracy: 0.9007 Epoch 53/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3327 - accuracy: 0.9064 - val_loss: 0.3292 - val_accuracy: 0.9009 Epoch 54/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3316 - accuracy: 0.9051 - val_loss: 0.3273 - val_accuracy: 0.9011 Epoch 55/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3319 - accuracy: 0.9064 - val_loss: 0.3255 - val_accuracy: 0.9025 Epoch 56/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3320 - 
accuracy: 0.9051 - val_loss: 0.3238 - val_accuracy: 0.9016 Epoch 57/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3209 - accuracy: 0.9097 - val_loss: 0.3218 - val_accuracy: 0.9029 Epoch 58/100 438/438 [==============================] - 1s 3ms/step - loss: 0.3155 - accuracy: 0.9132 - val_loss: 0.3199 - val_accuracy: 0.9029 Epoch 59/100 300/438 [===================>..........] - ETA: 0s - loss: 0.3241 - accuracy: 0.9089
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Visualise using ```matplotlib```
plt.style.use("fivethirtyeight")
plt.figure()
plt.plot(np.arange(0, 100), history.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), history.history["val_loss"], label="val_loss", linestyle=":")
plt.plot(np.arange(0, 100), history.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, 100), history.history["val_accuracy"], label="val_acc", linestyle=":")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.tight_layout()
plt.legend()
plt.show()
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Inspect using ```tensorboard```. This won't run on JupyterHub!
%tensorboard --logdir logs/fit
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Classifier metrics
# evaluate network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=128)
print(classification_report(testY.argmax(axis=1),
                            predictions.argmax(axis=1),
                            target_names=[str(x) for x in lb.classes_]))
_____no_output_____
MIT
notebooks/session8.ipynb
sofieditmer/cds-visual
Import Libraries
import sys
!{sys.executable} -m pip install -r requirements.txt

import numpy as np
import matplotlib.pyplot as plt
from analytics import SQLClient
_____no_output_____
MIT
analytics/analytics.ipynb
shawlu95/Grocery_Matter
Connect to MySQL database
username = "privateuser"
password = "1234567"
port = 7777
client = SQLClient(username, password, port)

sql_tmp = """
SELECT
    id
    ,userID
    ,name
    ,type
    ,-priceCNY * count / 6.9 AS price
    ,count
    ,currency
    ,-priceCNY * count AS priceCNY
    ,time
FROM items
WHERE userID LIKE '%%shawlu%%'
    AND time BETWEEN '$start_dt$ 00:00:00' AND '$end_dt$ 00:00:00'
    AND deleted = 0
ORDER BY time;"""
_____no_output_____
MIT
analytics/analytics.ipynb
shawlu95/Grocery_Matter
Analytics: Nov. 2018
start_dt = '2018-11-01'
end_dt = '2018-12-01'
df = client.query(sql_tmp.replace('$start_dt$', start_dt).replace("$end_dt$", end_dt))
df = df.groupby(['type']).sum()
total = np.sum(df.price)
df["pct"] = df.price / total
df["category"] = client.categories
df = df.sort_values("pct")[::-1]
df

labels = ["%s: $%.2f" % (df.category.values[i], df.price.values[i]) for i in range(len(df))]
plt.figure(figsize=(8, 8))
title = "Total expense %s\n%s-%s" % ('$ {:,.2f}'.format(total), start_dt, end_dt)
plt.title(title)
_ = plt.pie(x=df.price.values, labels=labels, autopct='%1.1f%%', labeldistance=1.1)
centre_circle = plt.Circle((0,0), 0.70, fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.savefig("month.png")
_____no_output_____
MIT
analytics/analytics.ipynb
shawlu95/Grocery_Matter
Analytics: Year of 2018
start_dt = '2018-01-01'
end_dt = '2018-12-01'
df = client.query(sql_tmp.replace('$start_dt$', start_dt).replace("$end_dt$", end_dt))
df[df.type == 'COM']
df = df.groupby(['type']).sum()
total = np.sum(df.price)
df["pct"] = df.price / total
df["category"] = client.categories
df = df.sort_values("pct")[::-1]
df

labels = ["%s: $%.2f" % (df.category.values[i], df.price.values[i]) for i in range(len(df))]
plt.figure(figsize=(8, 8))
title = "Total expense %s\n%s-%s" % ('$ {:,.2f}'.format(total), start_dt, end_dt)
plt.title(title)
_ = plt.pie(x=df.price.values, labels=labels, autopct='%1.1f%%', labeldistance=1.1)
centre_circle = plt.Circle((0,0), 0.70, fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.savefig("year.png")
_____no_output_____
MIT
analytics/analytics.ipynb
shawlu95/Grocery_Matter
Dependencies
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt

from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler, ModelCheckpoint

def seed_everything(seed=0):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    set_random_seed(seed)  # was hard-coded to 0, ignoring the argument

seed = 0
seed_everything(seed)

%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
np_resource = np.dtype([("resource", np.ubyte, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) Using TensorFlow backend.
MIT
Model backlog/EfficientNet/EfficientNetB4/5-Fold/274 - EfficientNetB4-Reg-Img256 Old&New Fold3.ipynb
ThinkBricks/APTOS2019BlindnessDetection