The `multiscale.io` package handles file conversion. In general, one can call `multiscale.io.read_file.tohdf5` to convert the data type. *If the data type is not currently compatible, either code a conversion function or ask Loic/Ralph/Iaroslav.* | multiscale.io.read_file.tohdf5(original_filename) | file successfully converted
| CC-BY-4.0 | examples/Basics/Multiscale_Basics_Tutorial.ipynb | Coilm/hystorian |
If you open the newly produced file `SD_P4_zB5_050mV_-2550mV_0002` in HDFView, you will see four folders:

1. **`datasets`** contains the main converted data from the .ibw files. It contains a subfolder for each of the original scans (in this case, only one), and each of these subfolders contains the 8 data channels obtained from the raw data.
2. **`metadata`** contains all other data obtained from the .ibw files, except for the image itself, such as the scan rate or tip voltage.
3. **`process`** is currently empty, but will eventually contain the results of our subsequent processing.
4. **`type`** indicates the original filetype of the data - that is, 'ibw'.

**Warning: HDFView prevents Python from operating on open .hdf5 files. Make sure to close the open files before proceeding!**

***

Before we do any processing, let's check that things work. The function `msplt.save_image` lets us save an image from an array - however, our array is stored in the `.hdf5` file, and Python does not currently know about it. To use `msplt.save_image`, then, we call it using the `pt.m_apply` function.

In short, `pt.m_apply` lets us pass the location of the data within the `.hdf5` file, instead of an actual array. This makes handling several datasets much easier. For now, the main function call of `pt.m_apply` is of the format:

`m_apply(filename, function, in_paths)`

1. **`filename`** The name of the `.hdf5` file we are using. We set this earlier to be `'SD_P4_zB5_050mV_-2550mV_0002.hdf5'`.
2. **`function`** The function we are applying. In this case, we are going to use the function `msplt.save_image`.
3. **`in_paths`** The path (or paths) to the data within the `.hdf5` file. If you look in HDFView, you can see the file directory. In this case, let's look at the `Phase1Trace` channel in `datasets`. We will thus set this argument to `'datasets/SD_P4_zB5_050mV_-2550mV_0002/Phase1Trace'`.

**Note:** Other arguments exist, but are beyond this scope.
See Intermediate or Programming tutorials for more detail. | pt.m_apply(filename, msplt.save_image, 'datasets/SD_P4_zB5_050mV_-2550mV_0002/Phase2Retrace', image_name = 'Original_Phase', show=True) | _____no_output_____ |
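As an aside, the dispatch idea behind `pt.m_apply` can be sketched in a few lines. The snippet below is only an illustration of the concept - it uses a plain nested dict in place of a real `.hdf5` file, and `m_apply_sketch`/`read_path` are made-up names, not hystorian's actual implementation (which also handles multiple input paths and records results under `process`).

```python
# A toy stand-in for an HDF5 file: nested dicts keyed by path segments.
def read_path(h5, path):
    node = h5
    for part in path.split("/"):
        node = node[part]
    return node

def m_apply_sketch(h5, function, in_path, **kwargs):
    """Fetch the array stored at in_path, then apply function to it.

    Illustration only: the real m_apply also supports several input
    paths and writes results back into a numbered 'process' folder.
    """
    data = read_path(h5, in_path)
    return function(data, **kwargs)

# Minimal usage with a fake 'file' and a fake processing function:
fake_file = {"datasets": {"scan0001": {"Phase2Retrace": [10, 20, 30]}}}
result = m_apply_sketch(fake_file, lambda arr: [x * 2 for x in arr],
                        "datasets/scan0001/Phase2Retrace")
print(result)
```

The point is simply that the caller hands over a *path string* rather than an array, which is what makes batching over many channels convenient.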
You might notice we added extra arguments to `m_apply`. In general, any extra arguments given to `m_apply` are passed on to the subfunction: in this case, `msplt.save_image`. Thus, `msplt.save_image` knows to set `image_name` to `'Original_Phase'`, and to set `show` to `True`. You should now also see the image saved in this file directory; if you want, you can change this by changing the variable `saving_path`.

***

Now that we have something to compare to, we can begin processing. We are going to linearise the phase of this image - that is, transform the phase, which is currently an angle between -90 and 270 that wraps at those limits, into a number between 0 and 1. To do this, we are going to use the function `phase_linearisation`, which we will again call using `m_apply`: | pt.m_apply(filename, twodim.phase_linearisation, 'datasets/SD_P4_zB5_050mV_-2550mV_0002/Phase2Retrace')
print('Linearisation Complete!') | Linearisation Complete!
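Conceptually, linearisation maps each wrapped phase angle onto a 0-to-1 scale between two reference phase values. hystorian's actual `phase_linearisation` is more involved (for instance, it works out the dominant phase values from the data itself); the function below is only a crude sketch of the mapping step, with the two reference values assumed to be known in advance.

```python
def linearise_phase(phase_values, peak_low, peak_high):
    """Map angles to [0, 1]: 0 at peak_low, 1 at peak_high, clipped outside.

    Crude illustration only -- assumes the two reference phase values are
    given, and ignores wrap-around handling.
    """
    span = peak_high - peak_low
    out = []
    for p in phase_values:
        t = (p - peak_low) / span
        out.append(min(1.0, max(0.0, t)))  # clip into [0, 1]
    return out

print(linearise_phase([-90, 0, 90, 270], peak_low=-90, peak_high=270))
```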
If you open HDFView right now, you should see a new folder in `process` called `001-phase_linearisation`, which contains the newly linearised data. If an error occurred at some point, you might also see other folders of the form `abc-phase_linearisation`, where abc is some number. Don't worry; simply note which ones are correct (or incorrect), and change the path names in the next function calls so they point to the correct folder.

***

Now that the data is linearised, we can binarise it. This is simply a threshold function, and it is called very similarly to the last function, except for the different function call and the different path location. Feel free to look at the code itself in the `twodim` subpackage if you want to see how this code works, or if you want to pass it other arguments. | pt.m_apply(filename, twodim.phase_binarisation, 'process/001-phase_linearisation/SD_P4_zB5_050mV_-2550mV_0002/Phase2Retrace')
print('Binarisation Complete!') | Binarisation Complete!
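Binarisation is just a threshold: values above a cutoff become 1, the rest 0. A minimal sketch follows - note that hystorian's `phase_binarisation` chooses its threshold from the data itself, whereas here a fixed 0.5 cutoff is assumed for illustration.

```python
def binarise(values, threshold=0.5):
    # 1 where the linearised phase exceeds the threshold, else 0.
    return [1 if v > threshold else 0 for v in values]

print(binarise([0.0, 0.25, 0.5, 0.9, 1.0]))  # [0, 0, 0, 1, 1]
```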
Finally, we can view our final image. This requires the `msplt.save_image` function, which we used earlier. | pt.m_apply(filename, msplt.save_image, 'process/002-phase_binarisation/SD_P4_zB5_050mV_-2550mV_0002/Phase2Retrace', image_name = 'Binarised_Phase', show=True) | _____no_output_____ |
If we want to, we can also go back and see the intermediate, linearised phase: | pt.m_apply(filename, msplt.save_image, 'process/001-phase_linearisation/SD_P4_zB5_050mV_-2550mV_0002/Phase2Retrace', image_name = 'Linearised_Phase', show=True) | _____no_output_____ |
Our Mission

Spam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'.

In this mission we will be using the Naive Bayes algorithm to create a model that can classify SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Often they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them, as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the human recipient, it is usually pretty straightforward to identify a spam text, and our objective here is to train a model to do that for us!

Being able to identify spam messages is a binary classification problem, as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model that it can learn from, to make future predictions.
Overview

This project has been broken down into the following steps:

- Step 0: Introduction to the Naive Bayes Theorem
- Step 1.1: Understanding our dataset
- Step 1.2: Data Preprocessing
- Step 2.1: Bag of Words (BoW)
- Step 2.2: Implementing BoW from scratch
- Step 2.3: Implementing Bag of Words in scikit-learn
- Step 3.1: Training and testing sets
- Step 3.2: Applying Bag of Words processing to our dataset
- Step 4.1: Bayes Theorem implementation from scratch
- Step 4.2: Naive Bayes implementation from scratch
- Step 5: Naive Bayes implementation using scikit-learn
- Step 6: Evaluating our model
- Step 7: Conclusion

**Note**: If you need help with a step, you can find the solution notebook by clicking on the Jupyter logo in the top left of the notebook.

Step 0: Introduction to the Naive Bayes Theorem

Bayes Theorem is one of the earliest probabilistic inference algorithms. It was developed by Reverend Bayes (who used it to try to infer the existence of God, no less), and it still performs extremely well for certain use cases.

It's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to put a certain threat-factor on each person. Based on the features of an individual, like age, whether the person is carrying a bag, looks nervous, etc., you can make a judgment call as to whether that person is a viable threat. If an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity.
Bayes Theorem works in the same way, as we are computing the probability of an event (a person being a threat) based on the probabilities of certain related events (age, presence of a bag or not, nervousness of the person, etc.).

One thing to consider is the independence of these features amongst each other. For example, if a child looks nervous at the event, then the likelihood of that person being a threat is not as high as, say, if it were a grown man who was nervous. To break this down a bit further, there are two features we are considering here: age AND nervousness. If we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives, as there is a strong chance that minors present at the event will be nervous. Hence, by considering the age of a person along with the 'nervousness' feature, we would get a more accurate result as to who are potential threats and who aren't. This is the 'Naive' bit of the theorem: it considers each feature to be independent of the others, which may not always be the case, and hence that can affect the final judgement.

In short, Bayes Theorem calculates the probability of a certain event happening (in our case, a message being spam) based on the joint probabilistic distributions of certain other events (in our case, the appearance of certain words in a message). We will dive into the workings of Bayes Theorem later in the mission, but first, let us understand the data we are going to work with.

Step 1.1: Understanding our dataset

We will be using a dataset originally compiled and posted on the UCI Machine Learning repository, which has a very good collection of datasets for experimental research purposes.
If you're interested, you can review the [abstract](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) and the original [compressed data file](https://archive.ics.uci.edu/ml/machine-learning-databases/00228/) on the UCI site. For this exercise, however, we've gone ahead and downloaded the data for you.

**Here's a preview of the data:**

The columns in the data set are currently not named, and as you can see, there are 2 columns. The first column takes two values: 'ham', which signifies that the message is not spam, and 'spam', which signifies that the message is spam. The second column is the text content of the SMS message that is being classified.

>**Instructions:**
* Import the dataset into a pandas dataframe using the **read_table** method. The file has already been downloaded, and you can access it using the filepath 'smsspamcollection/SMSSpamCollection'. Because this is a tab separated dataset we will be using '\\t' as the value for the 'sep' argument, which specifies this format.
* Also, rename the column names by specifying a list ['label', 'sms_message'] to the 'names' argument of read_table().
* Print the first five values of the dataframe with the new column names. | # '!' allows you to run bash commands from jupyter notebook.
print("List all the files in the current directory\n")
!ls
# The required data table can be found under smsspamcollection/SMSSpamCollection
print("\n List all the files inside the smsspamcollection directory\n")
!ls smsspamcollection
!cat smsspamcollection/SMSSpamCollection
import pandas as pd
# Dataset available using filepath 'smsspamcollection/SMSSpamCollection'
df = pd.read_table("smsspamcollection/SMSSpamCollection", names=['label', 'sms_message'] )
# Output printing out first 5 rows
df[:5] | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
Step 1.2: Data Preprocessing

Now that we have a basic understanding of what our dataset looks like, let's convert our labels to binary variables: 0 to represent 'ham' (i.e. not spam) and 1 to represent 'spam', for ease of computation.

You might be wondering why we need to do this step. The answer lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values, and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally (more specifically, the string labels would be cast to unknown float values). Our model would still be able to make predictions if we left our labels as strings, but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers.

>**Instructions:**
* Convert the values in the 'label' column to numerical values using the map method as follows: {'ham':0, 'spam':1}. This maps the 'ham' value to 0 and the 'spam' value to 1.
* Also, to get an idea of the size of the dataset we are dealing with, print out the number of rows and columns using 'shape'. | '''
Solution
'''
# Map the string labels to integers and overwrite the 'label' column
df['label'] = df.label.map({'ham': 0, 'spam': 1})
print(df.shape)
| _____no_output_____ |
Step 2.1: Bag of Words

What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy.

Here we'd like to introduce the Bag of Words (BoW) concept, which is a term used for problems that have a 'bag of words', or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually, and the order in which the words occur does not matter.

Using a process which we will go through now, we can convert a collection of documents to a matrix, with each document being a row and each word (token) being a column, and the corresponding (row, column) values being the frequency of occurrence of each word or token in that document.

For example, let's say we have 4 documents, which are text messages in our case, as follows:

`['Hello, how are you!', 'Win money, win from home.', 'Call me now', 'Hello, Call you tomorrow?']`

Our objective here is to convert this set of texts to a frequency distribution matrix. Here, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.

Let's break this down and see how we can do this conversion using a small set of documents.

To handle this, we will be using sklearn's [count vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) method, which does the following:

* It tokenizes the string (separates the string into individual words) and gives an integer ID to each token.
* It counts the occurrence of each of those tokens.

**Please Note:**

* The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the `lowercase` parameter, which is by default set to `True`.
* It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the `token_pattern` parameter, which has a default regular expression that selects tokens of 2 or more alphanumeric characters.
* The third parameter to take note of is the `stop_words` parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the', etc. By setting this parameter value to `english`, CountVectorizer will automatically ignore all words (from our input text) that are found in the built-in list of English stop words in scikit-learn. This is extremely helpful, as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.

We will dive into the application of each of these in our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data.

Step 2.2: Implementing Bag of Words from scratch

Before we dive into scikit-learn's Bag of Words (BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes.

**Step 1: Convert all strings to their lower case form.**

Let's say we have a document set:

```
documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?']
```

>>**Instructions:**
* Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method. | '''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = [w.lower() for w in documents]
print(lower_case_documents) | ['hello, how are you!', 'win money, win from home.', 'call me now.', 'hello, call hello you tomorrow?']
**Step 2: Removing all punctuation**

>>**Instructions:**
Remove all punctuation from the strings in the document set. Save the strings into a list called 'sans_punctuation_documents'. | '''
Solution:
'''
punctuation = ",.?!"
import string
sans_punctuation_documents = [w.translate({ord(c): None for c in ".,_!?"})for w in lower_case_documents]
print(sans_punctuation_documents) | ['hello how are you', 'win money win from home', 'call me now', 'hello call hello you tomorrow']
**Step 3: Tokenization**

Tokenizing a sentence in a document set means splitting up the sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and end of a word. Most commonly, we use a single space as the delimiter character for identifying words, and this is true in our documents in this case also.

>>**Instructions:**
Tokenize the strings stored in 'sans_punctuation_documents' using the split() method. Store the final document set in a list called 'preprocessed_documents'. | '''
Solution:
'''
import itertools
preprocessed_documents = [w.split() for w in sans_punctuation_documents]
preprocessed_documents = list(itertools.chain(*preprocessed_documents))
print(preprocessed_documents) | ['hello', 'how', 'are', 'you', 'win', 'money', 'win', 'from', 'home', 'call', 'me', 'now', 'hello', 'call', 'hello', 'you', 'tomorrow']
**Step 4: Count frequencies**

Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the `Counter` method from the Python `collections` library for this purpose. `Counter` counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list.

>>**Instructions:**
Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequency of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'. | '''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
frequency_list = Counter(preprocessed_documents)
pprint.pprint(frequency_list) | Counter({'hello': 3,
'you': 2,
'win': 2,
'call': 2,
'how': 1,
'are': 1,
'money': 1,
'from': 1,
'home': 1,
'me': 1,
'now': 1,
'tomorrow': 1})
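Note that the cell above pooled every document into a single `Counter`. The per-document variant described in the instructions - one frequency dictionary per message, which is what the frequency matrix in Step 2.3 will mirror row by row - would look like the sketch below, reusing the same four tokenized documents.

```python
from collections import Counter

# The tokenized documents before flattening: one token list per message.
docs_tokens = [['hello', 'how', 'are', 'you'],
               ['win', 'money', 'win', 'from', 'home'],
               ['call', 'me', 'now'],
               ['hello', 'call', 'hello', 'you', 'tomorrow']]

# One Counter per document, collected into a list.
per_doc_counts = [Counter(tokens) for tokens in docs_tokens]
for counts in per_doc_counts:
    print(dict(counts))
```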
Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.

We should now have a solid understanding of what is happening behind the scenes in the `sklearn.feature_extraction.text.CountVectorizer` method of scikit-learn. We will now implement `sklearn.feature_extraction.text.CountVectorizer` in the next step.

Step 2.3: Implementing Bag of Words in scikit-learn

Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step. | '''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?'] | _____no_output_____ |
>>**Instructions:**
Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'. | '''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
# Instantiate with defaults; the documents are passed later, via fit()
count_vector = CountVectorizer()
count_vector | _____no_output_____ |
**Data preprocessing with CountVectorizer()**

In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:

* `lowercase = True` The `lowercase` parameter has a default value of `True`, which converts all of our text to its lower case form.
* `token_pattern = (?u)\\b\\w\\w+\\b` The `token_pattern` parameter has a default regular expression value of `(?u)\\b\\w\\w+\\b`, which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2 as individual tokens or words.
* `stop_words` The `stop_words` parameter, if set to `english`, will remove all words from our document set that match a list of English stop words defined in scikit-learn. Considering the small size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not use stop words, and we won't be setting this parameter value.

You can take a look at all the parameter values of your `count_vector` object by simply printing out the object as follows: | '''
Practice node:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
# No need to revise this code
print(count_vector) | CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8',
input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)
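The `token_pattern` shown in the printout above can be tried directly with Python's `re` module. This quick sketch uses `re.findall` on its own (not CountVectorizer itself) to show how punctuation is dropped and single-character tokens like 'I' or '2' are discarded by the length-two requirement:

```python
import re

# CountVectorizer's default token pattern: alphanumeric runs of length >= 2.
TOKEN_PATTERN = r"(?u)\b\w\w+\b"

text = "Hello, how are you! I won 2 prizes."
tokens = re.findall(TOKEN_PATTERN, text.lower())
print(tokens)  # ['hello', 'how', 'are', 'you', 'won', 'prizes']
```

Notice that 'i' and '2' vanish entirely - a detail worth remembering if single-character words ever matter in your data.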
>>**Instructions:**
Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words which have been categorized as features using the get_feature_names() method. | '''
Solution:
'''
# No need to revise this code
count_vector.fit(documents)
count_vector.get_feature_names() | _____no_output_____ |
The `get_feature_names()` method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.

>>**Instructions:**
Create a matrix with each row representing one of the 4 documents, and each column representing a word (feature name). Each value in the matrix will represent the frequency of the word in that column occurring in the particular document in that row.

You can do this using the transform() method of CountVectorizer, passing in the document data set as the argument. The transform() method returns a matrix of NumPy integers, which you can convert to an array using toarray(). Call the array 'doc_array'. | '''
Solution
'''
# Transform the documents into a document-term matrix, then convert to an array
doc_array = count_vector.transform(documents).toarray()
doc_array | _____no_output_____ |
Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make this easier to understand, our next step is to convert this array into a dataframe and name the columns appropriately.

>>**Instructions:**
Convert the 'doc_array' we created into a dataframe, with the column names as the words (feature names). Call the dataframe 'frequency_matrix'. | '''
Solution
'''
import pandas as pd
# Label the columns with the vocabulary words so the matrix is readable
frequency_matrix = pd.DataFrame(doc_array, columns=count_vector.get_feature_names())
frequency_matrix | _____no_output_____ |
Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created.

One potential issue that can arise from using this method is that if our dataset of text is extremely large (say, if we have a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. For example, words like 'is', 'the', 'an', pronouns, grammatical constructs, etc., could skew our matrix and affect our analysis.

There are a couple of ways to mitigate this. One way is to use the `stop_words` parameter and set its value to `english`. This will automatically ignore all the words in our input text that are found in a built-in list of English stop words in scikit-learn. Another way of mitigating this is by using the [tfidf](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer) method. This method is out of scope for the context of this lesson.

Step 3.1: Training and testing sets

Now that we understand how to use the Bag of Words approach, we can return to our original, larger UCI dataset and proceed with our analysis. Our first step is to split our dataset into a training set and a testing set so we can first train, and then test, our model.

>>**Instructions:**
Split the dataset into a training and testing set using the train_test_split method in sklearn, and print out the number of rows we have in each of our training and testing data. Split the data using the following variables:
* `X_train` is our training data for the 'sms_message' column.
* `y_train` is our training data for the 'label' column.
* `X_test` is our testing data for the 'sms_message' column.
* `y_test` is our testing data for the 'label' column. | '''
Solution
'''
# split into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0])) | _____no_output_____ |
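With the defaults used above, `train_test_split` holds out 25% of the rows for testing. A rough, stdlib-only sketch of what it does - shuffle the row indices, then slice off the test fraction, ignoring sklearn's stratification options - makes the resulting row counts predictable; `simple_split` is a made-up name for this illustration.

```python
import random

def simple_split(rows, test_size=0.25, seed=1):
    # Shuffle indices reproducibly, then slice off the test fraction.
    rng = random.Random(seed)
    indices = list(range(len(rows)))
    rng.shuffle(indices)
    n_test = int(len(rows) * test_size)
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return [rows[i] for i in train_idx], [rows[i] for i in test_idx]

data = [f"message {i}" for i in range(5572)]  # same size as the SMS dataset
train, test = simple_split(data)
print(len(train), len(test))  # 4179 1393
```

The key property is that every row lands in exactly one of the two sets, so the test messages are genuinely unseen during training.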
Step 3.2: Applying Bag of Words processing to our dataset

Now that we have split the data, our next objective is to follow the steps from "Step 2: Bag of Words" and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:

* First, we have to fit our training data (`X_train`) into `CountVectorizer()` and return the matrix.
* Secondly, we have to transform our testing data (`X_test`) to return the matrix.

Note that `X_train` is our training data for the 'sms_message' column in our dataset, and we will be using this to train our model. `X_test` is our testing data for the 'sms_message' column, and this is the data we will be using (after transformation to a matrix) to make predictions on. We will then compare those predictions with `y_test` in a later step.

For now, we have provided the code that does the matrix transformations for you! | '''
[Practice Node]
The code for this segment is in 2 parts. First, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3.
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test) | _____no_output_____ |
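The reason we only fit on the training data is that the vocabulary must come from what the model is allowed to see: words that appear only in the test set are simply dropped at transform time. The following is a small dict-based sketch of that behaviour - not sklearn's actual code, just the fit-then-transform idea in plain Python.

```python
# Build a vocabulary from training text only, then count against it.
def fit_vocabulary(docs):
    vocab = sorted({w for doc in docs for w in doc.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def transform(docs, vocab):
    rows = []
    for doc in docs:
        row = [0] * len(vocab)
        for w in doc.lower().split():
            if w in vocab:          # unseen test-set words are ignored
                row[vocab[w]] += 1
        rows.append(row)
    return rows

vocab = fit_vocabulary(["win money now", "call me now"])
print(sorted(vocab))                          # ['call', 'me', 'money', 'now', 'win']
print(transform(["win FREE money"], vocab))   # 'free' was never seen, so it vanishes
```

This is exactly why `count_vector.fit_transform` is called on `X_train` but only `count_vector.transform` on `X_test` above.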
Step 4.1: Bayes Theorem implementation from scratch Now that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of "prior probabilities" - or just "priors." These "priors" are the probabilities that we are aware of, or that are given to us. And Bayes theorem is also composed of the "posterior probabilities," or just "posteriors," which are the probabilities we are looking to compute using the "priors". Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. In the medical field, such probabilities play a very important role as they often deal with life and death situations. We assume the following:`P(D)` is the probability of a person having Diabetes. Its value is `0.01`, or in other words, 1% of the general population has diabetes (disclaimer: these values are assumptions and are not reflective of any actual medical study).`P(Pos)` is the probability of getting a positive test result.`P(Neg)` is the probability of getting a negative test result.`P(Pos|D)` is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value `0.9`. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.`P(Neg|~D)` is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. 
This also has a value of `0.9` and is therefore correct 90% of the time. This is also called the Specificity or True Negative Rate. The Bayes formula is as follows: `P(A|B) = P(B|A) * P(A) / P(B)`, where:* `P(A)` is the prior probability of A occurring independently. In our example this is `P(D)`. This value is given to us.* `P(B)` is the prior probability of B occurring independently. In our example this is `P(Pos)`.* `P(A|B)` is the posterior probability that A occurs given B. In our example this is `P(D|Pos)`. That is, **the probability of an individual having diabetes, given that this individual got a positive test result. This is the value that we are looking to calculate.*** `P(B|A)` is the prior probability of B occurring, given A. In our example this is `P(Pos|D)`. This value is given to us. Putting our values into the formula for Bayes' theorem we get:`P(D|Pos) = P(D) * P(Pos|D) / P(Pos)`The probability of getting a positive test result `P(Pos)` can be calculated using the Sensitivity and Specificity as follows:`P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]` | '''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg|~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos)) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
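The value computed in this step is just the law of total probability. As a quick self-contained check (using the assumed values from this walkthrough — 1% prevalence, 90% sensitivity and specificity), the same calculation can be wrapped in a small helper:

```python
# Sketch: P(Pos) via the law of total probability, with the assumed values
# from this walkthrough. The helper name is illustrative, not from the lesson.
def prob_positive(p_d, sensitivity, specificity):
    """P(Pos) = P(D) * Sensitivity + P(~D) * (1 - Specificity)."""
    return p_d * sensitivity + (1 - p_d) * (1 - specificity)

print(prob_positive(0.01, 0.9, 0.9))  # ~0.108
```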
**Using all of this information we can calculate our posteriors as follows:** The probability of an individual having diabetes, given that, that individual got a positive test result:`P(D|Pos) = (P(D) * Sensitivity)) / P(Pos)`The probability of an individual not having diabetes, given that, that individual got a positive test result:`P(~D|Pos) = (P(~D) * (1-Specificity)) / P(Pos)`The sum of our posteriors will always equal `1`. | '''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is: {}'.format(p_diabetes_pos))
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = P(~D) * P(Pos|~D) / P(Pos)
Note that P(Pos|~D) can be computed as 1 - P(Neg|~D).
Therefore:
P(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
'''
Solution
'''
# P(Pos|~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is: {}'.format(p_no_diabetes_pos)) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
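As a compact sanity check, the two posteriors can be computed together (a sketch with the same assumed numbers); since a person either has diabetes or does not, they must sum to 1:

```python
# Sketch: both posteriors from the diabetes example, with the assumed values.
p_d, sensitivity, specificity = 0.01, 0.9, 0.9

p_pos = p_d * sensitivity + (1 - p_d) * (1 - specificity)  # P(Pos) ~ 0.108
p_d_pos = p_d * sensitivity / p_pos                        # P(D|Pos)  ~ 0.0833
p_nd_pos = (1 - p_d) * (1 - specificity) / p_pos           # P(~D|Pos) ~ 0.9167

print(round(p_d_pos, 4), round(p_nd_pos, 4))
```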
Congratulations! You have implemented Bayes' theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This of course assumes that only 1% of the entire population has diabetes, which is just an assumption. **What does the term 'Naive' in 'Naive Bayes' mean?** The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of `0` and `1`, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, applying Bayes' theorem directly would quickly become unwieldy. Naive Bayes is an extension of Bayes' theorem that assumes that all the features are independent of each other. Step 4.2: Naive Bayes implementation from scratch Now that you have understood the ins and outs of Bayes' theorem, we will extend it to consider cases where we have more than one feature. 
Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party, and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:* Probability that Jill Stein says 'freedom': 0.1 ---------> `P(F|J)`* Probability that Jill Stein says 'immigration': 0.1 -----> `P(I|J)`* Probability that Jill Stein says 'environment': 0.8 -----> `P(E|J)`* Probability that Gary Johnson says 'freedom': 0.7 -------> `P(F|G)`* Probability that Gary Johnson says 'immigration': 0.2 ---> `P(I|G)`* Probability that Gary Johnson says 'environment': 0.1 ---> `P(E|G)`And let us also assume that the probability of Jill Stein giving a speech, `P(J)`, is `0.5`, and the same for Gary Johnson, `P(G) = 0.5`. Given this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes' theorem comes into play, as we are considering two features, 'freedom' and 'immigration'. Now we are at a place where we can define the formula for the Naive Bayes' theorem: `P(y|x1, ..., xn) = [P(y) * P(x1|y) * ... * P(xn|y)] / P(x1, ..., xn)`. Here, `y` is the class variable (in our case the name of the candidate) and `x1` through `xn` are the feature vectors (in our case the individual words). The theorem makes the assumption that each of the feature vectors or words (`xi`) is independent of the others. To break this down, we have to compute the following posterior probabilities:* `P(J|F,I)`: Given the words 'freedom' and 'immigration' were said, what's the probability they were said by Jill? Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: `P(J|F,I)` = `(P(J) * P(F|J) * P(I|J)) / P(F,I)`. Here `P(F,I)` is the probability of the words 'freedom' and 'immigration' being said in a speech. * `P(G|F,I)`: Given the words 'freedom' and 'immigration' were said, what's the probability they were said by Gary? 
Using the formula, we can compute this as follows: `P(G|F,I)` = `(P(G) * P(F|G) * P(I|G)) / P(F,I)` | '''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probabilities of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text.
The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text.
The third step is to add both of these probabilities and you will get P(F,I).
'''
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(F|J)
p_j_f = 0.1
# P(I|J)
p_j_i = 0.1
p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(F|G)
p_g_f = 0.7
# P(I|G)
p_g_i = 0.2
p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i)) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
Now we can compute the probability of `P(J|F,I)`, the probability of Jill Stein saying the words 'freedom' and 'immigration' and `P(G|F,I)`, the probability of Gary Johnson saying the words 'freedom' and 'immigration'. | '''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = (p_j * p_j_f * p_j_i) / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = (p_g * p_g_f * p_g_i) / p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi)) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
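The two computations above generalize naturally to any set of classes and word probabilities. Here is a small sketch of that generalization (the function name and dictionary layout are my own, not from the lesson):

```python
# Sketch: the two-word Naive Bayes computation as a reusable helper,
# assuming word independence, using the probabilities from the example.
def nb_posteriors(priors, likelihoods, words):
    """Return P(class | words) for every class."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for w in words:
            score *= likelihoods[c][w]
        scores[c] = score
    total = sum(scores.values())          # this is P(words), e.g. P(F, I)
    return {c: s / total for c, s in scores.items()}

priors = {"jill": 0.5, "gary": 0.5}
likelihoods = {
    "jill": {"freedom": 0.1, "immigration": 0.1, "environment": 0.8},
    "gary": {"freedom": 0.7, "immigration": 0.2, "environment": 0.1},
}
post = nb_posteriors(priors, likelihoods, ["freedom", "immigration"])
print(post)  # jill ~ 0.067, gary ~ 0.933
```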
And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech, as compared with the 93.3% chance for Gary Johnson of the Libertarian Party. For another example of Naive Bayes, let's consider searching for images using the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually. If the search engine only searched for the words individually, we would get results of images tagged with 'Sacramento,' like pictures of city landscapes, and images of 'Kings,' which might be pictures of crowns or kings from history. But associating the two terms together would produce images of the basketball team. In the first approach we would treat the words as independent entities, so it would be considered 'naive.' We don't usually want this approach from a search engine, but it can be extremely useful in other cases. Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm *looks at each word individually and not as associated entities* with any kind of link between them. In the case of spam detectors, this usually works, as there are certain red flag words in an email which are highly reliable in classifying it as spam. For example, emails with words like 'viagra' are usually classified as spam. Step 5: Naive Bayes implementation using scikit-learn Now let's return to our spam classification context. Thankfully, sklearn has several Naive Bayes implementations that we can use, so we do not have to do the math from scratch. We will be using the `sklearn.naive_bayes` module to make predictions on our SMS messages dataset. 
Specifically, we will be using the multinomial Naive Bayes algorithm. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand, Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian (normal) distribution. | '''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and 'y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
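Under the hood, `MultinomialNB` fits per-class log-priors and Laplace-smoothed word-count log-likelihoods. A rough from-scratch sketch on a toy corpus (the documents, labels, and helper names below are all made up for illustration; the real classifier is the sklearn one fit above):

```python
# Sketch of what a multinomial Naive Bayes classifier computes.
import math
from collections import Counter

def train_multinomial_nb(docs, labels, alpha=1.0):
    """Log-priors and Laplace-smoothed log-likelihoods for each class."""
    classes = sorted(set(labels))
    vocab = sorted({w for d in docs for w in d.split()})
    log_prior, log_like = {}, {}
    for c in classes:
        c_docs = [d for d, y in zip(docs, labels) if y == c]
        log_prior[c] = math.log(len(c_docs) / len(docs))
        counts = Counter(w for d in c_docs for w in d.split())
        total = sum(counts.values())
        log_like[c] = {w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
                       for w in vocab}
    return classes, log_prior, log_like

def predict_nb(doc, classes, log_prior, log_like):
    """Pick the class with the highest posterior log-score (unknown words ignored)."""
    scores = {c: log_prior[c] + sum(log_like[c].get(w, 0.0) for w in doc.split())
              for c in classes}
    return max(scores, key=scores.get)

docs = ["win cash now", "free prize win", "meeting at noon", "lunch at noon"]
labels = ["spam", "spam", "ham", "ham"]
model = train_multinomial_nb(docs, labels)
print(predict_nb("free cash prize", *model))  # spam
```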
Now that predictions have been made on our test set, we need to check the accuracy of our predictions. Step 6: Evaluating our model Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, so first let's review them.**Accuracy** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).**Precision** tells us what proportion of messages we classified as spam, actually were spam.It is a ratio of true positives (words classified as spam, and which actually are spam) to all positives (all words classified as spam, regardless of whether that was the correct classification). In other words, precision is the ratio of`[True Positives/(True Positives + False Positives)]`**Recall (sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam.It is a ratio of true positives (words classified as spam, and which actually are spam) to all the words that were actually spam. In other words, recall is the ratio of`[True Positives/(True Positives + False Negatives)]`For classification problems that are skewed in their classification distributions like in our case - for example if we had 100 text messages and only 2 were spam and the other 98 weren't - accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the **F1 score**, which is the weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score. 
We will be using all 4 of these metrics to make sure our model does well. For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing. | '''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions)))
print('Recall score: ', format(recall_score(y_test, predictions)))
print('F1 score: ', format(f1_score(y_test, predictions))) | _____no_output_____ | MIT | Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | psnx/artificial-intelligence |
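To make the formulas above concrete, the same four scores can be computed by hand from the confusion-matrix counts (the toy labels below are made up; 1 = spam, 0 = not spam):

```python
# Hand-computed accuracy, precision, recall and F1 on toy labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```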
Visualizing Large Graphs with Datashader `DatashaderVisualizer` is capable of responsively showing very large graphs, but is less interactive than the [CytoscapeVisualizer](./Cytoscape_Example.ipynb). | import ipywidgets as W
import traitlets as T
from rdflib import Graph
from ipyradiant import DatashaderVisualizer, LayoutSelector | _____no_output_____ | BSD-3-Clause | examples/Datashader_Example.ipynb | lnijhawan/ipyradiant |
Here a `DatashaderVisualizer` is linked to a `LayoutSelector` to show the largest dataset from the [example data](./data/README.md). The `LayoutSelector` changes the layout algorithm used. A small `HTML` widget also reports which nodes are selected. Try exploring the various tools offered in the toolbar. | g = Graph().parse("data/tree.jsonld", format="json-ld")
ds = DatashaderVisualizer(graph=g)
ls = LayoutSelector(vis=ds)
sn = W.HTML()
T.dlink(
(ds, "selected_nodes"),
(sn, "value"),
lambda n: "Selected Nodes: <pre>{}</pre>".format("\n".join(sorted(n))),
)
ds_ex = W.HBox([ds, W.VBox([ls, sn])])
ds_ex | _____no_output_____ | BSD-3-Clause | examples/Datashader_Example.ipynb | lnijhawan/ipyradiant |
Setup Installing Dependencies and Mounting | %%capture
!pip install transformers
# Mount Google Drive
from google.colab import drive # import drive from google colab
ROOT = "/content/drive"
drive.mount(ROOT, force_remount=True) | Mounted at /content/drive
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Imports | import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import random
import json
import time
import datetime
import os
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config, AdamW, get_linear_schedule_with_warmup
import torch
torch.manual_seed(64)
from torch.utils.data import Dataset, random_split, DataLoader, RandomSampler, SequentialSampler
!pip show torch | Name: torch
Version: 1.8.1+cu101
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /usr/local/lib/python3.7/dist-packages
Requires: numpy, typing-extensions
Required-by: torchvision, torchtext, fastai
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Setting Device | %cd /content/drive/MyDrive/AutoCompose/
!nvidia-smi
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Data Preparation Data Collection | with open("data/anticipation.json", "r") as f:
data = json.load(f)
data = [poem for poem in data if len(poem["poem"].split()) < 100]
print(len(data))
data[:5] | 25070
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Data Model | class PoemDataset(Dataset):
def __init__(self, poems, tokenizer, max_length=768, gpt2_type="gpt2"):
self.tokenizer = tokenizer
self.input_ids = []
self.attn_masks = []
for poem in poems:
encodings_dict = tokenizer("<|startoftext|>"+poem["poem"]+"<|endoftext|>",
truncation=True,
max_length=max_length,
padding="max_length")
self.input_ids.append(torch.tensor(encodings_dict["input_ids"]))
self.attn_masks.append(torch.tensor(encodings_dict["attention_mask"]))
def __len__(self):
return len(self.input_ids)
def __getitem__(self, idx):
return self.input_ids[idx], self.attn_masks[idx]
# Loading GPT2 Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2',
bos_token='<|startoftext|>',
eos_token='<|endoftext|>',
pad_token='<|pad|>') | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Rough | print(tokenizer.encode("<|startoftext|> Hello World <|endoftext|>", padding="max_length", max_length=10))
print(len(tokenizer))
# Finding length of maximum token in dataset
max_length = max([len(tokenizer.encode(poem["poem"])) for poem in data])
print(max_length)
max_length = 100
x = [len(tokenizer.encode(poem["poem"])) for poem in data if len(tokenizer.encode(poem["poem"])) < 100]
y = [len(tokenizer.encode(poem["poem"])) - len(poem["poem"].split()) for poem in data]
print(sum(y)/len(y))
print(max(x), len(x))
plt.hist(x, bins = 5)
plt.show() | 1967 382741
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Dataset Creation | batch_size = 32
max_length = 100
dataset = PoemDataset(data, tokenizer, max_length=max_length)
# Split data into train and validation sets
train_size = int(0.9*len(dataset))
val_size = len(dataset) - train_size
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
print("Number of samples for training =", train_size)
print("Number of samples for validation =", val_size)
train_dataset[0]
train_dataloader = DataLoader(train_dataset,
sampler=RandomSampler(train_dataset),
batch_size=batch_size)
val_dataloader = DataLoader(val_dataset,
sampler=SequentialSampler(val_dataset),
batch_size=batch_size) | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
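The split and batch counts follow simple arithmetic, which is worth sanity-checking before training. This sketch (the helper names are mine) shows the expected counts for the 25,070-poem dataset loaded earlier:

```python
# Sketch: expected train/validation sizes and batches per epoch.
import math

def split_sizes(n, train_frac=0.9):
    """Mirror the int(0.9 * n) split used above."""
    train = int(train_frac * n)
    return train, n - train

def num_batches(n, batch_size, drop_last=False):
    """A DataLoader yields ceil(n / batch_size) batches unless drop_last=True."""
    return n // batch_size if drop_last else math.ceil(n / batch_size)

train_n, val_n = split_sizes(25070)   # 22563 train, 2507 validation
print(num_batches(train_n, 32))       # training batches per epoch
```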
Finetune GPT2 Language Model Importing Pre-Trained GPT2 Model | # Load model configuration
config = GPT2Config.from_pretrained("gpt2")
# Create model instance and set embedding length
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
model.resize_token_embeddings(len(tokenizer))
# Running the model on GPU
model = model.to(device)
# <<< Optional >>>
# Setting seeds to enable reproducible runs
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val) | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Scheduling Optimizer | epochs = 4
warmup_steps = 1e2
sample_every = 100
print(len(train_dataloader))
print(len(train_dataset))
# Using AdamW optimizer with default parameters
optimizer = AdamW(model.parameters(), lr=5e-4, eps=1e-8)
# Toatl training steps is the number of data points times the number of epochs
total_training_steps = len(train_dataloader)*epochs
# Setting a variable learning rate using scheduler
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=warmup_steps,
num_training_steps=total_training_steps) | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
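`get_linear_schedule_with_warmup` scales the base learning rate by a piecewise-linear factor: it ramps up from 0 to 1 over the warmup steps, then decays linearly back to 0. A pure-Python sketch of that shape (my approximation of the schedule, not the library's code):

```python
# Sketch: the multiplier applied to the base learning rate at each step.
def lr_factor(step, warmup_steps, total_steps):
    """Linear warmup to 1.0, then linear decay to 0.0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(lr_factor(50, 100, 1000))   # 0.5  (halfway through warmup)
print(lr_factor(550, 100, 1000))  # 0.5  (halfway through decay)
```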
Training | def format_time(elapsed):
return str(datetime.timedelta(seconds=int(round(elapsed))))
total_t0 = time.time()
training_stats = []
model = model.to(device)
for epoch_i in range(epochs):
print(f'Beginning epoch {epoch_i+1} of {epochs}')
t0 = time.time()
total_train_loss = 0
model.train()
# GPT-2 shifts the labels by one timestep internally, so we pass input_ids as labels
for step, batch in enumerate(train_dataloader):
b_input_ids = batch[0].to(device)
b_labels = batch[0].to(device)
b_masks = batch[1].to(device)
model.zero_grad()
outputs = model(b_input_ids,
labels=b_labels,
attention_mask=b_masks)
loss = outputs[0]
batch_loss = loss.item()
total_train_loss += batch_loss
# Sampling every x steps
if step != 0 and step % sample_every == 0:
elapsed = format_time(time.time()-t0)
print(f'Batch {step} of {len(train_dataloader)}. Loss: {batch_loss}. Time: {elapsed}')
model.eval()
sample_outputs = model.generate(
bos_token_id=random.randint(1,30000),
do_sample=True,
top_k=50,
max_length = 200,
top_p=0.95,
num_return_sequences=1
)
for i, sample_output in enumerate(sample_outputs):
print(f'Example ouput: {tokenizer.decode(sample_output, skip_special_tokens=True)}')
print()
model.train()
loss.backward()
optimizer.step()
scheduler.step()
avg_train_loss = total_train_loss / len(train_dataloader)
training_time = format_time(time.time()-t0)
print(f'Average Training Loss: {avg_train_loss}. Epoch time: {training_time}')
print()
t0 = time.time()
model.eval()
total_eval_loss = 0
nb_eval_steps = 0
for batch in val_dataloader:
b_input_ids = batch[0].to(device)
b_labels = batch[0].to(device)
b_masks = batch[1].to(device)
with torch.no_grad():
outputs = model(b_input_ids,
attention_mask = b_masks,
labels=b_labels)
loss = outputs[0]
batch_loss = loss.item()
total_eval_loss += batch_loss
avg_val_loss = total_eval_loss / len(val_dataloader)
val_time = format_time(time.time() - t0)
print(f'Validation loss: {avg_val_loss}. Validation Time: {val_time}')
print()
# Record all statistics from this epoch.
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Training Time': training_time,
'Validation Time': val_time
}
)
print("------------------------------")
print(f'Total training took {format_time(time.time()-total_t0)}') | Beginning epoch 1 of 4
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
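The loss accumulated in the loop above is the mean negative log-likelihood of the correct next token at each position. A toy sketch of what that number means (pure Python, not the model's actual computation):

```python
# Sketch: language-model loss as mean negative log-likelihood.
import math

def lm_loss(correct_token_probs):
    """Mean -log(p) over the probabilities the model gave the true next tokens."""
    return sum(-math.log(p) for p in correct_token_probs) / len(correct_token_probs)

# A model that is always certain and correct has zero loss; one that gives
# each true next token probability 1/e has a loss of about 1.0.
print(lm_loss([1.0, 1.0, 1.0]))
print(lm_loss([math.exp(-1)] * 4))
```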
Visualizations | pd.set_option('precision', 2)
df_stats = pd.DataFrame(data=training_stats)
df_stats = df_stats.set_index('epoch')
# Use plot styling from seaborn.
sns.set(style='darkgrid')
# Increase the plot size and font size.
sns.set(font_scale=1.5)
plt.rcParams["figure.figsize"] = (12,6)
# Plot the learning curve.
plt.plot(df_stats['Training Loss'], 'b-o', label="Training")
plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation")
# Label the plot.
plt.title("Training & Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.xticks([1, 2, 3, 4])
plt.show() | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Generate Poems | model.eval()
prompt = "<|startoftext|>"
generated = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
generated = generated.to(device)
sample_outputs = model.generate(
generated,
do_sample=True,
top_k=50,
max_length = 300,
top_p=0.95,
num_return_sequences=3
)
for i, sample_output in enumerate(sample_outputs):
print("{}: {}\n\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) | Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
| MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
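The `top_p=0.95` argument above enables nucleus sampling: only the smallest set of highest-probability tokens whose cumulative mass reaches 0.95 is kept, and the next token is sampled from that set. A simplified pure-Python sketch of the filtering step (not the Hugging Face implementation):

```python
# Sketch: nucleus (top-p) filtering over a toy probability distribution.
def top_p_filter(probs, p=0.95):
    """Indices of the smallest high-probability set with cumulative mass >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break
    return kept

print(top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.9))  # [0, 1, 2]
```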
Saving and Loading Finetuned Model | output_dir = "/content/drive/My Drive/AutoCompose/models/anticipation2"
# Save generated poems
# sample_outputs = model.generate(
# generated,
# do_sample=True,
# top_k=50,
# max_length = 300,
# top_p=0.95,
# num_return_sequences=25
# )
# with open(os.path.join(output_dir, 'generated_poems.txt'), "w") as outfile:
# for i, sample_output in enumerate(sample_outputs):
# outfile.write(tokenizer.decode(sample_output, skip_special_tokens=True)+"\n\n")
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
# Good practice: save your training arguments together with the trained model
# torch.save(training_stats, os.path.join(output_dir, 'training_args.bin'))
# Save generated poems
sample_outputs = model.generate(
generated,
do_sample=True,
top_k=50,
max_length = 300,
top_p=0.95,
num_return_sequences=25
)
with open(os.path.join(output_dir, 'generated_poems.txt'), "w") as outfile:
for i, sample_output in enumerate(sample_outputs):
outfile.write(tokenizer.decode(sample_output, skip_special_tokens=True)+"\n\n")
# Loading saved model
model_dir = "/content/drive/My Drive/AutoCompose/models/neutral"
model = GPT2LMHeadModel.from_pretrained(model_dir)
tokenizer = GPT2Tokenizer.from_pretrained(model_dir)
model.to(device) | _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Version Control | !git config --global user.email "prajwalguptacr@gmail.com"
!git config --global user.name "prajwal"
import json
f = open("AutoComposeCreds.json")
data = json.load(f)
f.close()
print(data)
username="prajwalcr"
repository="AutoCompose"
git_token = data["git-token"]
!git clone https://{git_token}@github.com/{username}/{repository}
%cd /content/drive/MyDrive/AutoCompose/
!git pull
!git push
!git add .
!git commit -m "anger model trained on uni-m dataset added"
!git filter-branch --tree-filter 'rm -rf models/' HEAD
!git add .
!git status
!git commit -m "new models added"
| _____no_output_____ | MIT | notebooks/AutoCompose.ipynb | prajwalcr/AutoCompose |
Define the Convolutional Neural NetworkIn this notebook and in `models.py`:1. Define a CNN with images as input and keypoints as output2. Construct the transformed FaceKeypointsDataset, just as before3. Train the CNN on the training data, tracking loss4. See how the trained model performs on test data5. If necessary, modify the CNN structure and model hyperparameters, so that it performs *well* **\*****\*** What does *well* mean?"Well" means that the model's loss decreases during training **and**, when applied to test image data, the model produces keypoints that closely match the true keypoints of each face. And you'll see examples of this later in the notebook.--- CNN ArchitectureRecall that CNN's are defined by a few types of layers:* Convolutional layers* Maxpooling layers* Fully-connected layers Define model in the provided file `models.py` file PyTorch Neural NetsTo define a neural network in PyTorch, we have defined the layers of a model in the function `__init__` and defined the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in.Note: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network. 
Define the Layers in ` __init__`As a reminder, a conv/pool layer may be defined like this (in `__init__`):``` 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernelself.conv1 = nn.Conv2d(1, 32, 3) maxpool that uses a square window of kernel_size=2, stride=2self.pool = nn.MaxPool2d(2, 2) ``` Refer to Layers in `forward`Then referred to in the `forward` function like this, in which the conv1 layer has a ReLu activation applied to it before maxpooling is applied:```x = self.pool(F.relu(self.conv1(x)))```Best practice is to place any layers whose weights will change during the training process in `__init__` and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, should appear *only* in the `forward` function. Why models.pyWe are tasked with defining the network in the `models.py` file so that any models we define can be saved and loaded by name in different notebooks in this project directory. For example, by defining a CNN class called `Net` in `models.py`, we can then create that same architecture in this and other notebooks by simply importing the class and instantiating a model:``` from models import Net net = Net()``` | # load the data if you need to; if you have already loaded the data, you may comment this cell out
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data | mkdir: cannot create directory ‘/data’: File exists
--2019-02-24 07:21:17-- https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.176.157
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.176.157|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 338613624 (323M) [application/zip]
Saving to: ‘/data/train-test-data.zip.1’
train-test-data.zip 100%[===================>] 322.93M 93.8MB/s in 3.6s
2019-02-24 07:21:21 (89.3 MB/s) - ‘/data/train-test-data.zip.1’ saved [338613624/338613624]
Archive: /data/train-test-data.zip
| MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
**Note:** Workspaces automatically close connections after 30 minutes of inactivity (including inactivity while training!). Use the code snippet below to keep your workspace alive during training. (The active_session context manager is imported below.)```from workspace_utils import active_sessionwith active_session(): train_model(num_epochs)``` | # import the usual resources
import matplotlib.pyplot as plt
import numpy as np
# import utilities to keep workspaces alive during model training
from workspace_utils import active_session
# watch for any changes in model.py, if it changes, re-load it automatically
%load_ext autoreload
%autoreload 2
## Define the Net in models.py
import torch
import torch.nn as nn
import torch.nn.functional as F
## Once you've define the network, you can instantiate it
# one example conv layer has been provided for you
from models import Net
net = Net()
print(net) | Net(
(conv1): Conv2d(1, 32, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))
(conv3): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1))
(conv3_bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv4): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
(conv5): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
(fc1): Linear(in_features=18432, out_features=1024, bias=True)
(fc2): Linear(in_features=1024, out_features=512, bias=True)
(fc3): Linear(in_features=512, out_features=136, bias=True)
(dropout): Dropout(p=0.25)
)
| MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Transform the dataset. To prepare for training, we have created a transformed dataset of images and keypoints. Define a data transform. In PyTorch, a convolutional neural network expects a torch image of a consistent size as input. For efficient training, and so our model's loss does not blow up during training, it is also suggested that we normalize the input images and keypoints. The necessary transforms have been defined in `data_load.py` and we **do not** need to modify these. To define the data transform below, we have used a [composition](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#compose-transforms) of: 1. Rescaling and/or cropping the data, such that we are left with a square image (the suggested size is 224x224px); 2. Normalizing the images and keypoints, turning each RGB image into a grayscale image with a color range of [0, 1] and transforming the given keypoints into a range of [-1, 1]; 3. Turning these images and keypoints into Tensors. **This transform will be applied to the training data and, later, the test data**. It will change how we go about displaying these images and keypoints, but these steps are essential for efficient training. | from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
# the dataset we created in Notebook 1 is copied in the helper file `data_load.py`
from data_load import FacialKeypointsDataset
# the transforms we defined in Notebook 1 are in the helper file `data_load.py`
from data_load import Rescale, RandomCrop, Normalize, ToTensor
## define the data_transform using transforms.Compose([all tx's, . , .])
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# testing that you've defined a transform
assert(data_transform is not None), 'Define a data_transform'
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
root_dir='/data/training/',
transform=data_transform)
print('Number of images: ', len(transformed_dataset))
# iterate through the transformed dataset and print some stats about the first few samples
for i in range(4):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size()) | Number of images: 3462
0 torch.Size([1, 224, 224]) torch.Size([68, 2])
1 torch.Size([1, 224, 224]) torch.Size([68, 2])
2 torch.Size([1, 224, 224]) torch.Size([68, 2])
3 torch.Size([1, 224, 224]) torch.Size([68, 2])
| MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Batching and loading data. Next, having defined the transformed dataset, we can use PyTorch's DataLoader class to load the training data in batches of whatever size, as well as to shuffle the data for training the model. You can read more about the parameters of the DataLoader in [this documentation](http://pytorch.org/docs/master/data.html). Batch size. Decide on a good batch size for training your model. Try both small and large batch sizes and note how the loss decreases as the model trains. Too large a batch size may cause your model to crash and/or run out of memory while training. **Note for Windows users**: Please change the `num_workers` to 0 or you may face some issues with your DataLoader failing. | # load training data in batches
batch_size = 10
train_loader = DataLoader(transformed_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=4)
| _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Before training. Take a look at how this model performs before it trains. You should see that the keypoints it predicts start off in one spot and don't match the keypoints on a face at all! It's interesting to visualize this behavior so that you can compare it to the model after training and see how the model has improved. Load in the test dataset. The test dataset is one that this model has *not* seen before, meaning it has not trained with these images. We'll load in this test data and, before and after training, see how our model performs on this set! To visualize this test data, we have to go through some un-transformation steps to turn our tensors back into displayable images and to turn our keypoints back into a recognizable range. | # load in the test data, using the dataset class
# AND apply the data_transform you defined above
# create the test dataset
test_dataset = FacialKeypointsDataset(csv_file='/data/test_frames_keypoints.csv',
root_dir='/data/test/',
transform=data_transform)
# load test data in batches
batch_size = 10
test_loader = DataLoader(test_dataset,
batch_size=batch_size,
shuffle=True,
num_workers=4) | _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Apply the model on a test sample. To test the model on a test sample of data, we have to follow these steps: 1. Extract the image and ground truth keypoints from a sample. 2. Wrap the image in a Variable, so that the net can process it as input and track how it changes as the image moves through the network. 3. Make sure the image is a FloatTensor, which the model expects. 4. Forward pass the image through the net to get the predicted output keypoints. This function tests how the network performs on the first batch of test data. It returns the images, the predicted keypoints (produced by the model), and the ground truth keypoints. | # test the model on a batch of test images
def net_sample_output():
# iterate through the test dataset
for i, sample in enumerate(test_loader):
# get sample data: images and ground truth keypoints
images = sample['image']
key_pts = sample['keypoints']
# convert images to FloatTensors
images = images.type(torch.FloatTensor)
# forward pass to get net output
output_pts = net(images)
# reshape to batch_size x 68 x 2 pts
output_pts = output_pts.view(output_pts.size()[0], 68, -1)
# break after first image is tested
if i == 0:
return images, output_pts, key_pts
| _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Debugging tips. If you get a size or dimension error here, make sure that your network outputs the expected number of keypoints! Or if you get a Tensor type error, look into changing the above code that casts the data into float types: `images = images.type(torch.FloatTensor)`. | # call the above function
# returns: test images, test predicted keypoints, test ground truth keypoints
test_images, test_outputs, gt_pts = net_sample_output()
# print out the dimensions of the data to see if they make sense
print(test_images.data.size())
print(test_outputs.data.size())
print(gt_pts.size()) | torch.Size([10, 1, 224, 224])
torch.Size([10, 68, 2])
torch.Size([10, 68, 2])
| MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Visualize the predicted keypoints. Once we've had the model produce some predicted output keypoints, we can visualize these points in a way that's similar to how we've displayed this data before, only this time, we have to "un-transform" the image/keypoint data to display it. The *new* function, `show_all_keypoints`, displays a grayscale image, its predicted keypoints, and its ground truth keypoints (if provided). | def show_all_keypoints(image, predicted_key_pts, gt_pts=None):
"""Show image with predicted keypoints"""
# image is grayscale
plt.imshow(image, cmap='gray')
plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')
# plot ground truth points as green pts
if gt_pts is not None:
plt.scatter(gt_pts[:, 0], gt_pts[:, 1], s=20, marker='.', c='g')
| _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Un-transformation. Next, you'll see a helper function, `visualize_output`, that takes in a batch of images, predicted keypoints, and ground truth keypoints and displays a set of those images and their true/predicted keypoints. This function's main role is to take batches of image and keypoint data (the input and output of your CNN), and transform them into numpy images and un-normalized keypoints (x, y) for normal display. The un-transformation process turns keypoints and images into numpy arrays from Tensors *and* it undoes the keypoint normalization done in the Normalize() transform; it's assumed that you applied these transformations when you loaded your test data. | # visualize the output
# by default this shows a batch of 10 images
def visualize_output(test_images, test_outputs, gt_pts=None, batch_size=10):
for i in range(batch_size):
plt.figure(figsize=(20,10))
ax = plt.subplot(1, batch_size, i+1)
# un-transform the image data
        image = test_images[i].data   # get the image from its Variable wrapper
image = image.numpy() # convert to numpy array from a Tensor
image = np.transpose(image, (1, 2, 0)) # transpose to go from torch to numpy image
# un-transform the predicted key_pts data
predicted_key_pts = test_outputs[i].data
predicted_key_pts = predicted_key_pts.numpy()
# undo normalization of keypoints
predicted_key_pts = predicted_key_pts*50.0+100
# plot ground truth points for comparison, if they exist
ground_truth_pts = None
if gt_pts is not None:
ground_truth_pts = gt_pts[i]
ground_truth_pts = ground_truth_pts*50.0+100
# call show_all_keypoints
show_all_keypoints(np.squeeze(image), predicted_key_pts, ground_truth_pts)
plt.axis('off')
plt.show()
# call it
visualize_output(test_images, test_outputs, gt_pts) | _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Training. Loss function. Training a network to predict keypoints is different from training a network to predict a class; instead of outputting a distribution of classes and using cross entropy loss, we have to choose a loss function that is suited for regression, which directly compares a predicted value and target value. Read about the various kinds of loss functions (like MSE or L1/SmoothL1 loss) in [this documentation](http://pytorch.org/docs/master/_modules/torch/nn/modules/loss.html). Define the loss and optimization. Next, we will define how the model will train by deciding on the loss function and optimizer.--- | ## Define the loss and optimization
import torch.optim as optim
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr = 0.001)
| _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
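The documentation linked above also covers the L1/SmoothL1 alternatives. As a quick illustration of the difference (a sketch on hypothetical random tensors, not part of the project code): SmoothL1 is quadratic near zero but linear for large errors, so it penalizes outlier keypoints less harshly than MSE.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# hypothetical predictions/targets: a batch of 2 samples, 136 keypoint values each
pred = torch.randn(2, 136)
target = torch.randn(2, 136)

mse = nn.MSELoss()(pred, target)             # squared error, punishes outliers heavily
smooth_l1 = nn.SmoothL1Loss()(pred, target)  # quadratic near 0, linear for large errors

# elementwise, the smooth L1 penalty never exceeds the squared error,
# so the mean loss satisfies smooth_l1 <= mse
print(mse.item(), smooth_l1.item())
```

If outlier keypoints (e.g. occluded face regions) dominate the MSE, swapping in `nn.SmoothL1Loss()` for `criterion` below is a one-line change.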
Training and Initial Observation. Now, we will train on our batched training data from `train_loader` for a number of epochs. | def train_net(n_epochs):
    # prepare the net for training
    net.train()
    training_loss = []
    for epoch in range(n_epochs):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_loss = 0.0
        # train on batches of data, assumes you already have train_loader
        for batch_i, data in enumerate(train_loader):
            # get the input images and their corresponding labels
            images = data['image']
            key_pts = data['keypoints']
            # flatten pts
            key_pts = key_pts.view(key_pts.size(0), -1)
            # convert variables to floats for regression loss
            key_pts = key_pts.type(torch.FloatTensor)
            images = images.type(torch.FloatTensor)
            # forward pass to get outputs
            output_pts = net(images)
            # calculate the loss between predicted and target keypoints
            loss = criterion(output_pts, key_pts)
            # zero the parameter (weight) gradients
            optimizer.zero_grad()
            # backward pass to calculate the weight gradients
            loss.backward()
            # update the weights
            optimizer.step()
            # print loss statistics
            running_loss += loss.item()
            epoch_loss += loss.item()
            if batch_i % 10 == 9:  # print every 10 batches
                print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/10))
                running_loss = 0.0
        # record the mean loss over the whole epoch (appending the periodically
        # reset running_loss would only reflect the last few batches)
        training_loss.append(epoch_loss / len(train_loader))
    print('Finished Training')
    return training_loss
# train your network
n_epochs = 10 # start small, and increase when you've decided on your model structure and hyperparams
# this is a Workspaces-specific context manager to keep the connection
# alive while training your model, not part of pytorch
with active_session():
training_loss = train_net(n_epochs)
# visualize the loss as the network trained
plt.figure()
plt.semilogy(training_loss)
plt.grid()
plt.xlabel('Epoch')
plt.ylabel('Loss'); | _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Test data. See how the model performs on previously unseen test data. We've already loaded and transformed this data, similar to the training data. Next, run the trained model on these images to see what kind of keypoints are produced. | # get a sample of test data again
test_images, test_outputs, gt_pts = net_sample_output()
print(test_images.data.size())
print(test_outputs.data.size())
print(gt_pts.size())
## visualize test output
# you can use the same function as before, by un-commenting the line below:
visualize_output(test_images, test_outputs, gt_pts)
| _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Once we have found a good model (or two), we have to save the model so we can load it and use it later! | ## change the name to something unique for each new model
model_dir = 'saved_models/'
model_name = 'facial_keypoints_model.pt'
# after training, save your model parameters in the dir 'saved_models'
torch.save(net.state_dict(), model_dir+model_name) | _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
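Loading the saved parameters later is the mirror of this: re-create the architecture first, then call `load_state_dict`. A minimal self-contained sketch (using a tiny stand-in module here, since in the notebook you would instantiate `models.Net` instead):

```python
import torch
import torch.nn as nn

# stand-in for the project's Net; in the notebook use `from models import Net`
class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = nn.Linear(4, 2)
    def forward(self, x):
        return self.fc(x)

net = TinyNet()
torch.save(net.state_dict(), 'tiny_model.pt')          # saves parameters only, not the class

restored = TinyNet()                                   # 1. re-create the architecture
restored.load_state_dict(torch.load('tiny_model.pt'))  # 2. load the saved weights
restored.eval()                                        # 3. switch to eval mode for inference

x = torch.randn(1, 4)
print(torch.allclose(net(x), restored(x)))  # True
```

Saving the `state_dict` (rather than the whole pickled model) is the design choice made above; it keeps the checkpoint portable across code refactors as long as the layer names match.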
Feature Visualization. Sometimes, neural networks are thought of as a black box: given some input, they learn to produce some output. CNNs actually learn to recognize a variety of spatial patterns, and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying those one at a time to a sample image. This technique is called feature visualization, and it's useful for understanding the inner workings of a CNN. In the cell below, you can see how to extract a single filter (by index) from your first convolutional layer. The filter should appear as a grayscale grid. | # Get the weights in the first conv layer, "conv1"
# if necessary, change this to reflect the name of your first conv layer
weights1 = net.conv1.weight.data
w = weights1.numpy()
filter_index = 0
print(w[filter_index][0])
print(w[filter_index][0].shape)
# display the filter weights
plt.imshow(w[filter_index][0], cmap='gray')
| [[ 0.15470788 -0.03103321 0.14474995 -0.09415503 -0.17265566]
[ 0.25098324 0.1987015 -0.0861486 -0.18626866 0.02080246]
[ 0.21582745 0.20678866 0.02022225 -0.2662425 -0.10517941]
[-0.0465669 -0.06400613 0.11120261 -0.18623494 0.01846401]
[ 0.03995793 0.116187 -0.08362331 -0.1171196 -0.09572858]]
(5, 5)
| MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
Feature maps. Each CNN has at least one convolutional layer that is composed of stacked filters (also known as convolutional kernels). As a CNN trains, it learns what weights to include in its convolutional kernels, and when these kernels are applied to some input image, they produce a set of **feature maps**. So, feature maps are just sets of filtered images; they are the images produced by applying a convolutional kernel to an input image. These maps show us the features that the different layers of the neural network learn to extract. For example, you might imagine a convolutional kernel that detects the vertical edges of a face, or another one that detects the corners of eyes. You can see what kind of features each of these kernels detects by applying them to an image. One such example is shown below; from the way it brings out the lines in the image, you might characterize this as an edge detection filter. Next, choose a test image and filter it with one of the convolutional kernels in your trained CNN; look at the filtered output to get an idea what that particular kernel detects. Filter an image to see the effect of a convolutional kernel--- | ## load in and display any image from the transformed test dataset
import cv2
image = cv2.imread('images/mona_lisa.jpg')
# convert image to grayscale
# convert image to grayscale (cv2.imread loads images in BGR order)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) / 255.0
## Using cv's filter2D function
filter_kernel = np.array([[ 0, 1, 1],
[-1, 0, 1],
[-1, -1, 0]])
filtered_image = cv2.filter2D(image, -1, filter_kernel)
f, (ax1, ax2, ax3) = plt.subplots(ncols=3, nrows=1, figsize=(10, 5))
ax1.imshow(filter_kernel, cmap='gray')
ax2.imshow(image, cmap='gray')
ax3.imshow(filtered_image, cmap='gray')
ax1.set_title('Kernel')
ax2.set_title('Original Image')
ax3.set_title('Filtered image')
plt.tight_layout();
## apply a specific set of filter weights (like the one displayed above) to the test image
weights = net.conv1.weight.data.numpy()
filter_kernel = weights[filter_index][0]
filtered_image = cv2.filter2D(image, -1, filter_kernel)
f, (ax1, ax2, ax3) = plt.subplots(ncols=3, nrows=1, figsize=(10, 5))
ax1.imshow(filter_kernel, cmap='gray')
ax2.imshow(image, cmap='gray')
ax3.imshow(filtered_image, cmap='gray')
ax1.set_title('Kernel')
ax2.set_title('Original Image')
ax3.set_title('Filtered image')
plt.tight_layout(); | _____no_output_____ | MIT | 2. Define the Network Architecture.ipynb | SyedaZainabAkhtar/Facial-Keypoint-Detection |
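For intuition about what `cv2.filter2D` computes, here is a minimal NumPy sketch of the underlying sliding-window cross-correlation (valid mode only; `filter2D` additionally pads the border so the output keeps the input size, and the toy arrays below are made up for illustration):

```python
import numpy as np

def cross_correlate2d(image, kernel):
    """Slide the kernel over the image, taking an elementwise product + sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# toy image with a single vertical edge, and a vertical-edge-detecting kernel
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
kernel = np.array([[-1., 1.],
                   [-1., 1.]])
print(cross_correlate2d(img, kernel))
# the response peaks where the kernel straddles the edge:
# [[0. 2. 0.]
#  [0. 2. 0.]]
```

This is exactly why the learned conv1 kernels, applied above, highlight particular structures in the face image.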
Project 4: Neural Networks Project. All code was compiled and run in Google Colab, as neural models take time to run and the university laptops do not have enough processing power to run them. All comments and conclusions have been added right below each code block for easier analysis and understanding. Task 1. Automatic grid search. Libraries: the key libraries used are keras and scikit-learn. | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from keras.wrappers.scikit_learn import KerasRegressor
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import Adam
data = pd.read_csv("20.csv", header = None)
data.head()
data.corr() | _____no_output_____ | MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
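To read the correlation structure more directly, one can pull out just each feature's correlation with the target column and sort by strength. A sketch on a synthetic frame (the real `20.csv` is not reproduced here; the column labels and coefficients below are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
frame = pd.DataFrame({
    0: x0,                                        # informative feature
    1: rng.normal(size=200),                      # pure-noise feature
    5: 3 * x0 + rng.normal(scale=0.1, size=200),  # target driven mostly by column 0
})

# absolute correlation of every feature with target column 5, strongest first
corr_with_target = frame.corr()[5].drop(5).abs().sort_values(ascending=False)
print(corr_with_target)
```

On the project data, the same one-liner against column 5 of `data` makes the "column 0 dominates" observation explicit rather than eyeballed from the full matrix.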
From the correlation values determined for the dataset, we notice that the target correlates noticeably more strongly with the first column than with the rest. Splitting the data into train and test with a 2000 - 300 split | dataset = data.values
X = dataset[:,0:5]
Y = dataset[:,5]
X_test = X[-300:]
X = X[:-300]
Y_test = Y[-300:]
Y = Y[:-300] | _____no_output_____ | MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
First, I decided to run a baseline model and see what MSE value it produces, as this gives a reference point for how much the results can improve with modifications and hyperparameter tuning. | # define base model
def baseline():
model = Sequential()
model.add(Dense(5, input_dim=5, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss='mean_squared_error', optimizer='adam')
return model
estimator = KerasRegressor(build_fn=baseline, epochs=100, batch_size=5, verbose=0)
kfold = KFold(n_splits=10)
results = cross_val_score(estimator, X, Y, cv=kfold)
print("Baseline: %.2f (%.2f) MSE" % (results.mean(), results.std())) | Baseline: -8204048.46 (24315010.89) MSE
| MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
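The negative sign on the baseline score is not an error: scikit-learn scorers are maximized, so MSE is reported as *negative* MSE (closer to zero is better). A small sketch with a plain scikit-learn regressor on hypothetical toy data, standing in for the Keras estimator:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X_toy = rng.normal(size=(100, 5))
y_toy = X_toy @ np.array([1., 2., 0., 0., 0.]) + rng.normal(scale=0.1, size=100)

kfold = KFold(n_splits=5)
scores = cross_val_score(LinearRegression(), X_toy, y_toy,
                         scoring='neg_mean_squared_error', cv=kfold)
print(scores.mean())  # negative; multiply by -1 to recover the usual MSE
```

So the baseline's -8204048.46 corresponds to an MSE of about 8.2 million, which is what motivates the tuning below.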
As seen above, for a simple multilayer perceptron regressor, a very high MSE value has been determined. This allows us to conclude that better hyperparameter tuning is required, with modifications to parameters such as the learning rate, dropout, epochs etc. Initially, I decided to nail down which optimizer would be ideal; I then tweaked the other major parameters, as it takes hours to try every combination. From a grid over optimizers, epochs and batch sizes, I was able to conclude that the Adam optimizer is the most suitable for the dataset given to me. The MSE values for each combination run in the grid search are listed below.

Best: -25979.201172 using {'batch_size': 20, 'epochs': 100, 'optimizer': 'adam'}
-83848.133594 (31485.334665) with: {'batch_size': 10, 'epochs': 10, 'optimizer': 'adam'}
-124149.147656 (106041.321994) with: {'batch_size': 10, 'epochs': 10, 'optimizer': 'RMSprop'}
-17538629.000000 (6145950.034129) with: {'batch_size': 10, 'epochs': 10, 'optimizer': 'Adagrad'}
-28976.654297 (6686.122457) with: {'batch_size': 10, 'epochs': 50, 'optimizer': 'adam'}
-28985.950000 (4135.118675) with: {'batch_size': 10, 'epochs': 50, 'optimizer': 'RMSprop'}
-1475655.350000 (144409.180757) with: {'batch_size': 10, 'epochs': 50, 'optimizer': 'Adagrad'}
-31307.830078 (7195.229146) with: {'batch_size': 10, 'epochs': 100, 'optimizer': 'adam'}
-35668.427344 (12147.983446) with: {'batch_size': 10, 'epochs': 100, 'optimizer': 'RMSprop'}
-1435397.200000 (173003.982770) with: {'batch_size': 10, 'epochs': 100, 'optimizer': 'Adagrad'}
-607021.156250 (225326.076199) with: {'batch_size': 20, 'epochs': 10, 'optimizer': 'adam'}
-155434.096875 (67205.428782) with: {'batch_size': 20, 'epochs': 10, 'optimizer': 'RMSprop'}
-39172515.600000 (7229904.980792) with: {'batch_size': 20, 'epochs': 10, 'optimizer': 'Adagrad'}
-32730.587109 (9326.100937) with: {'batch_size': 20, 'epochs': 50, 'optimizer': 'adam'}
-46073.637109 (17537.055165) with: {'batch_size': 20, 'epochs': 50, 'optimizer': 'RMSprop'}
-1622539.675000 (233324.938891) with: {'batch_size': 20, 'epochs': 50, 'optimizer': 'Adagrad'}
-25979.201172 (3285.793231) with: {'batch_size': 20, 'epochs': 100, 'optimizer': 'adam'}
-44877.579688 (7302.797490) with: {'batch_size': 20, 'epochs': 100, 'optimizer': 'RMSprop'}
-1489904.750000 (215725.142852) with: {'batch_size': 20, 'epochs': 100, 'optimizer': 'Adagrad'}
-1350494.175000 (162489.428364) with: {'batch_size': 40, 'epochs': 10, 'optimizer': 'adam'}
-742374.950000 (163049.310736) with: {'batch_size': 40, 'epochs': 10, 'optimizer': 'RMSprop'}
-56523900.000000 (2665037.687018) with: {'batch_size': 40, 'epochs': 10, 'optimizer': 'Adagrad'}
-56658.258203 (24003.537579) with: {'batch_size': 40, 'epochs': 50, 'optimizer': 'adam'}
-64086.296094 (12042.358310) with: {'batch_size': 40, 'epochs': 50, 'optimizer': 'RMSprop'}
-9372795.800000 (5108249.641949) with: {'batch_size': 40, 'epochs': 50, 'optimizer': 'Adagrad'}
-30622.471875 (6322.287248) with: {'batch_size': 40, 'epochs': 100, 'optimizer': 'adam'}
-36232.569531 (13259.656484) with: {'batch_size': 40, 'epochs': 100, 'optimizer': 'RMSprop'}
-1600181.925000 (146014.239422) with: {'batch_size': 40, 'epochs': 100, 'optimizer': 'Adagrad'}
-1390699.350000 (145640.273592) with: {'batch_size': 60, 'epochs': 10, 'optimizer': 'adam'}
-1082542.925000 (144731.452078) with: {'batch_size': 60, 'epochs': 10, 'optimizer': 'RMSprop'}
-62656396.800000 (559420.032519) with: {'batch_size': 60, 'epochs': 10, 'optimizer': 'Adagrad'}
-69710.080469 (40863.769851) with: {'batch_size': 60, 'epochs': 50, 'optimizer': 'adam'}
-71970.824219 (24058.956433) with: {'batch_size': 60, 'epochs': 50, 'optimizer': 'RMSprop'}
-16491987.400000 (3500092.027003) with: {'batch_size': 60, 'epochs': 50, 'optimizer': 'Adagrad'}
-46966.215625 (15952.838801) with: {'batch_size': 60, 'epochs': 100, 'optimizer': 'adam'}
-45104.332812 (10972.408712) with: {'batch_size': 60, 'epochs': 100, 'optimizer': 'RMSprop'}
-2788073.200000 (698682.820182) with: {'batch_size': 60, 'epochs': 100, 'optimizer': 'Adagrad'}
-1493044.875000 (155697.516601) with: {'batch_size': 80, 'epochs': 10, 'optimizer': 'adam'}
-1351079.800000 (130707.791587) with: {'batch_size': 80, 'epochs': 10, 'optimizer': 'RMSprop'}
-65509906.400000 (3248526.947553) with: {'batch_size': 80, 'epochs': 10, 'optimizer': 'Adagrad'}
-263853.200000 (123436.623595) with: {'batch_size': 80, 'epochs': 50, 'optimizer': 'adam'}
-92486.471875 (25669.353331) with: {'batch_size': 80, 'epochs': 50, 'optimizer': 'RMSprop'}
-25053901.200000 (2766136.455614) with: {'batch_size': 80, 'epochs': 50, 'optimizer': 'Adagrad'}
-41316.805469 (6963.559710) with: {'batch_size': 80, 'epochs': 100, 'optimizer': 'adam'}
-47747.921094 (15393.723483) with: {'batch_size': 80, 'epochs': 100, 'optimizer': 'RMSprop'}
-6449660.600000 (3418975.118244) with: {'batch_size': 80, 'epochs': 100, 'optimizer': 'Adagrad'}
-1476760.825000 (167679.598081) with: {'batch_size': 100, 'epochs': 10, 'optimizer': 'adam'}
-1404041.825000 (201396.916914) with: {'batch_size': 100, 'epochs': 10, 'optimizer': 'RMSprop'}
-72146363.200000 (2264547.064452) with: {'batch_size': 100, 'epochs': 10, 'optimizer': 'Adagrad'}
-352332.837500 (104710.011595) with: {'batch_size': 100, 'epochs': 50, 'optimizer': 'adam'}
-90365.727344 (25890.780854) with: {'batch_size': 100, 'epochs': 50, 'optimizer': 'RMSprop'}
-32726843.600000 (8646951.726295) with: {'batch_size': 100, 'epochs': 50, 'optimizer': 'Adagrad'}
-42565.274219 (14731.363104) with: {'batch_size': 100, 'epochs': 100, 'optimizer': 'adam'}
-65972.997656 (29357.657998) with: {'batch_size': 100, 'epochs': 100, 'optimizer': 'RMSprop'}
-11417867.800000 (2452926.970178) with: {'batch_size': 100, 'epochs': 100, 'optimizer': 'Adagrad'}
| def custom_model( momentum=0, dropout_rate=0.0, learn_rate=0.01, epochs = 10, verbose=0):
    # local import; Dropout could equally be added to the keras.layers import above
    from keras.layers import Dropout
    model = Sequential()
    model.add(Dense(128, input_dim=X.shape[1], activation='relu'))
    model.add(Dropout(dropout_rate))  # use dropout_rate so the grid search over it has an effect
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(1))
    # use learn_rate (rather than a hard-coded 0.001) so GridSearchCV can actually vary it
    adam = Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
    model.compile(loss='mean_squared_error', optimizer=adam, metrics=['mse'])
    return model
np.random.seed(5)
model = KerasRegressor(build_fn=custom_model, verbose=0)
# Hyperparameter tuning
learn_rate = [0.0001, 0.001, 0.01]
dropout_rate = [0.0, 0.2, 0.3]
batch_size = [10, 50, 100]
epochs = [10, 50, 100]
param_grid = dict(batch_size=batch_size, epochs=epochs, learn_rate=learn_rate, dropout_rate=dropout_rate)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y) | _____no_output_____ | MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
I then created a model with two dense layers and used the Adam optimizer to perform the remaining hyperparameter tuning. These are the outputs that were obtained. | print("Best mse is %f with params --> %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
std_dev = grid_result.cv_results_['std_test_score']
tuned_params = grid_result.cv_results_['params' ]
for mean, stdev, param in zip(means, std_dev, tuned_params):
print("%f, %f ----> %r" % (mean, stdev, param)) | Best mse is -24887.330078 with params --> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.01}
-83463.160156, 25271.865298 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.0001}
-88775.615625, 23038.250922 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.001}
-90300.914844, 32942.413488 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.01}
-36012.864453, 11941.555961 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.0001}
-31121.520313, 7992.348638 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.001}
-28983.807812, 5356.626577 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.01}
-35069.562109, 8180.334911 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.0001}
-33771.587500, 7284.982974 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.001}
-32479.938281, 8067.058798 ----> {'batch_size': 10, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.01}
-65907.715625, 24826.923191 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.0001}
-77717.960156, 34020.840710 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.001}
-85224.619531, 29205.053937 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.01}
-33830.892578, 3986.515899 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.0001}
-31440.497656, 7374.794438 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.001}
-27606.241406, 4662.202180 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.01}
-29065.995703, 5938.747315 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.0001}
-26874.994922, 4138.167862 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.001}
-24887.330078, 3946.426631 ----> {'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.01}
-70273.667188, 37229.902720 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.0001}
-107712.754687, 46457.342344 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.001}
-79865.365625, 20842.438363 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.01}
-29312.818750, 6711.976675 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.0001}
-28767.895313, 2799.946145 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.001}
-33539.787500, 7869.685157 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.01}
-29265.059375, 4962.019223 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.0001}
-38803.658594, 26620.278088 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.001}
-26067.693750, 2738.324925 ----> {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.01}
-1391231.450000, 131769.763490 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.0001}
-1394257.100000, 148163.606630 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.001}
-1334672.825000, 121353.652625 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.01}
-77671.030078, 72929.877871 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.0001}
-54267.003906, 26992.621148 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.001}
-53134.264062, 25623.205015 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.01}
-31013.926172, 3174.209647 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.0001}
-48720.391016, 16649.325040 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.001}
-31847.444141, 13980.987319 ----> {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.01}
-1345242.825000, 160264.462418 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.0001}
-1343731.025000, 142865.047342 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.001}
-1354697.775000, 124586.394037 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.01}
-57313.091016, 18280.070121 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.0001}
-51404.878125, 26886.678269 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.001}
-53362.170312, 20306.481582 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.01}
-31464.076953, 6984.338670 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.0001}
-34847.044141, 15228.993071 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.001}
-32029.678125, 5482.817316 ----> {'batch_size': 50, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.01}
-1379998.125000, 172412.173111 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.0001}
-1378416.150000, 94280.468042 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.001}
-1372643.625000, 120326.731641 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.01}
-56712.373437, 11561.815221 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.0001}
-64784.871875, 12282.235965 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.001}
-69941.793750, 38007.654682 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.01}
-37416.461328, 13574.281323 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.0001}
-45251.885156, 16110.994895 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.001}
-29959.032031, 6280.830413 ----> {'batch_size': 50, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.01}
-1494333.325000, 171158.091437 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.0001}
-1492897.475000, 136893.803305 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.001}
-1481787.225000, 181406.314004 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.01}
-362559.275000, 144505.187410 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.0001}
-493743.925000, 175628.419863 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.001}
-425601.634375, 138994.073447 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 50, 'learn_rate': 0.01}
-56394.031250, 29822.650123 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.0001}
-62118.402734, 33738.726838 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.001}
-45354.618750, 12476.379207 ----> {'batch_size': 100, 'dropout_rate': 0.0, 'epochs': 100, 'learn_rate': 0.01}
-1520627.550000, 153316.697647 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.0001}
-1525229.250000, 192898.677445 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.001}
-1576432.075000, 140458.977827 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 10, 'learn_rate': 0.01}
-438015.534375, 169262.908999 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.0001}
-526000.006250, 172199.747528 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.001}
-302433.350000, 203837.546876 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 50, 'learn_rate': 0.01}
-65191.571875, 16258.684909 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.0001}
-65105.631250, 21753.605495 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.001}
-42338.783203, 14474.060719 ----> {'batch_size': 100, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.01}
-1580144.925000, 167657.703649 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.0001}
-1572937.625000, 189683.774624 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.001}
-1567378.325000, 161370.750398 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 10, 'learn_rate': 0.01}
-461903.221875, 243098.024866 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.0001}
-329203.254688, 191314.508843 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.001}
-514557.693750, 125854.978822 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 50, 'learn_rate': 0.01}
-68567.199219, 37323.214715 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.0001}
-46243.672266, 26302.250809 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.001}
-79122.269922, 52110.550128 ----> {'batch_size': 100, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.01}
| MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
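Picking the winner out of rows like the ones above can be done programmatically. Below is a minimal standalone sketch with a few hand-copied `(mean_score, params)` pairs; with the notebook's fitted sklearn `GridSearchCV` object, `grid_result.best_score_` and `grid_result.best_params_` expose this directly.

```python
# Select the hyperparameter set with the highest mean score from
# (mean_score, params) pairs like the ones printed above.
# Scores are negative errors, so "larger" means "better".
results = [
    (-29265.06, {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.0001}),
    (-26067.69, {'batch_size': 10, 'dropout_rate': 0.3, 'epochs': 100, 'learn_rate': 0.01}),
    (-1391231.45, {'batch_size': 50, 'dropout_rate': 0.0, 'epochs': 10, 'learn_rate': 0.0001}),
]
best_score, best_params = max(results, key=lambda r: r[0])
print(best_score, best_params)
```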
From the above values, we notice that the optimal set of hyperparameters was found to be 'batch_size': 10, 'dropout_rate': 0.2, 'epochs': 100, 'learn_rate': 0.01. Task 2 - Compare the trained neural network with multivariable regression | X2 = sm.add_constant(X)
est = sm.OLS(Y, X2)
est2 = est.fit()
print(est2.summary())
reg2 = LinearRegression()
reg2.fit(X, Y)
print("The linear model is: Y = {:.5} + {:.5}*X1 + {:.5}*X2 + {:.5}*X3 + {:.5}*X4 + {:.5}*X5".format(reg2.intercept_, reg2.coef_[0], reg2.coef_[1], reg2.coef_[2], reg2.coef_[3], reg2.coef_[4]))
print("Y = a0 + a1X1 + a3X3 + a4X4 + a5X5") | The linear model is: Y = -2567.9 + 55.037*X1 + 2.2014*X2 + 5.6969*X3 + 6.9531*X4 + 9.1432*X5
Y = a0 + a1X1 + a3X3 + a4X4 + a5X5
| MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
We now calculate the sum of squared errors (SSE) for each of the models and determine which is the better model | LR_sse = 0
for v in Y - reg2.predict(X):
LR_sse += v**2
NN_sse = 0
for v in Y - grid_result.predict(X):
NN_sse += v**2
print("SSE for Multivariate regression: ", LR_sse)
print("SSE for estimation with Neural Model: ", NN_sse) | SSE for Multivariate regression: 164973673.90797538
SSE for estimation with Neural Model: 44258448.18429801
| MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
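The accumulation loops above can be collapsed into a single expression. A small standalone sketch with made-up numbers (the notebook's `Y`, `reg2`, and `grid_result` are not available here):

```python
def sse(y_true, y_pred):
    # Sum of squared errors, without an explicit accumulator loop.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

print(sse([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))  # 1.5
```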
It can be seen that the SSE value for the custom neural model built with hyperparameter tuning fares better than the multivariable linear regression. Below are two sample predictions made on untrained test data by both models. At first glance the difference is minimal, but on further analysis with hyperparameter tuning we see a much bigger difference in performance between the two models. | Y_test_pred = reg2.predict(X_test)
plt.plot(Y_test_pred[:50])
plt.plot(Y_test[:50])
Y_test_pred_NN = grid_result.predict(X_test)
plt.plot(Y_test_pred_NN[:50])
plt.plot(Y_test[:50]) | _____no_output_____ | MIT | araghug_NN/Final_NN.ipynb | adithyarganesh/CSC591_004_Neural_Nets |
Problem Statement: Find and return the `nth` row of Pascal's triangle in the form of a list. `n` is 0-based. For example, if `n = 4`, then `output = [1, 4, 6, 4, 1]`. To know more about Pascal's triangle: https://www.mathsisfun.com/pascals-triangle.html | #%% Imports and functions declarations
from math import factorial
def combinations(total_num: int, choosen_num: int) -> int:
"""
Returns the number of available combinations given a number of elements and the subspace selected
:param total_num: number of total elements
:param choosen_num: number of elements of the subspace
:return: number of total combinations
"""
return int(factorial(total_num)/(factorial(choosen_num)*factorial(total_num-choosen_num)))
def nth_row_pascal(num_row: int) -> list:
"""
Given the number of the row, generates the values present in that row of Pascal's triangle
:param num_row: number of row to represent
:return: pascal's triangle row
"""
row_result = []
for i in range(num_row+1):
row_result.append(combinations(num_row, i))
return row_result | _____no_output_____ | MIT | Arrays and Linked Lists/011_Pascal's-Triangle.ipynb | parikshitsaikia1619/DSA_Mastery |
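The factorial-based `combinations` above recomputes factorials for every entry. Each row can also be built incrementally with the identity C(n, k+1) = C(n, k) * (n - k) / (k + 1); a sketch of that alternative:

```python
def pascal_row(n):
    # Build C(n, 0..n) with the multiplicative recurrence
    # C(n, k+1) = C(n, k) * (n - k) // (k + 1), avoiding factorials.
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(pascal_row(4))  # [1, 4, 6, 4, 1]
```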
Show Solution | def test_function(test_case):
n = test_case[0]
solution = test_case[1]
output = nth_row_pascal(n)
if solution == output:
print("Pass")
else:
print("Fail")
n = 0
solution = [1]
test_case = [n, solution]
test_function(test_case)
n = 1
solution = [1, 1]
test_case = [n, solution]
test_function(test_case)
n = 2
solution = [1, 2, 1]
test_case = [n, solution]
test_function(test_case)
n = 3
solution = [1, 3, 3, 1]
test_case = [n, solution]
test_function(test_case)
n = 4
solution = [1, 4, 6, 4, 1]
test_case = [n, solution]
test_function(test_case) | Pass
| MIT | Arrays and Linked Lists/011_Pascal's-Triangle.ipynb | parikshitsaikia1619/DSA_Mastery |
Dependencies | import json, warnings, shutil
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore") | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Load data | database_base_path = '/kaggle/input/tweet-dataset-split-roberta-base-96/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())
# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_1.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_2.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_3.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_4.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_5.tar.gz | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Model parameters | vocab_path = database_base_path + 'vocab.json'
merges_path = database_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
config = {
"MAX_LEN": 96,
"BATCH_SIZE": 32,
"EPOCHS": 5,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 1,
"question_size": 4,
"N_FOLDS": 1,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file) | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Model | module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
last_state = sequence_output[0]
x_start = layers.Conv1D(1, 1)(last_state)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Conv1D(1, 1)(last_state)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
loss=losses.CategoricalCrossentropy(),
metrics=[metrics.CategoricalAccuracy()])
return model | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Tokenizer | tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, lowercase=True, add_prefix_space=True)
tokenizer.save('./') | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Train | history_list = []
AUTO = tf.data.experimental.AUTOTUNE
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
### Delete data dir
shutil.rmtree(base_data_path)
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
history = model.fit(list(x_train), list(y_train),
validation_data=(list(x_valid), list(y_valid)),
batch_size=config['BATCH_SIZE'],
callbacks=[checkpoint, es],
epochs=config['EPOCHS'],
verbose=2).history
history_list.append(history)
# Make predictions
train_preds = model.predict(list(x_train))
valid_preds = model.predict(list(x_valid))
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'start_fold_%d' % (n_fold)] = train_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'end_fold_%d' % (n_fold)] = train_preds[1].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'start_fold_%d' % (n_fold)] = valid_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'end_fold_%d' % (n_fold)] = valid_preds[1].argmax(axis=-1)
k_fold['end_fold_%d' % (n_fold)] = k_fold['end_fold_%d' % (n_fold)].astype(int)
k_fold['start_fold_%d' % (n_fold)] = k_fold['start_fold_%d' % (n_fold)].astype(int)
k_fold['end_fold_%d' % (n_fold)].clip(0, k_fold['text_len'], inplace=True)
k_fold['start_fold_%d' % (n_fold)].clip(0, k_fold['end_fold_%d' % (n_fold)], inplace=True)
k_fold['prediction_fold_%d' % (n_fold)] = k_fold.apply(lambda x: decode(x['start_fold_%d' % (n_fold)], x['end_fold_%d' % (n_fold)], x['text'], config['question_size'], tokenizer), axis=1)
k_fold['prediction_fold_%d' % (n_fold)].fillna('', inplace=True)
k_fold['jaccard_fold_%d' % (n_fold)] = k_fold.apply(lambda x: jaccard(x['text'], x['prediction_fold_%d' % (n_fold)]), axis=1) |
FOLD: 1
Train on 21984 samples, validate on 5496 samples
Epoch 1/5
21984/21984 - 292s - loss: 2.1388 - y_start_loss: 1.0514 - y_end_loss: 1.0873 - y_start_categorical_accuracy: 0.6546 - y_end_categorical_accuracy: 0.6529 - val_loss: 1.6510 - val_y_start_loss: 0.8545 - val_y_end_loss: 0.7959 - val_y_start_categorical_accuracy: 0.6994 - val_y_end_categorical_accuracy: 0.7220
Epoch 2/5
21984/21984 - 274s - loss: 1.5846 - y_start_loss: 0.8206 - y_end_loss: 0.7640 - y_start_categorical_accuracy: 0.7031 - y_end_categorical_accuracy: 0.7300 - val_loss: 1.5583 - val_y_start_loss: 0.8072 - val_y_end_loss: 0.7505 - val_y_start_categorical_accuracy: 0.7091 - val_y_end_categorical_accuracy: 0.7285
Epoch 3/5
Restoring model weights from the end of the best epoch.
21984/21984 - 272s - loss: 1.4377 - y_start_loss: 0.7527 - y_end_loss: 0.6850 - y_start_categorical_accuracy: 0.7213 - y_end_categorical_accuracy: 0.7471 - val_loss: 1.5799 - val_y_start_loss: 0.8255 - val_y_end_loss: 0.7538 - val_y_start_categorical_accuracy: 0.6943 - val_y_end_categorical_accuracy: 0.7240
Epoch 00003: early stopping
| MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Model loss graph | sns.set(style="whitegrid")
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold]) | Fold: 1
| MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Model evaluation | display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map)) | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Visualize predictions | display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15)) | _____no_output_____ | MIT | Model backlog/Train/53-tweet-train-3fold-roberta-base-pb2.ipynb | dimitreOliveira/Tweet-Sentiment-Extraction |
Note: dabbling in Elixir. My current mental image $\quad$ Elixir runs on Erlang; Erlang is a system for concurrent processing, and when someone tried to build an ideal language on top of Erlang, the result was something like Ruby + Clojure. Dave Thomas and Yukihiro Matsumoto both recommend it, so it is probably a good language. * https://elixirschool.com/ja/lessons/basics/control-structures/ * https://magazine.rubyist.net/articles/0054/0054-ElixirBook. * https://dev.to/gumi/elixir-01--2585 * https://elixir-lang.org/getting-started/introduction.html --- I bought the book: Programming Elixir |> 1.6 by Dave Thomas (Japanese translation by Koichi Sasada and Yuki Torii, Ohmsha). Reading it. | %%capture
!wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb && sudo dpkg -i erlang-solutions_2.0_all.deb
!sudo apt update
!sudo apt install elixir
!elixir -v
!date | Erlang/OTP 24 [erts-12.2.1] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [jit]
Elixir 1.13.0 (compiled with Erlang/OTP 24)
Wed Mar 16 16:43:56 UTC 2022
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
---Note: running `!elixir -h` (help) revealed that the shell one-liner form `elixir -e` can be used. `iex` is the interactive environment, but it is hard to use in Colab, so `elixir -e` is used instead. | !elixir -e 'IO.puts 3 + 3'
!elixir -e 'IO.puts "hello world!"'
# a file can be created like this
%%writefile temp.exs
IO.puts "this is a pen."
# cat it to check
!cat temp.exs
# run the file with elixir
!elixir temp.exs | this is a pen.
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
How is the code in the next cell, found in an online article, supposed to be run? It is fine not to understand it yet, but I am transcribing it for now. Explanation: this program defines a function pmap in a module named Parallel. pmap performs a map over the given collection (think of Ruby's Enumerable#map), but it spawns one process per element and processes each element concurrently in its own process. At a glance it may not make much sense, but reportedly it becomes clear once you read the book. | %%writefile temp.exs
defmodule Parallel do
def pmap(collection, func) do
collection
|> Enum.map(&(Task.async(fn -> func.(&1) end)))
|> Enum.map(&Task.await/1)
end
end
result = Parallel.pmap 1..1000, &(&1 * &1)
IO.inspect result
!elixir temp.exs | [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324,
361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089,
1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116,
2209, 2304, 2401, 2500, ...]
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
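For comparison (a stdlib Python sketch, not part of the original notebook, and using a pool of worker threads rather than one BEAM process per element), roughly the same shape looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def pmap(collection, func):
    # Submit one task per element, then collect results in order,
    # mirroring Parallel.pmap's Task.async / Task.await pair.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(func, x) for x in collection]
        return [f.result() for f in futures]

print(pmap(range(1, 11), lambda x: x * x))  # squares of 1..10
```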
The example above seems to confirm that asynchronous processing works without problems in the Colab environment. ---The next one, also an example found online, is a concurrent-processing version of hello world. | %%writefile temp.exs
parent = self()
spawn_link(fn ->
send parent, {:msg, "hello world"}
end)
receive do
{:msg, contents} -> IO.puts contents
end
!elixir temp.exs | hello world
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
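The send/receive pattern in this example has a rough Python analogue using a queue as the mailbox (a stdlib comparison sketch, not part of the original; Python threads share memory, unlike isolated BEAM processes):

```python
import threading
import queue

# A mailbox-style analogue of the Elixir example: a child thread
# sends a tagged message to the parent, which blocks in get().
mailbox = queue.Queue()

child = threading.Thread(target=lambda: mailbox.put(("msg", "hello world")))
child.start()

tag, contents = mailbox.get()  # blocks until a message arrives
print(contents)
child.join()
```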
The example above works as follows. 1. The function passed to spawn_link is executed in a new process. 2. The newly created process sends the message "hello world" to the main process (parent). 3. The main process waits (receive) for a message to arrive and prints it to the console when it does. | # Experiment: do not try to understand this yet; just check whether it runs in the Colab environment.
%%writefile chain.exs
defmodule Chain do
def counter(next_pid) do
receive do
n -> send next_pid, n + 1
end
end
def create_processes(n) do
last = Enum.reduce 1..n, self(),
fn (_, send_to) -> spawn(Chain, :counter, [send_to]) end
send last, 0
receive do
final_answer when is_integer(final_answer) ->
"Result is #{inspect(final_answer)}"
end
end
def run(n) do
IO.puts inspect :timer.tc(Chain, :create_processes, [n])
end
end
!elixir --erl "+P 1000000" -r chain.exs -e "Chain.run(1_000_000)" | {4638957, "Result is 1000000"}
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
The article https://ubiteku.oinker.me/2015/12/22/elixir試飲-2-カルチャーショックに戸惑う-並行指向プ/ reports 7 seconds on its machine (MacBook Pro, 3 GHz Intel Core i7, 16 GB RAM), while on Colab it finishes in 5 seconds!!!! On my Windows machine (Intel Core i5-9400, 8 GB RAM) the result was: {3492935, "Result is 1000000"}. Hey, that is fast!!!! ---Comments start with `#`. | %%writefile temp.exs
# comment experiment
str = "helloworld!!!!"
IO.puts str
!elixir temp.exs | helloworld!!!!
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
---Number bases and integers | !elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!elixir -e 'IO.puts 1000_000_00_0' | 15
4095
65535
1000000000
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Integers have no fixed upper or lower limit; factorial(10000) can be computed. Not doing that now. ---Question: how do you convert a decimal number to base $n$? Python has `int()`, `bin()`, `oct()`, `hex()`. | # python
print(0b1111)
print(0o7777)
print(0xffff)
print(int('7777',8))
print(bin(15))
print(oct(4095))
print(hex(65535))
!elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!echo
# use the Integer.to_string() function
# <> is binary concatenation
!elixir -e 'IO.puts "0b" <> Integer.to_string(15,2)'
!elixir -e 'IO.puts "0o" <> Integer.to_string(4095,8)'
!elixir -e 'IO.puts "0x" <> Integer.to_string(65535,16)' | 15
4095
65535
0b1111
0o7777
0xFFFF
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Floating-point numbers | !elixir -e 'IO.puts 1.532e-4'
# literals like .0 or 1. are errors
!elixir -e 'IO.puts 98099098.0809898888'
!elixir -e 'IO.puts 0.00000000000000000000000001' #=> 1.0e-26
!elixir -e 'IO.puts 90000000000000000000000000000000000000000000000000000000' | 1.532e-4
98099098.08098988
1.0e-26
999999999999999999999999999999999999999
90000000000000000000000000000000000000000000000000000000
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Strings. There does not seem to be a type called string. ---Question: is there a function that reports a value's type, like type()? | !elixir -e 'IO.puts "日本語が書けますか"'
!elixir -e 'IO.puts "日本語が書けます"'
# parentheses can be put around function arguments
# \ can be used for escaping
!elixir -e 'IO.puts (0b1111)'
!elixir -e 'IO.puts ("にほんご\n日本語")'
!elixir -e "IO.puts ('にほんご\n\"日本語\"')"
# string concatenation is not `+`!!!!
!elixir -e 'IO.puts("ABCD"<>"EFGH")' | ABCDEFGH
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
The `<>` symbol apparently means binary concatenation. ---Embedding values: writing `#{variable}` embeds the variable's value into a string. | !elixir -e 'val = 1000; IO.puts "val = #{val}"' | val = 1000
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
---Boolean values: Elixir's booleans are true and false (lowercase); false and nil are falsy, and everything else is truthy. | !elixir -e 'if true do IO.puts "true" end'
!elixir -e 'if True do IO.puts "true" end'
!elixir -e 'if False do IO.puts "true" end' # because False is capitalized, it is a plain atom (truthy)
!elixir -e 'if false do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if nil do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if 0 do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if (-1) do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if [] do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if "" do IO.puts "true" else IO.puts "false" end' | true
true
true
false
false
true
true
true
true
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
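The contrast with Python is worth noting (a comparison sketch in the same spirit as the earlier Python cell): in Python, 0, [] and "" are all falsy, while in Elixir only false and nil are.

```python
# In Python these are all falsy; in Elixir only false and nil would be.
falsy_in_python = [v for v in (0, -0.0, [], "", None, False) if not v]
truthy_in_python = [v for v in (0, [], "") if v]
print(len(falsy_in_python), truthy_in_python)  # 6 []
```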
There is no `null`. ---**The match operator `=`** $\quad$ `=` is a match operator. Through the match operator a value can be bound, and can later be matched against. When a match succeeds, the value of the expression is returned; when it fails, an error is raised. | !elixir -e 'IO.puts a = 1'
!elixir -e 'a =1; IO.puts 1 = a'
!elixir -e 'a =1; IO.puts 2 = a'
!elixir -e 'IO.inspect a = [1,2,3]' # lists cannot be displayed with puts, so use inspect
!elixir -e '[a,b,c] = [1,2,3]; IO.puts c; IO.puts b'
| [1, 2, 3]
3
2
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
In the example above, whenever Elixir sees the match operator `=`, it does its best to make the left and right sides match. That is why `[a,b,c] = [1,2,3]` binds values to a, b, and c. | !elixir -e 'IO.inspect [1,2,[3,4,5]]'
!elixir -e '[a,b,c] = [1,2,[3,4,5]]; IO.inspect c; IO.inspect b'
# experiment => error (the shapes do not match)
!elixir -e 'IO.inspect [a,b] = [1,2,3]'
# experiment
!elixir -e 'IO.inspect a = [[1,2,3]]'
!elixir -e 'IO.inspect [a] = [[1,2,3]]'
!elixir -e '[a] = [[1,2,3]]; IO.inspect a'
# experiment => error (a and b are unbound on the right-hand side)
!elixir -e 'IO.inspect [a,b] = [a,b]'
# experiment; atoms are described later
!elixir -e 'IO.puts a = :a'
!elixir -e 'a = :a; IO.inspect a = a'
!elixir -e 'a = :a; IO.puts a = a'
!elixir -e 'IO.puts :b' | a
:a
a
b
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
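Destructuring like `[a,b,c] = [1,2,3]` has a partial Python analogue in sequence unpacking (a comparison sketch; unlike Elixir's match, Python's `=` only assigns and never checks existing values):

```python
# Sequence unpacking: the shapes must agree, like an Elixir match.
a, b, c = [1, 2, [3, 4, 5]]
print(c, b)  # [3, 4, 5] 2

# Wildcard-ish: underscore is just a throwaway name in Python.
one, _, _ = [1, "cat", "dog"]
print(one)  # 1
```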
The underscore `_` ignores a value. It is a wildcard that accepts anything. | !elixir -e 'IO.inspect [1,_,_]=[1,2,3]'
!elixir -e 'IO.inspect [1,_,_]=[1,"cat","dog"]' | [1, "cat", "dog"]
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Variables cannot be changed once bound, or so I thought, but rebinding actually works. | !elixir -e 'a = 1; IO.puts a = 2' | 2
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
There is a pin operator (`^`, caret) that refers to the variable's existing value. | !elixir -e 'a = 1; IO.puts ^a = 2' | ** (MatchError) no match of right hand side value: 2
(stdlib 3.15) erl_eval.erl:450: :erl_eval.expr/5
(stdlib 3.15) erl_eval.erl:893: :erl_eval.expr_list/6
(stdlib 3.15) erl_eval.erl:408: :erl_eval.expr/5
(elixir 1.12.0) lib/code.ex:656: Code.eval_string_with_error_handling/3
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Note $\quad$ I cannot help thinking it might have been simpler to make variables unchangeable, the way ordinary functional languages do. Is there nothing like a const declaration to make a variable immutable? Lists are immutable, at least, which is reassuring. | # capitalize: upper-case the first letter
!elixir -e 'IO.puts name = String.capitalize "elixir"'
# upcase: upper-case the whole string
!elixir -e 'IO.puts String.upcase "elixir"' | ELIXIR
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Atoms $\quad$ An atom is a constant whose name is its own value. **Putting a colon `:` in front of a name makes it an atom.** Atom names consist of UTF-8 characters (including symbols), digits, underscores `_`, and `@`; `!` and `?` may appear only as the final character. :fred $\quad$ :is_binary? $\quad$ :var@2 $\quad$ :<> $\quad$ :=== $\quad$ :"func/3" $\quad$ :"long john silver" $\quad$ :эликсир $\quad$ :mötley_crüe $\quad$ Note | # experiment: atoms can be used without being declared first
!elixir -e 'IO.puts :fred'
# experiment
!elixir -e 'IO.puts true === :true'
!elixir -e 'IO.puts :true'
!elixir -e 'IO.puts false === :false'
# experiment
!elixir -e 'IO.puts :fred'
!elixir -e 'IO.puts :is_binary?'
!elixir -e 'IO.puts :var@2'
!elixir -e 'IO.puts :<>'
!elixir -e 'IO.puts :==='
# atoms containing a semicolon work in iex but cannot be used in a shell one-liner
# it raises an error: unexpected token: ""
# this happens in a regular shell too, not only in the Colab environment
# they work in a program saved to a file, so this is not a problem
# !elixir -e 'IO.puts :"func/3"'
# !elixir -e 'IO.puts :"long john silver"'
!elixir -e 'IO.puts :эликсир'
!elixir -e 'IO.puts :mötley_crüe'
!elixir -e 'IO.puts :日本語はどうか' | fred
is_binary?
var@2
<>
===
эликсир
mötley_crüe
日本語はどうか
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Operators | !elixir -e 'IO.puts 1 + 2'
!elixir -e 'x = 10; IO.puts x + 1'
!elixir -e 'IO.puts 1 - 2'
!elixir -e 'x = 10; IO.puts x - 1'
!elixir -e 'IO.puts 5 * 2'
!elixir -e 'x = 10; IO.puts x * 4'
!echo
!elixir -e 'IO.puts 5 / 2'
!elixir -e 'x = 10; IO.puts x / 3'
# use the div function when an integer result is wanted rather than a float
!elixir -e 'IO.puts div(10,5)'
!elixir -e 'IO.puts div(10,4)'
# use the rem function to get the remainder of a division
!elixir -e 'IO.puts rem(10,4)'
!elixir -e 'IO.puts rem(10,3)'
!elixir -e 'IO.puts rem(10,2)'
# comparison operators
!elixir -e 'IO.puts 1 == 1'
!elixir -e 'IO.puts 1 != 1'
!elixir -e 'IO.puts ! (1 != 1)'
!echo
!elixir -e 'IO.puts 20.0 == 20'
!elixir -e 'IO.puts 20.0 === 20'
!elixir -e 'IO.puts 20.0 !== 20'
# logical operators
# logical OR
!elixir -e 'IO.puts "ABC" == "ABC" || 20 == 30'
!elixir -e 'IO.puts "ABC" == "abc" || 20 == 30'
!echo
# logical AND
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 20'
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 30'
!elixir -e 'IO.puts "ABC" == "def" && 10 > 100'
!echo
# negation
!elixir -e 'IO.puts !("ABC" == "ABC")'
!elixir -e 'IO.puts !("ABC" == "DEF")' | true
false
true
false
false
false
true
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Range $\quad$ Note: a range is not a type but a struct. It is said to be written `start..end`, but does simply writing 1..10 make it a range? | !elixir -e 'IO.inspect Enum.to_list(1..3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//-3)'
!elixir -e 'IO.inspect Enum.to_list(10..0//-3)'
!elixir -e 'IO.inspect Enum.to_list(1..1)'
!elixir -e 'IO.inspect Enum.to_list(1..-1)'
!elixir -e 'IO.inspect Enum.to_list(1..1//2)'
!elixir -e 'IO.inspect Enum.to_list(1..-1//2)'
!elixir -e 'IO.inspect Enum.to_list(1..-1//-2)'
!elixir -e 'IO.inspect 1..9//2' | 1..9//2
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Regular expressions $\quad$ A regular expression is also not a type but a struct. | !elixir -e 'IO.inspect Regex.run ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.scan ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.split ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.replace ~r{[aiueo]},"catapillar", "*"' | "c*t*p*ll*r"
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Collection types: tuples $\quad$ Tuples are defined with braces. Like every Elixir collection, tuples place no restriction on the types of their elements. They normally hold 2 to 4 elements; for more than that, consider using a map or a struct. Tuples are convenient as function return values and are used together with pattern matching. ---cf. uses of braces other than tuples: * value embedding `#{variable}` * regular expressions `~r{}` * maps `%{}` | !elixir -e 'IO.inspect {3.14, :pie, "Apple"}'
!elixir -e '{status, count, action} = {3.14, :pie, "next"}; IO.puts action'
# experiment
# an example of tuple usage
!echo hello > temp.txt
!elixir -e '{status, file} = File.open("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp02.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.write("temp.txt", "goodbye"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp.txt"); IO.inspect {status, file}'
# experiment: can ++ be used on tuples? => no; <> does not work either
# !elixir -e 'IO.inspect {3.14, :pie, "Apple"} ++ {3}'
# experiment: can hd be used on a tuple? => no
# !elixir -e 'IO.inspect hd {3.14, :pie, "Apple"}'
# experiment: does pattern matching work on tuples? => yes
!elixir -e '{a,b,c} = {3.14, :pie, "Apple"}; IO.inspect [c,a,b]'
# experiment
# swapping items
!elixir -e 'a=1; b=3; {b,a}={a,b}; IO.inspect {a,b}'
!elixir -e 'a=1; b=3; c=5; d= 7; {d,c,b,a}={a,b,c,d}; IO.inspect {a,b,c,d}'
# experiment
# can a tuple contain a tuple?
!elixir -e 'IO.inspect {3.14, :pie, "Apple", {3}}' | {3.14, :pie, "Apple", {3}}
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |
Lists $\quad$ Note that Elixir lists are different from arrays in other languages; they are a concept similar to Lisp lists. A non-empty list has a head (hd) and a tail (tl): hd is the single first element and tl is everything after it. | # lists
!elixir -e 'IO.inspect [3.14, :pie, "Apple"]'
!elixir -e 'IO.inspect hd [3.14]'
!elixir -e 'IO.inspect tl [3.14]'
# prepending to a list (fast)
!elixir -e 'IO.inspect ["π" | [3.14, :pie, "Apple"]]'
# appending to a list (slow)
!elixir -e 'IO.inspect [3.14, :pie, "Apple"] ++ ["Cherry"]' | ["π", 3.14, :pie, "Apple"]
[3.14, :pie, "Apple", "Cherry"]
| MIT | learnelixir.ipynb | kalz2q/myjupyternotebooks |