Dataset columns: markdown (string, length 0–37k), code (string, length 1–33.3k), path (string, length 8–215), repo_name (string, length 6–77), license (string, 15 classes).
We generate some random data to classify later on. To make the task a bit harder, we add some noise, and we normalize the data while generating it.

from sklearn.datasets import make_circles
from sklearn.preprocessing import StandardScaler

X, y = make_circles(noise=0.2, factor=0.5, random_state=1)
X = StandardScaler().fit_transform(X)
Python/notebook/一个SVM RBF分类调参的例子.ipynb
zhangmianhongni/MyPractice
apache-2.0
Let us first look at what the data looks like, with a quick visualization:

import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot()
ax.set_title("Input data")
# Plot the training points
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright)
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
plt.show()
Now we classify this data set with an RBF-kernel SVM. For the classification we use a grid search over the 9 combinations formed by C = (0.1, 1, 10) and gamma = (1, 0.1, 0.01), with 4-fold cross-validation, to select the best hyperparameters. This is only an example; in a real application you may need many more parameter combinations for tuning.

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10], "gamma": [1, 0.1, 0.01]}, cv=4)
grid.fit(X, y)
print("The best parameters are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
In other words, among the 9 groups of hyperparameters we supplied, the grid search found that C=10, gamma=0.1 scores highest, and that is our final parameter candidate. This concludes the tuning example. We can, however, still look at a visualization of the ordinary SVM classifications: we train each of the 9 combinations and color the predictions over a mesh grid to observe how well each one classifies. The code is as follows:

import numpy as np

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
for i, C in enumerate((0.1, 1, 10)):
    for j, gamma in enumerate((1, 0.1, 0.01)):
        plt.subplot()
        ...
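The elided plotting body above can be sketched as follows. This is a minimal, self-contained variant: the 0.02 mesh step and the (C, gamma) grid come from the text, while the 3x3 subplot indexing, the contourf coloring, and all styling are illustrative assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for scripted runs; not needed in a notebook
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_circles(noise=0.2, factor=0.5, random_state=1)
X = StandardScaler().fit_transform(X)

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))

for i, C in enumerate((0.1, 1, 10)):
    for j, gamma in enumerate((1, 0.1, 0.01)):
        plt.subplot(3, 3, i * 3 + j + 1)  # one panel per (C, gamma) combination
        clf = SVC(C=C, gamma=gamma).fit(X, y)
        # predict every mesh point and use the predictions to color the background
        Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
        plt.contourf(xx, yy, Z, cmap=plt.cm.RdBu, alpha=0.6)
        plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu, edgecolors='k', s=10)
        plt.xticks(())
        plt.yticks(())
        plt.title('C=%s, gamma=%s' % (C, gamma), fontsize=8)
plt.tight_layout()
plt.savefig('svm_rbf_grid.png')
```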
As you can see, this folder holds a number of plain text files, ending in the .txt extension. Let us open a random file:
f = open('data/arabian_nights/848.txt', 'r')
text = f.read()
f.close()
print(text[:500])
Chapter 9 - Text analysis.ipynb
mikekestemont/lot2016
mit
Here, we use the open() function to create a file object f, which we can use to access the actual text content of the file. Make sure that you pass the 'r' parameter ("read") to open(), and not 'w' ("write"), since 'w' would overwrite and thus erase the existing file. After assigning the string returned by ...

with open('data/arabian_nights/848.txt', 'r') as f:
    text = f.read()
print(text[:500])
This code block does exactly the same thing as the previous one but saves you some typing. In this chapter we would like to work with all the files in the arabian_nights directory. This is where loops come in handy of course, since what we really would like to do, is iterate over the contents of the directory. Accessin...
import os
Using the dot-syntax (os.xxx), we can now access all the functions that come with this module, such as listdir(), which returns a list of the items included under a given directory:

filenames = os.listdir('data/arabian_nights')
print(len(filenames))
print(filenames[:20])
The function os.listdir() returns a list of strings, representing the filenames contained under a directory.

Quiz

In Burton's translation some of the 1001 nights are missing. How many? Can you come up with a clever way to find out which nights are missing? Hint: a counting loop and some string casting might be useful ...
# your code goes here
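One possible approach to this quiz, sketched on a hypothetical directory listing (in the notebook you would use os.listdir('data/arabian_nights') instead of the hard-coded list): cast the numeric part of each filename to an int, collect the numbers you do have, and check which of 1 to 1001 are absent.

```python
# hypothetical stand-in for os.listdir('data/arabian_nights')
filenames = ['1.txt', '2.txt', '4.txt', '7.txt']

# cast the numeric part of each filename back to an integer
present = {int(fn[:-4]) for fn in filenames if fn.endswith('.txt')}

# every night from 1 to 1001 that has no file is missing
missing = [n for n in range(1, 1002) if n not in present]
print(len(missing))
print(missing[:5])  # → [3, 5, 6, 8, 9]
```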
With os.listdir(), you need to make sure that you pass the correct path to an existing directory:
os.listdir('data/belgian_nights')
It might therefore be convenient to check whether a directory actually exists in a given location:
print(os.path.isdir('data/arabian_nights'))
print(os.path.isdir('data/belgian_nights'))
The second directory, naturally, does not exist and isdir() evaluates to False in this case. Creating a new (and thus empty) directory is also easy using os:
os.mkdir('belgian_nights')
We can see that it lives in the present working directory now, by typing ls again:
ls
Or we use Python:
print(os.path.isdir('belgian_nights'))
Removing directories is also easy, but PLEASE watch out, sometimes it is too easy: if you remove the wrong directory in Python, it will be gone forever... Unlike other applications, Python does not keep a copy of it in your Trash and it does not have a Ctrl-Z button. Please be careful with what you do, since with great po...

import shutil
shutil.rmtree('belgian_nights')
And lo and behold: the directory has disappeared again:
print(os.path.isdir('belgian_nights'))
Here, we use the rmtree() command to remove the entire directory in a recursive way: even if the directory isn't empty and contains files and subfolders, we will remove all of them. The os module also comes with an rmdir() function, but this will not allow you to remove a directory which is not empty, as becomes clear in the OSEr...
os.rmdir('data/arabian_nights')
The folder contains things and therefore cannot be removed using this function. There are, of course, also ways to remove individual files or check whether they exist:
os.mkdir('belgian_nights')
f = open('belgian_nights/1001.txt', 'w')
f.write('Content')
f.close()
print(os.path.exists('belgian_nights/1001.txt'))
os.remove('belgian_nights/1001.txt')
print(os.path.exists('belgian_nights/1001.txt'))
Here, we created a directory, wrote a new file to it (1001.txt), and removed it again. Using os.path.exists() we monitored at which point the file existed. Finally, the shutil module also ships with a useful copyfile() function which allows you to copy files from one location to another, possibly with another name. To ...
shutil.copyfile('data/arabian_nights/66.txt', 'new_66.txt')
Indeed, we have added an exact copy of night 66 to our present working directory:
ls
We can safely remove it again:
os.remove('new_66.txt')
Paths

The paths we have used so far are 'relative' paths, in the sense that they are relative to the place on our machine from which we execute our Python code. Absolute paths can also be retrieved and will differ on each computer, because they typically include user names etc.:
os.path.abspath('data/arabian_nights/848.txt')
While absolute paths are longer to type, they have the advantage that they can be used anywhere on your computer (i.e. irrespective of where you run your code from). Paths can be tricky. Suppose that we would like to open one of our filenames:
filenames = os.listdir('data/arabian_nights')
random_filename = filenames[9]
with open(random_filename, 'r') as f:
    text = f.read()
print(text[:500])
Python throws a FileNotFoundError, complaining that the file we wish to open does not exist. This situation stems from the fact that os.listdir() only returns the base name of a given file, and not an entire (absolute or relative) path to it. To properly access the file, we must therefore not forget to include the rest...
filenames = os.listdir('data/arabian_nights')
random_filename = filenames[9]
with open('data/arabian_nights/' + random_filename, 'r') as f:
    text = f.read()
print(text[:500])
Apart from os.listdir() there are a number of other common ways to obtain directory listings in Python. Using the glob module for instance, we can easily access the full relative path leading to our Arabian Nights:
import glob

filenames = glob.glob('data/arabian_nights/*')
print(filenames[:10])
The asterisk (*) in the argument passed to glob.glob() is worth noting here. Just like with regular expressions, this asterisk is a sort of wildcard which will match any series of characters (i.e. the filenames under arabian_nights). When we exploit this wildcard syntax, glob.glob() offers another distinct advantage: w...
filenames = glob.glob('data/arabian_nights/*.txt')
print(filenames[:10])
The command in this code block will only load filenames that end in ".txt". This is useful when we would like to ignore other sorts of junk files that might be present in a directory. To replicate similar behaviour with os.listdir(), we would have needed a typical for-loop, such as:

filenames = []
for fn in os.listdir('data/arabian_nights'):
    if fn.endswith('.txt'):
        filenames.append(fn)
print(filenames[:10])
Or for you stylish coders out there, you can show off with a list comprehension:
filenames = [fn for fn in os.listdir('data/arabian_nights') if fn.endswith('.txt')]
However, when using glob.glob(), you might sometimes want to be able to extract a file's base name again. There are several solutions to this:
filenames = glob.glob('data/arabian_nights/*.txt')
fn = filenames[10]
# simple string splitting:
print(fn.split('/')[-1])
# using os.sep:
print(fn.split(os.sep)[-1])
# using os.path:
print(os.path.basename(fn))
Both os.sep and os.path.basename have the advantage that they know what separator is used for paths in the operating system, so you don't need to explicitly code it like in the first solution. Separators differ between Windows (backslash) and Mac/Linux (forward slash). Finally, sometimes, you might be interested in all...
for root, dirs, files in os.walk("data"):
    print(files)
As you can see, os.walk() allows you to efficiently loop over the entire tree. As always, don't forget that help is right around the corner in your notebooks. Using help(), you can quickly access the documentation of modules and their functions etc. (but only after you have imported the modules first!).
help(os.walk)
Quiz

In the next part of this chapter, we will need a way to sort our stories from the first, to the very last night. For our own convenience we will use a little hack for this. In this quiz, we would like you to create a new folder under the data directory, called '1001'. You should copy all the original files from arabia...
# your quiz code
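A sketch of the copying hack this quiz asks for, here run on throwaway temporary directories instead of the real data folders: zero-padding the night number with str.zfill() makes plain string sorting agree with numeric order.

```python
import os
import shutil
import tempfile

# toy stand-ins for data/arabian_nights and data/1001
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for night in (7, 66, 848):
    with open(os.path.join(src, '%d.txt' % night), 'w') as f:
        f.write('night %d' % night)

for fn in os.listdir(src):
    if fn.endswith('.txt'):
        # zero-pad to four digits: '7.txt' becomes '0007.txt'
        new_name = fn[:-4].zfill(4) + '.txt'
        shutil.copyfile(os.path.join(src, fn), os.path.join(dst, new_name))

print(sorted(os.listdir(dst)))  # → ['0007.txt', '0066.txt', '0848.txt']
```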
Parsing files

Using the code from the previous quiz, it is now trivial to sort our nights sequentially on the basis of their actual name (i.e. a string variable):

for fn in sorted(os.listdir('data/1001')):
    print(fn)
Using the old filenames, this was not possible directly, because of the way Python sorts strings of unequal lengths. Note that the numbers in the filenames are represented as strings, which are completely different from real numeric integers, and thus will be sorted differently:

for fn in sorted(os.listdir('data/arabian_nights/')):
    print(fn)
Note: There is a more elegant, but also slightly less trivial way to achieve the correct order in this case:
for fn in sorted(os.listdir('data/arabian_nights/'), key=lambda nb: int(nb[:-4])):
    print(fn)
Should you be interested: here, we pass a key argument to sort, which specifies which operations should be applied to the filenames before actually sorting them. Here, we specify a so-called lambda function as key, which is less intuitive to read, but which allows you to specify a sort of 'mini-function' in a very conde...

import re

def preprocess(in_str):
    out_str = ''
    for c in in_str.lower():
        if c.isalpha() or c.isspace():
            out_str += c
    whitespace = re.compile(r'\s+')
    out_str = whitespace.sub(' ', out_str)
    return out_str
This code reviews some of the materials from previous chapters, including the use of a regular expression, which converts all consecutive instances of whitespace (including line breaks, for instance) to a single space. After executing the previous code block, we can now test our function:
old_str = 'This; is -- a very DIRTY string!'
new_str = preprocess(old_str)
print(new_str)
We can now apply this function to the contents from a random night:
with open('data/1001/0007.txt', 'r') as f:
    in_str = f.read()
print(preprocess(in_str))
This text looks cleaner already! We can now start to extract individual tokens from the text and count them. This process is called tokenization. Here, we make the naive assumption that words are simply space-free alphabetic strings -- which is of course wrong in the case of English words like "can't". Note that for ma...
def tokenize(in_str):
    tokens = in_str.split()
    tokens = [t for t in tokens if t]
    return tokens
Using the list comprehension, we make sure that we do not accidentally return empty strings as a token, for instance, at the beginning of a text which starts with a newline. Remember that anything in Python with a length of 0, will evaluate to False, which explains the if t in the comprehension: empty strings will fail...
with open('data/1001/0007.txt', 'r') as f:
    in_str = f.read()
tokens = tokenize(preprocess(in_str))
print(tokens[:10])
We can now start analyzing our nights. A good start would be to check the length of each night in words:
print(len(tokens))
Quiz

Iterate over all the nights in 1001 in a sorted way. Open, preprocess and tokenize each text. Store in a list called word_counts how many words each story has.
# your quiz code
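A sketch of this quiz's solution. The loop below inlines minimal versions of the preprocess() and tokenize() functions from above and runs them over two toy strings; in the notebook you would read each sorted file from data/1001 instead.

```python
import re

def preprocess(in_str):
    out_str = ''.join(c for c in in_str.lower() if c.isalpha() or c.isspace())
    return re.sub(r'\s+', ' ', out_str)

def tokenize(in_str):
    return [t for t in in_str.split() if t]

# toy stand-ins for the sorted night files under data/1001
texts = ['First night:\n a short tale.', 'Second   night, a slightly longer tale!']

word_counts = []
for text in texts:
    word_counts.append(len(tokenize(preprocess(text))))
print(word_counts)  # → [5, 6]
```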
We now have a list of numbers, which we can plot over time. We will cover plotting more extensively in one of the next chapters. The things below are just a teaser. Start by importing matplotlib, which is imported as follows by convention:
import matplotlib.pyplot as plt
%matplotlib inline
The second line is needed to make sure that the plots will properly show up in our notebook. Let us start with a simple visualization:
plt.plot(word_counts)
As you can see, this simple command can be used to quickly obtain a visualization that shows interesting trends. On the y-axis, we plot absolute word counts for each of our nights. The x-axis is figured out automatically by matplotlib and adds an index on the horizontal x-axis. Implicitly, it interprets our command as ...
plt.plot(range(0, len(word_counts)), word_counts)
When plt.plot receives two flat lists as arguments, it plots the first along the x-axis, and the second along the y-axis. If it only receives one list, it plots it along the y-axis and uses the range we now (redundantly) specified here for the x-axis. This is in fact a suboptimal plot, since the index of the first dat...

filenames = sorted(os.listdir('data/1001'))
idxs = [int(i[:-4]) for i in filenames]
print(idxs[:20])
print(min(idxs))
print(max(idxs))
We can now make our plot more truthful, and add some bells and whistles:
plt.plot(idxs, word_counts, color='r')
plt.xlabel('Night')
plt.ylabel('# words (absolute counts)')
plt.title('The Arabian Nights')
plt.xlim(1, 1001)
Quiz

Using axvline() you can add vertical lines to a plot, for instance at position 500:

plt.plot(idxs, word_counts, color='r')
plt.xlabel('Night')
plt.ylabel('# words (absolute counts)')
plt.title(r'The Arabian Nights')
plt.xlim(1, 1001)
plt.axvline(500, color='g')
Write code that plots the position of the missing nights using this function (and blue lines).
# quiz code goes here
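A sketch of this quiz on toy data. The idxs and word_counts stand-ins below replace the values computed earlier from the real files; the pattern is simply one plt.axvline(..., color='b') call per missing night.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; not needed inside the notebook
import matplotlib.pyplot as plt

# toy stand-ins for the values computed from data/1001 earlier
idxs = [1, 2, 4, 5]
word_counts = [120, 80, 150, 90]
missing = [n for n in range(1, max(idxs) + 1) if n not in set(idxs)]

plt.plot(idxs, word_counts, color='r')
for night in missing:
    plt.axvline(night, color='b')  # one blue vertical line per missing night
plt.savefig('missing_nights.png')
```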
Right now, we are visualizing texts, but we might also be interested in the vocabulary used in the story collection. Counting how often a word appears in a text is trivial for you right now with custom code, for instance:
cnts = {}
for word in tokens:
    if word in cnts:
        cnts[word] += 1
    else:
        cnts[word] = 1
print(cnts)
One interesting item which you can use for counting in Python is the Counter object, which we can import as follows:
from collections import Counter
This Counter makes it much easier to write code for counting. Below you can see how this counter automatically creates a dictionary-like structure:
cnt = Counter(tokens)
print(cnt)
If we would like to find which items are most frequent for instance, we could simply do:
print(cnt.most_common(25))
We can also pass the Counter the tokens to count in multiple stages:
cnt = Counter()
cnt.update(tokens)
cnt.update(tokens)
print(cnt.most_common(25))
After passing our tokens twice to the counter, we see that the numbers double in size.

Quiz

Write code that makes a word frequency counter named vocab, which counts the cumulative frequencies of all words in the Arabian Nights. Which are the 15 most frequent words? Does that make sense?
# quiz code
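A sketch of the vocab quiz on toy data: update one Counter with the tokens of every night in turn, so that it accumulates frequencies across the whole collection. In the notebook, nights_tokens would be built by preprocessing and tokenizing every file under data/1001.

```python
from collections import Counter

# toy stand-ins for the tokenized nights
nights_tokens = [['the', 'king', 'said'], ['the', 'queen', 'said', 'the']]

vocab = Counter()
for tokens in nights_tokens:
    vocab.update(tokens)  # cumulative frequencies across all nights

print(vocab.most_common(2))  # → [('the', 3), ('said', 2)]
```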
Let us now finally visualize the frequencies of the 15 most frequent items using a standard barplot in matplotlib. This can be achieved as follows. We first split out the names and frequencies, since .most_common(n) returns a list of tuples, and we create indices:

freqs = [f for _, f in vocab.most_common(15)]
words = [w for w, _ in vocab.most_common(15)]
# note the use of underscores for 'throwaway' variables
idxs = range(1, len(freqs) + 1)
Next, we simply do:
plt.barh(idxs, freqs, align='center')
plt.yticks(idxs, words)
plt.xlabel('Cumulative absolute frequencies')
plt.ylabel('Words')
Et voilà!

Closing Assignment

In this larger assignment, you will have to perform some basic text processing on the larger set of XML-encoded files under data/TEI/french_plays. For this assignment, there are several subtasks: 1. Each of these files represents a play written by a particular author (see the <author> ...

from IPython.core.display import HTML

def css_styling():
    styles = open("styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
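For the closing assignment, the author-extraction step could be sketched as below with the standard-library ElementTree parser. The inline snippet is a made-up stand-in for a TEI file; the real documents live under data/TEI/french_plays and their exact structure may differ.

```python
import xml.etree.ElementTree as ET

# made-up TEI-like fragment standing in for a real play file
xml_doc = '<TEI><teiHeader><titleStmt><author>Molière</author></titleStmt></teiHeader></TEI>'

root = ET.fromstring(xml_doc)
# .// searches the whole tree for the first <author> element
author = root.find('.//author').text
print(author)  # → Molière
```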
To launch your query, pass in your dates. When passing multiple dates, pass them as a list of strings. We will time the multi-day query.

Important Date Details for GDELT 1.0 and 2.0

For GDELT 2.0, every 15 minute interval is a zipped CSV file, and gdeltPyR makes concurrent HTTP GET requests to each file. When the covera...

import platform
import multiprocessing

print(platform.platform())
print(multiprocessing.cpu_count())
examples/Basic gdeltPyR Query.ipynb
linwoodc3/gdeltPyR
gpl-3.0
And now the query.
%time results = gd.Search(['2016 10 19','2016 10 22'],table='events',coverage=True)
Let's get an idea for the number of results we returned.
results.info()
Now, you can invoke the ls command from within the notebook to list the files in this directory. Check that the file is there. You can invoke the rootls command to see what's inside the file.
! ls .
! echo Now listing the content of the file
! rootls -l ./histos.root
SummerStudentCourse/2019/Exercises/WorkingWithFiles/WritingOnFiles_Solution.ipynb
root-mirror/training
gpl-2.0
Access the histograms and draw them in Python. Remember that you need to create a TCanvas before and draw it too in order to inline the plots in the notebooks. You can switch to the interactive JavaScript visualisation using the %jsroot on "magic" command.
%jsroot on
f = ROOT.TFile("histos.root")
c = ROOT.TCanvas()
c.Divide(2,2)
c.cd(1)
f.gaus.Draw()
c.cd(2)
f.expo.Draw()
c.cd(3)
f.unif.Draw()
c.Draw() # Draw the Canvas
You can now repeat the exercise above using C++. Transform the cell in a C++ cell using the %%cpp "magic".
%%cpp
TFile f("histos.root");
TH1F *hg, *he, *hu;
f.GetObject("gaus", hg);
f.GetObject("expo", he);
f.GetObject("unif", hu);
TCanvas c;
c.Divide(2,2);
c.cd(1);
hg->Draw();
c.cd(2);
he->Draw();
c.cd(3);
hu->Draw();
c.Draw(); // Draw the Canvas
Inspect the content of the file.

TXMLFile

ROOT provides a different kind of TFile, TXMLFile. It has the same interface and it's very useful to better understand how objects are written in files by ROOT. Repeat the exercise above, either in Python or C++ - your choice, using a TXMLFile rather than a TFile and then displ...

f = ROOT.TXMLFile("histos.xml", "RECREATE")
hg = ROOT.TH1F("gaus", "Gaussian numbers", 64, -4, 4)
he = ROOT.TH1F("expo", "Exponential numbers", 64, -4, 4)
hu = ROOT.TH1F("unif", "Uniform numbers", 64, -4, 4)
for i in xrange(1024):
    hg.Fill(rndm.Gaus())
    he.Fill(rndm.Exp(1))
    hu.Fill(rndm.Uniform(-4, 4))
for h in ...
Let's give names to the indexes and create them in the ES server. Note that utils.create_ES_index() deletes any existing index with the same name before creating it.

index_name_git = 'github-git'
utils.create_ES_index(es, index_name_git, utils.MAPPING_GITHUB_GIT)

index_name_github_issues = 'github-issues'
utils.create_ES_index(es, index_name_github_issues, utils.MAPPING_GITHUB_ISSUES)
Light Github index generator.ipynb
jsmanrique/grimoirelab-personal-utils
mit
Let's import the needed backends from Perceval:

from perceval.backends.core.git import Git
from perceval.backends.core.github import GitHub
For each repository in the settings file, get the git-related info and upload it to the defined git ES index:

for repo_url in settings['github-repo']:
    repo_owner = repo_url.split('/')[-2]
    repo_name = repo_url.split('/')[-1]
    repo_git_url = repo_url + '.git'
    git_repo = Git(uri=repo_git_url, gitpath='/tmp/' + repo_name)
    utils.logging.info('Parsing log from {}'.format(repo_name))
    items = [...
For each repository in the settings file, get the GitHub-issues-related info and upload it to the defined GitHub issues ES index:

import datetime

for repo_url in settings['github-repo']:
    repo_owner = repo_url.split('/')[-2]
    repo_name = repo_url.split('/')[-1]
    repo_git_url = repo_url + '.git'
    github_repo = GitHub(owner=repo_owner, repository=repo_name, api_token=settings['github_token'])
    utils.loggin...
What do we actually have here?
log.info()
demos/20190425_JUGH_Kassel/DatenanalysenProblemeEntwicklung.ipynb
feststelltaste/software-analytics
gpl-3.0
<b>1</b> DataFrame (~ a programmable Excel worksheet), <b>4</b> Series (= columns), <b>5665947</b> rows (= entries).

III. Cleaning

Data is often not the way you need it; some data types do not fit yet. We convert the timestamps:

log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
We compute the age of every line-of-code change:

log['age'] = pd.Timestamp('today') - log['timestamp']
log.head()
IV. Enriching

Combine the existing data with other data sources, but also: extract parts from the existing data. This makes several <b>perspectives</b> on a problem possible. We assign each line change to a component:

log['component'] = log['path'].str.split("/").str[:2].str.join(":")
log.head()
<br/> <small><i>String operations take a while, but there are various ways to optimize them!</i></small>

V. Aggregating

The existing data is often too much for manual inspection, but new insights into a problem are often possible from a higher altitude. We group by component and work with the youngest line change of each one:

age_per_component = log.groupby('component')['age'].min().sort_values()
age_per_component.head()
VI. Visualizing

Graphical representations give analyses the finishing touch. Problems can be communicated better to outsiders when they are presented visually. We build a chart with the minimum age per component:
age_per_component.plot.bar(figsize=[20,5]);
Fill in the code for the functions below. main() is already set up to call the functions with a few different inputs, printing 'OK' when each function is correct. The starter code for each function includes a 'return' which is just a placeholder for your code.

A. doughnuts

Given an int count of a number of doughnuts, r...

def doughnuts(count):
    # +++your code here+++
    return

test(doughnuts(4), 'Number of doughnuts: 4')
test(doughnuts(9), 'Number of doughnuts: 9')
test(doughnuts(10), 'Number of doughnuts: many')
test(doughnuts(99), 'Number of doughnuts: many')
2.5 - String exercises.ipynb
sastels/Onboarding
mit
B. both_ends

Given a string s, return a string made of the first 2 and the last 2 chars of the original string, so 'spring' yields 'spng'. However, if the string length is less than 2, return instead the empty string.

def both_ends(s):
    # +++your code here+++
    return

test(both_ends('spring'), 'spng')
test(both_ends('Hello'), 'Helo')
test(both_ends('a'), '')
test(both_ends('xyz'), 'xyyz')
C. fix_start

Given a string s, return a string where all occurrences of its first char have been changed to '*', except do not change the first char itself. e.g. 'babble' yields 'ba**le'. Assume that the string is length 1 or more. Hint: s.replace(stra, strb) returns a version of string s where all instances of stra have ...

def fix_start(s):
    # +++your code here+++
    return

test(fix_start('babble'), 'ba**le')
test(fix_start('aardvark'), 'a*rdv*rk')
test(fix_start('google'), 'goo*le')
test(fix_start('doughnut'), 'doughnut')
D. mix_up

Given strings a and b, return a single string with a and b separated by a space ' ', except swap the first 2 chars of each string. Assume a and b are length 2 or more.

def mix_up(a, b):
    # +++your code here+++
    return

test(mix_up('mix', 'pod'), 'pox mid')
test(mix_up('dog', 'dinner'), 'dig donner')
test(mix_up('gnash', 'sport'), 'spash gnort')
test(mix_up('pezzy', 'firm'), 'fizzy perm')
E. verbing

Given a string, if its length is at least 3, add 'ing' to its end. Unless it already ends in 'ing', in which case add 'ly' instead. If the string length is less than 3, leave it unchanged. Return the resulting string.

def verbing(s):
    # +++your code here+++
    return

test(verbing('hail'), 'hailing')
test(verbing('swimming'), 'swimmingly')
test(verbing('do'), 'do')
F. not_bad

Given a string, find the first appearance of the substrings 'not' and 'bad'. If the 'bad' follows the 'not', replace the whole 'not'...'bad' substring with 'good'. Return the resulting string. So 'This dinner is not that bad!' yields: 'This dinner is good!'

def not_bad(s):
    # +++your code here+++
    return

test(not_bad('This movie is not so bad'), 'This movie is good')
test(not_bad('This dinner is not that bad!'), 'This dinner is good!')
test(not_bad('This tea is not hot'), 'This tea is not hot')
test(not_bad("It's bad yet not"), "It's bad yet not")
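One possible solution to the not_bad exercise (an illustrative sketch, not the official answer), using str.find() to locate the first 'not' and 'bad':

```python
def not_bad(s):
    # find the first occurrence of each substring (-1 if absent)
    n = s.find('not')
    b = s.find('bad')
    if n != -1 and b > n:
        # splice: everything before 'not', then 'good', then everything after 'bad'
        s = s[:n] + 'good' + s[b + 3:]
    return s

print(not_bad('This movie is not so bad'))  # → This movie is good
print(not_bad('This tea is not hot'))       # → This tea is not hot
```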
G. front_back

Consider dividing a string into two halves. If the length is even, the front and back halves are the same length. If the length is odd, we'll say that the extra char goes in the front half. e.g. 'abcde', the front half is 'abc', the back half 'de'. Given 2 strings, a and b, return a string of the form a-f...

def front_back(a, b):
    # +++your code here+++
    return

test(front_back('abcd', 'xy'), 'abxcdy')
test(front_back('abcde', 'xyz'), 'abcxydez')
test(front_back('Kitten', 'Donut'), 'KitDontenut')
2.1) Dealing with Missing Values – Imputation

Imputation is the act of replacing missing data values in a data set with meaningful values. Simply removing rows with missing feature values is bad practice if the data is scarce, as a lot of information could be lost. In addition, deletion methods can introduce bias [9]. ...

#Load additional training data
add_training_data = pd.read_csv("/Users/Max/Desktop/Max's Folder/Uni Work/Data Science MSc/Machine Learning/ML Kaggle Competition /Data Sets/Additional Training Data Set .csv", header=0, index_col=0)

#observe additional training data
add_training_data

#quantify class counts of additio...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
A couple of imputation methods were tried in the original Notebook: imputation using the column (feature) mean, and imputation with K-Nearest Neighbours. The most effective and theoretically advocated method, producing the best results, was the second: imputation using K-Nearest Neighbours. Note: the fancyimput...

#imputation via KNN
from fancyimpute import KNN

knn_trial = full_training_data_inc
knn_trial
complete_knn = KNN(k=3).complete(knn_trial)

#convert imputed matrix back to dataframe for visualisation and convert 'prediction' dtype to int
complete_knn_df = pd.DataFrame(complete_knn, index=full_training_data_inc.index, co...
2.2) Dealing with Confidence Labels

One approach employed to incorporate the confidence labels was to use the confidence label of each instance as the corresponding sample weight for the instance. Theoretically, a confidence label smaller than 1 would reduce the C parameter, which results in a lower penalty for mis...

#Load confidence annotations
confidence_labels = pd.read_csv("/Users/Max/Desktop/Max's Folder/Uni Work/Data Science MSc/Machine Learning/ML Kaggle Competition /Data Sets/Annotation Confidence .csv", header=0, index_col=0)

#quantify confidence labels (how many are 1, how many are 0.66)
print(confidence_labels.confide...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
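The sample-weight approach described above can be sketched on synthetic data (the names and weight values here are illustrative, chosen to mirror the 1 / 0.66 confidence labels in the text):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = (X[:, 0] > 0).astype(int)
# Hypothetical per-instance confidence labels used as sample weights;
# a weight below 1 lowers the misclassification penalty for that sample.
confidence = np.where(rng.rand(40) > 0.5, 1.0, 0.66)
clf = SVC(kernel='rbf')
clf.fit(X, y, sample_weight=confidence)
```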
The original Notebook tried a couple of methods of incorporating the confidence labels into the model: Use all data samples, irrespective of confidence labels. However, the confidence label of each instance was set to be sample weight of each instance in the training phase. Only use instances that have a confidence ...
#only keep data instance with confidence label = 1 conf_full_train = full_train_wcl.loc[full_train_wcl['confidence'] == 1] conf_full_train #quantify class counts conf_full_train.prediction.value_counts() #convert full training data dataframe with confidence instances only to matrix conf_ft_matrix = conf_full_train.a...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
2.3) Dealing with Class Imbalance Binary classification tasks often suffer from imbalanced class splits. Training a model on a data set with more instances for one class than the other can result in biases towards the majority class, as sensitivity will be lost in detecting the minority class [17]. This is pertinent be...
from imblearn.over_sampling import SMOTE from collections import Counter #fit over-sampling to training data inputs and putputs over_sampler = SMOTE(ratio='auto', k_neighbors=5, kind='regular', random_state=0) over_sampler.fit(conf_ft_inputs, conf_ft_outputs) #create new inputs and outputs with correct class proport...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
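The balancing idea behind SMOTE can be sketched with plain random over-sampling, a simpler relative that duplicates minority rows instead of interpolating new ones as SMOTE does (toy data; this is not the notebook's imblearn code):

```python
import numpy as np
from collections import Counter

rng = np.random.RandomState(0)
# 10 majority-class and 4 minority-class samples.
X = np.vstack([rng.randn(10, 3), rng.randn(4, 3) + 2.0])
y = np.array([0] * 10 + [1] * 4)
# Duplicate random minority rows until the classes are balanced.
deficit = np.sum(y == 0) - np.sum(y == 1)
extra = rng.choice(np.where(y == 1)[0], size=deficit, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
print(Counter(y_bal))  # both classes now have 10 samples
```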
3. Pre-Processing The pre-processing of the data consisted of several steps. First, the features were rescaled appropriately. Secondly, Feature Extraction was performed to reduce the unwieldy dimensionality of the training data set, concomitantly increasing the signal-to-noise ratio and decreasing time complexity. This...
#standardise the full training data with confidence labels 1 only scaler_2 = preprocessing.StandardScaler().fit(conf_ft_inputs) std_conf_ft_in = scaler_2.transform(conf_ft_inputs) std_conf_ft_in
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
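The rescaling step above can be sketched on toy data: after `StandardScaler`, every column has zero mean and unit variance.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
scaler = StandardScaler().fit(X)   # learns per-column mean and std
X_std = scaler.transform(X)        # (x - mean) / std, column-wise
print(X_std)
```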
3.2) Principal Component Analysis (PCA) High-dimensionality should be reduced because it is likely to contain noisy features and because high-dimensionality increases computational time complexity [18]. Dimensionality reduction can be achieved via feature selection methods, such as filters and wrappers [19], or via fea...
import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline #preprocessing: PCA (feature construction). High number of pcs chosen to plot a graph #showing how much more variance is explained as pc number increases pca_2 = PCA(n_components=700, random_state=0) std_conf_ft_in_pca = pca_2.fit_transform(std_...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
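A minimal sketch of the PCA feature-construction step on random data (the component count here is arbitrary, not the 230 used later in the notebook):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
pca = PCA(n_components=2, random_state=0)
X_reduced = pca.fit_transform(X)
# explained_variance_ratio_ reports the share of variance each PC keeps.
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```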
The cell below will plot how much more of the variance in the data set is explained as the number of principal components included is increased.
#calculate a list of cumulative sums for amount of variance explained cumulative_variance = np.cumsum(pca_2.explained_variance_ratio_) len(cumulative_variance) #add 0 to the beginning of the list, otherwise list starts with variance explained by 1 pc cumulative_variance = np.insert(cumulative_variance, 0, 0) #define ...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
The graph above suggests that the maximum number of principal components should not exceed 300, as less and less variance is explained as the number of principal components included increases beyond 300. For the optimisation, the optimal number of principal components was initially assumed to be 230.
#preprocessing: PCA (feature construction) pca_2 = PCA(n_components=230, random_state=0) std_conf_ft_in_pca = pca_2.fit_transform(std_conf_ft_in) #quantify ratio of variance explain by principal components print("Total Variance Explained by PCs (%): ", np.sum(pca_2.explained_variance_ratio_))
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
4. Model Selection The optimisation was conducted with an exhaustive grid search, for two kernels: the polynomial kernel and the RBF kernel. The initial search for optimal parameters was conducted on a logarithmic scale to explore as much of the parameter space as possible....
#this cell takes around 7 minutes to run #parameter optimisation with Exhaustive Grid Search, with class weight original_c_range = np.arange(0.85, 1.01, 0.01) gamma_range = np.arange(0.00001, 0.00023, 0.00002) #define parameter ranges to test param_grid = [{'C': original_c_range, 'gamma': gamma_range, 'kernel': ['rbf...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
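The exhaustive grid-search step can be sketched on synthetic data; the grid here is deliberately tiny, unlike the fine-grained `C`/`gamma` ranges in the cell above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
# Every (C, gamma) combination is fitted and scored with 3-fold CV.
param_grid = [{'C': [0.1, 1, 10], 'gamma': [0.01, 0.1], 'kernel': ['rbf']}]
clf = GridSearchCV(SVC(), param_grid, cv=3)
clf.fit(X, y)
print(clf.best_params_, clf.best_score_)
```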
The cell below will plot two heat-maps side by side, showing how the training accuracy and the testing accuracy change during cross-validation for different combinations of parameters.
#Draw heatmap of the validation accuracy as a function of gamma and C fig = plt.figure(figsize=(10, 10)) ix=fig.add_subplot(1,2,1) val_scores = clf.cv_results_['mean_test_score'].reshape(len(original_c_range),len(gamma_range)) val_scores ax = sns.heatmap(val_scores, linewidths=0.5, square=True, cmap='PuBuGn', ...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
The cells below will plot a validation curve for Gamma.
#import module/library from sklearn.model_selection import validation_curve import matplotlib.pyplot as plt %matplotlib inline #specifying gamma parameter range to plot for validation curve param_range = gamma_range param_range #calculating train and validation scores train_scores, valid_scores = validation_curve(...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
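The computation behind such a curve can be sketched (computing, not plotting, the scores; the gamma range is illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=5, random_state=0)
gamma_range = np.logspace(-3, 1, 5)
# One row of scores per gamma value, one column per CV fold.
train_scores, valid_scores = validation_curve(
    SVC(kernel='rbf'), X, y,
    param_name='gamma', param_range=gamma_range, cv=3)
```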
The cells below will plot the Learning Curve.
#import module/library from sklearn.model_selection import learning_curve #define training data size increments td_size = np.arange(0.1, 1.1, 0.1) #calculating train and validation scores train_sizes, train_scores, valid_scores = learning_curve(SVC(C=0.92, kernel='rbf', gamma=0.00011, class_weight={0:1.33, 1:1}), st...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
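A minimal sketch of the learning-curve computation, with illustrative sizes rather than the notebook's fitted hyperparameters:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

X, y = make_classification(n_samples=90, n_features=5, random_state=0)
# Scores are computed at 4 increasing training-set sizes, 3 CV folds each.
sizes, train_scores, valid_scores = learning_curve(
    SVC(kernel='rbf'), X, y,
    train_sizes=np.linspace(0.2, 1.0, 4), cv=3)
```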
Finding the Best Number of Principal Components The cells below will show the optimisation of the number of principal components to include. This is done by taking a range of principal-component counts, conducting PCA for each specified number in the interval and calculating the average of the test score over 3-fold cross-...
#this cell may take several minutes to run #plot how the number of PC's changes the test accuracy no_pcs = np.arange(20, 310, 10) compute_average_of_5 = [] for t in range(0,5): pcs_accuracy_change = [] for i in no_pcs: dummy_inputs = std_conf_ft_in dummy_outputs = conf_ft_outputs pca_du...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
Making Predictions The following cells will prepare the test data by getting it into the right format.
#Load the complete training data set test_data = pd.read_csv("/Users/Max/Desktop/Max's Folder/Uni Work/Data Science MSc/Machine Learning/ML Kaggle Competition /Data Sets/Testing Data Set.csv", header=0, index_col=0) ##Observe the test data test_data #turn test dataframe into matrix test_data_matrix = test_data.as_...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
The following cell will apply the same pre-processing applied to the training data to the test data.
#pre-process test data in same way as train data scaled_test = scaler_2.transform(test_data_matrix) transformed_test = pca_2.transform(scaled_test) transformed_test.shape
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
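The key rule in that step can be sketched on random data: fit the scaler and the PCA on training data only, then apply the already-fitted transforms to the test data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_train, X_test = rng.randn(50, 10), rng.randn(20, 10)
scaler = StandardScaler().fit(X_train)        # fitted on train only
pca = PCA(n_components=3, random_state=0).fit(scaler.transform(X_train))
# transform (never fit_transform) on the test set.
X_test_prep = pca.transform(scaler.transform(X_test))
print(X_test_prep.shape)
```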
The following cells will produce predictions on the test data using the final model.
#define and fit final model with best parameters from grid search final_model = SVC(C=0.92, cache_size=1000, kernel='rbf', gamma=0.00011, class_weight={0:1.33, 1:1}) final_model.fit(std_conf_ft_in_pca, conf_ft_outputs) #make test data predictions predictions = final_model.predict(transformed_test) #create dictionary ...
SVM Binary Classification.ipynb
MaxYousif/Data-Science-MSc-Projects
mit
This code fills the entire first row first, then the second, and so on. If we wanted the first column to be filled first, then the second column, and so on, what would the code look like? An example: if the user typed the following command “x = cria_mat...
def cria_matriz(num_linhas, num_colunas): matriz = [] #lista vazia for i in range(num_linhas): linha = [] for j in range(num_colunas): linha.append(0) matriz.append(linha) for i in range(num_colunas): for j in range(num_linhas): matriz[j][i] = int(in...
.ipynb_checkpoints/Curso Introdução à Ciência da Computação com Python - Parte 2-checkpoint.ipynb
marcelomiky/PythonCodes
mit
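The column-first filling order asked about above can be sketched with a deterministic value source in place of the interactive `input()` calls, so the behaviour is easy to check (the `valores` parameter is an addition for illustration, not part of the original exercise):

```python
def cria_matriz(num_linhas, num_colunas, valores):
    matriz = [[0] * num_colunas for _ in range(num_linhas)]
    it = iter(valores)
    for i in range(num_colunas):     # walk columns first...
        for j in range(num_linhas):  # ...then rows within each column
            matriz[j][i] = next(it)
    return matriz

m = cria_matriz(2, 3, [1, 2, 3, 4, 5, 6])
print(m)  # → [[1, 3, 5], [2, 4, 6]]
```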