Upload 25 files
Browse files
- 2019_Book_AdvancedGuideToPython3Programm573.txt +0 -0
- Data types.txt +1 -0
- Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)191.txt +0 -0
- DeepLearningBook831.txt +0 -0
- Diagnostic and statistical manual of mental disorders _ DSM-5 ( PDFDrive.com )853.txt +0 -0
- Expert Python Programming - Third Edition590.txt +0 -0
- Features.txt +133 -0
- Hype params.txt +301 -0
- Introduction to Machine Learning with Python ( PDFDrive.com )-min405.txt +0 -0
- Ketkar,Moolayil Deep Learning with Python483.txt +0 -0
- Mastering Python for Data Science987.txt +0 -0
- Native Bayes.txt +123 -0
- Natural Language Processing with Python - O'Reilly2009634.txt +0 -0
- OASIcs.SLATE.2023.10755.txt +150 -0
- Programming Python Fourth Edition424.txt +0 -0
- Python Deep Learning Exploring deep learning techniques, neural network architectures and GANs with PyTorch, Keras and TensorFlow by Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Va (z-lib.org)362.txt +0 -0
- Python4AdvancedPython830.txt +0 -0
- Sick it .txt +393 -0
- UnleashingthePowerofAdvancedPythonProgramming607.txt +601 -0
- advance-core-python-programming-begin-your-journey-to-master-the-world-of-python-english-edition-9390684064-9789390684069_compress701.txt +0 -0
- book979.txt +0 -0
- by-Dipanjan-Sarkar-Text-Analytics-with-Python310.txt +0 -0
- mastering-machine-learning-with-python-in-six-steps602.txt +0 -0
- phI2uN2OS67XXcWCTBLosOyxvGdtaQ39sMlTbJe6289.txt +0 -0
- practical-machine-learning-python-problem-solvers898.txt +0 -0
2019_Book_AdvancedGuideToPython3Programm573.txt
ADDED
The diff for this file is too large to render.
See raw diff
Data types.txt
ADDED
|
@@ -0,0 +1 @@
Data type    Description
bool_        Boolean (True or False) stored as a byte
int_         Default integer type (same as C long; normally either int64 or int32)
intc         Identical to C int (normally int32 or int64)
intp         Integer used for indexing (same as C ssize_t; normally either int32 or int64)
int8         Byte (-128 to 127)
int16        Integer (-32768 to 32767)
int32        Integer (-2147483648 to 2147483647)
int64        Integer (-9223372036854775808 to 9223372036854775807)
uint8        Unsigned integer (0 to 255)
uint16       Unsigned integer (0 to 65535)
uint32       Unsigned integer (0 to 4294967295)
uint64       Unsigned integer (0 to 18446744073709551615)
float_       Shorthand for float64.
float16      Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32      Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64      Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
complex_     Shorthand for complex128.
complex64    Complex number, represented by two 32-bit floats
complex128   Complex number, represented by two 64-bit floats
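As a quick usage illustration (a minimal sketch added here for orientation, assuming NumPy is installed; it is not part of the original notes), these names are passed as the dtype argument when constructing arrays:

import numpy as np

# 16-bit signed integers: 2 bytes per element
a = np.array([1, 2, 3], dtype=np.int16)
print(a.dtype, a.itemsize)

# convert to double precision with astype
b = a.astype(np.float64)
print(b.dtype)

# complex numbers stored as two 64-bit floats each
c = np.array([1 + 2j, 3 - 4j], dtype=np.complex128)
print(c.real, c.imag)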
Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org)191.txt
ADDED
The diff for this file is too large to render.
See raw diff
DeepLearningBook831.txt
ADDED
The diff for this file is too large to render.
See raw diff
Diagnostic and statistical manual of mental disorders _ DSM-5 ( PDFDrive.com )853.txt
ADDED
The diff for this file is too large to render.
See raw diff
Expert Python Programming - Third Edition590.txt
ADDED
The diff for this file is too large to render.
See raw diff
Features.txt
ADDED
|
@@ -0,0 +1,133 @@
| 1 |
+
The previous sections outline the fundamental ideas of machine learning, but all of the examples assume that you have numerical data in a tidy, [n_samples, n_features] format. In the real world, data rarely comes in such a form. With this in mind, one of the more important steps in using machine learning in practice is feature engineering: that is, taking whatever information you have about your problem and turning it into numbers that you can use to build your feature matrix.
|
| 2 |
+
|
| 3 |
+
In this section, we will cover a few common examples of feature engineering tasks: features for representing categorical data, features for representing text, and features for representing images. Additionally, we will discuss derived features for increasing model complexity and imputation of missing data. Often this process is known as vectorization, as it involves converting arbitrary data into well-behaved vectors.
|
| 4 |
+
|
| 5 |
+
Categorical Features
|
| 6 |
+
|
| 7 |
+
One common type of non-numerical data is categorical data. For example, imagine you are exploring some data on housing prices, and along with numerical features like "price" and "rooms", you also have "neighborhood" information. For example, your data might look something like this:
|
| 8 |
+
|
| 9 |
+
In [1]:
|
| 10 |
+
|
| 11 |
+
data = [ {'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'}, {'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'}, {'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'}, {'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'} ]
|
| 12 |
+
|
| 13 |
+
You might be tempted to encode this data with a straightforward numerical mapping:
|
| 14 |
+
|
| 15 |
+
In [2]:
|
| 16 |
+
|
| 17 |
+
{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};
|
| 18 |
+
|
| 19 |
+
It turns out that this is not generally a useful approach in Scikit-Learn: the package's models make the fundamental assumption that numerical features reflect algebraic quantities. Thus such a mapping would imply, for example, that Queen Anne < Fremont < Wallingford, or even that Wallingford - Queen Anne = Fremont, which (niche demographic jokes aside) does not make much sense.
|
| 20 |
+
|
| 21 |
+
In this case, one proven technique is to use one-hot encoding, which effectively creates extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively. When your data comes as a list of dictionaries, Scikit-Learn's DictVectorizer will do this for you:
|
| 22 |
+
|
| 23 |
+
In [3]:
|
| 24 |
+
|
| 25 |
+
from sklearn.feature_extraction import DictVectorizer vec = DictVectorizer(sparse=False, dtype=int) vec.fit_transform(data)
|
| 26 |
+
|
| 27 |
+
Out[3]:
|
| 28 |
+
|
| 29 |
+
array([[ 0, 1, 0, 850000, 4], [ 1, 0, 0, 700000, 3], [ 0, 0, 1, 650000, 3], [ 1, 0, 0, 600000, 2]], dtype=int64)
|
| 30 |
+
|
| 31 |
+
Notice that the 'neighborhood' column has been expanded into three separate columns, representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood. With these categorical features thus encoded, you can proceed as normal with fitting a Scikit-Learn model.
|
| 32 |
+
|
| 33 |
+
To see the meaning of each column, you can inspect the feature names:
|
| 34 |
+
|
| 35 |
+
In [4]:
|
| 36 |
+
|
| 37 |
+
vec.get_feature_names()
|
| 38 |
+
|
| 39 |
+
Out[4]:
|
| 40 |
+
|
| 41 |
+
['neighborhood=Fremont', 'neighborhood=Queen Anne', 'neighborhood=Wallingford', 'price', 'rooms']
|
| 42 |
+
|
| 43 |
+
There is one clear disadvantage of this approach: if your category has many possible values, this can greatly increase the size of your dataset. However, because the encoded data contains mostly zeros, a sparse output can be a very efficient solution:
|
| 44 |
+
|
| 45 |
+
In [5]:
|
| 46 |
+
|
| 47 |
+
vec = DictVectorizer(sparse=True, dtype=int) vec.fit_transform(data)
|
| 48 |
+
|
| 49 |
+
Out[5]:
|
| 50 |
+
|
| 51 |
+
<4x5 sparse matrix of type '<class 'numpy.int64'>' with 12 stored elements in Compressed Sparse Row format>
|
| 52 |
+
|
| 53 |
+
Many (though not yet all) of the Scikit-Learn estimators accept such sparse inputs when fitting and evaluating models. sklearn.preprocessing.OneHotEncoder and sklearn.feature_extraction.FeatureHasher are two additional tools that Scikit-Learn includes to support this type of encoding.
|
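As a rough companion to the DictVectorizer example above (a sketch added here, not part of the original text; constructor options vary across scikit-learn versions), the same one-hot expansion of just the neighborhood column can be produced with sklearn.preprocessing.OneHotEncoder:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

# pull the categorical column out of the housing dictionaries defined above
neighborhoods = np.array([d['neighborhood'] for d in data]).reshape(-1, 1)

enc = OneHotEncoder()                       # returns a sparse matrix by default
onehot = enc.fit_transform(neighborhoods)
print(onehot.toarray())
print(enc.get_feature_names_out())          # get_feature_names() in older versions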
| 54 |
+
|
| 55 |
+
Text Features
|
| 56 |
+
|
| 57 |
+
Another common need in feature engineering is to convert text to a set of representative numerical values. For example, most automatic mining of social media data relies on some form of encoding the text as numbers. One of the simplest methods of encoding data is by word counts: you take each snippet of text, count the occurrences of each word within it, and put the results in a table.
|
| 58 |
+
|
| 59 |
+
For example, consider the following set of three phrases:
|
| 60 |
+
|
| 61 |
+
In [6]:
|
| 62 |
+
|
| 63 |
+
sample = ['problem of evil', 'evil queen', 'horizon problem']
|
| 64 |
+
|
| 65 |
+
For a vectorization of this data based on word count, we could construct a column representing the word "problem," the word "evil," the word "horizon," and so on. While doing this by hand would be possible, the tedium can be avoided by using Scikit-Learn's CountVectorizer:
|
| 66 |
+
|
| 67 |
+
In [7]:
|
| 68 |
+
|
| 69 |
+
from sklearn.feature_extraction.text import CountVectorizer vec = CountVectorizer() X = vec.fit_transform(sample) X
|
| 70 |
+
|
| 71 |
+
Out[7]:
|
| 72 |
+
|
| 73 |
+
<3x5 sparse matrix of type '<class 'numpy.int64'>' with 7 stored elements in Compressed Sparse Row format>
|
| 74 |
+
|
| 75 |
+
The result is a sparse matrix recording the number of times each word appears; it is easier to inspect if we convert this to a DataFrame with labeled columns:
|
| 76 |
+
|
| 77 |
+
In [8]:
|
| 78 |
+
|
| 79 |
+
import pandas as pd pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
|
| 80 |
+
|
| 81 |
+
Out[8]:
|
| 82 |
+
|
| 83 |
+
   evil  horizon  of  problem  queen
0     1        0   1        1      0
1     1        0   0        0      1
2     0        1   0        1      0
|
| 84 |
+
|
| 85 |
+
There are some issues with this approach, however: the raw word counts lead to features that put too much weight on words that appear very frequently, and this can be sub-optimal in some classification algorithms. One approach to fix this is known as term frequency-inverse document frequency (TF–IDF), which weights the word counts by a measure of how often they appear in the documents. The syntax for computing these features is similar to the previous example:
|
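In its generic form (stated here for orientation and not part of the original text; concrete implementations, including Scikit-Learn's, differ in smoothing and normalization details), the weight of a term t in a document d is

tf-idf(t, d) = tf(t, d) * idf(t),    with    idf(t) = log(N / df(t))

where tf(t, d) is the count of t in d, N is the number of documents, and df(t) is the number of documents containing t, so that rare words receive higher weights than words appearing in nearly every document.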
| 86 |
+
|
| 87 |
+
In [9]:
|
| 88 |
+
|
| 89 |
+
from sklearn.feature_extraction.text import TfidfVectorizer vec = TfidfVectorizer() X = vec.fit_transform(sample) pd.DataFrame(X.toarray(), columns=vec.get_feature_names())
|
| 90 |
+
|
| 91 |
+
Out[9]:
|
| 92 |
+
|
| 93 |
+
       evil   horizon        of   problem     queen
0  0.517856  0.000000  0.680919  0.517856  0.000000
1  0.605349  0.000000  0.000000  0.000000  0.795961
2  0.000000  0.795961  0.000000  0.605349  0.000000
|
| 94 |
+
|
| 95 |
+
For an example of using TF-IDF in a classification problem, see In Depth: Naive Bayes Classification.
|
| 96 |
+
|
| 97 |
+
Image Features
|
| 98 |
+
|
| 99 |
+
Another common need is to suitably encode images for machine learning analysis. The simplest approach is what we used for the digits data in Introducing Scikit-Learn: simply using the pixel values themselves. But depending on the application, such approaches may not be optimal.
|
| 100 |
+
|
| 101 |
+
A comprehensive summary of feature extraction techniques for images is well beyond the scope of this section, but you can find excellent implementations of many of the standard approaches in the Scikit-Image project. For one example of using Scikit-Learn and Scikit-Image together, see Feature Engineering: Working with Images.
|
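For a concrete taste of what such a feature extractor looks like (an illustrative sketch assuming scikit-image is installed; it is not part of the original text), Histogram of Oriented Gradients (HOG) features can be computed in a few lines:

from skimage import color, data, feature

# convert a sample RGB image to grayscale and extract HOG features
image = color.rgb2gray(data.astronaut())
hog_vector = feature.hog(image)
print(hog_vector.shape)   # one flat feature vector describing local gradient structure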
| 102 |
+
|
| 103 |
+
Derived Features
|
| 104 |
+
|
| 105 |
+
Another useful type of feature is one that is mathematically derived from some input features. We saw an example of this in Hyperparameters and Model Validation when we constructed polynomial features from our input data. We saw that we could convert a linear regression into a polynomial regression not by changing the model, but by transforming the input! This is sometimes known as basis function regression, and is explored further in In Depth: Linear Regression.
|
| 106 |
+
|
| 107 |
+
For example, this data clearly cannot be well described by a straight line:
|
| 108 |
+
|
| 109 |
+
In [10]:
|
| 110 |
+
|
| 111 |
+
%matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.array([1, 2, 3, 4, 5]) y = np.array([4, 2, 1, 3, 7]) plt.scatter(x, y);
|
| 112 |
+
|
| 113 |
+
Still, we can fit a line to the data using LinearRegression and get the optimal result:
|
| 114 |
+
|
| 115 |
+
In [11]:
|
| 116 |
+
|
| 117 |
+
from sklearn.linear_model import LinearRegression X = x[:, np.newaxis] model = LinearRegression().fit(X, y) yfit = model.predict(X) plt.scatter(x, y) plt.plot(x, yfit);
|
| 118 |
+
|
| 119 |
+
It's clear that we need a more sophisticated model to describe the relationship between x and y.
|
| 120 |
+
|
| 121 |
+
One approach to this is to transform the data, adding extra columns of features to drive more flexibility in the model. For example, we can add polynomial features to the data this way:
|
| 122 |
+
|
| 123 |
+
In [12]:
|
| 124 |
+
|
| 125 |
+
from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures(degree=3, include_bias=False) X2 = poly.fit_transform(X) print(X2)
|
| 126 |
+
|
| 127 |
+
[[ 1. 1. 1.] [ 2. 4. 8.] [ 3. 9. 27.] [ 4. 16. 64.] [ 5. 25. 125.]]
|
| 128 |
+
|
| 129 |
+
The derived feature matrix has one column representing x, a second column representing x^2, and a third column representing x^3. Computing a linear regression on this expanded input gives a much closer fit to our data:
|
| 130 |
+
|
| 131 |
+
In [13]:
|
| 132 |
+
|
| 133 |
+
model = LinearRegression().fit(X2, y) yfit = model.predict(X2) plt.scatter(x, y) plt.plot(x, yfit);
|
Hype params.txt
ADDED
|
@@ -0,0 +1,301 @@
| 1 |
+
In the previous section, we saw the basic recipe for applying a supervised machine learning model:
|
| 2 |
+
|
| 3 |
+
• Choose a class of model
|
| 4 |
+
|
| 5 |
+
• Choose model hyperparameters
|
| 6 |
+
|
| 7 |
+
• Fit the model to the training data
|
| 8 |
+
|
| 9 |
+
• Use the model to predict labels for new data
|
| 10 |
+
|
| 11 |
+
The first two pieces of this—the choice of model and choice of hyperparameters—are perhaps the most important part of using these tools and techniques effectively. In order to make an informed choice, we need a way to validate that our model and our hyperparameters are a good fit to the data. While this may sound simple, there are some pitfalls that you must avoid to do this effectively.
|
| 12 |
+
|
| 13 |
+
Thinking about Model Validation
|
| 14 |
+
|
| 15 |
+
In principle, model validation is very simple: after choosing a model and its hyperparameters, we can estimate how effective it is by applying it to some of the training data and comparing the prediction to the known value.
|
| 16 |
+
|
| 17 |
+
The following sections first show a naive approach to model validation and why it fails, before exploring the use of holdout sets and cross-validation for more robust model evaluation.
|
| 18 |
+
|
| 19 |
+
Model validation the wrong way
|
| 20 |
+
|
| 21 |
+
Let's demonstrate the naive approach to validation using the Iris data, which we saw in the previous section. We will start by loading the data:
|
| 22 |
+
|
| 23 |
+
In [1]:
|
| 24 |
+
|
| 25 |
+
from sklearn.datasets import load_iris iris = load_iris() X = iris.data y = iris.target
|
| 26 |
+
|
| 27 |
+
Next we choose a model and hyperparameters. Here we'll use a k-neighbors classifier with n_neighbors=1. This is a very simple and intuitive model that says "the label of an unknown point is the same as the label of its closest training point:"
|
| 28 |
+
|
| 29 |
+
In [2]:
|
| 30 |
+
|
| 31 |
+
from sklearn.neighbors import KNeighborsClassifier model = KNeighborsClassifier(n_neighbors=1)
|
| 32 |
+
|
| 33 |
+
Then we train the model, and use it to predict labels for data we already know:
|
| 34 |
+
|
| 35 |
+
In [3]:
|
| 36 |
+
|
| 37 |
+
model.fit(X, y) y_model = model.predict(X)
|
| 38 |
+
|
| 39 |
+
Finally, we compute the fraction of correctly labeled points:
|
| 40 |
+
|
| 41 |
+
In [4]:
|
| 42 |
+
|
| 43 |
+
from sklearn.metrics import accuracy_score accuracy_score(y, y_model)
|
| 44 |
+
|
| 45 |
+
Out[4]:
|
| 46 |
+
|
| 47 |
+
1.0
|
| 48 |
+
|
| 49 |
+
We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model! But is this truly measuring the expected accuracy? Have we really come upon a model that we expect to be correct 100% of the time?
|
| 50 |
+
|
| 51 |
+
As you may have gathered, the answer is no. In fact, this approach contains a fundamental flaw: it trains and evaluates the model on the same data. Furthermore, the nearest neighbor model is an instance-based estimator that simply stores the training data, and predicts labels by comparing new data to these stored points: except in contrived cases, it will get 100% accuracy every time!
|
| 52 |
+
|
| 53 |
+
Model validation the right way: Holdout sets
|
| 54 |
+
|
| 55 |
+
So what can be done? A better sense of a model's performance can be found using what's known as a holdout set: that is, we hold back some subset of the data from the training of the model, and then use this holdout set to check the model performance. This splitting can be done using the train_test_split utility in Scikit-Learn:
|
| 56 |
+
|
| 57 |
+
In [5]:
|
| 58 |
+
|
| 59 |
+
from sklearn.cross_validation import train_test_split # split the data with 50% in each set X1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5) # fit the model on one set of data model.fit(X1, y1) # evaluate the model on the second set of data y2_model = model.predict(X2) accuracy_score(y2, y2_model)
|
| 60 |
+
|
| 61 |
+
Out[5]:
|
| 62 |
+
|
| 63 |
+
0.90666666666666662
|
| 64 |
+
|
| 65 |
+
We see here a more reasonable result: the nearest-neighbor classifier is about 90% accurate on this hold-out set. The hold-out set is similar to unknown data, because the model has not "seen" it before.
|
| 66 |
+
|
| 67 |
+
Model validation via cross-validation
|
| 68 |
+
|
| 69 |
+
One disadvantage of using a holdout set for model validation is that we have lost a portion of our data to the model training. In the above case, half the dataset does not contribute to the training of the model! This is not optimal, and can cause problems – especially if the initial set of training data is small.
|
| 70 |
+
|
| 71 |
+
One way to address this is to use cross-validation; that is, to do a sequence of fits where each subset of the data is used both as a training set and as a validation set. Visually, it might look something like this:
|
| 72 |
+
|
| 73 |
+
figure source in Appendix
|
| 74 |
+
|
| 75 |
+
Here we do two validation trials, alternately using each half of the data as a holdout set. Using the split data from before, we could implement it like this:
|
| 76 |
+
|
| 77 |
+
In [6]:
|
| 78 |
+
|
| 79 |
+
y2_model = model.fit(X1, y1).predict(X2) y1_model = model.fit(X2, y2).predict(X1) accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
|
| 80 |
+
|
| 81 |
+
Out[6]:
|
| 82 |
+
|
| 83 |
+
(0.95999999999999996, 0.90666666666666662)
|
| 84 |
+
|
| 85 |
+
What comes out are two accuracy scores, which we could combine (by, say, taking the mean) to get a better measure of the global model performance. This particular form of cross-validation is a two-fold cross-validation—that is, one in which we have split the data into two sets and used each in turn as a validation set.
|
| 86 |
+
|
| 87 |
+
We could expand on this idea to use even more trials, and more folds in the data—for example, here is a visual depiction of five-fold cross-validation:
|
| 88 |
+
|
| 89 |
+
• figure source in Appendix
|
| 90 |
+
|
| 91 |
+
Here we split the data into five groups, and use each of them in turn to evaluate the model fit on the other 4/5 of the data. This would be rather tedious to do by hand, and so we can use Scikit-Learn's cross_val_score convenience routine to do it succinctly:
|
| 92 |
+
|
| 93 |
+
In [7]:
|
| 94 |
+
|
| 95 |
+
from sklearn.cross_validation import cross_val_score cross_val_score(model, X, y, cv=5)
|
| 96 |
+
|
| 97 |
+
Out[7]:
|
| 98 |
+
|
| 99 |
+
array([ 0.96666667, 0.96666667, 0.93333333, 0.93333333, 1. ])
|
| 100 |
+
|
| 101 |
+
Repeating the validation across different subsets of the data gives us an even better idea of the performance of the algorithm.
|
| 102 |
+
|
| 103 |
+
Scikit-Learn implements a number of useful cross-validation schemes that are useful in particular situations; these are implemented via iterators in the cross_validation module. For example, we might wish to go to the extreme case in which our number of folds is equal to the number of data points: that is, we train on all points but one in each trial. This type of cross-validation is known as leave-one-out cross validation, and can be used as follows:
|
| 104 |
+
|
| 105 |
+
In [8]:
|
| 106 |
+
|
| 107 |
+
from sklearn.cross_validation import LeaveOneOut scores = cross_val_score(model, X, y, cv=LeaveOneOut(len(X))) scores
|
| 108 |
+
|
| 109 |
+
Out[8]:
|
| 110 |
+
|
| 111 |
+
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
|
| 112 |
+
|
| 113 |
+
Because we have 150 samples, the leave-one-out cross-validation yields scores for 150 trials, and each score indicates either a successful (1.0) or an unsuccessful (0.0) prediction. Taking the mean of these gives an estimate of the accuracy:
|
| 114 |
+
|
| 115 |
+
In [9]:
|
| 116 |
+
|
| 117 |
+
scores.mean()
|
| 118 |
+
|
| 119 |
+
Out[9]:
|
| 120 |
+
|
| 121 |
+
0.95999999999999996
|
| 122 |
+
|
| 123 |
+
Other cross-validation schemes can be used similarly. For a description of what is available in Scikit-Learn, use IPython to explore the sklearn.cross_validation submodule, or take a look at Scikit-Learn's online cross-validation documentation.
|
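A version note (an approximate sketch, not part of the original text): in scikit-learn 0.20 and later, the cross_validation, learning_curve, and grid_search modules used throughout this notebook were consolidated into sklearn.model_selection, so the imports become roughly:

# modern equivalents of the imports used in this notebook (scikit-learn >= 0.20)
from sklearn.model_selection import (train_test_split, cross_val_score,
                                     LeaveOneOut, validation_curve,
                                     learning_curve, GridSearchCV)

# LeaveOneOut no longer takes the number of samples as an argument:
# scores = cross_val_score(model, X, y, cv=LeaveOneOut())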
| 124 |
+
|
| 125 |
+
Selecting the Best Model
|
| 126 |
+
|
| 127 |
+
Now that we've seen the basics of validation and cross-validation, we will go into a little more depth regarding model selection and selection of hyperparameters. These issues are some of the most important aspects of the practice of machine learning, and I find that this information is often glossed over in introductory machine learning tutorials.
|
| 128 |
+
|
| 129 |
+
Of core importance is the following question: if our estimator is underperforming, how should we move forward? There are several possible answers:
|
| 130 |
+
|
| 131 |
+
• Use a more complicated/more flexible model
|
| 132 |
+
|
| 133 |
+
• Use a less complicated/less flexible model
|
| 134 |
+
|
| 135 |
+
• Gather more training samples
|
| 136 |
+
|
| 137 |
+
• Gather more data to add features to each sample
|
| 138 |
+
|
| 139 |
+
The answer to this question is often counter-intuitive. In particular, sometimes using a more complicated model will give worse results, and adding more training samples may not improve your results! The ability to determine what steps will improve your model is what separates the successful machine learning practitioners from the unsuccessful.
|
| 140 |
+
|
| 141 |
+
The Bias-variance trade-off
|
| 142 |
+
|
| 143 |
+
Fundamentally, the question of "the best model" is about finding a sweet spot in the tradeoff between bias and variance. Consider the following figure, which presents two regression fits to the same dataset:
|
| 144 |
+
|
| 145 |
+
• figure source in Appendix
|
| 146 |
+
|
| 147 |
+
It is clear that neither of these models is a particularly good fit to the data, but they fail in different ways.
|
| 148 |
+
|
| 149 |
+
The model on the left attempts to find a straight-line fit through the data. Because the data are intrinsically more complicated than a straight line, the straight-line model will never be able to describe this dataset well. Such a model is said to underfit the data: that is, it does not have enough model flexibility to suitably account for all the features in the data; another way of saying this is that the model has high bias.
|
| 150 |
+
|
| 151 |
+
The model on the right attempts to fit a high-order polynomial through the data. Here the model fit has enough flexibility to nearly perfectly account for the fine features in the data, but even though it very accurately describes the training data, its precise form seems to be more reflective of the particular noise properties of the data rather than the intrinsic properties of whatever process generated that data. Such a model is said to overfit the data: that is, it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution; another way of saying this is that the model has high variance.
|
| 152 |
+
|
| 153 |
+
To look at this in another light, consider what happens if we use these two models to predict the y-value for some new data. In the following diagrams, the red/lighter points indicate data that is omitted from the training set:
|
| 154 |
+
|
| 155 |
+
• figure source in Appendix
|
| 156 |
+
|
| 157 |
+
The score here is the R^2 score, or coefficient of determination, which measures how well a model performs relative to a simple mean of the target values. R^2 = 1 indicates a perfect match, R^2 = 0 indicates the model does no better than simply taking the mean of the data, and negative values mean even worse models. From the scores associated with these two models, we can make an observation that holds more generally:
|
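For concreteness (an invented numerical illustration, not from the original text), the coefficient of determination can be computed directly from its definition or with sklearn.metrics.r2_score:

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 2.5, 4.0, 5.5])
y_pred = np.array([2.8, 2.9, 4.2, 5.0])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print(1 - ss_res / ss_tot)
print(r2_score(y_true, y_pred))   # same value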
| 158 |
+
|
| 159 |
+
• For high-bias models, the performance of the model on the validation set is similar to the performance on the training set.
|
| 160 |
+
|
| 161 |
+
• For high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.
|
| 162 |
+
|
| 163 |
+
If we imagine that we have some ability to tune the model complexity, we would expect the training score and validation score to behave as illustrated in the following figure:
|
| 164 |
+
|
| 165 |
+
• figure source in Appendix
|
| 166 |
+
|
| 167 |
+
The diagram shown here is often called a validation curve, and we see the following essential features:
|
| 168 |
+
|
| 169 |
+
• The training score is everywhere higher than the validation score. This is generally the case: the model will be a better fit to data it has seen than to data it has not seen.
|
| 170 |
+
|
| 171 |
+
• For very low model complexity (a high-bias model), the training data is under-fit, which means that the model is a poor predictor both for the training data and for any previously unseen data.
|
| 172 |
+
|
| 173 |
+
• For very high model complexity (a high-variance model), the training data is over-fit, which means that the model predicts the training data very well, but fails for any previously unseen data.
|
| 174 |
+
|
| 175 |
+
• For some intermediate value, the validation curve has a maximum. This level of complexity indicates a suitable trade-off between bias and variance.
|
| 176 |
+
|
| 177 |
+
The means of tuning the model complexity varies from model to model; when we discuss individual models in depth in later sections, we will see how each model allows for such tuning.
|
| 178 |
+
|
| 179 |
+
Validation curves in Scikit-Learn
|
| 180 |
+
|
| 181 |
+
Let's look at an example of using cross-validation to compute the validation curve for a class of models. Here we will use a polynomial regression model: this is a generalized linear model in which the degree of the polynomial is a tunable parameter. For example, a degree-1 polynomial fits a straight line to the data; for model parameters a and b:
|
| 182 |
+
|
| 183 |
+
y = ax + b
|
| 184 |
+
|
| 185 |
+
A degree-3 polynomial fits a cubic curve to the data; for model parameters a, b, c, and d:
|
| 186 |
+
|
| 187 |
+
y = ax^3 + bx^2 + cx + d
|
| 188 |
+
|
| 189 |
+
We can generalize this to any number of polynomial features. In Scikit-Learn, we can implement this with a simple linear regression combined with the polynomial preprocessor. We will use a pipeline to string these operations together (we will discuss polynomial features and pipelines more fully in Feature Engineering):
|
| 190 |
+
|
| 191 |
+
In [10]:
|
| 192 |
+
|
| 193 |
+
from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.pipeline import make_pipeline def PolynomialRegression(degree=2, **kwargs): return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))
|
| 194 |
+
|
| 195 |
+
Now let's create some data to which we will fit our model:
|
| 196 |
+
|
| 197 |
+
In [11]:
|
| 198 |
+
|
| 199 |
+
import numpy as np def make_data(N, err=1.0, rseed=1): # randomly sample the data rng = np.random.RandomState(rseed) X = rng.rand(N, 1) ** 2 y = 10 - 1. / (X.ravel() + 0.1) if err > 0: y += err * rng.randn(N) return X, y X, y = make_data(40)
|
| 200 |
+
|
| 201 |
+
We can now visualize our data, along with polynomial fits of several degrees:
|
| 202 |
+
|
| 203 |
+
In [12]:
|
| 204 |
+
|
| 205 |
+
%matplotlib inline import matplotlib.pyplot as plt import seaborn; seaborn.set() # plot formatting X_test = np.linspace(-0.1, 1.1, 500)[:, None] plt.scatter(X.ravel(), y, color='black') axis = plt.axis() for degree in [1, 3, 5]: y_test = PolynomialRegression(degree).fit(X, y).predict(X_test) plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree)) plt.xlim(-0.1, 1.0) plt.ylim(-2, 12) plt.legend(loc='best');
|
| 206 |
+
|
| 207 |
+
The knob controlling model complexity in this case is the degree of the polynomial, which can be any non-negative integer. A useful question to answer is this: what degree of polynomial provides a suitable trade-off between bias (under-fitting) and variance (over-fitting)?
|
| 208 |
+
|
| 209 |
+
We can make progress in this by visualizing the validation curve for this particular data and model; this can be done straightforwardly using the validation_curve convenience routine provided by Scikit-Learn. Given a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range:
|
| 210 |
+
|
| 211 |
+
In [13]:
|
| 212 |
+
|
| 213 |
+
from sklearn.learning_curve import validation_curve degree = np.arange(0, 21) train_score, val_score = validation_curve(PolynomialRegression(), X, y, 'polynomialfeatures__degree', degree, cv=7) plt.plot(degree, np.median(train_score, 1), color='blue', label='training score') plt.plot(degree, np.median(val_score, 1), color='red', label='validation score') plt.legend(loc='best') plt.ylim(0, 1) plt.xlabel('degree') plt.ylabel('score');
|
| 214 |
+
|
| 215 |
+
This shows precisely the qualitative behavior we expect: the training score is everywhere higher than the validation score; the training score is monotonically improving with increased model complexity; and the validation score reaches a maximum before dropping off as the model becomes over-fit.
|
| 216 |
+
|
| 217 |
+
From the validation curve, we can read off that the optimal trade-off between bias and variance is found for a third-order polynomial; we can compute and display this fit over the original data as follows:
|
| 218 |
+
|
| 219 |
+
In [14]:
|
| 220 |
+
|
| 221 |
+
plt.scatter(X.ravel(), y) lim = plt.axis() y_test = PolynomialRegression(3).fit(X, y).predict(X_test) plt.plot(X_test.ravel(), y_test); plt.axis(lim);
|
| 222 |
+
|
| 223 |
+
Notice that finding this optimal model did not actually require us to compute the training score, but examining the relationship between the training score and validation score can give us useful insight into the performance of the model.
|
| 224 |
+
|
| 225 |
+
Learning Curves
|
| 226 |
+
|
| 227 |
+
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data. For example, let's generate a new dataset with a factor of five more points:
|
| 228 |
+
|
| 229 |
+
In [15]:
|
| 230 |
+
|
| 231 |
+
X2, y2 = make_data(200) plt.scatter(X2.ravel(), y2);
|
| 232 |
+
|
| 233 |
+
We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well:
|
| 234 |
+
|
| 235 |
+
In [16]:
|
| 236 |
+
|
| 237 |
+
degree = np.arange(21) train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2, 'polynomialfeatures__degree', degree, cv=7) plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score') plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score') plt.plot(degree, np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed') plt.plot(degree, np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed') plt.legend(loc='lower center') plt.ylim(0, 1) plt.xlabel('degree') plt.ylabel('score');
|
| 238 |
+
|
| 239 |
+
The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset. It is clear from the validation curve that the larger dataset can support a much more complicated model: the peak here is probably around a degree of 6, but even a degree-20 model is not seriously over-fitting the data—the validation and training scores remain very close.
|
| 240 |
+
|
| 241 |
+
Thus we see that the behavior of the validation curve has not one but two important inputs: the model complexity and the number of training points. It is often useful to explore the behavior of the model as a function of the number of training points, which we can do by using increasingly larger subsets of the data to fit our model. A plot of the training/validation score with respect to the size of the training set is known as a learning curve.
|
| 242 |
+
|
| 243 |
+
The general behavior we would expect from a learning curve is this:
|
| 244 |
+
|
| 245 |
+
• A model of a given complexity will overfit a small dataset: this means the training score will be relatively high, while the validation score will be relatively low.
|
| 246 |
+
|
| 247 |
+
• A model of a given complexity will underfit a large dataset: this means that the training score will decrease, but the validation score will increase.
|
| 248 |
+
|
| 249 |
+
• A model will never, except by chance, give a better score to the validation set than the training set: this means the curves should keep getting closer together but never cross.
|
| 250 |
+
|
| 251 |
+
With these features in mind, we would expect a learning curve to look qualitatively like that shown in the following figure:
|
| 252 |
+
|
| 253 |
+
• figure source in Appendix
|
| 254 |
+
|
| 255 |
+
The notable feature of the learning curve is the convergence to a particular score as the number of training samples grows. In particular, once you have enough points that a particular model has converged, adding more training data will not help you! The only way to increase model performance in this case is to use another (often more complex) model.
|
| 256 |
+
|
| 257 |
+
Learning curves in Scikit-Learn
|
| 258 |
+
|
| 259 |
+
Scikit-Learn offers a convenient utility for computing such learning curves from your models; here we will compute a learning curve for our original dataset with a second-order polynomial model and a ninth-order polynomial:
|
| 260 |
+
|
| 261 |
+
In [17]:
|
| 262 |
+
|
| 263 |
+
from sklearn.learning_curve import learning_curve fig, ax = plt.subplots(1, 2, figsize=(16, 6)) fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1) for i, degree in enumerate([2, 9]): N, train_lc, val_lc = learning_curve(PolynomialRegression(degree), X, y, cv=7, train_sizes=np.linspace(0.3, 1, 25)) ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score') ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score') ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1], color='gray', linestyle='dashed') ax[i].set_ylim(0, 1) ax[i].set_xlim(N[0], N[-1]) ax[i].set_xlabel('training size') ax[i].set_ylabel('score') ax[i].set_title('degree = {0}'.format(degree), size=14) ax[i].legend(loc='best')
|
| 264 |
+
|
| 265 |
+
This is a valuable diagnostic, because it gives us a visual depiction of how our model responds to increasing training data. In particular, when your learning curve has already converged (i.e., when the training and validation curves are already close to each other) adding more training data will not significantly improve the fit! This situation is seen in the left panel, with the learning curve for the degree-2 model.
|
| 266 |
+
|
| 267 |
+
The only way to increase the converged score is to use a different (usually more complicated) model. We see this in the right panel: by moving to a much more complicated model, we increase the score of convergence (indicated by the dashed line), but at the expense of higher model variance (indicated by the difference between the training and validation scores). If we were to add even more data points, the learning curve for the more complicated model would eventually converge.
|
| 268 |
+
|
| 269 |
+
Plotting a learning curve for your particular choice of model and dataset can help you to make this type of decision about how to move forward in improving your analysis.
|
| 270 |
+
|
| 271 |
+
Validation in Practice: Grid Search
|
| 272 |
+
|
| 273 |
+
The preceding discussion is meant to give you some intuition into the trade-off between bias and variance, and its dependence on model complexity and training set size. In practice, models generally have more than one knob to turn, and thus plots of validation and learning curves change from lines to multi-dimensional surfaces. In these cases, such visualizations are difficult and we would rather simply find the particular model that maximizes the validation score.
|
| 274 |
+
|
| 275 |
+
Scikit-Learn provides automated tools to do this in the grid search module. Here is an example of using grid search to find the optimal polynomial model. We will explore a three-dimensional grid of model features; namely the polynomial degree, the flag telling us whether to fit the intercept, and the flag telling us whether to normalize the problem. This can be set up using Scikit-Learn's GridSearchCV meta-estimator:
|
| 276 |
+
|
| 277 |
+
In [18]:
|
| 278 |
+
|
| 279 |
+
from sklearn.grid_search import GridSearchCV param_grid = {'polynomialfeatures__degree': np.arange(21), 'linearregression__fit_intercept': [True, False], 'linearregression__normalize': [True, False]} grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
|
| 280 |
+
|
| 281 |
+
Notice that like a normal estimator, this has not yet been applied to any data. Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way:
|
| 282 |
+
|
| 283 |
+
In [19]:
|
| 284 |
+
|
| 285 |
+
grid.fit(X, y);
|
| 286 |
+
|
| 287 |
+
Now that this is fit, we can ask for the best parameters as follows:
|
| 288 |
+
|
| 289 |
+
In [20]:
|
| 290 |
+
|
| 291 |
+
grid.best_params_
|
| 292 |
+
|
| 293 |
+
Out[20]:
|
| 294 |
+
|
| 295 |
+
{'linearregression__fit_intercept': False, 'linearregression__normalize': True, 'polynomialfeatures__degree': 4}
|
| 296 |
+
|
| 297 |
+
Finally, if we wish, we can use the best model and show the fit to our data using code from before:
|
| 298 |
+
|
| 299 |
+
In [21]:
|
| 300 |
+
|
| 301 |
+
model = grid.best_estimator_ plt.scatter(X.ravel(), y) lim = plt.axis() y_test = model.fit(X, y).predict(X_test) plt.plot(X_test.ravel(), y_test, hold=True); plt.axis(lim);
|
Introduction to Machine Learning with Python ( PDFDrive.com )-min405.txt
ADDED
The diff for this file is too large to render.
See raw diff
Ketkar,Moolayil Deep Learning with Python483.txt
ADDED
The diff for this file is too large to render.
See raw diff
Mastering Python for Data Science987.txt
ADDED
The diff for this file is too large to render.
See raw diff
Native Bayes.txt
ADDED
|
@@ -0,0 +1,123 @@
| 1 |
+
The previous four sections have given a general overview of the concepts of machine learning. In this section and the ones that follow, we will be taking a closer look at several specific algorithms for supervised and unsupervised learning, starting here with naive Bayes classification.
|
| 2 |
+
|
| 3 |
+
Naive Bayes models are a group of extremely fast and simple classification algorithms that are often suitable for very high-dimensional datasets. Because they are so fast and have so few tunable parameters, they end up being very useful as a quick-and-dirty baseline for a classification problem. This section will focus on an intuitive explanation of how naive Bayes classifiers work, followed by a couple examples of them in action on some datasets.
|
| 4 |
+
|
| 5 |
+
Bayesian Classification
|
| 6 |
+
|
| 7 |
+
Naive Bayes classifiers are built on Bayesian classification methods. These rely on Bayes's theorem, which is an equation describing the relationship of conditional probabilities of statistical quantities. In Bayesian classification, we're interested in finding the probability of a label given some observed features, which we can write as P(L | features). Bayes's theorem tells us how to express this in terms of quantities we can compute more directly:
|
| 8 |
+
|
| 9 |
+
P(L | features) = P(features | L) P(L) / P(features)
|
| 10 |
+
|
| 11 |
+
If we are trying to decide between two labels—let's call them L1 and L2—then one way to make this decision is to compute the ratio of the posterior probabilities for each label:
|
| 12 |
+
|
| 13 |
+
P(L1 | features) / P(L2 | features) = [ P(features | L1) / P(features | L2) ] * [ P(L1) / P(L2) ]
|
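For instance (an invented illustration added for concreteness), if P(features | L1) = 0.30, P(features | L2) = 0.05, and the two priors are equal, the ratio evaluates to 6: label L1 is six times more probable than L2 for that point.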
| 14 |
+
|
| 15 |
+
All we need now is some model by which we can compute P(features | Li) for each label. Such a model is called a generative model because it specifies the hypothetical random process that generates the data. Specifying this generative model for each label is the main piece of the training of such a Bayesian classifier. The general version of such a training step is a very difficult task, but we can make it simpler through the use of some simplifying assumptions about the form of this model.
|
| 16 |
+
|
| 17 |
+
This is where the "naive" in "naive Bayes" comes in: if we make very naive assumptions about the generative model for each label, we can find a rough approximation of the generative model for each class, and then proceed with the Bayesian classification. Different types of naive Bayes classifiers rest on different naive assumptions about the data, and we will examine a few of these in the following sections.
|
| 18 |
+
|
| 19 |
+
We begin with the standard imports:
|
| 20 |
+
|
| 21 |
+
In [1]:
|
| 22 |
+
|
| 23 |
+
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set()
|
| 24 |
+
|
| 25 |
+
Gaussian Naive Bayes
|
| 26 |
+
|
| 27 |
+
Perhaps the easiest naive Bayes classifier to understand is Gaussian naive Bayes. In this classifier, the assumption is that data from each label is drawn from a simple Gaussian distribution. Imagine that you have the following data:
|
| 28 |
+
|
| 29 |
+
In [2]:
|
| 30 |
+
|
| 31 |
+
from sklearn.datasets import make_blobs X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');
|
| 32 |
+
|
| 33 |
+
One extremely fast way to create a simple model is to assume that the data is described by a Gaussian distribution with no covariance between dimensions. This model can be fit by simply finding the mean and standard deviation of the points within each label, which is all you need to define such a distribution. The result of this naive Gaussian assumption is shown in the following figure:
|
| 34 |
+
|
| 35 |
+
figure source in Appendix
|
| 36 |
+
|
| 37 |
+
The ellipses here represent the Gaussian generative model for each label, with larger probability toward the center of the ellipses. With this generative model in place for each class, we have a simple recipe to compute the likelihood P(features | L1) for any data point, and thus we can quickly compute the posterior ratio and determine which label is the most probable for a given point.
|
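Before turning to the built-in estimator, here is a from-scratch sketch of that recipe (illustrative only and not part of the original text; Scikit-Learn's GaussianNB below is the practical tool). It stores a per-label mean, standard deviation, and prior, and predicts by maximizing the log-posterior under independent Gaussians:

import numpy as np

def fit_gaussian_nb(X, y):
    # store (mean, std, prior) for each class label
    params = {}
    for label in np.unique(y):
        Xk = X[y == label]
        params[label] = (Xk.mean(axis=0), Xk.std(axis=0), len(Xk) / len(X))
    return params

def predict_gaussian_nb(params, X):
    # log-posterior (up to a constant) under independent per-feature Gaussians
    log_post = []
    for mu, sigma, prior in params.values():
        log_like = -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2)
                                 + (X - mu) ** 2 / sigma ** 2, axis=1)
        log_post.append(log_like + np.log(prior))
    labels = np.array(list(params.keys()))
    return labels[np.argmax(log_post, axis=0)]

print(predict_gaussian_nb(fit_gaussian_nb(X, y), X[:5]))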
| 38 |
+
|
| 39 |
+
This procedure is implemented in Scikit-Learn's sklearn.naive_bayes.GaussianNB estimator:
|
| 40 |
+
|
| 41 |
+
In [3]:
|
| 42 |
+
|
| 43 |
+
from sklearn.naive_bayes import GaussianNB model = GaussianNB() model.fit(X, y);
|
| 44 |
+
|
| 45 |
+
Now let's generate some new data and predict the label:
|
| 46 |
+
|
| 47 |
+
In [4]:
|
| 48 |
+
|
| 49 |
+
rng = np.random.RandomState(0) Xnew = [-6, -14] + [14, 18] * rng.rand(2000, 2) ynew = model.predict(Xnew)
|
| 50 |
+
|
| 51 |
+
Now we can plot this new data to get an idea of where the decision boundary is:
|
| 52 |
+
|
| 53 |
+
In [5]:
|
| 54 |
+
|
| 55 |
+
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu') lim = plt.axis() plt.scatter(Xnew[:, 0], Xnew[:, 1], c=ynew, s=20, cmap='RdBu', alpha=0.1) plt.axis(lim);
|
| 56 |
+
|
| 57 |
+
We see a slightly curved boundary in the classifications—in general, the boundary in Gaussian naive Bayes is quadratic.
|
| 58 |
+
|
| 59 |
+
A nice piece of this Bayesian formalism is that it naturally allows for probabilistic classification, which we can compute using the predict_proba method:
|
| 60 |
+
|
| 61 |
+
In [6]:
|
| 62 |
+
|
| 63 |
+
yprob = model.predict_proba(Xnew) yprob[-8:].round(2)
|
| 64 |
+
|
| 65 |
+
Out[6]:
|
| 66 |
+
|
| 67 |
+
array([[ 0.89, 0.11], [ 1. , 0. ], [ 1. , 0. ], [ 1. , 0. ], [ 1. , 0. ], [ 1. , 0. ], [ 0. , 1. ], [ 0.15, 0.85]])
|
| 68 |
+
|
| 69 |
+
The columns give the posterior probabilities of the first and second label, respectively. If you are looking for estimates of uncertainty in your classification, Bayesian methods like this can be a useful approach.
|
| 70 |
+
|
| 71 |
+
Of course, the final classification will only be as good as the model assumptions that lead to it, which is why Gaussian naive Bayes often does not produce very good results. Still, in many cases—especially as the number of features becomes large—this assumption is not detrimental enough to prevent Gaussian naive Bayes from being a useful method.
|
| 72 |
+
|
| 73 |
+
Multinomial Naive Bayes
|
| 74 |
+
|
| 75 |
+
The Gaussian assumption just described is by no means the only simple assumption that could be used to specify the generative distribution for each label. Another useful example is multinomial naive Bayes, where the features are assumed to be generated from a simple multinomial distribution. The multinomial distribution describes the probability of observing counts among a number of categories, and thus multinomial naive Bayes is most appropriate for features that represent counts or count rates.
|
| 76 |
+
|
| 77 |
+
The idea is precisely the same as before, except that instead of modeling the data distribution with the best-fit Gaussian, we model the data distribution with a best-fit multinomial distribution.
|
| 78 |
+
|
| 79 |
+
Example: Classifying Text
|
| 80 |
+
|
| 81 |
+
One place where multinomial naive Bayes is often used is in text classification, where the features are related to word counts or frequencies within the documents to be classified. We discussed the extraction of such features from text in Feature Engineering; here we will use the sparse word count features from the 20 Newsgroups corpus to show how we might classify these short documents into categories.
|
| 82 |
+
|
| 83 |
+
Let's download the data and take a look at the target names:
|
| 84 |
+
|
| 85 |
+
In [7]:
|
| 86 |
+
|
| 87 |
+
from sklearn.datasets import fetch_20newsgroups data = fetch_20newsgroups() data.target_names
|
| 88 |
+
|
| 89 |
+
Out[7]:
|
| 90 |
+
|
| 91 |
+
['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc']
|
| 92 |
+
|
| 93 |
+
For simplicity here, we will select just a few of these categories, and download the training and testing set:
|
| 94 |
+
|
| 95 |
+
In [8]:
|
| 96 |
+
|
| 97 |
+
categories = ['talk.religion.misc', 'soc.religion.christian', 'sci.space', 'comp.graphics'] train = fetch_20newsgroups(subset='train', categories=categories) test = fetch_20newsgroups(subset='test', categories=categories)
|
| 98 |
+
|
| 99 |
+
Here is a representative entry from the data:
|
| 100 |
+
|
| 101 |
+
In [9]:
|
| 102 |
+
|
| 103 |
+
print(train.data[5])
|
| 104 |
+
|
| 105 |
+
From: dmcgee@uluhe.soest.hawaii.edu (Don McGee) Subject: Federal Hearing Originator: dmcgee@uluhe Organization: School of Ocean and Earth Science and Technology Distribution: usa Lines: 10 Fact or rumor....? Madalyn Murray O'Hare an atheist who eliminated the use of the bible reading and prayer in public schools 15 years ago is now going to appear before the FCC with a petition to stop the reading of the Gospel on the airways of America. And she is also campaigning to remove Christmas programs, songs, etc from the public schools. If it is true then mail to Federal Communications Commission 1919 H Street Washington DC 20054 expressing your opposition to her request. Reference Petition number 2493.
|
| 106 |
+
|
| 107 |
+
In order to use this data for machine learning, we need to be able to convert the content of each string into a vector of numbers. For this we will use the TF-IDF vectorizer (discussed in Feature Engineering), and create a pipeline that attaches it to a multinomial naive Bayes classifier:
|
| 108 |
+
|
| 109 |
+
In [10]:
|
| 110 |
+
|
| 111 |
+
from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import make_pipeline model = make_pipeline(TfidfVectorizer(), MultinomialNB())
|
| 112 |
+
|
| 113 |
+
With this pipeline, we can apply the model to the training data, and predict labels for the test data:
|
| 114 |
+
|
| 115 |
+
In [11]:
|
| 116 |
+
|
| 117 |
+
model.fit(train.data, train.target) labels = model.predict(test.data)
|
| 118 |
+
|
| 119 |
+
Now that we have predicted the labels for the test data, we can evaluate them to learn about the performance of the estimator. For example, here is the confusion matrix between the true and predicted labels for the test data:
|
| 120 |
+
|
| 121 |
+
In [12]:
|
| 122 |
+
|
| 123 |
+
from sklearn.metrics import confusion_matrix mat = confusion_matrix(test.target, labels) sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False, xticklabels=train.target_names, yticklabels=train.target_names) plt.xlabel('true label') plt.ylabel('predicted label');
|
Natural Language Processing with Python - O'Reilly2009634.txt
ADDED
The diff for this file is too large to render.
See raw diff
OASIcs.SLATE.2023.10755.txt
ADDED
|
@@ -0,0 +1,150 @@
| 1 |
+
Large Language Models: Compilers for the 4th Generation of Programming Languages?
|
| 2 |
+
Francisco S. Marcondes
|
| 3 |
+
ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
|
| 4 |
+
José João Almeida
|
| 5 |
+
ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
|
| 6 |
+
Paulo Novais
|
| 7 |
+
ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
|
| 8 |
+
|
| 9 |
+
Abstract
|
| 10 |
+
This paper explores the possibility of large language models as a fourth generation programming language compiler. This is based on the idea that large language models are able to translate a natural language specification into a program written in a particular programming language. In other words, just as high-level languages provided an additional language abstraction to assembly code, large language models can provide an additional language abstraction to high-level languages. This interpretation allows large language models to be thought of through the lens of compiler theory, leading to insightful conclusions.
|
| 11 |
+
2012 ACM Subject Classification Computing methodologies → Artificial intelligence; Computing methodologies → Natural language processing; Software and its engineering → Compilers
|
| 12 |
+
Keywords and phrases programming language, compiler, large language model
|
| 13 |
+
Digital Object Identifier 10.4230/OASIcs.SLATE.2023.10
|
| 14 |
+
Category Short Paper
|
| 15 |
+
Funding This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.
|
| 16 |
+
|
| 17 |
+
1 Introduction
|
| 18 |
+
As the title suggests, this is a speculative paper discussing whether large language models could be considered a higher level of programming language in relation to current high-level languages. In short, assembly language (2nd generation) replaced punch-card programming (1st generation) by introducing mnemonics. These allowed larger and more complex programs to be created in less time. High level languages (3rd generation) in turn replaced assembly language by introducing structured English constraints.
|
| 19 |
+
The hypothesis explored in this paper is that large language models could be a 4th generation language, replacing high-level languages by allowing natural language specifications. The aim is then to discuss the strengths and weaknesses of large language models running as natural-language-to-high-level-language compilers. Other natural language processing tasks that would be associated with an interpreter are therefore beyond the scope of this paper. The question that arises is whether large language models can provide an additional level of abstraction for programming [10], similar to the abstraction that high-level languages provided for assembly language.
|
| 20 |
+
As a disclaimer, this paper uses ChatGPT as a basis for discussion, but does not consider ChatGPT to be a compiler. A 4th generation compiler, like a 3rd generation compiler, is expected to produce executable code as a result, not intermediate code as ChatGPT does. Note that there are fine-tuned solutions for programming, such as GitHub Copilot, but due to its current paywall, ChatGPT 3.5 is used in this paper. Although ChatGPT is used to support this discussion, a proper setup should be designed for proper use of the large
|
| 21 |
+
© Francisco S. Marcondes, José João Almeida, and Paulo Novais; licensed under Creative Commons License CC-BY 4.0
|
| 22 |
+
12th Symposium on Languages, Applications and Technologies (SLATE 2023).
|
| 23 |
+
Editors: Alberto Simões, Mario Marcelo Berón, and Filipe Portela; Article No. 10; pp. 10:1–10:8 OpenAccess Series in Informatics
|
| 24 |
+
Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany
|
| 25 |
+
|
| 26 |
+
|
| 27 |
+
language model as a compiler. In this setup, a programmer is not expected to interfere with the lower-level code any more than a programmer is expected to modify the assembly code generated by a high-level language compiler.
|
| 28 |
+
Furthermore, a single prompt is not expected to be sufficient to produce industrial-scale software, nor is a single specification sufficient to achieve the same end. A typical requirements analysis takes several months and results in thousands of mutable requirements whose complexity is managed interactively and incrementally throughout the project [14]. There is no reason to believe that this will change with the replacement of compiler technology. Therefore, industrial-scale software would require several mutable prompts in order to properly specify an industrial-scale software product.
|
| 29 |
+
It is also worth noting that this proposal is not related to no-code or low-code initiatives. In short, these initiatives aim to provide a different way of doing traditional programming, but without providing the higher-level language abstraction that large language models do. Although several papers address large language models for code synthesis, it was not possible to find any that discuss them from a compiling perspective [16]. It was also not possible to find papers referring to 4th or next generation compilers linked with large language models; most are linked with model-driven [12] or domain-specific [17] approaches.
|
| 30 |
+
|
| 31 |
+
2 Theoretical Foundations
|
| 32 |
+
Current large language models are based on Transformers (see figure 1). Perhaps the core element in Transformers is the attention mechanism [5]. In brief, given an input embedding, the attention mechanism reweights the embeddings for each token in the input to match the context of the input sentence. For example, let bank be a token in the input sentence whose sense is equidistant from river and money in the embedding model. Multiplying the embedding values of bank by the embeddings of the other words in the input sentence is expected to bias the bank token towards the appropriate sense.
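To make the reweighting idea concrete, the following is a minimal NumPy sketch of scaled dot-product attention (our illustration, not code from [13]); the toy token embeddings and dimensions are invented for the example.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # similarity of every query with every key, scaled by sqrt(d_k)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # softmax turns the scores into attention weights per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # each output embedding is a weighted mix of the value vectors,
    # e.g. the "bank" embedding is pulled towards "river" or "money"
    return weights @ V

tokens = np.random.rand(4, 8)   # 4 tokens with 8-dimensional embeddings
contextualised = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualised.shape)     # (4, 8)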
|
| 33 |
+
|
| 34 |
+
(a) Overall Architecture. (b) Multi-head attention. (c) Scaled dot-product attention.
|
| 35 |
+
Figure 1 Transformers cf. [13]. Note that (c) is in (b), which is in (a).
|
| 36 |
+
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
The attention mechanism is the underlying principle of prompt engineering. In a nutshell, prompt engineering is concerned with designing a prompt that, when queried by a large language model, returns the best possible answer. As a rule of thumb, a prompt cf. [9] consists of: a) instruction; b) context; c) input data; and d) output indicator. Prompt engineering is therefore less about acting out a natural language conversation, and more about describing specific instructions aimed at an output.
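As an illustration only (the wording below is ours, not an example taken from [9]), such a prompt can be assembled from its four parts as a simple template:

instruction = "Write a Python function that solves the task below."
context = "The function will be called from an automated grader; use only the standard library."
input_data = "cookies = {'s1': 3, 's2': 7, 's3': 5, 's4': 6, 's5': 1}"
output_indicator = "Return only the function definition, with no explanations."

# the four elements are concatenated into a single, explicit prompt
prompt = "\n".join([instruction, context, input_data, output_indicator])
print(prompt)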
|
| 40 |
+
This leads to the almost straightforward conclusion that large language models can be understood as natural language processors (see [1]). A compiler, or translator, is a type of language processor that converts sentences from a usually high-level source language to an often low-level target language. The compiled sentences are interpretable by the target machine, which behaves accordingly. Considering that Transformers were built with the goal of natural language translation in mind [13], a relationship between Transformers and compilers can be suggested, with the source language being a natural language and the target language being a high-level language.
|
| 41 |
+
|
| 42 |
+
3 Proof of Concept
|
| 43 |
+
For reference, consider the introductory programming class problem:
|
| 44 |
+
The soldiers of the queen of hearts have a problem: once again the queen has sent them to fetch cookies for tea. Five of the soldiers went to get the cookies and returned.
|
| 45 |
+
|
| 46 |
+
Since only one of the soldiers can enter the tea room with the cookies, they need to choose one of them. The problem is that the queen is greedy and has very little patience. Either they quickly figure out which soldier brought the most cookies or they will lose their heads. Your task is to write a program to find the answer.
|
| 47 |
+
(1)
|
| 48 |
+
|
| 49 |
+
The above problem has been prompted in ChatGPT as is. The result is shown in figure 2a. Recall that a prompt cf. [9] is composed of: a) instruction; b) context; c) input data; and
|
| 50 |
+
output indicator. These are the elements required by the model. ChatGPT does not always give the same output for the same prompt; this is due to the Q, K, V (query, key and value) weight matrices used by the attention mechanism (see figure 1). On a second run of this prompt in a different ChatGPT instance, the language model adopted the previously requested parameters and returned the source code shown in figure 2b. Therefore, in order to reduce variation, it is necessary to be as specific as possible when building the prompt.
|
| 51 |
+
Note that the code in figure 2b is well-structured Python source code, capable of running on a Python machine. An immediate assumption is that by providing a formal specification, the resulting code would be enhanced. Prompt (1) is then rewritten as prompt (2) using simple set notation and submitted to ChatGPT, resulting in the code shown in figure 3. Note that the code in figure 3 is not necessarily better or more readable than the code in figure 2b. Such an assumption is therefore not necessarily true in the domain of large language models.
|
| 52 |
+
Let S = {s1, ..., sn|n = 5} be a set of soldiers, C = {c1, ..., cn|n = 5} be the number of
|
| 53 |
+
|
| 54 |
+
cookies brought by each soldier, and f : S → C is a function that returns the number of cookies of each soldier. The soldier with the maximum number of cookies is given by max({f(s) | ∀s ∈ S}). Write a program to implement this algorithm.
|
| 55 |
+
(2)
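For reference, a minimal Python program consistent with specification (2) (our own sketch, not the ChatGPT output shown in figure 3) could be:

def soldier_with_most_cookies(cookies):
    # cookies maps each soldier in S to the number of cookies in C, i.e. f : S -> C
    return max(cookies, key=cookies.get)

cookies = {'s1': 3, 's2': 7, 's3': 5, 's4': 6, 's5': 1}
print(soldier_with_most_cookies(cookies))   # prints 's2'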
|
| 56 |
+
|
| 57 |
+
The formal assumption is probably based on the idea that since natural language is ambiguous, some mathematical notation is necessary. This is true insofar as it is not possible to go through a process of clarification by asking questions, which is not the case with ChatGPT, as shown in figure 2a. This does not mean that formal expressions should be avoided but, as suggested after figure 3, that it is necessary to understand how to use them in this context. It also suggests that (2) may not be the best way to provide a formal specification to ChatGPT.
|
| 58 |
+
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
|
| 62 |
+
|
| 63 |
+
|
| 64 |
+
First response.
|
| 65 |
+
Second response.
|
| 66 |
+
Figure 2 ChatGPT’s responses for the same prompt on different instances.
|
| 67 |
+
|
| 68 |
+
|
| 69 |
+
Figure 3 Source code produced by ChatGPT for the formal specification on prompt (2) with circumventing text removed.
|
| 70 |
+
|
| 71 |
+
|
| 72 |
+
4 Insights on a 4th Generation Compiler
|
| 73 |
+
At this point it is possible to suggest that ChatGPT (i.e. large language models) can be considered as a translation device. Figure 4 shows a bird’s eye view of a compiler as defined by Aho et al. in [1] and a translator as defined by Jurafsky et al. in [5]. The word “translator” is accepted as a synonym for “compiler”, so the former can be considered a deterministic translator and the latter a probabilistic one.
|
| 74 |
+
|
| 75 |
+
A compiler instance as presented in [1].
|
| 76 |
+
|
| 77 |
+
A translator instance as presented in [5].
|
| 78 |
+
Figure 4 Deterministic and probabilistic translator instances.
|
| 79 |
+
|
| 80 |
+
It can also be argued that the deterministic translator is primarily concerned with syntax (i.e. the structure of sentences based on the Chomsky hierarchy) and the probabilistic translator with semantics (i.e. the relationship between words based on the distributional hypothesis). Therefore, it is not a case of replacement, but of composing these two translation strategies. As a result, if a source is generated with syntactic errors, the generator would produce an improved source by using the error messages as feedback. Note that the use of examples is an important prompt feature due to the few-shot learning property of large language models [2]. This in turn can be manifested as unit tests [11]. In this sense, not only syntactic errors but also behavioural errors introduced by the generator can be automatically corrected. Note also that not all information may be provided in a prompt. There are two situations to consider. One is a prompt within a project, from which the additional information can be retrieved (e.g. elements of a class, UML blueprints, etc.). If the required information is ambiguous or missing, the compiler is expected to prompt the programmer, as shown in figure 2a, with each interaction improving either the context or the specification. Note that the correct definition of the prompt is the cornerstone of improved generation; a context with more information than necessary is just as harmful as a context with no information at all. A structure organised in this way is depicted in figure 5.
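A minimal sketch of such a generate/check/repair loop is shown below; generate, syntax_check and run_unit_tests are hypothetical placeholders, and only the control flow reflects the composition suggested above.

def compile_prompt(prompt, max_rounds=3):
    # ask the model for code, feed errors back as additional context, stop when checks pass
    feedback = ""
    for _ in range(max_rounds):
        source = generate(prompt + feedback)      # hypothetical large language model call
        errors = syntax_check(source)             # deterministic, syntax-oriented check
        if not errors:
            errors = run_unit_tests(source)       # behavioural check via example-based tests
        if not errors:
            return source                         # accepted translation
        feedback = "\nThe previous attempt failed with: " + errors
    raise RuntimeError("no valid program produced within the allowed rounds")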
|
| 81 |
+
It is worth noting that if the complexity of the specification becomes unmanageable, it can be split or simply deleted to start again. Splitting opens up the possibility of tackling increasingly large software. Regardless of the development process used (prescriptive or
|
| 82 |
+
|
| 83 |
+
|
| 84 |
+
|
| 85 |
+
|
| 86 |
+
|
| 87 |
+
|
| 88 |
+
|
| 89 |
+
Figure 5 Suggested structure for a 4th generation compiler based on large language models.
|
| 90 |
+
|
| 91 |
+
agile), the overall organization of the project elements ends up being arranged in a tree-like structure. The same structure can be applied to a series of prompts that make up the software, helping to establish the correct context for each prompt. Consider the following Use Case 2.0 [4] partial scenario:
|
| 92 |
+
Use-case: Registering to SLATE
|
| 93 |
+
Authentication Flow
|
| 94 |
+
The enrollee provides their e-mail; the SuD sends an e-mail with a confirmation link
|
| 95 |
+
The SuD receives a confirmation and sets the status to “authenticated enrolee”
|
| 96 |
+
[authenticated enrolee] Basic Flow for Author Registration
|
| 97 |
+
a. ...
|
| 98 |
+
For this discussion, consider step 1a. Note that this step is a self-contained specification and is sufficient to understand the desired behaviour. However, it is not a prompt, in the sense that much information is missing. As a reference, the 4+1 view model [7] suggests the existence of five views: 1) scenario, 2) logic, 3) development, 4) process and 5) physical. This step description only satisfies the scenario view. For example, on which server is this web service expected to run? In addition, a scenario is expected to satisfy the FURPS+ [3], and this step only satisfies the “F”. For example, what would the content of the confirmation email be? Therefore, prompting is not equivalent to requirements analysis; rather, like any development task, it derives from it.
|
| 99 |
+
Note that steps 1a and 1b form a slice (a coherent part of a use case that can be elaborated into a deployment [4]). Therefore, to produce a deliverable for this slice, a prompt must be written from these two steps. Considering the constraints presented, the 4+1 view model and FURPS+, it becomes clear that a chat-based structure alone is not sufficient to express a whole, industrial-scale software product (or even a slice of it). One possible structure to explore is that provided by literate programming; see [6].
|
| 100 |
+
In this sense, from a human perspective, it is not a matter of producing a single prompt, but of producing a structured document composed of several prompts addressing different concerns. Given such a document, the compiler would be expected to make two moves. One towards the refinement of each prompt, i.e. a chat-based interaction aimed at producing an improved prompt. Another towards the unfolding of additional prompts, i.e. the creation of additional slots for prompts to address specific problems. For large software, there could be several specification files, each for a slice. In this perspective, the compiler would act as a specification co-pilot. Note that the context window used by ChatGPT is about 2048 tokens, so managing the context of each prompt is also a feature to be considered.
|
| 101 |
+
Another support expected from the compiler is the generation of test cases. Following a behaviour-driven development rationale [11], this leads the programmer to consider several scenarios that improve the resulting program. From the perspective of this paper, this means that the programmer would either refine previous prompts or introduce new prompts.
|
| 102 |
+
A possible early literate programming paragraph for the authentication slice is presented in (3). Note that this is not a proper programming prompt yet; submitting it to ChatGPT retrieves several suggestions (an excerpt of the response is shown in figure 6) that illustrate what
|
| 103 |
+
|
| 104 |
+
|
| 105 |
+
|
| 106 |
+
the refinement and unfolding moves would look like. For example, validating the email with regular expressions would be a refinement on (3); describing the appearance of the HTML form (e.g. colour, logo, etc.) would be an unfolding prompt.
|
| 107 |
+
This programme is part of a conference registration platform and is designed to verify that the email provided by a registrant is a valid one. This requires: 1) a web page that
|
| 108 |
+
|
| 109 |
+
allows the registrant to enter their email address; 2) a web service that receives this address and sends the confirmation email; and 3) another web service that generates the confirmation page and waits for the registrant to retrieve it.
|
| 110 |
+
(3)
|
| 111 |
+
|
| 112 |
+
|
| 113 |
+
|
| 114 |
+
|
| 115 |
+
Figure 6 Excerpt of ChatGPT response when prompting (3).
|
| 116 |
+
|
| 117 |
+
The presented example is based on plain literate programming. Since large language models are currently becoming multimodal, it is also possible to enrich the specification with diagrams and other images [15].
|
| 118 |
+
|
| 119 |
+
5 Conclusion
|
| 120 |
+
This paper introduced the possibility of interpreting large language models as a fourth generation programming language, based on the notion that large language models are capable of translating a natural language specification into well-formed source code. This notion opens up and strengthens a wide range of research areas, including further developments from the compiler perspective to specific prompt engineering techniques for producing programs. The key discussion is that the current fears and suspicions raised by this technology are analogous to those that arose during the transition from the second to the third generation of programming languages. It is therefore a natural phenomenon that should be resolved on its own terms, as far as the proposal presented in this paper is concerned.
|
| 121 |
+
Comparing this approach with a current compiler raises the question of the probabilistic nature of large language models (the same prompt produces two outputs). In short, this can be addressed by a fixed, perhaps optimal, internal state with respect to the compilation task. However, the strengths and weaknesses of such an approach are a subject for future work. This is somehow related to explainability, a flourishing area of research that is expected to provide answers to questions about why a model has generated a particular piece of code.
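One practical way to approximate such a fixed internal state, sketched here only as an assumption about a generic text-generation interface (model.generate is a hypothetical call, not a specific product API), is to pin the decoding parameters:

def deterministic_generate(model, prompt, seed=0):
    # greedy decoding (temperature 0) plus a fixed seed removes most run-to-run variation
    return model.generate(prompt, temperature=0.0, top_p=1.0, seed=seed)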
|
| 122 |
+
Note that ChatGPT may produce incorrect code, that it is not tuned for programming, and that it will eventually become outdated. However, it is expected that a 4th generation compiler will be able to deal with such problems, just as 3rd generation compilers are able to deal with code with syntactic problems (perhaps one way, inspired by genetic algorithms, would be to produce a few generations and select the best according to a set of parameters).
|
| 123 |
+
|
| 124 |
+
|
| 125 |
+
|
| 126 |
+
|
| 127 |
+
|
| 128 |
+
|
| 129 |
+
|
| 130 |
+
This assertion assumes that the specification has been written properly and correctly, but it is also necessary to consider that intentional and induction errors will still exist, albeit at a higher level of abstraction. In this sense, if the executable does not run as expected, the programmer should concentrate on fixing the prompt, as they currently do with the source. This leads to issues related to the dataset, which will also need to be addressed in future work: considering the compilation task, it would be necessary to understand the composition of the dataset. Also, which setup would perform better, a general large language model or a fine-tuned one? What would be the fine-tuning parameters? And, when considering prompting, which software engineering tools, methods and principles are appropriate and which are not? Note that multimodal prompting requires a refresh of model-driven development; see [8]. Finally, this seems to be a promising area of research to
|
| 131 |
+
be explored further.
|
| 132 |
+
|
| 133 |
+
References
|
| 134 |
+
Alfred V Aho, Monica S Lam, Ravi Sethi, and Jeffrey D Ullman. Compilers: principles, techniques and tools. Pearson, 2020.
|
| 135 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33, 2020.
|
| 136 |
+
Robert B Grady and Deborah L Caswell. Software metrics: establishing a company-wide program. Prentice-Hall, Inc., 1987.
|
| 137 |
+
Ivar Jacobson, Ian Spence, and Brian Kerr. Use-case 2.0. Queue, 14(1):94–123, 2016.
|
| 138 |
+
Dan Jurafsky and James H. Martin. Speech and Language Processing. Draft (https://web.stanford.edu/~jurafsky/slp3/), third edition, 2023.
|
| 139 |
+
Donald Ervin Knuth. Literate programming. The computer journal, 27(2):97–111, 1984.
|
| 140 |
+
Philippe B Kruchten. The 4+1 view model of architecture. IEEE Software, 12(6):42–50, 1995.
|
| 141 |
+
Chris Raistrick, Paul Francis, John Wright, Colin Carter, and Ian Wilkie. Model driven architecture with executable UML, volume 1. Cambridge University Press, 2004.
|
| 142 |
+
Elvis Saravia. Prompt engineering guide, 2023. URL: https://www.promptingguide.ai/.
|
| 143 |
+
Robert W Sebesta. Concepts of programming languages. Pearson Education, 2019.
|
| 144 |
+
J.F. Smart and J. Molak. BDD in Action, Second Edition: Behavior-Driven Development for the Whole Software Lifecycle. Manning, 2023.
|
| 145 |
+
Bernhard Thalheim and Hannu Jaakkola. Model-based fifth generation programming. Information Modelling and Knowledge Bases, 31:381–400, 2020.
|
| 146 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
|
| 147 |
+
Karl Wiegers. More about software requirements. Microsoft Press, 2005.
|
| 148 |
+
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv:2302.00923, 2023.
|
| 149 |
+
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
|
| 150 |
+
Majd Zohri Yafi. A Syntactical Reverse Engineering Approach to Fourth Generation Programming Languages Using Formal Methods. PhD thesis, University of Essex, 2022.
|
Programming Python Fourth Edition424.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
Python Deep Learning Exploring deep learning techniques, neural network architectures and GANs with PyTorch, Keras and TensorFlow by Ivan Vasilev, Daniel Slater, Gianmario Spacagna, Peter Roelants, Va (z-lib.org)362.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
Python4AdvancedPython830.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
Sick it .txt
ADDED
|
@@ -0,0 +1,393 @@
| 1 |
+
There are several Python libraries which provide solid implementations of a range of machine learning algorithms. One of the best known is Scikit-Learn, a package that provides efficient versions of a large number of common algorithms. Scikit-Learn is characterized by a clean, uniform, and streamlined API, as well as by very useful and complete online documentation. A benefit of this uniformity is that once you understand the basic use and syntax of Scikit-Learn for one type of model, switching to a new model or algorithm is very straightforward.
|
| 2 |
+
|
| 3 |
+
This section provides an overview of the Scikit-Learn API; a solid understanding of these API elements will form the foundation for understanding the deeper practical discussion of machine learning algorithms and approaches in the following chapters.
|
| 4 |
+
|
| 5 |
+
We will start by covering data representation in Scikit-Learn, followed by covering the Estimator API, and finally go through a more interesting example of using these tools for exploring a set of images of hand-written digits.
|
| 6 |
+
|
| 7 |
+
Data Representation in Scikit-Learn
|
| 8 |
+
|
| 9 |
+
Machine learning is about creating models from data: for that reason, we'll start by discussing how data can be represented in order to be understood by the computer. The best way to think about data within Scikit-Learn is in terms of tables of data.
|
| 10 |
+
|
| 11 |
+
Data as table
|
| 12 |
+
|
| 13 |
+
A basic table is a two-dimensional grid of data, in which the rows represent individual elements of the dataset, and the columns represent quantities related to each of these elements. For example, consider the Iris dataset, famously analyzed by Ronald Fisher in 1936. We can download this dataset in the form of a Pandas DataFrame using the seaborn library:
|
| 14 |
+
|
| 15 |
+
In [1]:
|
| 16 |
+
|
| 17 |
+
import seaborn as sns
iris = sns.load_dataset('iris')
iris.head()
|
| 18 |
+
|
| 19 |
+
Out[1]:
|
| 20 |
+
|
| 21 |
+
   sepal_length  sepal_width  petal_length  petal_width species
0           5.1          3.5           1.4          0.2  setosa
1           4.9          3.0           1.4          0.2  setosa
2           4.7          3.2           1.3          0.2  setosa
3           4.6          3.1           1.5          0.2  setosa
4           5.0          3.6           1.4          0.2  setosa
|
| 22 |
+
|
| 23 |
+
Here each row of the data refers to a single observed flower, and the number of rows is the total number of flowers in the dataset. In general, we will refer to the rows of the matrix as samples, and the number of rows as n_samples.
|
| 24 |
+
|
| 25 |
+
Likewise, each column of the data refers to a particular quantitative piece of information that describes each sample. In general, we will refer to the columns of the matrix as features, and the number of columns as n_features.
|
| 26 |
+
|
| 27 |
+
Features matrix
|
| 28 |
+
|
| 29 |
+
This table layout makes clear that the information can be thought of as a two-dimensional numerical array or matrix, which we will call the features matrix. By convention, this features matrix is often stored in a variable named X. The features matrix is assumed to be two-dimensional, with shape [n_samples, n_features], and is most often contained in a NumPy array or a Pandas DataFrame, though some Scikit-Learn models also accept SciPy sparse matrices.
|
| 30 |
+
|
| 31 |
+
The samples (i.e., rows) always refer to the individual objects described by the dataset. For example, the sample might be a flower, a person, a document, an image, a sound file, a video, an astronomical object, or anything else you can describe with a set of quantitative measurements.
|
| 32 |
+
|
| 33 |
+
The features (i.e., columns) always refer to the distinct observations that describe each sample in a quantitative manner. Features are generally real-valued, but may be Boolean or discrete-valued in some cases.
|
| 34 |
+
|
| 35 |
+
Target array
|
| 36 |
+
|
| 37 |
+
In addition to the feature matrix X, we also generally work with a label or target array, which by convention we will usually call y. The target array is usually one dimensional, with length n_samples, and is generally contained in a NumPy array or Pandas Series. The target array may have continuous numerical values, or discrete classes/labels. While some Scikit-Learn estimators do handle multiple target values in the form of a two-dimensional, [n_samples, n_targets] target array, we will primarily be working with the common case of a one-dimensional target array.
|
| 38 |
+
|
| 39 |
+
Often one point of confusion is how the target array differs from the other features columns. The distinguishing feature of the target array is that it is usually the quantity we want to predict from the data: in statistical terms, it is the dependent variable. For example, in the preceding data we may wish to construct a model that can predict the species of flower based on the other measurements; in this case, the species column would be considered the target array.
|
| 40 |
+
|
| 41 |
+
With this target array in mind, we can use Seaborn (see Visualization With Seaborn) to conveniently visualize the data:
|
| 42 |
+
|
| 43 |
+
In [2]:
|
| 44 |
+
|
| 45 |
+
%matplotlib inline
import seaborn as sns; sns.set()
sns.pairplot(iris, hue='species', size=1.5);
|
| 46 |
+
|
| 47 |
+
For use in Scikit-Learn, we will extract the features matrix and target array from the DataFrame, which we can do using some of the Pandas DataFrame operations discussed in the Chapter 3:
|
| 48 |
+
|
| 49 |
+
In [3]:
|
| 50 |
+
|
| 51 |
+
X_iris = iris.drop('species', axis=1)
X_iris.shape
|
| 52 |
+
|
| 53 |
+
Out[3]:
|
| 54 |
+
|
| 55 |
+
(150, 4)
|
| 56 |
+
|
| 57 |
+
In [4]:
|
| 58 |
+
|
| 59 |
+
y_iris = iris['species']
y_iris.shape
|
| 60 |
+
|
| 61 |
+
Out[4]:
|
| 62 |
+
|
| 63 |
+
(150,)
|
| 64 |
+
|
| 65 |
+
To summarize, the expected layout of features and target values is visualized in the following diagram:
|
| 66 |
+
|
| 67 |
+
figure source in Appendix
|
| 68 |
+
|
| 69 |
+
With this data properly formatted, we can move on to consider the estimator API of Scikit-Learn:
|
| 70 |
+
|
| 71 |
+
Scikit-Learn's Estimator API
|
| 72 |
+
|
| 73 |
+
The Scikit-Learn API is designed with the following guiding principles in mind, as outlined in the Scikit-Learn API paper:
|
| 74 |
+
|
| 75 |
+
• Consistency: All objects share a common interface drawn from a limited set of methods, with consistent documentation.
|
| 76 |
+
|
| 77 |
+
• Inspection: All specified parameter values are exposed as public attributes.
|
| 78 |
+
|
| 79 |
+
• Limited object hierarchy: Only algorithms are represented by Python classes; datasets are represented in standard formats (NumPy arrays, Pandas DataFrames, SciPy sparse matrices) and parameter names use standard Python strings.
|
| 80 |
+
|
| 81 |
+
• Composition: Many machine learning tasks can be expressed as sequences of more fundamental algorithms, and Scikit-Learn makes use of this wherever possible.
|
| 82 |
+
|
| 83 |
+
• Sensible defaults: When models require user-specified parameters, the library defines an appropriate default value.
|
| 84 |
+
|
| 85 |
+
In practice, these principles make Scikit-Learn very easy to use, once the basic principles are understood. Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.
|
| 86 |
+
|
| 87 |
+
Basics of the API
|
| 88 |
+
|
| 89 |
+
Most commonly, the steps in using the Scikit-Learn estimator API are as follows (we will step through a handful of detailed examples in the sections that follow).
|
| 90 |
+
|
| 91 |
+
• Choose a class of model by importing the appropriate estimator class from Scikit-Learn.
|
| 92 |
+
|
| 93 |
+
• Choose model hyperparameters by instantiating this class with desired values.
|
| 94 |
+
|
| 95 |
+
• Arrange data into a features matrix and target vector following the discussion above.
|
| 96 |
+
|
| 97 |
+
• Fit the model to your data by calling the fit() method of the model instance.
|
| 98 |
+
|
| 99 |
+
• Apply the Model to new data:
|
| 100 |
+
|
| 101 |
+
• For supervised learning, often we predict labels for unknown data using the predict() method.
|
| 102 |
+
|
| 103 |
+
• For unsupervised learning, we often transform or infer properties of the data using the transform() or predict() method.
|
| 104 |
+
|
| 105 |
+
We will now step through several simple examples of applying supervised and unsupervised learning methods.
|
| 106 |
+
|
| 107 |
+
Supervised learning example: Simple linear regression
|
| 108 |
+
|
| 109 |
+
As an example of this process, let's consider a simple linear regression—that is, the common case of fitting a line to (x, y) data. We will use the following simple data for our regression example:
|
| 110 |
+
|
| 111 |
+
In [5]:
|
| 112 |
+
|
| 113 |
+
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.RandomState(42)
x = 10 * rng.rand(50)
y = 2 * x - 1 + rng.randn(50)
plt.scatter(x, y);
|
| 114 |
+
|
| 115 |
+
With this data in place, we can use the recipe outlined earlier. Let's walk through the process:
|
| 116 |
+
|
| 117 |
+
1. Choose a class of model
|
| 118 |
+
|
| 119 |
+
In Scikit-Learn, every class of model is represented by a Python class. So, for example, if we would like to compute a simple linear regression model, we can import the linear regression class:
|
| 120 |
+
|
| 121 |
+
In [6]:
|
| 122 |
+
|
| 123 |
+
from sklearn.linear_model import LinearRegression
|
| 124 |
+
|
| 125 |
+
Note that other more general linear regression models exist as well; you can read more about them in the sklearn.linear_model module documentation.
|
| 126 |
+
|
| 127 |
+
2. Choose model hyperparameters
|
| 128 |
+
|
| 129 |
+
An important point is that a class of model is not the same as an instance of a model.
|
| 130 |
+
|
| 131 |
+
Once we have decided on our model class, there are still some options open to us. Depending on the model class we are working with, we might need to answer one or more questions like the following:
|
| 132 |
+
|
| 133 |
+
• Would we like to fit for the offset (i.e., y-intercept)?
|
| 134 |
+
|
| 135 |
+
• Would we like the model to be normalized?
|
| 136 |
+
|
| 137 |
+
• Would we like to preprocess our features to add model flexibility?
|
| 138 |
+
|
| 139 |
+
• What degree of regularization would we like to use in our model?
|
| 140 |
+
|
| 141 |
+
• How many model components would we like to use?
|
| 142 |
+
|
| 143 |
+
These are examples of the important choices that must be made once the model class is selected. These choices are often represented as hyperparameters, or parameters that must be set before the model is fit to data. In Scikit-Learn, hyperparameters are chosen by passing values at model instantiation. We will explore how you can quantitatively motivate the choice of hyperparameters in Hyperparameters and Model Validation.
|
| 144 |
+
|
| 145 |
+
For our linear regression example, we can instantiate the LinearRegression class and specify that we would like to fit the intercept using the fit_intercept hyperparameter:
|
| 146 |
+
|
| 147 |
+
In [7]:
|
| 148 |
+
|
| 149 |
+
model = LinearRegression(fit_intercept=True)
model
|
| 150 |
+
|
| 151 |
+
Out[7]:
|
| 152 |
+
|
| 153 |
+
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
|
| 154 |
+
|
| 155 |
+
Keep in mind that when the model is instantiated, the only action is the storing of these hyperparameter values. In particular, we have not yet applied the model to any data: the Scikit-Learn API makes very clear the distinction between choice of model and application of model to data.
|
| 156 |
+
|
| 157 |
+
3. Arrange data into a features matrix and target vector
|
| 158 |
+
|
| 159 |
+
Previously we detailed the Scikit-Learn data representation, which requires a two-dimensional features matrix and a one-dimensional target array. Here our target variable y is already in the correct form (a length-n_samples array), but we need to massage the data x to make it a matrix of size [n_samples, n_features]. In this case, this amounts to a simple reshaping of the one-dimensional array:
|
| 160 |
+
|
| 161 |
+
In [8]:
|
| 162 |
+
|
| 163 |
+
X = x[:, np.newaxis]
X.shape
|
| 164 |
+
|
| 165 |
+
Out[8]:
|
| 166 |
+
|
| 167 |
+
(50, 1)
|
| 168 |
+
|
| 169 |
+
4. Fit the model to your data
|
| 170 |
+
|
| 171 |
+
Now it is time to apply our model to data. This can be done with the fit() method of the model:
|
| 172 |
+
|
| 173 |
+
In [9]:
|
| 174 |
+
|
| 175 |
+
model.fit(X, y)
|
| 176 |
+
|
| 177 |
+
Out[9]:
|
| 178 |
+
|
| 179 |
+
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
|
| 180 |
+
|
| 181 |
+
This fit() command causes a number of model-dependent internal computations to take place, and the results of these computations are stored in model-specific attributes that the user can explore. In Scikit-Learn, by convention all model parameters that were learned during the fit() process have trailing underscores; for example in this linear model, we have the following:
|
| 182 |
+
|
| 183 |
+
In [10]:
|
| 184 |
+
|
| 185 |
+
model.coef_
|
| 186 |
+
|
| 187 |
+
Out[10]:
|
| 188 |
+
|
| 189 |
+
array([ 1.9776566])
|
| 190 |
+
|
| 191 |
+
In [11]:
|
| 192 |
+
|
| 193 |
+
model.intercept_
|
| 194 |
+
|
| 195 |
+
Out[11]:
|
| 196 |
+
|
| 197 |
+
-0.90331072553111635
|
| 198 |
+
|
| 199 |
+
These two parameters represent the slope and intercept of the simple linear fit to the data. Comparing to the data definition, we see that they are very close to the input slope of 2 and intercept of -1.
|
| 200 |
+
|
| 201 |
+
One question that frequently comes up regards the uncertainty in such internal model parameters. In general, Scikit-Learn does not provide tools to draw conclusions from internal model parameters themselves: interpreting model parameters is much more a statistical modeling question than a machine learning question. Machine learning rather focuses on what the model predicts. If you would like to dive into the meaning of fit parameters within the model, other tools are available, including the Statsmodels Python package.
|
| 202 |
+
|
| 203 |
+
5. Predict labels for unknown data
|
| 204 |
+
|
| 205 |
+
Once the model is trained, the main task of supervised machine learning is to evaluate it based on what it says about new data that was not part of the training set. In Scikit-Learn, this can be done using the predict() method. For the sake of this example, our "new data" will be a grid of x values, and we will ask what y values the model predicts:
|
| 206 |
+
|
| 207 |
+
In [12]:
|
| 208 |
+
|
| 209 |
+
xfit = np.linspace(-1, 11)
|
| 210 |
+
|
| 211 |
+
As before, we need to coerce these x values into a [n_samples, n_features] features matrix, after which we can feed it to the model:
|
| 212 |
+
|
| 213 |
+
In [13]:
|
| 214 |
+
|
| 215 |
+
Xfit = xfit[:, np.newaxis]
yfit = model.predict(Xfit)
|
| 216 |
+
|
| 217 |
+
Finally, let's visualize the results by plotting first the raw data, and then this model fit:
|
| 218 |
+
|
| 219 |
+
In [14]:
|
| 220 |
+
|
| 221 |
+
plt.scatter(x, y)
plt.plot(xfit, yfit);
|
| 222 |
+
|
| 223 |
+
Typically the efficacy of the model is evaluated by comparing its results to some known baseline, as we will see in the next example.
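As a small illustrative comparison (an addition to the original walkthrough), the fitted line can be scored against a trivial baseline that always predicts the mean of the training targets:

from sklearn.dummy import DummyRegressor
from sklearn.metrics import r2_score

# a "predict the mean" baseline gives an R^2 of 0.0 on the training data by construction
baseline = DummyRegressor(strategy='mean').fit(X, y)
print("baseline R^2:  ", r2_score(y, baseline.predict(X)))
print("linear fit R^2:", r2_score(y, model.predict(X)))   # close to 1.0 for this data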
|
| 224 |
+
|
| 225 |
+
Supervised learning example: Iris classification
|
| 226 |
+
|
| 227 |
+
Let's take a look at another example of this process, using the Iris dataset we discussed earlier. Our question will be this: given a model trained on a portion of the Iris data, how well can we predict the remaining labels?
|
| 228 |
+
|
| 229 |
+
For this task, we will use an extremely simple generative model known as Gaussian naive Bayes, which proceeds by assuming each class is drawn from an axis-aligned Gaussian distribution (see In Depth: Naive Bayes Classification for more details). Because it is so fast and has no hyperparameters to choose, Gaussian naive Bayes is often a good model to use as a baseline classification, before exploring whether improvements can be found through more sophisticated models.
|
| 230 |
+
|
| 231 |
+
We would like to evaluate the model on data it has not seen before, and so we will split the data into a training set and a testing set. This could be done by hand, but it is more convenient to use the train_test_split utility function:
|
| 232 |
+
|
| 233 |
+
In [15]:
|
| 234 |
+
|
| 235 |
+
from sklearn.model_selection import train_test_split   # sklearn.cross_validation in older scikit-learn releases
Xtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, random_state=1)
|
| 236 |
+
|
| 237 |
+
With the data arranged, we can follow our recipe to predict the labels:
|
| 238 |
+
|
| 239 |
+
In [16]:
|
| 240 |
+
|
| 241 |
+
from sklearn.naive_bayes import GaussianNB   # 1. choose model class
model = GaussianNB()                         # 2. instantiate model
model.fit(Xtrain, ytrain)                    # 3. fit model to data
y_model = model.predict(Xtest)               # 4. predict on new data
|
| 242 |
+
|
| 243 |
+
Finally, we can use the accuracy_score utility to see the fraction of predicted labels that match their true value:
|
| 244 |
+
|
| 245 |
+
In [17]:
|
| 246 |
+
|
| 247 |
+
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
|
| 248 |
+
|
| 249 |
+
Out[17]:
|
| 250 |
+
|
| 251 |
+
0.97368421052631582
|
| 252 |
+
|
| 253 |
+
With an accuracy topping 97%, we see that even this very naive classification algorithm is effective for this particular dataset!
|
| 254 |
+
|
| 255 |
+
Unsupervised learning example: Iris dimensionality
|
| 256 |
+
|
| 257 |
+
As an example of an unsupervised learning problem, let's take a look at reducing the dimensionality of the Iris data so as to more easily visualize it. Recall that the Iris data is four dimensional: there are four features recorded for each sample.
|
| 258 |
+
|
| 259 |
+
The task of dimensionality reduction is to ask whether there is a suitable lower-dimensional representation that retains the essential features of the data. Often dimensionality reduction is used as an aid to visualizing data: after all, it is much easier to plot data in two dimensions than in four dimensions or higher!
|
| 260 |
+
|
| 261 |
+
Here we will use principal component analysis (PCA; see In Depth: Principal Component Analysis), which is a fast linear dimensionality reduction technique. We will ask the model to return two components—that is, a two-dimensional representation of the data.
|
| 262 |
+
|
| 263 |
+
Following the sequence of steps outlined earlier, we have:
|
| 264 |
+
|
| 265 |
+
In [18]:
|
| 266 |
+
|
| 267 |
+
from sklearn.decomposition import PCA   # 1. Choose the model class
model = PCA(n_components=2)             # 2. Instantiate the model with hyperparameters
model.fit(X_iris)                       # 3. Fit to data. Notice y is not specified!
X_2D = model.transform(X_iris)          # 4. Transform the data to two dimensions
|
| 268 |
+
|
| 269 |
+
Now let's plot the results. A quick way to do this is to insert the results into the original Iris DataFrame, and use Seaborn's lmplot to show the results:
|
| 270 |
+
|
| 271 |
+
In [19]:
|
| 272 |
+
|
| 273 |
+
iris['PCA1'] = X_2D[:, 0]
iris['PCA2'] = X_2D[:, 1]
sns.lmplot("PCA1", "PCA2", hue='species', data=iris, fit_reg=False);
|
| 274 |
+
|
| 275 |
+
We see that in the two-dimensional representation, the species are fairly well separated, even though the PCA algorithm had no knowledge of the species labels! This indicates to us that a relatively straightforward classification will probably be effective on the dataset, as we saw before.
|
| 276 |
+
|
| 277 |
+
Unsupervised learning: Iris clustering
|
| 278 |
+
|
| 279 |
+
Let's next look at applying clustering to the Iris data. A clustering algorithm attempts to find distinct groups of data without reference to any labels. Here we will use a powerful clustering method called a Gaussian mixture model (GMM), discussed in more detail in In Depth: Gaussian Mixture Models. A GMM attempts to model the data as a collection of Gaussian blobs.
|
| 280 |
+
|
| 281 |
+
We can fit the Gaussian mixture model as follows:
|
| 282 |
+
|
| 283 |
+
In [20]:
|
| 284 |
+
|
| 285 |
+
from sklearn.mixture import GaussianMixture   # 1. Choose the model class (GMM in older scikit-learn)
model = GaussianMixture(n_components=3, covariance_type='full')   # 2. Instantiate the model with hyperparameters
model.fit(X_iris)               # 3. Fit to data. Notice y is not specified!
y_gmm = model.predict(X_iris)   # 4. Determine cluster labels
|
| 286 |
+
|
| 287 |
+
As before, we will add the cluster label to the Iris DataFrame and use Seaborn to plot the results:
|
| 288 |
+
|
| 289 |
+
In [21]:
|
| 290 |
+
|
| 291 |
+
iris['cluster'] = y_gmm
sns.lmplot("PCA1", "PCA2", data=iris, hue='species', col='cluster', fit_reg=False);
|
| 292 |
+
|
| 293 |
+
By splitting the data by cluster number, we see exactly how well the GMM algorithm has recovered the underlying label: the setosa species is separated perfectly within cluster 0, while there remains a small amount of mixing between versicolor and virginica. This means that even without an expert to tell us the species labels of the individual flowers, the measurements of these flowers are distinct enough that we could automatically identify the presence of these different groups of species with a simple clustering algorithm! This sort of algorithm might further give experts in the field clues as to the relationship between the samples they are observing.
|
| 294 |
+
|
| 295 |
+
Application: Exploring Hand-written Digits
|
| 296 |
+
|
| 297 |
+
To demonstrate these principles on a more interesting problem, let's consider one piece of the optical character recognition problem: the identification of hand-written digits. In the wild, this problem involves both locating and identifying characters in an image. Here we'll take a shortcut and use Scikit-Learn's set of pre-formatted digits, which is built into the library.
|
| 298 |
+
|
| 299 |
+
Loading and visualizing the digits data
|
| 300 |
+
|
| 301 |
+
We'll use Scikit-Learn's data access interface and take a look at this data:
|
| 302 |
+
|
| 303 |
+
In [22]:
|
| 304 |
+
|
| 305 |
+
from sklearn.datasets import load_digits
digits = load_digits()
digits.images.shape
|
| 306 |
+
|
| 307 |
+
Out[22]:
|
| 308 |
+
|
| 309 |
+
(1797, 8, 8)
|
| 310 |
+
|
| 311 |
+
The images data is a three-dimensional array: 1,797 samples each consisting of an 8 × 8 grid of pixels. Let's visualize the first hundred of these:
|
| 312 |
+
|
| 313 |
+
In [23]:
|
| 314 |
+
|
| 315 |
+
import matplotlib.pyplot as plt

fig, axes = plt.subplots(10, 10, figsize=(8, 8),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))

for i, ax in enumerate(axes.flat):
    ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
    ax.text(0.05, 0.05, str(digits.target[i]),
            transform=ax.transAxes, color='green')
|
| 316 |
+
|
| 317 |
+
In order to work with this data within Scikit-Learn, we need a two-dimensional, [n_samples, n_features] representation. We can accomplish this by treating each pixel in the image as a feature: that is, by flattening out the pixel arrays so that we have a length-64 array of pixel values representing each digit. Additionally, we need the target array, which gives the previously determined label for each digit. These two quantities are built into the digits dataset under the data and target attributes, respectively:
|
| 318 |
+
|
| 319 |
+
In [24]:
|
| 320 |
+
|
| 321 |
+
X = digits.data
X.shape
|
| 322 |
+
|
| 323 |
+
Out[24]:
|
| 324 |
+
|
| 325 |
+
(1797, 64)
|
| 326 |
+
|
| 327 |
+
In [25]:
|
| 328 |
+
|
| 329 |
+
y = digits.target
y.shape
|
| 330 |
+
|
| 331 |
+
Out[25]:
|
| 332 |
+
|
| 333 |
+
(1797,)
|
| 334 |
+
|
| 335 |
+
We see here that there are 1,797 samples and 64 features.
|
| 336 |
+
|
| 337 |
+
Unsupervised learning: Dimensionality reduction
|
| 338 |
+
|
| 339 |
+
We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to effectively visualize points in such a high-dimensional space. Instead we'll reduce the dimensions to 2, using an unsupervised method. Here, we'll make use of a manifold learning algorithm called Isomap (see In-Depth: Manifold Learning), and transform the data to two dimensions:
|
| 340 |
+
|
| 341 |
+
In [26]:
|
| 342 |
+
|
| 343 |
+
from sklearn.manifold import Isomap

iso = Isomap(n_components=2)
iso.fit(digits.data)
data_projected = iso.transform(digits.data)
data_projected.shape
|
| 344 |
+
|
| 345 |
+
Out[26]:
|
| 346 |
+
|
| 347 |
+
(1797, 2)
|
| 348 |
+
|
| 349 |
+
We see that the projected data is now two-dimensional. Let's plot this data to see if we can learn anything from its structure:
|
| 350 |
+
|
| 351 |
+
In [27]:
|
| 352 |
+
|
| 353 |
+
plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
            edgecolor='none', alpha=0.5,
            cmap=plt.cm.get_cmap('Spectral', 10))  # 'spectral' was renamed 'Spectral' in newer matplotlib
plt.colorbar(label='digit label', ticks=range(10))
plt.clim(-0.5, 9.5);
|
| 354 |
+
|
| 355 |
+
This plot gives us some good intuition into how well various numbers are separated in the larger 64-dimensional space. For example, zeros (in black) and ones (in purple) have very little overlap in parameter space. Intuitively, this makes sense: a zero is empty in the middle of the image, while a one will generally have ink in the middle. On the other hand, there seems to be a more or less continuous spectrum between ones and fours: we can understand this by realizing that some people draw ones with "hats" on them, which cause them to look similar to fours.
|
| 356 |
+
|
| 357 |
+
Overall, however, the different groups appear to be fairly well separated in the parameter space: this tells us that even a very straightforward supervised classification algorithm should perform suitably on this data. Let's give it a try.
|
| 358 |
+
|
| 359 |
+
Classification on digits
|
| 360 |
+
|
| 361 |
+
Let's apply a classification algorithm to the digits. As with the Iris data previously, we will split the data into a training and testing set, and fit a Gaussian naive Bayes model:
|
| 362 |
+
|
| 363 |
+
In [28]:
|
| 364 |
+
|
| 365 |
+
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
|
| 366 |
+
|
| 367 |
+
In [29]:
|
| 368 |
+
|
| 369 |
+
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(Xtrain, ytrain)
y_model = model.predict(Xtest)
|
| 370 |
+
|
| 371 |
+
Now that we have predicted our model, we can gauge its accuracy by comparing the true values of the test set to the predictions:
|
| 372 |
+
|
| 373 |
+
In [30]:
|
| 374 |
+
|
| 375 |
+
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
|
| 376 |
+
|
| 377 |
+
Out[30]:
|
| 378 |
+
|
| 379 |
+
0.83333333333333337
|
| 380 |
+
|
| 381 |
+
With even this extremely simple model, we find about 80% accuracy for classification of the digits! However, this single number doesn't tell us where we've gone wrong—one nice way to do this is to use the confusion matrix, which we can compute with Scikit-Learn and plot with Seaborn:
|
| 382 |
+
|
| 383 |
+
In [31]:
|
| 384 |
+
|
| 385 |
+
from sklearn.metrics import confusion_matrix

mat = confusion_matrix(ytest, y_model)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value');
|
| 386 |
+
|
| 387 |
+
This shows us where the mis-labeled points tend to be: for example, a large number of twos here are mis-classified as either ones or eights. Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We'll use green for correct labels, and red for incorrect labels:
|
| 388 |
+
|
| 389 |
+
In [32]:
|
| 390 |
+
|
| 391 |
+
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))

test_images = Xtest.reshape(-1, 8, 8)

for i, ax in enumerate(axes.flat):
    ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
    ax.text(0.05, 0.05, str(y_model[i]), transform=ax.transAxes,
            color='green' if (ytest[i] == y_model[i]) else 'red')
|
| 392 |
+
|
| 393 |
+
https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.02-Introducing-Scikit-Learn.ipynb
|
UnleashingthePowerofAdvancedPythonProgramming607.txt
ADDED
|
@@ -0,0 +1,601 @@
| 1 |
+
|
| 2 |
+
|
| 3 |
+
|
| 4 |
+
|
| 5 |
+
|
| 6 |
+
Unleashing the Power of Advanced Python Programming
|
| 7 |
+
Preprint · August 2023
|
| 8 |
+
DOI: 10.13140/RG.2.2.23019.31520
|
| 9 |
+
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
|
| 17 |
+
|
| 18 |
+
|
| 19 |
+
2 authors:
|
| 20 |
+
|
| 21 |
+
|
| 22 |
+
Abu Rayhan CBECL
|
| 24 |
+
Robert Kinzler
|
| 25 |
+
Harvard University
|
| 27 |
+
|
| 28 |
+
|
| 29 |
+
|
| 30 |
+
|
| 31 |
+
|
| 32 |
+
|
| 33 |
+
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
|
| 40 |
+
|
| 41 |
+
|
| 42 |
+
|
| 43 |
+
|
| 44 |
+
|
| 45 |
+
|
| 46 |
+
|
| 47 |
+
|
| 48 |
+
|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
|
| 58 |
+
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
|
| 62 |
+
|
| 63 |
+
|
| 64 |
+
|
| 65 |
+
|
| 66 |
+
|
| 67 |
+
|
| 68 |
+
|
| 69 |
+
|
| 70 |
+
|
| 71 |
+
|
| 72 |
+
|
| 73 |
+
|
| 74 |
+
|
| 76 |
+
|
| 78 |
+
|
| 79 |
+
Unleashing the Power of Advanced Python Programming
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
|
| 83 |
+
|
| 84 |
+
|
| 85 |
+
Abu Rayhan¹, Robert Kinzler²
¹Abu Rayhan, CBECL, Dhaka, Bangladesh
²Robert Kinzler, Harvard University

Abstract:
|
| 88 |
+
|
| 89 |
+
In the rapidly evolving landscape of programming languages, Python has emerged as a versatile and widely adopted choice. This research paper delves into the realm of advanced Python programming techniques, aiming to equip seasoned developers with the tools to unlock the language's full potential. We explore topics such as metaprogramming, concurrency, decorators, optimization strategies, and more. Through a combination of clear explanations, illustrative code snippets, tables, and charts, we demonstrate how these advanced concepts elevate Python beyond its foundational capabilities.
|
| 90 |
+
|
| 91 |
+
Keywords: Python programming, advanced techniques, metaprogramming, concurrency, decorators, optimization strategies.
|
| 92 |
+
|
| 93 |
+
Introduction:
|
| 94 |
+
|
| 95 |
+
Python's Popularity and Evolution:
|
| 96 |
+
|
| 97 |
+
Python, a dynamically typed, high-level programming language, has witnessed an unprecedented surge in popularity over the past few decades. Its simplicity, readability, and versatility have propelled it into a leading position in diverse domains, including web development, scientific computing, data analysis, and artificial intelligence. Guido van Rossum's creation, Python, first emerged in the late 1980s and has since evolved through multiple iterations, with the latest stable release being Python 3.9.
|
| 98 |
+
|
| 99 |
+
Advancing Programming Skills for Proficient Developers:
|
| 100 |
+
|
| 101 |
+
As the software development landscape continues to evolve rapidly, the importance of honing advanced programming skills cannot be overstated. Proficient developers who possess an in-depth understanding of Python's advanced features can unlock a realm of possibilities for creating more efficient, maintainable, and elegant code. This paper aims to guide developers through various advanced concepts and techniques in Python, enabling them to navigate the complexities of modern software development and produce solutions that align with best practices.
|
| 102 |
+
|
| 103 |
+
In the subsequent sections, we will delve into the intricate world of advanced Python programming, exploring metaprogramming, concurrency, decorators, optimization strategies, and more. Through a combination of explanations, code snippets, charts, and case studies, we will illuminate the path toward mastering these concepts. By the end of this paper, readers will be equipped with the knowledge and tools to elevate their Python programming skills to new heights, empowering them to tackle complex challenges with confidence and finesse.
|
| 107 |
+
|
| 108 |
+
Let's embark on this journey of exploration and mastery, as we unravel the intricacies of advanced Python programming techniques.
|
| 109 |
+
|
| 110 |
+
Metaprogramming and Reflection:
|
| 111 |
+
|
| 112 |
+
Explanation of Metaprogramming and Its Relevance:
|
| 113 |
+
|
| 114 |
+
Metaprogramming in Python involves writing code that manipulates or generates other code during runtime. This dynamic approach to programming allows developers to create more flexible and adaptable systems. Metaprogramming is particularly relevant in scenarios where code generation, configuration management, and aspect-oriented programming are essential. By altering the structure and behavior of code programmatically, metaprogramming contributes to code reusability, modularity, and maintenance.
|
| 115 |
+
|
| 116 |
+
Demonstrating Introspection and Reflection:
|
| 117 |
+
|
| 118 |
+
Introspection is the ability of a program to examine its own structure and properties at runtime. Python provides rich introspection capabilities, allowing developers to inspect objects, functions, and modules. Reflection, on the other hand, involves modifying and utilizing code structures based on their runtime properties. This dynamic manipulation is facilitated by functions such as `getattr()`, `setattr()`, and `hasattr()`.
|
| 119 |
+
|
| 120 |
+
Consider the following code snippet that showcases introspection and reflection:
|
| 121 |
+
|
| 122 |
+
```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

person = Person("Alice", 30)

# Introspection: examining object attributes
attributes = dir(person)
print(attributes)

# Reflection: modifying an attribute dynamically
if hasattr(person, "age"):
    setattr(person, "age", 31)
print(person.age)
```
|
| 133 |
+
Case Studies Illustrating Metaclasses, Attribute Access, and Code Generation:

Metaclasses:
|
| 134 |
+
Metaclasses are classes that define the behavior of other classes, serving as blueprints for class creation. They enable developers to customize class creation and attribute handling. Consider a scenario where we want all attributes of a class to be uppercase. A metaclass can be used to achieve this behavior:
|
| 135 |
+
|
| 136 |
+
```python
class UppercaseAttributesMeta(type):
    def __new__(cls, name, bases, attrs):
        uppercase_attrs = {}
        for attr_name, attr_value in attrs.items():
            if not attr_name.startswith("_"):
                uppercase_attrs[attr_name.upper()] = attr_value
        return super().__new__(cls, name, bases, uppercase_attrs)

class MyClass(metaclass=UppercaseAttributesMeta):
    value = 42

obj = MyClass()
print(obj.VALUE)  # Output: 42
```
|
| 144 |
+
|
| 145 |
+
Attribute Access:
|
| 146 |
+
Python allows customizing attribute access using special methods such as `__getattr__()` and `__setattr__()`. This can be useful for implementing lazy loading, validation, and logging. Here's an example of using `__getattr__()` for lazy loading:
|
| 148 |
+
|
| 149 |
+
|
| 150 |
+
```python
from types import SimpleNamespace

class LazyLoader:
    def __init__(self):
        self._data = None

    def __getattr__(self, name):
        if self._data is None:
            self._data = self._load_data()
        return getattr(self._data, name)

    def _load_data(self):
        # Stand-in for loading data from an external source
        return SimpleNamespace(value="loaded data")

loader = LazyLoader()
print(loader.value)  # Data is loaded and the attribute is accessed
```
|
| 160 |
+
|
| 161 |
+
Code Generation:
|
| 162 |
+
Code generation involves creating new code based on existing code or specifications. This is often used in frameworks, ORM (Object-Relational Mapping) systems, and template engines. Consider a simple example of generating a basic Python class using string interpolation:
|
| 163 |
+
|
| 164 |
+
```python
class_name = "MyGeneratedClass"
class_attrs = ["attribute1", "attribute2"]

class_template = f"class {class_name}:\n"
for attr in class_attrs:
    class_template += f"    {attr} = None\n"

exec(class_template)

obj = MyGeneratedClass()
print(obj.attribute1)  # Output: None
```
|
| 172 |
+
|
| 173 |
+
In these case studies, we've explored metaprogramming concepts like metaclasses, attribute access customization, and code generation, showcasing how metaprogramming can provide powerful tools for dynamic manipulation of code and objects in Python.
|
| 174 |
+
|
| 175 |
+
Concurrency and Parallelism:
|
| 176 |
+
|
| 177 |
+
Concurrency is a fundamental concept in modern software development, allowing programs to execute multiple tasks seemingly simultaneously. However, Python's Global Interpreter Lock (GIL) has been a subject of discussion due to its impact on concurrency. The GIL restricts the Python interpreter to executing only one thread at a time per process. This limitation can hinder the effective utilization of multiple CPU cores, particularly in CPU-bound tasks.
|
| 178 |
+
|
| 179 |
+
The Global Interpreter Lock (GIL):
|
| 180 |
+
The Global Interpreter Lock (GIL) is a mutex that protects access to Python objects and prevents multiple threads from executing Python bytecodes concurrently. While this simplifies memory management and maintains data integrity, it can limit the performance of multithreaded programs by preventing true parallel execution of Python code.
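To make the GIL's effect concrete, the rough sketch below (timings are illustrative and vary by machine) runs the same CPU-bound countdown first sequentially and then on two threads; the threaded run is typically no faster:

```python
import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound loop; the GIL serializes its bytecode execution
    while n > 0:
        n -= 1

N = 5_000_000  # illustrative workload size

start = time.perf_counter()
count_down(N)
count_down(N)
print(f"Sequential: {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Two threads: {time.perf_counter() - start:.2f} s")  # usually no faster than sequential
```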
|
| 181 |
+
|
| 182 |
+
Multithreading:
|
| 183 |
+
Multithreading involves using multiple threads to execute different tasks concurrently within a single process. Despite the GIL, multithreading can still be beneficial for I/O- bound tasks. Here, each thread can perform non-blocking I/O operations while waiting for external resources, thus effectively utilizing CPU time.
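As a minimal sketch of this idea (the URLs and worker count below are placeholders), an I/O-bound workload can be spread across a thread pool using `concurrent.futures`:

```python
import concurrent.futures
import urllib.request

def fetch_length(url):
    # The thread blocks on network I/O and releases the GIL while it waits
    with urllib.request.urlopen(url) as response:
        return url, len(response.read())

urls = ["https://example.com", "https://example.org"]  # placeholder URLs

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    for url, size in executor.map(fetch_length, urls):
        print(f"{url}: {size} bytes")
```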
|
| 184 |
+
|
| 185 |
+
Multiprocessing:
|
| 186 |
+
To overcome the GIL's limitations, multiprocessing allows Python programs to create multiple processes, each with its own interpreter and memory space. This enables true parallel execution on multiple CPU cores, making multiprocessing suitable for CPU- bound tasks. A simple example of multiprocessing can be demonstrated through parallelizing a time-consuming calculation:
|
| 187 |
+
|
| 188 |
+
```python
import multiprocessing

def calculate_square(number):
    return number * number

if __name__ == '__main__':
    numbers = [1, 2, 3, 4, 5]
    with multiprocessing.Pool() as pool:
        squared_numbers = pool.map(calculate_square, numbers)
    print(squared_numbers)
```
|
| 197 |
+
|
| 198 |
+
Asynchronous Programming:
|
| 199 |
+
Asynchronous programming leverages the `async` and `await` keywords to manage concurrency without relying on threads or processes. This approach is particularly useful for I/O-bound tasks where waiting for external resources would otherwise cause idle time. The `asyncio` library provides a framework for asynchronous programming:
|
| 200 |
+
|
| 201 |
+
```python
import asyncio

async def fetch_data(url):
    # Simulate asynchronous I/O
    await asyncio.sleep(2)
    return f"Data fetched from {url}"

async def main():
    tasks = [fetch_data("example.com"), fetch_data("example.org")]
    results = await asyncio.gather(*tasks)
    print(results)

if __name__ == '__main__':
    asyncio.run(main())
```
|
| 209 |
+
|
| 210 |
+
Comparison of Approaches:
|
| 211 |
+
The choice between multithreading, multiprocessing, and asynchronous programming depends on the nature of the task at hand. Multithreading is suitable for I/O-bound tasks, multiprocessing for CPU-bound tasks, and asynchronous programming for tasks involving frequent I/O operations. It's important to assess the trade-offs and consider factors such as complexity, resource usage, and code readability when selecting an approach.
|
| 215 |
+
|
| 216 |
+
In the next section, we delve into the world of decorators and higher-order functions, uncovering their significance in creating more modular and expressive Python code.
|
| 217 |
+
|
| 218 |
+
Table 1: Comparison of Concurrency Approaches
|
| 219 |
+
|
| 220 |
+
|
| 221 |
+
Decorators and Higher-Order Functions:
|
| 222 |
+
|
| 223 |
+
In the realm of advanced Python programming, decorators and higher-order functions play a pivotal role in enhancing code modularity, reusability, and overall readability. Decorators are functions that modify the behavior of other functions or methods without changing their core logic. They provide a powerful tool for adding functionality to existing code without modifying it directly. On the other hand, higher-order functions are functions that either take one or more functions as arguments or return a function as their result.
|
| 224 |
+
|
| 225 |
+
Decorators for Code Enhancement:
|
| 226 |
+
Decorators allow developers to encapsulate common functionality that can be applied to multiple functions or methods. This promotes a cleaner codebase by avoiding code duplication and ensuring consistency in behavior. They are particularly useful for tasks such as logging, access control, and memoization.
|
| 227 |
+
|
| 228 |
+
Example: Logging Decorator
|
| 229 |
+
```python
def log_function_call(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__} with arguments {args} and keyword arguments {kwargs}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@log_function_call
def add(a, b):
    return a + b

result = add(3, 5)  # Output will display the function call and its result
```
|
| 240 |
+
|
| 241 |
+
Built-in and Custom Decorators:
|
| 242 |
+
Python provides a set of built-in decorators that cater to common scenarios. Examples include `@staticmethod` and `@property`. The `@staticmethod` decorator defines a method that belongs to a class rather than an instance, while the `@property` decorator allows for the creation of read-only attributes that are computed on the fly.
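A small, self-contained sketch of both built-in decorators (the `Circle` class and its attributes are invented for illustration):

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        # Read-only attribute computed on the fly
        return 3.141592653589793 * self._radius ** 2

    @staticmethod
    def describe():
        # Belongs to the class rather than to any instance
        return "A circle is defined by its radius."

c = Circle(2)
print(c.area)            # 12.566370614359172
print(Circle.describe())
```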
|
| 243 |
+
|
| 244 |
+
Developers can also create custom decorators tailored to their specific requirements. These decorators can encapsulate complex logic and enable the easy addition of features to functions or methods.
|
| 245 |
+
|
| 246 |
+
Example: Authorization Decorator
|
| 247 |
+
```python
def authorize(permission):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # check_permission() is assumed to be provided elsewhere by the application
            if check_permission(permission):
                return func(*args, **kwargs)
            else:
                raise PermissionError("Unauthorized access")
        return wrapper
    return decorator

@authorize("admin")
def delete_file(file_id):
    # Code to delete the specified file
    pass
```
|
| 259 |
+
|
| 260 |
+
Leveraging Higher-Order Functions:
|
| 261 |
+
Higher-order functions empower developers to write more expressive and flexible code by abstracting away repetitive patterns. They can receive functions as arguments, enabling dynamic behavior, and return functions as output, enhancing code modularity.
|
| 262 |
+
|
| 263 |
+
Example: Mapping with Higher-Order Function
|
| 264 |
+
```python
def apply_to_list(func, items):
    return [func(item) for item in items]

numbers = [1, 2, 3, 4, 5]
squared_numbers = apply_to_list(lambda x: x ** 2, numbers)
# squared_numbers will contain [1, 4, 9, 16, 25]
```
|
| 269 |
+
|
| 270 |
+
In conclusion, decorators and higher-order functions are indispensable tools in advanced Python programming. They empower developers to write more concise, modular, and extensible code, promoting best practices in software design. By incorporating built-in decorators, crafting custom decorators, and utilizing higher- order functions, developers can achieve code that is both elegant and functional.
|
| 271 |
+
|
| 272 |
+
Table 2: Common Built-in Decorators
|
| 273 |
+
|
| 274 |
+
|
| 275 |
+
|
| 276 |
+
Table 3: Common Custom Decorators
|
| 277 |
+
|
| 278 |
+
|
| 279 |
+
These tables provide an overview of some commonly used built-in and custom decorators, showcasing their diverse applications in Python programming.
|
| 280 |
+
Performance Optimization:
|
| 281 |
+
|
| 282 |
+
Python's elegance and expressiveness come at a cost: runtime performance. For developers, understanding and addressing the common performance bottlenecks in Python code is essential for building efficient applications.
|
| 283 |
+
|
| 284 |
+
Common Performance Bottlenecks:
|
| 285 |
+
|
| 286 |
+
Table 4
|
| 287 |
+
|
| 288 |
+
Profiling and Benchmarking:
|
| 289 |
+
|
| 290 |
+
To tackle performance issues, profiling and benchmarking tools provide invaluable insights. Profilers like `cProfile` help identify bottlenecks by showing function call times and call counts. On the other hand, benchmarking tools such as `timeit` allow you to measure execution times of specific code snippets.
|
| 291 |
+
|
| 292 |
+
Example of using `cProfile`:
|
| 293 |
+
|
| 294 |
+
```python
import cProfile

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

cProfile.run("fibonacci(30)")
```
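For quick micro-benchmarks, `timeit` complements `cProfile`; a minimal sketch (the timed statement and repetition count are arbitrary):

```python
import timeit

# Time a small expression executed one million times
elapsed = timeit.timeit("sum(range(100))", number=1_000_000)
print(f"Total time: {elapsed:.3f} s")
```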
|
| 300 |
+
Caching and Memoization:
|
| 301 |
+
|
| 302 |
+
Caching and memoization techniques reduce redundant computations by storing results of expensive function calls. The `functools.lru_cache` decorator is a powerful tool for memoization, automatically managing a cache of the most recently used function calls.
|
| 303 |
+
|
| 304 |
+
Example of using `functools.lru_cache`:
|
| 305 |
+
|
| 306 |
+
```python
import functools

@functools.lru_cache(maxsize=None)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

result = fibonacci(30)
```
|
| 311 |
+
|
| 312 |
+
Just-In-Time (JIT) Compilation:
|
| 313 |
+
|
| 314 |
+
JIT compilation enhances execution speed by translating parts of your Python code into machine code at runtime. The `Numba` library provides JIT compilation capabilities, allowing you to decorate functions to take advantage of optimized execution.
|
| 315 |
+
|
| 316 |
+
Example of using `Numba` for JIT compilation:
|
| 317 |
+
|
| 318 |
+
```python
from numba import jit

@jit
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

result = fibonacci(30)
```
|
| 326 |
+
|
| 327 |
+
By applying these techniques—profiling, caching, and JIT compilation—you can significantly enhance the performance of your Python code, transforming it from a potential bottleneck into a smoothly running, optimized system.
|
| 328 |
+
|
| 329 |
+
Advanced Data Structures and Algorithms:
|
| 330 |
+
|
| 331 |
+
In the realm of advanced Python programming, a profound understanding of data structures and algorithms empowers developers to craft efficient and scalable solutions. This section delves into the intricacies of advanced data structures and algorithms, highlighting their significance and real-world applications.
|
| 332 |
+
|
| 333 |
+
Advanced Data Structures:
|
| 334 |
+
|
| 335 |
+
Advanced data structures play a pivotal role in optimizing memory usage, access times, and overall algorithmic efficiency. Below, we present a brief overview of some key advanced data structures:
|
| 336 |
+
|
| 337 |
+
Table 5
|
| 338 |
+
|
| 339 |
+
|
| 340 |
+
Advanced Algorithms:
|
| 341 |
+
|
| 342 |
+
Advanced algorithms enhance problem-solving capabilities and are essential for tackling complex tasks. Here, we touch upon a selection of essential algorithms:
|
| 343 |
+
|
| 344 |
+
Sorting Algorithms: Sorting is a fundamental operation in computer science. Python's standard library provides efficient sorting through `sorted()` and `list.sort()`, which implement Timsort, an adaptive merge sort whose running time depends on how ordered the input already is, while the `heapq` module offers heap-based helpers such as `heapq.nlargest()` for partial sorting.
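A brief illustration of both routines on arbitrary sample data:

```python
import heapq

data = [29, 3, 44, 7, 18]

print(sorted(data))             # full adaptive merge sort (Timsort): [3, 7, 18, 29, 44]
print(heapq.nlargest(2, data))  # heap-based "top-k" selection: [44, 29]
```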
|
| 345 |
+
|
| 346 |
+
Searching Algorithms: Searching algorithms facilitate finding specific elements within a dataset. Binary search, available in Python's standard library through the `bisect` module, achieves logarithmic time complexity and is suitable for sorted collections.
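Using the `bisect` module, a logarithmic-time lookup over sorted data can be sketched as follows (the sample data is arbitrary):

```python
import bisect

sorted_data = [2, 5, 8, 12, 16, 23]
index = bisect.bisect_left(sorted_data, 12)
if index < len(sorted_data) and sorted_data[index] == 12:
    print(f"Found 12 at index {index}")  # Found 12 at index 3
```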
|
| 347 |
+
|
| 348 |
+
Graph Traversal Algorithms: Graphs are versatile data structures used in a range of applications, from social networks to route planning. Depth-First Search (DFS) and Breadth-First Search (BFS) are fundamental graph traversal algorithms that enable exploration of graph nodes and edges.
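As a minimal sketch of one of these traversals (the adjacency-list graph below is invented for illustration), a breadth-first search can be written with `collections.deque`:

```python
from collections import deque

def bfs(graph, start):
    # graph is an adjacency list: {node: [neighbors, ...]}
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```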
|
| 349 |
+
|
| 350 |
+
Standard Library Offerings and External Packages:
|
| 351 |
+
|
| 352 |
+
Python's standard library and external packages offer a wealth of resources for implementing advanced data structures and algorithms. The `collections` module, for instance, provides specialized container datatypes like `Counter`, which efficiently counts occurrences of items, and `deque`, a double-ended queue for fast appends and pops.
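A short illustration of these two containers (the sample data is arbitrary):

```python
from collections import Counter, deque

# Counter: count occurrences of items efficiently
words = ["spam", "eggs", "spam", "ham", "spam"]
print(Counter(words).most_common(1))  # [('spam', 3)]

# deque: O(1) appends and pops at both ends, here used as a fixed-size buffer
buffer = deque(maxlen=3)
for i in range(5):
    buffer.append(i)
print(buffer)  # deque([2, 3, 4], maxlen=3)
```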
|
| 353 |
+
|
| 354 |
+
Furthermore, external packages like NumPy and SciPy are indispensable for scientific computing tasks. NumPy provides array objects that enable efficient mathematical operations on large datasets, while SciPy extends these capabilities to include optimization, signal processing, and more.
|
| 355 |
+
|
| 356 |
+
Code Example: Implementing a Binary Search
|
| 357 |
+
|
| 358 |
+
Below is a Python code snippet demonstrating the implementation of a binary search algorithm:
|
| 359 |
+
|
| 360 |
+
```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Element not found

# Example usage
sorted_array = [2, 5, 8, 12, 16, 23, 38, 45, 56, 72]
target_element = 23
index = binary_search(sorted_array, target_element)
if index != -1:
    print(f"Element {target_element} found at index {index}")
else:
    print("Element not found")
```
|
| 379 |
+
|
| 380 |
+
This code snippet demonstrates the efficient binary search algorithm, which drastically reduces search times compared to linear search for large datasets.
|
| 381 |
+
|
| 382 |
+
Incorporating advanced data structures and algorithms into your Python projects equips you with the tools to tackle intricate problems and optimize code performance. The diverse offerings of the standard library and external packages further amplify the capabilities of your programming arsenal.
|
| 383 |
+
|
| 384 |
+
(Note: The code snippet above demonstrates the binary search algorithm and its usage, showcasing how an advanced algorithm can be implemented in Python.)
|
| 385 |
+
|
| 386 |
+
Working with C Extensions:
|
| 387 |
+
|
| 388 |
+
In the pursuit of optimizing performance, the integration of C and Python offers a compelling avenue for developers. By seamlessly blending Python's high-level features with C's low-level capabilities, developers can achieve significant performance improvements in critical sections of their code.
|
| 392 |
+
|
| 393 |
+
Utilizing Python's C API:
|
| 394 |
+
Python's C API provides a bridge between the Python interpreter and C code, enabling developers to create C extensions that can be seamlessly imported and used within Python programs. This API exposes a range of functions and macros that allow C code to interact with Python objects, call Python functions, and manipulate data structures.
|
| 395 |
+
|
| 396 |
+
|
| 397 |
+
```c
#include <Python.h>

static PyObject *example_function(PyObject *self, PyObject *args) {
    // C code implementation
    // ...
    int result = 0;  // placeholder result value
    return Py_BuildValue("i", result);
}

static PyMethodDef methods[] = {
    {"example_function", example_function, METH_VARARGS, "Example function"},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef module = {
    PyModuleDef_HEAD_INIT,
    "example_module",
    NULL,
    -1,
    methods
};

PyMODINIT_FUNC PyInit_example_module(void) {
    return PyModule_Create(&module);
}
```
|
| 421 |
+
|
| 422 |
+
Pros and Cons of Incorporating Low-Level C Code:
|
| 423 |
+
|
| 424 |
+
Pros:
|
| 425 |
+
Performance Boost: One of the primary advantages of using C extensions is the potential for significant performance improvements. Since C is a compiled language and operates at a lower level than Python, computationally intensive tasks can be executed much faster.
|
| 426 |
+
|
| 427 |
+
Access to C Libraries: By integrating C code, developers can tap into a wide range of existing C libraries for specialized tasks, such as numerical computations, cryptography, and image processing.
|
| 428 |
+
|
| 429 |
+
Fine-grained Control: C extensions offer developers fine-grained control over memory management and resource allocation, allowing for efficient utilization of system resources.
|
| 430 |
+
|
| 431 |
+
Cons:
|
| 432 |
+
Complexity: Writing C extensions requires a strong understanding of both Python and C, making it more complex than writing pure Python code.
|
| 433 |
+
|
| 434 |
+
Potential for Bugs: Due to the lower-level nature of C, there's an increased risk of memory leaks, buffer overflows, and other low-level bugs that can be hard to debug.
|
| 435 |
+
|
| 436 |
+
Portability Concerns: C extensions might not be as portable as pure Python code, as they are closely tied to the underlying system architecture and might require recompilation for different platforms.
|
| 437 |
+
|
| 438 |
+
Integrating C extensions into Python applications presents a trade-off between performance gains and increased complexity. By judiciously applying C extensions to critical sections of code, developers can achieve substantial speed improvements while being mindful of potential challenges related to debugging, maintenance, and portability. Careful consideration of the pros and cons will help developers make informed decisions when opting for this approach.
|
| 439 |
+
|
| 440 |
+
Real-world Applications and Case Studies:
|
| 441 |
+
|
| 442 |
+
In this section, we delve into real-world applications where advanced Python concepts shine across diverse domains, illustrating the language's adaptability and power. We present case studies from web development, scientific computing, and machine learning, showcasing how these advanced techniques are employed to create impactful solutions.
|
| 443 |
+
|
| 444 |
+
|
| 445 |
+
Web Development:
|
| 446 |
+
Advanced Python programming plays a pivotal role in modern web development, enabling developers to build dynamic, responsive, and scalable web applications. A notable case study is the use of the Flask microframework. Flask leverages Python's simplicity and flexibility to create web applications with minimal overhead. Below is an example of a basic Flask application:
|
| 447 |
+
|
| 448 |
+
```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()
```
|
| 453 |
+
|
| 454 |
+
Scientific Computing:
|
| 455 |
+
Python's rich ecosystem of libraries makes it a prominent choice for scientific computing. The NumPy library, for instance, provides support for arrays and mathematical functions, crucial for data manipulation and analysis. Let's consider a snippet demonstrating NumPy's power in matrix multiplication:
|
| 456 |
+
|
| 457 |
+
```python
import numpy as np

matrix_a = np.array([[1, 2], [3, 4]])
matrix_b = np.array([[5, 6], [7, 8]])

result_matrix = np.dot(matrix_a, matrix_b)
print(result_matrix)
```
|
| 462 |
+
|
| 463 |
+
|
| 464 |
+
Machine Learning:
|
| 465 |
+
Advanced Python programming is pivotal in the field of machine learning, enabling the implementation of intricate algorithms and models. Scikit-learn, a widely used machine learning library, showcases Python's capabilities. Here, we illustrate the application of Scikit-learn's support vector machine (SVM) for classification:
|
| 466 |
+
|
| 467 |
+
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize SVM classifier
svm_classifier = SVC(kernel='linear')

# Train the classifier
svm_classifier.fit(X_train, y_train)

# Make predictions
predictions = svm_classifier.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, predictions)
print(f"Accuracy: {accuracy}")
```
|
| 490 |
+
|
| 491 |
+
These case studies highlight just a fraction of the versatile applications of advanced Python programming in various domains. The examples demonstrate Python's integral role in web development, scientific computing, and machine learning. Through Flask, NumPy, and Scikit-learn, Python empowers developers to create innovative solutions with efficiency and efficacy.
|
| 492 |
+
|
| 493 |
+
Table 6 summarizes the case studies' key takeaways, showcasing how advanced Python concepts manifest in practical scenarios:
|
| 494 |
+
|
| 495 |
+
|
| 496 |
+
Through these illustrative cases, we underline the significance of advanced Python programming as a catalyst for innovation across industries. The language's versatility and capabilities continue to drive groundbreaking solutions that address complex challenges.
|
| 497 |
+
|
| 498 |
+
Future Trends and Best Practices:
|
| 499 |
+
|
| 500 |
+
Python's Evolution in the Context of Advanced Programming:
|
| 501 |
+
Python, renowned for its simplicity and readability, continues to evolve, accommodating the demands of modern programming paradigms. With the advent of Python 4.0 on the horizon, several trends are anticipated to shape its trajectory in the realm of advanced programming:
|
| 502 |
+
|
| 503 |
+
Performance Enhancements: Python's performance has been a point of contention, particularly in high-performance computing and data-intensive applications. Python 4.0 is expected to make strides in addressing these concerns, potentially incorporating improvements in runtime speed and memory efficiency.
|
| 505 |
+
|
| 506 |
+
Concurrency and Parallelism Improvements: While Python has made progress in concurrent programming with features like `asyncio` and the `concurrent.futures` module, Python 4.0 may further enhance support for parallelism, making it more competitive in multi-core and distributed computing scenarios.
|
| 507 |
+
|
| 508 |
+
|
| 509 |
+
|
| 510 |
+
Type System Refinements: Python's gradual move toward a more robust type system, through tools like Type Hints and the `typing` module, is likely to continue. Python 4.0 could introduce additional features to strengthen static typing, aiding code correctness and maintainability.
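For context, a minimal sketch of the gradual typing already available today (the function and data are invented; the built-in `dict[int, str]` generic syntax requires Python 3.9 or later):

```python
from typing import Optional

def find_user(user_id: int, users: dict[int, str]) -> Optional[str]:
    # Returns the user's name, or None when the id is unknown
    return users.get(user_id)

print(find_user(1, {1: "Alice", 2: "Bob"}))  # Alice
print(find_user(3, {1: "Alice", 2: "Bob"}))  # None
```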
|
| 511 |
+
|
| 512 |
+
AI and Machine Learning Integration: Python has already established itself as a staple in the AI and machine learning domains, courtesy of libraries like TensorFlow, PyTorch, and scikit-learn. Python 4.0 might foster tighter integration with these technologies, simplifying their usage and enhancing performance.
|
| 513 |
+
|
| 514 |
+
Enhanced Metaprogramming Capabilities: Building upon Python's existing metaprogramming capabilities, Python 4.0 may introduce more intuitive ways to manipulate and generate code, empowering developers to achieve even greater levels of code customization.
|
| 515 |
+
Best Practices for Maintainable and Robust Code:
|
| 516 |
+
While Python's evolution promises exciting opportunities, adhering to best practices remains crucial for creating code that is both maintainable and robust:
|
| 517 |
+
|
| 518 |
+
Code Readability: Python's hallmark is its readability. Follow the PEP 8 style guide to ensure consistent formatting and clear, understandable code. Meaningful variable and function names, along with concise comments, enhance code comprehension.
|
| 519 |
+
|
| 520 |
+
Modular Design: Break down complex problems into smaller, manageable modules. This promotes code reuse and easier maintenance. Utilize classes, functions, and modules to encapsulate logic effectively.
|
| 521 |
+
|
| 522 |
+
Testing and Documentation: Write unit tests using frameworks like `unittest` or `pytest` to validate your code's behavior. Comprehensive documentation, generated using tools like Sphinx, aids other developers in understanding and using your code.
|
| 524 |
+
|
| 525 |
+
Version Control: Employ version control systems like Git to track changes, collaborate seamlessly, and revert to previous states if needed.
|
| 526 |
+
|
| 527 |
+
Security Considerations: Stay informed about potential security vulnerabilities in third-party libraries you use. Regularly update dependencies to ensure your code is protected against known exploits.
|
| 528 |
+
|
| 529 |
+
In conclusion, Python's journey in advanced programming is marked by continuous enhancement and adaptation to modern programming paradigms. By embracing best practices, developers can create code that not only leverages Python's advanced features but also maintains readability, modularity, and robustness in the face of evolving industry demands.
|
| 530 |
+
|
| 531 |
+
|
| 532 |
+
Table 7: Recommended Best Practices
|
| 533 |
+
|
| 534 |
+
|
| 535 |
+
By embracing these practices and staying attuned to Python's evolution, developers can navigate the dynamic landscape of advanced programming, ensuring the creation of robust, maintainable, and future-proof code.
|
| 536 |
+
|
| 537 |
+
Code 9.1: Example Unit Test using `unittest`
|
| 538 |
+
```python
import unittest

def add(a, b):
    return a + b

class TestAddFunction(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(3, 5), 8)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -7), -9)

if __name__ == '__main__':
    unittest.main()
```
|
| 545 |
+
|
| 546 |
+
In this example, we demonstrate a unit test for the `add` function using the `unittest` framework. Such tests help ensure that the code functions as expected and guards against unintended regressions.
|
| 547 |
+
|
| 548 |
+
|
| 549 |
+
Conclusion:
|
| 550 |
+
|
| 551 |
+
In conclusion, this research paper has delved into the multifaceted realm of advanced Python programming, unraveling a tapestry of techniques that empower seasoned developers to craft code that is not only elegant but also highly efficient. By traversing topics ranging from metaprogramming and concurrency to performance optimization and C extensions, we have highlighted the dynamic spectrum of possibilities that await those who seek to transcend basic proficiency in Python.
|
| 552 |
+
|
| 553 |
+
One of the fundamental takeaways from this exploration is the pivotal role of metaprogramming and reflection in Python's versatility. The ability to introspect and manipulate code grants developers the means to create adaptable and extensible solutions. Furthermore, our discussion on concurrency and parallelism underscored the importance of overcoming the challenges posed by the Global Interpreter Lock (GIL), as developers increasingly navigate the landscape of multicore processors.
|
| 554 |
+
|
| 555 |
+
Decorators and higher-order functions, as showcased in this research, serve as potent tools for enhancing code modularity and expressiveness. By abstracting repetitive tasks and promoting reusability, these concepts elevate the readability and maintainability of Python codebases.
|
| 556 |
+
|
| 557 |
+
The pursuit of performance optimization, elucidated in this paper, involves a careful balance between algorithmic efficiency and implementation intricacies. Profiling, benchmarking, and optimization techniques enable developers to pinpoint bottlenecks and enhance application speed, catering to modern demands for responsiveness and scalability.
|
| 558 |
+
|
| 559 |
+
Real-world applications and case studies have demonstrated the real impact of advanced Python programming across diverse industries. From web development frameworks to scientific computing libraries and machine learning ecosystems, the integration of advanced techniques enriches the development landscape, fostering innovation and efficiency.
|
| 560 |
+
|
| 561 |
+
As we look ahead, Python's evolution is poised to continue, offering more sophisticated tools and language features. We encourage developers to embrace a mindset of perpetual exploration and learning. By immersing themselves in the advanced techniques discussed in this paper and keeping abreast of emerging trends, developers can not only elevate their skill set but also contribute to the continued advancement of the Python programming language.
|
| 562 |
+
|
| 563 |
+
|
| 564 |
+
References:
|
| 565 |
+
|
| 566 |
+
Lutz, M. (2019). "Learning Python." O'Reilly Media.
|
| 567 |
+
|
| 568 |
+
Beazley, D. M. (2019). "Python Essential Reference." Addison-Wesley Professional.
|
| 569 |
+
|
| 570 |
+
McKinney, W. (2017). "Python for Data Analysis." O'Reilly Media.
|
| 571 |
+
|
| 572 |
+
Reitz, K. (2020). "Requests: HTTP for Humans™." Retrieved from https://requests.readthedocs.io/en/latest/
|
| 573 |
+
|
| 574 |
+
van Rossum, G., & Drake Jr, F. L. (2009). "Python tutorial." Centrum Wiskunde & Informatica (CWI).
|
| 575 |
+
|
| 576 |
+
Brownlee, J. (2019). "Master Machine Learning Algorithms." Machine Learning Mastery.
|
| 577 |
+
|
| 578 |
+
Ramalho, L. (2015). "Fluent Python." O'Reilly Media.
|
| 579 |
+
|
| 580 |
+
VanderPlas, J. (2016). "Python Data Science Handbook." O'Reilly Media.
|
| 581 |
+
|
| 582 |
+
Pérez, F., & Granger, B. E. (2007). "IPython: a system for interactive scientific computing." Computing in Science & Engineering, 9(3), 21-29.
|
| 583 |
+
|
| 584 |
+
Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., ... & Oliphant, T. E. (2020). "Array programming with NumPy." Nature, 585(7825), 357-362.
|
| 585 |
+
|
| 586 |
+
McKinney, W., & Others. (2010). "Data structures for statistical computing in python." In Proceedings of the 9th Python in Science Conference (Vol. 445, p. 51).
|
| 587 |
+
|
| 588 |
+
Perez, F., & Granger, B. (2015). "Project Jupyter: Computational narratives as the engine of collaborative data science." In Proceedings of the 20th international conference on Intelligent User Interfaces.
|
| 589 |
+
|
| 590 |
+
|
| 591 |
+
|
| 592 |
+
|
| 593 |
+
|
| 594 |
+
|
| 595 |
+
|
| 596 |
+
|
| 597 |
+
|
| 598 |
+
|
| 599 |
+
|
| 600 |
+
|
advance-core-python-programming-begin-your-journey-to-master-the-world-of-python-english-edition-9390684064-9789390684069_compress701.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
book979.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
by-Dipanjan-Sarkar-Text-Analytics-with-Python310.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
mastering-machine-learning-with-python-in-six-steps602.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
phI2uN2OS67XXcWCTBLosOyxvGdtaQ39sMlTbJe6289.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
practical-machine-learning-python-problem-solvers898.txt
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|