| markdown (string, 0-37k chars) | code (string, 1-33.3k chars) | path (string, 8-215 chars) | repo_name (string, 6-77 chars) | license (string, 15 classes) |
|---|---|---|---|---|
feature_names contains the names of the feature columns.
|
print(iris_data['feature_names'])
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
target_names, despite the name, does not contain the names of the target columns. There is only one column of targets.
Instead, target_names holds the human-readable names of the classes in the target list within the bunch. In this case, target_names contains the names of the three species of iris in this dataset.
|
print(iris_data['target_names'])
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We can now examine target and see that it contains zeros, ones, and twos. These correspond to the target names 'setosa', 'versicolor', and 'virginica'.
|
print(iris_data['target'])
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Last, we'll look at the data within the bunch. The data is an array of arrays. Each sub-array contains four values. These values match up with the feature_names. The first item in each sub-array is 'sepal length (cm)', the next is 'sepal width (cm)', and so on.
|
iris_data['data']
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The number of target values should always equal the number of rows in the data.
|
print(len(iris_data['data']))
print(len(iris_data['target']))
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Bunch objects are an adequate container for data. They can be used directly to feed models. However, Bunch objects are not very good for analyzing and manipulating your data.
In this course, we will typically convert Bunch objects into Pandas DataFrame objects to make analysis, data cleaning, visualization, and train/test splitting easier.
To do this, we will take the matrix of feature data and append the target data to it to create a single matrix of data. We also take the list of feature names and append the word 'species' to represent the target classes in the matrix.
|
import pandas as pd
import numpy as np
iris_df = pd.DataFrame(
data=np.append(
iris_data['data'],
np.array(iris_data['target']).reshape(len(iris_data['target']), 1),
axis=1),
columns=np.append(iris_data['feature_names'], ['species'])
)
iris_df.sample(n=10)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
You might notice that the integer representation of species got converted to a floating point number along the way. We can change that back.
|
iris_df['species'] = iris_df['species'].astype('int64')
iris_df.sample(n=10)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Exercise 1
Load the Boston house price dataset into a Pandas DataFrame called boston_df. Append the target values as the last column of the DataFrame, and name that column 'PRICE'.
Student Solution
|
# Your answer goes here
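# One possible solution (a sketch, not an official answer key). Note that
# load_boston was deprecated and then removed in scikit-learn 1.2, so this
# assumes an older scikit-learn version.
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston

boston_data = load_boston()
boston_df = pd.DataFrame(
    data=np.append(
        boston_data['data'],
        np.array(boston_data['target']).reshape(len(boston_data['target']), 1),
        axis=1),
    columns=np.append(boston_data['feature_names'], ['PRICE'])
)
boston_df.sample(n=10)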
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Fetching
Fetching is similar to loading. Scikit-learn will first see if it can find the dataset locally, and, if so, it will simply load the data. Otherwise, it will attempt to pull the data from the internet.
We can see fetching in action with the fetch_california_housing function below.
|
from sklearn.datasets import fetch_california_housing
housing_data = fetch_california_housing()
type(housing_data)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The dataset is once again a Bunch.
If you follow the link to the fetch_california_housing documentation, you'll notice that this is a regression dataset, as opposed to the iris dataset, which was a classification dataset.
We can see the difference in the dataset by checking out the attributes of the Bunch.
|
dir(housing_data)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We see that four of the attributes that we expect are present, but 'target_names' is missing. This is because our target is now a continuous variable (home price) and not a discrete value (iris species).
|
print(housing_data['target'][:10])
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Converting a Bunch of regression data to a DataFrame is no different than for a Bunch of classification data.
|
import pandas as pd
import numpy as np
housing_df = pd.DataFrame(
data=np.append(
housing_data['data'],
np.array(housing_data['target']).reshape(len(housing_data['target']), 1),
axis=1),
columns=np.append(housing_data['feature_names'], ['price'])
)
housing_df.sample(n=10)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Generating
In the example datasets we've seen so far in this Colab, the data is static and loaded from a file. Sometimes it makes more sense to generate a dataset. For this, we can use one of the many generator functions.
make_regression is a generator that creates a dataset with an underlying regression that you can then attempt to discover using various machine learning models.
In the example below, we create a dataset with 10 data points. For the sake of visualization, we have only one feature per datapoint, but we could ask for more.
The return values are the $X$ and $y$ values for the regression. $X$ is a matrix of features. $y$ is a list of targets.
Since a generator uses randomness to generate data, we are going to set a random_state in this Colab for reproducibility. This ensures we get the same result every time we run the code. You won't do this in your production code.
|
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42)
features, targets
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We can use a visualization library to plot the regression data.
|
import matplotlib.pyplot as plt
plt.plot(features, targets, 'b.')
plt.show()
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
That data appears to have a very linear pattern!
If we want to make it more realistic (non-linear), we can add some noise during data generation.
Remember that random_state is for reproducibility only. Don't use this in your code unless you have a good reason to.
|
from sklearn.datasets import make_regression
features, targets = make_regression(n_samples=10, n_features=1, random_state=42, noise=5.0)
plt.plot(features, targets, 'b.')
plt.show()
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
There are dozens of dataset loaders and generators in the scikit-learn datasets package. When you want to play with a new machine learning algorithm, they are a great source of data for getting started.
Exercise 2
Search the scikit-learn datasets documentation and find a function to make a "Moons" dataset. Create a dataset with 75 samples. Use a random state of 42 and a noise of 0.08. Store the $X$ return value in a variable called features and the $y$ return value in a variable called targets.
Student Solution
|
# Your answer goes here
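# One possible solution (a sketch): make_moons lives in sklearn.datasets.
from sklearn.datasets import make_moons

features, targets = make_moons(n_samples=75, noise=0.08, random_state=42)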
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Exercise 3
In Exercise 2, you created a "moons" dataset. In that dataset, the features are $(x,y)$-coordinates that can be graphed in a scatterplot. The targets are zeros and ones that represent a binary classification.
Use matplotlib's scatter method to visualize the data as a scatterplot. Use the c argument to make the dots for each class a different color.
Student Solution
|
# Your answer goes here
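# One possible solution (a sketch), assuming the features and targets
# variables created in Exercise 2.
import matplotlib.pyplot as plt

plt.scatter(features[:, 0], features[:, 1], c=targets)
plt.show()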
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Models
Machine learning involves training a model to gain insight and predictive power from a dataset. Scikit-learn has support for many different types of models, ranging from classic algebraic models to more modern deep learning models.
Throughout the remainder of this course, you will learn about many of these models in much more depth. This section will walk you through some of the overarching concepts across all models.
Estimators
Most of the models in scikit-learn are considered estimators. An estimator is expected to implement two methods: fit and predict.
fit is used to train the model. At a minimum, it is passed the feature data used to train the model. In supervised models, it is also passed the target data.
predict is used to get predictions from the model. This method is passed features and returns target predictions.
Let's see an example of this in action.
Linear regression is a simple model that you might have encountered in a statistics class. The model attempts to draw a straight line through a set of data points such that the line is as close as possible to all of the points.
We'll use scikit-learn's LinearRegression class to fit a line to the regression data that we generated earlier in this Colab. To do that, we simply call the fit(features, targets) method.
After fitting, we can ask the model for predictions. To do this, we use the predict(features) method.
|
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
regression = LinearRegression()
regression.fit(features, targets)
predictions = regression.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
At this point, don't worry too much about the details of what LinearRegression is doing. There is a deep-dive into regression problems coming up soon.
For now, just note the fit/predict pattern for training estimators, and know that you'll see it throughout our adventures with scikit-learn.
Transformers
In practice, it is rare that you will get perfectly clean data that is ready to feed into your model for training. Most of the time, you will need to perform some type of cleaning on the data first.
You've had some hands-on experience doing this in our Pandas Colabs. Scikit-learn can also be used to perform some data preprocessing.
Transformers are spread throughout the scikit-learn library. Some are in the preprocessing module, while others are in more specialized packages like compose, feature_extraction, impute, and others.
All transformers implement the fit and transform methods. The fit method calculates parameters necessary to perform the data transformation. The transform method actually applies the transformation. There is a convenience fit_transform method that performs both fitting and transformation in one method call.
Let's see a transformer in action.
We will use the MinMaxScaler to scale our feature data to the range zero to one. It applies a linear transform so that the minimum value in the data becomes 0 and the maximum becomes 1.
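Concretely, for each feature the transform is $x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$, where $x_{\min}$ and $x_{\max}$ are the smallest and largest values of that feature in the fitted data.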
Looking at our feature data pre-transformation, we can see values that are below zero and above one.
|
features
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We will now create a MinMaxScaler and fit it to our feature data.
Each transformer has different information that it needs in order to perform a transformation. In the case of the MinMaxScaler, the smallest and largest values in the data are needed.
|
from sklearn.preprocessing import MinMaxScaler
transformer = MinMaxScaler()
transformer.fit(features)
transformer.data_min_, transformer.data_max_
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
You might notice that the values are stored in arrays. This is because transformers can operate on more than one feature. In this case, however, we have only one.
Next, we need to apply the transformation to our features. After the transformation, all of the features fall within the range zero to one. Moreover, the minimum and maximum values in the untransformed features array correspond to 0 and 1 in the transformed array, respectively.
|
features = transformer.transform(features)
features
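# Note: the two steps can be combined into one call. On the raw (untransformed)
# features, MinMaxScaler().fit_transform(raw_features) would produce the same result.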
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Pipelines
A pipeline is simply a series of transformers, often with an estimator at the end.
In the example below, we use a Pipeline class to perform min-max scaling of our feature data and then train a linear regression model using the scaled features.
|
from sklearn.pipeline import Pipeline
features, targets = make_regression(
n_samples=10, n_features=1, random_state=42, noise=5.0)
pipeline = Pipeline([
('scale', MinMaxScaler()),
('regression', LinearRegression())
])
pipeline.fit(features, targets)
predictions = pipeline.predict(features)
plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Metrics
So far we have seen ways that scikit-learn can help you get data, modify that data, train a model, and finally, make predictions. But how do we know how good these predictions are?
Scikit-learn also comes with many functions for measuring model performance in the metrics package. Later in this course, you will learn about different ways to measure the performance of regression and classification models, as well as tradeoffs between the different metrics.
We can use the mean_squared_error function to find the mean squared error (MSE) between the target values that we used to train our linear regression model and the predicted values.
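For reference, over $n$ examples the metric is $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, where $y_i$ are the true targets and $\hat{y}_i$ the predictions.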
|
from sklearn.metrics import mean_squared_error
mean_squared_error(targets, predictions)
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
In this case, the MSE value alone doesn't have much meaning. Since the data that we fit the regression to isn't related to any real-world metrics, the MSE is hard to interpret alone.
As we learn more about machine learning and begin training models on real data, you'll learn how to interpret MSE and other metrics in the context of the data being analyzed and the problem being solved.
There are also metrics that come with each estimator class. These metrics can be extracted using the score method.
The regression class we created earlier can be scored, as can the pipeline.
|
print(regression.score(features, targets))
print(pipeline.score(features, targets))
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The return value of the score method depends on the estimator being used. In the case of LinearRegression, the score is the $R^2$ score, where scores closer to 1.0 are better. You can find the metric that score returns in the documentation for the given estimator you're using.
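For reference, $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$, so a model that always predicted the mean target would score 0.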
Exercise 4
Use the Pipeline class to combine a data pre-processor and an estimator.
To accomplish this:
Find a preprocessor that uses the max absolute value for scaling.
Find a linear_model based on the Huber algorithm.
Combine this preprocessor and estimator into a pipeline.
Make a sample regression dataset with 200 samples and 1 feature. Use a random state of 85 and a noise of 5.0. Save the features in a variable called features and the targets in a variable called targets.
Fit the model.
Using the features that were created when the regression dataset was created, make predictions with the model and save them into a variable called predictions.
Plot the features and targets used to train the model on a scatterplot with blue dots.
Plot the features and predictions over the scatterplot as a red line.
Student Solution
|
# Your answer goes here
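# One possible solution (a sketch). MaxAbsScaler (sklearn.preprocessing) scales
# by the maximum absolute value, and HuberRegressor (sklearn.linear_model) is
# the Huber-loss linear model.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import HuberRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MaxAbsScaler

features, targets = make_regression(
    n_samples=200, n_features=1, random_state=85, noise=5.0)

pipeline = Pipeline([
    ('scale', MaxAbsScaler()),
    ('regression', HuberRegressor())
])
pipeline.fit(features, targets)
predictions = pipeline.predict(features)

plt.plot(features, targets, 'b.')
plt.plot(features, predictions, 'r-')
plt.show()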
|
content/03_regression/01_introduction_to_sklearn/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Distributed training with TensorFlow
Overview
tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines, or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind:
Easy to use and support multiple user segments, including researchers, ML engineers, etc.
Provide good performance out of the box.
Easy switching between strategies.
tf.distribute.Strategy can be used with a high-level API like Keras, and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).
In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy supports both of these modes of execution, but works best with tf.function. Eager mode is only recommended for debugging purposes and is not supported for TPUStrategy. Although training is the focus of this guide, this API can also be used for distributing evaluation and prediction on different platforms.
You can use tf.distribute.Strategy with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, optimizers, metrics, summaries, and checkpoints.
In this guide, we will explain the various types of strategies and how you can use them in different situations.
Note: For a deeper understanding of the concepts, please watch this deep-dive presentation. This is especially recommended if you plan to write your own training loop.
|
# Import TensorFlow
import tensorflow as tf
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Types of strategies
tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, and gradients are aggregated at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce, and async training via a parameter server architecture.
Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or onto Cloud TPUs.
In order to support these use cases, there are six strategies available. The next section explains which of these are supported in which scenarios in TF 2.2 at this time. Here is a quick overview:
Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy
:-- | :-- | :-- | :-- | :-- | :--
Keras API | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3
Custom training loop | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3
Estimator API | Limited support | Not supported | Limited support | Limited support | Limited support
Note: Experimental support means the APIs are not covered by any compatibility guarantees.
Note: Estimator support is limited. Basic training and evaluation are experimental, and advanced features (such as scaffold) are not implemented. We recommend using Keras or custom training loops if a use case is not covered.
MirroredStrategy
tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called a MirroredVariable. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices. All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device. It's a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, NVIDIA NCCL is used as the all-reduce implementation. You can choose from a few other options we provide, or write your own.
Here is the simplest way of creating MirroredStrategy:
|
mirrored_strategy = tf.distribute.MirroredStrategy()
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This creates a MirroredStrategy instance that uses all the GPUs visible to TensorFlow, and NCCL for cross-device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
|
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
If you wish to override the cross-device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce, which is the default.
|
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
TPUStrategy
tf.distribute.experimental.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud, and Cloud TPU.
In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy: it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which is used in TPUStrategy.
Here is how you would instantiate TPUStrategy:
Note: To run this code in Colab, you should select TPU as the Colab runtime. See the TensorFlow TPU guide.

cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)

The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
To use this for Cloud TPUs, you must:
Specify the name of your TPU resource in the tpu argument.
Initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation. Initializing the TPU system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
MultiWorkerMirroredStrategy
tf.distribute.experimental.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to MirroredStrategy, it creates copies of all variables in the model on each device across all workers.
It uses CollectiveOps as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph that can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology, and tensor sizes.
It also implements additional performance optimizations, for example a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing a plugin architecture for it, so that in the future you will be able to plug in algorithms that are better tuned for your hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating MultiWorkerMirroredStrategy:
|
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
MultiWorkerMirroredStrategy currently allows you to choose between two different implementations of collective ops. CollectiveCommunication.RING implements ring-based collectives using gRPC as the communication layer. CollectiveCommunication.NCCL uses NVIDIA's NCCL to implement collectives. CollectiveCommunication.AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify it in the following way:
|
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
One of the key differences to get multi-worker training going, as compared to multi-GPU training, is the multi-worker setup. The TF_CONFIG environment variable is the standard way in TensorFlow to specify the cluster configuration for each worker that is part of the cluster. Learn more about setting up TF_CONFIG.
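As a sketch (the host names, ports, and two-worker layout here are hypothetical), TF_CONFIG is a JSON string describing the cluster and this worker's role in it:

import json
import os
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host1:12345', 'host2:23456']},
    'task': {'type': 'worker', 'index': 0}
})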
Note: This strategy is experimental, as we are currently improving it and making it work for more scenarios. Please expect the APIs to change in the future.
CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy does synchronous training as well. Variables are not mirrored; instead, they are placed on the CPU, and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create an instance of CentralStorageStrategy with:
|
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This creates a CentralStorageStrategy instance that will use all visible GPUs and the CPU. Updates to variables on replicas are aggregated before being applied to the variables.
Note: This strategy is experimental, as we are currently improving it and making it work for more scenarios. Please expect the APIs to change in the future.
ParameterServerStrategy
tf.distribute.experimental.ParameterServerStrategy supports parameter server training on multiple machines. In this setup, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
In terms of code, it looks similar to the other strategies:
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
For multi-worker training, TF_CONFIG needs to specify the configuration of the parameter servers and workers in the cluster; you can read more about it in the TF_CONFIG section below.
Note: This strategy only works with the Estimator API.
Other strategies
In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using tf.distribute APIs.
Default Strategy
The Default Strategy is the distribution strategy that is present when no explicit distribution strategy is in scope. It implements the tf.distribute.Strategy interface but is a pass-through and provides no actual distribution. For instance, strategy.run(fn) will simply call fn. Code written using this strategy behaves exactly like code written without any strategy. You can think of it as a "no-op" strategy.
The Default Strategy is a singleton, and one cannot create more instances of it. It can be obtained using tf.distribute.get_strategy() outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
|
default_strategy = tf.distribute.get_strategy()
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This strategy serves two main purposes:
It allows writing distribution-aware library code unconditionally. For example, in an optimizer we can call tf.distribute.get_strategy() and use the returned strategy to reduce gradients; it will always return a strategy object on which we can call the reduce API.
|
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None)  # reduce some values
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Similar to library code, it can be used to write end users' programs that work both with and without a distribution strategy, without requiring conditional logic. A sample code snippet illustrating this:
|
if tf.config.list_physical_devices('GPU'):
    strategy = tf.distribute.MirroredStrategy()
else:  # use the Default Strategy
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    # do something interesting
    print(tf.Variable(1.))
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
OneDeviceStrategy
tf.distribute.OneDeviceStrategy is a strategy that places all variables and computation on a single specified device.
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
This strategy is distinct from the Default Strategy in a number of ways. In the Default Strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using OneDeviceStrategy, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via OneDeviceStrategy.run will also be placed on the specified device.
Input distributed through this strategy will be prefetched to the specified device. In the Default Strategy, there is no input distribution.
Similar to the Default Strategy, this strategy can also be used to test your code before switching to other strategies that actually distribute across multiple devices/machines. This exercises the distribution strategy machinery somewhat more than the Default Strategy, but not to the full extent of using, say, MirroredStrategy or TPUStrategy. If you want code that behaves as if there were no strategy, use the Default Strategy.
So far we've discussed the different strategies available and how you can instantiate them. In the next few sections, we will discuss the different ways in which you can use them to distribute your training. We will show short code snippets in this guide, along with links to full tutorials that you can run end to end.
Using tf.distribute.Strategy with tf.keras.Model.fit
We've integrated tf.distribute.Strategy into tf.keras, TensorFlow's implementation of the Keras API specification. tf.keras is a high-level API for building and training models. With the strategy integrated into the tf.keras backend, you can seamlessly distribute training written in the Keras training framework using model.fit.
Here's what you need to change in your code:
Create an instance of the appropriate tf.distribute.Strategy.
Move the creation of the Keras model, optimizer, and metrics inside strategy.scope.
We support all types of Keras models: Sequential, Functional, and subclassed.
Here is a snippet of code that creates a very simple Keras model with one dense layer:
|
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(loss='mse', optimizer='sgd')
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
In this example we used MirroredStrategy, so we can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you normally would. MirroredStrategy takes care of replicating the model's training onto the available GPUs, aggregating gradients, and more.
|
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
We used a tf.data.Dataset here to provide the training and eval input. You can also use Numpy arrays:
|
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
In both cases (Dataset or Numpy), each batch of the given input is divided equally among the replicas. For instance, if you use MirroredStrategy with 2 GPUs, each batch of size 10 will be split between the 2 GPUs, with each receiving 5 input examples per step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
|
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
What's supported now
Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy
--- | --- | --- | --- | --- | ---
Keras API | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3
Examples and tutorials
Here is a list of tutorials and examples that illustrate the above integration end-to-end with Keras:
Tutorial to train MNIST with MirroredStrategy.
Tutorial to train MNIST using MultiWorkerMirroredStrategy.
Guide on training MNIST using TPUStrategy.
TensorFlow Model Garden repository containing collections of state-of-the-art models implemented using various strategies.
Using tf.distribute.Strategy with custom training loops
As you've seen, using tf.distribute.Strategy with Keras model.fit requires changing only a couple of lines of your code. With a little more effort, you can also use tf.distribute.Strategy with custom training loops.
If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, high-level frameworks are not very suitable for Reinforcement Learning training.
To support custom training loops, we provide a core set of methods via the tf.distribute.Strategy classes. Using these may require minor restructuring of your code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
|
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    optimizer = tf.keras.optimizers.SGD()
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Next, we create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
|
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Then, we define one step of the training. We use tf.GradientTape to compute gradients, and the optimizer to apply those gradients to update the model's variables. To distribute this training step, we put it in a function train_step and pass it, along with the dataset inputs obtained from the dist_dataset created before, to tf.distribute.Strategy.run:
|
loss_object = tf.keras.losses.BinaryCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        predictions = model(features, training=True)
        loss = compute_loss(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(dist_inputs):
    per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
    return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
                                    axis=None)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
A few other things to note in the code above:
We used tf.nn.compute_average_loss to compute the loss. tf.nn.compute_average_loss sums the per-example losses and divides the sum by the global_batch_size. This is important because later, after the gradients are calculated on each replica, they are aggregated across the replicas by summing them.
We used the tf.distribute.Strategy.reduce API to aggregate the results returned by tf.distribute.Strategy.run. tf.distribute.Strategy.run returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can reduce them to get an aggregated value. You can also do tf.distribute.Strategy.experimental_local_results to get the list of values contained in the result, one per local replica.
When apply_gradients is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum over all replicas of the gradients.
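To make the loss scaling concrete: each replica computes $\frac{1}{N}\sum_{i \in \text{local batch}} \ell_i$, where $N$ is the global batch size, so summing the per-replica losses (and, likewise, the gradients) across replicas recovers the true global-batch average. Dividing by the per-replica batch size instead would over-weight every example by the number of replicas.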
Finally, once we have defined the training step, we can iterate over dist_dataset and run the training in a loop:
|
for dist_inputs in dist_dataset:
    print(distributed_train_step(dist_inputs))
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
In the example above, we iterated over dist_dataset to provide input to the training. We also provide tf.distribute.Strategy.make_experimental_numpy_dataset to support Numpy inputs. You can use this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset. The above iteration would now be modified to first create an iterator and then explicitly call next on it to get the input data.
|
iterator = iter(dist_dataset)
for _ in range(10):
    print(distributed_train_step(next(iterator)))
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This covers the simplest case of using the tf.distribute.Strategy API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work to adapt your code, we will publish a separate detailed guide in the future.
What's supported now
Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy
:-- | :-- | :-- | :-- | :-- | :--
Custom training loop | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3
Examples and tutorials
Here are some examples of using distribution strategies with custom training loops:
Tutorial to train MNIST using MirroredStrategy.
Guide on training MNIST using TPUStrategy.
TensorFlow Model Garden repository containing collections of state-of-the-art models implemented using various strategies.
Using tf.distribute.Strategy with Estimator (limited support)
tf.estimator is a distributed-training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated tf.distribute.Strategy into tf.Estimator. If you're using Estimator for your training, you can easily change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. See the "What's supported now" section below for more details.
The usage of tf.distribute.Strategy with Estimator is slightly different than in the Keras case. Instead of using strategy.scope, now we pass the strategy object into the RunConfig for the Estimator.
Here is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy:
|
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
We use a premade Estimator here, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras, where we use the same strategy for both training and evaluation.
Now we can train and evaluate this Estimator with an input function:
|
def input_fn():
    dataset = tf.data.Dataset.from_tensors(({"feats": [1.]}, [1.]))
    return dataset.repeat(1000).batch(10)

regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
|
site/zh-cn/guide/distributed_training.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Exercise 1 - Using the update_with_media method, update the status using the fia.jpg image available in the class folder.
Post it with the status "Programação com Python e Twitter na FIA!"
|
# Post the image with the status text requested by the exercise.
retorno = api.update_with_media(filename='fia.jpg', status='Programação com Python e Twitter na FIA!')
print(retorno.text)
|
Python/2016-07-29/aula4-parte5-exercicios-Copy1.ipynb
|
rubensfernando/mba-analytics-big-data
|
mit
|
Exercise 2 - Save the return value of the tweet from the previous exercise and print the following information:
* tweet
* id
* created_at
* lang
* text
* user
* screen_name
* friends_count
* time_zone
Finally, delete the tweet using the destroy_status method.
|
print(retorno.id)
print(retorno.created_at)
print(retorno.lang)
print(retorno.text)
print(retorno.user.screen_name)
print(retorno.user.friends_count)
print(retorno.user.time_zone)
retornoDestroy = api.destroy_status(retorno.id)
|
Python/2016-07-29/aula4-parte5-exercicios-Copy1.ipynb
|
rubensfernando/mba-analytics-big-data
|
mit
|
Exercise 3 - Using the home_timeline() method, retrieve the 10 most recent tweets. For each of these tweets, print:
* the screen_name
* the text of the tweet
* the user's id
|
home = api.home_timeline(count=10)
for i, tweet in enumerate(home):
    print(tweet.user.screen_name)
    print(tweet.text)
    print(tweet.user.id)
    print('\n')
|
Python/2016-07-29/aula4-parte5-exercicios-Copy1.ipynb
|
rubensfernando/mba-analytics-big-data
|
mit
|
Exercise 4 - For each tweet from the previous exercise, use the user's id to print the text of the first 5 tweets of each of the 10 users (user_timeline).
|
# For each user in the home timeline, print the text of their 5 most recent tweets.
for tweet in home:
    for user_tweet in api.user_timeline(user_id=tweet.user.id, count=5):
        print(user_tweet.text)
|
Python/2016-07-29/aula4-parte5-exercicios-Copy1.ipynb
|
rubensfernando/mba-analytics-big-data
|
mit
|
Create Some Mock Data
First, let's create a default galaxy halo model:
|
import halomod

model = halomod.TracerHaloModel(
z=0.2,
transfer_model='EH',
rnum=30,
rmin=0.1,
rmax=30,
hod_model='Zehavi05',
hod_params={
"M_min": 12.0,
"M_1": 12.8,
'alpha': 1.05,
'central': True
},
dr_table=0.1,
dlnk=0.1,
dlog10m=0.05
)
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Now, let's create some mock data with some Gaussian noise:
|
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(1234)
mock_data = model.corr_auto_tracer + np.random.normal(scale = 0.1 * np.abs(model.corr_auto_tracer))
mock_ngal = model.mean_tracer_den
plt.plot(model.r, model.corr_auto_tracer)
plt.scatter(model.r, mock_data)
plt.xscale('log')
plt.yscale('log')
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Define a likelihood
Now let's define a likelihood function, based on some input model for $\xi(r)$. The likelihood is simply Gaussian, i.e. a $\chi^2$ likelihood.
|
from scipy.stats import norm

def chi_square(model, data, sigma):
    # Gaussian log-likelihood of the data given the model predictions and errors.
    return np.sum(norm.logpdf(data, loc=model, scale=sigma))
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Define an emcee-compatible likelihood function
Now we define a likelihood function for emcee. There is a bit more flexibility here, as this function needs to calculate priors on all input parameters, and handle exceptions as well. This means this function is rather specific to the problem at hand. We will define a very simple function, but one that is fairly general.
|
fiducial_model = model.clone()
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
First, we define a small utility function that will take a dictionary in which keys may be dot-paths, and converts it to a nested dictionary:
|
def flat_to_nested_dict(dct: dict) -> dict:
    """Convert a dct of key: value pairs into a nested dict.

    Keys that have dots in them indicate nested structure.
    """
    def key_to_dct(key, val, dct):
        if '.' in key:
            key, parts = key.split('.', maxsplit=1)
            if key not in dct:
                dct[key] = {}
            key_to_dct(parts, val, dct[key])
        else:
            dct[key] = val

    out = {}
    for k, v in dct.items():
        key_to_dct(k, v, out)
    return out
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
So, this will do the following:
|
flat_to_nested_dict(
{
'nested.key': 1,
'nested.key2': 2,
'non_nested': 3
}
)
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
This will enable us to pass a list of parameter names that we want updated, which could be parameters of nested models. This means our posterior function is fairly general, and can accept any model parameters to be updated:
|
def log_prob(param_values, param_names, data, model, bounds=None, derived=()):
    # Pack parameters into a dict
    params = dict(zip(param_names, param_values))

    # Allow for simple bounded flat priors.
    bounds = bounds or {}
    for key, val in params.items():
        bound = bounds.get(key, (-np.inf, np.inf))
        if not bound[0] < val < bound[1]:
            return (-np.inf,) + (None,)*len(derived)

    # Update the base model with all the parameters that are being constrained.
    params = flat_to_nested_dict(params)
    model.update(**params)

    ll = chi_square(model.corr_auto_tracer, data[0], 0.1 * np.abs(model.corr_auto_tracer))
    ll += chi_square(model.mean_tracer_den, data[1], 1e-4)

    if not np.isfinite(ll):
        return (-np.inf,) + (None,)*len(derived)

    derived = tuple(getattr(model, d) for d in derived)
    out = (ll,) + derived
    return out
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
We can test that the log_prob function works:
|
log_prob(
[12.0, 12.8, 1.05],
['hod_params.M_min', 'hod_params.M_1', 'hod_params.alpha'],
(mock_data, mock_ngal),
model,
derived=['satellite_fraction', 'mean_tracer_den']
)
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Notice the derived parameters: we can pass any quantity of the TracerHaloModel, and it will be stored on every iteration. Nice!
Run emcee
Let's run a simple fit in which we constrain the HOD parameters $M_{\rm min}$, $M_1$, and $\alpha$ (the parameters varied in our log_prob call). We use the popular emcee package, and pass in our log_prob function:
|
import emcee
from multiprocessing import Pool

backend = emcee.backends.HDFBackend("backend.h5")
backend.reset(100, 3)
blobs_dtype = [("sat_frac", float), ("tracer_den", float), ("bias_effective_tracer", float), ("corr_auto_tracer", (float, len(mock_data)))]
sampler = emcee.EnsembleSampler(
nwalkers = 100,
ndim = 3,
log_prob_fn = log_prob,
kwargs = {
'param_names': ['hod_params.M_min', 'hod_params.M_1', 'hod_params.alpha'],
'data': (mock_data, mock_ngal),
'model': model,
'derived': ['satellite_fraction', 'mean_tracer_den', 'bias_effective_tracer', 'corr_auto_tracer'],
},
pool = Pool(32),
blobs_dtype=blobs_dtype,
backend=backend
)
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
On the advice of the emcee documentation, we set up some initial positions of the walkers around the solution.
|
initialpos = np.array([
fiducial_model.hod.params['M_min'],
fiducial_model.hod.params['M_1'],
fiducial_model.hod.params['alpha']
]) + 1e-4 * np.random.normal(size=(sampler.nwalkers, sampler.ndim))
sampler.run_mcmc(initialpos, nsteps=10000, progress=True);
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Now we can plot the posterior in a corner plot, along with the derived parameters, and the true input values:
|
import corner

flatchain = sampler.get_chain(discard=500, thin=5, flat=True)
blobs = sampler.get_blobs(discard=500, thin=5, flat=True)
flatchain = np.hstack((
flatchain,
np.atleast_2d(blobs['sat_frac']).T,
np.atleast_2d(np.log10(blobs['tracer_den'])).T,
np.atleast_2d(blobs['bias_effective_tracer']).T
))
np.save('flatchain', flatchain)
corner.corner(
flatchain,
labels=[r'$M_{\rm min}$', '$M_1$', r'$\alpha$', r'$f_{\rm sat}$', r'$\log_{10}\bar{n}_g$',
r'$b_{\rm eff}$'],
quantiles=(0.16, 0.84),
show_titles=True,
#range=lim,
levels=(1-np.exp(-0.5),1-np.exp(-2),1-np.exp(-4)),
plot_datapoints=False,
plot_density=False,
fill_contours=True,
color="blue",
hist_kwargs={"color":"black"},
smooth=0.5,
smooth1d=0.5,
truths=[12., 12.8, 1.05, None, None, None],
truth_color='darkgray'
);
plt.savefig("default_corner.pdf")
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
And we're done! The posterior contains the truth to within 1-sigma.
Let's also plot the residuals:
|
xi_out = sampler.get_blobs(discard=500, thin=5, flat=True)['corr_auto_tracer']
quantiles = np.quantile(xi_out, [0.16, 0.50, 0.84], axis=0)
plt.scatter(model.r, mock_data / quantiles[1])
plt.errorbar(model.r, mock_data / quantiles[1], yerr = 0.2*np.abs(fiducial_model.corr_auto_tracer) / quantiles[1], fmt='none')
plt.fill_between(model.r, quantiles[0] / quantiles[1], quantiles[2]/ quantiles[1], alpha=0.3)
plt.xscale('log')
plt.xlabel("r [Mpc/$h$]", fontsize=14)
plt.ylabel(r"$\xi(r) / \hat{\xi}(r)$", fontsize=14);
plt.savefig("residuals.pdf")
|
docs/examples/fitting.ipynb
|
steven-murray/halomod
|
mit
|
Is it big enough?
Now we have all our data loaded into variable big_data, but can we really say it's Big Data?
|
print "We have {} posts".format(len(big_data))
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Wow! So data! Very big!
Seriously though... it's not big. In fact it's rather small. How small is small? Here's a clue...
|
import os
print "The source file is {} bytes. Pathetic.".format(os.stat(INPUT_FILE).st_size)
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
At the time this was written, the file was just about 3MB, and there were fewer than 2k posts... note that excludes comments made on posts, but still, this stuff is small. It is small enough that at no point do we need to do anything clever from a data indexing/caching/storage perspective, so to start we will take the simplistic but often appropriate approach of slicing and dicing our big_data object directly. Later on we'll get into pandas DataFrame objects.
Anyway, size doesn't matter. It's variety that counts.
Fields of gold
Now we know how many elements (rows I guess?) we have, but how much variety do we have in this data? One measure of this may be to look at the number of fields in each of those items:
|
import itertools
all_the_fields = set(itertools.chain.from_iterable(big_data))
print "We have {} different field names:".format(len(all_the_fields))
print all_the_fields
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Are we missing anything? A good way to sanity check things is to actually inspect the data, so let's look at a random item:
|
import random
import pprint
# re-run this as much as you like to inspect different items
pprint.pprint(random.choice(big_data))
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
From that you should be able to sense that we are missing some things - it isn't simply that there are some number of fields that describe each item, because some of those fields have data hierarchies beneath them, for example:
|
pprint.pprint(big_data[234])
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
From that we can see some fields have hierarchies within them. For example, likes holds a list of id dictionaries, which happen to be relatively trivial (names and ids... I wonder why Facebook didn't just post the id and make you look up the name?). The comments field is a bit more complex: it contains a list of dictionaries in which each field can itself be a dictionary. For instance, we can see that the second comment on that post tagged Teuku Faruq:
|
pprint.pprint(big_data[234]['comments'][0]['data'][1]['message_tags'])
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Data quality annoyances
Actually I'm not even sure why the comments field is a single entry list. Is that always the case?
|
set([len(data['comments']) for data in big_data if 'comments' in data])
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Apparently that's not always the case, sometimes there are 2 items in the list, let's see what that looks like...
|
multi_item_comment_lists = [data['comments'] for data in big_data if ('comments' in data) and (len(data['comments']) > 1)]
print len(multi_item_comment_lists)
pprint.pprint(multi_item_comment_lists[0])
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Skimming the above it looks as though very long comment threads are split into multiple "pages" in the comments list. This may be an artifact of the paging code in pull_feed.py, which is not ideal. At some point we may fix it there, but for the time being we'll just consider it a data quality inconvenience that we will have to deal with.
Here's a function to work around this annoyance:
|
def flatten_comments_pages(post):
    flattened_comments = []
    for page in post:
        flattened_comments += page['data']
    return flattened_comments

post_comments_paged = multi_item_comment_lists[0]
print "Post has {} comments".format(len(flatten_comments_pages(post_comments_paged)))
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Start plotting things already dammit
Now that we're counting comments, it's natural to ask: what does the number-of-comments-per-post distribution look like?
IMPORTANT NOTE: Beyond this point, we start to "follow the data" as we analyse things, and we do so in a time-relative way (e.g. comparing the last N days of posts to historical data). As Big Data Malaysia is a living breathing group, the data set is a living breathing thing, so things may change, and the conclusions informing the analysis here may suffer logic rot.
|
comments_threads = [data['comments'] for data in big_data if 'comments' in data]
count_of_posts_with_no_comments = len(big_data) - len(comments_threads)
comments_counts = [0] * count_of_posts_with_no_comments
comments_counts += [len(flatten_comments_pages(thread)) for thread in comments_threads]
import matplotlib.pyplot as plt
plt.hist(comments_counts, bins=max(comments_counts))
plt.title("Comments-per-post Histogram")
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
This sort of adds up intuitively; posts with long comment threads will be rare, though from experience with this forum it does not seem right to conclude that there is a lot of posting going on with no interaction... the community is a bit more engaged than that.
But since this is Facebook, comments aren't the only way of interacting with a post. There's also the wonderful 'Like'.
|
likes_threads = [data['likes']['data'] for data in big_data if 'likes' in data]
count_of_posts_with_no_likes = len(big_data) - len(likes_threads)
likes_counts = [0] * count_of_posts_with_no_likes
likes_counts += [len(thread) for thread in likes_threads]
plt.hist(likes_counts, bins=max(likes_counts))
plt.title("Likes-per-post Histogram")
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
Note that the above does not include Likes on Comments made on posts; only Likes made on posts themselves are counted.
While this paints the picture of a more engaged community, it still doesn't feel quite right. It seems unusual these days to find a post go by without a Like or two.
I have a hunch that the zero-like posts are skewed a bit to the earlier days of the group. To dig into that we'll need to start playing with timestamps. Personally I prefer to deal with time as UTC epoch seconds, and surprisingly it seems I need to write my own helper function for this.
|
import datetime
import dateutil.parser
import pytz

def epoch_utc_s(date_string):
    dt_local = dateutil.parser.parse(str(date_string))
    dt_utc = dt_local.astimezone(pytz.utc)
    nineteenseventy = datetime.datetime(1970, 1, 1)
    epoch_utc = dt_utc.replace(tzinfo=None) - nineteenseventy
    return int(epoch_utc.total_seconds())
posts_without_likes = [data for data in big_data if 'likes' not in data]
posts_with_likes = [data for data in big_data if 'likes' in data]
timestamps_of_posts_without_likes = [epoch_utc_s(post['created_time']) for post in posts_without_likes]
timestamps_of_posts_with_likes = [epoch_utc_s(post['created_time']) for post in posts_with_likes]
import numpy
median_epoch_liked = int(numpy.median(timestamps_of_posts_with_likes))
median_epoch_non_liked = int(numpy.median(timestamps_of_posts_without_likes))
print "Median timestamp of posts without likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_non_liked),
median_epoch_non_liked)
print "Median timestamp of posts with likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_liked),
median_epoch_liked)
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
In general it seems my hunch may have been right, but it will be clearer if we plot it.
|
plt.hist(timestamps_of_posts_without_likes, alpha=0.5, label='non-Liked posts')
plt.hist(timestamps_of_posts_with_likes, alpha=0.5, label='Liked posts')
plt.title("Liked vs non-Liked posts")
plt.xlabel("Time (epoch UTC s)")
plt.ylabel("Count of posts")
plt.legend(loc='upper left')
plt.show()
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
This is looking pretty legit now. We can see that lately there's been a significant uptick in the number of posts, and an uptick in the ratio of posts that receive at least one Like.
As another sanity check, we can revisit the Likes-per-post Histogram, but only include recent posts. While we're at it we might as well do the same for the Comments-per-post Histogram.
|
def less_than_n_days_ago(date_string, n):
    query_date = epoch_utc_s(date_string)
    cutoff = epoch_utc_s(datetime.datetime.now(pytz.utc) - datetime.timedelta(days=n))
    return query_date > cutoff
# try changing this variable then re-running this cell...
days_ago = 30
# create a slice of our big_data containing only posts created n days ago
recent_data = [data for data in big_data if less_than_n_days_ago(data['created_time'], days_ago)]
# plot the Likes-per-post Histogram for recent_data
recent_likes_threads = [data['likes']['data'] for data in recent_data if 'likes' in data]
recent_count_of_posts_with_no_likes = len(recent_data) - len(recent_likes_threads)
recent_likes_counts = [0] * recent_count_of_posts_with_no_likes
recent_likes_counts += [len(thread) for thread in recent_likes_threads]
plt.hist(recent_likes_counts, bins=max(recent_likes_counts))
plt.title("Likes-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
# plot the Comment-per-post Histogram for recent_data
recent_comments_threads = [data['comments'] for data in recent_data if 'comments' in data]
recent_count_of_posts_with_no_comments = len(recent_data) - len(recent_comments_threads)
recent_comments_counts = [0] * recent_count_of_posts_with_no_comments
recent_comments_counts += [len(flatten_comments_pages(thread)) for thread in recent_comments_threads]
plt.hist(recent_comments_counts, bins=max(recent_comments_counts))
plt.title("Comments-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
|
legacy/Getting Meta with Big Data Malaysia.ipynb
|
BigDataMalaysia/bigdatamy_fb_group_analysis
|
mit
|
build database
|
from __future__ import print_function
from itertools import (islice, izip)
import arrow
import github3
import requests
from sqlalchemy import or_
from github_settings import (ry_username, ry_password,
username, password,
# token,
GITENBERG_GITHUB_TOKEN,
GITENBERG_TRAVIS_ACCESS_TOKEN,
RDHYEE_GITHUB_TOKEN,
RDHYEE_TRAVIS_ACCESS_TOKEN,
RDHYEE_TRAVIS_PROFILE_TOKEN)
from second_folio import (apply_to_repos, all_repos)
from gitenberg_utils import (GitenbergJob,
GitenbergTravisJob,
ForkBuildRepo,
BuildRepo,
BuildRepo2,
MetadataWrite,
RepoNameFixer,
repo_md,
GitenbergJobRunner,
MetadataWriterRunner,
RepoJobRunner,
StatusUpdateRunner)
from gitenberg_db import Repo, create_session
import logging
logging.getLogger().getEffectiveLevel()
l = logging.getLogger()
l.setLevel(30)
print (logging.getLogger().getEffectiveLevel())
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
reading in data using pandas
|
# http://www.datacarpentry.org/python-ecology-lesson/08-working-with-sql
import sqlite3
from itertools import islice
# Create a SQL connection to our SQLite database
con = sqlite3.connect("gitenberg.db")
cur = con.cursor()
# the result of a "cursor.execute" can be iterated over by row
for row in islice(cur.execute('SELECT * FROM repos;'), 3):
    print(row)
#Be sure to close the connection.
con.close()
import pandas as pd
from pandas import DataFrame, Series
import sqlite3
con = sqlite3.connect("gitenberg.db")
df = pd.read_sql('SELECT * FROM repos;', con, parse_dates=('updated','metadata_written'))
df.head()
df.dtypes
# let's pull out a list of repos that have been built
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Build a specific book
Mr.-Spaceship_32522
|
class MetadataWriterRunner2(MetadataWriterRunner):
    def __init__(self, dbfname, gh_username, gh_password, access_token=None, max_exceptions=None,
                 repos_list=None):
        super(MetadataWriterRunner2, self).__init__(dbfname, gh_username, gh_password,
                                                    access_token, max_exceptions)
        self.repos_list = repos_list

    def repos(self, n=None):
        if self.repos_list is not None:
            return islice(self.session().query(Repo).
                          filter(Repo.repo_name.in_(self.repos_list)),
                          n)
        else:
            return []


class RepoJobRunner2(RepoJobRunner):
    def __init__(self, dbfname, gh_username, gh_password, access_token=None, max_exceptions=None,
                 repos_list=None):
        super(RepoJobRunner2, self).__init__(dbfname, gh_username, gh_password,
                                             access_token, max_exceptions)
        self.repos_list = repos_list

    def repos(self, n=None):
        if self.repos_list is not None:
            return islice(self.session().query(Repo).
                          filter(Repo.repo_name.in_(self.repos_list)),
                          n)
        else:
            return []
mwr2 = MetadataWriterRunner2("gitenberg.db", username, password,
repos_list=('At-the-Sign-of-the-Eagle_6218',))
mwr2.run(1)
rjr2 = RepoJobRunner2("gitenberg.db", username, password, GITENBERG_TRAVIS_ACCESS_TOKEN, max_exceptions=20,
repos_list=('At-the-Sign-of-the-Eagle_6218',
))
rjr2.run(None)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
metadatawrite
|
mwr = MetadataWriterRunner("gitenberg.db", username, password)
mwr.run(1)
mwr.exceptions()
job = BuildRepo2(username=username,
password=password,
repo_name='',
repo_owner='GITenberg',
update_travis_commit_msg='build using gitenberg.travis',
tag_commit_message='build using gitenberg.travis',
access_token=GITENBERG_TRAVIS_ACCESS_TOKEN)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Building books
|
session = create_session("gitenberg.db")
(session.query(Repo)
.filter(or_(Repo.buildable == None, Repo.buildable == True))
.filter(Repo.datebuilt == None)
.filter(Repo.metadata_written != None)
).count()
rjr = RepoJobRunner("gitenberg.db", username, password, GITENBERG_TRAVIS_ACCESS_TOKEN, max_exceptions=20)
rjr.run(50)
list(rjr.repo_names(1))
def delete_repo_token(repo_name):
    gtj = GitenbergTravisJob(username, password, repo_name, 'GITenberg',
                             update_travis_commit_msg='build using gitenberg.travis',
                             tag_commit_message='build using gitenberg.travis',
                             access_token=GITENBERG_TRAVIS_ACCESS_TOKEN)
    gtj.delete_repo_token()
rjr.exceptions()
rjr.gh.ratelimit_remaining
dt = arrow.get(rjr.gh.rate_limit()['rate']['reset']) - arrow.now()
rjr.countdown(dt.seconds)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
StatusUpdater
|
class StatusUpdateRunner2(StatusUpdateRunner):
    def __init__(self, dbfname, gh_username, gh_password, access_token=None, max_exceptions=None,
                 repos_list=None):
        super(StatusUpdateRunner2, self).__init__(dbfname, gh_username, gh_password,
                                                  access_token, max_exceptions)
        self.repos_list = repos_list

    def repos(self, n=None):
        if self.repos_list is not None:
            return islice(self.session().query(Repo).
                          filter(Repo.repo_name.in_(self.repos_list)),
                          n)
        else:
            return []
(session.query(Repo)
.filter(Repo.datebuilt != None)
.filter(Repo.last_build_id == None)
).count()
sur = StatusUpdateRunner("gitenberg.db", username, password, GITENBERG_TRAVIS_ACCESS_TOKEN)
sur.run(None)
sur.gh.ratelimit_remaining
dt = arrow.get(sur.gh.rate_limit()['rate']['reset']) - arrow.now()
sur.countdown(dt.seconds)
sur.exceptions()
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
overall stats
|
(session.query(Repo)
.filter(Repo.ebooks_in_release_count == 3)
).count()
session.query(Repo.ebooks_in_release_count).distinct().all()
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
sql
SELECT ebooks_in_release_count, COUNT(ebooks_in_release_count)
FROM repos
GROUP BY ebooks_in_release_count
|
# how many built
(session.query(Repo)
.filter(Repo.datebuilt != None).count())
# how many for which we know the last-build state
(session.query(Repo)
.filter(Repo.last_build_state != None).count())
# http://stackoverflow.com/a/4086229/7782
from sqlalchemy import func
(session.query(Repo.ebooks_in_release_count, func.count(Repo.ebooks_in_release_count))
.group_by(Repo.ebooks_in_release_count).all())
from sqlalchemy import func
build_states = (session.query(Repo.last_build_state, func.count(Repo.last_build_state))
.group_by(Repo.last_build_state).all())
build_states
__builtin__.sum([v for (k, v) in build_states])  # builtin sum, in case `sum` has been shadowed (e.g., by a star-import)
session.query(Repo).distinct(Repo.ebooks_in_release_count).count()
sur.gh.ratelimit_remaining
dt = arrow.get(sur.gh.rate_limit()['rate']['reset']) - arrow.now()
sur.countdown(dt.seconds)
import json
import unicodecsv as csv
from StringIO import StringIO
# http://stackoverflow.com/a/11884806
def as_dict(repo):
return {c.name: getattr(repo, c.name) for c in repo.__table__.columns}
# return Repos that have a known build state
results = (session.query(Repo)
.filter(Repo.last_build_state != None))
# repos_file = StringIO()
with open("built_repos.tsv", "wb") as repos_file:
headers = [c.name for c in Repo.__table__.columns]
repo_csv = csv.DictWriter(repos_file, headers, encoding='utf-8', delimiter='\t')
repo_csv.writeheader()
for result in islice(results,None):
repo_csv.writerow(as_dict(result))
!wc built_repos.tsv
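# A quick sanity check on the export (a sketch): read the TSV back with pandas
import pandas as pd
built_df = pd.read_csv("built_repos.tsv", sep="\t")
built_df['last_build_state'].value_counts()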
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
debugging errors / failures
|
failed_builds = (session.query(Repo)
.filter(Repo.last_build_state == 'failed'))
failed_builds.count()
for (i, repo) in enumerate(islice(failed_builds,None)):
    url = "https://travis-ci.org/GITenberg/{repo_name}/builds/{last_build_id}".format(
        repo_name=repo.repo_name, last_build_id=repo.last_build_id)
print (url)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Let's look at https://travis-ci.org/GITenberg/American-Hand-Book-of-the-Daguerreotype_167/builds/150209405:
cannot read from /home/travis/build/GITenberg/American-Hand-Book-of-the-Daguerreotype_167/book.epub
The case of the image file names doesn't match -- a case-sensitivity problem.
For https://travis-ci.org/GITenberg/Literary-Blunders--A-Chapter-in-the--History-of-Human-Error-_371/builds/150224012:
ebook-convert 371.txt book.epub --title "Literary Blunders: A Chapter in the "History of Human Error"" --authors "" ' returned non-zero exit status 1
A problem with how quotes are handled in the invocation of ebook-convert.
What is the relationship among build, job, and log?
https://travis-ci.org/GITenberg/American-Hand-Book-of-the-Daguerreotype_167/builds/150209405
|
#
repo_name = "American-Hand-Book-of-the-Daguerreotype_167"
gtj = GitenbergTravisJob(username, password, repo_name, 'GITenberg',
update_travis_commit_msg='build using gitenberg.travis',
tag_commit_message='build using gitenberg.travis',
access_token=GITENBERG_TRAVIS_ACCESS_TOKEN)
gtj.travis_repo
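# A hedged aside on the quoting failure above (not the notebook's actual
# fix): invoking ebook-convert with an argument list instead of a shell
# string passes the title -- embedded quotes and all -- verbatim.
import subprocess
title = 'Literary Blunders: A Chapter in the "History of Human Error"'
subprocess.check_call(["ebook-convert", "371.txt", "book.epub",
                       "--title", title, "--authors", ""])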
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
How to read log files from Travis? Revisit menegazzo/travispy: Travis CI API for Python
|
# How to read log files from travis
b = gtj.travis.build(gtj.travis_repo.last_build_id)
j = b.jobs[-1]
j.id
j.log.body[:100]
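# A possible next step (sketch): scan the full log body for the failing command
for line in j.log.body.splitlines():
    if 'returned non-zero exit status' in line:
        print(line)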
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
update repos with started status
|
(session.query(Repo)
.filter(Repo.last_build_state == 'started')
).count()
class StatusUpdateRunnerForStartedJobs(StatusUpdateRunner):
def repos(self, n):
return islice((self.session().query(Repo)
.filter(Repo.last_build_state == 'started')
),n)
sur2 = StatusUpdateRunnerForStartedJobs("gitenberg.db", username, password, GITENBERG_TRAVIS_ACCESS_TOKEN)
sur2.run(None)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
rerunning jobs that have error status
|
class ErroredRepoJobRunner(RepoJobRunner):
def repos(self, n):
return islice((self.session().query(Repo)
.filter(Repo.last_build_state == 'errored')
),n)
erjr = ErroredRepoJobRunner("gitenberg.db", username, password, GITENBERG_TRAVIS_ACCESS_TOKEN, max_exceptions=20)
erjr.run(10)
erjr.gh.ratelimit_remaining
dt = arrow.get(erjr.gh.rate_limit()['rate']['reset']) - arrow.now()
sur.countdown(dt.seconds)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Misc
|
for repo in session.query(Repo).filter_by(ebooks_in_release_count = 3):
repo.has_metadata = True
repo.has_source = True
repo.buildable = True
repo.updated = arrow.now().isoformat()
session.commit()
import gitenberg
b = gitenberg.Book(1)
b.parse_book_metadata()
b.meta.metadata
import yaml
md = repo_md(1)
print (yaml.safe_dump(md,default_flow_style=False,
allow_unicode=True))
1/0  # deliberate error to halt "Run All" execution here
def status_for_repo(repo_name):
rs = GitenbergTravisJob(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='check status',
tag_commit_message='check status',
access_token=GITENBERG_TRAVIS_ACCESS_TOKEN)
return rs.status()
results_iter = apply_to_repos(status_for_repo, repos=all_repos)
results = []
for (i,result) in enumerate(results_iter):
results.append(result)
if not isinstance(result, Exception):
print ("\r{}: {}".format(i, result['repo_name']), end="")
else:
print ("\r{}: {}".format(i, str(result)), end="")
[(i, result) for (i, result) in enumerate(results) if isinstance(result, Exception)]
[result.get('repo_name') for result in results if result.get('ebooks_in_release_count') != 3]
# update the database based on result
result = results[0]
result
for result in results:
repo = session.query(Repo).filter_by(repo_name=result['repo_name']).first()
repo.updated = arrow.now().isoformat()
repo.datebuilt = result['last_build_started_at']
repo.version = result['version']
repo.ebooks_in_release_count = result['ebooks_in_release_count']
repo.last_build_id = result['last_build_id']
repo.last_build_state = result['last_build_state']
session.commit()
# building the rest
session.query(Repo).filter(Repo.datebuilt != None).count()
repo_names = [repo.repo_name for repo in
islice(session.query(Repo).filter(Repo.datebuilt == None).order_by(Repo.gutenberg_id.asc()),5)]
from collections import OrderedDict
from itertools import islice
results = OrderedDict()
repos_iter = iter(repo_names)
def build_repos(repo_names, n=None):
for (i, repo_name) in enumerate(islice(repo_names, n)):
try:
bj = BuildRepo2(username=username, password=password, repo_name=repo_name,
repo_owner='GITenberg',
update_travis_commit_msg='build using gitenberg.travis',
tag_commit_message='build using gitenberg.travis',
access_token=GITENBERG_TRAVIS_ACCESS_TOKEN)
results[repo_name] = (bj, bj.run())
# just mark as started
            repo = session.query(Repo).filter_by(repo_name=repo_name).first()  # bug fix: use the loop's repo_name, not the stale global `result`
repo.updated = arrow.now().isoformat()
repo.datebuilt = arrow.now().isoformat()
        except Exception as e:
results[repo_name] = e
print ("\r{}: {}".format(i, results[repo_name]), end="")
build_repos(repos_iter, 1)
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
Wondering: beyond add_all, is there an add-or-update function? See: python - SQLAlchemy insert or update example - Stack Overflow
|
repo1.version = '0.0.5'
session.dirty
session.new
our_repo = session.query(Repo).filter_by(repo_name='Repo1').first() # doctest:+NORMALIZE_WHITESPACE
our_repo
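# A minimal "insert or update" sketch with session.merge, assuming repo_name
# identifies the row (merge matches on the primary key: it updates the row if
# it already exists, otherwise it stages an INSERT):
merged_repo = session.merge(Repo(repo_name='Repo1', version='0.0.6'))
session.commit()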
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
personal access tokens
|
gh = github3.login(ry_username, password=ry_password)
from itertools import islice
auths = [{'name': auth.name, 'created_at':auth.created_at, 'updated_at':auth.updated_at}
for auth in islice(gh.iter_authorizations(),None)]
sorted(auths, key=lambda r: r['created_at'])
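# For example (a sketch over the dicts built above): pick out the
# Travis-related authorizations by name
travis_auths = [a for a in auths if a['name'] and 'travis' in a['name'].lower()]
travis_auths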
|
build_all_gitenberg.ipynb
|
rdhyee/nypl50
|
apache-2.0
|
A brief introduction to TF
|
# Example 1: a + b
import tensorflow as tf
import numpy as np
a = tf.placeholder(dtype=tf.float32, shape=[2])  # define a placeholder; feed it data matching this shape/dtype
b = tf.placeholder(dtype=tf.float32, shape=[2])
c = a + b
with tf.Session() as sess:  # create a session
    print(sess.run(c, feed_dict={a: [1., 2.], b: [3., 3.]}))
# Example 2: minimize f(x) = x(1-x)sin(6.28x)
import matplotlib.pylab as plt
%matplotlib inline
x = tf.Variable([1.80], dtype=tf.float32)  # define a variable
#x = tf.Variable([1.7], dtype=tf.float32)  # alternative starting point
y = x * (1-x) * tf.sin(6.28*x)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(y)  # use gradient descent to find a (local) minimum
init = tf.global_variables_initializer()  # initialize variables -- important!
with tf.Session() as sess:
sess.run(init)
x_init, y_init = sess.run([x,y])
for i in range(100):
sess.run(train_op)
x_min,y_min = sess.run([x,y])
# plot
x = np.linspace(-1,3,100)
y = x * (1-x) * np.sin(6.28*x)
plt.plot(x,y,'b-')
plt.plot(x_init,y_init,'bo')
plt.plot(x_min,y_min,'ro')
plt.title(r"$\min_x f(x)=x(1-x)\sin(6.28x)$")  # title fixed to match the function actually plotted
|
tensorflow_tinanic.ipynb
|
gengyj/ml-basic-course
|
gpl-3.0
|
The LR (logistic regression) algorithm
|
# Note: we again use the Titanic data; see sklearn_titanic.ipynb
import cPickle
import numpy as np
from sklearn.model_selection import train_test_split
with open("../kaggle_titanic/data/train_data", "rb") as f:
    X_train, y_train = cPickle.load(f)
X_train = X_train.astype(np.float32)
y_train = y_train.reshape((-1, 1)).astype(np.float32)
X_tra, X_val, y_tra, y_val = train_test_split(X_train, y_train, test_size=0.25)
|
tensorflow_tinanic.ipynb
|
gengyj/ml-basic-course
|
gpl-3.0
|
A brief introduction to the LR algorithm:
For training samples $\{(x^{i}, y^{i})\}\ (i=1,2,\dots,N)$, where $y^i \in \{0,1\}$, the LR model is:
$$h(x) = \mathrm{sigmoid}(\vec{w} \cdot \vec{x}+b)$$
where $\mathrm{sigmoid}(t) = \frac{1}{1+e^{-t}}$ and $\vec{w}, b$ are the parameters. Its loss function is
$$L(\vec{w},b) = -\sum_{i=1}^{N}\left(y^i \log h(x^i) + (1-y^i) \log \left(1-h(x^i)\right)\right)$$
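For reference (an addition, not in the original notebook), the gradients that the gradient-descent updates below rely on follow directly from this loss:
$$\frac{\partial L}{\partial \vec{w}} = \sum_{i=1}^{N}\left(h(x^i)-y^i\right)\vec{x}^{\,i}, \qquad \frac{\partial L}{\partial b} = \sum_{i=1}^{N}\left(h(x^i)-y^i\right)$$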
|
N_INPUT = 14
MAX_STEP = 1000
def inference(x):  # by convention, the forward pass is grouped into an "inference" function; changing the model usually means editing only this part
w = tf.Variable(np.random.randn(N_INPUT,1),dtype=tf.float32)
b = tf.Variable([0.], dtype=tf.float32)
h = tf.matmul(x,w) + b # h = x * w + b
return h
x = tf.placeholder(tf.float32, shape=[None, N_INPUT])
y = tf.placeholder(tf.float32,shape=[None, 1])
y_ = inference(x)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_)
y_pred = tf.cast(tf.greater(tf.sigmoid(y_), 0.5), tf.float32)  # y_ holds logits, so apply sigmoid before thresholding at 0.5
correct = tf.equal(y_pred, y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) # loss is not 1-accuracy
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)
acc1 = []
with tf.Session() as sess:
    init = tf.global_variables_initializer()  # initialize variables -- important!
sess.run(init)
for i in range(MAX_STEP):
_, acc_tra = sess.run([train_op,accuracy],feed_dict={x:X_tra, y:y_tra})
if i % 10 == 0 or i+1 == MAX_STEP:
acc_val = sess.run(accuracy, feed_dict={x:X_val, y:y_val})
acc1.append([i, acc_tra, acc_val])
if i % 100 == 0 or i+1 == MAX_STEP:
print "%d, train accuracy :%.4f, test accuracy: %.4f" % (i, acc_tra, acc_val)
|
tensorflow_tinanic.ipynb
|
gengyj/ml-basic-course
|
gpl-3.0
|
Add a hidden layer
LR is a single-layer NN. As we analyzed earlier, the LR model underfits this data, so let's increase the model's capacity by adding a hidden layer and see how it performs.
|
N_INPUT = 14
MAX_STEP = 1000
N_HID = 7
def inference(x):
w1 = tf.Variable(np.random.randn(N_INPUT,N_HID),dtype=tf.float32)
b1 = tf.Variable([0.], dtype=tf.float32)
h1 = tf.nn.tanh(tf.matmul(x,w1) + b1)
w2 = tf.Variable(np.random.randn(N_HID,1),dtype=tf.float32)
b2 = tf.Variable([0.], dtype=tf.float32)
h2 = tf.matmul(h1,w2) + b2
return h2
x = tf.placeholder(tf.float32, shape=[None, N_INPUT])
y = tf.placeholder(tf.float32,shape=[None, 1])
y_ = inference(x)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_)
y_pred = tf.cast(tf.greater(tf.sigmoid(y_), 0.5), tf.float32)  # as above: threshold sigmoid(logits) at 0.5
correct = tf.equal(y_pred, y)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)
acc2 = []
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
for i in range(MAX_STEP):
_, acc_tra = sess.run([train_op,accuracy],feed_dict={x:X_tra, y:y_tra})
if i % 10 == 0 or i+1 == MAX_STEP:
acc_val = sess.run(accuracy, feed_dict={x:X_val, y:y_val})
acc2.append([i, acc_tra, acc_val])
if i % 100 == 0 or i+1 == MAX_STEP:
print "%d, train accuracy :%.4f, test accuracy: %.4f" % (i, acc_tra, acc_val)
|
tensorflow_tinanic.ipynb
|
gengyj/ml-basic-course
|
gpl-3.0
|
Comparison
|
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
acc1 = np.array(acc1)
acc2 = np.array(acc2)
plt.figure(figsize=(12,6))
plt.plot(acc1[:,0], acc1[:,1], 'b--', label='LR train')
plt.plot(acc1[:,0], acc1[:,2], 'b-', label='LR validation')
plt.plot(acc2[:,0], acc2[:,1], 'g--', label='hidden-layer train')
plt.plot(acc2[:,0], acc2[:,2], 'g-', label='hidden-layer validation')
plt.legend()
plt.title("step vs. accuracy")
|
tensorflow_tinanic.ipynb
|
gengyj/ml-basic-course
|
gpl-3.0
|
II. RNN
<img src="rnnAML.png" width=900>
|
import keras
from keras.layers import Concatenate,Dense,Embedding
rnn_num_units = 64
embedding_size = 16
#Let's create layers for our recurrent network
#Note: we create layers but we don't "apply" them yet
embed_x = Embedding(n_tokens,embedding_size) # an embedding layer that converts character ids into embeddings
#a dense layer that maps input and previous state to new hidden state, [x_t,h_t]->h_t+1
get_h_next = Dense(rnn_num_units, activation="tanh")
#a dense layer that maps current hidden state to probabilities of characters [h_t+1]->P(x_t+1|h_t+1)
get_probas = Dense(n_tokens, activation="softmax")
def rnn_one_step(x_t, h_t):
"""
Recurrent neural network step that produces next state and output
given prev input and previous state.
We'll call this method repeatedly to produce the whole sequence.
"""
#convert character id into embedding
x_t_emb = embed_x(tf.reshape(x_t,[-1,1]))[:,0]
#print(tf.shape(x_t_emb)) #Tensor("Shape_16:0", shape=(2,), dtype=int32)
#print(tf.shape(h_t)) #Tensor("Shape_16:0", shape=(2,), dtype=int32)
#concatenate x embedding and previous h state
#x_and_h = Concatenate()([x_t_emb, h_t])###YOUR CODE HERE <keras.layers.merge.Concatenate object at 0x7f87e5bfc6a0>
x_and_h = tf.concat([x_t_emb, h_t], 1)
#compute next state given x_and_h
h_next = get_h_next(x_and_h)
#get probabilities for language model P(x_next|h_next)
output_probas = get_probas(h_next)
return output_probas,h_next
input_sequence = tf.placeholder('int32',(MAX_LENGTH,None))
batch_size = tf.shape(input_sequence)[1]
predicted_probas = []
h_prev = tf.zeros([batch_size,rnn_num_units]) #initial hidden state
for t in range(MAX_LENGTH): #for every time-step 't' ( each character)
x_t = input_sequence[t]
probas_next,h_next = rnn_one_step(x_t,h_prev)
h_prev = h_next
predicted_probas.append(probas_next)
predicted_probas = tf.stack(predicted_probas)
predictions_matrix = tf.reshape(predicted_probas[:-1],[-1,len(tokens)])
answers_matrix = tf.one_hot(tf.reshape(input_sequence[1:],[-1]), n_tokens)
from keras.objectives import categorical_crossentropy
loss = tf.reduce_mean(categorical_crossentropy(answers_matrix, predictions_matrix))
optimize = tf.train.AdamOptimizer().minimize(loss)
from IPython.display import clear_output
from random import sample
s = keras.backend.get_session()
s.run(tf.global_variables_initializer())
history = []
for i in range(5000):
batch = to_matrix(sample(names,32),max_len=MAX_LENGTH)
loss_i,_ = s.run([loss,optimize],{input_sequence:batch})
history.append(loss_i)
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
|
_files/bacterial_names/RNNs_KERAS.ipynb
|
vaxherra/vaxherra.github.io
|
mit
|
III. Sampling
|
x_t = tf.placeholder('int32',(None,))
h_t = tf.Variable(np.zeros([1,rnn_num_units],'float32'))
next_probs,next_h = rnn_one_step(x_t,h_t)
def generate_sample(seed_phrase=None,max_length=MAX_LENGTH):
    '''
    Generates text given an optional seed phrase.
    parameters:
        seed_phrase: text to condition on; it is prefixed with a space and lowercased
        max_length: total number of characters to produce, including the seed
    '''
    if seed_phrase is None:
        seed_phrase = ' '
    else:
        seed_phrase = ' ' + str(seed_phrase).strip().lower()
x_sequence = [token_to_id[token] for token in seed_phrase]
s.run(tf.assign(h_t,h_t.initial_value))
#feed the seed phrase, if any
for ix in x_sequence[:-1]:
s.run(tf.assign(h_t,next_h),{x_t:[ix]})
#start generating
for _ in range(max_length-len(seed_phrase)):
x_probs,_ = s.run([next_probs,tf.assign(h_t,next_h)],{x_t:[x_sequence[-1]]})
x_sequence.append(np.random.choice(n_tokens,p=x_probs[0]))
return ''.join([tokens[ix] for ix in x_sequence])
for i in range(3):
print(str(i+1) + ". " + generate_sample())
for i in range(5):
print(str(i+1) + ". " + generate_sample())
for i in range(3):
print(str(i+1) + ". " + generate_sample("trump"))
for i in range(5):
print(str(i+1) + ". " + generate_sample("trump"))
for i in range(10):
print(str(i+1) + ". " + generate_sample("Kwapich"))
|
_files/bacterial_names/RNNs_KERAS.ipynb
|
vaxherra/vaxherra.github.io
|
mit
|
Product and expectation value
QobjEvo.mul_vec(t,state) = spmv(QobjEvo(t), state)
QobjEvo.expect(t, state, real) = cy_expect_psi/cy_expect_rho_vec (QobjEvo(t), state, real)
|
from qutip.cy.spmatfuncs import spmv, cy_expect_rho_vec, cy_expect_psi
spmv(td_func(2).data, vec) == td_func.mul_vec(2,vec)
print(td_func(2).data * mat_c == td_func.mul_mat(2,mat_c))
mat_c.flags
print(td_func(2).data * mat_f == td_func.mul_mat(2,mat_f))
mat_f.flags
cy_expect_psi(td_str(2).data, vec, 0) == td_str.expect(2, vec, 0)
cy_expect_rho_vec(td_super(2).data, vec_super, 0) == td_super.expect(2, vec_super, 0)
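# A quick numerical check of the first identity (sketch, reusing the objects
# defined earlier in this notebook):
import numpy as np
np.allclose(spmv(td_func(3.5).data, vec), td_func.mul_vec(3.5, vec))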
|
development/development-qobjevo-adv.ipynb
|
qutip/qutip-notebooks
|
lgpl-3.0
|