# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="q8uxCSNqoXh4"
# # Topic Modeling with BERT
# + colab={} colab_type="code" id="GFenB5sr6F6n"
import numpy as np
import pandas as pd
import umap
import hdbscan
from sentence_transformers import SentenceTransformer
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from tqdm import tqdm
import matplotlib.pyplot as plt
def c_tf_idf(documents, m, ngram_range=(1, 1)):
    """ Calculate a class-based TF-IDF where m is the number of total documents. """
    count = CountVectorizer(ngram_range=ngram_range, stop_words="english").fit(documents)
    t = count.transform(documents).toarray()          # term counts per topic document
    w = t.sum(axis=1)                                 # total words per topic document
    tf = np.divide(t.T, w)                            # class-based term frequency
    sum_t = t.sum(axis=0)                             # total count of each term
    idf = np.log(np.divide(m, sum_t)).reshape(-1, 1)  # inverse document frequency
    tf_idf = np.multiply(tf, idf)
    return tf_idf, count
def extract_top_n_words_per_topic(tf_idf, count, docs_per_topic, n=20):
    words = count.get_feature_names()  # use get_feature_names_out() on scikit-learn >= 1.2
    labels = list(docs_per_topic.Topic)
    tf_idf_transposed = tf_idf.T
    indices = tf_idf_transposed.argsort()[:, -n:]
    top_n_words = {label: [(words[j], tf_idf_transposed[i][j]) for j in indices[i]][::-1]
                   for i, label in enumerate(labels)}
    return top_n_words
def extract_topic_sizes(df):
    topic_sizes = (df.groupby(['Topic'])
                     .Doc
                     .count()
                     .reset_index()
                     .rename({"Topic": "Topic", "Doc": "Size"}, axis='columns')
                     .sort_values("Size", ascending=False))
    return topic_sizes
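The `argsort` slice in `extract_top_n_words_per_topic` is the core trick: sorting each row and keeping the last `n` columns yields the indices of the `n` highest-scoring words per topic. A minimal standalone sketch with a toy score matrix (NumPy only, made-up numbers):

```python
import numpy as np

# Toy c-TF-IDF scores: 2 topics x 4 vocabulary words.
scores = np.array([[0.1, 0.9, 0.3, 0.5],
                   [0.7, 0.2, 0.8, 0.0]])
top_2 = scores.argsort()[:, -2:]  # indices of the two largest scores per row
print(top_2[:, ::-1])             # highest score first, as in the notebook
```

Reversing with `[::-1]` puts the strongest word first, which is the order the notebook reports.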
# + [markdown] colab_type="text" id="MVLh9q2aRJd3"
# ## Load data
# For this example, let's use the well-known 20 Newsgroups dataset, which contains roughly 18,000 newsgroup posts on 20 topics.
# + colab={} colab_type="code" id="i47mA1qY6O04"
data = fetch_20newsgroups(subset='all')['data']
# + [markdown] colab_type="text" id="9BuND9OQRNVx"
# ## Create embeddings
# The very first step is converting the documents to numerical data. There are many methods that could be applied, but since we are modeling topics with **BERT**, we will use BERT-based sentence embeddings.
#
# There are many pre-trained models available for a large number of languages [here](https://www.sbert.net/docs/pretrained_models.html). Simply plug in a model name instead of *distilbert-base-nli-mean-tokens*.
# + colab={"base_uri": "https://localhost:8080/", "height": 99, "referenced_widgets": ["d67e52562c2544aca31693e7f42efd1f", "13108b4be7e74b4bbdc3d86d4ea07208", "093d16278b5f42ca8663d5f13c8e7a5c", "58fd555e4c4a4c9098661c57f4ae3130", "5b5658e3f93840139ec8e52ccd9abb60", "efbc44215ef94abeba3f53d4f213742a", "9b848746237a427db3a3ff8f51bdc581", "2ea6d1bde7a243408ecfde59d6f0371b"]} colab_type="code" id="PPTyoDcV6cdA" outputId="a1d6dd99-ef40-4b4b-c05b-303f65c6a154"
# %%time
model = SentenceTransformer('distilbert-base-nli-mean-tokens')
embeddings = model.encode(data, show_progress_bar=True)
# + [markdown] colab_type="text" id="rO1GX8TvRXCM"
# ## Reduce dimensionality
# We use **UMAP** to reduce the dimensionality of the embeddings created above. We reduce to a handful of dimensions rather than all the way down to two, as keeping a bit of extra dimensionality lets the reduced embeddings retain more structure, which improves clustering at a later stage.
#
# You can play around with the **number of components** (the dimensionality to reduce to) and the **number of neighbors** (how many nearby points to look at).
# + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="V_rBb73b7AKd" outputId="e1ebd845-4e87-47c1-b031-c96fb12af253"
# %%time
umap_embeddings = umap.UMAP(n_neighbors=15,
                            n_components=5,
                            min_dist=0.0,
                            metric='cosine',
                            random_state=42).fit_transform(embeddings)
# + [markdown] colab_type="text" id="p_STjPe9RxSO"
# ## Cluster documents
# Since **UMAP** preserves some of the original high-dimensional structure, it makes sense to use **HDBSCAN** to find densely packed clusters. The metric is Euclidean, which is safe here because UMAP has already reduced the dimensionality, and increasing the **minimum cluster size** decreases the number of topics found while increasing the topic sizes.
# + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="MGqxxsTt7AGV" outputId="e925b380-c54b-4006-a1be-6e4f56ede05b"
# %%time
cluster = hdbscan.HDBSCAN(min_cluster_size=30,
                          metric='euclidean',
                          cluster_selection_method='eom',
                          prediction_data=True).fit(umap_embeddings)
# + [markdown] colab_type="text" id="KYj2_bbQaaqt"
# ## Visualize Clusters
# We can visualize the resulting clusters by embedding the data into **2d-space** using **UMAP** and coloring the clusters with matplotlib. Some clusters are difficult to spot, as more than 50 topics may be generated.
# + colab={} colab_type="code" id="4OoUWNulm7Fd"
# Prepare data
umap_data = umap.UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)
result = pd.DataFrame(umap_data, columns=['x', 'y'])
result['labels'] = cluster.labels_
# Visualize clusters
fig, ax = plt.subplots(figsize=(20, 10))
outliers = result.loc[result.labels == -1, :]
clustered = result.loc[result.labels != -1, :]
plt.scatter(outliers.x, outliers.y, color='#BDBDBD', s=0.05)
plt.scatter(clustered.x, clustered.y, c=clustered.labels, s=0.05, cmap='hsv_r')
plt.colorbar()
# plt.savefig("result1.png", dpi = 300)
# + [markdown] colab_type="text" id="OPD_OftOR-pi"
# ## Prepare results
# For easier selection, we put the results in a pandas dataframe. Then, *docs_per_topic* is created, in which all documents within a single cluster are joined into one document.
# + colab={} colab_type="code" id="dp2TdT5z6iWy"
docs_df = pd.DataFrame(data, columns=["Doc"])
docs_df['Topic'] = cluster.labels_
docs_df['Doc_ID'] = range(len(docs_df))
docs_per_topic = docs_df.groupby(['Topic'], as_index = False).agg({'Doc': ' '.join})
# + [markdown] colab_type="text" id="EtzXjdDlSIPH"
# ## Calculate word importance per topic
# Calculate the importance of words in a topic relative to all other topics by joining all documents in a topic into a single document before applying **TF-IDF**. Then, we simply extract the words with the highest values in each cluster as representatives of that topic.
# + colab={} colab_type="code" id="-lZKtdwv818W"
tf_idf, count = c_tf_idf(docs_per_topic.Doc.values, m = len(data))
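As a sanity check on the class-based TF-IDF formula, the same computation can be reproduced by hand on a tiny term-count matrix (toy numbers, NumPy only, independent of the cells above):

```python
import numpy as np

# Toy counts: 2 topic documents over a 3-word vocabulary, m = 4 original docs.
t = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])
m = 4
w = t.sum(axis=1)                               # words per topic document: [3, 4]
tf = t.T / w                                    # class-based term frequency
idf = np.log(m / t.sum(axis=0)).reshape(-1, 1)  # rarer across topics -> higher weight
tf_idf = tf * idf                               # shape: (vocab_size, n_topics)
print(tf_idf[0, 0])                             # word 0 is frequent in, and exclusive to, topic 0
```

Word 0 appears only in topic 0, so it gets a high score there and a score of exactly 0 in topic 1, which is why the highest-scoring words serve as topic representatives.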
# + colab={"base_uri": "https://localhost:8080/", "height": 343} colab_type="code" id="xpQOamfNu5-W" outputId="a2fe3cc8-2ce2-46d6-b9b8-0e6b1685c736"
top_n_words = extract_top_n_words_per_topic(tf_idf, count, docs_per_topic, n=20)
topic_sizes = extract_topic_sizes(docs_df); topic_sizes.head(10)
# + colab={"base_uri": "https://localhost:8080/", "height": 353} colab_type="code" id="uVA0zHMAsH-J" outputId="0c4884c7-40f4-4cff-896d-4fc2d177303a"
top_n_words[28]
# + [markdown] colab_type="text" id="inIQPln8vCQj"
# ## Topic Reduction
# The loop below iteratively merges the smallest topic into its most similar topic (by cosine similarity of the c-TF-IDF representations) and then recomputes the topic representations. Each iteration reduces the number of topics by one.
# + colab={"base_uri": "https://localhost:8080/", "height": 370} colab_type="code" id="gZ7JWcHrqPB2" outputId="aba2d2df-e881-42fc-e24b-40a045f1b424"
for i in tqdm(range(20)):
    # Calculate cosine similarity
    similarities = cosine_similarity(tf_idf.T)
    np.fill_diagonal(similarities, 0)
    # Extract label to merge into and from where
    topic_sizes = docs_df.groupby(['Topic']).count().sort_values("Doc", ascending=False).reset_index()
    topic_to_merge = topic_sizes.iloc[-1].Topic
    topic_to_merge_into = np.argmax(similarities[topic_to_merge + 1]) - 1
    # Adjust topics
    docs_df.loc[docs_df.Topic == topic_to_merge, "Topic"] = topic_to_merge_into
    old_topics = docs_df.sort_values("Topic").Topic.unique()
    map_topics = {old_topic: index - 1 for index, old_topic in enumerate(old_topics)}
    docs_df.Topic = docs_df.Topic.map(map_topics)
    docs_per_topic = docs_df.groupby(['Topic'], as_index=False).agg({'Doc': ' '.join})
    # Calculate new topic words
    m = len(data)
    tf_idf, count = c_tf_idf(docs_per_topic.Doc.values, m)
    top_n_words = extract_top_n_words_per_topic(tf_idf, count, docs_per_topic, n=20)
topic_sizes = extract_topic_sizes(docs_df); topic_sizes.head(10)
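The merge step finds, for the smallest topic, its most similar other topic via cosine similarity of the c-TF-IDF columns; the `+ 1` / `- 1` shifts account for the outlier topic `-1` occupying the first row. The lookup can be sketched in isolation (toy similarity matrix, NumPy only):

```python
import numpy as np

# Toy similarity matrix for 4 topics (rows/columns correspond to topics -1, 0, 1, 2).
similarities = np.array([[1.0, 0.2, 0.1, 0.3],
                         [0.2, 1.0, 0.9, 0.4],
                         [0.1, 0.9, 1.0, 0.5],
                         [0.3, 0.4, 0.5, 1.0]])
np.fill_diagonal(similarities, 0)  # a topic must not merge into itself
topic_to_merge = 0                 # the smallest topic's label
topic_to_merge_into = np.argmax(similarities[topic_to_merge + 1]) - 1
print(topic_to_merge_into)
```

Here topic 0 is most similar to topic 1, so its documents would be relabeled 1 before recomputing the c-TF-IDF representations.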
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C#
// language: csharp
// name: csharp
// ---
// # C# for Azure Notebooks
//
// <img style="width: 300px" src='logo.jpg' alt="Drawing" />
//
// C# is an elegant and type-safe object-oriented language that enables developers to build a variety of secure and robust applications that run on the .NET Framework. You can use C# to create Windows client applications, XML Web services, distributed components, client-server applications, database applications, and much, much more. Visual C# provides an advanced code editor, convenient user interface designers, integrated debugger, and many other tools to make it easier to develop applications based on the C# language and the .NET Framework.
//
// It is a general-purpose language designed for developing apps on the Microsoft platform and requires the .NET framework on Windows to work. C# is often thought of as a hybrid that takes the best of C and C++ to create a truly modernized language. Although the .NET framework supports several other coding languages, C# has quickly become one of the most popular and is currently ranked as a [Top 5](https://spectrum.ieee.org/static/interactive-the-top-programming-languages-2017) most used language by IEEE.
//
// This notebook, based on the C# [quickstarts](https://docs.microsoft.com/en-us/dotnet/csharp/quick-starts/), can be used by a variety of audiences. Depending on your experience with programming, or with the C# language and .NET, you may wish to explore different sections of this notebook. The examples are kept simple, so if you are a brand-new developer learning C# for the first time, that's fine too!
//
// The [C# Language Reference](https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/) is a reference for the C# language, and the [C# Guide](https://docs.microsoft.com/en-us/dotnet/csharp/index) covers general topics. [C# Walkthroughs](https://docs.microsoft.com/en-us/dotnet/csharp/walkthroughs) give step-by-step instructions for common scenarios, which makes them a good place to start learning about the language or a particular feature area.
//
// To learn more about how to use Jupyter notebooks, see [the Jupyter documentation](http://jupyter-notebook.readthedocs.io/) and the [Jupyter keyboard shortcuts](https://www.cheatography.com/weidadeyue/cheat-sheets/jupyter-notebook/). You can install the C# and Jupyter tooling locally using [IcSharp](https://github.com/zabirauf/icsharp). For a more detailed tour of the language, see the [C# Tour](https://docs.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/).
//
// This notebook demonstrates the features of the C# kernel for Jupyter Notebook.
//
//
// # Introduction #
//
// ### Hello world!
//
// In the Hello world example, you'll create the most basic C# program. You'll explore the string type and how to work with text.
// +
using System;
Console.WriteLine("Hello World!");
// -
// # Numbers in C#
//
// This tutorial teaches you about the number types in C# interactively. You'll write small amounts of code, then you'll compile and run that code. The tutorial contains a series of lessons that explore numbers and math operations in C#. These lessons teach you the fundamentals of the C# language.
//
// This tutorial expects you to have a machine you can use for development. The .NET topic [Get Started in 10 minutes](https://www.microsoft.com/net/core) has instructions for setting up your local development environment on Mac, PC or Linux.
//
// ## Explore integer math
//
//
int a = 18;
int b = 6;
int c = a + b;
Console.WriteLine(c);
//
// You've just seen one of the fundamental math operations with integers. The `int` type represents an **integer**, a positive or negative whole number. You use the `+` symbol for addition. Other common mathematical operations for integers include:
//
// - `-` for subtraction
// - `*` for multiplication
// - `/` for division
//
// Start by exploring those different operations. Add these lines after the line that writes the value of `c`:
// + attributes={"classes": ["csharp"], "id": ""}
c = a - b;
Console.WriteLine(c);
c = a * b;
Console.WriteLine(c);
c = a / b;
Console.WriteLine(c);
// -
//
// You can also experiment by performing multiple mathematics operations in the same line, if you'd like. Try `c = a + b - 12 * 17;` for example. Mixing variables and constant numbers is allowed.
//
// > As you explore C# (or any programming language), you'll
// > make mistakes when you write code. The **compiler** will
// > find those errors and report them to you. When the output
// > contains error messages, look closely at the example code
// > and the code in your window to see what to fix.
// > That exercise will help you learn the structure of C# code.
//
// You've finished the first step. Before you start the next section, let's move the current code into a separate method. That makes it easier to start working with a new example. Name your method `WorkingWithIntegers` and write a new `Program` class. When you have finished, your code should look like this:
// +
using System;
class Program
{
    public static void WorkingWithIntegers()
    {
        int a = 18;
        int b = 6;
        int c = a + b;
        Console.WriteLine(c);
        c = a - b;
        Console.WriteLine(c);
        c = a * b;
        Console.WriteLine(c);
        c = a / b;
        Console.WriteLine(c);
    }
}
Program.WorkingWithIntegers();
// -
// ## Explore order of operations
//
// Comment out the call to `WorkingWithIntegers()`. It will make the output less cluttered as you work in this section:
// + attributes={"classes": ["csharp"], "id": ""}
//WorkingWithIntegers();
// -
// The `//` starts a **comment** in C#. Comments are any text you want to keep in your source code but not execute as code. The compiler does not generate any executable code from comments.
//
// The C# language defines the precedence of different mathematics operations
// with rules consistent with the rules you learned in mathematics.
// Multiplication and division take precedence over addition and subtraction.
// +
int a = 5;
int b = 4;
int c = 2;
int d = a + b * c;
Console.WriteLine(d);
// The output demonstrates that the multiplication is performed before the addition.
// You can force a different order of operation by adding parentheses around
// the operation or operations you want performed first. Add the following
// lines and run again:
d = (a + b) * c;
Console.WriteLine(d);
// -
// Explore more by combining many different operations. Add something like
// the following lines.
// + attributes={"classes": ["csharp"], "id": ""}
d = (a + b) - 6 * c + (12 * 4) / 3 + 12;
Console.WriteLine(d);
// -
// You may have noticed an interesting behavior for integers. Integer
// division always produces an integer result, even when you'd expect the result to include a decimal or fractional portion.
//
// If you haven't seen this behavior, try the following code:
int e = 7;
int f = 4;
int g = 3;
int h = (e + f) / g;
Console.WriteLine(h);
// Before moving on, let's take all the code you've written in this
// section and put it in a new method. Call that new method `OrderPrecedence`.
// You should end up with something like this:
// +
using System;
class Program
{
    public static void WorkingWithIntegers()
    {
        int a = 18;
        int b = 6;
        int c = a + b;
        Console.WriteLine(c);
        c = a - b;
        Console.WriteLine(c);
        c = a * b;
        Console.WriteLine(c);
        c = a / b;
        Console.WriteLine(c);
    }
    public static void OrderPrecedence()
    {
        int a = 5;
        int b = 4;
        int c = 2;
        int d = a + b * c;
        Console.WriteLine(d);
        d = (a + b) * c;
        Console.WriteLine(d);
        d = (a + b) - 6 * c + (12 * 4) / 3 + 12;
        Console.WriteLine(d);
        int e = 7;
        int f = 4;
        int g = 3;
        int h = (e + f) / g;
        Console.WriteLine(h);
    }
}
Program.WorkingWithIntegers();
Console.WriteLine(" ");
Program.OrderPrecedence();
// -
// ## Explore integer precision and limits
// That last sample showed you that integer division truncates the result.
// You can get the **remainder** by using the **modulo** operator, the `%` character. Try the following code:
int a = 7;
int b = 4;
int c = 3;
int d = (a + b) / c;
int e = (a + b) % c;
Console.WriteLine($"quotient: {d}");
Console.WriteLine($"remainder: {e}");
// The C# integer type differs from mathematical integers in one other
// way: the `int` type has minimum and maximum limits.
int max = int.MaxValue;
int min = int.MinValue;
Console.WriteLine($"The range of integers is {min} to {max}");
// If a calculation produces a value that exceeds those limits, you
// have an **underflow** or **overflow** condition. The answer appears
// to wrap from one limit to the other.
int what = max + 3;
Console.WriteLine($"An example of overflow: {what}");
// Notice that the answer is very close to the minimum (negative) integer. It's
// the same as `min + 2`.
// The addition operation **overflowed** the allowed values for integers.
// The answer is a very large negative number because an overflow "wraps around"
// from the largest possible integer value to the smallest.
//
// There are other numeric types with different limits and precision that you
// would use when the `int` type doesn't meet your needs. Let's explore those next.
//
// ## Work with the double type
//
// The `double` numeric type represents a double-precision floating point
// number. Those terms may be new to you. A **floating point** number is
// useful to represent non-integral numbers that may be very large or small
// in magnitude. **Double-precision** means that these numbers are stored
// using greater precision than **single-precision**. On modern computers,
// it is more common to use double precision than single precision numbers.
// Let's explore. Add the following code and see the result:
double a = 5;
double b = 4;
double c = 2;
double d = (a + b) / c;
Console.WriteLine(d);
// Notice that the answer includes the decimal portion of the quotient. Try a slightly
// more complicated expression with doubles:
double e = 19;
double f = 23;
double g = 8;
double h = (e + f) / g;
Console.WriteLine(h);
// The range of a double value is much greater than integer values. Try the following
// code below what you've written so far:
double max = double.MaxValue;
double min = double.MinValue;
Console.WriteLine($"The range of double is {min} to {max}");
// These values are printed out in scientific notation. The number to
// the left of the `E` is the significand. The number to the right is the exponent,
// as a power of 10.
//
// Just like decimal numbers in math, doubles in C# can have rounding errors. Try this code:
double third = 1.0 / 3.0;
Console.WriteLine(third);
// You know that `0.3` repeating is not exactly the same as `1/3`.
//
// ***Challenge***
//
// Try other calculations with large numbers, small numbers, multiplication
// and division using the `double` type. Try more complicated calculations.
//
// After you've spent some time with the challenge, take the code you've written
// and place it in a new method. Name that new method `WorkWithDoubles`.
//
// ## Work with fixed point types
//
// You've seen the basic numeric types in C#: integers and doubles. There is one
// other type to learn: the `decimal` type. The `decimal` type has a smaller
// range but greater precision than `double`. The term **fixed point** means
// that the decimal point (or binary point) doesn't move. Let's take a look:
decimal min = decimal.MinValue;
decimal max = decimal.MaxValue;
Console.WriteLine($"The range of the decimal type is {min} to {max}");
// Notice that the range is smaller than the `double` type. You can see the greater
// precision with the decimal type by trying the following code:
// +
double a = 1.0;
double b = 3.0;
Console.WriteLine(a / b);
decimal c = 1.0M;
decimal d = 3.0M;
Console.WriteLine(c / d);
// -
// The `M` suffix on the numbers is how you indicate that a constant should use the
// `decimal` type.
//
// Notice that the math using the decimal type has more digits to the right
// of the decimal point.
//
// ***Challenge***
//
// Now that you've seen the different numeric types, write code that calculates
// the area of a circle whose radius is 2.50 centimeters. Remember that the area of a circle
// is the radius squared multiplied by PI. One hint: .NET contains a constant
// for PI, <xref:System.Math.PI?displayProperty=nameWithType> that you can use for that value.
//
// You should get an answer between 19 and 20.
// You can check your answer by [looking at the finished sample code on GitHub](https://github.com/dotnet/samples/tree/master/csharp/numbers-quickstart/Program.cs#L104-L106)
//
// Try some other formulas if you'd like.
//
// ---
// # Branches and loops
//
// This section teaches you how to write code that examines variables and changes the execution path based on those variables. You write C# code and see the results of compiling and running it. The section contains a series of lessons that explore branching and looping constructs in C#. These lessons teach you the fundamentals of the C# language.
//
// ## Make decisions using the `if` statement
//
//
int a = 5;
int b = 6;
if (a + b > 10)
    Console.WriteLine("The answer is greater than 10.");
// Try this code by running it in your console window. You should see the message "The answer is greater than 10." printed to your console.
//
// Modify the declaration of `b` so that the sum is less than 10:
int b = 3;
if (a + b > 10)
    Console.WriteLine("The answer is greater than 10.");
// Because the answer is less than 10, nothing is printed. The **condition** you're testing is false. You don't have any code to execute because you've only
// written one of the possible branches for an `if` statement: the true branch.
//
// > As you explore C# (or any programming language), you'll
// > make mistakes when you write code. The compiler will
// > find and report the errors. Look closely at the error
// > output and the code that generated the error. The compiler
// > error can usually help you find the problem.
//
// This first sample shows the power of `if` and Boolean types. A *Boolean* is a variable that can have one of two values: `true` or `false`. C# defines a special type, `bool` for Boolean variables. The `if` statement checks the value of a `bool`. When the value is `true`, the statement following the `if` executes. Otherwise, it is skipped.
//
// This process of checking conditions and executing statements based on those conditions is very powerful.
//
// ## Make if and else work together
//
// To execute different code in both the true and false branches, you
// create an `else` branch that executes when the condition is false. Try this. Add the last two lines:
int a = 5;
int b = 3;
if (a + b > 10)
    Console.WriteLine("The answer is greater than 10");
else
    Console.WriteLine("The answer is not greater than 10");
// The statement following the `else` keyword executes only when the condition being tested is `false`. Combining `if` and `else` with Boolean conditions provides all the power you need to handle both a `true` and a `false` condition.
//
// > [!IMPORTANT]
// > The indentation under the `if` and `else` statements is for human readers.
// > The C# language doesn't treat indentation or white space as significant.
// > The statement following the `if` or `else` keyword will be executed based
// > on the condition. All the samples in this section follow a common
// > practice to indent lines based on the control flow of statements.
//
// Because indentation is not significant, you need to use `{` and `}` to
// indicate when you want more than one statement to be part of the block
// that executes conditionally. C# programmers typically use those braces
// on all `if` and `else` clauses. The following example is the same as the one you
// just created. Modify your code above to match the following code:
int a = 5;
int b = 3;
if (a + b > 10)
{
    Console.WriteLine("The answer is greater than 10");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
}
// > Through the rest of this section, the code samples all include the braces,
// > following accepted practices.
//
// You can test more complicated conditions.
// +
int c = 4;
if ((a + b + c > 10) && (a > b))
{
    Console.WriteLine("The answer is greater than 10");
    Console.WriteLine("And the first number is greater than the second");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
    Console.WriteLine("Or the first number is not greater than the second");
}
// -
// The `&&` represents "and". It means both conditions must be true to execute
// the statement in the true branch. These examples also show that you can have multiple
// statements in each conditional branch, provided you enclose them in `{` and `}`.
//
// You can also use `||` to represent "or". Add the following code after what you've written so far:
// + attributes={"classes": ["csharp"], "id": ""}
if ((a + b + c > 10) || (a > b))
{
    Console.WriteLine("The answer is greater than 10");
    Console.WriteLine("Or the first number is greater than the second");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
    Console.WriteLine("And the first number is not greater than the second");
}
// -
// You've finished the first step. Name your method `ExploreIf` and write a new class named `Program`. When you have finished, your code should look like this:
// +
using System;
class Program
{
    public static void ExploreIf()
    {
        int a = 5;
        int b = 3;
        int c = 4;
        if (a + b > 10)
        {
            Console.WriteLine("The answer is greater than 10");
        }
        else
        {
            Console.WriteLine("The answer is not greater than 10");
        }
        if ((a + b + c > 10) && (a > b))
        {
            Console.WriteLine("The answer is greater than 10");
            Console.WriteLine("And the first number is greater than the second");
        }
        else
        {
            Console.WriteLine("The answer is not greater than 10");
            Console.WriteLine("Or the first number is not greater than the second");
        }
        if ((a + b + c > 10) || (a > b))
        {
            Console.WriteLine("The answer is greater than 10");
            Console.WriteLine("Or the first number is greater than the second");
        }
        else
        {
            Console.WriteLine("The answer is not greater than 10");
            Console.WriteLine("And the first number is not greater than the second");
        }
    }
}
Program.ExploreIf();
// -
// ## Use loops to repeat operations
//
// In this section you use **loops** to repeat statements. Try
// this code in your `Main` method:
int counter = 0;
while (counter < 10)
{
    Console.WriteLine($"Hello World! The counter is {counter}");
    counter++;
}
// The `while` statement checks a condition and executes the statement or statement block
// following the `while`. It repeatedly checks the condition, executing those
// statements until the condition is false.
//
// There's one other new operator in this example. The `++` after
// the `counter` variable is the **increment** operator. It adds 1
// to the value of `counter` and stores that value in the `counter` variable.
//
// > Make sure that the `while` loop condition changes to
// > false as you execute the code. Otherwise, you create an
// > **infinite loop** where your program never ends.
//
// The `while` loop tests the condition before executing the code
// following the `while`. The `do` ... `while` loop executes the
// code first, and then checks the condition. The do while loop is shown in the following code:
// + attributes={"classes": ["csharp"], "id": ""}
counter = 0;
do
{
    Console.WriteLine($"Hello World! The counter is {counter}");
    counter++;
} while (counter < 10);
// -
// This `do` loop and the earlier `while` loop produce the same output.
//
// ## Work with the for loop
//
// The **for** loop is commonly used in C#.
for (int index = 0; index < 10; index++)
{
    Console.WriteLine($"Hello World! The index is {index}");
}
// This does the same work as the `while` loop and the `do` loop you've
// already used. The `for` statement has three parts that control
// how it works.
//
// The first part is the **for initializer**: `int index = 0;` declares
// that `index` is the loop variable, and sets its initial value to `0`.
//
// The middle part is the **for condition**: `index < 10` declares that this
// `for` loop continues to execute as long as the value of `index` is less than 10.
//
// The final part is the **for iterator**: `index++` specifies how to modify the loop
// variable after executing the block following the `for` statement. Here, it specifies
// that `index` should be incremented by 1 each time the block executes.
//
// Experiment with these yourself. Try each of the following:
//
// - Change the initializer to start at a different value.
// - Change the condition to stop at a different value.
//
// When you're done, let's move on to write some code yourself to
// use what you've learned.
//
// ## Combine branches and loops
//
// Now that you've seen the `if` statement and the looping
// constructs in the C# language, see if you can write C# code to
// find the sum of all integers 1 through 20 that are divisible
// by 3. Here are a few hints:
//
// - The `%` operator gives you the remainder of a division operation.
// - The `if` statement gives you the condition to see if a number should be part of the sum.
// - The `for` loop can help you repeat a series of steps for all the numbers 1 through 20.
//
// Try it yourself. Then check how you did. You should get 63 for an answer. You can see one possible answer by
// [viewing the completed code on GitHub](https://github.com/dotnet/samples/tree/master/csharp/branches-quickstart/Program.cs#L46-L54).
//
// You've completed the "branches and loops" section.
//
// # String interpolation
//
// This section teaches you how to use C# [string interpolation](../language-reference/tokens/interpolated.md) to insert values into a single result string. You write C# code and see the results of compiling and running it. The section contains a series of lessons that show you how to insert values into a string and format those values in different ways.
//
// ## Create an interpolated string
//
// If you are following along locally, create a directory named **interpolated**, make it the current directory, and run `dotnet new console` from a new console window.
// Open **Program.cs** in your favorite editor, and replace the line `Console.WriteLine("Hello World!");` with the following code, where you replace `<name>` with your name:
// + attributes={"classes": ["csharp"], "id": ""}
var name = "<name>";
Console.WriteLine($"Hello, {name}. It's a pleasure to meet you!");
// -
// When you run the program, it displays a single string that includes your name in the greeting. The string included in the <xref:System.Console.WriteLine%2A> method call is an *interpolated string*. It's a kind of template that lets you construct a single string (called the *result string*) from a string that includes embedded code. Interpolated strings are particularly useful for inserting values into a string or concatenating (joining together) strings.
//
// This simple example contains the two elements that every interpolated string must have:
//
// - A string literal that begins with the `$` character before its opening quotation mark character. There can't be any spaces between the `$` symbol and the quotation mark character. (If you'd like to see what happens if you include one, insert a space after the `$` character, save the file, and run the program again by typing `dotnet run` in the console window. The C# compiler displays an error message, "error CS1056: Unexpected character '$'".)
//
// - One or more *interpolated expressions*. An interpolated expression is indicated by an opening and closing brace (`{` and `}`). You can put any C# expression that returns a value (including `null`) inside the braces.
//
// Let's try a few more string interpolation examples with some other data types.
//
// ## Include different data types
//
// In the previous section, you used string interpolation to insert one string inside of another. The result of an interpolated expression can be of any data type, though. Let's include values of various data types in an interpolated string.
//
// In the following example, first, we define a [class](../programming-guide/classes-and-structs/classes.md) data type `Vegetable` that has the `Name` [property](../properties.md) and the `ToString` [method](../methods.md), which [overrides](../language-reference/keywords/override.md) the behavior of the <xref:System.Object.ToString?displayProperty=nameWithType> method. The [`public` access modifier](../language-reference/keywords/public.md) makes that method available to any client code to get the string representation of a `Vegetable` instance. In the example, the `Vegetable.ToString` method returns the value of the `Name` property that is initialized in the `Vegetable` [constructor](../programming-guide/classes-and-structs/constructors.md):
// Then we create an instance of the `Vegetable` class by using the [`new` keyword](../language-reference/keywords/new-operator.md) and providing a name argument for the `Vegetable` constructor:
// + attributes={"classes": ["csharp"], "id": ""}
var item = new Vegetable("eggplant");
// -
// Finally, we include the `item` variable into an interpolated string that also contains a <xref:System.DateTime> value, a <xref:System.Decimal> value, and a `Unit` [enumeration](../programming-guide/enumeration-types.md) value. Replace all of the C# code in your editor with the following code:
// + attributes={"classes": ["csharp"], "id": ""}
using System;
public class Vegetable
{
public Vegetable(string name) => Name = name;
public string Name { get; }
public override string ToString() => Name;
}
public enum Unit { item, pound, ounce, dozen };
var item = new Vegetable("eggplant");
var date = DateTime.Now;
var price = 1.99m;
var unit = Unit.item;
Console.WriteLine($"On {date}, the price of {item} was {price} per {unit}.");
// -
// Note that the interpolated expression `item` in the interpolated string resolves to the text "eggplant" in the result string. That's because, when the type of the expression result is not a string, the result is resolved to a string in the following way:
//
// - If the interpolated expression evaluates to `null`, an empty string ("", or <xref:System.String.Empty?displayProperty=nameWithType>) is used.
//
// - If the interpolated expression doesn't evaluate to `null`, typically the `ToString` method of the result type is called. You can test this by updating the implementation of the `Vegetable.ToString` method. You might not even need to implement the `ToString` method since every type has some implementation of this method. To test this, comment out the definition of the `Vegetable.ToString` method in the example (to do that, put a comment symbol, `//`, in front of it). In the output, the string "eggplant" is replaced by the fully qualified type name ("Vegetable" in this example), which is the default behavior of the <xref:System.Object.ToString?displayProperty=nameWithType> method. The default behavior of the `ToString` method for an enumeration value is to return the string representation of the value.
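You can see the `null` rule from the first bullet directly; interpolating a variable that holds `null` contributes an empty string to the result (a small illustrative cell):

```csharp
string noValue = null;
// The braces around noValue contribute nothing to the result string
Console.WriteLine($"Result: <{noValue}>");
// Prints: Result: <>
```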
//
// In the output from this example, the date is too precise (the price of eggplant doesn't change every second), and the price value doesn't indicate a unit of currency. In the next section, you'll learn how to fix those issues by controlling the format of string representations of the expression results.
//
// ## Control the formatting of interpolated expressions
//
// In the previous section, two poorly formatted strings were inserted into the result string. One was a date and time value for which only the date was appropriate. The second was a price that didn't indicate its unit of currency. Both issues are easy to address. String interpolation lets you specify *format strings* that control the formatting of particular types. Modify the call to `Console.WriteLine` from the previous example to include the format strings for the date and price expressions as shown in the following line:
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine($"On {date:d}, the price of {item} was {price:C2} per {unit}.");
// -
// You specify a format string by following the interpolated expression with a colon (":") and the format string. "d" is a standard date and time format string that represents the short date format. "C2" is a standard numeric format string that represents a number as a currency value with two digits after the decimal point.
//
// A number of types in the .NET libraries support a predefined set of format strings. These include all the numeric types and the date and time types.
//
// Try modifying the format strings in your text editor and, each time you make a change, rerun the program to see how the changes affect the formatting of the date and time and the numeric value. Change the "d" in `{date:d}` to "t" (to display the short time format), "y" (to display the year and month), and "yyyy" (to display the year as a four-digit number). Change the "C2" in `{price:C2}` to "e" (for exponential notation) and "F3" (for a numeric value with three digits after the decimal point).
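For instance, the variations suggested above can be compared side by side in a single cell; the exact output depends on your current culture and clock, so no sample output is shown:

```csharp
var date = DateTime.Now;
var price = 1.99m;
// Short time, year/month, and four-digit year formats for the same date
Console.WriteLine($"{date:t} | {date:y} | {date:yyyy}");
// Exponential and fixed-point (three decimal places) formats for the same price
Console.WriteLine($"{price:e} | {price:F3}");
```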
//
// In addition to controlling formatting, you can also control the field width and alignment of the formatted strings that are included in the result string. In the next section, you'll learn how to do this.
//
// ## Control the field width and alignment of interpolated expressions
//
// Ordinarily, when the result of an interpolated expression is formatted as a string, that string is included in the result string without leading or trailing spaces. Particularly when you work with a set of data, being able to control the field width and text alignment helps to produce more readable output. To see this, replace all the code in your text editor with the following code, then run the cell to execute the program:
// +
using System;
using System.Collections.Generic;
var titles = new Dictionary<string, string>()
{
["<NAME>"] = "Hound of the Baskervilles, The",
["<NAME>"] = "Call of the Wild, The",
["<NAME>"] = "Tempest, The"
};
Console.WriteLine("Author and Title List");
Console.WriteLine();
Console.WriteLine($"|{"Author",-25}|{"Title",30}|");
foreach (var title in titles)
Console.WriteLine($"|{title.Key,-25}|{title.Value,30}|");
// -
// The names of authors are left-aligned, and the titles they wrote are right-aligned. You specify the alignment by adding a comma (",") after an interpolated expression and designating the *minimum* field width. If the specified value is a positive number, the field is right-aligned. If it is a negative number, the field is left-aligned.
//
// Try removing the negative signs from the `{"Author",-25}` and `{title.Key,-25}` code and run the example again, as the following code does:
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine($"|{"Author",25}|{"Title",30}|");
foreach (var title in titles)
Console.WriteLine($"|{title.Key,25}|{title.Value,30}|");
// -
// This time, the author information is right-aligned.
//
// You can combine an alignment specifier and a format string for a single interpolated expression. To do that, specify the alignment first, followed by a colon and the format string. Replace all of the code in your editor with the following code, which displays three formatted strings with defined field widths, then run the cell:
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine($"[{DateTime.Now,-20:d}] Hour [{DateTime.Now,-10:HH}] [{1063.342,15:N2}] feet");
// -
// You've completed the string interpolation section.
//
// # C#: Collections
//
// This section provides an introduction to the C# language and the basics of the <xref:System.Collections.Generic.List%601>
// class.
//
// This section expects you to have a machine you can use for development. The .NET topic [Get Started in 10 minutes](https://www.microsoft.com/net/core) has instructions for setting up your local development environment on Mac, PC or Linux.
//
// ## A basic list example
// +
using System;
using System.Collections.Generic;
var names = new List<string> { "<name>", "Ana", "Felipe" };
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
// -
// Replace `<name>` with your name.
//
// You've just created a list of strings, added three names to that list, and printed out the names in all CAPS. You're using concepts that you've learned in earlier sections to loop through the list.
//
// The code to display names makes use of the string interpolation feature. When you precede a `string` with the `$` character, you can embed C# code in the string declaration. The actual string replaces that C# code with the value it generates. In this example, it replaces the `{name.ToUpper()}` with each name, converted to capital letters, because you called the <xref:System.String.ToUpper%2A> method.
//
// Let's keep exploring.
//
// ## Modify list contents
//
// The collection you created uses the <xref:System.Collections.Generic.List%601> type. This type stores sequences of elements. You specify the type of the elements between the angle brackets.
//
// One important aspect of this <xref:System.Collections.Generic.List%601> type is that it can grow or shrink, enabling you to add or remove elements.
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine();
names.Add("Maria");
names.Add("Bill");
names.Remove("Ana");
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
// -
// The <xref:System.Collections.Generic.List%601> enables you to reference individual items by **index**. You place the index between `[` and `]` tokens following the list name. C# uses 0 for the first index. Add this code directly below the code you just added and try it:
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine($"My name is {names[0]}");
Console.WriteLine($"I've added {names[2]} and {names[3]} to the list");
// -
// You cannot access an index beyond the end of the list. Remember that indices start at 0, so the largest valid index is one less than the number of items in the list. You can check how long the list is using the <xref:System.Collections.Generic.List%601.Count%2A> property. Add the following code:
// + attributes={"classes": ["csharp"], "id": ""}
Console.WriteLine($"The list has {names.Count} people in it");
// -
// Run the cell to see the results.
//
// ## Search and sort lists
//
// Our samples use relatively small lists, but your applications may often create lists with many more elements, sometimes numbering in the thousands. To find elements in these larger collections, you need to search the list for different items. The <xref:System.Collections.Generic.List%601.IndexOf%2A> method searches for an item and returns the index of the item. Add this code below the previous example:
// + attributes={"classes": ["csharp"], "id": ""}
var index = names.IndexOf("Felipe");
if (index == -1)
{
Console.WriteLine($"When an item is not found, IndexOf returns {index}");
} else
{
Console.WriteLine($"The name {names[index]} is at index {index}");
}
index = names.IndexOf("Not Found");
if (index == -1)
{
Console.WriteLine($"When an item is not found, IndexOf returns {index}");
} else
{
Console.WriteLine($"The name {names[index]} is at index {index}");
}
// -
// The items in your list can be sorted as well. The <xref:System.Collections.Generic.List%601.Sort%2A> method sorts all the items in the list in their normal order (alphabetically in the case of strings). Add this code to the bottom of our method:
// + attributes={"classes": ["csharp"], "id": ""}
names.Sort();
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
// -
// Before you start the next section, let's move the current code into a separate method. That makes it easier to start working with a new example. When you have finished, your code should look like this:
// + attributes={"classes": ["csharp"], "id": ""}
using System;
using System.Collections.Generic;
public static void WorkingWithStrings()
{
var names = new List<string> { "<name>", "Ana", "Felipe" };
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
Console.WriteLine();
names.Add("Maria");
names.Add("Bill");
names.Remove("Ana");
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
Console.WriteLine($"My name is {names[0]}");
Console.WriteLine($"I've added {names[2]} and {names[3]} to the list");
Console.WriteLine($"The list has {names.Count} people in it");
var index = names.IndexOf("Felipe");
Console.WriteLine($"The name {names[index]} is at index {index}");
var notFound = names.IndexOf("Not Found");
Console.WriteLine($"When an item is not found, IndexOf returns {notFound}");
names.Sort();
foreach (var name in names)
{
Console.WriteLine($"Hello {name.ToUpper()}!");
}
}
WorkingWithStrings();
// -
// ## Lists of other types
//
// You've been using the `string` type in lists so far. Let's make a <xref:System.Collections.Generic.List%601> using a different type. Let's build a set of numbers.
// + attributes={"classes": ["csharp"], "id": ""}
var fibonacciNumbers = new List<int> {1, 1};
// -
// That creates a list of integers, and sets the first two integers to the value 1. These are the first two values of a *Fibonacci sequence*, in which each subsequent number is the sum of the previous two. Add this code:
// + attributes={"classes": ["csharp"], "id": ""}
var previous = fibonacciNumbers[fibonacciNumbers.Count - 1];
var previous2 = fibonacciNumbers[fibonacciNumbers.Count - 2];
fibonacciNumbers.Add(previous + previous2);
foreach(var item in fibonacciNumbers)
Console.WriteLine(item);
// -
// ## Challenge
//
// See if you can put together some of the concepts from this and earlier lessons. Expand on what you've built so far with Fibonacci Numbers. Try to write the code to generate the first 20 numbers in the sequence. (As a hint, the 20th Fibonacci number is 6765.)
//
// ## Complete challenge
//
// You can see an example solution by [looking at the finished sample code on GitHub](https://github.com/dotnet/samples/tree/master/csharp/list-quickstart/Program.cs#L13-L23)
//
// With each iteration of the loop, you're taking the last two integers in the list, summing them, and adding that value to the list. The loop repeats until you've added 20 items to the list.
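The loop described above can be sketched like this (one possible solution; not necessarily identical to the linked sample):

```csharp
var fibonacciNumbers = new List<int> { 1, 1 };
// Keep appending the sum of the last two elements until the list holds 20 numbers
while (fibonacciNumbers.Count < 20)
{
    var previous = fibonacciNumbers[fibonacciNumbers.Count - 1];
    var previous2 = fibonacciNumbers[fibonacciNumbers.Count - 2];
    fibonacciNumbers.Add(previous + previous2);
}
Console.WriteLine(fibonacciNumbers[19]);
// Prints: 6765
```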
//
// Congratulations, you've completed the list section.
//
// # Introduction to classes
//
// ## Create your application
// In this section, you're going to create new types that represent a bank account. Typically developers define each class in a different text file. That makes it easier to manage as a program grows in size.
//
// This section contains the definition of a ***bank account***. Object Oriented programming organizes code by creating types in the form of ***classes***. These classes contain the code that represents a specific entity. The `BankAccount` class represents a bank account. The code implements specific operations through methods and properties. In this section, the bank account supports this behavior:
//
// 1. It has a 10-digit number that uniquely identifies the bank account.
// 1. It has a string that stores the name or names of the owners.
// 1. The balance can be retrieved.
// 1. It accepts deposits.
// 1. It accepts withdrawals.
// 1. The initial balance must be positive.
// 1. Withdrawals cannot result in a negative balance.
//
// ## Define the bank account type
//
// You can start by creating the basics of a class that defines that behavior. It would look like this:
// +
using System;
public class BankAccount
{
public BankAccount(string name, decimal initialBalance)
{
this.Owner = name;
this.Balance = initialBalance;
}
public string Number { get; }
public string Owner { get; set; }
public decimal Balance { get; }
public void MakeDeposit(decimal amount, DateTime date, string note)
{
}
public void MakeWithdrawal(decimal amount, DateTime date, string note)
{
}
}
// -
// `public class BankAccount` defines the class, or type, you are creating. Everything inside the `{` and `}` that follows the class declaration defines the behavior of the class. There are five ***members*** of the `BankAccount` class. The first three are ***properties***. Properties are data elements and can have code that enforces validation or other rules. The last two are ***methods***. Methods are blocks of code that perform a single function. Reading the names of each of the members should provide enough information for you or another developer to understand what the class does.
//
// ## Open a new account
//
// The first feature to implement is to open a bank account. When a customer opens an account, they must supply an initial balance, and information about the owner or owners of that account.
//
// Creating a new object of the `BankAccount` type means defining a ***constructor*** that assigns those values. A ***constructor*** is a member that has the same name as the class. It is used to initialize objects of that class type. Add the following constructor to the `BankAccount` type:
// + attributes={"classes": ["csharp"], "id": ""} active=""
// public BankAccount(string name, decimal initialBalance)
// {
// this.Owner = name;
// this.Balance = initialBalance;
// }
// -
// Constructors are called when you create an object using [`new`](../language-reference/keywords/new.md). Run the following code, where you replace `<name>` with your name:
// + attributes={"classes": ["csharp"], "id": ""}
var account = new BankAccount("<name>", 1000);
Console.WriteLine($"Account {account.Number} was created for {account.Owner} with {account.Balance} initial balance.");
// -
// Did you notice that the account number is blank? It's time to fix that. The account number should be assigned when the object is constructed. But it shouldn't be the responsibility of the caller to create it. The `BankAccount` class code should know how to assign new account numbers. A simple way to do this is to start with a 10-digit number. Increment it when each new account is created. Finally, store the current account number when an object is constructed.
//
// Add the following member declaration to the `BankAccount` class:
// + attributes={"classes": ["csharp"], "id": ""} active=""
// private static int accountNumberSeed = 1234567890;
// -
// This is a data member. It's `private`, which means it can only be accessed by code inside the `BankAccount` class. It's a way of separating the public responsibilities (like having an account number) from the private implementation (how account numbers are generated). Add the following two lines to the constructor to assign the account number:
// + attributes={"classes": ["csharp"], "id": ""} active=""
// this.Number = accountNumberSeed.ToString();
// accountNumberSeed++;
// +
public class BankAccount
{
private static int accountNumberSeed = 1234567890;
public BankAccount(string name, decimal initialBalance)
{
this.Owner = name;
this.Balance = initialBalance;
this.Number = accountNumberSeed.ToString();
accountNumberSeed++;
}
public string Number { get; }
public string Owner { get; set; }
public decimal Balance { get; }
public void MakeDeposit(decimal amount, DateTime date, string note)
{
// Try it on your own!
// When you are finished, you can compare your implementation with solutions :)
}
public void MakeWithdrawal(decimal amount, DateTime date, string note)
{
// Try it on your own!
// When you are finished, you can compare your implementation with solutions :)
}
}
// -
// ## Create deposits and withdrawals
//
// Your bank account class needs to accept deposits and withdrawals to work correctly. Let's implement deposits and withdrawals by creating a journal of every transaction for the account. That has a few advantages over simply updating the balance on each transaction. The history can be used to audit all transactions and manage daily balances. By computing the balance from the history of all transactions when needed, any errors in a single transaction that are fixed will be correctly reflected in the balance on the next computation.
//
// Let's start by creating a new type to represent a transaction. This is a simple type that doesn't have any responsibilities. It needs a few properties.
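A minimal sketch of such a `Transaction` type might look like the following (the property names here are illustrative, not necessarily those of the finished sample):

```csharp
public class Transaction
{
    // Each transaction records how much, when, and why
    public decimal Amount { get; }
    public DateTime Date { get; }
    public string Notes { get; }

    public Transaction(decimal amount, DateTime date, string note)
    {
        this.Amount = amount;
        this.Date = date;
        this.Notes = note;
    }
}
```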
//
//
// We will leave the implementation of the `MakeDeposit` and `MakeWithdrawal` methods to the reader.
// ## Next Steps
//
// If you got stuck, you can see the source for this notebook [in our GitHub repo](https://github.com/dotnet/samples/tree/master/csharp/classes-quickstart/)
| C# Tutorial/CSharpTutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.3 64-bit (''env'': venv)'
# name: python3
# ---
# + [markdown] id="amended-ratio"
# ## Book Price Prediction 📚
# + [markdown] id="gentle-spider"
# ### Introduction (Problem Definition)
# + [markdown] id="statewide-aside"
# Books are among the most important friends in one's life: they open doors to unimagined worlds, unique to every person. Reading is more than just a hobby for many, and there are many among us who prefer to spend more time with books than with anything else.
#
# Here we explore a big database of books. Books of different genres, from thousands of authors. In this project, we will use the dataset to build a Machine Learning model to predict the price of books based on a given set of features.
#
#
# FEATURES:
#
# * Title: The title of the book
# * Author: The author(s) of the book.
# * Edition: The edition of the book, e.g. Paperback
# * Item url: A link to the book on bookdepository.com
# * Image url: A link to the book's image
# * Publish date: The date of release, e.g. 26 Apr 2018
# * BookCategory: The department the book is usually available at.
# * Price: The price of the book (Target variable)
#
# ### Approach
#
# 1. Problem definition
# 2. Creating a dataset for the future model
# 3. Exploring The Data Sets, Cleaning, Processing
# 4. Selection of algorithm(Building A Regressor)
# 5. Evaluating the trained model
# 6. Deploying the model
# + id="seventh-midwest"
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import mean_squared_log_error
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error
from sklearn.compose import ColumnTransformer
import warnings
warnings.filterwarnings('ignore')
# + [markdown] id="impaired-worst"
# ### Creating a dataset for the future model
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="hidden-sapphire" outputId="56b4e6e4-5089-4c11-869b-82a2ca42fb5c"
# !pip install git+https://github.com/fortune-uwha/book_scraper
# + id="cross-depression"
from scraper.bookscraper import CleanBookScraper
# + colab={"base_uri": "https://localhost:8080/"} id="Jrg7BRZRZyEM" outputId="9c65fa06-12fe-4231-95d7-6598841cd823"
categories = ['romance','horror','thriller','health','crime','sports','humour', 'philosophy','medical']
examples_to_scrape = 100
data_categories = []
for category in categories:
    scraper = CleanBookScraper(examples_to_scrape, category)
    data = scraper.clean_dataframe()
    data_categories.append(data)
data_categories = pd.concat(data_categories)
data_categories.to_csv('books.csv', index=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 823} id="alleged-beauty" outputId="93b17ef0-f25f-4a2e-c66e-e420be0bef44"
## Loading dataset for future model
books = pd.read_csv('books.csv')
books
# + [markdown] id="m6ye7S8BdvtT"
# ### Data preprocessing
# + id="ongoing-legislation"
books['price'] = pd.to_numeric(books['price'], errors='coerce')
books = books.dropna(subset=['price'])
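`errors='coerce'` turns any price string that cannot be parsed into `NaN`, which the `dropna` call then removes; a tiny stand-alone illustration:

```python
import pandas as pd

# Strings that cannot be parsed as numbers become NaN under errors="coerce"
prices = pd.to_numeric(pd.Series(["1.99", "n/a", "12.50"]), errors="coerce")
print(prices.dropna().tolist())  # [1.99, 12.5]
```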
# + id="right-number"
## Dropping Title, Item url, Image url, Publish date, Released year
books.drop(['item_url', 'image_url', 'title', 'publish_date', 'released_year'], axis=1, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="TBz-Lp6WeN7D" outputId="d24e067f-8fac-4c27-80e5-1604fe6e76c8"
import matplotlib.pyplot as plt
plt.title('Price Comparison with Book Category', size=20)
plt.barh(books.category,books.price)
plt.show()
# + [markdown] id="MY0VEmcde9_E"
# Turns out the most expensive books come from the sports category
# + [markdown] id="Ybhf-kDoesmH"
# ### Train the model (Build a regressor)
# + id="spatial-collar"
target = books['price'].reset_index()
target = target.drop(["index"], axis=1).astype(float)
features = books.drop(["price"], axis=1)
# + id="casual-tomato"
X_train, X_test, y_train, y_test = train_test_split(
features, target, test_size=0.2, random_state=42
)
# + [markdown] id="8PgjbU8AfHBX"
# #### Create a pipeline
# + colab={"base_uri": "https://localhost:8080/"} id="transsexual-strand" outputId="ec32df86-9bb0-46fc-c5e8-3764dd4f5116"
features_to_encode = ["author", "edition", "category"]
preprocessor = ColumnTransformer([('encoder', OneHotEncoder(handle_unknown="ignore"),features_to_encode)], remainder='passthrough')
preprocessor
# + colab={"base_uri": "https://localhost:8080/"} id="sought-plaintiff" outputId="52f741d8-d2ae-4a41-d75f-ce5678433b21"
rfr = RandomForestRegressor(random_state = 101, n_estimators = 50)
pipe = Pipeline([('preprocessor', preprocessor),('model', rfr)])
pipe.fit(X_train, y_train)
# + id="funny-injury"
# Predict the value of the book on the test subset
y_pred = pipe.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="baking-customer" outputId="93fe24ca-2338-48ed-a88e-e2685a7b63dd"
print('RMSLE:', np.sqrt(mean_squared_log_error(abs(y_test), abs(y_pred))))
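RMSLE measures relative rather than absolute error, which suits prices that span a wide range. The same metric can be written directly with numpy (toy values, purely illustrative):

```python
import numpy as np

# RMSLE: root mean squared error of log(1 + x), computed on toy values
y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 3.0])
rmsle = np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))
print(rmsle)
```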
# + [markdown] id="caroline-introduction"
# ### Saving the model
# + id="scientific-gallery"
import pickle
# + id="graduate-proportion"
with open("pipe.pkl", "wb") as f:
pickle.dump(pipe, f)
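Later, the pipeline can be restored with `pickle.load` and used for predictions. A minimal round-trip sketch, shown here with a stand-in dictionary since the fitted pipeline object itself is not recreated in isolation:

```python
import os
import pickle
import tempfile

# Stand-in for the fitted pipeline object (illustrative only)
model = {"name": "rfr-pipeline", "n_estimators": 50}

path = os.path.join(tempfile.mkdtemp(), "pipe.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == model)  # True
```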
# + id="alpine-bangladesh"
| Predict_book_price_model_training_notebook.ipynb |
/ -*- coding: utf-8 -*-
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + cell_id="00000-1c4f85ea-abc0-4f9b-84fd-6ea56311ba3e" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=5419 execution_start=1638480454866 source_hash="a2a14f" tags=[]
import pandas as pd
import numpy as np
import tensorflow as tf
import io
import re
/ + [markdown] cell_id="00001-e3466362-c409-49ae-b7f1-5a981625111e" deepnote_cell_type="markdown" tags=[]
/ ### Loading and _cleaning_ of the non-toxic tweets
/ + cell_id="00003-4555dd05-3d32-46d3-99df-227109c99c1c" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=757 execution_start=1638485173281 source_hash="daca8940" tags=[]
path_to_txt = '/datasets/toxic-dataset/non_toxic_tweets.txt'
with io.open(path_to_txt, encoding='utf-8') as f:
    text = f.read().lower()
print('corpus length:', len(text))
# keep letters, digits, spaces and the \n; drop everything else
clean_text = re.sub(r'[^A-Za-z0-9 \n]+', '', text)
# unique characters
chars = sorted(list(set(clean_text)))
print('total chars:', len(chars))
# to make the conversion
char_to_indices = dict((c, i) for i, c in enumerate(chars))
indices_to_char = dict((i, c) for i, c in enumerate(chars))
/ + cell_id="00003-11172035-26ed-4b41-9b23-d0fcbfd5d34a" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=15 execution_start=1638485174037 source_hash="eec34095" tags=[]
clean_text[0]
/ + cell_id="00009-bf6d953a-60dd-4771-acef-d90b6c15f03e" deepnote_cell_type="code" deepnote_output_heights=[520.5] deepnote_to_be_reexecuted=false execution_millis=1845 execution_start=1638485174898 source_hash="2a8dfb19" tags=[]
# cut the text in semi-redundant sequences of maxlen characters
MAXLEN = 30
WINDOWS_STEP = 3
ADDITIONAL_CHARS = 1
sentences = []
next_chars = []
# sentences will act as 'X' and next_chars 'y'
# so it will be like this
# ...clean tex | t
# ...the sente | c
# ...from covi | d
# etc etc etc
for i in range(0, len(clean_text) - MAXLEN, WINDOWS_STEP):
    sentences.append(clean_text[i: i + MAXLEN])
    next_chars.append(clean_text[i + MAXLEN: i + MAXLEN + ADDITIONAL_CHARS])
print('nb sequences:', len(sentences))
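The windowing above can be illustrated on a short string; a simplified sketch (predicting a single next character, with MAXLEN=5 and a step of 2):

```python
# Simplified version of the windowing loop above
text = "hello world"
maxlen, step = 5, 2
pairs = [(text[i:i + maxlen], text[i + maxlen]) for i in range(0, len(text) - maxlen, step)]
print(pairs[0])  # ('hello', ' ')
```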
/ + cell_id="00005-76d70d69-e747-4491-bf9d-8c8fe93f91c0" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=4424050 execution_start=1638485177300 source_hash="dceddbb7" tags=[]
sentences[99]
/ + cell_id="00006-36cddcaa-0708-4c4e-ad7a-93fb3fc2462b" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=5 execution_start=1638485181084 source_hash="f4faaef4" tags=[]
next_chars[99]
/ + cell_id="00013-4286e4bb-fa9a-49b2-ba50-8476b0e78170" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=0 execution_start=1638485183795 source_hash="d8b95ce2" tags=[]
def convert_string_to_int(string):
    '''
    Receives a single string and returns a numpy array of all
    its characters converted to integers.
    '''
    list_of_ints = [char_to_indices[ch] for ch in string]
    return np.array(list_of_ints)

def convert_int_to_string(list_of_ints):
    '''
    Receives an array of integers and returns the string whose
    characters they encode.
    '''
    string = ''.join([indices_to_char[integ] for integ in list_of_ints])
    return string
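As a quick sanity check, the two helpers invert each other; here is a self-contained round trip with a tiny vocabulary built the same way as above:

```python
import numpy as np

# Tiny vocabulary built the same way as in the cell above
chars = sorted(set("hello world"))
char_to_indices = dict((c, i) for i, c in enumerate(chars))
indices_to_char = dict((i, c) for i, c in enumerate(chars))

encoded = np.array([char_to_indices[ch] for ch in "hello"])
decoded = ''.join(indices_to_char[i] for i in encoded)
print(decoded)  # hello
```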
/ + cell_id="00010-d1847633-3ea0-4dfb-95b2-b77a4f2ac509" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=390 execution_start=1638485196768 source_hash="ab204f5d" tags=[]
# this one-hot encodes the sentences, which is not ideal bc of the resources it consumes
# X = np.zeros((len(sentences), MAXLEN, len(chars)), dtype=np.bool)
# y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
# for i, sentence in enumerate(sentences):
# for t, char in enumerate(sentence):
# X[i, t, char_to_indices[char]] = 1
# y[i, char_to_indices[next_chars[i]]] = 1
X = [convert_string_to_int(stri) for stri in sentences]
/ + cell_id="00008-598f2fa7-7b29-4625-b96f-20c9ed001d5e" deepnote_cell_type="code" deepnote_output_heights=[131.625] deepnote_to_be_reexecuted=false execution_millis=4 execution_start=1638480801688 source_hash="cb2f9520" tags=[]
sentences_vect = np.array(X)
/ + cell_id="00008-e834d0af-d845-4c89-b505-db73075c8f74" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=9052 execution_start=1638480605809 source_hash="e49efea8" tags=[]
lalayer = tf.keras.layers.StringLookup(vocabulary=chars)
# split each sentence into characters before looking up their ids
sentenciado = lalayer(tf.strings.unicode_split(sentences, 'UTF-8'))
/ + cell_id="00010-0150015e-d9d0-48b3-8102-bb7d3ab59534" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=8 execution_start=1638480634511 source_hash="6f0523a5" tags=[]
sentenciado[0:10]
/ + cell_id="00011-8a4df583-28f5-41d1-b72b-b0e3fc64c741" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=1 execution_start=1638480833009 source_hash="d89f3016" tags=[]
tweet_prueba = 'el covichito es malo y te odio y odio a todo el mundo y esto es un tweet tóxito'
/ + cell_id="00012-b363ba59-be67-4a89-867a-ae86f34ebca9" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=52 execution_start=1638481045221 source_hash="f0e4bc67" tags=[]
tweet_prueba[:30]
/ + cell_id="00013-5e47c9f8-4079-4294-ac45-7a888c0922c7" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=8 execution_start=1638481050955 source_hash="1ed23de" tags=[]
print(char_to_indices)
/ + cell_id="00015-78edf0f9-adff-481a-9c4a-a6bd2e0d87af" deepnote_cell_type="code" deepnote_output_heights=[20.5] deepnote_to_be_reexecuted=false execution_millis=2 execution_start=1638481176095 source_hash="fc4fef65" tags=[]
# convert_int_to_string(convertio)
/ + cell_id="00015-aafa5a53-d93d-4ef9-80df-2e3541a3a0bf" deepnote_cell_type="code" tags=[]
| 03_NLP_generate_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import matplotlib.pyplot as plt
import scipy.stats as st
import seaborn as sns
import pandas as pd
from scipy.stats import norm, uniform, expon, t
from scipy.integrate import quad
from sympy.solvers import solve
from sympy import Symbol
import numpy as np
from pandas import Series, DataFrame
# + pycharm={"is_executing": false, "name": "#%%\n"}
fuellungen = Series([71, 69, 67, 68, 73, 72, 71, 71, 68, 72, 69, 72])
# -
fuellungen.mean()
fuellungen_standardisiert = (fuellungen - fuellungen.mean()) / fuellungen.std()
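# Standardising means subtracting the mean and dividing by the standard
# deviation. A plain-Python sketch with the same fill values (using the sample
# standard deviation, ddof = 1, which is what pandas' `Series.std()` uses):

```python
data = [71, 69, 67, 68, 73, 72, 71, 71, 68, 72, 69, 72]
mean = sum(data) / len(data)
# sample variance (ddof=1), matching pandas' Series.std() default
var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
std = var ** 0.5
standardised = [(x - mean) / std for x in data]
```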
| Lernphase/SW06/.ipynb_checkpoints/Aufgaben-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# DAY 05 - Mar 1, 2017
# +
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
import math
import os
# +
# %matplotlib inline
plt.rcParams["figure.figsize"] = [10,6]
# -
# ## Digit Recognizer
#
# Another Kaggle competition: [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer/data)
#
# The goal in this competition is to take an image of a handwritten single digit, and determine what that digit is.
# +
input_dir = "./data/"
# Load my data
train_file = os.path.join(input_dir, "train.csv")
test_file = os.path.join(input_dir, "test.csv")
train = pd.read_csv(train_file)
test = pd.read_csv(test_file)
# +
# Inspect data
dim = train.shape
print(dim)
train.head()
# -
# The features
features = train.columns[1:]
n_features = len(features)
# +
# Create different data sets i.e. for training & validating
train_num = int(dim[0] * 0.8)
val_num = dim[0] - train_num
X_train, y_train = train.iloc[:train_num,1:], train.iloc[:train_num,0]
X_val, y_val = train.iloc[train_num:,1:], train.iloc[train_num:,0]
print("feature:", n_features)
print("train:", train_num)
print("valid:", val_num)
# -
# ### Random Forest Classifier
#
# Reference: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.fit
#
# ### How many trees?
#
# Here we train on `n = 10,20,30,...100` trees
# +
n_trees = range(10,101,10)
rfc = {}
acc = {}
for n in n_trees:
print("N =", n, "...")
rfc[n] = RandomForestClassifier(n_estimators = n)
rfc[n].fit(X_train, y_train)
acc[n] = sum(rfc[n].predict(X_val)==y_val)/len(y_val)
# -
df_acc = pd.DataFrame(acc, index=["accuracy"]).transpose()
df_acc["num_trees"] = df_acc.index
df_acc
# The difference in accuracy is not that great from 30 to 100 trees. We next show the same data in a figure.
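# One simple rule for picking the tree count is the smallest `n` whose accuracy
# is within a tolerance of the best. A sketch with hypothetical accuracies (the
# `acc` values below are made up for illustration, not the real run):

```python
# hypothetical accuracies per number of trees
acc = {10: 0.938, 20: 0.949, 30: 0.957, 40: 0.958, 50: 0.959,
       60: 0.960, 70: 0.960, 80: 0.961, 90: 0.961, 100: 0.962}
best = max(acc.values())
tolerance = 0.005
# smallest forest whose accuracy is within `tolerance` of the best
chosen = min(n for n, a in acc.items() if a >= best - tolerance)
```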
# +
ax = df_acc.plot(legend=False)
df_acc.plot.scatter(x="num_trees", y="accuracy", legend=False, ax=ax)
n_tree = 30
plt.scatter(x=[n_tree], y=[acc[n_tree]], color="red")
plt.hlines(acc[n_tree], xmin=10,xmax=100, color="#5E5852")
plt.xlim(10,100)
plt.ylim(.9, 1)
# +
pd.DataFrame({
"feature":train.columns[1:],
    "importance":rfc[n_tree].feature_importances_  # use the chosen 30-tree model
}).plot()
plt.title('Features')
plt.xlabel('Pixel')
plt.ylabel('Importance')
# -
# ### Evaluation
# Predictions with 30 trees
test_yhat = rfc[30].predict(test)
# +
pd.DataFrame({"ImageId":test.index+1, "Label":test_yhat}).to_csv("digitrecognizer-rf_submission.csv", index=False)
# !head digitrecognizer-rf_submission.csv
# -
# Public score of "0.95871"
| digit_recognizer/DigitRecognizer-randomforest.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Apache Toree - Scala
// language: scala
// name: apache_toree_scala
// ---
// # Basic Transformations
//
// ## Overview of Basic Transformations
//
// Let us define problem statements to learn more about Data Frame APIs. We will try to cover filtering, aggregations and sorting as part of solutions for these problem statements.
// * Get total number of flights as well as number of flights which are delayed in departure and number of flights delayed in arrival.
// * Output should contain 3 columns - **FlightCount**, **DepDelayedCount**, **ArrDelayedCount**
// * Get number of flights which are delayed in departure and number of flights delayed in arrival for each day along with number of flights departed for each day.
// * Output should contain 4 columns - **FlightDate**, **FlightCount**, **DepDelayedCount**, **ArrDelayedCount**
// * **FlightDate** should be of **YYYY-MM-dd** format.
// * Data should be **sorted** in ascending order by **flightDate**
// * Get all the flights which are departed late but arrived early (**IsArrDelayed is NO**).
// * Output should contain - **FlightCRSDepTime**, **UniqueCarrier**, **FlightNum**, **Origin**, **Dest**, **DepDelay**, **ArrDelay**
// * **FlightCRSDepTime** need to be computed using **Year**, **Month**, **DayOfMonth**, **CRSDepTime**
// * **FlightCRSDepTime** should be displayed using **YYYY-MM-dd HH:mm** format.
// * Output should be sorted by **FlightCRSDepTime** and then by the difference between **DepDelay** and **ArrDelay**
// * Also get the count of such flights
// ## Starting Spark Context
//
// Let us start spark context for this Notebook so that we can execute the code provided.
import org.apache.spark.sql.SparkSession
val spark = SparkSession.
builder.
config("spark.ui.port", "0").
appName("Basic Transformations").
master("yarn").
getOrCreate
// ## Overview of Filtering
// Let us understand a few important details related to filtering before we get into the solution.
// * Filtering can be done either by using `filter` or `where`. These are like synonyms to each other.
// * When it comes to the condition, we can either pass it in **SQL Style** or **Data Frame Style**.
//   * Example for SQL Style - `airlines.filter("IsArrDelayed = 'YES'").show()` or `airlines.where("IsArrDelayed = 'YES'").show()`
//   * Example for Data Frame Style - `airlines.filter(airlines("IsArrDelayed") === "YES").show()` or `airlines.filter($"IsArrDelayed" === "YES").show()`. We can also use `where` instead of `filter`.
// * Here are the other operations we can perform to filter the data - `!=`, `>`, `<`, `>=`, `<=`, `LIKE`, `BETWEEN` with `AND`
// * If we have to validate against multiple columns then we need to use boolean operations such as `AND` and `OR`.
// * If we have to compare each column value with multiple values then we can use the `IN` operator.
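// The same predicates (`AND`, `>`, `IN`, `BETWEEN`) can be sketched outside
// Spark on plain Python dictionaries — an illustrative aside with made-up rows,
// not part of the Spark solution:

```python
rows = [
    {"Origin": "ORD", "DepDelay": 75, "FlightDate": "20080103"},
    {"Origin": "BOS", "DepDelay": 90, "FlightDate": "20080115"},
    {"Origin": "ATL", "DepDelay": 10, "FlightDate": "20080105"},
]
majors = {"ORD", "DFW", "ATL", "LAX", "SFO"}                      # IN
late_major = [r for r in rows
              if r["Origin"] in majors and r["DepDelay"] > 60]    # AND, >
first_week = [r for r in rows
              if "20080101" <= r["FlightDate"] <= "20080109"]     # BETWEEN
```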
//
// ### Tasks
//
// Let us perform some tasks to understand filtering in detail. Solve all the problems by passing conditions using both SQL Style as well as API Style.
//
// * Read the data for the month of 2008 January.
// + pycharm={"name": "#%%\n"}
val airlines_path = "/public/airlines_all/airlines-part/flightmonth=200801"
// + pycharm={"name": "#%%\n"}
val airlines = spark.
read.
parquet(airlines_path)
// + pycharm={"name": "#%%\n"}
airlines.printSchema
// -
// * Get count of flights which are departed late at origin and reach destination early or on time.
//
// + pycharm={"name": "#%%\n"}
airlines.
filter("IsDepDelayed = 'YES' AND IsArrDelayed = 'NO'").
count
// -
// * API Style
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.col
// + pycharm={"name": "#%%\n"}
airlines.
filter(col("IsDepDelayed") === "YES" and
col("IsArrDelayed") === "NO"
).
count
// -
airlines.
filter(airlines("IsDepDelayed") === "YES" and
airlines("IsArrDelayed") === "NO"
).
count
// * Get count of flights which are departed late from origin by more than 60 minutes.
//
// + pycharm={"name": "#%%\n"}
airlines.
filter("DepDelay > 60").
count
// -
//
// * API Style
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.col
// + pycharm={"name": "#%%\n"}
airlines.
filter(col("DepDelay") > 60).
count
// -
// * Get count of flights which are departed early or on time but arrive late by at least 15 minutes.
//
// + pycharm={"name": "#%%\n"}
airlines.
filter("IsDepDelayed = 'NO' AND ArrDelay >= 15").
count
// -
// * API Style
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.col
// + pycharm={"name": "#%%\n"}
airlines.
filter(col("IsDepDelayed") === "NO" and col("ArrDelay") >= 15).
count()
// -
// * Get count of flights departed from following major airports - ORD, DFW, ATL, LAX, SFO.
airlines.count
// + pycharm={"name": "#%% \n"}
airlines.
filter("Origin IN ('ORD', 'DFW', 'ATL', 'LAX', 'SFO')").
count
// -
// * API Style
import org.apache.spark.sql.functions.col
// + pycharm={"name": "#%%\n"}
airlines.
filter(col("Origin").isin("ORD", "DFW", "ATL", "LAX", "SFO")).
count
// -
// * Add a column FlightDate by using Year, Month and DayOfMonth. Format should be **yyyyMMdd**.
//
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
// + pycharm={"name": "#%%\n"}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
show
// -
// * Get count of flights departed late between 2008 January 1st to January 9th using FlightDate.
//
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
// + pycharm={"name": "#%%\n"}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter("IsDepDelayed = 'YES' AND FlightDate LIKE '2008010%'").
count
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
// + pycharm={"name": "#%%\n"}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter("""
IsDepDelayed = "YES" AND
FlightDate BETWEEN 20080101 AND 20080109
""").
count
// -
// * API Style
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
// + pycharm={"name": "#%%\n"}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter(col("IsDepDelayed") === "YES" and
(col("FlightDate") like ("2008010%"))
).
count
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter(col("IsDepDelayed") === "YES" and
(col("FlightDate") between ("20080101", "20080109"))
).
count
// -
// * Get number of flights departed late on Sundays.
import spark.implicits._
val l = List("X")
val df = l.toDF("dummy")
import org.apache.spark.sql.functions.current_date
df.select(current_date).show
import org.apache.spark.sql.functions.date_format
df.select(current_date, date_format(current_date, "EE")).show
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter("""
IsDepDelayed = "YES" AND
date_format(to_date(FlightDate, "yyyyMMdd"), "EEEE") = "Sunday"
""").
count
// -
// * API Style
import spark.implicits._
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{col, concat, lpad, date_format, to_date}
airlines.
withColumn("FlightDate",
concat(col("Year"),
lpad(col("Month"), 2, "0"),
lpad(col("DayOfMonth"), 2, "0")
)
).
filter(col("IsDepDelayed") === "YES" and
date_format(
to_date($"FlightDate", "yyyyMMdd"), "EEEE"
) === "Sunday"
).
count
// -
// ## Overview of Aggregations
//
// Let us go through the details related to aggregation using Spark.
//
// * We can perform total aggregations directly on Dataframe or we can perform aggregations after grouping by a key(s).
// * Here are the APIs which we typically use to group the data using a key.
// * groupBy
// * rollup
// * cube
// * Here are the functions which we typically use to perform aggregations.
// * count
// * sum, avg
// * min, max
// * If we want to provide aliases to the aggregated fields then we have to use `agg` after `groupBy`.
// * Let us get the count of flights for each day for the month of 200801.
val airlines_path = "/public/airlines_all/airlines-part/flightmonth=200801"
val airlines = spark.
read.
parquet(airlines_path)
import org.apache.spark.sql.functions.{concat, lpad, count, lit}
import spark.implicits._
airlines.
groupBy(concat($"year",
lpad($"Month", 2, "0"),
lpad($"DayOfMonth", 2, "0")
).alias("FlightDate")
).
agg(count(lit(1)).alias("FlightCount")).
show
// ## Overview of Sorting
//
// Let us understand how to sort the data in a Data Frame.
// * We can use `orderBy` or `sort` to sort the data.
// * We can perform composite sorting by passing multiple columns or expressions.
// * By default data is sorted in ascending order; we can change it to descending by applying the `desc` function on the column or expression.
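// Outside Spark, the same composite sort (ascending on one key, descending on
// another) can be sketched with Python's `sorted` and a key tuple; the flight
// rows below are illustrative:

```python
flights = [
    {"FlightDate": "2008-01-02", "FlightCount": 500},
    {"FlightDate": "2008-01-01", "FlightCount": 700},
    {"FlightDate": "2008-01-01", "FlightCount": 300},
]
# ascending by date, then descending by count (negate the numeric key)
ordered = sorted(flights, key=lambda f: (f["FlightDate"], -f["FlightCount"]))
```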
// ## Solutions - Problem 1
// Get total number of flights as well as number of flights which are delayed in departure and number of flights delayed in arrival.
// * Output should contain 3 columns - **FlightCount**, **DepDelayedCount**, **ArrDelayedCount**
//
// ### Reading airlines data
// + pycharm={"name": "#%%\n"}
val airlines_path = "/public/airlines_all/airlines-part/flightmonth=200801"
// + pycharm={"name": "#%%\n"}
val airlines = spark.
read.
parquet(airlines_path)
// + pycharm={"name": "#%%\n"}
airlines.printSchema
// -
// ### Get flights with delayed arrival
// + pycharm={"name": "#%%\n"}
// SQL Style
airlines.filter("IsArrDelayed = 'YES'").show
// + pycharm={"name": "#%%\n"}
// Data Frame Style
airlines.filter(airlines("IsArrDelayed") === "YES").show
// -
import spark.implicits._
// + pycharm={"name": "#%%\n"}
airlines.filter($"IsArrDelayed" === "YES").show
// -
// ### Get delayed counts
// Departure Delayed Count
airlines.
filter(airlines("IsDepDelayed") === "YES").
count
// Arrival Delayed Count
airlines.
filter(airlines("IsArrDelayed") === "YES").
count()
airlines.
filter("IsDepDelayed = 'YES' OR IsArrDelayed = 'YES'").
select("Year", "Month", "DayOfMonth",
"FlightNum", "IsDepDelayed", "IsArrDelayed"
).
show
// Both Departure Delayed and Arrival Delayed
import org.apache.spark.sql.functions.{col, lit, count, sum, expr}
airlines.
agg(count(lit(1)).alias("FlightCount"),
sum(expr("CASE WHEN IsDepDelayed = 'YES' THEN 1 ELSE 0 END")).alias("DepDelayedCount"),
sum(expr("CASE WHEN IsArrDelayed = 'YES' THEN 1 ELSE 0 END")).alias("ArrDelayedCount")
).
show
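// The `CASE WHEN` sums above are just conditional counts. In plain Python over
// illustrative rows:

```python
flights = [
    {"IsDepDelayed": "YES", "IsArrDelayed": "NO"},
    {"IsDepDelayed": "YES", "IsArrDelayed": "YES"},
    {"IsDepDelayed": "NO",  "IsArrDelayed": "NO"},
]
flight_count = len(flights)
# sum of 1-if-condition-else-0, exactly what the CASE WHEN expressions compute
dep_delayed_count = sum(1 for f in flights if f["IsDepDelayed"] == "YES")
arr_delayed_count = sum(1 for f in flights if f["IsArrDelayed"] == "YES")
```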
// + [markdown] pycharm={"name": "#%% md\n"}
// ## Solutions - Problem 2
//
// Get number of flights which are delayed in departure and number of flights delayed in arrival for each day along with number of flights departed for each day.
//
// * Output should contain 4 columns - **FlightDate**, **FlightCount**, **DepDelayedCount**, **ArrDelayedCount**
// * **FlightDate** should be of **YYYY-MM-dd** format.
// * Data should be **sorted** in ascending order by **flightDate**
//
// ### Grouping Data by Flight Date
//
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{lit, concat, lpad}
// + pycharm={"name": "#%%\n"}
import spark.implicits._
// -
// * Example of `groupBy`. It should be followed by `agg`. **Running the code below on its own does not produce results; it only returns a `RelationalGroupedDataset`, so an aggregation such as `agg` or `count` must follow.**
// + pycharm={"name": "#%%\n"}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate"))
// -
// ### Getting Counts by FlightDate
// +
import org.apache.spark.sql.functions.{lit, concat, lpad, count}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate")).
agg(count(lit(1)).alias("FlightCount")).
show(31)
// -
// Alternative to get the count without using agg.
// We will not be able to provide an alias for the aggregated field this way.
import org.apache.spark.sql.functions.{lit, concat, lpad}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate")).
count.
show(31)
// ### Getting total as well as delayed counts for each day
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{lit, concat, lpad, count, sum, expr}
// + pycharm={"name": "#%%\n"}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate")).
agg(count(lit(1)).alias("FlightCount"),
sum(expr("CASE WHEN IsDepDelayed = 'YES' THEN 1 ELSE 0 END")).alias("DepDelayedCount"),
sum(expr("CASE WHEN IsArrDelayed = 'YES' THEN 1 ELSE 0 END")).alias("ArrDelayedCount")
).
show
// -
// ### Sorting Data By FlightDate
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{lit, concat, lpad, sum, expr}
// + pycharm={"name": "#%%\n"}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate")).
agg(count(lit(1)).alias("FlightCount"),
sum(expr("CASE WHEN IsDepDelayed = 'YES' THEN 1 ELSE 0 END")).alias("DepDelayedCount"),
sum(expr("CASE WHEN IsArrDelayed = 'YES' THEN 1 ELSE 0 END")).alias("ArrDelayedCount")
).
orderBy("FlightDate").
show(31)
// -
// ### Sorting Data in descending order by count
import org.apache.spark.sql.functions.{lit, concat, lpad, sum, expr, col}
airlines.
groupBy(concat($"Year", lit("-"),
lpad($"Month", 2, "0"), lit("-"),
lpad($"DayOfMonth", 2, "0")).
alias("FlightDate")).
agg(count(lit(1)).alias("FlightCount"),
sum(expr("CASE WHEN IsDepDelayed = 'YES' THEN 1 ELSE 0 END")).alias("DepDelayedCount"),
sum(expr("CASE WHEN IsArrDelayed = 'YES' THEN 1 ELSE 0 END")).alias("ArrDelayedCount")
).
orderBy(col("FlightCount").desc).
show(31)
// ## Solutions - Problem 3
// Get all the flights which are departed late but arrived early (**IsArrDelayed is NO**).
// * Output should contain - **FlightCRSDepTime**, **UniqueCarrier**, **FlightNum**, **Origin**, **Dest**, **DepDelay**, **ArrDelay**
// * **FlightCRSDepTime** need to be computed using **Year**, **Month**, **DayOfMonth**, **CRSDepTime**
// * **FlightCRSDepTime** should be displayed using **YYYY-MM-dd HH:mm** format.
// * Output should be sorted by **FlightCRSDepTime** and then by the difference between **DepDelay** and **ArrDelay**
// * Also get the count of such flights
//
airlines.select("Year", "Month", "DayOfMonth", "CRSDepTime").show()
val l = List((2008, 1, 23, 700),
             (2008, 1, 10, 1855)
            )
val df = l.toDF("Year", "Month", "DayOfMonth", "DepTime")
df.show()
import org.apache.spark.sql.functions.{col, substring, lpad, date_format, to_timestamp, expr}
df.select(substring(col("DepTime"), -2, 2)).
    show()
df.select(col("DepTime"), date_format(to_timestamp(lpad(col("DepTime"), 4, "0"), "HHmm"), "HH:mm")).show()
df.select(expr("substring(cast(DepTime as string), 1, length(DepTime))")).
    show()
// +
import org.apache.spark.sql.functions.{lit, col, concat, lpad}
val flightsFiltered = airlines.
    filter("IsDepDelayed = 'YES' AND IsArrDelayed = 'NO'").
    select(concat(col("Year"), lit("-"),
                  lpad(col("Month"), 2, "0"), lit("-"),
                  lpad(col("DayOfMonth"), 2, "0"), lit(" "),
                  lpad(col("CRSDepTime"), 4, "0")
                 ).alias("FlightCRSDepTime"),
           col("UniqueCarrier"), col("FlightNum"), col("Origin"),
           col("Dest"), col("DepDelay"), col("ArrDelay")
          ).
    orderBy(col("FlightCRSDepTime"), col("DepDelay") - col("ArrDelay"))
flightsFiltered.show()
// + [markdown] pycharm={"name": "#%% md\n"}
// ### Getting Count
// + pycharm={"name": "#%%\n"}
import org.apache.spark.sql.functions.{lit, col, concat, lpad}
val flightsFiltered = airlines.
    filter("IsDepDelayed = 'YES' AND IsArrDelayed = 'NO'").
    select(concat(col("Year"), lit("-"),
                  lpad(col("Month"), 2, "0"), lit("-"),
                  lpad(col("DayOfMonth"), 2, "0"), lit(" "),
                  lpad(col("CRSDepTime"), 4, "0")
                 ).alias("FlightCRSDepTime"),
           col("UniqueCarrier"), col("FlightNum"), col("Origin"),
           col("Dest"), col("DepDelay"), col("ArrDelay")
          ).
    count()
flightsFiltered
// -
| spark-scala/05_basic_transformations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 pangeo
# language: python
# name: pangeo
# ---
import numpy as np
import xarray as xr
from matplotlib import pyplot as plt
# %matplotlib inline
plt.rcParams['figure.figsize'] = (8,5)
#from scipy.interpolate import *
ds = xr.open_dataset('/net/kage/d5/datasets/ERAInterim/monthly/Surface/u10.nc',decode_times=False)
u0 = ds.u10[0].sel(Y=slice(80,-80))
u0
dudx0 = u0.diff('X')/ds.X.diff('X')
coslat = np.cos(np.deg2rad(u0.Y))
deg2meter = coslat*111000
dudxms = dudx0/deg2meter
dudxms.plot(vmin=-1e-5,vmax=1e-5)
du = u0.diff('X').rolling(X=2).mean().shift(X=-1)
dx = coslat*111000*u0.X.diff('X').rolling(X=2).mean().shift(X=-1)
dudx = du/dx
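# The rolling-mean construction above is essentially a centered difference:
# at interior points, derivative ≈ (y[i+1] − y[i−1]) / (x[i+1] − x[i−1]).
# A minimal plain-Python sketch on y = x² (illustrative, independent of the
# ERA-Interim data):

```python
def centered_diff(y, x):
    # centered difference at interior points only
    return [(y[i + 1] - y[i - 1]) / (x[i + 1] - x[i - 1])
            for i in range(1, len(y) - 1)]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [v * v for v in xs]           # y = x^2, so dy/dx = 2x
slopes = centered_diff(ys, xs)
```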
fig, ax = plt.subplots(figsize=(10,3))
plt.subplot(121)
dx.plot()
plt.subplot(122)
dudx.plot(vmin=-1e-5,vmax=1e-5)
plt.tight_layout()
| Beginner-notebooks/99e_PartialDerivs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="zLl9oP-RfrdI" colab_type="text"
# ## **Setup of the OS**
#
# `!` at the beginning of a notebook line means that the command is executed in the command shell.
#
# Update all packages of the Debian-based Linux distribution.
# + id="7mFTn3l7fOLn" colab_type="code" outputId="7ab365f9-083c-4fe9-9b50-4ea6bcbbf9bf" colab={"base_uri": "https://localhost:8080/", "height": 104}
# ! lsb_release -a
# + id="OWOrJHDJaiYF" colab_type="code" outputId="3795d81c-3290-4062-ec9f-a61467f3f3c1" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !apt update -qq;
# + id="VCA-ZzUJhSLv" colab_type="code" outputId="6dab25f8-840c-43bd-9388-dc4fbc594a51" colab={"base_uri": "https://localhost:8080/", "height": 694}
# ! apt list --upgradable
# + [markdown] id="r-PVa1p_fpb-" colab_type="text"
# Download package with repository parameters
# + id="DDGMeWNyg5Oz" colab_type="code" outputId="ec8c6cd8-bc32-4ed4-dcec-1e0c96af9868" colab={"base_uri": "https://localhost:8080/", "height": 315}
# !wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb;
# + [markdown] id="uHRTu5w5iOjC" colab_type="text"
# Installing CUDA repo details
# + id="237icJriiEEL" colab_type="code" outputId="d06ad252-4870-4e5f-92cb-116841c4c9e6" colab={"base_uri": "https://localhost:8080/", "height": 211}
# !dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb;
# + [markdown] id="g1YDAmT3iZJT" colab_type="text"
# Adding public key to access repository
# + id="e446iPSkijLJ" colab_type="code" outputId="61c0b604-4447-4cd5-a47f-926c9617ff2e" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !apt-key add /var/cuda-repo-8-0-local-ga2/7fa2af80.pub;
# + [markdown] id="YN4z4TksiuuN" colab_type="text"
# Update packages again, with the newly added repository
# + id="EWj-8posi2qP" colab_type="code" colab={}
# !apt-get update -qq;
# + [markdown] id="iRK0Putwk1xo" colab_type="text"
# Now check which versions of the CUDA packages are available.
# We search for **cuda-10-0**.
# + id="aOPXP2ajk9Ny" colab_type="code" outputId="eb737b3d-76d0-4fff-aef4-3d1be8191082" colab={"base_uri": "https://localhost:8080/", "height": 3629}
# !apt-cache search cuda
# + [markdown] id="J5iJMpgXi7Hu" colab_type="text"
# Install packages for CUDA programming with C++
# + id="VovJ51TCjB09" colab_type="code" colab={}
# !apt-get install cuda-10-0 gcc-5 g++-5 -y -qq;
# + [markdown] id="6kk8rplbk0jS" colab_type="text"
# Create symbolic links for compiler files
# + id="TpWdZVLolBdj" colab_type="code" outputId="b8cc02e9-9773-4da1-dad4-8ca333854b56" colab={"base_uri": "https://localhost:8080/", "height": 52}
# !ln -s /usr/bin/gcc-5 /usr/local/cuda/bin/gcc;
# !ln -s /usr/bin/g++-5 /usr/local/cuda/bin/g++;
# + [markdown] id="r_UMlVN1lHWB" colab_type="text"
# Check that NVCC is up and running:
#
#
#
# + id="rORu2yChldl6" colab_type="code" outputId="3cf06f09-e66e-4fee-c711-22a7226a563d" colab={"base_uri": "https://localhost:8080/", "height": 86}
# !/usr/local/cuda/bin/nvcc --version
# + [markdown] id="1Neyi2-7li5c" colab_type="text"
# Install an extension so we can work with CUDA C directly from the notebook
# + id="gkQ3ksTWlpyM" colab_type="code" outputId="d3a60cf5-ee7f-41a6-fd78-3e8d9f2db632" colab={"base_uri": "https://localhost:8080/", "height": 159}
# !pip install git+https://github.com/andreinechaev/nvcc4jupyter.git
# + [markdown] id="7hgb-Mkwlq12" colab_type="text"
# Load the CUDA extension into the notebook
# + id="Oa6pxLFilw89" colab_type="code" outputId="bb9b9c92-f597-469a-fd30-0a9d2a634fb3" colab={"base_uri": "https://localhost:8080/", "height": 52}
# %load_ext nvcc_plugin
# + [markdown] id="VdkAoLecl51d" colab_type="text"
# Check your GPU card
#
# For this we need to explicitly tell the interpreter that we want to use the extension by **adding %%cu at the beginning of each cell with CUDA code.**
#
#
# + id="ILsqpyeioTyD" colab_type="code" outputId="d0760442-8e6e-488d-f1f7-7e82e1ae7b36" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
using namespace std;
#define N 1024
inline cudaError_t checkCudaErr(cudaError_t err, const char* msg) {
if (err != cudaSuccess) {
fprintf(stderr, "CUDA Runtime error at %s: %s\n", msg, cudaGetErrorString(err));
}
return err;
}
__global__ void matrixMulGPU( int * a, int * b, int * c )
{
/*
* Build out this kernel.
*/
int row = threadIdx.y + blockIdx.y * blockDim.y;
int col = threadIdx.x + blockIdx.x * blockDim.x;
int val = 0;
if (row < N && col < N) {
for (int i = 0; i < N; ++i) {
val += a[row * N + i] * b[i * N + col];
}
c[row * N + col] = val;
}
}
/*
* This CPU function already works, and will run to create a solution matrix
* against which to verify your work building out the matrixMulGPU kernel.
*/
void matrixMulCPU( int * a, int * b, int * c )
{
int val = 0;
for( int row = 0; row < N; ++row )
for( int col = 0; col < N; ++col )
{
val = 0;
for ( int k = 0; k < N; ++k )
val += a[row * N + k] * b[k * N + col];
c[row * N + col] = val;
}
}
int main()
{
int *a, *b, *c_cpu, *c_gpu; // Allocate a solution matrix for both the CPU and the GPU operations
int size = N * N * sizeof (int); // Number of bytes of an N x N matrix
// Allocate memory
cudaMallocManaged (&a, size);
cudaMallocManaged (&b, size);
cudaMallocManaged (&c_cpu, size);
cudaMallocManaged (&c_gpu, size);
// Initialize memory; create 2D matrices
for( int row = 0; row < N; ++row )
for( int col = 0; col < N; ++col )
{
a[row*N + col] = row;
b[row*N + col] = col+2;
c_cpu[row*N + col] = 0;
c_gpu[row*N + col] = 0;
}
/*
* Assign `threads_per_block` and `number_of_blocks` 2D values
* that can be used in matrixMulGPU above.
*/
dim3 threads_per_block(32, 32, 1);
dim3 number_of_blocks(N / threads_per_block.x + 1, N / threads_per_block.y + 1, 1);
clock_t gpu_start = clock();
matrixMulGPU <<< number_of_blocks, threads_per_block >>> ( a, b, c_gpu );
clock_t gpu_end = clock();
checkCudaErr(cudaDeviceSynchronize(), "Synchronization");
checkCudaErr(cudaGetLastError(), "GPU");
// Call the CPU version to check our work
clock_t cpu_start = clock();
matrixMulCPU( a, b, c_cpu );
clock_t cpu_end = clock();
// Compare the two answers to make sure they are equal
bool error = false;
for( int row = 0; row < N && !error; ++row )
for( int col = 0; col < N && !error; ++col )
if (c_cpu[row * N + col] != c_gpu[row * N + col])
{
printf("FOUND ERROR at c[%d][%d]\n", row, col);
error = true;
break;
}
if (!error)
cout << "Success! ";
cout << cpu_end - cpu_start << " " << gpu_end - gpu_start;
// Free all our allocated memory
cudaFree(a); cudaFree(b);
cudaFree( c_cpu ); cudaFree( c_gpu );
}
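# For reference, the CPU verification logic can be sketched in a few lines of
# Python over flattened row-major matrices — same indexing as `matrixMulCPU`,
# but with a tiny illustrative `n` instead of 1024:

```python
def matmul_naive(a, b, n):
    # row-major flattened n x n matrices, same indexing as the CUDA code
    c = [0] * (n * n)
    for row in range(n):
        for col in range(n):
            val = 0
            for k in range(n):
                val += a[row * n + k] * b[k * n + col]
            c[row * n + col] = val
    return c

n = 2
a = [row for row in range(n) for _ in range(n)]        # a[row][col] = row
b = [col + 2 for _ in range(n) for col in range(n)]    # b[row][col] = col + 2
c = matmul_naive(a, b, n)
```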
# + id="3RXT_W1ToTSD" colab_type="code" outputId="95e267dd-4ffa-4e6e-a76f-e27f7167b4e5" colab={"base_uri": "https://localhost:8080/", "height": 52}
# !ls /usr/local/cuda/samples/1_Utilities/deviceQuery/
# + [markdown] id="cgmvM-38sS5W" colab_type="text"
# Build the sample from the NVIDIA samples directory
# + id="689gXnyarnth" colab_type="code" outputId="0d3bdd25-13a5-4322-a304-80bb5bb522e7" colab={"base_uri": "https://localhost:8080/", "height": 141}
# !cd /usr/local/cuda/samples/1_Utilities/deviceQuery;make clean; make
# + [markdown] id="MeFRMqJWsV_B" colab_type="text"
# Run sample
# + id="nB_dBQvSsX_8" colab_type="code" outputId="00302427-57c1-4bba-cc5e-69f909d2d166" colab={"base_uri": "https://localhost:8080/", "height": 781}
# ! /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
# + id="XOyyDeiysyXr" colab_type="code" outputId="c25c1ead-d262-4447-c6dd-28773c347fd6" colab={"base_uri": "https://localhost:8080/", "height": 312}
# !nvidia-smi
# + [markdown] id="9mrRnS1Lsaq8" colab_type="text"
# Mount Google Drive with the test image:
# + id="murDhIypsc_F" colab_type="code" outputId="ca196145-f2fc-4bc7-81b0-9060bdf3eb5e" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + id="-Gbui9jyWoVC" colab_type="code" outputId="fd54fb1d-6c38-44f8-f2f2-f69994800594" colab={"base_uri": "https://localhost:8080/", "height": 52}
# !ls -al '/content/drive/My Drive/samples'
# + [markdown] id="UqNp8Qrctr7Y" colab_type="text"
# Code for SUM with 'reduce', simplified version
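# The stride-doubling loop in the kernel below is a tree reduction; it can be
# mimicked sequentially in Python (the inner loop over `idx` stands in for the
# threads of one block at a given stride):

```python
def tree_reduce_sum(data):
    # same stride-doubling pattern as the shared-memory CUDA kernel
    sdata = list(data)
    s = 1
    while s < len(sdata):
        for idx in range(0, len(sdata), 2 * s):
            if idx + s < len(sdata):
                sdata[idx] += sdata[idx + s]
        s *= 2
    return sdata[0]
```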
# + id="vIDaCN0FtiM2" colab_type="code" outputId="b28652f7-1c7c-4e57-b658-08a3a42687b0" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec )
{
int idx = threadIdx.x;
vec[idx] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
int idx = threadIdx.x;
sdata[idx] = g_ivec[idx];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (idx % (2*s) == 0) {
sdata[idx] += sdata[idx + s];
}
__syncthreads();
}
// only thread 0 holds the final block sum
if (idx == 0) g_ovec[0] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int *) malloc(sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
fill_1_block <<< 1, 1024 >>> (d_image);
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_simple <<< 1, 1024, 1024*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel sum_reduce_simple CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_simple kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("SUM is: %d\n",h_result[0]);
printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1022], h_image[1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
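The interleaved-addressing pattern inside `sum_reduce_simple` can be sanity-checked on the host. A minimal Python sketch that replays the `sdata` loop serially (the function name is illustrative, not part of the CUDA code above):

```python
def reduce_interleaved(sdata):
    """Serial model of sum_reduce_simple: at stride s, only 'threads' with
    tid % (2*s) == 0 accumulate sdata[tid + s]; the block sum lands in slot 0."""
    sdata = list(sdata)
    n = len(sdata)  # must be a power of two, like the 1024-thread block
    s = 1
    while s < n:
        for tid in range(0, n, 2 * s):
            sdata[tid] += sdata[tid + s]
        s *= 2
    return sdata[0]

print(reduce_interleaved([1] * 1024))  # 1024, matching the kernel's SUM output
```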
# + [markdown] id="GAYxQCKuOIB1" colab_type="text"
# **Think about REDUCE**: 1024 elements, with a barrier after the shared-memory load and the block result written by thread 0
# + id="n2gSl-dVNATN" colab_type="code" outputId="9d856102-1485-455b-d0a5-bb17050c7494" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec )
{
int idx = threadIdx.x;
//how to make it random
vec[idx] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
fill_1_block <<< 1, 1024 >>> (d_image);
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_simple <<< 1, 1024, BLOCK*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel sum_reduce_simple CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_simple kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, 1*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("SUM is: %d\n",h_result[0]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1022], h_image[1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
# + [markdown] id="PIOapn1wZ3B9" colab_type="text"
# Make REDUCE SUM for **1024×1024 elements (1 MB)**
# + id="GjEV_-peaQUV" colab_type="code" outputId="2924b0bc-4fe2-4af7-a7cb-1914b26b3487" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec )
{
unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
fill_1_block <<< 1024, 1024 >>> (d_image);
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_simple <<< 1024, 1024, 1024*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel sum_reduce_simple CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_simple kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, BLOCK*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sum (last block): %d\n",h_result[1023]);
printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1023+1022], h_image[1024*1023+1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
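With 1024 blocks, `d_result` now holds one partial sum per block, and the value printed above is only the last block's partial (1024, since every element is 1). The grand total still needs one pass over the partials, e.g. on the host (variable names here are hypothetical):

```python
def total_from_partials(h_result):
    """Sum the per-block partial sums copied back from d_result."""
    return sum(h_result)

h_result = [1024] * 1024  # what d_result holds for 1 MB of ones
print(total_from_partials(h_result))  # 1048576 = 1024 * 1024
```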
# + [markdown] id="FA7Mz9NcfHBs" colab_type="text"
# Make reduce sum for **1 GB** (1024 kernel launches, one per 1 MB chunk)
# + id="3TOhngl2fEKi" colab_type="code" outputId="22290aa7-4795-43c9-e483-7d32d8a1b717" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
for (int i = 0; i<1024; i++){
sum_reduce_simple <<< 1024, 1024, BLOCK*sizeof(int) >>> (d_image, d_result, i);
}
cudaCheckErrors("Kernel sum_reduce_simple CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_simple kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sum (last block): %d\n",h_result[1024*1023 + 1023]);
printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
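The 1024-launch loop addresses the 1 GB buffer as `i = index*1024*1024 + blockIdx.x*blockDim.x + threadIdx.x`. A scaled-down sketch (sizes shrunk from 1024 to 8 so it runs instantly) checking that this chunk/block/thread decomposition touches every element exactly once:

```python
# Scaled-down model of the chunk/block/thread index decomposition:
# i = chunk*B*T + block*T + tid should hit each of C*B*T elements once.
C, B, T = 8, 8, 8
hits = [0] * (C * B * T)
for chunk in range(C):
    for block in range(B):
        for tid in range(T):
            hits[chunk * B * T + block * T + tid] += 1
assert all(h == 1 for h in hits)
print("every element covered exactly once")
```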
# + id="Y0nsDmIVi2lQ" colab_type="code" outputId="772646af-43ec-4fa2-b77d-2e8c1fb11321" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_strided(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem; j avoids shadowing the index parameter
for (unsigned int s=1; s < blockDim.x; s *= 2) {
unsigned int j = 2 * s * tid;
if (j < blockDim.x) {
sdata[j] += sdata[j + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
for (int i = 0; i<1024; i++){
sum_reduce_strided <<< 1024, 1024, BLOCK*sizeof(int) >>> (d_image, d_result, i);
}
cudaCheckErrors("Kernel sum_reduce_strided CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_strided kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sum (last block): %d\n",h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
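`sum_reduce_strided` maps stride `s` to element `2*s*tid`, so the active threads at each step form the contiguous range `0 .. blockDim.x/(2*s) - 1` instead of every `2*s`-th thread, which reduces warp divergence. A serial model (illustrative names) confirming it computes the same sum:

```python
def reduce_strided(sdata):
    """Serial model of sum_reduce_strided: at stride s, 'thread' tid works on
    element j = 2*s*tid, keeping the active threads contiguous."""
    sdata = list(sdata)
    n = len(sdata)  # power of two
    s = 1
    while s < n:
        for tid in range(n):
            j = 2 * s * tid
            if j < n:
                sdata[j] += sdata[j + s]
        s *= 2
    return sdata[0]

print(reduce_strided([1] * 1024))  # 1024, same result as the simple kernel
```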
# + id="Je9FZ7zdk8SG" colab_type="code" outputId="00519aaa-679d-424b-cf78-f6a602af45d2" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_strided(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem; j avoids shadowing the index parameter
for (unsigned int s=1; s < blockDim.x; s *= 2) {
unsigned int j = 2 * s * tid;
if (j < blockDim.x) {
sdata[j] += sdata[j + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_reversed(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
for (int i = 0; i<1024; i++){
sum_reduce_reversed <<< 1024, 1024, BLOCK*sizeof(int) >>> (d_image, d_result, i);
}
cudaCheckErrors("Kernel sum_reduce_reversed CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_reversed kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sum (last block): %d\n",h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
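`sum_reduce_reversed` halves the stride from `blockDim.x/2` down to 1, so thread `tid` always adds `sdata[tid + s]`: sequential addressing, which avoids the shared-memory bank conflicts of the strided scheme. A serial model (illustrative names):

```python
def reduce_reversed(sdata):
    """Serial model of sum_reduce_reversed: stride halves from n//2, and
    'thread' tid adds sdata[tid + s] -- sequential addressing."""
    sdata = list(sdata)
    s = len(sdata) // 2  # power-of-two input assumed
    while s > 0:
        for tid in range(s):
            sdata[tid] += sdata[tid + s]
        s >>= 1
    return sdata[0]

print(reduce_reversed([1] * 1024))  # 1024, same sum as the other variants
```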
# + id="e5bukcz-mKEv" colab_type="code" outputId="d9f53de4-063e-4369-b31b-e00dead45343" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_strided(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem; j avoids shadowing the index parameter
for (unsigned int s=1; s < blockDim.x; s *= 2) {
unsigned int j = 2 * s * tid;
if (j < blockDim.x) {
sdata[j] += sdata[j + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_reversed(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_halved(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
// perform first level of reduction,
// reading from global memory, writing to shared memory
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x*(blockDim.x*2) + threadIdx.x;
sdata[tid] = g_ivec[i] + g_ivec[i+blockDim.x];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
for (int i = 0; i<1024; i++){
sum_reduce_halved <<< 1024, 512, BLOCK*sizeof(int) >>> (d_image, d_result, i);
}
cudaCheckErrors("Kernel sum_reduce_halved CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_halved kernel: %f ms\n", time);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sums: %d %d %d %d \n",h_result[0], h_result[1], h_result[1024*1023 + 1022], h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
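`sum_reduce_halved` halves the thread count by making each thread add two global elements while loading into shared memory (the "first add during load" trick), then reduces with sequential addressing. A serial model with 512 "threads" over a 1024-element block (illustrative names):

```python
def reduce_halved(g):
    """Serial model of sum_reduce_halved: n//2 'threads' each add two global
    elements during the load, then reduce with sequential addressing."""
    half = len(g) // 2  # blockDim.x = 512 for a 1024-element block
    sdata = [g[tid] + g[tid + half] for tid in range(half)]  # first add on load
    s = half // 2
    while s > 0:
        for tid in range(s):
            sdata[tid] += sdata[tid + s]
        s >>= 1
    return sdata[0]

print(reduce_halved([1] * 1024))  # 1024, computed with half the threads
```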
# + [markdown] id="mtwJyveWUNL9" colab_type="text"
# What if we launch over the whole buffer at once: 1024×1024 blocks of 512 threads, 1024 elements per block?
# + id="7H-kU5mTUJnl" colab_type="code" outputId="b5b34c5d-24e7-43fa-83ff-51871003f79a" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_strided(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem; j avoids shadowing the index parameter
for (unsigned int s=1; s < blockDim.x; s *= 2) {
unsigned int j = 2 * s * tid;
if (j < blockDim.x) {
sdata[j] += sdata[j + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_reversed(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_halved(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
// perform first level of reduction,
// reading from global memory, writing to shared memory
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x;
sdata[tid] = g_ivec[i] + g_ivec[i+blockDim.x];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_halved <<< 1024*1024, 512, 1024*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel sum_reduce_halved CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time for the sum_reduce_halved kernel: %f ms\n", time);
cudaEventRecord(start, 0);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Copy Time %f ms\n", time);
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("Per-block partial sums: %d %d %d %d \n",h_result[0], h_result[1], h_result[1024*1023 + 1022], h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
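The loop over 1024 chunks can be dropped once the grid is sized to cover the whole buffer: in the halved scheme each block of T threads consumes 2*T elements, so a 2^30-element buffer needs N/(2*T) blocks. A quick check of the launch arithmetic for both configurations used in these cells:

```python
# Each block of T threads consumes 2*T input elements in the halved scheme,
# so an N-element buffer needs N // (2*T) blocks.
N = 1 << 30  # 1 GB of unsigned char
print(N // (2 * 512))  # 1048576 blocks -> the <<< 1024*1024, 512 >>> launch
print(N // (2 * 256))  # 2097152 blocks -> the <<< 1024*2048, 256 >>> launch
```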
# + [markdown] id="wiCvo1NfcAE7" colab_type="text"
# Even smaller blocks: 256 threads per block (1024×2048 blocks, 512 elements each)
# + id="lNerrQrsb_jd" colab_type="code" outputId="5ff780be-07f0-4ca4-ae85-71525c0c946f" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__global__ void sum_reduce_simple(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=1; s < blockDim.x ; s *= 2) {
if (tid % (2*s) == 0) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_strided(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem; j avoids shadowing the index parameter
for (unsigned int s=1; s < blockDim.x; s *= 2) {
unsigned int j = 2 * s * tid;
if (j < blockDim.x) {
sdata[j] += sdata[j + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_reversed(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
//each thread loads one element from global to shared mem
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
sdata[tid] = g_ivec[i];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
__global__ void sum_reduce_halved(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
// perform first level of reduction,
// reading from global memory, writing to shared memory
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x;
sdata[tid] = g_ivec[i] + g_ivec[i+blockDim.x];
__syncthreads();
// do reduction in shared mem
for (unsigned int s=blockDim.x/2; s>0; s>>=1) {
if (tid < s) {
sdata[tid] += sdata[tid + s];
}
__syncthreads();
}
// write result for this block to global mem
if (tid == 0) g_ovec[blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(2*1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 2*1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_halved <<< 1024*2048, 256, 512*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel sum_reduce_halved CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time %f ms\n", time);
cudaEventRecord(start, 0);
cudaMemcpy(h_result, d_result, 2*1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Copy Time %f ms\n", time);
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("SUM is: %d %d %d %d \n",h_result[0], h_result[1], h_result[1024*1023 + 1022], h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
# + id="GEelmj5fXpSL" colab_type="code" outputId="51930d7b-026d-4218-c34d-0dd0ed09776b" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#include <time.h>
#define BLOCK 1024
int main()
{
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
// could also use memset(h_image, 1, dszp), which is faster for this all-ones fill
for (int i = 0; i<1024*1024*1024; i++){
h_image[i] = 1;
}
//measure time for cpu calc.
clock_t t1, t2;
t1 = clock();
h_result[0] = 0;
for (int i = 0; i<1024*1024*1024; i++){
h_result[0] = h_result[0] + h_image[i];
}
t2 = clock();
double time_taken = ((double)(t2 - t1) / CLOCKS_PER_SEC *1000);
printf("CPU sum time: %.3f ms\n", time_taken);
free(h_image);
free(h_result);
return(0);
}
# + id="ED1pXrValh9D" colab_type="code" outputId="e752b2c9-0109-4ac4-f67a-aa5d021caed1" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#include <time.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
__device__ void warpReduce(volatile int* sdata, int tid) {
sdata[tid] += sdata[tid + 32];
sdata[tid] += sdata[tid + 16];
sdata[tid] += sdata[tid + 8];
sdata[tid] += sdata[tid + 4];
sdata[tid] += sdata[tid + 2];
sdata[tid] += sdata[tid + 1];
}
__global__ void sum_reduce_halved(unsigned char *g_ivec, int *g_ovec, int index){
extern __shared__ int sdata[];
// perform first level of reduction,
// reading from global memory, writing to shared memory
unsigned int tid = threadIdx.x;
unsigned int i = index*1024*1024 + blockIdx.x*(blockDim.x*2) + threadIdx.x;
sdata[tid] = g_ivec[i] + g_ivec[i+blockDim.x];
__syncthreads();
for (unsigned int s=blockDim.x/2; s>32; s>>=1) {
if (tid < s)
sdata[tid] += sdata[tid + s];
__syncthreads();
}
if (tid < 32) warpReduce(sdata, tid);
// write result for this block to global mem
if (tid == 0) g_ovec[1024*index + blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
sum_reduce_halved <<< 1024*1024, 512, BLOCK*sizeof(int) >>> (d_image, d_result, 0);
cudaCheckErrors("Kernel sum_reduce_halved CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time %f ms\n", time);
cudaEventRecord(start, 0);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Copy Time %f ms\n", time);
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("SUM is: %d %d %d %d \n",h_result[0], h_result[1], h_result[1024*1023 + 1022], h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
# + id="YPo7bZCooEne" colab_type="code" outputId="3a58b77a-7f77-44b6-aec9-536f2fab55a3" colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%cu
#include <stdio.h>
#include <iostream>
#include <time.h>
#include <fstream>
#include <stdint.h>
#include <time.h>
#define BLOCK 1024
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error at runtime: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
__global__ void fill_1_block( unsigned char *vec, int index)
{
unsigned int i = index*1024*1024 + blockIdx.x * blockDim.x + threadIdx.x;
//how to make it random?
vec[i] = 1;
}
template <unsigned int blockSize>
__device__ void warpReduce(volatile int* sdata, int tid) {
if (blockSize >= 64) sdata[tid] += sdata[tid + 32];
if (blockSize >= 32) sdata[tid] += sdata[tid + 16];
if (blockSize >= 16) sdata[tid] += sdata[tid + 8];
if (blockSize >= 8) sdata[tid] += sdata[tid + 4];
if (blockSize >= 4) sdata[tid] += sdata[tid + 2];
if (blockSize >= 2) sdata[tid] += sdata[tid + 1];
}
template <unsigned int blockSize>
__global__ void reduce5(unsigned char *g_ivec, int *g_ovec){
extern __shared__ int sdata[];
// perform first level of reduction,
// reading from global memory, writing to shared memory
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x;
sdata[tid] = g_ivec[i] + g_ivec[i+blockDim.x];
__syncthreads();
if (blockSize >= 512) {
if (tid < 256) { sdata[tid] += sdata[tid + 256]; } __syncthreads(); }
if (blockSize >= 256) {
if (tid < 128) { sdata[tid] += sdata[tid + 128]; } __syncthreads(); }
if (blockSize >= 128) {
if (tid < 64) { sdata[tid] += sdata[tid + 64]; } __syncthreads(); }
if (tid < 32) warpReduce<blockSize>(sdata, tid);
// write result for this block to global mem
if (tid == 0) g_ovec[blockIdx.x] = sdata[0];
}
int main()
{
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
unsigned char *d_image, *h_image;
int *d_result, *h_result ;
size_t dszp = BLOCK*1024*1024;
//ALLOCATE HOST MEM
h_image = (unsigned char*)malloc(dszp);
h_result = (int*)malloc(1024*1024*sizeof(int));
//ALLOCATE MEM
cudaMalloc(&d_image, dszp);
cudaMalloc(&d_result, 1024*1024*sizeof(int));
cudaCheckErrors("cudaMalloc fail \n");
cudaEventRecord(start, 0);
//FILL VALUES
for (int i = 0; i<1024; i++){
fill_1_block <<< 1024, 1024 >>> (d_image, i);
}
cudaCheckErrors("Kernel CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
//printf ("Time for the filling kernel: %f ms\n", time);
cudaMemcpy(h_image, d_image, dszp*sizeof(unsigned char), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying filled image fail \n");
cudaEventRecord(start, 0);
reduce5<512> <<< 1024*1024, 512, BLOCK*sizeof(int) >>> (d_image, d_result);
cudaCheckErrors("Kernel reduce5 CALL fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Time %f ms\n", time);
cudaEventRecord(start, 0);
cudaMemcpy(h_result, d_result, 1024*1024*sizeof(int), cudaMemcpyDeviceToHost);
cudaCheckErrors("Memory copying result fail \n");
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&time, start, stop);
printf ("Copy Time %f ms\n", time);
//FREE MEM
cudaFree(d_image);
cudaFree(d_result);
cudaCheckErrors("cudaFree fail \n");
printf ("SUM is: %d %d %d %d \n",h_result[0], h_result[1], h_result[1024*1023 + 1022], h_result[1024*1023 + 1023]);
//printf ("%u, %u ... %u, %u\n",h_image[0], h_image[1], h_image[1024*1024*1023+1024*1023 + 1022], h_image[1024*1024*1023+1024*1023 + 1023]);
//printf ("All is OK ");
free(h_image);
free(h_result);
return(0);
}
# + [markdown] id="jJcUZBtxvr9L" colab_type="text"
# Use:
#
# http://developer.download.nvidia.com/compute/cuda/3_2_prod/toolkit/docs/CUDA_C_Best_Practices_Guide.pdf
#
# http://vuduc.org/teaching/cse6230-hpcta-fa12/slides/cse6230-fa12--05b-reduction-notes.pdf
#
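# All of the kernels above implement the same pairwise tree reduction, differing only in how threads are mapped to additions. As a minimal host-side sketch (not part of the original notebook), the sequential-addressing scheme of `sum_reduce_reversed` looks like this in Python:

```python
def tree_reduce(values):
    """Pairwise tree reduction with sequential addressing.

    Mirrors sum_reduce_reversed: at each step the stride s halves and
    "thread" tid accumulates element tid + s. len(values) is assumed
    to be a power of two, like blockDim.x in the kernels.
    """
    data = list(values)
    s = len(data) // 2
    while s > 0:
        for tid in range(s):           # only threads with tid < s are active
            data[tid] += data[tid + s]
        s //= 2                        # halve the stride, as in s >>= 1
    return data[0]

print(tree_reduce([1] * 256))  # 256, matching one block's worth of the all-ones image
```

# Sequential addressing keeps the active threads contiguous (reducing warp divergence, unlike the modulo variant) and avoids shared-memory bank conflicts, which is why the reversed-loop kernel is faster than the first two.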
| cuda/GPU_optimization_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Window Feature Classification Model: ANN with Feature Engineering
# This file contains an artificial neural network classification model used to evaluate whether features computed over windows of time (20 seconds with a 10 second overlap) generate a better model than our simple timepoint classifier. Leave-One-Person-Out (LOPO) cross-validation is used to validate the model.
# __INPUT: .csv files containing the rolled sensor data with feature engineering (engineered_features.csv)__
# __OUTPUT: Neural Network Multi-Classification Window Feature Model (F1 Score = 0.871)__
# ## Imports
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
import seaborn as sns
import tensorflow as tf
# ## Read in Data
# The loaded dataset contains windows of data that are 20 seconds long with a 10 second overlap. These are stored as arrays in the dataframe.
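# The windowing itself happens upstream, but the scheme (20-second windows with a 10-second overlap, i.e. a 10-second hop) can be sketched in plain Python. The sampling rate (1 Hz) and function name below are illustrative assumptions, not the project's actual preprocessing code:

```python
def make_windows(samples, window_len=20, hop=10):
    """Slice a signal into fixed-length windows.

    A hop of 10 with a length of 20 yields 20-second windows that
    overlap their neighbor by 10 seconds, matching the description above.
    """
    return [samples[start:start + window_len]
            for start in range(0, len(samples) - window_len + 1, hop)]

sig = list(range(60))        # 60 s of an assumed 1 Hz signal
wins = make_windows(sig)
print(len(wins))             # 5 windows, starting at t = 0, 10, 20, 30, 40
```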
pd.set_option('display.max_columns', None)
df = pd.read_csv('/Users/N1/Data7/Data-2020/10_code/40_usable_data_for_models/41_Duke_Data/engineered_features.csv')
# We add a window number that changes every time a new activity is present, as we wish to use this as a feature.
df = df.assign(count=df.groupby(df.Activity.ne(df.Activity.shift()).cumsum()).cumcount().add(1))
df.head(5)
#what the sensor names correspond to
#['ACC1', 'ACC2', 'ACC3','TEMP', 'EDA', 'HR', 'BVP', 'Magnitude',]
#Accelerometry Axes, Body Temperature, Electrodermal Activity, Heart Rate, Blood Volume Pulse, Magnitude of Axes
# ### Label Encode Activity and Subject_ID
# We encode the y variable because it must be one-hot encoded for the model. The label associated with each class is printed below.
# +
from sklearn.preprocessing import LabelEncoder
le1 = LabelEncoder()
df['Activity'] = le1.fit_transform(df['Activity'])
activity_name_mapping = dict(zip(le1.classes_, le1.transform(le1.classes_)))
print(activity_name_mapping)
# -
le = LabelEncoder()
df['Subject_ID'] = le.fit_transform(df['Subject_ID'])
# ## Create Test Train split
# + tags=[]
np.random.seed(29)
rands = np.random.choice(df.Subject_ID.unique(),3, replace=False)
print(f' These will be our Subjects in our test set: {rands}')
# -
# ### Split Subjects into Train (n=52) and Test (n=3) Sets
test = df[df['Subject_ID'].isin(rands)]
train = df[-df['Subject_ID'].isin(rands)]
# ## Feature Selection
# ### Features that can be used in the model
# Uncomment one of the three following code cells to choose which features are used in the model. In this run, we use only the physiological features; edit which cell is active to run the model on a different feature set.
# ##### All Features
# +
# train = train[['ACC1_mean', 'ACC2_mean', 'ACC3_mean', 'TEMP_mean', 'EDA_mean', 'BVP_mean', 'HR_mean', 'Magnitude_mean', 'ACC1_std', 'ACC2_std',
# 'ACC3_std', 'TEMP_std', 'EDA_std', 'BVP_std', 'HR_std', 'Magnitude_std', 'ACC1_min', 'ACC2_min', 'ACC3_min', 'TEMP_min', 'EDA_min', 'BVP_min', 'HR_min', 'Magnitude_min',
# 'ACC1_max', 'ACC2_max', 'ACC3_max', 'TEMP_max', 'EDA_max', 'BVP_max', 'HR_max', 'Magnitude_max', 'Subject_ID', 'count', 'Activity']]
# test = test [['ACC1_mean', 'ACC2_mean', 'ACC3_mean', 'TEMP_mean', 'EDA_mean', 'BVP_mean', 'HR_mean', 'Magnitude_mean', 'ACC1_std', 'ACC2_std',
# 'ACC3_std', 'TEMP_std', 'EDA_std', 'BVP_std', 'HR_std', 'Magnitude_std', 'ACC1_min', 'ACC2_min', 'ACC3_min', 'TEMP_min', 'EDA_min', 'BVP_min', 'HR_min', 'Magnitude_min',
# 'ACC1_max', 'ACC2_max', 'ACC3_max', 'TEMP_max', 'EDA_max', 'BVP_max', 'HR_max', 'Magnitude_max', 'Subject_ID', 'count', 'Activity']]
# -
# ##### Mechanical Features
# +
# train = train[['ACC1_mean', 'ACC2_mean', 'ACC3_mean', 'Magnitude_mean', 'ACC1_std', 'ACC2_std', 'ACC3_std', 'Magnitude_std',
# 'ACC1_min', 'ACC2_min', 'ACC3_min', 'Magnitude_min',
# 'ACC1_max', 'ACC2_max', 'ACC3_max', 'Magnitude_max', 'Subject_ID', 'count', 'Activity']]
# test = test[['ACC1_mean', 'ACC2_mean', 'ACC3_mean', 'Magnitude_mean', 'ACC1_std', 'ACC2_std', 'ACC3_std', 'Magnitude_std',
# 'ACC1_min', 'ACC2_min', 'ACC3_min', 'Magnitude_min',
# 'ACC1_max', 'ACC2_max', 'ACC3_max', 'Magnitude_max', 'Subject_ID', 'count', 'Activity']]
# -
# ##### Physiological Features
# +
train = train[['TEMP_mean', 'EDA_mean', 'BVP_mean', 'HR_mean', 'TEMP_std', 'EDA_std', 'BVP_std', 'HR_std', 'TEMP_min', 'EDA_min', 'BVP_min', 'HR_min',
'TEMP_max', 'EDA_max', 'BVP_max', 'HR_max', 'Subject_ID', 'count', 'Activity']]
test = test[['TEMP_mean', 'EDA_mean', 'BVP_mean', 'HR_mean', 'TEMP_std', 'EDA_std', 'BVP_std', 'HR_std', 'TEMP_min', 'EDA_min', 'BVP_min', 'HR_min',
'TEMP_max', 'EDA_max', 'BVP_max', 'HR_max', 'Subject_ID', 'count', 'Activity']]
# -
# ### Balancing Classes
# In the following code cells, we randomly sample data from our majority classes to balance our dataset.
#{'Activity': 0, 'Baseline': 1, 'DB': 2, 'Type': 3}
train['Activity'].value_counts()
zero = train[train['Activity'] == 0]
one = train[train['Activity'] == 1]
two = train[train['Activity'] == 2]
three =train[train['Activity'] == 3]
zero = zero.sample(505)
one = one.sample(505)
train = pd.concat([zero, one, two, three])
train['Activity'].value_counts()
# This train_SID is made so we can use the Subject_ID values to perform LOPO (leave one person out) later on.
train_SID = train['Subject_ID'].values
# ### Apply one-hot encoding to Subject ID and window count
# Subject_ID and window count must be one-hot encoded to be used as features in our model. Test and train dataframes must be concatenated before we one-hot encode, so that we do not get different encodings for each data set.
# +
train['train'] =1
test['train'] = 0
combined = pd.concat([train, test])
combined = pd.concat([combined, pd.get_dummies(combined['Subject_ID'], prefix = 'SID')], axis =1).drop('Subject_ID', axis =1)
combined = pd.concat([combined, pd.get_dummies(combined['count'], prefix = 'count')], axis =1).drop('count', axis = 1)
# +
train = combined[combined['train'] == 1]
test = combined[combined['train'] == 0]
train.drop(["train"], axis = 1, inplace = True)
test.drop(["train"], axis = 1, inplace = True)
print(train.shape, test.shape)
# -
# We remove activity from our train and test datasets as this is the y variable (target variable) and we are only interested in keeping the features.
train_f = train.drop("Activity", axis =1)
test_f = test.drop("Activity", axis =1)
# ### Define X (features) and y (targets)
X_train = train_f
y_train = train.Activity
X_test = test_f
y_test = test.Activity
# ### Standardize Data
# Scaling is used to change values without distorting differences in the range of values for each sensor. We do this because different sensor values are not in similar ranges of each other and if we did not scale the data, gradients may oscillate back and forth and take a long time before finding the local minimum. It may not be necessary for this data, but to be sure, we normalized the features.
#
# The standard score of a sample x is calculated as:
#
# $$z = \frac{x-u}{s}$$
#
# Where u is the mean and s the standard deviation of the training data, computed per feature. The scaling is fit on the training set and applied to both the training and test set.
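# As a toy illustration (not the notebook's data) of why u and s come from the training set only, and are then reused on the test set:

```python
def fit_standardizer(train_col):
    """Compute u (mean) and s (std) from the TRAINING data only."""
    u = sum(train_col) / len(train_col)
    s = (sum((x - u) ** 2 for x in train_col) / len(train_col)) ** 0.5
    return u, s

def standardize(col, u, s):
    """Apply z = (x - u) / s using the training statistics."""
    return [(x - u) / s for x in col]

train_col = [2.0, 4.0, 6.0]
u, s = fit_standardizer(train_col)       # u = 4.0
z_train = standardize(train_col, u, s)   # standardized train column has mean 0
z_test = standardize([8.0], u, s)        # test reuses the training u and s
print(u, round(sum(z_train), 10))        # 4.0 0.0
```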
sc = StandardScaler()
X_train.iloc[:,:16] = sc.fit_transform(X_train.iloc[:,:16])
X_test.iloc[:,:16] = sc.transform(X_test.iloc[:,:16])
X_train
X_train = X_train.values
X_test = X_test.values
from keras.utils import np_utils
y_train_dummy = np_utils.to_categorical(y_train)
y_test_dummy = np_utils.to_categorical(y_test)
# ## Neural Network
# - 5 hidden **fully connected** layers with 256 nodes each
#
# - The **Dropout** layer randomly sets input units to 0 at a given rate at each step during training, which helps prevent overfitting.
#
# - **Softmax** activation function - used in the final fully connected layer to generate a probability for each class as the model's output
# We decided to use Adam as our optimizer because it is computationally efficient and adapts the step size on a per-parameter basis, based on moving estimates of each parameter's gradient and squared gradient.
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
# ### LOOCV
# __Leave One Out CV:__
# Each observation in turn serves as the validation set, while the remaining n-1 observations form the training set. The model is fit and used to predict the single held-out observation, and this is repeated n times so that every observation is validated once.
# The test-error rate is the average of all n errors.
#
# __Advantages:__ takes care of both drawbacks of validation-set method
# 1. No randomness of using some observations for training vs. validation set like in the validation-set method, as each observation is considered for both training and validation. So overall there is less variability than the validation-set method, with no randomness no matter how many times you run it.
# 2. Less bias than the validation-set method, as the training set is of size n-1. Because of this reduced bias, the test error is over-estimated less than with the validation-set method.
#
# __Disadvantages:__
# 1. Even though each iteration's test-error is unbiased, it has high variability, since each validation set contains only a single observation.
# 2. Computationally expensive (time and power), especially for large n, as it requires fitting the model n times. Some statistical models are also computationally intensive to fit, so for large datasets and such models LOOCV might not be a good choice.
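# In practice the code below uses scikit-learn's `LeaveOneGroupOut` with `Subject_ID` as the group, i.e. one fold per subject (LOPO) rather than one fold per observation. The splitting logic can be sketched in plain Python:

```python
def leave_one_group_out(groups):
    """Yield (train_idx, test_idx), holding out one whole group per fold."""
    for held_out in sorted(set(groups)):
        test_idx = [i for i, g in enumerate(groups) if g == held_out]
        train_idx = [i for i, g in enumerate(groups) if g != held_out]
        yield train_idx, test_idx

subject_per_window = ["s1", "s1", "s2", "s3", "s3"]   # toy subject IDs
folds = list(leave_one_group_out(subject_per_window))
print(len(folds))   # one fold per subject -> 3
print(folds[0])     # ([2, 3, 4], [0, 1]): every window of s1 is held out together
```

# Holding out all windows of one subject at a time prevents windows from the same person leaking between the training and validation folds.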
# +
from sklearn.model_selection import LeaveOneGroupOut
# Lists to store metrics
acc_per_fold = []
loss_per_fold = []
f1_per_fold = []
# Define the K-fold Cross Validator
groups = train_SID
inputs = X_train
targets = y_train_dummy
logo = LeaveOneGroupOut()
logo.get_n_splits(inputs, targets, groups)
cv = logo.split(inputs, targets, groups)
# LOGO
fold_no = 1
# Loop indices renamed from train/test so they do not shadow the train/test DataFrames above
for train_idx, test_idx in cv:
    # Define the model architecture
    model = Sequential()
    model.add(Dense(256, activation='relu'))
    model.add(Dense(256, activation='relu'))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(256, activation='relu'))
    model.add(Dense(256, activation='relu'))
    model.add(Dense(4, activation='softmax'))  # 4 output classes are possible
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # Generate a print
    print('------------------------------------------------------------------------')
    print(f'Training for fold {fold_no} ...')
    # Fit data to model
    history = model.fit(inputs[train_idx], targets[train_idx],
                        batch_size=32,
                        epochs=10,
                        verbose=1)
    # Generate generalization metrics
    scores = model.evaluate(inputs[test_idx], targets[test_idx], verbose=0)
    y_pred = np.argmax(model.predict(inputs[test_idx]), axis=-1)
    f1 = f1_score(np.argmax(targets[test_idx], axis=1), y_pred, average='weighted')
    print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%, F1 of {f1}')
    f1_per_fold.append(f1)
    acc_per_fold.append(scores[1] * 100)
    loss_per_fold.append(scores[0])
    # Increase fold number
    fold_no = fold_no + 1
# == Provide average scores ==
print('------------------------------------------------------------------------')
print('Score per fold')
for i in range(0, len(acc_per_fold)):
    print('------------------------------------------------------------------------')
    print(f'> Fold {i+1} - Loss: {loss_per_fold[i]} - Accuracy: {acc_per_fold[i]}% - F1: {f1_per_fold[i]}')
print('------------------------------------------------------------------------')
print('Average scores for all folds:')
print(f'> Accuracy: {np.mean(acc_per_fold)} (+- {np.std(acc_per_fold)})')
print(f'> F1: {np.mean(f1_per_fold)} (+- {np.std(f1_per_fold)})')
print(f'> Loss: {np.mean(loss_per_fold)}')
print('------------------------------------------------------------------------')
# -
# Please edit the name of the model below. This will be used to save the model and figures associated with the model.
model_name = '20_TF_FE_balanced_Phys_Only'
# !mkdir -p saved_model
model.save(f'saved_model/{model_name}')
# # Prediction
# We obtain the predicted class for each test set sample by using the argmax function on the predicted probabilities that are output from our model. Argmax returns the class with the highest probability.
model = tf.keras.models.load_model(f'saved_model/{model_name}')
y_pred = np.argmax(model.predict(X_test), axis=-1)
results = model.evaluate(X_test, y_test_dummy, batch_size=32)
print("Test loss, Test acc:", results)
# A **confusion matrix** is generated to observe where the model is classifying well and to see classes which the model is not classifying well.
cm = confusion_matrix(y_test,y_pred)
cm
# We normalize the confusion matrix to better understand the proportions of classes classified correctly and incorrectly for this model.
cm= cm.astype('float')/cm.sum(axis=1)[:,np.newaxis]
cm
ax = plt.subplot()
sns.heatmap(cm, annot = True, fmt = '.2f',cmap = 'Blues', xticklabels = le1.classes_, yticklabels = le1.classes_)
ax.set_xlabel("Predicted labels")
ax.set_ylabel('Actual labels')
plt.title('Feature Engineered STEP balanced - Confusion Matrix')
plt.savefig(f'20_figures/{model_name}_CF.png')
# The **accuracy** score represents the proportion of correct classifications over all classifications.
# The **F1 score** is a composite metric of two other metrics:
#
# Precision: the proportion of correct positive predictions over all positive predictions.
#
# Recall: the proportion of actual positive cases that are predicted correctly.
#
# The F1 score gives insight as to whether all classes are predicted correctly at the same rate. A low F1 score and high accuracy can indicate that only a majority class is predicted.
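# For the binary case, the relationship between precision, recall, and F1 can be sketched directly (sklearn's `f1_score` with `average='weighted'`, used above, generalizes this by computing a per-class F1 and weighting by class support):

```python
def f1_binary(y_true, y_pred):
    """F1 as the harmonic mean of precision and recall (binary labels 0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct positives / predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0     # correct positives / actual positives
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_binary([1, 1, 0, 0], [1, 0, 1, 0]))  # precision 0.5, recall 0.5 -> 0.5
```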
# +
a_s = accuracy_score(y_test, y_pred)
f1_s = f1_score(y_test, y_pred, average = 'weighted')
print(f'Accuracy Score: {a_s:.3f} \nF1 Score: {f1_s:.3f}')
| DigitalBiomarkers-HumanActivityRecognition/10_code/50_deep_learning/53_tensorflow_models/53_tensorflow_Duke_Data/20_ANN_WFE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HuyenNguyenHelen/t81_558_deep_learning/blob/master/BERT/BERT_Fine_Tuning_Sentence_Classification_v4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="EKOTlwcmxmej"
# # BERT Fine-Tuning Tutorial with PyTorch
#
# By <NAME> and <NAME>
# + [markdown] id="MPgpITmdwvX0"
# *Revised on March 20, 2020 - Switched to `tokenizer.encode_plus` and added validation loss. See [Revision History](https://colab.research.google.com/drive/1pTuQhug6Dhl9XalKB0zUGf4FIdYFlpcX#scrollTo=IKzLS9ohzGVu) at the end for details.*
#
#
#
# + [markdown] id="BJR6t_gCQe_x"
# In this tutorial I'll show you how to use BERT with the huggingface PyTorch library to quickly and efficiently fine-tune a model to get near state of the art performance in sentence classification. More broadly, I describe the practical application of transfer learning in NLP to create high performance models with minimal effort on a range of NLP tasks.
#
# This post is presented in two forms--as a blog post [here](http://mccormickml.com/2019/07/22/BERT-fine-tuning/) and as a Colab Notebook [here](https://colab.research.google.com/drive/1pTuQhug6Dhl9XalKB0zUGf4FIdYFlpcX).
#
# The content is identical in both, but:
# * The blog post includes a comments section for discussion.
# * The Colab Notebook will allow you to run the code and inspect it as you read through.
#
# I've also published a video walkthrough of this post on my YouTube channel! [Part 1](https://youtu.be/x66kkDnbzi4) and [Part 2](https://youtu.be/Hnvb9b7a_Ps).
#
#
# + [markdown] id="jrC9__lXxTJz"
# # Contents
# + [markdown] id="p9MCBOq4xUpr"
# See "Table of contents" in the sidebar to the left.
# + [markdown] id="ADkUGTqixRWo"
# # Introduction
# + [markdown] id="L9vxxTBsuL24"
#
# ## History
#
# 2018 was a breakthrough year in NLP. Transfer learning, particularly models like Allen AI's ELMO, OpenAI's Open-GPT, and Google's BERT allowed researchers to smash multiple benchmarks with minimal task-specific fine-tuning and provided the rest of the NLP community with pretrained models that could easily (with less data and less compute time) be fine-tuned and implemented to produce state of the art results. Unfortunately, for many starting out in NLP and even for some experienced practitioners, the theory and practical application of these powerful models is still not well understood.
#
# + [markdown] id="qCgvR9INuP5q"
#
# ## What is BERT?
#
# BERT (Bidirectional Encoder Representations from Transformers), released in late 2018, is the model we will use in this tutorial to provide readers with a better understanding of and practical guidance for using transfer learning models in NLP. BERT is a method of pretraining language representations that was used to create models that NLP practitioners can then download and use for free. You can either use these models to extract high quality language features from your text data, or you can fine-tune these models on a specific task (classification, entity recognition, question answering, etc.) with your own data to produce state of the art predictions.
#
# This post will explain how you can modify and fine-tune BERT to create a powerful NLP model that quickly gives you state of the art results.
#
# + [markdown] id="DaVGdtOkuXUZ"
#
# ## Advantages of Fine-Tuning
#
# + [markdown] id="5llwu8GBuqMb"
#
# In this tutorial, we will use BERT to train a text classifier. Specifically, we will take the pre-trained BERT model, add an untrained layer of neurons on the end, and train the new model for our classification task. Why do this rather than train a specific deep learning model (a CNN, BiLSTM, etc.) that is well suited for the specific NLP task you need?
#
# 1. **Quicker Development**
#
# * First, the pre-trained BERT model weights already encode a lot of information about our language. As a result, it takes much less time to train our fine-tuned model - it is as if we have already trained the bottom layers of our network extensively and only need to gently tune them while using their output as features for our classification task. In fact, the authors recommend only 2-4 epochs of training for fine-tuning BERT on a specific NLP task (compared to the hundreds of GPU hours needed to train the original BERT model or an LSTM from scratch!).
#
# 2. **Less Data**
#
# * In addition and perhaps just as important, because of the pre-trained weights this method allows us to fine-tune our task on a much smaller dataset than would be required in a model that is built from scratch. A major drawback of NLP models built from scratch is that we often need a prohibitively large dataset in order to train our network to reasonable accuracy, meaning a lot of time and energy has to be put into dataset creation. By fine-tuning BERT, we are now able to get away with training a model to good performance on a much smaller amount of training data.
#
# 3. **Better Results**
#
# * Finally, this simple fine-tuning procedure (typically adding one fully-connected layer on top of BERT and training for a few epochs) was shown to achieve state of the art results with minimal task-specific adjustments for a wide variety of tasks: classification, language inference, semantic similarity, question answering, etc. Rather than implementing custom and sometimes-obscure architectures shown to work well on a specific task, simply fine-tuning BERT is shown to be a better (or at least equal) alternative.
#
# + [markdown] id="ZEynC5F4u7Nb"
#
# ### A Shift in NLP
#
# This shift to transfer learning parallels the same shift that took place in computer vision a few years ago. Creating a good deep learning network for computer vision tasks can take millions of parameters and be very expensive to train. Researchers discovered that deep networks learn hierarchical feature representations (simple features like edges at the lowest layers with gradually more complex features at higher layers). Rather than training a new network from scratch each time, the lower layers of a trained network with generalized image features could be copied and transferred for use in another network with a different task. It soon became common practice to download a pre-trained deep network and quickly retrain it for the new task or add additional layers on top - vastly preferable to the expensive process of training a network from scratch. For many, the introduction of deep pre-trained language models in 2018 (ELMO, BERT, ULMFIT, Open-GPT, etc.) signals the same shift to transfer learning in NLP that computer vision saw.
#
# Let's get started!
# + [markdown] id="2-Th8bRio6A4"
#
#
# + [markdown] id="RX_ZDhicpHkV"
# # 1. Setup
# + [markdown] id="nSU7yERLP_66"
# ## 1.1. Using Colab GPU for Training
#
# + [markdown] id="GI0iOY8zvZzL"
#
# Google Colab offers free GPUs and TPUs! Since we'll be training a large neural network it's best to take advantage of this (in this case we'll attach a GPU), otherwise training will take a very long time.
#
# A GPU can be added by going to the menu and selecting:
#
# `Edit 🡒 Notebook Settings 🡒 Hardware accelerator 🡒 (GPU)`
#
# Then run the following cell to confirm that the GPU is detected.
# + id="DEfSbAA4QHas" colab={"base_uri": "https://localhost:8080/"} outputId="fc9f36ec-5ce8-4742-8933-3cd94db60963"
import tensorflow as tf
# Get the GPU device name.
device_name = tf.test.gpu_device_name()
# The device name should look like the following:
if device_name == '/device:GPU:0':
print('Found GPU at: {}'.format(device_name))
else:
raise SystemError('GPU device not found')
# + [markdown] id="cqG7FzRVFEIv"
# In order for torch to use the GPU, we need to identify and specify the GPU as the device. Later, in our training loop, we will load data onto the device.
# + id="oYsV4H8fCpZ-" colab={"base_uri": "https://localhost:8080/"} outputId="b76e2209-ba73-4234-e2eb-1367666526ed"
import torch
# If there's a GPU available...
if torch.cuda.is_available():
# Tell PyTorch to use the GPU.
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
# If not...
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
# + [markdown] id="2ElsnSNUridI"
# ## 1.2. Installing the Hugging Face Library
#
# + [markdown] id="G_N2UDLevYWn"
#
# Next, let's install the [transformers](https://github.com/huggingface/transformers) package from Hugging Face which will give us a pytorch interface for working with BERT. (This library contains interfaces for other pretrained language models like OpenAI's GPT and GPT-2.) We've selected the pytorch interface because it strikes a nice balance between the high-level APIs (which are easy to use but don't provide insight into how things work) and tensorflow code (which contains lots of details but often sidetracks us into lessons about tensorflow, when the purpose here is BERT!).
#
# At the moment, the Hugging Face library seems to be the most widely accepted and powerful pytorch interface for working with BERT. In addition to supporting a variety of different pre-trained transformer models, the library also includes pre-built modifications of these models suited to your specific task. For example, in this tutorial we will use `BertForSequenceClassification`.
#
# The library also includes task-specific classes for token classification, question answering, next sentence prediction, etc. Using these pre-built classes simplifies the process of modifying BERT for your purposes.
#
# + id="0NmMdkZO8R6q" colab={"base_uri": "https://localhost:8080/"} outputId="debe1e88-9b1d-4c22-df13-31bf052d90d3"
# !pip install transformers
# + [markdown] id="lxddqmruamSj"
# The code in this notebook is actually a simplified version of the [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) example script from huggingface.
#
# `run_glue.py` is a helpful utility which allows you to pick which GLUE benchmark task you want to run, and which pre-trained model you want to use (you can see the list of possible models [here](https://github.com/huggingface/transformers/blob/e6cff60b4cbc1158fbd6e4a1c3afda8dc224f566/examples/run_glue.py#L69)). It also supports using either the CPU, a single GPU, or multiple GPUs. It even supports 16-bit precision if you want a further speed-up.
#
# Unfortunately, all of this configurability comes at the cost of *readability*. In this Notebook, we've simplified the code greatly and added plenty of comments to make it clear what's going on.
# + [markdown] id="guw6ZNtaswKc"
# # 2. Loading CoLA Dataset
#
# + [markdown] id="_9ZKxKc04Btk"
# We'll use [The Corpus of Linguistic Acceptability (CoLA)](https://nyu-mll.github.io/CoLA/) dataset for single sentence classification. It's a set of sentences labeled as grammatically correct or incorrect. It was first published in May of 2018, and is one of the tests included in the "GLUE Benchmark" on which models like BERT are competing.
#
# + [markdown] id="4JrUHXms16cn"
# ## 2.1. Download & Extract
# + [markdown] id="3ZNVW6xd0T0X"
# We'll use the `wget` package to download the dataset to the Colab instance's file system.
# + id="5m6AnuFv0QXQ" colab={"base_uri": "https://localhost:8080/"} outputId="0ab257ef-891f-4c40-f984-1cb7fbdc3243"
# !pip install wget
# + [markdown] id="08pO03Ff1BjI"
# The dataset is hosted on GitHub in this repo: https://nyu-mll.github.io/CoLA/
# + id="pMtmPMkBzrvs" colab={"base_uri": "https://localhost:8080/"} outputId="712b00fe-8f63-46df-85e8-5deb14b2776c"
import wget
import os
print('Downloading dataset...')
# The URL for the dataset zip file.
url = 'https://nyu-mll.github.io/CoLA/cola_public_1.1.zip'
# Download the file (if we haven't already)
if not os.path.exists('./cola_public_1.1.zip'):
wget.download(url, './cola_public_1.1.zip')
# + [markdown] id="_mKctx-ll2FB"
# Unzip the dataset to the file system. You can browse the file system of the Colab instance in the sidebar on the left.
# + id="0Yv-tNv20dnH" colab={"base_uri": "https://localhost:8080/"} outputId="73279854-d6b9-4e49-9fe8-747e48aa4bb8"
# Unzip the dataset (if we haven't already)
if not os.path.exists('./cola_public/'):
    # !unzip cola_public_1.1.zip
# + [markdown] id="oQUy9Tat2EF_"
# ## 2.2. Parse
# + [markdown] id="xeyVCXT31EZQ"
# We can see from the file names that both `tokenized` and `raw` versions of the data are available.
#
# We can't use the pre-tokenized version because, in order to apply the pre-trained BERT, we *must* use the tokenizer provided by the model. This is because (1) the model has a specific, fixed vocabulary and (2) the BERT tokenizer has a particular way of handling out-of-vocabulary words.
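# + [markdown]
# As a rough sketch of that second point (this is a toy illustration of the longest-match-first WordPiece idea with a made-up vocabulary, not BERT's actual implementation), an out-of-vocabulary word is split greedily into known subword pieces:

```python
# Toy WordPiece-style tokenizer: greedy longest-match-first subword splitting.
# The vocabulary below is made up for illustration; BERT's real vocabulary
# has roughly 30,000 entries.
def toy_wordpiece(word, vocab):
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        match = None
        # Try the longest remaining substring first, shrinking until a
        # vocabulary entry is found.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # non-initial pieces carry a '##' prefix
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return ["[UNK]"]  # no piece matches at all
        tokens.append(match)
        start = end
    return tokens

vocab = {"embed", "##ding", "##s", "play", "##ing"}
print(toy_wordpiece("embeddings", vocab))  # ['embed', '##ding', '##s']
print(toy_wordpiece("playing", vocab))     # ['play', '##ing']
```

# + [markdown]
# Because the real vocabulary is learned from a large corpus, almost any English word decomposes into meaningful pieces this way rather than collapsing to `[UNK]`.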
# + [markdown] id="MYWzeGSY2xh3"
# We'll use pandas to parse the "in-domain" training set and look at a few of its properties and data points.
# + id="_UkeC7SG2krJ" colab={"base_uri": "https://localhost:8080/", "height": 388} outputId="37eb6257-a60c-40d7-c05f-04747bb108a3"
import pandas as pd
# Load the dataset into a pandas dataframe.
df = pd.read_csv("./cola_public/raw/in_domain_train.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])
# Report the number of sentences.
print('Number of training sentences: {:,}\n'.format(df.shape[0]))
# Display 10 random rows from the data.
df.sample(10)
# + [markdown] id="kfWzpPi92UAH"
# The two properties we actually care about are the `sentence` and its `label`, which is referred to as the "acceptability judgment" (0=unacceptable, 1=acceptable).
# + [markdown] id="H_LpQfzCn9_o"
# Here are five sentences which are labeled as not grammatically acceptable. Note how much more difficult this task is than something like sentiment analysis!
# + id="blqIvQaQncdJ" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="00b1e7da-17cc-4cb7-c2e6-c646e9a7665f"
df.loc[df.label == 0].sample(5)[['sentence', 'label']]
# + [markdown] id="4SMZ5T5Imhlx"
#
#
# Let's extract the sentences and labels of our training set as numpy ndarrays.
# + id="GuE5BqICAne2"
# Get the lists of sentences and their labels.
sentences = df.sentence.values
labels = df.label.values
# + [markdown] id="ex5O1eV-Pfct"
# # 3. Tokenization & Input Formatting
#
# In this section, we'll transform our dataset into the format that BERT can be trained on.
# + [markdown] id="-8kEDRvShcU5"
# ## 3.1. BERT Tokenizer
# + [markdown] id="bWOPOyWghJp2"
#
# To feed our text to BERT, it must be split into tokens, and then these tokens must be mapped to their index in the tokenizer vocabulary.
#
# The tokenization must be performed by the tokenizer included with BERT--the below cell will download this for us. We'll be using the "uncased" version here.
#
# + id="Z474sSC6oe7A" colab={"base_uri": "https://localhost:8080/", "height": 180, "referenced_widgets": ["b01c75063f644b2c879fa2cda5bd1c31", "b4e1dc6116514b54a0a83286e3f48a34", "07b28a05436f44cbbe1284fcc1c347c8", "<KEY>", "cd529e65e76a498c96e96e5a21a1617f", "<KEY>", "41aca9ac7de249de85a157f3fbb556e7", "<KEY>", "0fd1c267e0584737a53c81c4131e4ed2", "014f2928a8b544228b9c32c5c12ab3b8", "c370672636da4c169ea1ee482c1769de", "9c154d4e87704ac6a8343ad38a5539d4", "497ccacf905b41e894b07b65e4b8d2aa", "1080e501cf7145c8a9ce7ae15cf40f6a", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "81dac82b91a74b179097d76607baf539", "a2789483808e47a583e3e9feba22d69a", "<KEY>", "d06a09066bef49cbb7a36586a7d63d2b", "<KEY>", "<KEY>"]} outputId="8cc8f129-9fbb-4c5e-da7c-c8fdde65fa45"
from transformers import BertTokenizer
# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
# + [markdown] id="dFzmtleW6KmJ"
# Let's apply the tokenizer to one sentence just to see the output.
#
# + id="dLIbudgfh6F0" colab={"base_uri": "https://localhost:8080/"} outputId="21154ecb-2052-496f-8df4-ef1fdf16fccb"
# Print the original sentence.
print(' Original: ', sentences[0])
# Print the sentence split into tokens.
print('Tokenized: ', tokenizer.tokenize(sentences[0]))
# Print the sentence mapped to token ids.
print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0])))
# + [markdown] id="WeNIc4auFUdF"
# When we actually convert all of our sentences, we'll use the `tokenizer.encode` function to handle both steps, rather than calling `tokenize` and `convert_tokens_to_ids` separately.
#
# Before we can do that, though, we need to talk about some of BERT's formatting requirements.
# + [markdown] id="viKGCCh8izww"
# ## 3.2. Required Formatting
# + [markdown] id="yDcqNlvVhL5W"
# The above code left out a few required formatting steps that we'll look at here.
#
# *Side Note: The input format to BERT seems "over-specified" to me... We are required to give it a number of pieces of information which seem redundant, or like they could easily be inferred from the data without us explicitly providing it. But it is what it is, and I suspect it will make more sense once I have a deeper understanding of the BERT internals.*
#
# We are required to:
# 1. Add special tokens to the start and end of each sentence.
# 2. Pad & truncate all sentences to a single constant length.
# 3. Explicitly differentiate real tokens from padding tokens with the "attention mask".
#
#
# + [markdown] id="V6mceWWOjZnw"
# ### Special Tokens
#
# + [markdown] id="Ykk0P9JiKtVe"
#
# **`[SEP]`**
#
# At the end of every sentence, we need to append the special `[SEP]` token.
#
# This token is an artifact of two-sentence tasks, where BERT is given two separate sentences and asked to determine something (e.g., can the answer to the question in sentence A be found in sentence B?).
#
# I am not certain yet why the token is still required when we have only single-sentence input, but it is!
#
# + [markdown] id="86C9objaKu8f"
# **`[CLS]`**
#
# For classification tasks, we must prepend the special `[CLS]` token to the beginning of every sentence.
#
# This token has special significance. BERT consists of 12 Transformer layers. Each transformer takes in a list of token embeddings, and produces the same number of embeddings on the output (but with the feature values changed, of course!).
#
# 
#
# On the output of the final (12th) transformer, *only the first embedding (corresponding to the [CLS] token) is used by the classifier*.
#
# > "The first token of every sequence is always a special classification token (`[CLS]`). The final hidden state
# corresponding to this token is used as the aggregate sequence representation for classification
# tasks." (from the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))
#
# You might think to try some pooling strategy over the final embeddings, but this isn't necessary. Because BERT is trained to only use this [CLS] token for classification, we know that the model has been motivated to encode everything it needs for the classification step into that single 768-value embedding vector. It's already done the pooling for us!
#
#
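# + [markdown]
# A minimal numpy sketch (with random stand-ins for the real weights and hidden states, and `hidden_size = 768` as in BERT-base) makes this concrete: the classifier is just a linear layer applied to the first row of the final layer's output.

```python
import numpy as np

np.random.seed(0)
seq_len, hidden_size, num_labels = 64, 768, 2

# Stand-in for the final transformer layer's output: one vector per token.
final_hidden = np.random.randn(seq_len, hidden_size)

# Stand-in for the classification head's weights.
W = np.random.randn(hidden_size, num_labels)
b = np.zeros(num_labels)

# Only the first embedding -- the one for [CLS] -- reaches the classifier.
cls_vector = final_hidden[0]
logits = cls_vector @ W + b
print(logits.shape)  # (2,)
```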
# + [markdown] id="u51v0kFxeteu"
# ### Sentence Length & Attention Mask
#
#
# + [markdown] id="qPNuwqZVK3T6"
# The sentences in our dataset obviously have varying lengths, so how does BERT handle this?
#
# BERT has two constraints:
# 1. All sentences must be padded or truncated to a single, fixed length.
# 2. The maximum sentence length is 512 tokens.
#
# Padding is done with a special `[PAD]` token, which is at index 0 in the BERT vocabulary. The below illustration demonstrates padding out to a "MAX_LEN" of 8 tokens.
#
# <img src="https://drive.google.com/uc?export=view&id=1cb5xeqLu_5vPOgs3eRnail2Y00Fl2pCo" width="600">
#
# The "Attention Mask" is simply an array of 1s and 0s indicating which tokens are padding and which aren't (seems kind of redundant, doesn't it?!). This mask tells the "Self-Attention" mechanism in BERT not to incorporate these PAD tokens into its interpretation of the sentence.
#
# The maximum length does impact training and evaluation speed, however.
# For example, with a Tesla K80:
#
# `MAX_LEN = 128 --> Training epochs take ~5:28 each`
#
# `MAX_LEN = 64 --> Training epochs take ~2:57 each`
#
#
#
#
#
#
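# + [markdown]
# The padding and masking described above can be sketched in a few lines of plain Python. The token ids here are illustrative (though id 0 really is `[PAD]` in BERT's vocabulary):

```python
MAX_LEN = 8

# A short sentence's token ids, e.g. [CLS] ... [SEP], before padding.
token_ids = [101, 7592, 2088, 102]

# Pad out to MAX_LEN with the [PAD] token (id 0).
padded = token_ids + [0] * (MAX_LEN - len(token_ids))

# The attention mask: 1 for real tokens, 0 for padding.
attention_mask = [1 if tid != 0 else 0 for tid in padded]

print(padded)          # [101, 7592, 2088, 102, 0, 0, 0, 0]
print(attention_mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```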
# + [markdown] id="l6w8elb-58GJ"
# ## 3.3. Tokenize Dataset
# + [markdown] id="U28qy4P-NwQ9"
# The transformers library provides a helpful `encode` function which will handle most of the parsing and data prep steps for us.
#
# Before we are ready to encode our text, though, we need to decide on a **maximum sentence length** for padding / truncating to.
#
# The below cell will perform one tokenization pass of the dataset in order to measure the maximum sentence length.
# + id="cKsH2sU0OCQA" colab={"base_uri": "https://localhost:8080/"} outputId="2e6143be-cbf4-434e-8c72-a3118c0c314b"
max_len = 0
# For every sentence...
for sent in sentences:
# Tokenize the text and add `[CLS]` and `[SEP]` tokens.
input_ids = tokenizer.encode(sent, add_special_tokens=True)
# Update the maximum sentence length.
max_len = max(max_len, len(input_ids))
print('Max sentence length: ', max_len)
# + [markdown] id="1M296yz577fV"
# Just in case there are some longer test sentences, I'll set the maximum length to 64.
#
# + [markdown] id="tIWAoWL2RK1p"
# Now we're ready to perform the real tokenization.
#
# The `tokenizer.encode_plus` function combines multiple steps for us:
#
# 1. Split the sentence into tokens.
# 2. Add the special `[CLS]` and `[SEP]` tokens.
# 3. Map the tokens to their IDs.
# 4. Pad or truncate all sentences to the same length.
# 5. Create the attention masks which explicitly differentiate real tokens from `[PAD]` tokens.
#
# The first four features are in `tokenizer.encode`, but I'm using `tokenizer.encode_plus` to get the fifth item (attention masks). Documentation is [here](https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=encode_plus#transformers.PreTrainedTokenizer.encode_plus).
#
# + id="2bBdb3pt8LuQ" colab={"base_uri": "https://localhost:8080/"} outputId="418d759b-c2a8-4beb-ebcc-40eec565245f"
# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []
# For every sentence...
for sent in sentences:
# `encode_plus` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
# (5) Pad or truncate the sentence to `max_length`
# (6) Create attention masks for [PAD] tokens.
encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 64, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)
# Print sentence 0, now as a list of IDs.
print('Original: ', sentences[0])
print('Token IDs:', input_ids[0])
# + [markdown] id="aRp4O7D295d_"
# ## 3.4. Training & Validation Split
#
# + [markdown] id="qu0ao7p8rb06"
# Divide up our training set to use 90% for training and 10% for validation.
# + id="GEgLpFVlo1Z-" colab={"base_uri": "https://localhost:8080/"} outputId="3faa37ea-7589-43fb-a90e-1eb0b35f25ec"
from torch.utils.data import TensorDataset, random_split
# Combine the training inputs into a TensorDataset.
dataset = TensorDataset(input_ids, attention_masks, labels)
# Create a 90-10 train-validation split.
# Calculate the number of samples to include in each set.
train_size = int(0.9 * len(dataset))
val_size = len(dataset) - train_size
# Divide the dataset by randomly selecting samples.
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
print('{:>5,} training samples'.format(train_size))
print('{:>5,} validation samples'.format(val_size))
# + [markdown] id="dD9i6Z2pG-sN"
# We'll also create an iterator for our dataset using the torch DataLoader class. This helps save on memory during training because, unlike a for loop, with an iterator the entire dataset does not need to be loaded into memory.
# + id="XGUqOCtgqGhP"
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
# The DataLoader needs to know our batch size for training, so we specify it
# here. For fine-tuning BERT on a specific task, the authors recommend a batch
# size of 16 or 32.
batch_size = 32
# Create the DataLoaders for our training and validation sets.
# We'll take training samples in random order.
train_dataloader = DataLoader(
train_dataset, # The training samples.
sampler = RandomSampler(train_dataset), # Select batches randomly
batch_size = batch_size # Trains with this batch size.
)
# For validation the order doesn't matter, so we'll just read them sequentially.
validation_dataloader = DataLoader(
val_dataset, # The validation samples.
sampler = SequentialSampler(val_dataset), # Pull out batches sequentially.
batch_size = batch_size # Evaluate with this batch size.
)
# + [markdown] id="8bwa6Rts-02-"
# # 4. Train Our Classification Model
# + [markdown] id="3xYQ3iLO08SX"
# Now that our input data is properly formatted, it's time to fine tune the BERT model.
# + [markdown] id="D6TKgyUzPIQc"
# ## 4.1. BertForSequenceClassification
# + [markdown] id="1sjzRT1V0zwm"
# For this task, we first want to modify the pre-trained BERT model to give outputs for classification, and then we want to continue training the model on our dataset until the entire model, end-to-end, is well-suited for our task.
#
# Thankfully, the huggingface pytorch implementation includes a set of interfaces designed for a variety of NLP tasks. Though these interfaces are all built on top of a trained BERT model, each has different top layers and output types designed to accommodate their specific NLP task.
#
# Here is the current list of classes provided for fine-tuning:
# * BertModel
# * BertForPreTraining
# * BertForMaskedLM
# * BertForNextSentencePrediction
# * **BertForSequenceClassification** - The one we'll use.
# * BertForTokenClassification
# * BertForQuestionAnswering
#
# The documentation for these can be found [here](https://huggingface.co/transformers/v2.2.0/model_doc/bert.html).
# + [markdown] id="BXYitPoE-cjH"
#
#
# We'll be using [BertForSequenceClassification](https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#bertforsequenceclassification). This is the normal BERT model with an added single linear layer on top for classification that we will use as a sentence classifier. As we feed input data, the entire pre-trained BERT model and the additional untrained classification layer are trained on our specific task.
#
# + [markdown] id="WnQW9E-bBCRt"
# OK, let's load BERT! There are a few different pre-trained BERT models available. "bert-base-uncased" means the version that has only lowercase letters ("uncased") and is the smaller version of the two ("base" vs "large").
#
# The documentation for `from_pretrained` can be found [here](https://huggingface.co/transformers/v2.2.0/main_classes/model.html#transformers.PreTrainedModel.from_pretrained), with the additional parameters defined [here](https://huggingface.co/transformers/v2.2.0/main_classes/configuration.html#transformers.PretrainedConfig).
# + id="gFsCTp_mporB" colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["13b90c5fc28147bb84e2c5bc34a721ef", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "53561298087e45a3bb2273d3cefd608a", "1df769f1aa84436791e66b043ea5a77b", "<KEY>", "<KEY>", "<KEY>", "761612ddfe4446b8a864b9900eaaa291", "96e8c930531849a5bd539a5c3357d8bb", "7ea30664f7dd413c963f214cb8a45ca1", "37b1e4dc20764a7b81d3f10d7971deee", "8d15d9bbb9b7485496c08d5be535c009"]} outputId="347271a2-6c34-43dd-f604-fb40efd51192"
from transformers import BertForSequenceClassification, AdamW, BertConfig
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
# You can increase this for multi-class tasks.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
# Tell pytorch to run this model on the GPU.
model.cuda()
# + [markdown] id="e0Jv6c7-HHDW"
# Just for curiosity's sake, we can browse all of the model's parameters by name here.
#
# In the below cell, I've printed out the names and dimensions of the weights for:
#
# 1. The embedding layer.
# 2. The first of the twelve transformers.
# 3. The output layer.
#
#
#
# + id="8PIiVlDYCtSq" colab={"base_uri": "https://localhost:8080/"} outputId="cf97045d-2458-4c8b-cf3b-f8111a79366b"
# Get all of the model's parameters as a list of tuples.
params = list(model.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
# + [markdown] id="qRWT-D4U_Pvx"
# ## 4.2. Optimizer & Learning Rate Scheduler
# + [markdown] id="8o-VEBobKwHk"
# Now that we have our model loaded, we need to choose our training hyperparameters.
#
# For the purposes of fine-tuning, the authors recommend choosing from the following values (from Appendix A.3 of the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf)):
#
# >- **Batch size:** 16, 32
# - **Learning rate (Adam):** 5e-5, 3e-5, 2e-5
# - **Number of epochs:** 2, 3, 4
#
# We chose:
# * Batch size: 32 (set when creating our DataLoaders)
# * Learning rate: 2e-5
# * Epochs: 4 (we'll see that this is probably too many...)
#
# The epsilon parameter `eps = 1e-8` is "a very small number to prevent any division by zero in the implementation" (from [here](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)).
#
# You can find the creation of the AdamW optimizer in `run_glue.py` [here](https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L109).
# + id="GLs72DuMODJO"
# Note: AdamW is a class from the huggingface library (as opposed to pytorch)
# I believe the 'W' stands for 'Weight Decay fix'.
optimizer = AdamW(model.parameters(),
lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5
eps = 1e-8 # args.adam_epsilon - default is 1e-8.
)
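# + [markdown]
# To see where `eps` enters, here is a sketch of a single Adam-style update for one scalar parameter (AdamW additionally decouples weight decay from this update, which is omitted here):

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized averages.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # eps keeps this denominator away from zero when v_hat is tiny.
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)  # ~0.99998: stepped down by roughly lr after bias correction
```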
# + id="-p0upAhhRiIx"
from transformers import get_linear_schedule_with_warmup
# Number of training epochs. The BERT authors recommend between 2 and 4.
# We chose to run for 4, but we'll see later that this may be over-fitting the
# training data.
epochs = 4
# Total number of training steps is [number of batches] x [number of epochs].
# (Note that this is not the same as the number of training samples).
total_steps = len(train_dataloader) * epochs
# Create the learning rate scheduler.
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0, # Default value in run_glue.py
num_training_steps = total_steps)
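# + [markdown]
# The schedule itself is easy to sketch: the learning-rate multiplier ramps up linearly over the warmup steps, then decays linearly to zero by the last training step. (With `num_warmup_steps = 0`, as above, it's pure linear decay.) This is a sketch of the behavior, not the library's code:

```python
def lr_multiplier(step, num_warmup_steps, num_training_steps):
    # Linear warmup from 0 up to 1...
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    # ...then linear decay from 1 down to 0.
    return max(0.0, (num_training_steps - step) /
                    max(1, num_training_steps - num_warmup_steps))

total = 1000
print(lr_multiplier(0, 100, total))     # 0.0 -- start of warmup
print(lr_multiplier(100, 100, total))   # 1.0 -- warmup complete
print(lr_multiplier(550, 100, total))   # 0.5 -- halfway through the decay
print(lr_multiplier(1000, 100, total))  # 0.0 -- end of training
```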
# + [markdown] id="RqfmWwUR_Sox"
# ## 4.3. Training Loop
# + [markdown] id="_QXZhFb4LnV5"
# Below is our training loop. There's a lot going on, but fundamentally for each pass in our loop we have a training phase and a validation phase.
#
# > *Thank you to [<NAME>](https://ca.linkedin.com/in/stasbekman) for contributing the insights and code for using validation loss to detect over-fitting!*
#
# **Training:**
# - Unpack our data inputs and labels
# - Load data onto the GPU for acceleration
# - Clear out the gradients calculated in the previous pass.
# - In pytorch the gradients accumulate by default (useful for things like RNNs) unless you explicitly clear them out.
# - Forward pass (feed input data through the network)
# - Backward pass (backpropagation)
# - Tell the network to update parameters with optimizer.step()
# - Track variables for monitoring progress
#
# **Evaluation:**
# - Unpack our data inputs and labels
# - Load data onto the GPU for acceleration
# - Forward pass (feed input data through the network)
# - Compute loss on our validation data and track variables for monitoring progress
#
# Pytorch hides all of the detailed calculations from us, but we've commented the code to point out which of the above steps are happening on each line.
#
# > *PyTorch also has some [beginner tutorials](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) which you may also find helpful.*
# + [markdown] id="pE5B99H5H2-W"
# Define a helper function for calculating accuracy.
# + id="9cQNvaZ9bnyy"
import numpy as np
# Function to calculate the accuracy of our predictions vs labels
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
# + [markdown] id="KNhRtWPXH9C3"
# Helper function for formatting elapsed times as `hh:mm:ss`
#
# + id="gpt6tR83keZD"
import time
import datetime
def format_time(elapsed):
'''
Takes a time in seconds and returns a string hh:mm:ss
'''
# Round to the nearest second.
elapsed_rounded = int(round((elapsed)))
# Format as hh:mm:ss
return str(datetime.timedelta(seconds=elapsed_rounded))
# + [markdown] id="cfNIhN19te3N"
# We're ready to kick off the training!
# + id="6J-FYdx6nFE_" colab={"base_uri": "https://localhost:8080/"} outputId="beca44cf-9d16-4139-9af7-bcc214b121db"
import random
import numpy as np
# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128
# Set the seed value all over the place to make this reproducible.
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# We'll store a number of quantities such as training and validation loss,
# validation accuracy, and timings.
training_stats = []
# Measure the total training time for the whole run.
total_t0 = time.time()
# For each epoch...
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
# Perform one full pass over the training set.
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
# Measure how long the training epoch takes.
t0 = time.time()
# Reset the total loss for this epoch.
total_train_loss = 0
    # Put the model into training mode. Don't be misled--the call to
# `train` just changes the *mode*, it doesn't *perform* the training.
# `dropout` and `batchnorm` layers behave differently during training
# vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
model.train()
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
# Progress update every 40 batches.
if step % 40 == 0 and not step == 0:
# Calculate elapsed time in minutes.
elapsed = format_time(time.time() - t0)
# Report progress.
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using the
# `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Always clear any previously calculated gradients before performing a
# backward pass. PyTorch doesn't do this automatically because
# accumulating the gradients is "convenient while training RNNs".
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
model.zero_grad()
# Perform a forward pass (evaluate the model on this training batch).
# In PyTorch, calling `model` will in turn call the model's `forward`
# function and pass down the arguments. The `forward` function is
# documented here:
# https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification
# The results are returned in a results object, documented here:
# https://huggingface.co/transformers/main_classes/output.html#transformers.modeling_outputs.SequenceClassifierOutput
# Specifically, we'll get the loss (because we provided labels) and the
# "logits"--the model outputs prior to activation.
result = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels,
return_dict=True)
loss = result.loss
logits = result.logits
# Accumulate the training loss over all of the batches so that we can
# calculate the average loss at the end. `loss` is a Tensor containing a
# single value; the `.item()` function just returns the Python value
# from the tensor.
total_train_loss += loss.item()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are
# modified based on their gradients, the learning rate, etc.
optimizer.step()
# Update the learning rate.
scheduler.step()
# Calculate the average loss over all of the batches.
avg_train_loss = total_train_loss / len(train_dataloader)
# Measure how long this epoch took.
training_time = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
    print(" Training epoch took: {:}".format(training_time))
# ========================================
# Validation
# ========================================
# After the completion of each training epoch, measure our performance on
# our validation set.
print("")
print("Running Validation...")
t0 = time.time()
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
# Tracking variables
total_eval_accuracy = 0
total_eval_loss = 0
nb_eval_steps = 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using
# the `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Tell pytorch not to bother with constructing the compute graph during
# the forward pass, since this is only needed for backprop (training).
with torch.no_grad():
# Forward pass, calculate logit predictions.
# token_type_ids is the same as the "segment ids", which
# differentiates sentence 1 and 2 in 2-sentence tasks.
result = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels,
return_dict=True)
# Get the loss and "logits" output by the model. The "logits" are the
# output values prior to applying an activation function like the
# softmax.
loss = result.loss
logits = result.logits
# Accumulate the validation loss.
total_eval_loss += loss.item()
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Calculate the accuracy for this batch of test sentences, and
# accumulate it over all batches.
total_eval_accuracy += flat_accuracy(logits, label_ids)
# Report the final accuracy for this validation run.
avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
print(" Accuracy: {0:.2f}".format(avg_val_accuracy))
# Calculate the average loss over all of the batches.
avg_val_loss = total_eval_loss / len(validation_dataloader)
# Measure how long the validation run took.
validation_time = format_time(time.time() - t0)
print(" Validation Loss: {0:.2f}".format(avg_val_loss))
print(" Validation took: {:}".format(validation_time))
# Record all statistics from this epoch.
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Valid. Accur.': avg_val_accuracy,
'Training Time': training_time,
'Validation Time': validation_time
}
)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
# + [markdown] id="VQTvJ1vRP7u4"
# Let's view the summary of the training process.
# + id="6O_NbXFGMukX" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="6b68d165-db3d-4f91-8cf2-5be40fea3ead"
import pandas as pd
# Display floats with two decimal places.
pd.set_option('display.precision', 2)
# Create a DataFrame from our training statistics.
df_stats = pd.DataFrame(data=training_stats)
# Use the 'epoch' as the row index.
df_stats = df_stats.set_index('epoch')
# A hack to force the column headers to wrap.
#df = df.style.set_table_styles([dict(selector="th",props=[('max-width', '70px')])])
# Display the table.
df_stats
# + [markdown] id="1-G03mmwH3aI"
# Notice that, while the training loss is going down with each epoch, the validation loss is increasing! This suggests that we are training our model too long, and it's over-fitting on the training data.
#
# (For reference, we are using 7,695 training samples and 856 validation samples).
#
# Validation loss is a more fine-grained measure than accuracy, because accuracy only records which side of the decision threshold each prediction falls on, while the loss also reflects the exact output value.
#
# If the model predicts the correct answer, but with less confidence, the validation loss will capture this, while accuracy will not.
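A standalone toy illustration of this point (not part of the notebook's pipeline): cross-entropy loss distinguishes a confident correct prediction from a hesitant one, while accuracy scores them identically. The probability values here are invented for the example.

```python
import numpy as np

def cross_entropy(p_correct):
    # negative log-likelihood assigned to the true class
    return -np.log(p_correct)

def accuracy(p_correct):
    # binary case: "correct" whenever the true class gets > 0.5 probability
    return float(p_correct > 0.5)

confident, hesitant = 0.95, 0.55  # assumed probabilities for the true class

# Both predictions are correct, so accuracy cannot tell them apart...
print(accuracy(confident), accuracy(hesitant))   # 1.0 1.0
# ...but the loss penalizes the low-confidence prediction much more heavily.
print(cross_entropy(hesitant) > cross_entropy(confident))  # True
```

This is why two models with identical validation accuracy can still differ meaningfully in validation loss.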
# + id="68xreA9JAmG5" colab={"base_uri": "https://localhost:8080/", "height": 427} outputId="6463399f-0b52-4120-f643-e2c0ee764f39"
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# Use plot styling from seaborn.
sns.set(style='darkgrid')
# Increase the plot size and font size.
sns.set(font_scale=1.5)
plt.rcParams["figure.figsize"] = (12,6)
# Plot the learning curve.
plt.plot(df_stats['Training Loss'], 'b-o', label="Training")
plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation")
# Label the plot.
plt.title("Training & Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.xticks([1, 2, 3, 4])
plt.show()
# + [markdown] id="mkyubuJSOzg3"
# # 5. Performance On Test Set
# + [markdown] id="DosV94BYIYxg"
# Now we'll load the holdout dataset and prepare inputs just as we did with the training set. Then we'll evaluate predictions using [Matthew's correlation coefficient](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html) because this is the metric used by the wider NLP community to evaluate performance on CoLA. With this metric, +1 is the best score, and -1 is the worst score. This way, we can see how well we perform against the state of the art models for this specific task.
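For intuition about the score's range, MCC can be computed directly from the confusion-matrix counts with the standard formula. The sketch below is a self-contained illustration (toy labels, not the CoLA data) showing the two extremes:

```python
import numpy as np

def mcc(y_true, y_pred):
    # Matthews correlation coefficient from confusion-matrix counts
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = [0, 0, 1, 1]
print(mcc(y_true, [0, 0, 1, 1]))  # 1.0: perfect agreement
print(mcc(y_true, [1, 1, 0, 0]))  # -1.0: perfect disagreement
```

`sklearn.metrics.matthews_corrcoef`, used later in the notebook, computes the same quantity.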
# + [markdown] id="Tg42jJqqM68F"
# ### 5.1. Data Preparation
#
# + [markdown] id="xWe0_JW21MyV"
#
# We'll need to apply all of the same steps that we did for the training data to prepare our test data set.
# + id="mAN0LZBOOPVh" colab={"base_uri": "https://localhost:8080/"} outputId="23db8f72-4f5f-4877-9688-acd3bd43fd34"
import pandas as pd
# Load the dataset into a pandas dataframe.
df = pd.read_csv("./cola_public/raw/out_of_domain_dev.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])
# Report the number of sentences.
print('Number of test sentences: {:,}\n'.format(df.shape[0]))
# Create sentence and label lists
sentences = df.sentence.values
labels = df.label.values
# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []
# For every sentence...
for sent in sentences:
# `encode_plus` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
# (5) Pad or truncate the sentence to `max_length`
# (6) Create attention masks for [PAD] tokens.
encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 64, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)
# Set the batch size.
batch_size = 32
# Create the DataLoader.
prediction_data = TensorDataset(input_ids, attention_masks, labels)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
# + [markdown] id="16lctEOyNFik"
# ## 5.2. Evaluate on Test Set
#
# + [markdown] id="rhR99IISNMg9"
#
# With the test set prepared, we can apply our fine-tuned model to generate predictions on the test set.
# + id="Hba10sXR7Xi6" colab={"base_uri": "https://localhost:8080/"} outputId="9aa2db93-909a-418e-bfa6-93bca735cc58"
# Prediction on test set
print('Predicting labels for {:,} test sentences...'.format(len(input_ids)))
# Put model in evaluation mode
model.eval()
# Tracking variables
predictions , true_labels = [], []
# Predict
for batch in prediction_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and
# speeding up prediction
with torch.no_grad():
# Forward pass, calculate logit predictions.
result = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
return_dict=True)
logits = result.logits
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Store predictions and true labels
predictions.append(logits)
true_labels.append(label_ids)
print(' DONE.')
# + [markdown] id="-5jscIM8R4Gv"
# Accuracy on the CoLA benchmark is measured using the "[Matthews correlation coefficient](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)" (MCC).
#
# We use MCC here because the classes are imbalanced:
#
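To see why imbalance matters, compare accuracy and MCC for a degenerate classifier that always predicts the majority class. This is a standalone toy example with an assumed 90/10 class split, not the notebook's model:

```python
import numpy as np

def mcc(y_true, y_pred):
    # Matthews correlation coefficient from confusion-matrix counts
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([1] * 90 + [0] * 10)   # heavily imbalanced: 90% positive
y_pred = np.ones_like(y_true)            # degenerate classifier: always predict 1

print(np.mean(y_true == y_pred))  # 0.9 -- accuracy looks great
print(mcc(y_true, y_pred))        # 0.0 -- MCC exposes that no discrimination happened
```

Accuracy rewards simply guessing the common class; MCC stays at zero unless the classifier actually separates the two classes.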
# + id="hWcy0X1hirdx" colab={"base_uri": "https://localhost:8080/"} outputId="835ba1e0-598a-470d-fd99-753fafb616c6"
print('Positive samples: %d of %d (%.2f%%)' % (df.label.sum(), len(df.label), (df.label.sum() / len(df.label) * 100.0)))
# + id="cRaZQ4XC7kLs" colab={"base_uri": "https://localhost:8080/"} outputId="94f3dea1-bf03-4d3a-f2bd-a838457a06df"
from sklearn.metrics import matthews_corrcoef
matthews_set = []
# Evaluate each test batch using Matthew's correlation coefficient
print('Calculating Matthews Corr. Coef. for each batch...')
# For each input batch...
for i in range(len(true_labels)):
# The predictions for this batch are a 2-column ndarray (one column for "0"
# and one column for "1"). Pick the label with the highest value and turn this
# into a list of 0s and 1s.
pred_labels_i = np.argmax(predictions[i], axis=1).flatten()
# Calculate and store the coef for this batch.
matthews = matthews_corrcoef(true_labels[i], pred_labels_i)
matthews_set.append(matthews)
# + [markdown] id="IUM0UA1qJaVB"
# The final score will be based on the entire test set, but let's take a look at the scores on the individual batches to get a sense of the variability in the metric between batches.
#
# Each batch has 32 sentences in it, except the last batch which has only (516 % 32) = 4 test sentences in it.
#
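The batch bookkeeping is simple arithmetic; as a quick sanity check using the 516-sentence, batch-size-32 figures quoted above:

```python
import math

n_samples, batch_size = 516, 32

n_batches = math.ceil(n_samples / batch_size)
sizes = [min(batch_size, n_samples - i * batch_size) for i in range(n_batches)]

print(n_batches)   # 17 batches in total
print(sizes[-1])   # the last one holds 516 % 32 = 4 sentences
```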
# + id="pyfY1tqxU0t9" colab={"base_uri": "https://localhost:8080/", "height": 427} outputId="ed1189e9-dcf3-432c-a438-57ffe6304018"
# Create a barplot showing the MCC score for each batch of test samples.
ax = sns.barplot(x=list(range(len(matthews_set))), y=matthews_set, ci=None)
plt.title('MCC Score per Batch')
plt.ylabel('MCC Score (-1 to +1)')
plt.xlabel('Batch #')
plt.show()
# + [markdown] id="1YrjAPX2V-l4"
# Now we'll combine the results for all of the batches and calculate our final MCC score.
# + id="oCYZa1lQ8Jn8" colab={"base_uri": "https://localhost:8080/"} outputId="373d238a-3c8e-42e3-fbd2-55ac65c73bed"
# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)
# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()
# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)
# Calculate the MCC
mcc = matthews_corrcoef(flat_true_labels, flat_predictions)
print('Total MCC: %.3f' % mcc)
# + [markdown] id="jXx0jPc4HUfZ"
# Cool! In about half an hour and without doing any hyperparameter tuning (adjusting the learning rate, epochs, batch size, ADAM properties, etc.) we are able to get a good score.
#
# > *Note: To maximize the score, we should remove the "validation set" (which we used to help determine how many epochs to train for) and train on the entire training set.*
#
# The library documents the expected accuracy for this benchmark [here](https://huggingface.co/transformers/examples.html#glue) as `49.23`.
#
# You can also look at the official leaderboard [here](https://gluebenchmark.com/leaderboard/submission/zlssuBTm5XRs0aSKbFYGVIVdvbj1/-LhijX9VVmvJcvzKymxy).
#
# Note that (due to the small dataset size?) the accuracy can vary significantly between runs.
#
# + [markdown] id="GfjYoa6WmkN6"
# # Conclusion
# + [markdown] id="xlQG7qgkmf4n"
# This post demonstrates that, starting from a pre-trained BERT model and using the PyTorch interface, you can quickly create a high-quality classifier with minimal effort and training time, regardless of the specific NLP task you are interested in.
# + [markdown] id="YUmsUOIv8EUO"
# # Appendix
#
# + [markdown] id="q2079Qyn8Mt8"
# ## A1. Saving & Loading Fine-Tuned Model
#
# This first cell (taken from `run_glue.py` [here](https://github.com/huggingface/transformers/blob/35ff345fc9df9e777b27903f11fa213e4052595b/examples/run_glue.py#L495)) writes the model and tokenizer out to disk.
# + id="6ulTWaOr8QNY" colab={"base_uri": "https://localhost:8080/"} outputId="28b62f46-52ed-4297-b326-d6d198d54b0a"
import os
# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
output_dir = './model_save/'
# Create output directory if needed
if not os.path.exists(output_dir):
os.makedirs(output_dir)
print("Saving model to %s" % output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
# Good practice: save your training arguments together with the trained model
# torch.save(args, os.path.join(output_dir, 'training_args.bin'))
# + [markdown] id="Z-tjHkR7lc1I"
# Let's check out the file sizes, out of curiosity.
# + id="mqMzI3VTCZo5" colab={"base_uri": "https://localhost:8080/"} outputId="6b303b19-2a6b-4c93-8169-4f5242ebd5a9"
# !ls -l --block-size=K ./model_save/
# + [markdown] id="fr_bt2rFlgDn"
# The largest file is the model weights, at around 418 megabytes.
# + id="-WUFUIQ8Cu8D" colab={"base_uri": "https://localhost:8080/"} outputId="4d6cd231-82a3-4f7e-fa04-1a77df4f20af"
# !ls -l --block-size=M ./model_save/pytorch_model.bin
# + [markdown] id="dzGKvOFAll_e"
# To save your model across Colab Notebook sessions, download it to your local machine, or ideally copy it to your Google Drive.
# + id="Trr-A-POC18_" outputId="f0f13f27-ecdf-4e1e-d0ca-4fd937684de3" colab={"base_uri": "https://localhost:8080/", "height": 128}
# Mount Google Drive to this Notebook instance.
from google.colab import drive
drive.mount('/content/drive')
# + id="NxlZsafTC-V5"
# Copy the model files to a directory in your Google Drive.
# !cp -r ./model_save/ "./drive/Shared drives/ChrisMcCormick.AI/Blog Posts/BERT Fine-Tuning/"
# + [markdown] id="W0vstijw85SZ"
# The following functions will load the model back from disk.
# + id="nskPzUM084zL"
# Load a trained model and vocabulary that you have fine-tuned
model = model_class.from_pretrained(output_dir)
tokenizer = tokenizer_class.from_pretrained(output_dir)
# Copy the model to the GPU.
model.to(device)
# + [markdown] id="NIWouvDrGVAi"
# ## A.2. Weight Decay
#
#
# + [markdown] id="f123ZAlF1OyW"
# The huggingface example includes the following code block for enabling weight decay, but the default decay rate is "0.0", so I moved this to the appendix.
#
# This block essentially tells the optimizer to not apply weight decay to the bias terms (e.g., $ b $ in the equation $ y = Wx + b $ ). Weight decay is a form of regularization: after the gradients are computed, each weight is additionally shrunk by a small factor at every update step (e.g., multiplied by 0.99), which penalizes large weights.
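A minimal numerical sketch of this idea, with assumed learning-rate and decay values (this is plain SGD with decoupled weight decay, not the notebook's AdamW optimizer): parameters in the decay group are shrunk by a factor `(1 - lr * wd)` each step, while parameters in the no-decay group are moved only by their gradient.

```python
import numpy as np

lr, wd = 0.1, 0.01  # assumed learning rate and weight-decay rate

w = np.array([1.0, -2.0])   # a "weight" parameter: gets weight decay
b = np.array([0.5])         # a "bias" parameter: excluded from decay

grad_w = np.zeros_like(w)   # zero gradients isolate the decay term
grad_b = np.zeros_like(b)

# one SGD step with decoupled weight decay applied to `w` only
w_new = w - lr * grad_w - lr * wd * w
b_new = b - lr * grad_b

print(w_new)  # each weight shrunk by the factor (1 - lr*wd) = 0.999
print(b_new)  # bias unchanged: [0.5]
```

With zero gradients the update reduces to pure shrinkage, which makes it easy to see why excluding biases from decay leaves them untouched.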
# + id="QxSMw0FrptiL"
# This code is taken from:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L102
# Don't apply weight decay to any parameters whose names include these tokens.
# (Here, BERT doesn't have `gamma` or `beta` parameters, only `bias` terms.)
no_decay = ['bias', 'LayerNorm.weight']
# Separate the `weight` parameters from the `bias` parameters.
# - For the `weight` parameters, this specifies a 'weight_decay_rate' of 0.01.
# - For the `bias` parameters, the 'weight_decay_rate' is 0.0.
optimizer_grouped_parameters = [
# Filter for all parameters which *don't* include 'bias', 'gamma', 'beta'.
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
# Filter for parameters which *do* include those.
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
# Note - `optimizer_grouped_parameters` only includes the parameter values, not
# the names.
# + [markdown] id="IKzLS9ohzGVu"
# # Revision History
# + [markdown] id="SZqpiHEnGqYR"
# **Version 4** - *Feb 2nd, 2020* - (current)
# * Updated all calls to `model` (fine-tuning and evaluation) to use the [`SequenceClassifierOutput`](https://huggingface.co/transformers/main_classes/output.html#transformers.modeling_outputs.SequenceClassifierOutput) class.
# * Moved illustration images to Google Drive--Colab appears to no longer support images at external URLs.
#
# **Version 3** - *Mar 18th, 2020*
# * Simplified the tokenization and input formatting (for both training and test) by leveraging the `tokenizer.encode_plus` function.
# `encode_plus` handles padding *and* creates the attention masks for us.
# * Improved explanation of attention masks.
# * Switched to using `torch.utils.data.random_split` for creating the training-validation split.
# * Added a summary table of the training statistics (validation loss, time per epoch, etc.).
# * Added validation loss to the learning curve plot, so we can see if we're overfitting.
# * Thank you to [<NAME>](https://ca.linkedin.com/in/stasbekman) for contributing this!
# * Displayed the per-batch MCC as a bar plot.
#
# **Version 2** - *Dec 20th, 2019* - [link](https://colab.research.google.com/drive/1Y4o3jh3ZH70tl6mCd76vz_IxX23biCPP)
# * huggingface renamed their library to `transformers`.
# * Updated the notebook to use the `transformers` library.
#
# **Version 1** - *July 22nd, 2019*
# * Initial version.
# + [markdown] id="FL_NnDGxRpEI"
# ## Further Work
#
# * It might make more sense to use the MCC score for “validation accuracy”, but I’ve left it out so as not to have to explain it earlier in the Notebook.
# * Seeding -- I’m not convinced that setting the seed values at the beginning of the training loop is actually creating reproducible results…
# * The MCC score seems to vary substantially across different runs. It would be interesting to run this example a number of times and show the variance.
#
# File: BERT/BERT_Fine_Tuning_Sentence_Classification_v4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook contains code to reproduce the schematic analysis/figure
# ## Imports
# +
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist
from sherlock_helpers.constants import DATA_DIR, FIG_DIR, RAW_DIR
from sherlock_helpers.functions import (
get_topic_words,
get_video_text,
multicol_display,
show_source
)
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
# ## Inspect some functions
show_source(get_video_text)
show_source(get_topic_words)
# ## Set plotting params
mpl.rcParams['pdf.fonttype'] = 42
sns.set_context('poster')
palette = [sns.color_palette()[0], sns.color_palette('bright')[2]]
# ## Load data
# +
video_text = pd.read_excel(RAW_DIR.joinpath('Sherlock_Segments_1000_NN_2017.xlsx'))
recall_text = np.load(DATA_DIR.joinpath('recall_text.npy'))
video_model, recall_models = np.load(DATA_DIR.joinpath('models_t100_v50_r10.npy'),
allow_pickle=True)
cv = np.load(DATA_DIR.joinpath('count_vectorizer_model.npy'), allow_pickle=True).item()
lda = np.load(DATA_DIR.joinpath('topic_model.npy'), allow_pickle=True).item()
# -
# ## Split video and recall into thirds
# +
p17_recall_model = recall_models[16]
p17_vr_corrmat = 1 - cdist(video_model, p17_recall_model, 'correlation')
video_thirds = np.linspace(0, video_model.shape[0], 4).astype(int)
recall_thirds = np.linspace(0, p17_recall_model.shape[0], 4).astype(int)
video_first = np.arange(video_thirds[0], video_thirds[1])
video_second = np.arange(video_thirds[1], video_thirds[2])
video_third = np.arange(video_thirds[2], video_thirds[3])
recall_first = np.arange(recall_thirds[0], recall_thirds[1])
recall_second = np.arange(recall_thirds[1], recall_thirds[2])
recall_third = np.arange(recall_thirds[2], recall_thirds[3])
# -
# ## Find best matching timepoints from each third
# +
corrmat_first = p17_vr_corrmat[np.ix_(video_first, recall_first)]
corrmat_second = p17_vr_corrmat[np.ix_(video_second, recall_second)]
corrmat_third = p17_vr_corrmat[np.ix_(video_third, recall_third)]
video_tpt1, recall_tpt1 = np.unravel_index(corrmat_first.argmax(), corrmat_first.shape)
video_tpt2, recall_tpt2 = np.unravel_index(corrmat_second.argmax(), corrmat_second.shape)
video_tpt2 += video_thirds[1]
recall_tpt2 += recall_thirds[1]
video_tpt3, recall_tpt3 = np.unravel_index(corrmat_third.argmax(), corrmat_third.shape)
video_tpt3 += video_thirds[2]
recall_tpt3 += recall_thirds[2]
# -
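The `np.ix_` / `argmax` / `unravel_index` pattern above can be checked on a toy matrix (synthetic values, purely illustrative): slice out a block, find its maximum, then shift the block-local index back into full-matrix coordinates, exactly as done with `video_thirds` and `recall_thirds`.

```python
import numpy as np

corrmat = np.arange(36, dtype=float).reshape(6, 6)
corrmat[4, 5] = 100.0                 # plant a known maximum in the lower-right block

rows, cols = np.arange(3, 6), np.arange(3, 6)
block = corrmat[np.ix_(rows, cols)]   # 3x3 sub-matrix

i, j = np.unravel_index(block.argmax(), block.shape)
i += rows[0]                          # offset back into full-matrix coordinates
j += cols[0]

print(i, j)  # 4 5 -- the planted maximum
```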
# ## Get matching video and recall text
# +
video_chunk1 = get_video_text(video_tpt1 - 24, video_tpt1 + 25)
recall_chunk1 = recall_text[recall_tpt1]
video_chunk2 = get_video_text(video_tpt2 - 24, video_tpt2 + 25)
recall_chunk2 = recall_text[recall_tpt2]
video_chunk3 = get_video_text(video_tpt3 - 24, video_tpt3 + 25)
recall_chunk3 = recall_text[recall_tpt3]
# -
print(video_chunk1)
print(recall_chunk1)
print(video_chunk2)
print(recall_chunk2)
print(video_chunk3)
print(recall_chunk3)
# ## Get video & recall topic weights for the video's highest-weighted topic at each timepoint
# +
video_tpts = [video_tpt1, video_tpt2, video_tpt3]
recall_tpts = [recall_tpt1, recall_tpt2, recall_tpt3]
topics = [video_model[tpt].argmax() for tpt in video_tpts]
# +
df = pd.DataFrame(index=range(18),
columns=['Time', 'Topic', 'Model', 'Topic weight'])
row_ix = 0
for vid_tpt, rec_tpt in zip(video_tpts, recall_tpts):
tr_tpt = f'TR {vid_tpt}'
for topic in topics:
topic_weight_vid = video_model[vid_tpt, topic]
topic_weight_rec = p17_recall_model[rec_tpt, topic]
df.loc[row_ix] = [tr_tpt, str(topic), 'Video', topic_weight_vid]
df.loc[row_ix + 1] = [tr_tpt, str(topic), 'Recall', topic_weight_rec]
row_ix += 2
# -
df
# ## Plot result
# +
g = sns.catplot(x='Topic weight',
y='Topic',
data=df,
hue='Model',
col='Time',
kind='bar',
orient='h',
aspect=1.2,
order=['9', '65', '68'],
hue_order=['Video', 'Recall'],
palette=palette,
sharey=False)
g.fig.legends[0].set_title('')
for ax in g.axes[0]:
ax.set_xlim(0,1)
ax.set_xticklabels(['0', '', '', '', '', '1'])
# plt.savefig('/mnt/paper/figs/tmp/schematic_topic_weights.pdf')
# -
# ## Get words with highest weights in each topic
# +
topic_words = get_topic_words(cv, lda, topics=topics, n_words=10)
multicol_display(*map('\n'.join, topic_words.values()),
ncols=3,
col_headers=(f'Topic {t}' for t in topic_words.keys()),
table_css={'width': '50%'},
cell_css={'line-height': '2.5em'})
# File: code/notebooks/main/schematic.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
a = ['aa','bb','cc']
q = 0
for i in a:
instr = f'def {i}():print({q})'
q += 1
instr2 = f'{i}()'
exec(instr)
exec(instr2)
aa()
bb()
import matplotlib.pyplot as plt
import numpy as np
import nltk
import pandas as pd
import mytoolkit as mt
# +
pp1 =mt.pandas_read_sec_csv('/Users/wangmu/Documents/Science/mG1/数据/sec/20190624/6D-11631.CSV')
# -
pp1[(pp1.time > 2.5) & (pp1.time < 5.0)]
# +
x = 10
expr = """
z = 30
sum = x + y + z
print(sum)
"""
def func():
y = 20
exec(expr)
exec(expr, {'x': 1, 'y': 2})
exec(expr, {'x': 1, 'y': 2}, {'y': 3, 'z': 4})
func()
# -
import mytoolkit as mt
import matplotlib.pyplot as plt
pp1 =mt.pandas_read_sec_csv('/Users/wangmu/Documents/Science/mG1/数据/sec/20190624/6D-11631.CSV')
pos = pp1.where((pp1.time<5.0)&(pp1.time>0.0))
type((pp1.time<1.0)&(pp1.time>0.5))
mt.normalize_peak2zero(pos)
plt.plot(pos.time,pos.peak)
pos = mt.pick_x_region
pos.peak.max()
# File: playground.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quantum jump duration estimation from direct deconvolution of signal
# +
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy
from uncertainties import ufloat
# %matplotlib inline
# -
# ## Signal import and frequency analysis
# +
data_file = 'data/raw_data/selected_g2/20170529_FWMg2_MPD_MPDGated_4GHzOsci_4MHzBlueDetuned.dat'
# import oscilloscope data into time and counts vectors. cut off the edges
Dt, counts = np.flipud(np.genfromtxt(data_file, skip_header=5, delimiter=',')).T
Dt_step = np.abs(np.mean(np.diff(Dt)))
# usable_range = range(len(Dt))
usable_range = range(543, 4994)
counts = counts[usable_range]
Dt = -Dt[usable_range]
counts_err = np.sqrt(counts)
counts_u = unumpy.uarray(counts, counts_err)
print('Time resolution: {:.3e}'.format(Dt_step))
# -
plt.figure('FWM data, time resolution: {:.3g} ns'.format(np.mean(np.diff(Dt))* 1e9))
plt.plot(Dt*1e9, counts, '.-')
plt.xlabel(r'$\Delta t$ (ns)')
plt.ylabel('counts');
# ## Detector transfer function
from lmfit import Model
from lmfit import Parameters
from lmfit.models import ConstantModel
from lmfit.models import ExponentialModel
from lmfit.models import GaussianModel
from lmfit.models import LorentzianModel
# + run_control={"marked": false}
mpd_datafile = 'data/raw_data/MPD_characterization/F1pmp1actrigger.txt'
Dt_mpd, counts_mpd = np.genfromtxt(mpd_datafile, delimiter=',', skip_header=5).T
Dt_mpd_step = np.mean(np.diff(Dt_mpd))
# drop zero bins and center the MPD response function
peak_idx = np.argmax(counts_mpd)
mpd_center = counts_mpd[4:peak_idx + (peak_idx - 5)]
mpd_t = (np.arange(len(mpd_center)) - len(mpd_center)//2 - 1) * Dt_mpd_step
# poissonian error
mpd_error = np.sqrt(mpd_center)
# sets the floor range where accidental counts are the main component
flat_range = list(range(500)) + list(range(len(mpd_t)-500, len(mpd_t)))
plt.figure()
plt.errorbar(mpd_t * 1e9, mpd_center, yerr=mpd_error, fmt='.')
plt.xlabel(r'$\Delta t$ (ns)');
plt.yscale('log')
# -
# ### Baseline removal
# +
# Model of the detector response
response_model = (ConstantModel(prefix='offset_') +
ExponentialModel(prefix='a_') +
ExponentialModel(prefix='b_') +
GaussianModel(prefix='c_')
)
p_peak = response_model.make_params()
p_peak['offset_c'].set(value=min(mpd_center))
p_peak['a_decay'].set(value=1)
p_peak['a_amplitude'].set(value=1e2)
p_peak['b_decay'].set(value=1)
p_peak['b_amplitude'].set(value=1e2)
p_peak['c_amplitude'].set(value=1e3)
p_peak['c_sigma'].set(value=.1)
p_peak['c_center'].set(value=0, vary=1)
mpd_result = response_model.fit(mpd_center,
x=np.abs(mpd_t * 1e9),
params=p_peak,
weights=1 / mpd_error
)
print(mpd_result.fit_report())
comps = mpd_result.eval_components()
plt.figure()
plt.errorbar(mpd_t * 1e9, mpd_center, yerr=mpd_error, alpha=.3, fmt='.')
plt.plot(mpd_t * 1e9, mpd_result.best_fit);
plt.plot(mpd_t * 1e9, comps['c_']);
plt.plot(mpd_t * 1e9, comps['a_']);
plt.plot(mpd_t * 1e9, comps['b_']);
plt.ylim(min(mpd_center))
plt.yscale('log')
# +
# Defining the normalized response function, including errors
mpd_counts_u = unumpy.uarray(mpd_center, mpd_error) - ufloat(mpd_result.params['offset_c'].value,
mpd_result.params['offset_c'].stderr)
# normalization
norm_u = np.sum(mpd_counts_u)
mpd_u = mpd_counts_u / norm_u
mpd_error = unumpy.std_devs(mpd_u)
mpd = unumpy.nominal_values(mpd_u)
# -
# ### Match length of signal and detector response
# match length of signal and detector response
l_signal = len(counts)
l_mpd = len(mpd)
extension = np.zeros(int((l_signal-l_mpd) / 2))
mpd_ex = np.concatenate((extension, mpd, extension))
mpd_error_ex = np.concatenate((extension, mpd_error, extension))
print('Signal length: {}, new response vector length: {}'.format(l_signal, len(mpd_ex)))
# ## Deconvolution
from numpy.fft import rfft
from numpy.fft import rfftfreq
from numpy.fft import irfft
# +
# Low pass exponential filter
def winn(x, cutoff):
retval = np.exp(-x / cutoff)
retval /= retval[0]
return retval
def deconvolution(signal, det_response, winn=None, normalize=False):
signal_fft = rfft(signal)
if normalize:
signal_fft /= signal_fft[0]
det_fft = rfft(det_response)
det_fft /= det_fft[0]
if winn is not None:
signal_fft = signal_fft * winn
return irfft(signal_fft / det_fft)
# -
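As a sanity check on this approach (a synthetic example, not the experimental data): convolution becomes multiplication in the frequency domain, so dividing the spectra undoes a known circular convolution exactly, provided the kernel's spectrum has no zeros.

```python
import numpy as np
from numpy.fft import rfft, irfft

n = 16
signal = np.zeros(n)
signal[5:9] = 1.0                # a small square pulse

kernel = np.zeros(n)
kernel[:3] = [0.7, 0.2, 0.1]     # toy normalized "detector response"

# circular convolution via the FFT
blurred = irfft(rfft(signal) * rfft(kernel), n)

# deconvolution: divide the spectra and transform back
recovered = irfft(rfft(blurred) / rfft(kernel), n)

print(np.allclose(recovered, signal))  # True
```

With real, noisy data the division amplifies noise at frequencies where the response is small, which is why the low-pass `winn` filter is applied to the signal spectrum before the division.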
# ### Bootstrapping
# +
import multiprocessing
try:
cpus = multiprocessing.cpu_count()
except NotImplementedError:
cpus = 1 # arbitrary default
print('Number of CPUs to use for multiprocessing: {}'.format(cpus))
# +
# Filter definition
cutoff = 400
winn_vec = winn(np.arange(np.ceil(l_signal/2)), cutoff)
# sampling using Poissonian statistics
samples = 1e3
counts_prop = np.array([np.random.poisson(k, int(samples)) for k in counts]).T
# applying the deconvolution for every sample
with multiprocessing.Pool(cpus) as pool:
def obj_f(x):
return deconvolution(x, mpd_ex, winn_vec)
dec_prop = pool.map(obj_f, counts_prop)
# stack and transpose so rows are timepoints and columns are bootstrap samples
dec_prop = np.array(dec_prop).T
# extract the resulting deconvoluted signal and corresponding error
tt = np.mean(dec_prop, 1)
tt_err = np.std(dec_prop, 1)
# fix for the rollover of the inverse fourier transform
tt = np.concatenate((tt[len(tt) // 2:], tt[:len(tt) // 2 + 1]))
tt_err = np.concatenate(
(tt_err[len(tt_err) // 2:], tt_err[:len(tt_err) // 2 + 1]))
# -
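The Poissonian bootstrap above can be illustrated on a much simpler statistic (synthetic counts, invented for the example): resample each bin from a Poisson distribution with the observed count as its mean, recompute the statistic per sample, and take the spread across samples as its error.

```python
import numpy as np

rng = np.random.default_rng(42)

counts = np.array([100.0, 400.0, 900.0])        # toy observed counts
samples = rng.poisson(counts, size=(5000, counts.size))

totals = samples.sum(axis=1)                    # statistic: total counts
est, err = totals.mean(), totals.std()

print(round(est))   # close to the true total of 1400
# Poisson errors add in quadrature: sqrt(100 + 400 + 900) ~ 37.4
print(np.isclose(err, np.sqrt(counts.sum()), rtol=0.05))  # True
```

The same logic applies above, except the per-sample "statistic" is the full deconvolved trace rather than a scalar.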
plt.figure(
'Comparison of time correlation before and after deconvolution with detectors response')
plt.errorbar(np.arange(len(tt)),
tt,
yerr=tt_err,
fmt='.',
alpha=.15,
label='After deconvolution')
plt.plot(counts, '-', label='Original data')
plt.xlabel(r'$\Delta t$ (ns)')
plt.ylabel('counts')
plt.legend();
# + [markdown] run_control={"marked": false}
# ## Fitting
# -
# ### Fit function
# + run_control={"marked": false}
def jump(x, alpha):
# rising edge
return 1 / (1 + np.exp(- x / alpha))
def rise_decay_f(x, alpha, t0, tau):
x = x - t0
retval = jump(x, alpha) * np.exp(-x / tau)
return retval / max(retval)
def rise_time(alpha):
# return 90/10 rise time from the edge
c = np.log(1.8/.2) / 2
return alpha * c * 4
# fit model from function
rise_decay_model = ConstantModel(prefix='amplitude_') * Model(rise_decay_f) + ConstantModel(prefix='offset_')
# -
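The `rise_time` helper encodes the fact that the logistic edge crosses the 10% level at $-\alpha\ln 9$ and the 90% level at $+\alpha\ln 9$, so the 90/10 rise time is $2\alpha\ln 9$. This can be confirmed numerically with a quick standalone check (the value of `alpha` here is assumed for illustration):

```python
import numpy as np

def jump(x, alpha):
    # rising edge, as defined above
    return 1 / (1 + np.exp(-x / alpha))

alpha = 0.05
x = np.linspace(-1, 1, 200_001)
y = jump(x, alpha)                      # monotonically increasing

# first crossings of the 10% and 90% levels
t10 = x[np.searchsorted(y, 0.1)]
t90 = x[np.searchsorted(y, 0.9)]

analytic = 2 * alpha * np.log(9)        # what rise_time(alpha) evaluates to
print(np.isclose(t90 - t10, analytic, atol=1e-3))  # True
```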
# ### Parameter choice and results
# +
p = Parameters()
p.add('alpha', .1)
p.add('t0', 14)
p.add('tau', 6)
p.add('offset_c', np.mean(tt[:200]), vary=1)
p.add('amplitude_c', max(tt))
fit_result = rise_decay_model.fit(tt, x=Dt*1e9, params=p, weights= 1 / tt_err)
print('90/10 rise time from fit: {:.2u} ns\n'.format(rise_time(ufloat(fit_result.params['alpha'].value,
fit_result.params['alpha'].stderr))))
print(fit_result.fit_report())
fit_result.plot(xlabel=r'$\Delta t$ (ns)',
ylabel='counts',
datafmt='.',
data_kws={'alpha':.3});
# -
# ## Save data to file
# `eval_uncertainty` propagates the fit-parameter uncertainties onto the best-fit curve
dely = fit_result.eval_uncertainty()
with open('data/processed_data/quantum-jump_deconvolution.dat', 'w') as f:
f.write('#Dt\tcounts\tcounts_err\t'
'deconv\tdeconv_err\t'
'fit\tfit_uncert\n')
[f.write(('{}\t'*6 + '{}\n').format(t, c, c_e, dc, dc_e, ff, ff_e))
for t, c, c_e, dc, dc_e, ff, ff_e
in zip(Dt*1e9, counts, counts_err, tt, tt_err, fit_result.best_fit, dely)]
# ## Analysis of the effect of the filter characteristic frequency
# The previous section was repeated for different values of `cutoff` to observe how the filter affects the estimated duration of the jump.
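The filter in question is the exponential low-pass `winn(x, cutoff) = exp(-x / cutoff)` defined earlier, evaluated over FFT bin index. A quick standalone check (toy bin range, values assumed) confirms the expected behavior: a larger cutoff attenuates high frequencies less at every bin.

```python
import numpy as np

def winn(x, cutoff):
    # exponential low-pass filter, normalized to 1 at the first bin
    retval = np.exp(-x / cutoff)
    return retval / retval[0]

k = np.arange(512)                  # FFT bin indices
soft, hard = winn(k, 700), winn(k, 200)

# the larger cutoff passes at least as much signal at every bin
print(np.all(soft >= hard))         # True
print(soft[400] > hard[400])        # True, e.g. exp(-400/700) vs exp(-400/200)
```

This is consistent with the trend tabulated below: as `cutoff` grows, more high-frequency content survives, the fitted edge sharpens (smaller `alpha`), and eventually the fit becomes noise-dominated.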
# +
cutoffs = [200,
300,
350,
400,
450,
500,
550,
600,
650,
700]
alphas = [0.03863482,
0.02259058,
0.01832522,
0.01528771,
0.01300014,
0.01117608,
0.00945013,
0.00821871,
0.00694958,
6.3628E-04]
alphas_err = [7.7571E-04,
5.9585E-04,
7.2571E-04,
0.00100046,
0.00133600,
0.00177284,
0.00209704,
0.00245997,
0.00286602,
0.55922264]
# -
# %matplotlib notebook
plt.figure()
plt.errorbar(cutoffs[:-1], alphas[:-1], yerr=alphas_err[:-1], fmt='o')
plt.xlabel('Filter characteristic cutoff')
plt.ylabel(r'$\alpha$ from fit (ns)');
plt.figure()
plt.plot(cutoffs[:-1], alphas_err[:-1], 'o')
plt.xlabel('Filter characteristic cutoff')
plt.ylabel(r'Absolute error of $\alpha$ (ns)');
plt.figure()
plt.plot(cutoffs[:-1], np.array(alphas_err[:-1]) / np.array(alphas[:-1]), 'o')
plt.xlabel('Filter characteristic cutoff')
plt.ylabel(r'Relative error of $\alpha$ (ns)');
# File: Quantum jump duration estimation from direct deconvolution of signal.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Preface
#
# The purpose of this text is to walk through image reduction and photometry using
# Python, especially Astropy and its affiliated packages. It assumes some basic
# familiarity with astronomical images and with Python. The inspiration for this
# work is a pair of guides written for IRAF, ["A User's Guide to CCD Reductions with IRAF" (Massey 1997)](http://www.ifa.hawaii.edu/~meech/a399/handouts/ccduser3.pdf) and
# ["A User's Guide to Stellar CCD Photometry with IRAF" (Massey and Davis 1992)](https://www.mn.uio.no/astro/english/services/it/help/visualization/iraf/daophot2.pdf).
#
# The focus is on optical/IR images, not spectra.
# ## Credits
# ### Authors
# This guide was written by <NAME> and <NAME>. Editing was done by
# <NAME>.
#
# New contributors will be moved from the acknowledgments to the author list when
# they have either written roughly the equivalent of one section or provided
# detailed review of several sections. This is intended as a rough guideline, and
# when in doubt we will lean towards including people as authors rather than
# excluding them.
# ### Funding
#
# Made possible by the Astropy Project and ScienceBetter Consulting through
# financial support from the Community Software Initiative at the Space Telescope
# Science Institute.
# ### Acknowledgments
#
# The following people contributed to this work by making suggestions, testing
# code, or providing feedback on drafts. We are grateful for their assistance!
#
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
# + <NAME>
#
# If you have provided feedback and are not listed above, we apologize -- please
# [open an issue here](https://github.com/astropy/ccd-reduction-and-photometry-guide/issues/new) so we can fix it.
# ## Resources
#
# This astronomical content work was inspired by, and guided by, the excellent
# resources below:
#
# + ["A User's Guide to CCD Reductions with IRAF" (Massey 1997)](http://www.ifa.hawaii.edu/~meech/a399/handouts/ccduser3.pdf) is very thorough, but IRAF has become more
# difficult to install over time and is no longer supported.
# + ["A User's Guide to Stellar CCD Photometry with IRAF" (Massey and Davis 1992)](https://www.mn.uio.no/astro/english/services/it/help/visualization/iraf/daophot2.pdf).
# + [The Handbook of Astronomical Image Processing](https://www.amazon.com/Handbook-Astronomical-Image-Processing/dp/0943396824) by <NAME> and <NAME>. This
# provides a very detailed overview of data reduction and photometry. One virtue
# is its inclusion of *real* images with defects.
# + The [AAVSO CCD Observing Manual](https://www.aavso.org/sites/default/files/publications_files/ccd_photometry_guide/CCDPhotometryGuide.pdf) provides a complete introduction to CCD data reduction and photometry.
# + [A Beginner's Guide to Working with Astronomical Data](https://arxiv.org/abs/1905.13189) is much broader than this guide. It
# includes an introduction to Python.
# ## Software setup
#
# The recommended way to get set up to use this guide is to use the
# [Anaconda Python distribution](https://www.anaconda.com/download/) (or the much smaller
# [miniconda installer](https://conda.io/miniconda.html)). Once you have that, you can install
# everything you need with:
#
# ```
# conda install -c astropy ccdproc photutils ipywidgets matplotlib
# ```
# ## Data files
#
# The list of the data files, and their approximate sizes, is below. You can
# either download them one by one, or use the download helper included with these
# notebooks.
#
# ### Use this in a terminal to download the data
#
# ```console
# $ python download_data.py
# ```
#
# ### Use this in a notebook cell to download the data
#
# ```python
# # %run download_data.py
# ```
#
# ### List of data files
#
# + [Combination of 100 bias images (26MB)](https://zenodo.org/record/3320113/files/combined_bias_100_images.fit.bz2?download=1) (DOI: https://doi.org/10.5281/zenodo.3320113)
# + [Single dark frame, exposure time 1,000 seconds (11MB)](https://zenodo.org/record/3312535/files/dark-test-0002d1000.fit.bz2?download=1) (DOI: https://doi.org/10.5281/zenodo.3312535)
# + [Combination of several dark frames, each 1,000 exposure time (52MB)](https://zenodo.org/record/4302262/files/combined_dark_exposure_1000.0.fit.bz2?download=1) (DOI: https://doi.org/10.5281/zenodo.4302262)
# + [Combination of several dark frames, each 300 sec (7MB)](https://zenodo.org/record/3332818/files/combined_dark_300.000.fits.bz2?download=1) (DOI: https://doi.org/10.5281/zenodo.3332818)
# + **"Example 1" in the reduction notebooks:** [Several images from the Palomar Large Format Camera, Chip 0 **(162MB)**](https://zenodo.org/record/3254683/files/example-cryo-LFC.tar.bz2?download=1)
# (DOI: https://doi.org/10.5281/zenodo.3254683)
# + **"Example 2" in the reduction notebooks:** [Several images from an Andor Aspen CG16M **(483MB)**](https://zenodo.org/record/3245296/files/example-thermo-electric.tar.bz2?download=1)
# (DOI: https://doi.org/10.5281/zenodo.3245296)
| notebooks/00-00-Preface.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# MIT License
#
# Copyright (c) 2021 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# +
import operator
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
# -
#glucose and output data
data = pd.read_csv('poly-Nonzero.csv')
data = data.sort_values(by='Insulin', ascending=True)
print(data.shape)
data.head()
# +
from sklearn.model_selection import train_test_split
splitRatio = 0.2
train , test = train_test_split(data,test_size = splitRatio,random_state = 123)
X_train = train[[x for x in train.columns if x not in ["Insulin"]]]
y_train = train[["Insulin"]]
#y_train = label_binarize(y_train, classes=[0,1,2])
X_test = test[[x for x in test.columns if x not in ["Insulin"]]]
y_test = test[["Insulin"]]
# -
y_train = y_train.to_numpy()
X_train = X_train.to_numpy()
y_test = y_test.to_numpy()
X_test = X_test.to_numpy()
# +
polynomial_features2= PolynomialFeatures(degree=2)
polynomial_features7= PolynomialFeatures(degree=7)
polynomial_features12= PolynomialFeatures(degree=12)
polynomial_features17= PolynomialFeatures(degree=17)
# +
x_poly2 = polynomial_features2.fit_transform(X_train)
t_poly2 = polynomial_features2.transform(X_test)
model = LinearRegression()
model.fit(x_poly2, y_train)  # fit on the training set, evaluate on the test set
y_poly_pred2 = model.predict(t_poly2)
x_poly7 = polynomial_features7.fit_transform(X_train)
t_poly7 = polynomial_features7.transform(X_test)
model = LinearRegression()
model.fit(x_poly7, y_train)
y_poly_pred7 = model.predict(t_poly7)
x_poly12 = polynomial_features12.fit_transform(X_train)
t_poly12 = polynomial_features12.transform(X_test)
model = LinearRegression()
model.fit(x_poly12, y_train)
y_poly_pred12 = model.predict(t_poly12)
x_poly17 = polynomial_features17.fit_transform(X_train)
t_poly17 = polynomial_features17.transform(X_test)
model = LinearRegression()
model.fit(x_poly17, y_train)
y_poly_pred17 = model.predict(t_poly17)
# -
mse = mean_squared_error(y_test,y_poly_pred2)
print(mse)
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test,y_poly_pred2)
# +
rmse = np.sqrt(mean_squared_error(y_test,y_poly_pred2))
r2 = r2_score(y_test,y_poly_pred2)
print(rmse)
print(r2)
plt.scatter(X_test, y_test, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X_test,y_poly_pred2), key=sort_axis)
X_test, y_poly_pred2 = zip(*sorted_zip)
plt.plot(X_test, y_poly_pred2, color='m')
plt.xlabel("Glucose")
plt.ylabel("Insulin")
plt.show()
# -
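The per-degree cells above repeat the same pattern: fit-transform on the training split, transform only on the test split, then evaluate. A dependency-free sketch of that train/test discipline, using a simple degree-1 least-squares fit on made-up numbers (not the notebook's CSV data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# illustrative split: the fit parameters come from the training data only
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]
test_x, test_y = [5, 6], [10.0, 12.1]
a, b = fit_line(train_x, train_y)              # fit on train
preds = [a + b * x for x in test_x]            # predict on held-out test points
mse = sum((p - y) ** 2 for p, y in zip(preds, test_y)) / len(test_y)
print(round(mse, 3))
```

Fitting on the test set instead (as the original cells did) makes the reported test error meaninglessly optimistic, which is why the corrected cells fit on `X_train`/`y_train`.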
""""
#fig, axes = plt.subplots(2,2)
# plot the 3 sets
plt.scatter(X_test, y_test, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip2 = sorted(zip(X_test,y_poly_pred2), key=sort_axis)
sorted_zip7 = sorted(zip(X_test,y_poly_pred7), key=sort_axis)
sorted_zip12 = sorted(zip(X_test,y_poly_pred12), key=sort_axis)
sorted_zip17 = sorted(zip(X_test,y_poly_pred17), key=sort_axis)
#sorted_zip18 = sorted(zip(X_test,y_poly_pred18), key=sort_axis)
X_test, y_poly_pred17 = zip(*sorted_zip7)
plt.plot(X_test, y_poly_pred17,label='7-degree')
plt.xlabel("Glucose")
plt.ylabel("Insulin")
plt.legend()
plt.show()
"""
"""
fig, axes = plt.subplots(2,2)
# plot the 3 sets
plt.plot(X_test, y_poly_pred2,label='2-degree')
plt.plot(X_test, y_poly_pred7,label='7-degree')
plt.plot(X_test, y_poly_pred12,label='12-degree')
plt.plot(X_test, y_poly_pred17,label='17-degree')
# one plot on each subplot
axes[0][0].scatter(x,y1)
axes[0][1].bar(x,y1)
axes[1][0].scatter(x,y2)
axes[1][1].plot(x,y2)
# you can set a legend for a single subplot
axes[1][1].legend(['plot 4'])
# call with no parameters
plt.legend()
plt.show()
"""
#new_df = pd.DataFrame([[141,44]])
new_df = pd.read_csv('poly-pred.csv',encoding='latin1')
new_df = new_df[["Glucose"]]
new_df.head()
new_d = new_df.to_numpy()
newdf_poly = polynomial_features7.fit_transform(new_d)
# `model` still holds the degree-17 fit, whose feature count does not match
# degree-7 features, so refit a degree-7 model on the training data first.
model7 = LinearRegression().fit(polynomial_features7.fit_transform(X_train), y_train)
# We predict the outcome
prediction = model7.predict(newdf_poly)
prediction = prediction.astype(int)
new_df['Insulin'] = prediction
new_df.head()
new_df.to_csv('pred-insulin-7.csv',index=False, header=True)
| PolynomialRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # San Diego Burrito Analytics: The best burritos (under construction)
#
# <NAME>
#
# 21 May 2016
#
# This notebook will determine the best and worst burritos across the different dimensions, such as:
# 1. What taco shop has the highest rated burritos?
# 2. What taco shop has the best California burrito?
# 3. What taco shop has the most optimal meat-to-filling ratio?
# # Default imports
# +
# %config InlineBackend.figure_format = 'retina'
# %matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
# -
# # Load data
filename="burrito_current.csv"
df = pd.read_csv(filename)
N = df.shape[0]
# # Find the best location for each dimension
# +
m_best = ['Volume','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall','Google','Yelp']
for m in m_best:
    print m
    print 'High,', df.Location[df[m].idxmax()], df[m][df[m].idxmax()]
    print 'Low,', df.Location[df[m].idxmin()], df[m][df[m].idxmin()]
| burrito/u/UNFINISHED_Burrito_Best.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="7um2l1WUk0nn"
# <p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
#
# <h1>What is colab minimal CI/CD?</h1>
#
# Colaboratory allows you to write and execute Python in your browser, with
# - Zero configuration required
# - Free access to GPUs
# - Easy sharing
#
# In this way, it's a perfect tool to test and share your work on an open-source Deep Learning framework. Why shouldn't we use it?
#
# Features:
# - clones the code from Github (you could select your branch of interest if you would like to)
# - installs the requirements (minimal requirements also supported)
# - installs Nvidia Apex if you would like to test it too
# - checks codestyle and docs
# - runs tests
# - checks framework integrations
# + [markdown] id="HVl8c7w9Ze3p"
# # Code
# + id="va2uPF3hC834" colab={"base_uri": "https://localhost:8080/"} outputId="092bb97b-9d7e-427c-b0b0-cccae3deb65a"
# ! git clone https://github.com/catalyst-team/catalyst
# + [markdown] id="_pgjY2_BZ1XP"
# ## Branch
# + id="7xFVVSq-VZL7"
# optional, branch
# ! export BRANCH="v2103-minimal-fix" && cd catalyst && git checkout -b $BRANCH origin/$BRANCH && git pull origin $BRANCH
# + [markdown] id="zdP0lUOkZhQF"
# # Requirements
# + id="l6w6albQZ9HY"
# optional, "minimal" requirements (otherwise they are "latest")
# ! python -c "req = open('./catalyst/requirements/requirements.txt').read().replace('>', '=') ; open('./catalyst/requirements/requirements.txt', 'w').write(req)"
# ! python -c "req = open('./catalyst/requirements/requirements-cv.txt').read().replace('>', '=') ; open('./catalyst/requirements/requirements-cv.txt', 'w').write(req)"
# ! python -c "req = open('./catalyst/requirements/requirements-ml.txt').read().replace('>', '=') ; open('./catalyst/requirements/requirements-ml.txt', 'w').write(req)"
# ! python -c "req = open('./catalyst/requirements/requirements-hydra.txt').read().replace('>', '=') ; open('./catalyst/requirements/requirements-hydra.txt', 'w').write(req)"
# ! python -c "req = open('./catalyst/requirements/requirements-optuna.txt').read().replace('>', '=') ; open('./catalyst/requirements/requirements-optuna.txt', 'w').write(req)"
# + id="pOEZDcoJC_Qp"
# {!} may require runtime restart
# ! pip install \
# -r ./catalyst/requirements/requirements.txt \
# -r ./catalyst/requirements/requirements-cv.txt \
# -r ./catalyst/requirements/requirements-dev.txt \
# -r ./catalyst/requirements/requirements-hydra.txt \
# -r ./catalyst/requirements/requirements-ml.txt \
# -r ./catalyst/requirements/requirements-mlflow.txt \
# -r ./catalyst/requirements/requirements-onnx-gpu.txt \
# -r ./catalyst/requirements/requirements-optuna.txt
# + [markdown] id="I1mxJv-zZlLH"
# ## Apex
# + id="B0VoFRIr5LYb"
# optional, nvidia apex
# ! git clone https://github.com/NVIDIA/apex
# ! cd apex && python setup.py install
# + [markdown] id="Bk3YHg_nZoGo"
# # Codestyle
# + id="otIYa0HrDE0l"
# ! cd catalyst && catalyst-make-codestyle && catalyst-check-codestyle > codestyle.txt
# + id="bAJmVx__DFYC"
# ! cat ./catalyst/codestyle.txt
# + [markdown] id="pX8dxJCCZr9p"
# # Docs
# + id="_GWVG-IU4O69"
# ! cd catalyst && make check-docs
# + [markdown] id="1QMCOj0yZt2h"
# # Tests
# + id="lwSvMPYhGgMn"
# ! cd catalyst && pytest .
# + [markdown] id="uVJ_azQua5hq"
# # Integrations
# + id="JgM1OUaBWNkg"
# ! cd catalyst && pip install -e .
# + id="SVHLLBg7fN6w"
# ! cd catalyst && bash bin/workflows/check_projector.sh
# + id="yKDA4WqLe-OG"
# ! cd catalyst && bash bin/workflows/check_settings.sh
# + [markdown] id="wrKbPBunmXH6"
# ---
# + [markdown] id="d7L-_rYemY3o"
# # Extra
# + id="mEzzyFWOh0PL"
| examples/notebooks/colab_ci_cd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''rdkit'': conda)'
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import sys
import os
# path = "C:/Users/meide/Documents/GitHub/Master/data"
path = "data/"
os.chdir(path)
# -
print(os.getcwd())
# +
# df = pd.read_csv("https://raw.githubusercontent.com/meidelien/Metabolic-network-layout-using-biochemical-coordinates/main/Notebooks/Chemical_descriptors1515.csv", index_col = 0)
df = pd.read_csv("PyvisData.csv")
df2 = pd.read_csv("Calc_results.csv")
# +
#df = df.drop(columns = [" Line 1: ","reference", "formula", "InChIKey", "SMILES", "InChI", "description" ])
df2 = df2.loc[:,((df2 !=0).sum() >df2.shape[0]*0.9 )]
df = df.drop(columns = ["Remove", "Unnamed: 0"])
# -
df.to_csv("PyvisDataNoIndex2.csv", index = False)
df.to_csv("Cleaned1515PrePCA.csv")
df2.to_csv("Calc_results_pruned.csv", index = False)
df3 = pd.read_csv("ProperModel.csv")
| Notebooks/CleanUp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Social Web - Twitter
#
# Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://dev.twitter.com/apps and create a sample application.
#
# Twitter examples from the python-twitter API [https://github.com/ideoforms/python-twitter-examples](https://github.com/ideoforms/python-twitter-examples)
# ## Authorizing an application to access Twitter account data
# +
import twitter # pip install twitter
# Go to http://dev.twitter.com/apps/new to create an app and get values
# for these credentials, which you'll need to provide in place of these
# empty string values that are defined as placeholders.
# See https://dev.twitter.com/docs/auth/oauth for more information
# on Twitter's OAuth implementation.
CONSUMER_KEY = 'mcp7cJZQlbcZ7cVzNP13cQk2o'
CONSUMER_SECRET = '<KEY>'
OAUTH_TOKEN = '<KEY>'
OAUTH_TOKEN_SECRET = '<KEY>'
auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
CONSUMER_KEY, CONSUMER_SECRET)
twitter_api = twitter.Twitter(auth=auth)
# Nothing to see by displaying twitter_api except that it's now a
# defined variable
print(twitter_api)
# -
# ## Retrieving trends
#-----------------------------------------------------------------------
# retrieve global trends.
# other localised trends can be specified by looking up WOE IDs:
# http://developer.yahoo.com/geo/geoplanet/
# twitter API docs: https://dev.twitter.com/rest/reference/get/trends/place
#-----------------------------------------------------------------------
def twitter_trends(place, woe_id):
    results = twitter_api.trends.place(_id = woe_id)
    print ("%s Trends" % place)
    t = []
    for location in results:
        for trend in location["trends"]:
            print (" - %s" % trend["name"])
            t.append(trend["name"])
    return t
woe_ids={'UK':23424975,'US':23424977,'WORLD':1}
uk_trends=twitter_trends('UK',woe_ids['UK'])
us_trends=twitter_trends('US',woe_ids['US'])
world_trends=twitter_trends('WORLD',woe_ids['WORLD'])
def intersect(a, b):
    return list(set(a) & set(b))
common_trends = intersect(us_trends,uk_trends)
print(common_trends)
# ## Getting Tweets
# +
import json
q = '#DeepLearning'
n = 55
from urllib.parse import unquote
# See https://dev.twitter.com/rest/reference/get/search/tweets
search_results = twitter_api.search.tweets(q=q, count=n)
statuses = search_results['statuses']
for _ in range(5):
    try:
        next_results = search_results['search_metadata']['next_results']
    except KeyError as e:  # No more results when next_results doesn't exist
        break
    # Create a dictionary from next_results, which has the following form:
    # ?max_id=847960489447628799&q=%23RIPSelena&count=100&include_entities=1
    kwargs = dict([ kv.split('=') for kv in unquote(next_results[1:]).split("&") ])
    search_results = twitter_api.search.tweets(**kwargs)
    statuses += search_results['statuses']
print(json.dumps(statuses[0], indent=1))
# -
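The `kwargs` trick above splits the raw query string by hand; the standard library's `urllib.parse` does the same job more robustly, including percent-decoding. A sketch using the example string from the comment:

```python
from urllib.parse import parse_qsl

next_results = '?max_id=847960489447628799&q=%23RIPSelena&count=100&include_entities=1'
# drop the leading '?'; parse_qsl splits on '&'/'=' and decodes %-escapes
kwargs = dict(parse_qsl(next_results[1:]))
print(kwargs['q'], kwargs['max_id'])
```

Note that `parse_qsl` already applies the percent-decoding that the manual version delegates to `unquote`.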
for i in range(10):
    print()
    print(statuses[i]['text'])
    print('Favorites: ', statuses[i]['favorite_count'])
    print('Retweets: ', statuses[i]['retweet_count'])
# ## Twitter friends
# +
#-----------------------------------------------------------------------
# this is the user whose friends we will list
#-----------------------------------------------------------------------
username = "nikbearbrown"
#-----------------------------------------------------------------------
# perform a basic search
# twitter API docs: https://dev.twitter.com/rest/reference/get/friends/ids
#-----------------------------------------------------------------------
query = twitter_api.friends.ids(screen_name = username)
#-----------------------------------------------------------------------
# tell the user how many friends we've found.
# note that the twitter API will NOT immediately give us any more
# information about friends except their numeric IDs...
#-----------------------------------------------------------------------
print ("found %d friends" % (len(query["ids"])))
# -
#-----------------------------------------------------------------------
# now we loop through them to pull out more info, in blocks of 100.
#-----------------------------------------------------------------------
for n in range(0, len(query["ids"]), 100):
    ids = query["ids"][n:n+100]
    #-----------------------------------------------------------------------
    # create a subquery, looking up information about these users
    # twitter API docs: https://dev.twitter.com/rest/reference/get/users/lookup
    #-----------------------------------------------------------------------
    subquery = twitter_api.users.lookup(user_id = ids)
    for user in subquery:
        #-----------------------------------------------------------------------
        # now print out user info, starring any users that are Verified.
        #-----------------------------------------------------------------------
        print (" [%s] %s - %s" % ("*" if user["verified"] else " ", user["screen_name"], user["location"]))
# ## Twitter friendship
# +
source = "nikbearbrown"
target = "infoinaction"
#-----------------------------------------------------------------------
# perform the API query
# twitter API docs: https://dev.twitter.com/rest/reference/get/friendships/show
#-----------------------------------------------------------------------
result = twitter_api.friendships.show(source_screen_name = source,
target_screen_name = target)
#-----------------------------------------------------------------------
# extract the relevant properties
#-----------------------------------------------------------------------
following = result["relationship"]["target"]["following"]
follows = result["relationship"]["target"]["followed_by"]
print ("%s following %s: %s" % (source, target, follows))
print ("%s following %s: %s" % (target, source, following))
# -
# ## Twitter home timeline
# +
statuses = twitter_api.statuses.home_timeline(count = 50)
print (statuses)
#-----------------------------------------------------------------------
# loop through each of my statuses, and print its content
#-----------------------------------------------------------------------
for status in statuses:
print ("(%s) @%s %s" % (status["created_at"], status["user"]["screen_name"], status["text"]))
# -
# ## twitter list lists
import pprint
users = [ "nikbearbrown", "kylekuzma", "AliSingerYoga" ]
for user in users:
    print ("@%s" % (user))
    #-----------------------------------------------------------------------
    # ...retrieve all of the lists they own.
    # twitter API docs: https://dev.twitter.com/rest/reference/get/lists/list
    #-----------------------------------------------------------------------
    result = twitter_api.lists.list(screen_name = user)
    for lst in result:  # avoid shadowing the built-in `list`
        print (" - %s (%d members)" % (lst["name"], lst["member_count"]))
# ## Twitter list retweets
# +
user='nikbearbrown'
results = twitter_api.statuses.user_timeline(screen_name = user)
#-----------------------------------------------------------------------
# loop through each of my statuses, and print its content
#-----------------------------------------------------------------------
for status in results:
    print ("@%s %s" % (user, status["text"]))
    #-----------------------------------------------------------------------
    # do a new query: who has RT'd this tweet?
    #-----------------------------------------------------------------------
    retweets = twitter_api.statuses.retweets._id(_id = status["id"])
    for retweet in retweets:
        print (" - retweeted by %s" % (retweet["user"]["screen_name"]))
# -
# ## Twitter search geo
# +
# Northeastern University 42.3398° N, 71.0892° W
latitude = 42.3398 # geographical centre of search
longitude = -71.0892 # geographical centre of search
max_range = 1 # search range in kilometres
num_results = 50 # minimum results to obtain
result_count = 0
last_id = None
while result_count < num_results:
    #-----------------------------------------------------------------------
    # perform a search based on latitude and longitude
    # twitter API docs: https://dev.twitter.com/rest/reference/get/search/tweets
    #-----------------------------------------------------------------------
    query = twitter_api.search.tweets(q = "", geocode = "%f,%f,%dkm" % (latitude, longitude, max_range), count = 100, max_id = last_id)
    for result in query["statuses"]:
        #-----------------------------------------------------------------------
        # only process a result if it has a geolocation
        #-----------------------------------------------------------------------
        if result["geo"]:
            user = result["user"]["screen_name"]
            text = result["text"].encode('ascii', 'replace').decode('ascii')
            latitude = result["geo"]["coordinates"][0]
            longitude = result["geo"]["coordinates"][1]
            #-----------------------------------------------------------------------
            # now write this row to our CSV file
            #-----------------------------------------------------------------------
            row = [ user, latitude, longitude, text ]
            print ("%s %s %s" % (row[0], row[1], row[2]))
            print ("%s" % (row[3]))
            result_count += 1
        last_id = result["id"]
    #-----------------------------------------------------------------------
    # let the user know where we're up to
    #-----------------------------------------------------------------------
    print ("got %d results" % result_count)
# -
# ## Extracting text, screen names, and hashtags from tweets
# +
status_texts = [ status['text']
                 for status in statuses ]
screen_names = [ user_mention['screen_name']
                 for status in statuses
                 for user_mention in status['entities']['user_mentions'] ]
hashtags = [ hashtag['text']
             for status in statuses
             for hashtag in status['entities']['hashtags'] ]
# Compute a collection of all words from all tweets
words = [ w
          for t in status_texts
          for w in t.split() ]
# Explore the first 5 items for each...
print(json.dumps(status_texts[0:5], indent=1))
print(json.dumps(screen_names[0:5], indent=1) )
print(json.dumps(hashtags[0:5], indent=1))
print(json.dumps(words[0:5], indent=1))
# -
# ## Frequency distribution from the words in tweets
# +
from collections import Counter
for item in [words, screen_names, hashtags]:
    c = Counter(item)
    print(c.most_common()[:10])  # top 10
    print()
# +
from prettytable import PrettyTable
for label, data in (('Word', words),
                    ('Screen Name', screen_names),
                    ('Hashtag', hashtags)):
    pt = PrettyTable(field_names=[label, 'Count'])
    c = Counter(data)
    [ pt.add_row(kv) for kv in c.most_common()[:10] ]
    pt.align[label], pt.align['Count'] = 'l', 'r'  # Set column alignment
    print(pt)
# -
# ## Calculating lexical diversity for tweets
# +
def lexical_diversity(tokens):
    return len(set(tokens))/len(tokens)
def average_words(statuses):
    total_words = sum([ len(s.split()) for s in statuses ])
    return total_words/len(statuses)
print(lexical_diversity(words))
print(lexical_diversity(screen_names))
print(lexical_diversity(hashtags))
print(average_words(status_texts))
# -
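A tiny worked example of the two helpers above, on made-up tweet texts rather than real data:

```python
def lexical_diversity(tokens):
    """Fraction of tokens that are distinct."""
    return len(set(tokens)) / len(tokens)

def average_words(statuses):
    """Mean number of whitespace-separated words per status."""
    total_words = sum(len(s.split()) for s in statuses)
    return total_words / len(statuses)

sample = ["deep learning is fun", "learning is fun fun"]
tokens = [w for s in sample for w in s.split()]
print(lexical_diversity(tokens))  # 4 unique words / 8 tokens = 0.5
print(average_words(sample))      # 4 words per status = 4.0
```

A diversity near 1.0 means almost every token is unique; repeated hashtags and retweet boilerplate push the value down.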
# ## Most popular retweets
# +
retweets = [
    # Store out a tuple of these four values ...
    (status['retweet_count'],
     status['retweeted_status']['user']['screen_name'],
     status['retweeted_status']['id'],
     status['text'])
    # ... for each status ...
    for status in statuses
    # ... so long as the status meets this condition.
    if 'retweeted_status' in status.keys()
]
# Slice off the first 5 from the sorted results and display each item in the tuple
pt = PrettyTable(field_names=['Count', 'Screen Name', 'Tweet ID', 'Text'])
[ pt.add_row(row) for row in sorted(retweets, reverse=True)[:5] ]
pt.max_width['Text'] = 50
pt.align= 'l'
print(pt)
# -
# ## Users who have retweeted a status
# +
# Get the original tweet id for a tweet from its retweeted_status node
# and insert it here
_retweets = twitter_api.statuses.retweets(id=922562320080953344)
print([r['user']['screen_name'] for r in _retweets])
# -
# ## Sentiment Analysis
# +
# pip install nltk
import nltk
nltk.download('vader_lexicon')
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# -
twitter_stream = twitter.TwitterStream(auth=auth)
iterator = twitter_stream.statuses.sample()
tweets = []
for tweet in iterator:
    try:
        if tweet['lang'] == 'en':
            tweets.append(tweet)
    except KeyError:  # some stream messages lack a 'lang' field
        pass
    if len(tweets) == 100:
        break
analyzer = SentimentIntensityAnalyzer()
analyzer.polarity_scores('Hi')
analyzer.polarity_scores('Happy birthday to the loveliest lady. I really like like you.')
analyzer.polarity_scores('We messed up tonight... bad 🙃.')
# +
scores = np.zeros(len(tweets))
for i, t in enumerate(tweets):
    # Extract the text portion of the tweet
    text = t['text']
    # Measure the polarity of the tweet
    polarity = analyzer.polarity_scores(text)
    # Store the normalized, weighted composite score
    scores[i] = polarity['compound']
# -
most_positive = np.argmax(scores)
most_negative = np.argmin(scores)
print('{0:6.3f} : "{1}"'.format(scores[most_positive], tweets[most_positive]['text']))
print('{0:6.3f} : "{1}"'.format(scores[most_negative], tweets[most_negative]['text']))
| Week_7/NBB_Social_Web_Twitter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/usm.jpg" width="480" height="240" align="left"/>
# # MAT281 - Lab N°06
#
# ## Class objectives
#
# * Reinforce the basic concepts of E.D.A. (exploratory data analysis).
# ## Contents
#
# * [Problem 01](#p1)
#
# ## Problem 01
# <img src="./images/logo_iris.jpg" width="360" height="360" align="center"/>
# The **Iris dataset** contains samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured on each sample: the length and width of the sepal and petal, in centimeters.
#
# The first step is to load the dataset and look at its first rows:
# +
# librerias
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500) # show more dataframe columns
# Render matplotlib plots inline in jupyter notebook/lab
# %matplotlib inline
# +
# load data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
# -
# ### Experimental setup
#
# The first step is to identify the variables involved in the study and their nature.
#
# * **species**:
#     * Description: name of the Iris species.
#     * Data type: *string*.
#     * Constraints: only three valid species exist (setosa, virginica, and versicolor).
# * **sepalLength**:
#     * Description: sepal length.
#     * Data type: *float*.
#     * Constraints: valid values lie between 4.0 and 7.0 cm.
# * **sepalWidth**:
#     * Description: sepal width.
#     * Data type: *float*.
#     * Constraints: valid values lie between 2.0 and 4.5 cm.
# * **petalLength**:
#     * Description: petal length.
#     * Data type: *float*.
#     * Constraints: valid values lie between 1.0 and 7.0 cm.
# * **petalWidth**:
#     * Description: petal width.
#     * Data type: *float*.
#     * Constraints: valid values lie between 0.1 and 2.5 cm.
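The range constraints above can be expressed compactly in pandas with `Series.between`. This is a minimal sketch on made-up values (a hypothetical `toy` frame, not the lab's dataset):

```python
import pandas as pd

# Hypothetical measurements; the third row violates the sepalLength range
toy = pd.DataFrame({
    "sepalLength": [5.1, 3.0, 58.0],
    "sepalWidth": [3.5, 2.0, 4.0],
    "petalLength": [1.4, 0.5, 6.0],
    "petalWidth": [0.2, 1.0, 2.0],
})

# A row is valid only when every measurement falls inside its range
valid = (
    toy["sepalLength"].between(4.0, 7.0)
    & toy["sepalWidth"].between(2.0, 4.5)
    & toy["petalLength"].between(1.0, 7.0)
    & toy["petalWidth"].between(0.1, 2.5)
)
print(valid.tolist())  # [True, False, False]
```

The same boolean mask can later be stored as a column or used to filter the frame.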
# Your goal is to carry out a proper **E.D.A.**; to do so, follow these instructions:
# 1. Count the elements of the **species** column and clean it up as you see fit. Replace nan values with "default".
df['species'].unique() # Inspect the unique values in the species column
df.species = df.species.str.lower().str.strip() # Normalize the species column to lowercase without surrounding whitespace
df.loc[df['species'].isnull(),'species'] = 'default' # Replace NaN values with 'default'
df['species'].unique() # Inspect the unique values again
# 2. Draw a box plot of the petal and sepal lengths and widths. Replace nan values with **0**.
# +
df.loc[df['sepalLength'].isnull(),'sepalLength'] = 0 # Replace NaN values with a numeric 0 (not the string '0')
df.loc[df['sepalWidth'].isnull(),'sepalWidth'] = 0 # Replace NaN values with 0
df.loc[df['petalLength'].isnull(),'petalLength'] = 0 # Replace NaN values with 0
df.loc[df['petalWidth'].isnull(),'petalWidth'] = 0 # Replace NaN values with 0
stats_df = df.drop(['species'], axis=1) # Drop the non-numeric column before plotting
# New boxplot using stats_df
sns.boxplot(data=stats_df) # Box plot with Seaborn
# -
# 3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which rows fall outside the valid ranges.
# +
df['sepalLength'] = df['sepalLength'].astype(float) # Cast the column to float
df['sepalWidth'] = df['sepalWidth'].astype(float) # Cast the column to float
df['petalLength'] = df['petalLength'].astype(float) # Cast the column to float
df['petalWidth'] = df['petalWidth'].astype(float) # Cast the column to float
mask_df_sL = (df['sepalLength'] >= 4) & (df['sepalLength'] <= 7) # Valid sepal lengths: [4.0, 7.0]
mask_df_sW = (df['sepalWidth'] >= 2) & (df['sepalWidth'] <= 4.5) # Valid sepal widths: [2.0, 4.5]
mask_df_pL = (df['petalLength'] >= 1) & (df['petalLength'] <= 7) # Valid petal lengths: [1.0, 7.0]
mask_df_pW = (df['petalWidth'] >= 0.1) & (df['petalWidth'] <= 2.5) # Valid petal widths: [0.1, 2.5]
df['label'] = mask_df_sL & mask_df_sW & mask_df_pL & mask_df_pW # True when every measurement is in range
df_filtrado = df[df['label']] # Keep only the rows whose measurements are all valid
df_filtrado.head()
# -
# 4. Plot *sepalLength* vs *petalLength* and *sepalWidth* vs *petalWidth*, categorized by the **label** column. Draw conclusions from your results.
# +
palette = sns.color_palette("hls", 6)
sns.lineplot( # Line plot with Seaborn
    x='sepalLength', # sepal length on the x axis
    y='petalLength', # petal length on the y axis
    data=df_filtrado, # Use the filtered data
    ci = None,
    palette=palette
)
# +
palette = sns.color_palette("hls", 6)
sns.lineplot( # Line plot with Seaborn
    x='sepalWidth', # sepal width on the x axis
    y='petalWidth', # petal width on the y axis
    data=df_filtrado, # Use the filtered data
    ci = None,
    palette=palette
)
# -
# 5. Filter the valid data and plot *sepalLength* vs *petalLength* categorized by the **species** column.
# +
df_set = df['species'] == 'setosa'
df_vir = df['species'] == 'virginica'
df_col = df['species'] == 'versicolor'
df_def = df['species'] == 'default'
sns.lineplot( # Line plot with Seaborn
    x='sepalLength', # sepal length on the x axis
    y='petalLength', # petal length on the y axis
    data=df[df_set], # setosa samples
    ci = None,
    palette=sns.color_palette("hls", 1)
)
sns.lineplot(
    x='sepalLength',
    y='petalLength',
    data=df[df_vir], # virginica samples
    ci = None,
    palette=sns.color_palette("hls", 2)
)
sns.lineplot(
    x='sepalLength',
    y='petalLength',
    data=df[df_col], # versicolor samples
    ci = None,
    palette=sns.color_palette("hls", 3)
)
sns.lineplot(
    x='sepalLength',
    y='petalLength',
    data=df[df_def], # rows labeled default
    ci = None,
    palette=sns.color_palette("hls", 4)
)
# -
| labs/06_eda/laboratorio_06_desarrollo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # Ordinary Differential Equations Exercise 1
# + [markdown] nbgrader={}
# ## Imports
# + nbgrader={}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from ipywidgets import interact, fixed  # IPython.html.widgets was moved to the ipywidgets package
# + [markdown] nbgrader={}
# ## Euler's method
# + [markdown] nbgrader={}
# [Euler's method](http://en.wikipedia.org/wiki/Euler_method) is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
#
# $$ \frac{dy}{dx} = f(y(x), x) $$
#
# with the initial condition:
#
# $$ y(x_0)=y_0 $$
#
# Euler's method performs updates using the equations:
#
# $$ y_{n+1} = y_n + h f(y_n,x_n) $$
#
# $$ h = x_{n+1} - x_n $$
#
# Write a function `solve_euler` that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
# + nbgrader={"checksum": "970f9fafed818a7c2b3202d7c5f42f7f", "solution": true}
def solve_euler(derivs, y0, x):
"""Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
    # YOUR CODE HERE
    x = np.asarray(x, dtype=float)  # ensure float storage even for list/tuple input
    y = np.empty_like(x)
    y[0] = y0
    for n in range(0, len(x) - 1):
        h = x[n + 1] - x[n]  # step size (supports non-uniform grids)
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y
# + deletable=false nbgrader={"checksum": "dde39b8046d2099cf0618eb75d9d49a2", "grade": true, "grade_id": "odesex01a", "points": 2}
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
# + [markdown] nbgrader={}
# The [midpoint method](http://en.wikipedia.org/wiki/Midpoint_method) is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
#
# $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
#
# Write a function `solve_midpoint` that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
# + nbgrader={"checksum": "caba5256e19921e2282330d0b0b85337", "solution": true}
def solve_midpoint(derivs, y0, x):
"""Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
    # YOUR CODE HERE
    x = np.asarray(x, dtype=float)  # ensure float storage even for list/tuple input
    y = np.empty_like(x)
    y[0] = y0
    for n in range(0, len(x) - 1):
        h = x[n + 1] - x[n]  # step size (supports non-uniform grids)
        y[n + 1] = y[n] + h * derivs(y[n] + h / 2 * derivs(y[n], x[n]), x[n] + h / 2)
    return y
# + deletable=false nbgrader={"checksum": "f4e0baef0e112c92e614a6d4101b0045", "grade": true, "grade_id": "odesex01b", "points": 2}
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
# + [markdown] nbgrader={}
# You are now going to solve the following differential equation:
#
# $$
# \frac{dy}{dx} = x + 2y
# $$
#
# which has the analytical solution:
#
# $$
# y(x) = 0.25 e^{2x} - 0.5 x - 0.25
# $$
#
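Before coding it up, the closed form can be sanity-checked: differentiating gives $dy/dx = 0.5\,e^{2x} - 0.5$, which equals $x + 2y$ after substituting $y$. A quick numerical confirmation of that identity:

```python
import numpy as np

# Check that y = 0.25 e^{2x} - 0.5 x - 0.25 satisfies dy/dx = x + 2y,
# using the hand-computed derivative dy/dx = 0.5 e^{2x} - 0.5
x = np.linspace(0.0, 1.0, 5)
y = 0.25 * np.exp(2 * x) - 0.5 * x - 0.25
dydx = 0.5 * np.exp(2 * x) - 0.5
print(np.allclose(dydx, x + 2 * y))  # True
```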
# First, write a `solve_exact` function that compute the exact solution and follows the specification described in the docstring:
# + nbgrader={"checksum": "8abaa12752f4606d727cbe599443dc6b", "grade": false, "grade_id": "", "points": 0, "solution": true}
def solve_exact(x):
"""compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
"""
    # YOUR CODE HERE
y = 0.25*np.exp(2*x) - 0.5*x - 0.25
return y
# + deletable=false nbgrader={"checksum": "1234041305bef6ff5b2f7daf4ae33597", "grade": true, "grade_id": "odesex01c", "points": 2}
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
# + [markdown] nbgrader={}
# In the following cell you are going to solve the above ODE using four different algorithms:
#
# 1. Euler's method
# 2. Midpoint method
# 3. `odeint`
# 4. Exact
#
# Here are the details:
#
# * Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
# * Define the `derivs` function for the above differential equation.
# * Using the `solve_euler`, `solve_midpoint`, `odeint` and `solve_exact` functions to compute
# the solutions using the 4 approaches.
#
# Visualize the solutions on a single figure with two subplots:
#
# 1. Plot the $y(x)$ versus $x$ for each of the 4 approaches.
# 2. Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
#
# Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
#
# While your final plot will use $N=11$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# YOUR CODE HERE
x = np.linspace(0, 1.0, 11)
y0 = 0.0  # initial condition: y(0) = 0.25*e^0 - 0 - 0.25 = 0
def derivs(y, x):
    return x + 2*y
y_euler = solve_euler(derivs, y0, x)
y_mid = solve_midpoint(derivs, y0, x)
y_exact = solve_exact(x)
y_ode = odeint(derivs, y0, x).ravel()
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 8))
for y, lbl in [(y_euler, 'euler'), (y_mid, 'midpoint'), (y_ode, 'odeint'), (y_exact, 'exact')]:
    ax1.plot(x, y, label=lbl)
ax1.set_xlabel('$x$'); ax1.set_ylabel('$y(x)$'); ax1.set_title('Solutions of $dy/dx=x+2y$'); ax1.legend(loc='best')
for y, lbl in [(y_euler, 'euler'), (y_mid, 'midpoint'), (y_ode, 'odeint')]:
    ax2.plot(x, np.abs(y - y_exact), label=lbl)
ax2.set_xlabel('$x$'); ax2.set_ylabel('$|y(x)-y_{exact}(x)|$'); ax2.set_title('Absolute error'); ax2.legend(loc='best')
plt.tight_layout()
# + deletable=false nbgrader={"checksum": "7d29baed01ce53d19fe14792b77ab230", "grade": true, "grade_id": "odesex01d", "points": 4}
assert True # leave this for grading the plots
| assignments/assignment10/ODEsEx01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-Lrp-k9QLQS8"
# # **<NAME> - B19-02 - CT Assignment 2**
# + [markdown] id="MQ3fceFLKoH_"
# ## Task 1.1 Make the following systems stable, proposing appropriate control
#
#
# $$\dot x =
# \begin{pmatrix} 10 & 0 \\ -5 & 10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# 2 \\ 0
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 0 & -8 \\ 1 & 30
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# -2 \\ 1
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 2 & 2 \\ -6 & 10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# 0 \\ 5
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 5 & -5 \\ 6 & 15
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# -10 \\ 10
# \end{pmatrix}
# u
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="dTfbIivDzWAe" outputId="940c408b-258f-4898-a1ec-9db474bb0917"
import numpy as np
from scipy.signal import place_poles
A = [np.array([[10, 0], [-5, 10]]),
np.array([[0, -8], [1, 30]]),
np.array([[2, 2], [-6, 10]]),
np.array([[5, -5], [6, 15]]),
]
B = [
np.array([[2], [0]]),
np.array([[-2], [1]]),
np.array([[0], [5]]),
np.array([[-10], [10]]),
]
poles = np.array([-1, -2])
print("Appropriate control: u = -Kx")
for i in range(len(A)):
print(i + 1, ") ", sep="", end="")
place_obj = place_poles(A[i], B[i], poles)
K = place_obj.gain_matrix
print("K=", K.round(2))
# + [markdown] id="yP7jmU2jLSio"
# ## Task 1.2 Make the following systems stable, proposing appropriate control
#
# $$\dot x =
# \begin{pmatrix} 10 & 0 \\ -5 & 10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# 2 & 1 \\ 0 & -1
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 0 & -8 \\ 1 & 30
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# -2 & 1 \\ 1 & 1
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 2 & 2 \\ -6 & 10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# 0 & -1 \\ 5 & -1
# \end{pmatrix}
# u
# $$
#
#
# $$\dot x =
# \begin{pmatrix} 5 & -5 \\ 6 & 15
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# -10 & 3 \\ 10 & 3
# \end{pmatrix}
# u
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="hd1CH9ZozBAs" outputId="cdc3a016-37d1-45f1-81c3-b098da384c36"
import numpy as np
from scipy.signal import place_poles
A = [
np.array([[10, 0], [-5, 10]]),
np.array([[0, -8], [1, 30]]),
np.array([[2, 2], [-6, 10]]),
np.array([[5, -5], [6, 15]]),
]
B = [
np.array([[2, 1], [0, -1]]),
np.array([[-2, 1], [1, 1]]),
np.array([[0, -1], [5, -1]]),
np.array([[-10, 3], [10, 3]]),
]
poles = np.array([-1, -2])
print("Appropriate control: u = -Kx")
for i in range(len(A)):
print(i+1, ") ", sep="", end="")
place_obj = place_poles(A[i], B[i], poles)
K = place_obj.gain_matrix
print("K=\n", K.round(2), end="\n\n")
# + [markdown] id="9ihEn7Alay0P"
# ## Task 1.3 Give example of an unstable system that can't be stabilized...
#
# of the form $\dot x =
# Ax+Bu$, where $A \in \mathbb{R}^{2 \times 2}$
#
# * where $B \in \mathbb{R}^{2 \times 1}$
# * where $B \in \mathbb{R}^{2 \times 2}$
# * where $B \in \mathbb{R}^{2 \times 3}$
# + [markdown] id="UUS9tOBG1Gr4"
# If $B$ is a zero matrix and at least one eigenvalue of $A$ has a positive real part, the system is unstable and cannot be stabilized, because the input never affects the state. Examples:
#
# $$
# A=
# \begin{pmatrix} 1 & 0 \\ 1 & 1
# \end{pmatrix}
# ,B=
# \begin{pmatrix} 0 \\ 0
# \end{pmatrix}
# $$
#
# $$
# A=
# \begin{pmatrix} 1 & 0 \\ 1 & 1
# \end{pmatrix}
# ,B=
# \begin{pmatrix} 0 & 0 \\ 0 & 0
# \end{pmatrix}
# $$
#
# $$
# A=
# \begin{pmatrix} 1 & 0 \\ 1 & 1
# \end{pmatrix}
# ,B=
# \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0
# \end{pmatrix}
# $$
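The claim can be verified with the Kalman rank condition: a pair $(A, B)$ is controllable only when the controllability matrix $[B \;\; AB]$ has full rank. A short sketch for the first example above:

```python
import numpy as np

# Controllability (Kalman) rank test for the first example:
# the pair (A, B) is uncontrollable when rank([B, AB]) < 2
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[0.0], [0.0]])

ctrb = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(ctrb)
print(rank)  # 0: no direction of the state can be influenced by u
```

Since the rank is below the state dimension and $A$ has an unstable eigenvalue, no feedback can stabilize the system.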
# + [markdown] id="5xHhRpaCI6Lo"
# ## Task 2.1 Plot root locus
#
# * For a system with $A$ with imaginary eigenvalues
# * For a system with $A$ with real eigenvalues
# * For a system where real parts of eigenvalues of $(A - BK)$ are all positive
# * For a system where real parts of eigenvalues of $(A - BK)$ are all negative
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0USGTBy_2Prm" outputId="56102635-8eea-48b8-bb53-26e4dc4bd982"
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import eig
import control
from scipy import signal
A = [
np.array([[0, 1], [-4, 0]]), # A has imaginary eigenvalues
np.array([[1, 0], [1, 1]]), # A has real eigenvalues
np.array([[0, 1], [-1, 4]]), # A-BK has positive real parts of all eigenvalues
np.array([[1, -7], [2, -10]]), # A-BK has negative real part of all eigenvalues
]
B = [
np.array([[1], [1]]),
np.array([[1], [1]]),
np.array([[0], [1]]),
np.array([[1], [0]]),
]
C = np.array([1, 1])
D = np.array([0])
for i in range(len(A)):
num, den = signal.ss2tf(A[i], B[i], C, D)
lti = signal.lti(num, den)
K=np.linspace(0, 10, 1000)
rlist, klist = control.root_locus(lti, kvect=K, plot=False)
x = []
y = []
for z in rlist:
x.append(z.real)
y.append(z.imag)
plt.xlabel("Real")
plt.ylabel("Imaginary")
plt.plot(x, y)
plt.show()
# + [markdown] id="psIIY0rZL3uU"
# ## Task 3.1 Simulate one of the given systems with a step function as an input.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 264} id="xCTgSIBB9hwN" outputId="f4c52e27-1c06-4746-9236-7972451823f7"
import numpy as np
from scipy import signal
from control import *
import matplotlib.pyplot as plt
# Simulating the system from Lab2 code snippets
A = np.array([[1, -7], [2, -10]])
B = np.array([[1], [0]])
C = np.eye(2)
D = np.zeros((2, 1))
num, den = signal.ss2tf(A, B, C, D)
filt = signal.lti(num, den)
plt.plot(*filt.step())
plt.show()
# + [markdown] id="s6Fyh9GU9i2h"
# ## Task 3.2 Linear combination of solutions
#
# Simulate one of the given systems with two different step functions $f_1$, $f_2$ as an input, and as a sum of those $f_1+f_2$ as an input. Compare the sum of the solutions for the $f_1$, $f_2$ with the solution for $f_1+f_2$.
#
# $$ f_1 =
# \begin{cases}
# 1, \ \ \ t \geq t_1 \\
# 0, \ \ \ t < t_1
# \end{cases}
# $$
# $$ f_2 =
# \begin{cases}
# 1, \ \ \ t \geq t_2 \\
# 0, \ \ \ t < t_2
# \end{cases}
# $$
# + colab={"base_uri": "https://localhost:8080/", "height": 804} id="oaNHj4cv9mD-" outputId="1a96f82d-7be9-4aab-812e-2b83410f7d56"
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Simulating the system from Lab2 code snippets
t1 = 3
t2 = 7
def f1(t):
if t >= t1:
return 1
return 0
def f2(t):
if t >= t2:
return 1
return 0
def f1pf2(t):
if t < min(t1, t2):
return 0
elif t >= max(t1, t2):
return 2
return 1
A = np.array([[1, -7], [2, -10]])
B = np.array([[1], [0]])
C = np.eye(2)
D = np.zeros((2, 1))
t0 = 0 # Initial time
tf = 10 # Final time
T = np.linspace(t0, tf, 1000)
U1 = []
U2 = []
U3 = []
for elem in T:
U1.append(f1(elem))
U2.append(f2(elem))
U3.append(f1pf2(elem))
num, den = signal.ss2tf(A, B, C, D)
tout, yout, _ = signal.lsim((num, den), U1, T)
plt.plot(tout, yout)
plt.xlabel('time')
plt.ylabel('state response')
plt.show()
tout, yout, _ = signal.lsim((num, den), U2, T)
plt.plot(tout, yout)
plt.xlabel('time')
plt.ylabel('state response')
plt.show()
tout, yout, _ = signal.lsim((num, den), U3, T)
plt.plot(tout, yout)
plt.xlabel('time')
plt.ylabel('state response')
plt.show()
# + [markdown] id="wgKqFhcZLB4E"
# ## Task 4 Sinusoidal inputs
#
# Simulate one of the previously given systems with a sinusoidal input $u = \sin(\omega t)$.
# + colab={"base_uri": "https://localhost:8080/", "height": 278} id="n-yE_Hh6U0nC" outputId="879be2f0-d87f-41b0-bc47-20f3da171a9d"
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Simulating the system from Lab2 code snippets
w = 3
def u(t):
return np.sin(w * t)
A = np.array([[1, -7], [2, -10]])
B = np.array([[1], [0]])
C = np.array([1, 1])
D = np.zeros((1, 1))
t0 = 0 # Initial time
tf = 10 # Final time
T = np.linspace(t0, tf, 1000)
U = []
for elem in T:
U.append(u(elem))
num, den = signal.ss2tf(A, B, C, D)
tout, yout, _ = signal.lsim((num, den), U, T)
plt.plot(tout, yout)
plt.xlabel('time')
plt.ylabel('state response')
plt.show()
# + [markdown] id="We-TmMuugsEH"
# ## Task 4.1 Make frequency diagrams for 2 of the systems you studied in the tasks 1.1 and 1.2
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="xZ4wEvWA4YDr" outputId="6aa45ba9-8f44-4809-a66d-8bbc56f2b44b"
from scipy.signal import ss2tf
from scipy.signal import freqz
import numpy as np
import matplotlib.pyplot as plt
# First system from Task 1.1
A = np.array([[10, 0], [-5, 10]])
B = np.array([[1], [0]])
C = np.eye(2)
D = np.zeros((2, 1))
num, den = ss2tf(A, B, C, D)
w1, h1 = freqz(num[0, :], den)
w2, h2 = freqz(num[1, :], den)
plt.subplot(211)
plt.plot(w1, 20 * np.log10(abs(h1)), 'b')
plt.ylabel('Amplitude [dB]', color='b')
plt.xlabel('Frequency [rad/sample]')
plt.subplot(212)
plt.plot(w2, 20 * np.log10(abs(h2)), 'b')
plt.ylabel('Amplitude [dB]', color='b')
plt.xlabel('Frequency [rad/sample]')
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="aHLK5Fg55KvR" outputId="47f437fa-7a4e-4f68-baae-0711d8ed9796"
from scipy.signal import ss2tf
from scipy.signal import freqz
import numpy as np
import matplotlib.pyplot as plt
# First system from Task 1.2
A = np.array([[10, 0], [-5, 10]])
B = np.array([[2, 1], [0, -1]])
C = np.eye(2)
D = np.zeros((2, 2))
num, den = ss2tf(A, B, C, D)
w1, h1 = freqz(num[0, :], den)
w2, h2 = freqz(num[1, :], den)
plt.subplot(211)
plt.plot(w1, 20 * np.log10(abs(h1)), 'b')
plt.ylabel('Amplitude [dB]', color='b')
plt.xlabel('Frequency [rad/sample]')
plt.subplot(212)
plt.plot(w2, 20 * np.log10(abs(h2)), 'b')
plt.ylabel('Amplitude [dB]', color='b')
plt.xlabel('Frequency [rad/sample]')
# + [markdown] id="oV0YON4woFXh"
# ## Task 5.1 Design point-to-point control and simulate two systems:
#
# * where $B \in \mathbb{R}^{2 \times 1}$
# * where $B \in \mathbb{R}^{2 \times 2}$
# + [markdown] id="N0u543zX6vkf"
# Driving the system:
#
# $$\dot x =
# \begin{pmatrix} 10 & 5 \\ -5 & -10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# -1 \\ 2
# \end{pmatrix}
# u
# $$
#
# towards the point $x^* = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
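The feedforward term used in the code below comes from the equilibrium condition: at the target $x^*$ we need $\dot x = A x^* + B u^* = 0$, so $u^* = -\mathrm{pinv}(B)\,A\,x^*$. A standalone sketch confirms that for this particular system the residual vanishes exactly:

```python
import numpy as np

# At the target x*, steady state requires A x* + B u* = 0;
# with u* = -pinv(B) A x* the residual happens to be exactly zero here
A = np.array([[10.0, 5.0], [-5.0, -10.0]])
B = np.array([[-1.0], [2.0]])
x_star = np.array([0.0, 1.0])

u_star = -np.linalg.pinv(B) @ A @ x_star
residual = A @ x_star + B @ u_star
print(u_star, residual)
```

When $B$ is not square, the pseudoinverse gives a least-squares feedforward, so the residual is not guaranteed to vanish for an arbitrary target.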
# + colab={"base_uri": "https://localhost:8080/", "height": 278} id="TrhXQ9Es6nkw" outputId="008a3236-31ea-4aed-885f-0a772454b2e3"
import numpy as np
from numpy.linalg import pinv
from scipy.signal import place_poles
from scipy.integrate import odeint
import matplotlib.pyplot as plt
A = np.array([[10, 5], [-5, -10]])
B = np.array([[-1], [2]])
poles = np.array([-1, -2])
place_obj = place_poles(A, B, poles)
K = place_obj.gain_matrix
x_desired = np.array([0, 1])
u_desired = (-np.linalg.pinv(B).dot(A).dot(x_desired))[0]
def StateSpace(x, t):
u = -K.dot(x - x_desired) + u_desired
return A.dot(x) + B.dot(u)
time = np.linspace(0, 30, 30000)
x0 = np.random.rand(2) # initial state
solution = {"sol": odeint(StateSpace, x0, time)}
plt.plot(time, solution["sol"], linewidth=2)
plt.xlabel('time')
plt.ylabel('x(t)')
plt.show()
# + [markdown] id="YWNiHl1UYXYx"
# Driving the system:
#
# $$\dot x =
# \begin{pmatrix} 10 & 5 \\ -5 & -10
# \end{pmatrix}
# x
# # +
# \begin{pmatrix}
# 2 & 1 \\ 0 & -1
# \end{pmatrix}
# u
# $$
#
# towards the point $x^* = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="1O_e7eMwYsnA" outputId="344ca58e-8c7b-4e3f-81e4-3def1842b359"
import numpy as np
from numpy.linalg import pinv
from scipy.signal import place_poles
from scipy.integrate import odeint
import matplotlib.pyplot as plt
A = np.array([[10, 5], [-5, -10]])
B = np.array([[2, 1], [0, -1]])
poles = np.array([-1, -2])
place_obj = place_poles(A, B, poles)
K = place_obj.gain_matrix
x_desired = np.array([0, 1])
u_desired = (-np.linalg.pinv(B).dot(A).dot(x_desired))
def StateSpace(x, t):
u = -K.dot(x - x_desired) + u_desired
return A.dot(x) + B.dot(u)
time = np.linspace(0, 30, 30000)
x0 = np.random.rand(2) # initial state
solution = {"sol": odeint(StateSpace, x0, time)}
plt.plot(time, solution["sol"], linewidth=2)
plt.xlabel('time')
plt.ylabel('x(t)')
# + [markdown] id="4RW7jjzahiCg"
# ## Task 6.1
#
# Find which of the following systems are stable:
#
# $$x_{i+1} =
# \begin{pmatrix} 0.5 & 0.1 \\ -0.05 & 0.2
# \end{pmatrix}
# x_i
# $$
#
#
# $$x_{i+1} =
# \begin{pmatrix} 1 & -2 \\ 0 & 0.3
# \end{pmatrix}
# x_i
# $$
#
#
# $$x_{i+1} =
# \begin{pmatrix} -5 & 0 \\ -0.1 & 1
# \end{pmatrix}
# x_i
# # +
# \begin{pmatrix}
# 0 \\ 0.5
# \end{pmatrix}
# u_i, \ \ \
# u_i =
# \begin{pmatrix}
# 0 & 0.2
# \end{pmatrix}
# x_i
# $$
#
#
# $$x_{i+1} =
# \begin{pmatrix} -2.2 & -3 \\ 0 & 0.5
# \end{pmatrix}
# x_i
# # +
# \begin{pmatrix}
# -1 \\ 1
# \end{pmatrix}
# u_i, \ \ \
# u_i = 10
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="KfduTVhAShdu" outputId="3188b5e7-f368-46cb-fcad-afb841580f0f"
from numpy.linalg import eig
import numpy as np
def stable(eigenvalues):
    # A discrete-time system is stable when no eigenvalue lies outside
    # the unit circle (|eig| == 1 is only marginally stable)
    for e in eigenvalues:
        if abs(e) > 1:
            return False
    return True
As = [
np.array([[0.5, 0.1], [-0.05, 0.2]]),
np.array([[1, -2], [0, 0.3]]),
np.array([[-5, 0], [-0.1, 1]]),
np.array([[-2.2, -3], [0, 0.5]])
]
for i in range(len(As)):
e, _ = eig(As[i])
if stable(e):
print("System", i + 1, "is stable")
else:
print("System", i + 1, "is unstable")
# + [markdown] id="GrlzeUF4Sh4q"
#
# ## Task 6.2
#
# Propose control that makes the following systems stable:
#
# $$x_{i+1} =
# \begin{pmatrix} 1 & 1 \\ -0.4 & 0.1
# \end{pmatrix}
# x_i
# # +
# \begin{pmatrix}
# 0.5 \\ 0.5
# \end{pmatrix}
# u_i
# $$
#
#
# $$x_{i+1} =
# \begin{pmatrix} 0.8 & -0.3 \\ 0 & 0.15
# \end{pmatrix}
# x_i
# # +
# \begin{pmatrix}
# -1 \\ 1
# \end{pmatrix}
# u_i
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="U3a54ISxSjX1" outputId="5b2b9401-bf48-40b0-f38c-6a7268a24f07"
import numpy as np
from scipy.signal import place_poles
A = [np.array([[1, 1], [-0.4, 0.1]]),
np.array([[0.8, -0.3], [0, 0.15]]),
]
B = [
np.array([[0.5], [0.5]]),
np.array([[-1], [1]]),
]
# Closed-loop poles of a discrete-time system must lie strictly inside
# the unit circle (unlike the continuous case, where they must lie in
# the left half-plane)
poles = np.array([0.5, -0.5])
print("Appropriate control: u = -Kx")
for i in range(len(A)):
print(i + 1, ") ", sep="", end="")
place_obj = place_poles(A[i], B[i], poles)
K = place_obj.gain_matrix
print("K=", K.round(2))
# + [markdown] id="xRjR2hWCuT5v"
# ## Task 6.3 Design point-to-point control and simulate two discrete systems:
#
# * where $B \in \mathbb{R}^{2 \times 1}$
# * where $B \in \mathbb{R}^{2 \times 2}$
# + [markdown] id="r1lmLJEUZPUg"
# Simulating the two systems from the previous task, but in discrete time, with T = 1, using ZOH for plotting.
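A strictly discrete-time simulation would first discretize the plant with a zero-order hold and then iterate the difference equation $x_{k+1} = A_d x_k + B_d u_k$. A minimal sketch using `scipy.signal.cont2discrete` on the system above (open loop, zero input, purely illustrative):

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[10.0, 5.0], [-5.0, -10.0]])
B = np.array([[-1.0], [2.0]])
C = np.eye(2)
D = np.zeros((2, 1))

# Zero-order-hold discretization with sampling period T = 1
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=1.0, method='zoh')

# Iterate the discrete-time update x_{k+1} = Ad x_k + Bd u_k
x = np.array([1.0, 0.0])
traj = [x]
for k in range(5):
    u = np.array([0.0])  # zero input for illustration
    x = Ad @ x + Bd @ u
    traj.append(x)
traj = np.array(traj)
```

The cells below instead integrate the continuous dynamics on a coarse grid and plot the result with a step function, which approximates the ZOH picture.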
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="AFTGwaciZemy" outputId="935a0c98-127e-4ccb-f356-23744baa8e6e"
import numpy as np
from numpy.linalg import pinv
from scipy.signal import place_poles
from scipy.integrate import odeint
import matplotlib.pyplot as plt
A = np.array([[10, 5], [-5, -10]])
B = np.array([[-1], [2]])
poles = np.array([-1, -2])
place_obj = place_poles(A, B, poles)
K = place_obj.gain_matrix
x_desired = np.array([0, 1])
u_desired = (-np.linalg.pinv(B).dot(A).dot(x_desired))[0]
def StateSpace(x, t):
u = -K.dot(x - x_desired) + u_desired
return A.dot(x) + B.dot(u)
time = np.linspace(0, 30, 30)
x0 = np.random.rand(2) # initial state
solution = {"sol": odeint(StateSpace, x0, time)}
plt.step(time, solution["sol"], where='post')
plt.xlabel('time')
plt.ylabel('x(t)')
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="05S01yr9Zg8Y" outputId="173644ea-3b6f-45c4-daf2-e78ad07ea3a1"
import numpy as np
from numpy.linalg import pinv
from scipy.signal import place_poles
from scipy.integrate import odeint
import matplotlib.pyplot as plt
A = np.array([[10, 5], [-5, -10]])
B = np.array([[2, 1], [0, -1]])
poles = np.array([-1, -2])
place_obj = place_poles(A, B, poles)
K = place_obj.gain_matrix
x_desired = np.array([0, 1])
u_desired = (-np.linalg.pinv(B).dot(A).dot(x_desired))
def StateSpace(x, t):
u = -K.dot(x - x_desired) + u_desired
return A.dot(x) + B.dot(u)
time = np.linspace(0, 30, 30)
x0 = np.random.rand(2) # initial state
solution = {"sol": odeint(StateSpace, x0, time)}
plt.step(time, solution["sol"], where='post')
plt.xlabel('time')
plt.ylabel('x(t)')
# + [markdown] id="VfKv5ZZDxAnn"
# ## Task 7.1
#
# Choose one of the continuous and one of the discrete systems for which you designed control, and prove stability of the closed-loop version $(A - BK)$
# + colab={"base_uri": "https://localhost:8080/"} id="bVuTIqZYhs2C" outputId="fecb9ad3-2a8f-4159-c9e8-fc713b8cd996"
from scipy.linalg import solve_continuous_lyapunov
from scipy.linalg import solve_discrete_lyapunov
import numpy as np
from scipy.signal import place_poles
Q = np.array([[1, 0], [0, 1]])
A = np.array([[10, 5], [-5, -10]])
B = np.array([[-1], [2]])
poles = np.array([-2, -3])
place_obj = place_poles(A, B, poles)
K = place_obj.gain_matrix
# Continuous certificate: solve A_cl^T P + P A_cl = -Q. scipy's
# solve_continuous_lyapunov(a, q) solves a x + x a^H = q, so pass
# A_cl.T and -Q; stability holds iff the returned P is positive definite
P = solve_continuous_lyapunov((A - B.dot(K)).T, -Q)
print("P(continuous) =\n", P.round(2), end='\n\n')
# Discrete certificate: solve A_cl^T P A_cl - P = -Q. This is only
# meaningful for a discrete-time system whose eigenvalues lie inside
# the unit circle
P = solve_discrete_lyapunov((A - B.dot(K)).T, Q)
print("P(discrete) =\n", P.round(2))
| Coursework/Control Theory/Assignment 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: my-kernel
# language: python
# name: my-kernel
# ---
# + [markdown] tags=[]
# # DSSIM
#
# This notebook opens original and compressed NetCDF files at a given data path, computes the DSSIM on the compressed files for specified time steps, and stores the values in a CSV file in the lcr/data/ directory.
# + tags=[]
# Make sure you are using the cmip6-2019.10 kernel
# Add ldcpy root to system path (MODIFY FOR YOUR LDCPY CODE LOCATION)
import sys
sys.path.insert(0, '/glade/u/home/abaker/repos/ldcpy')
import ldcpy
# Display output of plots directly in Notebook
# %matplotlib inline
# Automatically reload module if it is edited
# %reload_ext autoreload
# %autoreload 2
# silence warnings
import warnings
warnings.filterwarnings("ignore")
import os
hdf_pp = os.environ["HDF5_PLUGIN_PATH"]
env_list = ['export HDF5_PLUGIN_PATH='+hdf_pp]
# + tags=[]
# start the dask scheduler
# for Cheyenne
from dask_jobqueue import PBSCluster
cluster = PBSCluster(
queue="regular",
walltime="02:00:00",
project="NIOW0001",
memory="109GB",
resource_spec="select=1:ncpus=9:mem=109GB",
cores=36,
processes=9,
env_extra=env_list
)
# scale as needed
cluster.adapt(minimum_jobs=1, maximum_jobs=30)
cluster
# -
import dask
dask.config.set({'distributed.dashboard.link':'https://jupyterhub.hpc.ucar.edu/{JUPYTERHUB_SERVICE_PREFIX}/proxy/{port}/status'})
# + tags=[]
from dask.distributed import Client
# Connect client to the remote dask workers
client = Client(cluster)
client
# +
import time
monthly_variables = ["CCN3", "CLOUD", "FLNS", "FLNT", "FSNS", "FSNT", "LHFLX",
"PRECC", "PRECL", "PS", "QFLX", "SHFLX", "TMQ", "TS", "U"]
daily_variables = ["FLUT", "LHFLX", "PRECT", "TAUX", "TS", "Z500"]
cols_monthly = {}
cols_daily = {}
sets = {}
levels = {}
data_path = "/glade/p/cisl/asap/CAM_lossy_test_data_31/"
for variable in daily_variables:
print(variable)
levels[variable] = [f"bg_2_{variable}",
f"bg_3_{variable}",
f"bg_4_{variable}", f"bg_5_{variable}",
f"bg_6_{variable}", f"bg_7_{variable}",]
sets[variable] = [f"{data_path}/orig/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_2/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_3/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_4/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_5/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_6/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/research/bg/bg_7/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc"]
cols_daily[variable] = ldcpy.open_datasets("cam-fv", [f"{variable}"], sets[variable], [f"orig_{variable}"] + levels[variable], chunks={})
for variable in monthly_variables:
print(variable)
levels[variable] = [f"bg_2_{variable}",
f"bg_3_{variable}",
f"bg_4_{variable}", f"bg_5_{variable}",
f"bg_6_{variable}", f"bg_7_{variable}",]
sets[variable] = [f"{data_path}/orig/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_2/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_3/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_4/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_5/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_6/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc",
f"{data_path}/research/bg/bg_7/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h0.{variable}.200601-201012.nc"]
cols_monthly[variable] = ldcpy.open_datasets("cam-fv", [f"{variable}"], sets[variable], [f"orig_{variable}"] + levels[variable], chunks={})
# -
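The seven hand-written path f-strings per variable above could be generated from a single template. A sketch (the helper name `build_sets` is illustrative, not part of ldcpy):

```python
# Build the orig + bg_2..bg_7 file list for one variable from a template,
# mirroring the hand-written lists above.
data_path = "/glade/p/cisl/asap/CAM_lossy_test_data_31"
template = "b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{var}.20060101-20071231.nc"

def build_sets(var):
    orig = f"{data_path}/orig/" + template.format(var=var)
    lossy = [f"{data_path}/research/bg/bg_{i}/" + template.format(var=var)
             for i in range(2, 8)]
    return [orig] + lossy

paths = build_sets("TS")
print(len(paths))  # 7
```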
cols_daily["TS"]
# +
cols_daily["TS"].TS.data.visualize()
# -
#This individual bit needs speeding up
# %time ldcpy.save_metrics(cols_daily["TS"], "TS", "orig_TS", "bg_2_TS", time=0, location="testfile.csv")
# %time ldcpy.save_metrics(cols_daily["TS"], "TS", "orig_TS", "bg_2_TS", time=1, location="testfile.csv")
# +
#This will take way too long
for variable in daily_variables:
if variable == "TAUX":
for time in range(160,cols_daily[variable].dims["time"]):
for i in ["bg_2", "bg_3", "bg_4", "bg_5", "bg_6", "bg_7"]:
ldcpy.save_metrics(cols_daily[variable], variable, f"orig_{variable}", f"{i}_{variable}", time=time, location="../data/dssims.csv")
else:
for time in range(0,cols_daily[variable].dims["time"]):
for i in ["bg_2", "bg_3", "bg_4", "bg_5", "bg_6", "bg_7"]:
ldcpy.save_metrics(cols_daily[variable], variable, f"orig_{variable}", f"{i}_{variable}", time=time, location="../data/dssims.csv")
# -
# +
#more variables
import time
daily_variables = ["bc_a1_SRF", "dst_a1_SRF", "dst_a3_SRF", "FLNS", "FLNSC",
"FLUT", "FSNS", "FSNSC", "FSNTOA", "ICEFRAC", "LHFLX", "pom_a1_SRF", "PRECL", "PRECSC",
"PRECSL", "PRECT", "PRECTMX", "PSL", "Q200", "Q500", "Q850", "QBOT", "SHFLX", "so4_a1_SRF",
"so4_a2_SRF", "so4_a3_SRF", "soa_a1_SRF", "soa_a2_SRF", "T010", "T200", "T500", "T850",
"TAUX", "TAUY", "TMQ", "TREFHT", "TREFHTMN", "TREFHTMX", "TS", "U010", "U200", "U500", "U850", "VBOT",
"WSPDSRFAV", "Z050", "Z500"]
cols_monthly = {}
cols_daily = {}
sets = {}
levels = {}
data_path = "/glade/p/cisl/asap/CAM_lossy_test_data_31/research"
for variable in daily_variables:
print(variable)
levels[variable] = [f"bg_2_{variable}",
f"bg_3_{variable}",
f"bg_4_{variable}", f"bg_5_{variable}",
f"bg_6_{variable}", f"bg_7_{variable}",]
sets[variable] = [f"{data_path}/daily_orig/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_2/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_3/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_4/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_5/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_6/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc",
f"{data_path}/daily_bg/bg_7/b.e11.BRCP85C5CNBDRD.f09_g16.031.cam.h1.{variable}.20060101-20071231.nc"]
cols_daily[variable] = ldcpy.open_datasets("cam-fv", [f"{variable}"], sets[variable], [f"orig_{variable}"] + levels[variable], chunks={"time":700})
# -
| notebooks/allison/DSSIM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fussing with the BiasFrame class [v1.4]
# +
# imports
import glob
import os
from importlib import reload
from pypit import biasframe
from pypit.par import pypitpar
# -
pypdev_path = os.getenv('PYPIT_DEV')
# +
## Additional settings
#kast_settings['reduce'] = {}
#kast_settings['reduce']['masters'] = {}
#kast_settings['reduce']['masters']['reuse'] = False
#kast_settings['reduce']['masters']['force'] = False
#kast_settings['reduce']['masters']['loaded'] = []
##
#kast_settings['run'] = {}
#kast_settings['run']['spectrograph'] = 'shane_kast_blue'
#kast_settings['run']['directory'] = {}
#kast_settings['run']['directory']['master'] = 'MF'
#setup = 'A_01_aa'
# Would set masters='reuse' for settings['reduce']['masters']['reuse'] = True or
# masters='force' for settings['reduce']['masters']['force'] = True.
# In the actual pypit run these would get set from a config file.
# Maybe should combine 'run' and 'reduce'?
runpar = pypitpar.RunPar(caldir='MF')
rdxpar = pypitpar.ReducePar(spectrograph='shane_kast_blue', masters=None, setup='A_01_aa')
# -
# ## Generate Bias Image
kast_blue_bias = glob.glob(os.path.join(os.getenv('PYPIT_DEV'), 'RAW_DATA',
'Shane_Kast_blue', '600_4310_d55', 'b1?.fits*'))
kast_blue_bias
# ### Instantiate
# Change one default just to make sure it propagates
biaspar = pypitpar.FrameGroupPar(frametype='bias',
combine=pypitpar.CombineFramesPar(cosmics=25.))
print(biaspar['combine']['cosmics'])
print(biaspar.default['combine']['cosmics'])
print(biaspar['useframe'])
#reload(biasframe)
root_path = os.path.join(os.getcwd(), runpar['caldir'])
bias_frame = biasframe.BiasFrame(rdxpar['spectrograph'], file_list=kast_blue_bias,
par=biaspar, setup=rdxpar['setup'], root_path=root_path,
mode=rdxpar['masters'])
bias_frame
bias_frame.directory_path
bias_frame.spectrograph.detector[0]['datasec']
print(bias_frame.combine_par['cosmics'])
print(bias_frame.par['combine']['cosmics'])
print(bias_frame.combine_par['cosmics'] is bias_frame.par['combine']['cosmics'])
# ### Process
bias_img = bias_frame.process()
bias_frame.steps
# +
#bias_frame.show('stack')
# (KBW) this was causing me problems...
# -
# ### Write
# (KBW) Are writing/loading used in this way anymore?
# +
#bias_frame.write_stack_to_fits('tmp.fits')
# -
# ### Load from disk
# +
#bias_frame2 = biasframe.BiasFrame.from_fits('tmp.fits')
#bias_frame2.stack.shape
# -
# ## Run (from scratch) as called from PYPIT
#
# ### Creates bias and saves as a MasterFrame to MF_shane_kast_blue/ which I needed to generate
# Instantiate
bias_frame2 = biasframe.BiasFrame(rdxpar['spectrograph'], file_list=kast_blue_bias,
par=biaspar, setup=rdxpar['setup'], root_path=root_path,
mode=rdxpar['masters'])
try:
    os.mkdir('MF_shane_kast_blue')
except FileExistsError:
    pass
bias = bias_frame2.build_image()
bias_frame2.steps
# Save
bias_frame2.save_master(bias)
# ## Load Master (several ways)
# ### master()
biaspar['useframe'] = 'bias'
bias_frame3 = biasframe.BiasFrame(rdxpar['spectrograph'], file_list=kast_blue_bias,
par=biaspar, setup=rdxpar['setup'], root_path=root_path,
mode='reuse')
bias3 = bias_frame3.master()
bias3.shape
# ### Direct load of all master frame stuff
bias4, _, _ = bias_frame3.load_master_frame()
bias4.shape
# ## Clean up
os.remove('MF_shane_kast_blue/MasterBias_A_01_aa.fits')
os.rmdir('MF_shane_kast_blue')
| doc/nb/BiasFrame.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="LmV3F04moy9i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fc4b1fbf-71a0-4ef9-f553-ca2971e9fd0b" executionInfo={"status": "ok", "timestamp": 1561536547309, "user_tz": -330, "elapsed": 1339, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11583358661043614280"}}
# cd
# + id="Kp3K7_slo3L3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e0683b30-80a0-4c96-b9fd-2d8b8ea5b3a2" executionInfo={"status": "ok", "timestamp": 1561536547310, "user_tz": -330, "elapsed": 1318, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11583358661043614280"}}
# cd /content/drive/My Drive/H
# + id="pJV_BIwlpJ00" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bf1f8c79-702a-4994-86c8-982b90a52471" executionInfo={"status": "ok", "timestamp": 1561536551002, "user_tz": -330, "elapsed": 4994, "user": {"displayName": "roxtar roxtar", "photoUrl": "", "userId": "11583358661043614280"}}
from keras.datasets import mnist
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.models import Model,Sequential
from tensorflow.keras.layers import Dense,Dropout,Activation,Conv2D,MaxPooling2D,Flatten,UpSampling2D,Conv2DTranspose,Input,BatchNormalization
import cv2
import matplotlib.pyplot as plt
from tqdm import tqdm
import pickle
# + id="OASbDUNzpN3W" colab_type="code" colab={}
import natsort
path="/content/drive/My Drive/H/haze"
training_data=[]
IMG_SIZE=512
k=natsort.natsorted(os.listdir(path))
for img in k:
img_array=cv2.imread(os.path.join(path,img))
imgq=cv2.cvtColor(img_array,cv2.COLOR_BGR2RGB)
new_array=cv2.resize(imgq,(IMG_SIZE,IMG_SIZE))
training_data.append(new_array)
X1=np.array(training_data).reshape(-1,IMG_SIZE,IMG_SIZE,3)
print(X1.shape)
with open('X1.pickle', 'wb') as f:
    pickle.dump(X1, f)
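A minimal sketch of the pickle round-trip used above, with a small stand-in list in place of the large image array X1:

```python
import os
import pickle
import tempfile

data = [[0, 1, 2], [3, 4, 5]]  # stand-in for the X1 image array
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "X1.pickle")
    # write the object to disk ...
    with open(path, "wb") as f:
        pickle.dump(data, f)
    # ... and load it back unchanged
    with open(path, "rb") as f:
        restored = pickle.load(f)
print(restored == data)  # True
```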
| Colab Notebooks-20190911T131716Z-001/Colab Notebooks/Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="BoCoaBw1Y-ZU" outputId="024976a9-9dba-4996-abe0-143d78de753a"
# !pip install tensorflow==2.0
#importing dependencies
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import math
# + id="CNa1Sr-aZM54"
#generating many sample datapoints
SAMPLES = 1000
# + id="mUnhImEgZye6"
#Setting a seed value, so we get the same random numbers each time we run this
SEED = 1337
np.random.seed(SEED)
tf.random.set_seed(SEED)
# + id="fdUnVybAZyoK"
#Generate uniformly distributed random numbers between 0 and 2*pi
#This covers the sine wave
x_values = np.random.uniform(low=0,high=2*math.pi, size=SAMPLES)
# + id="rSauDzSzivYW"
#shuffle the values to guarantee they are not in order.
#for deep learning this step is important to ensure the data
#being fed to the model is in random order
np.random.shuffle(x_values)
# + id="xsuN7nSlivkK"
#calculating the corresponding sine values
y_values = np.sin(x_values)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="TSmzH_uwkVry" outputId="d0c70044-6a24-4a61-f60c-ed7c5a847c0c"
#Plot our data. The 'b.' argument tells the library to print the blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="IlaQlwN3ivyB" outputId="f65136d0-da90-4c6e-f16a-0ef3cb0da2de"
#add random noise to each y value to simulate real-world data
y_values += 0.1 * np.random.randn(*y_values.shape)
#plotting the data
plt.plot(x_values, y_values, 'b.')
plt.show()
# + id="VtTHYC8YZywp"
#60% of the data is for training,
#20% for validation, and 20% for testing
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
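The split indices above can be sanity-checked without TensorFlow; `np.split(x, [TRAIN_SPLIT, TEST_SPLIT])` cuts at the same positions as plain slicing:

```python
# Verify the 60/20/20 split arithmetic with ordinary list slicing.
SAMPLES = 1000
TRAIN_SPLIT = int(0.6 * SAMPLES)               # 600
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)  # 800
values = list(range(SAMPLES))
train = values[:TRAIN_SPLIT]
validate = values[TRAIN_SPLIT:TEST_SPLIT]
test = values[TEST_SPLIT:]
print(len(train), len(validate), len(test))  # 600 200 200
```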
# + id="1btrmWGCpgHG"
#dividing the data into three chunks with np.split
x_train, x_validate, x_test = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train,y_validate, y_test = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="_he34MshpgaO" outputId="76f66214-f546-4d2b-d2ff-d66f47d430b6"
#double checking that the split adds up
assert(x_train.size + x_validate.size + x_test.size) == SAMPLES
#plot the data in each of the partitions using different colors
plt.plot(x_train, y_train, 'b.', label = "Train")
plt.plot(x_validate, y_validate, 'y.', label = "Validate")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.legend()
plt.show()
# + id="9YO0gNehpgju"
#using Keras to create a simple model architecture
from tensorflow.python import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Input, Dense
model_1 = keras.Sequential()
#first layer takes a scalar input and feeds it through 16 "neurons."
#each neuron decides whether to activate based on the 'relu' activation function
model_1.add(layers.Dense(16, activation = 'relu', input_shape =(1,)))
# + id="ft_pL92Syqw4"
#the final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# + id="n0yyrx4vyrB_"
#compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
# + colab={"base_uri": "https://localhost:8080/"} id="zRtiPnJRyrJd" outputId="87a69b95-771d-4e30-8b45-1d511a3f2faf"
#printing the summary of the model's architecture
model_1.summary()
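The parameter counts reported by `summary()` follow directly from the layer shapes. A quick sketch of the arithmetic for model_1 (Dense(16) on a scalar input, then Dense(1)):

```python
# A fully connected layer has one weight per input-output pair
# plus one bias per output neuron.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# model_1: scalar input -> Dense(16) -> Dense(1)
total = dense_params(1, 16) + dense_params(16, 1)
print(total)  # 49
```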
# + colab={"base_uri": "https://localhost:8080/"} id="Ye5U47CXyrMT" outputId="3081e416-017e-4c19-9976-b4ce0abd9e0f"
#training
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16, validation_data = (x_validate, y_validate))
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="BCdJx64iO_tt" outputId="06f13bf5-dcd9-477e-d171-eeb430d6bafb"
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss)+ 1 )
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label = 'validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="Dcjjb2vtO_7R" outputId="9648a22f-cc6f-4aa5-b893-fbfaeafa2c17"
#Adjusting the graph for better readability by skipping the first epochs
SKIP = 100
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label = 'Training Loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label = 'Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="iaNcQqwMPAHA" outputId="1e28b625-545f-4873-c8be-3d4a3337c04b"
#mean absolute error graph - amount of error
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label = 'Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label = 'Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="2rQVpfeLPAR0" outputId="d8870821-809c-44e7-fc69-aed410065d66"
predictions = model_1.predict(x_train)
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_train, y_train, 'b.', label = 'Actual')
plt.plot(x_train, predictions, 'r.', label = 'Predicted')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="DQRcKrj6PAeE" outputId="c16d6a4d-4a4d-49dd-ae80-ce004a0c430d"
#adding a second hidden layer of neurons
model_2 = tf.keras.Sequential()
model_2.add(layers.Dense(16, activation = 'relu', input_shape = (1,)))
model_2.add(layers.Dense(16, activation ='relu'))
model_2.add(layers.Dense(1))
#compiling
model_2.compile(optimizer = 'rmsprop', loss='mse', metrics=['mae'])
model_2.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="Hx9JjEitPAqd" outputId="851cd6b0-605c-46ba-e20c-fa0dfcc0f511"
history_2 = model_2.fit(x_train, y_train, epochs = 600, batch_size = 16, validation_data = (x_validate, y_validate))
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="D9IUyJfIdxpN" outputId="bec67453-ef2a-4114-fdd8-cc167553286c"
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label = 'Training Loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="7erk0Rr_dxhQ" outputId="ac215cf9-57ad-43bb-96e1-939a123ac3cc"
plt.clf()
#mean absolute error
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label ='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label = 'Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 337} id="i-bPcAc5v94S" outputId="207e335f-78b6-4d2e-d831-35441cef170d"
#plotting the adjusted training model
loss = model_2.evaluate(x_test, y_test)
predictions = model_2.predict(x_test)
plt.clf()
plt.title('Comparing predictions and actual values')
plt.plot(x_test, y_test, 'b.', label = 'Actual')
plt.plot(x_test, predictions, 'r.', label = 'Predicted')
plt.legend()
# + [markdown] id="8GGdofy9x0sl"
# It seems the neural network is now overfitting. But in this case it is not a deal breaker
#
#
# We will quantize the neural network and reduce its size to fit it at the edge (microcontroller)
#
# + colab={"base_uri": "https://localhost:8080/"} id="coXXOCLbv91k" outputId="3de412ff-66a0-4091-cefc-6bffabc4cc26"
#converting into TensorFlow Lite, no quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
open("sine_model.tflite", "wb").write(tflite_model)
# + colab={"base_uri": "https://localhost:8080/"} id="dS5x2o9zv9ye" outputId="50059217-1792-4c0d-df8f-b7f3b17a968d"
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
#optimizing
converter.optimizations = [tf.lite.Optimize.DEFAULT]
def representative_dataset_generator():
    for value in x_test:
        yield [np.array(value, dtype = np.float32, ndmin = 2)]
converter.representative_dataset = representative_dataset_generator
tflite_model = converter.convert()
open("sine_model_quantized.tflite", "wb").write(tflite_model)
# + [markdown] id="pCH0ApmY10z1"
# OOH! The original model was reduced in size! Yes!
#
# Now let's A/B test them!
# + id="IYwOBI4Hv9vd"
#instantiating an interpreter for each tflite model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
#memory allocation
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
#indexes
sine_model_input_index = sine_model.get_input_details()[0]["index"]
sine_model_output_index = sine_model.get_output_details()[0]["index"]
sine_model_quantized_input_index = sine_model_quantized.get_input_details()[0]["index"]
sine_model_quantized_output_index = sine_model_quantized.get_output_details()[0]["index"]
sine_model_predictions = []
sine_model_quantized_predictions = []
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="qgm0mZtZv9s4" outputId="4de8de30-5762-4ee3-a933-8c000ce16e73"
#running each model's interpreter and storing values in the lists
for x_value in x_test:
x_value_tensor = tf.convert_to_tensor([[x_value]], dtype = np.float32)
sine_model.set_tensor(sine_model_input_index, x_value_tensor)
sine_model.invoke()
sine_model_predictions.append(sine_model.get_tensor(sine_model_output_index)[0])
sine_model_quantized.set_tensor(sine_model_quantized_input_index, x_value_tensor)
sine_model_quantized.invoke()
sine_model_quantized_predictions.append(sine_model_quantized.get_tensor(sine_model_quantized_output_index)[0])
plt.clf()
plt.title('Comparing models vs actuals')
plt.plot(x_test, y_test, 'bo', label = 'Actual')
plt.plot(x_test, predictions, 'ro', label = 'Original predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label = 'lite quantized predictions')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="3xdXdIpDv9os" outputId="ea271576-45ae-4659-8830-069f6638658c"
#comparing quantized model
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
# + colab={"base_uri": "https://localhost:8080/"} id="-pl-YobXUjo5" outputId="fedcb29c-8e1b-4166-e35a-41062de5c253"
#added git checkout -b DSAIT-37-project-test-tensorflow
#using xxd to transform the model and save it as a C source file
# !apt-get -qq install xxd
# !xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# !cat sine_model_quantized.cc
# + id="8PnzzpRbUjgQ"
#include "tensorflow/lite/micro/examples/hello_world/sine_model_data.h"
#include "tensorflow/lite/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/testing/micro_test.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
# + id="7a3VSkvZUjZS"
# + id="voCdzfcnUjSb"
# + id="uX33kkd0UjIx"
# + id="oMAUwhbddxWi"
# + id="o4klXSIKPA2m"
| Neural_Network_Sine_Function(2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
people = {
"first": ["Corey", 'Jane', 'John'],
"last": ["Schafer", 'Doe', 'Doe'],
"email": ["<EMAIL>", '<EMAIL>', '<EMAIL>']
}
df = pd.DataFrame(people)
df
# -
df.sort_values(by='last')
df.sort_values(by='last',ascending=False)
df.sort_values(by=['last','first'],ascending=False)
people = {
"first": ["Corey", 'Jane', 'John','Adam'],
"last": ["Schafer", 'Doe', 'Doe','Doe'],
"email": ["<EMAIL>", '<EMAIL>', '<EMAIL>','<EMAIL>']
}
df = pd.DataFrame(people)
df.sort_values(by=['last','first'],ascending=False)
df.sort_values(by=['last','first'],ascending=[False,True])
df.sort_values(by=['last','first'],ascending=[False,True],inplace=True)
df
df.sort_index()
df['last'].sort_values()
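The mixed-direction multi-column sort above (`ascending=[False, True]`) can be mimicked in plain Python by chaining stable sorts, secondary key first:

```python
people = [
    {"first": "Corey", "last": "Schafer"},
    {"first": "Jane",  "last": "Doe"},
    {"first": "John",  "last": "Doe"},
    {"first": "Adam",  "last": "Doe"},
]
# last descending, first ascending: sort by the secondary key first;
# Python's sort is stable, so ties keep their prior (first-name) order.
people.sort(key=lambda p: p["first"])
people.sort(key=lambda p: p["last"], reverse=True)
print([p["first"] for p in people])  # ['Corey', 'Adam', 'Jane', 'John']
```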
df = pd.read_csv('survey_results_public.csv', index_col='Respondent')
schema_df = pd.read_csv('survey_results_schema.csv', index_col='Column')
pd.set_option('display.max_columns', 85)
pd.set_option('display.max_rows', 85)
df.head()
df.sort_values(by='Country',inplace=True)
df['Country'].head()
df[['Country','Student']].head()
df.sort_values(by=['Country','Student'],ascending=[True,False],inplace=True)
df[['Country','Student']].head()
df['AssessBenefits7'].nlargest(10)
df.nlargest(10,'AssessBenefits7')
df['AssessBenefits7'].nsmallest(10)
| 05-Machine-Learning-Code/数据分析工具/Pandas/.ipynb_checkpoints/7_sort_data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import ipycytoscape
import json
with open("geneData.json") as fi:
json_file = json.load(fi)
with open("geneStyle.json") as fi:
s = json.load(fi)
cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_json(json_file)
cytoscapeobj.set_style(s)
cytoscapeobj.set_layout(name = 'cola',
nodeSpacing = 5,
edgeLengthVal = 45,
animate = True,
randomize = False,
maxSimulationTime = 1500)
cytoscapeobj
# +
# edits graph directly
cytoscapeobj.set_layout(nodeSpacing=100)
cytoscapeobj.get_layout()
# +
# connects a slider to the nodeSpacing of the graph
import ipywidgets as widgets
node_range = widgets.IntSlider()
output = widgets.Output()
display(node_range, output)
def on_value_change(change):
with output:
cytoscapeobj.set_layout(nodeSpacing = node_range.value)
cytoscapeobj.get_layout()
node_range.observe(on_value_change, names='value')
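The slider wiring above follows the traitlets observer pattern: a callback receives a change dict whenever the `value` trait updates. A dependency-free sketch of the same idea (the `Observable` class here is a hypothetical stand-in, not the ipywidgets API):

```python
class Observable:
    """Minimal stand-in for a traitlets-style observable value."""
    def __init__(self, value=0):
        self._value = value
        self._handlers = []

    def observe(self, handler):
        self._handlers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        # notify every registered handler with a traitlets-like change dict
        old, self._value = self._value, new
        for h in self._handlers:
            h({"name": "value", "old": old, "new": new})

seen = []
slider = Observable()
slider.observe(lambda change: seen.append(change["new"]))
slider.value = 50
slider.value = 100
print(seen)  # [50, 100]
```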
| examples/Tip gene example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import fire
import typer
from datasets import load_dataset
import spacy
from pathlib import Path
from spacy.tokens import DocBin, Span
from typing import List
# +
count = 10
# count = float("inf")
def retokenize(tokens: List) -> str:
return " ".join(tokens)
span_key = "sc"
docs = []
lang = "en"
nlp = spacy.blank(lang)
def hf_to_span(data, tags_key: List):
for i, element in enumerate(data):
if i > count:
break
sentence = retokenize(element["tokens"])
doc = nlp(sentence)
ner_tags = element["ner_tags"]
ner_tags = [tags_key[k] for k in ner_tags]
char_spans, char_len = [], 0
substring = ""
j, prev_begin = 0, False
for j, tag in enumerate(ner_tags):
token = element["tokens"][j]
if "B-" in tag:
if prev_begin:
char_spans.append(
{
"start": start,
"end": char_len,
"label": label,
}
)
substring = ""
substring += token
label = tag[2:]
start = char_len
prev_begin = True
if "I-" in tag:
substring += " "
substring += token
prev_begin = False
if "O" == tag and len(substring) > 0:
char_spans.append({"start": start, "end": char_len, "label": label})
substring = ""
prev_begin = False
char_len += len(token)
char_len += 1 # for the space
    if len(substring) > 0:  # workaround for when the last tag is "I-"
char_spans.append(
{
"start": start,
"end": char_len,
"label": label,
}
)
for span in char_spans:
start, end, label = span["start"], span["end"], span["label"]
new_ent = doc.char_span(start, end - 1, label=label)
doc.set_ents([new_ent], default="unmodified")
docs.append(doc)
def make_spans(dataset, splits=["train", "dev", "test"]):
tags_key = dataset[splits[0]].features["ner_tags"].feature.names
for split in splits:
print(f"------------{split.upper()}------------------")
hf_to_span(dataset[split], tags_key=tags_key)
def download(output_path: Path) -> None:
    dataset = load_dataset("conll2003")
    splits = list(dataset.keys())
    make_spans(dataset, splits)
loc = Path(".")
download(".")
# docbin = DocBin().from_disk(loc)
# docs = list(docbin.get_docs(nlp.vocab)) # this is the line that I want to change
for doc in docs:
doc.spans[span_key] = list(doc.ents)
print([(ent, ent.label_) for ent in doc.ents])
# DocBin(docs=docs).to_disk(loc)
# -
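The character-offset bookkeeping in `hf_to_span` (tokens joined by single spaces, `char_len` advanced by token length plus one) can be distilled into a small standalone sketch. `bio_to_char_spans` below is an illustration of the technique, not part of the notebook's pipeline, and assumes the same single-space joining as `retokenize`:

```python
def bio_to_char_spans(tokens, tags):
    """Convert BIO tags to (start, end, label) character spans
    over " ".join(tokens); end is exclusive."""
    spans, pos, start, label = [], 0, None, None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if start is not None:          # close an adjacent entity
                spans.append((start, pos - 1, label))
            start, label = pos, tag[2:]
        elif tag == "O" and start is not None:
            spans.append((start, pos - 1, label))
            start = None
        # "I-" tags simply extend the open entity
        pos += len(tok) + 1                # +1 for the joining space
    if start is not None:                  # entity ends the sentence
        spans.append((start, pos - 1, label))
    return spans

tokens = ["EU", "rejects", "German", "call"]
tags = ["B-ORG", "O", "B-MISC", "O"]
print(bio_to_char_spans(tokens, tags))  # [(0, 2, 'ORG'), (11, 17, 'MISC')]
```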
| nbs/Convert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# + tags=[]
#Edits made by Paddington - Faerlina Sept 25, 2021
#Added:
#Mana pool
#Mana regen
#Mana potions
#Toggleable abilities
#Insect Swarm
#Starfall downranking
#Individual Spell damage and percentage - Commented out currently for debugging
#Dark Iron Smoking Pipe
#Added trinket internal cooldown in case of using 2 on-use trinkets
#Fixed problem where adding loops in main cell caused many loops with 0 damage
#Changed output to average dps which seems more accurate.
import numpy
import pandas as pd
import math #Paddington - For sqrt in spirit calculation in future
# Define here your stats and environment
# Stats :
intel = 430
crit_score = 107
hit_score = 174 # 113
spellpower = 882
haste = 0
spirit = 233
#Paddington - Added toggling of spells
casting_SF = True
casting_wrath = False
SF_rank = 6
is_MF = False
is_IS = False
is_csd = False # Chaotic Skyfire Diamond equiped
is_spellstrike = False
is_spellfire = False
is_motw = True
is_divine_spirit = True
is_arcane_brilliance = True #40 intellect +14 motw
is_totem_of_wrath = True
is_wrath_of_air = True
is_draenei_in_group = False
curse_of_elements = 1.0 #1.10
is_blessing_kings = False # 10% stats increased
is_crusader = False
is_sham_4_piece = True
is_flask = False
is_food = False
is_wizard_oil = False
#TODO Paddington - Add mana oil
is_mana_oil = False
is_twilight_owl = False
is_eye_of_night = False
is_T5 = False
is_T6_2 = False
is_T6_4 = False
is_drums = False
is_lust = False
lust_count = 1
lust_when = 100 # Percentage at which you want to receive the first bloodlust. 100 means the first lust goes off on engage.
#TODO Paddington - Add mana regeneration from cooldowns
is_manapot = True
is_innervate = True
is_mana_totem = True
is_spriest = True
misery = 1.05
## Update stats
if is_motw:
intel = intel + 18
spirit = spirit + 18
if is_arcane_brilliance:
intel = intel + 40
if is_divine_spirit:
spirit = spirit + 50
if is_totem_of_wrath:
hit_score = hit_score + (3 * 12.6)
crit_score = crit_score + (3 * 22.1)
if is_draenei_in_group:
hit_score = hit_score + 12.6
if is_blessing_kings:
intel = intel * 1.1
spirit = spirit * 1.1
if is_crusader:
crit_score = crit_score + (3 * 22.1)
if is_wrath_of_air:
spellpower = spellpower + 101
if is_sham_4_piece:
spellpower = spellpower + 20
if is_flask:
spellpower = spellpower + 80
if is_food:
spellpower = spellpower + 20
if is_wizard_oil:
spellpower = spellpower + 36
crit_score = crit_score + 14
if is_twilight_owl:
crit_score = crit_score + (2 * 22.1)
if is_eye_of_night:
spellpower = spellpower + 34
# Paddington - Added mana pool
mana = 2370 + intel*15
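For reference, the level-70 rating-to-percent conversions this calculator relies on (22.1 crit rating and 12.6 hit rating per 1%, 79.4 intellect per 1% druid spell crit, hit capped at 202 rating) can be sketched as standalone helpers; the function names here are illustrative:

```python
# Level-70 TBC conversion constants used throughout the calculator.
CRIT_RATING_PER_PCT = 22.1
HIT_RATING_PER_PCT = 12.6
INT_PER_CRIT_PCT = 79.4   # druid-specific intellect-to-crit conversion
HIT_CAP_RATING = 202      # flat cap, before talents and buffs

def crit_pct(crit_rating, intellect):
    return crit_rating / CRIT_RATING_PER_PCT + intellect / INT_PER_CRIT_PCT

def hit_pct(hit_rating):
    # extra rating beyond the cap contributes nothing
    return min(hit_rating, HIT_CAP_RATING) / HIT_RATING_PER_PCT

print(round(hit_pct(174), 2))  # 13.81
```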
# Functions
def logfunc(s):
is_log_on = False
if is_log_on:
print(s)
def compute_dps(mana, intel, crit_score, hit_score, spellpower, haste, is_csd,
is_spellstrike, is_spellfire, is_T5, is_T6_2, is_T6_4, is_manapot, is_MF,
is_IS, SF_rank, casting_wrath, casting_SF):
#print("Provided values : " + " int " + str(intel) + " crit " + str(crit_score) + " hit " + str(hit_score) + " sp " + str(spellpower)+ " spirit " + str(spirit))
# Special Trinkets
eye_of_mag = False # +54 SP + Grants 170 increased spell damage for 10 sec when one of your spells is resisted.
#Paddington - Added Dark Iron Smoking Pipe
    dark_iron_smoking_pipe = True # + 43 + Use: Increase damage and healing done by magical spells and effects by up to 155 for 20 sec. (2 Min Cooldown)
silver_crescent = False # 43 SP + Use: Increases damage and healing done by magical spells and effects by up to 155 for 20 sec. (2 Min Cooldown)
scryer_gem = True # 32 Hit rating + Use: Increases spell damage by up to 150 and healing by up to 280 for 15 sec. (1 Min, 30 Sec Cooldown)
quagmirran = False # 37 SP + Equip: Your harmful spells have a chance to increase your spell haste rating by 320 for 6 secs. (Proc chance: 10%, 45s cooldown)
essence_sapphi = False # Use: Increases damage and healing done by magical spells and effects by up to 130 for 20 sec. (2 Min Cooldown)
illidari_vengeance = False # +26 Crit . Use: Increases spell damage done by up to 120 and healing done by up to 220 for 15 sec. (1 Min, 30 Sec Cooldown)
lightening_capacitor = False # Equip: You gain an Electrical Charge each time you cause a damaging spell critical strike. When you reach 3 Electrical Charges, -> 750 dmg on average (2.5s ICD for charges, concern with wrath spam only)
xiris_gift = False # +32 crit / Use: Increases spell damage by up to 150 and healing by up to 280 for 15 sec. (1 Min, 30 Sec Cooldown)
sextant_of_unstable_currents = False # +40 crit, Chance on critical strike to increase your spellpower by 190 for 15 seconds (20% proc chance, 45 second icd)
ashtongue_talisman = False # Starfire has a 25% chance to grant you 150 spellpower for 8 seconds (25% proc chance, no icd)
skull = False # +55 spellpower, +25 hit, 175 haste for 20 seconds on a 120 second cooldown.
# Talents
balance_of_power = 4 # +4% Hit
focused_starlight = 4 # +4% crit for SF and Wrath
moonkin_form = 5 # +5% Crit
improved_mf = 10 # +10% Moonfire crit
starlight_wrath = True # reduce cast time by 0.5s
vengeance = True # +100% Crit damange
lunar_guidance = True # Spellpower bonus = 24% of total intel
moonfury = 1.1 # +10% damage
wrath_of_cenarius = 1.2 # +20% Spellpower for SF | +10% SpellPower for Wrath
fight_length = 155 # in seconds
is_activable_trinket = True
# Sets bonuses
spellfire = is_spellfire # SP bonus = +7% of total intellect
spellstrike = is_spellstrike # 5% chance to have +92sp for 10s - No ICD
windhawk = False # 8MP/5 KEK
# Meta GEM - Chaotic Skyfire Diamond
csd_equiped = is_csd
#Paddington - Added mana regen
#Mana Regen
casting_mp5 = 75
    mp5_tick = 5
# Two kinds of trinkets
is_trinket_activable = False
is_trinket_triggered = True
# Apply stats modifications
if sextant_of_unstable_currents:
crit_score = crit_score + 40
# trinket_duration = 15 # seconds
sextant_icd = 45 #seconds
# spellpower_trinket_bonus = 190
is_trinket_triggered = True
is_trinket_activable = False
if eye_of_mag:
spellpower = spellpower + 54
# trinket_duration = 10 # seconds
# trinket_cd = 0 # seconds
# spellpower_trinket_bonus = 170
is_trinket_triggered = True
is_trinket_activable = False
if xiris_gift:
crit_score = crit_score + 32
spellpower_trinket_bonus = 150
trinket_duration = 15 # seconds
trinket_cd = 90 # seconds
on_use_icd = 15 # seconds
is_trinket_activable = True
#Paddington - Added Dark Iron Smoking Pipe
if dark_iron_smoking_pipe:
spellpower = spellpower + 43
spellpower_trinket_bonus = 155
trinket_duration = 20 # seconds
trinket_cd = 120 # seconds
on_use_icd = 15 # seconds
is_trinket_activable = True
if silver_crescent:
spellpower = spellpower + 43
spellpower_trinket_bonus = 155
trinket_duration = 20 # seconds
trinket_cd = 120 # seconds
on_use_icd = 15 # seconds
is_trinket_activable = True
if essence_sapphi:
spellpower = spellpower + 40
spellpower_trinket_bonus = 130
trinket_duration = 20 # seconds
trinket_cd = 120 # seconds
is_trinket_activable = True
if scryer_gem:
#hit_score = hit_score + 32 #Paddington - removed hit because it is already added when manually entering hit rating
is_trinket_activable = True
spellpower_trinket_bonus = 150
trinket_duration = 15 # seconds
on_use_icd = 15 # seconds
trinket_cd = 90 # seconds
if illidari_vengeance:
crit_score = crit_score + 26
spellpower_trinket_bonus = 120
trinket_duration = 15 # seconds
trinket_cd = 90 # seconds
on_use_icd = 15 # seconds
is_trinket_activable = True
if quagmirran:
spellpower = spellpower + 37
# haste_trinket_bonus = 320 / 15.77
# trinket_duration = 6 # seconds
trinket_icd = 45 # seconds
# trinket_activation_chance = 0.1 # 10%
if skull:
spellpower = spellpower + 55
#hit_score = hit_score + 25 #Paddington - removed hit because it is already added when manually entering hit rating
trinket_duration = 20 #seconds
trinket_cd = 120 #seconds
on_use_icd = 15 # seconds
# Translating stats to %
# At level 70, 22.1 Spell Critical Strike Rating increases your chance to land a Critical Strike with a Spell by 1%
# At level 70, 12.6 Spell Hit Rating increases your chance to Hit with Spells by 1%. Hit cap is 202 FLAT (not including talents & buffs).
# Druids receive 1% Spell Critical Strike chance for every 79.4 points of intellect.
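The rating-to-percent conversions described in the comments above can be sketched as a pair of small helpers (a minimal illustration of the level-70 numbers quoted; `crit_chance_percent` and `hit_chance_percent` are hypothetical names, not used by the simulator itself):

```python
def crit_chance_percent(crit_rating, intellect):
    """Spell crit % at level 70: 22.1 rating per 1%, plus 1% per 79.4 intellect (druid)."""
    return crit_rating / 22.1 + intellect / 79.4

def hit_chance_percent(hit_rating):
    """Spell hit % gained from rating: 12.6 rating per 1%, capped at 16% (202 rating)."""
    return min(16.0, hit_rating / 12.6)

print(crit_chance_percent(221, 397))  # 10% from rating + 5% from intellect
print(hit_chance_percent(202))        # right at the 16% cap
```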
FF_mana_cost = 145
#Paddington - changed spell damages to reflect in-game damage.
# Moonfire base damage : 305 to 357 Arcane damage and then an additional 600 Arcane damage over 12 sec.
MF_coeff = 0.15
MF_mana_cost = 450
if is_T6_2:
MF_coeff_dot = 0.65
else:
MF_coeff_dot = 0.52
# Starfire base damage : Causes 550 to 647 Arcane damage -> 658 on average
SF_coeff = 1
#Paddington - Added spell downranking
if SF_rank == 6:
SF_mana_cost = 287
SF_average_damage = 553.5
elif SF_rank == 7:
SF_mana_cost = 309
SF_average_damage = 614.5
else:
SF_mana_cost = 337
SF_average_damage = 658
MF_average_damage = 397.5
if is_T6_2:
MF_average_dot_damage = 720
else:
MF_average_dot_damage = 600
partial_coeff = 0.5 # For the moment, let's say that on average, partials get a 50% damage reduction
sf_cast_time = 3
sf_cast_time_ng = 2.5
#Paddington - Added Insect Swarm
IS_coeff = 0.76
IS_mana_cost = 175
IS_average_dot_damage = 792
wrath_coeff = 0.65
wrath_mana_cost = 232
wrath_average_damage = 448.5
wrath_cast_time = 1.5
wrath_cast_time_ng = 1
#Paddington - Added mana potions
if is_manapot:
manapot_cd = 0
manapot_up = True
else:
manapot_cd = 60
manapot_up = False
# Apply spell haste coefficients here
# 15.77 Spell Haste Rating increases casting speed by 1%
# % Spell Haste at level 70 = (Haste Rating / 15.77)
# New Casting Time = Base Casting Time / (1 + (% Spell Haste / 100))
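A quick sketch of the haste formula quoted above (hypothetical helper, assuming the 15.77-rating-per-percent conversion):

```python
def hasted_cast_time(base_cast, haste_rating):
    """New casting time = base / (1 + spell haste % / 100), with 15.77 rating per 1%."""
    spell_haste_pct = haste_rating / 15.77
    return base_cast / (1 + spell_haste_pct / 100)

print(hasted_cast_time(3.0, 0))      # 3.0 with no haste
print(hasted_cast_time(3.0, 157.7))  # just under 2.73 s at ~10% haste
```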
spell_haste = haste / 15.77
# sf_cast_time = 3 / (1 + (spell_haste/100))
# sf_cast_time_ng = 2.5 / (1 + (spell_haste/100))
# print("SF Cast time : " + str(sf_cast_time))
# print("SF NG Cast time : " + str(sf_cast_time_ng))
# Spell power calculation for fight SP + lunar guidance
if lunar_guidance:
spellpower = spellpower + 0.25 * intel
if spellfire:
spellpower = spellpower + 0.07 * intel
if is_divine_spirit:
spellpower = spellpower + .1 * spirit
# Hit chance
# 12.6 Spell Hit Rating -> 1%
hit_chance = min(99, 83 + (hit_score/12.6) + balance_of_power)
hit_chance_percent_value = hit_chance / 100
logfunc("Hit chance is : " + str(hit_chance))
# Crit chance
# At level 70, 22.1 Spell Critical Strike Rating -> 1%
# Druids receive 1% Spell Critical Strike chance for every 79.4 points of intellect.
MF_crit_percent = crit_score/22.1 + intel/79.4 + improved_mf + moonkin_form
MF_crit_percent_value = MF_crit_percent / 100
logfunc("Moonfire crit chance is : " + str(MF_crit_percent))
if is_T6_4 == True:
SF_crit_percent = crit_score/22.1 + intel/79.4 + moonkin_form + focused_starlight + 5
else:
SF_crit_percent = crit_score/22.1 + intel/79.4 + moonkin_form + focused_starlight
SF_crit_percent_value = SF_crit_percent / 100
logfunc("Starfire crit chance is : " + str(SF_crit_percent))
wrath_crit_percent = crit_score/22.1 + intel/79.4 + moonkin_form + focused_starlight
logfunc("Wrath crit chance is : " + str(wrath_crit_percent))
logfunc("Spellpower is : " + str(spellpower))
# Crit coeff
if csd_equiped:
crit_coeff = 2.09
else:
crit_coeff = 2
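The Chaotic Skyfire Diamond raises the crit multiplier from 2.0 to 2.09. On average, a crit chance `p` and a crit coefficient `c` scale damage by `1 + p*(c - 1)`; a quick sketch (hypothetical helper, not part of the simulator):

```python
def expected_damage_multiplier(crit_chance, crit_coeff):
    """Average damage multiplier for a given crit chance (0..1) and crit coefficient."""
    return 1 + crit_chance * (crit_coeff - 1)

print(expected_damage_multiplier(0.25, 2.0))   # plain 2x crits: 1.25
print(expected_damage_multiplier(0.25, 2.09))  # with the CSD meta: 1.2725
```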
# Spellstrike bonus:
if spellstrike:
spellstrike_bonus = 92
else:
spellstrike_bonus = 0
# Prepare and launch the simulations
loop_size = 1 # number of fights simulated, Paddington - Changed to 1 and use 2nd cell for number of fights
average_dps = 0
n = 0
while n < loop_size:
n = n + 1
# Initialization
loopcount = 0
total_damage_done = 0
damage = 0
fight_time = 0
spellstrike_uptime = 0
eye_of_mag_uptime = 0
ff_uptime = 0
mf_uptime = 0
is_uptime = 0
trinket_uptime = 0
trinket_cd_timer = 0
is_trinket_active = False
#Paddington - Added on-use trinket internal cooldown
on_use_icd_timer = 0
is_trinket_available = True
is_ff_up = False
is_mf_up = False
#Paddington - Added Insect Swarm
is_is_up = False
is_ng = False
is_eye_of_mag_triggered = False
is_sextant_of_unstable_currents_triggered = False
is_ashtongue_triggered = False
spellstrike_proc = False
ng_proc = False
drum_time = 0
drum_cd = 0
eye_of_quagg_icd = 0
eye_of_quagg_proc = False
eye_of_quagg_uptime = 0
sextant_of_unstable_currents_icd = 0
sextant_of_unstable_currents_proc = False
sextant_of_unstable_currents_uptime = 0
ashtongue_uptime = 0
skull_uptime = 0
skull_cd = 0
skull_active = False
lust_amount = lust_count
lust_uptime = 0
lust_start = 1 - (lust_when/100)
lusted = False
fight_haste = spell_haste
#Paddington - Added individual spell damage totals
SF_damage = 0
MF_damage = 0
IS_damage = 0
Wrath_damage = 0
#Treant_damage = 0
SF_hit = False
MF_hit = False
IS_hit = False
Wrath_hit = False
# Time to kick ass and chew bubble gum
while fight_time <= fight_length:
loop_duration = max(1, (1.5 / (1 + (fight_haste / 100)))) #GCD - can't be less, it's the rule !
damage = 0
# adding a variable to keep the initial spellpower to revert to this value in case a trinket bonus
# fades out before the end of the SF / Wrath cast (ie. no spellpower snapshot)
loop_start_spellpower = spellpower
if spellstrike_proc:
fight_spell_power = spellpower + spellstrike_bonus
else:
fight_spell_power = spellpower
if is_eye_of_mag_triggered:
fight_spell_power = fight_spell_power + 170
if is_sextant_of_unstable_currents_triggered:
fight_spell_power = fight_spell_power + 190 # stack on top of other procs instead of overwriting them
if is_ashtongue_triggered:
fight_spell_power = fight_spell_power + 150
if is_drums and drum_cd <= 0:
drum_time = 30
drum_cd = 120
drums_up = True
fight_haste = fight_haste + (80/15.77)
# logfunc(" drums up !!! fight haste is " + str(fight_haste))
if is_lust and lust_amount >= 1 and fight_time >= (lust_start * fight_length) and lusted == False:
lust_amount = lust_amount - 1
fight_haste = fight_haste + 30
lust_uptime = 40
lusted = True
#logfunc("lust up, lust amount is " + str(lust_amount) + " fight time is " + str(fight_time))
#Paddington - Added Dark Iron Smoking Pipe
if dark_iron_smoking_pipe and is_trinket_activable and is_ff_up:
fight_spell_power = fight_spell_power + spellpower_trinket_bonus
on_use_icd_timer = on_use_icd
trinket_cd_timer = trinket_cd
is_trinket_activable = False
if silver_crescent and is_trinket_activable and is_ff_up:
fight_spell_power = fight_spell_power + spellpower_trinket_bonus
on_use_icd_timer = on_use_icd
trinket_cd_timer = trinket_cd
is_trinket_activable = False
if skull and skull_cd <= 0 and is_ff_up:
fight_haste = fight_haste + (175/15.77)
skull_uptime = 20
skull_cd = 120
skull_active = True
#logfunc("skull activated!!")
#Paddington - Added mana potions
if manapot_up and mana < 7500:
mana = mana + (1800 + 3000)/2
manapot_cd = 60
manapot_up = False
logfunc("Mana Potion Used!")
# if FF not up, cast FF
#Paddington - Added mana management
if not is_ff_up and mana >= FF_mana_cost:
logfunc("Casting Faerie Fire !")
#Paddington - Added mana management
mana = mana - FF_mana_cost
logfunc("Current mana:" + str(mana))
loop_duration = max(1, (1.5 / (1 + (fight_haste/100)))) #GCD
is_crit = False # can't crit on FF
damage = 0 # and no damage applied
if(numpy.random.random() <= hit_chance_percent_value):
is_hit = True
ff_uptime = 40
is_ff_up = True
# Test if spellstrike is proc -> Commented out because tested later on in the code
# spellstrike_proc = (numpy.random.randint(1, high = 101, size = 1) <= 10)
else:
is_hit = False
logfunc("Faerie Fire -> Resist !")
# if Moonfire not up, cast Moonfire
else:
# Once FF is up trigger trinket if available.
if is_trinket_active:
# apply spell modifier
fight_spell_power = fight_spell_power + spellpower_trinket_bonus
else:
if is_trinket_available and is_trinket_activable:
# activate trinket
logfunc("Trinket activation !!")
is_trinket_active = True
# start the chrono of activation
trinket_uptime = trinket_duration
# start the chrono of CD
trinket_cd_timer = trinket_cd
#Paddington - Added mana management
if not is_mf_up and is_MF and mana >= MF_mana_cost:
logfunc("Casting Moonfire !")
#Paddington - Added mana management
mana = mana - MF_mana_cost
logfunc("Current mana:" + str(mana))
loop_duration = max(1, (1.5 / (1 + (fight_haste/100)))) #GCD because we cast a spell
# Is it a hit ?
if(numpy.random.random() <= hit_chance_percent_value):
is_hit = True
MF_hit = True
# Is it a crit ?
is_crit = (numpy.random.random() <= MF_crit_percent_value)
# Is it a partial ?
#if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
# damage = MF_average_damage + MF_coeff * fight_spell_power * partial_coeff
damage = MF_average_damage + MF_coeff * fight_spell_power
# Apply damage
if is_crit:
damage = damage * crit_coeff
# DoT :
if is_T6_2:
damage = damage + MF_average_dot_damage + (MF_coeff_dot * fight_spell_power * min(15, (fight_length - fight_time - 1.5))/15)
else:
damage = damage + MF_average_dot_damage + (MF_coeff_dot * fight_spell_power * min(12, (fight_length - fight_time - 1.5))/12)
# There is a Hit ! update model
if is_T6_2:
is_mf_up = True
mf_uptime = 15
else:
is_mf_up = True
mf_uptime = 12
else:
is_hit = False
logfunc("Moonfire -> Resist ! ")
#Paddington - Added insect swarm use
elif not is_is_up and is_IS and mana>= IS_mana_cost:
logfunc("Casting Insect Swarm !")
mana = mana - IS_mana_cost
logfunc("Current mana:" + str(mana))
loop_duration = max(1, (1.5 / (1 + (fight_haste/100)))) #GCD because we cast a spell
# Is it a hit ?
if(numpy.random.random() <= hit_chance_percent_value):
is_hit = True
#Paddington - Added hit check for individual spell dmg totals
IS_hit = True
# DoT :
damage = damage + IS_average_dot_damage + (IS_coeff * fight_spell_power * min(12, (fight_length - fight_time - 1.5))/12)
# There is a Hit ! update model
is_is_up = True
is_uptime = 12
else:
is_hit = False
logfunc("Insect Swarm -> Resist ! ")
#Paddington - Added wrath option here
#Paddington - Added mana management
elif casting_wrath and mana >= wrath_mana_cost:
# Cast Wrath
logfunc("Casting Wrath !")
mana = mana - wrath_mana_cost
# Is it a hit ?
if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
is_hit = True
#Paddington - Added hit check for individual spell dmg totals
Wrath_hit = True
# Is it a crit ?
is_crit = (numpy.random.randint(1, high = 101, size = 1) <= wrath_crit_percent)
# Is it a partial ?
if(numpy.random.randint(1, high = 101, size = 1) > hit_chance):
logfunc("Partial hit !")
damage = (wrath_average_damage + (wrath_coeff * fight_spell_power * wrath_of_cenarius * partial_coeff )) * moonfury
# logfunc("Damage done : " + str(damage))
else:
damage = (wrath_average_damage + (wrath_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury
logfunc("Damage done : " + str(damage))
if is_crit:
damage = damage * crit_coeff
else:
is_hit = False
logfunc("Wrath -> Resist ! ")
if is_ng:
loop_duration = wrath_cast_time_ng
else:
loop_duration = wrath_cast_time
is_ng = False # Consume NG once wrath is cast
#Paddington - Added mana management
elif casting_SF and mana >= SF_mana_cost:
# Cast Starfire
logfunc("Casting Starfire !")
#Paddington - Added mana management
mana = mana - SF_mana_cost
logfunc("Current mana:" + str(mana))
sf_cast_time = 3 / (1 + (fight_haste/100))
sf_cast_time_ng = 2.5 / (1 + (fight_haste/100))
# Computing loop duration
if is_ng:
sf_cast_time_ng = max(1, (2.5 / (1 + (fight_haste/100))))
loop_duration = max(1, sf_cast_time_ng)
else:
sf_cast_time = max(1, (3 / (1 + (fight_haste/100))))
loop_duration = max(1, sf_cast_time)
is_ng = False # Consume NG once SF is cast
# Is it a hit ?
# if(numpy.random.randint(1, high = 101, size = 1) <= hit_chance):
if(numpy.random.random() <= hit_chance_percent_value):
is_hit = True
#Paddington - Added hit check for individual spell dmg totals
SF_hit = True
if ashtongue_talisman and numpy.random.randint(1, high = 5, size = 1) == 1:
is_ashtongue_triggered = True
ashtongue_uptime = 8
logfunc("Ashtongue Proc !!")
# Is it a crit ?
is_crit = (numpy.random.random() <= SF_crit_percent_value)
if is_crit:
logfunc("Starfire -> Crit ! ")
# Is it a partial ?
#if(numpy.random.randint(1, high = 101, size = 1) > hit_chance):
# logfunc("Partial hit !")
# damage = (SF_average_damage + (SF_coeff * fight_spell_power * wrath_of_cenarius * partial_coeff )) * moonfury
# logfunc("Damage done : " + str(damage))
# If a proc fades out before the current cast finishes, remove its spellpower bonus (no snapshotting):
if spellstrike_proc and (spellstrike_uptime < loop_duration):
fight_spell_power = fight_spell_power - spellstrike_bonus
logfunc("Spellstrike fades out too early ...")
if is_eye_of_mag_triggered and (eye_of_mag_uptime < loop_duration):
fight_spell_power = fight_spell_power - 170
logfunc("Eye of Mag fades out too early ...")
if is_sextant_of_unstable_currents_triggered and (sextant_of_unstable_currents_uptime < loop_duration):
fight_spell_power = fight_spell_power - 190
logfunc("Sextant fades out too early ...")
if is_ashtongue_triggered and (ashtongue_uptime < loop_duration):
fight_spell_power = fight_spell_power - 150
logfunc("Ashtongue fades out too early ...")
# end of trinket verification
#Paddington - Added check for Insect Swarm
if is_T5 and is_ng and mf_uptime >= sf_cast_time_ng and is_uptime >= sf_cast_time_ng:
damage = (SF_average_damage + (SF_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury * 1.1
elif is_T5 and mf_uptime >= sf_cast_time:
damage = (SF_average_damage + (SF_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury * 1.1
elif is_T5 and is_uptime >= sf_cast_time:
damage = (SF_average_damage + (SF_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury * 1.1
else:
damage = (SF_average_damage + (SF_coeff * fight_spell_power * wrath_of_cenarius )) * moonfury
if is_crit:
damage = damage * crit_coeff
logfunc("Damage done : " + str(damage))
else:
is_hit = False
logfunc("Starfire -> Resist ! ")
#Treant_damage = 0
#Individual spell damage totals
if SF_hit:
SF_damage = SF_damage + damage
SF_hit = False
if MF_hit:
MF_damage = MF_damage + damage
MF_hit = False
if IS_hit:
IS_damage = IS_damage + damage
IS_hit = False
if Wrath_hit:
Wrath_damage = Wrath_damage + damage
Wrath_hit = False
# Update time and model
fight_time = fight_time + loop_duration
ff_uptime = ff_uptime - loop_duration
mf_uptime = mf_uptime - loop_duration
trinket_uptime = trinket_uptime - loop_duration
trinket_cd_timer = trinket_cd_timer - loop_duration
#Paddington - Added Internal CD for on-use trinkets
on_use_icd_timer = on_use_icd_timer - loop_duration
eye_of_mag_uptime = eye_of_mag_uptime - loop_duration
spellstrike_uptime = spellstrike_uptime - loop_duration
eye_of_quagg_icd = eye_of_quagg_icd - loop_duration
eye_of_quagg_uptime = eye_of_quagg_uptime - loop_duration
sextant_of_unstable_currents_icd = sextant_of_unstable_currents_icd - loop_duration
sextant_of_unstable_currents_uptime = sextant_of_unstable_currents_uptime - loop_duration
ashtongue_uptime = ashtongue_uptime - loop_duration
drum_time = drum_time - loop_duration
drum_cd = drum_cd - loop_duration
#####################
#Paddington - Added Mana management
manapot_cd = manapot_cd - loop_duration
mp5_tick = mp5_tick - loop_duration
######################
lust_uptime = lust_uptime - loop_duration
skull_uptime = skull_uptime - loop_duration
skull_cd = skull_cd - loop_duration
# Check the timer on buffs / debuffs
if spellstrike_uptime <= 0:
spellstrike_proc = False
if mf_uptime <= 0:
is_mf_up = False
if ff_uptime <= 0:
is_ff_up = False
# Trinket
if trinket_uptime <= 0:
is_trinket_active = False
else:
is_trinket_active = True
#Paddington - Added check for trinket internal CD if using 2 on-use trinkets
if trinket_cd_timer <= 0 and on_use_icd_timer <= 0:
is_trinket_available = True
else:
is_trinket_available = False
if eye_of_quagg_uptime <= 0 and eye_of_quagg_proc:
fight_haste = fight_haste - (320/15.77)
eye_of_quagg_proc = False
logfunc("eye of quagg fades. fight haste is " + str(fight_haste))
if eye_of_mag_uptime <= 0:
is_eye_of_mag_triggered = False
if sextant_of_unstable_currents_uptime <= 0:
is_sextant_of_unstable_currents_triggered = False
if ashtongue_uptime <= 0:
is_ashtongue_triggered = False
if is_drums and drum_time <= 0 and drums_up:
fight_haste = fight_haste - (80/15.77)
drums_up = False
#logfunc("drums down!!! fight haste is: " + str(fight_haste))
#Paddington - Added mana potion use
if manapot_cd <= 0:
manapot_up = True
logfunc("Mana Potion Available!")
else:
logfunc("Mana Potion Down!")
#Paddington - Added mana management
if mp5_tick <= 0:
mana = mana + casting_mp5
mp5_tick = 5 # restart the 5-second regen tick
if lusted and lust_uptime <= 0:
fight_haste = fight_haste - 30
lusted = False
#logfunc("lust down!")
if skull_active and skull_uptime <= 0:
fight_haste = fight_haste - (175/15.77)
skull_active = False
#logfunc("skull deactivated !!")
# Update nature's grace
if is_crit:
if sextant_of_unstable_currents and numpy.random.randint(1, high = 6, size = 1) == 5 and sextant_of_unstable_currents_icd <= 0:
is_sextant_of_unstable_currents_triggered = True
sextant_of_unstable_currents_uptime = 15
sextant_of_unstable_currents_icd = sextant_icd
logfunc("Sextant proc!")
is_ng = True
total_damage_done = total_damage_done + damage * curse_of_elements * misery
# If there is a Hit, Check if spellstrike / Quag'eye is proc or refreshed :
if is_hit:
if is_spellstrike and numpy.random.randint(1, high = 11, size = 1) == 10:
spellstrike_proc = True
spellstrike_uptime = 10
logfunc("Spellstrike proc !!!")
if quagmirran and numpy.random.randint(1, high = 11, size = 1) == 10 and eye_of_quagg_icd <= 0:
eye_of_quagg_proc = True
eye_of_quagg_uptime = 6
eye_of_quagg_icd = trinket_icd
fight_haste = fight_haste + (320/15.77)
logfunc("Eye of Quagmirran proc !!!, fight haste is: "+ str(fight_haste))
# If there is a resist, check eye of mag proc :
if eye_of_mag and not is_hit:
is_eye_of_mag_triggered = True
eye_of_mag_uptime = 10
logfunc("Eye of mag proc !!!")
# Print output
logfunc("Loop Duration : " + str(loop_duration))
logfunc("Loop Damage : " + str(damage))
if damage > 0:
loopcount = loopcount + 1
#print("damage loops: ", loopcount) Used to track how many loops actually do damage in a fight i.e. before mana runs out.
logfunc("Overall damage done : " + str(total_damage_done))
#Paddington - Added individual spell damage totals
#print("Starfire damage : ", str(SF_damage))
#print("Starfall DPS: ", SF_damage/fight_time)
#print("Percent Starfall damage: ", (SF_damage/total_damage_done)*100)
#print("Wrath damage : ", str(Wrath_damage))
#print("Wrath DPS: ", Wrath_damage/fight_time)
#print("Percent Wrath damage: ", (Wrath_damage/total_damage_done)*100)
#print("Moonfire damage : ", str(MF_damage))
#print("Moonfire DPS: ", MF_damage/fight_time)
#print("Percent Moonfire damage: ", (MF_damage/total_damage_done)*100)
#print("Insect Swarm damage : ", str(IS_damage))
#print("Insect Swarm DPS: ", IS_damage/fight_time)
#print("Percent Insect Swarm damage: ", (IS_damage/total_damage_done)*100)
logfunc("Overall DPS : " + str(total_damage_done/fight_time)) # We use fight_time here in case SF lands after the fight_length mark
average_dps = average_dps + (total_damage_done/fight_time)
break
#Paddington - Changed to average dps. Seems more accurate.
#real_average_dps = average_dps / loop_size
#logfunc("Average DPS : " + str(real_average_dps))
return(average_dps)
#return(real_average_dps)
# Adapting the function to panda dataframes
#v_compute_dps = numpy.vectorize(compute_dps)
# +
# Run this part to calculate the average dps value of your configuration
n = 0
all_dps = 0
loop_size = 100
damage = 0
while n < loop_size :
n = n + 1
# all_dps = all_dps + compute_dps(intel = 381, crit_score = 243, hit_score = 135, spellpower = 1162, haste = 0)
all_dps = all_dps + compute_dps(mana, intel, crit_score, hit_score, spellpower, haste,
is_csd, is_spellstrike, is_spellfire, is_T5, is_T6_2, is_T6_4, is_manapot,
is_MF, is_IS, SF_rank, casting_wrath, casting_SF)
average_dps = all_dps / loop_size
print("Average DPS with current configuration is : ", average_dps)
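Each call to `compute_dps` is a single Monte Carlo sample, so the spread of the 100 per-fight results tells you how trustworthy the average is. A sketch of reporting the mean together with its standard error (hypothetical helper and sample values, not wired into the loop above):

```python
import math

def mean_and_stderr(samples):
    """Mean and standard error of the mean for a list of per-fight DPS samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

dps_samples = [980.0, 1010.0, 995.0, 1005.0]  # hypothetical per-fight results
m, se = mean_and_stderr(dps_samples)
print(m, se)
```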
| dps_generator_Toggle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exporting Trajectories
#
# <img align="right" src="https://anitagraser.github.io/movingpandas/pics/movingpandas.png">
#
# [](https://mybinder.org/v2/gh/anitagraser/movingpandas-examples/main?filepath=1-tutorials/4-exporting-trajectories.ipynb)
#
# Trajectories and TrajectoryCollections can be converted back to GeoDataFrames that can then be exported to different GIS file formats.
# +
import pandas as pd
import geopandas as gpd
from geopandas import GeoDataFrame, read_file
from shapely.geometry import Point, LineString, Polygon
from datetime import datetime, timedelta
import movingpandas as mpd
print(mpd.__version__)
# -
gdf = read_file('../data/geolife_small.gpkg')
gdf['t'] = pd.to_datetime(gdf['t'])
gdf = gdf.set_index('t').tz_localize(None)
traj_collection = mpd.TrajectoryCollection(gdf, 'trajectory_id')
# ## Converting TrajectoryCollections back to GeoDataFrames
#
# ### Convert to a point GeoDataFrame
traj_collection.to_point_gdf()
# ### Convert to a line GeoDataFrame
traj_collection.to_line_gdf()
# ### Convert to a trajectory GeoDataFrame
traj_collection.to_traj_gdf(wkt=True)
# ## Exporting to GIS file formats
# These GeoDataFrames can be exported to different file formats using GeoPandas, as documented in https://geopandas.org/docs/user_guide/io.html
traj_collection.to_traj_gdf(wkt=True).to_file("temp.gpkg", layer='trajectories', driver="GPKG")
read_file('temp.gpkg').plot()
| 1-tutorials/4-exporting-trajectories.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PLEASE NOTE: Please run this notebook OUTSIDE a Spark notebook as it should run in a plain Default Python 3.6 Free Environment
#
# This is the last assignment for the Coursera course "Advanced Machine Learning and Signal Processing"
#
# Just execute all cells one after the other and you are done. Note that in the last cell you should update your email address (the one you've used for Coursera) and obtain a submission token; you get this from the programming assignment directly on Coursera.
#
# Please fill in the sections labelled with "###YOUR_CODE_GOES_HERE###"
#
# The purpose of this assignment is to learn how feature engineering boosts model performance. You will apply Discrete Fourier Transformation on the accelerometer sensor time series and therefore transforming the dataset from the time to the frequency domain.
#
# After that, you’ll use a classification algorithm of your choice to create a model and submit the new predictions to the grader. Done.
#
#
# +
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">' + string + "</span>"))
if "sc" in locals() or "sc" in globals():
printmd(
"<<<<<!!!!! It seems that you are running in an IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>"
)
# -
# !pip install pyspark==2.4.5
# !pip install https://github.com/IBM/coursera/blob/master/systemml-1.3.0-SNAPSHOT-python.tar.gz?raw=true
# +
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import DoubleType, IntegerType, StringType, StructField, StructType
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# -
#
# So the first thing we need to ensure is that we are on the latest version of SystemML, which is 1.3.0 (as of 20th March '19). Please use the code block below to check if you are already on 1.3.0 or higher. 1.3 contains a necessary fix; that's why we are running against the SNAPSHOT.
#
# !mkdir -p /home/dsxuser/work/systemml
# +
from systemml import MLContext, dml
ml = MLContext(spark)
ml.setConfigProperty("sysml.localtmpdir", "mkdir /home/dsxuser/work/systemml")
print(ml.version())
if not ml.version() == "1.3.0-SNAPSHOT":
raise ValueError(
"please upgrade to SystemML 1.3.0, or restart your Kernel (Kernel->Restart & Clear Output)"
)
# -
# !wget https://github.com/IBM/coursera/blob/master/coursera_ml/shake.parquet?raw=true
# !mv shake.parquet?raw=true shake.parquet
# Now it’s time to read the sensor data and create a temporary query table.
df = spark.read.parquet("shake.parquet")
df.show()
df.printSchema()
# !pip install pixiedust
# + pixiedust={"displayParams": {"handlerId": "tableView"}}
import pixiedust
display(df)
# -
df.createOrReplaceTempView("df")
# We’ll use Apache SystemML to implement Discrete Fourier Transformation. This way all computation continues to happen on the Apache Spark cluster for advanced scalability and performance.
# As you've learned from the lecture, implementing Discrete Fourier Transformation in a linear algebra programming language is simple. Apache SystemML DML is such a language, and as you can see the implementation is straightforward and doesn't differ much from the mathematical definition (just note that the sum operator has been swapped with a vector dot product using the %*% syntax borrowed from R):
#
# <img style="float: left;" src="https://wikimedia.org/api/rest_v1/media/math/render/svg/1af0a78dc50bbf118ab6bd4c4dcc3c4ff8502223">
#
#
dml_script = """
PI = 3.141592654
N = nrow(signal)
n = seq(0, N-1, 1)
k = seq(0, N-1, 1)
M = (n %*% t(k))*(2*PI/N)
Xa = cos(M) %*% signal
Xb = sin(M) %*% signal
DFT = cbind(Xa, Xb)
"""
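The DML above builds N×N cosine and sine matrices from the outer product `n %*% t(k)` and multiplies them with the signal. A NumPy sketch of the same construction, checked against `numpy.fft` (note the DML keeps `+sin`, which is the negative of NumPy's imaginary part under the e^{-i...} convention):

```python
import numpy as np

def dft_real(signal):
    """Cosine and sine projections of the DFT via explicit matrix products (mirrors the DML)."""
    N = len(signal)
    n = np.arange(N).reshape(-1, 1)
    k = np.arange(N).reshape(1, -1)
    M = (n @ k) * (2 * np.pi / N)  # same outer product as n %*% t(k) in the DML
    Xa = np.cos(M) @ signal        # real part
    Xb = np.sin(M) @ signal        # -imag under numpy's sign convention
    return Xa, Xb

sig = np.array([1.0, 2.0, 3.0, 4.0])
Xa, Xb = dft_real(sig)
ref = np.fft.fft(sig)
print(np.allclose(Xa, ref.real), np.allclose(Xb, -ref.imag))  # True True
```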
# Now it’s time to create a function which takes a single row Apache Spark data frame as argument (the one containing the accelerometer measurement time series for one axis) and returns the Fourier transformation of it. In addition, we are adding an index column for later joining all axis together and renaming the columns to appropriate names. The result of this function is an Apache Spark DataFrame containing the Fourier Transformation of its input in two columns.
#
# +
from pyspark.sql.functions import monotonically_increasing_id
def dft_systemml(signal, name):
prog = dml(dml_script).input("signal", signal).output("DFT")
return (
# execute the script inside the SystemML engine running on top of Apache Spark
ml.execute(prog)
# read result from SystemML execution back as SystemML Matrix
.get("DFT")
# convert SystemML Matrix to ApacheSpark DataFrame
.toDF()
# rename default column names
.selectExpr("C1 as %sa" % (name), "C2 as %sb" % (name))
# add unique ID per row for later joining
.withColumn("id", monotonically_increasing_id())
)
# -
# Now it's time to create individual DataFrames containing only a subset of the data. We filter simultaneously for each accelerometer sensor axis and each class. This means you'll get 6 DataFrames. Please implement this using the relational API of DataFrames or SparkSQL. Please use class 1 and 2 and not 0 and 1. <h1><span style="color:red">Please make sure that each DataFrame has only ONE column (only the measurement, e.g. not the CLASS column)</span></h1>
#
# +
from pyspark.sql.functions import countDistinct
df.select(countDistinct("CLASS")).show()
# -
x0 = df.filter(df.CLASS == 0).select("X")
y0 = df.filter(df.CLASS == 0).select("Y")
z0 = df.filter(df.CLASS == 0).select("Z")
x1 = df.filter(df.CLASS == 1).select("X")
y1 = df.filter(df.CLASS == 1).select("Y")
z1 = df.filter(df.CLASS == 1).select("Z")
# Since we've created this cool DFT function before, we can just call it for each of the 6 DataFrames. And since the result of each call is again a DataFrame, we can follow PySpark best practice and simply chain method calls on it. So what we are doing is the following:
#
# - Calling DFT for each class and accelerometer sensor axis.
# - Joining them together on the ID column.
# - Re-adding a column containing the class index.
# - Stacking the DataFrames for both classes together
#
#
# +
from pyspark.sql.functions import lit
df_class_0 = (
dft_systemml(x0, "x")
.join(dft_systemml(y0, "y"), on=["id"], how="inner")
.join(dft_systemml(z0, "z"), on=["id"], how="inner")
.withColumn("class", lit(0))
)
df_class_1 = (
dft_systemml(x1, "x")
.join(dft_systemml(y1, "y"), on=["id"], how="inner")
.join(dft_systemml(z1, "z"), on=["id"], how="inner")
.withColumn("class", lit(1))
)
df_dft = df_class_0.union(df_class_1)
df_dft.show()
# -
# Please create a VectorAssembler which consumes the newly created DFT columns and produces a column “features”
#
from pyspark.ml.feature import VectorAssembler
vectorAssembler = VectorAssembler(
inputCols=["xa", "xb", "ya", "yb", "za", "zb"], outputCol="features"
)
# Please instantiate a classifier from the SparkML package and assign it to the classifier variable. Make sure to set the "class" column as target.
#
from pyspark.ml.classification import GBTClassifier
classifier = GBTClassifier(featuresCol="features", labelCol="class", maxIter=10)
# Let’s train and evaluate…
#
# +
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[vectorAssembler, classifier])
# -
model = pipeline.fit(df_dft)
prediction = model.transform(df_dft)
prediction.show()
# +
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
binEval = (
MulticlassClassificationEvaluator()
.setMetricName("accuracy")
.setPredictionCol("prediction")
.setLabelCol("class")
)
binEval.evaluate(prediction)
# -
# If you are happy with the result (I'm happy with > 0.8), please submit your solution to the grader by executing the following cells. Please don't forget to obtain an assignment submission token (secret) from Coursera's grader web page and paste it into the "secret" variable below, along with the email address you've used for Coursera.
#
# !rm -Rf a2_m4.json
prediction = prediction.repartition(1)
prediction.write.json("a2_m4.json")
# !rm -f rklib.py
# !wget https://raw.githubusercontent.com/IBM/coursera/master/rklib.py
# +
from rklib import zipit
zipit("a2_m4.json.zip", "a2_m4.json")
# -
# !base64 a2_m4.json.zip > a2_m4.json.zip.base64
# +
from rklib import submit
key = "<KEY>"
part = "IjtJk"
# email = ###YOUR_CODE_GOES_HERE###
# submission_token = ###YOUR_CODE_GOES_HERE### # (have a look here if you need more information on how to obtain the token https://youtu.be/GcDo0Rwe06U?t=276)
with open("a2_m4.json.zip.base64", "r") as myfile:
data = myfile.read()
submit(email, submission_token, key, part, [part], data)
| advanced-machine-learning-and-signal-processing/Week 4/Feature Engineering (Programming Assignment).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import re
f = open('sentences.txt', 'r')
for line in f:
    words = re.split('[^a-z]', line.lower())
    d = filter(None, words)  # drop the empty strings the split leaves behind
    print d
f.close()
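# The split-and-filter step above can be packaged as a small helper (shown here
# in Python 3 syntax; the sample sentence is illustrative):

```python
import re

def tokenize(line):
    # split on any non-letter character and drop the empty strings the split leaves behind
    return [w for w in re.split(r"[^a-z]", line.lower()) if w]

tokenize("In comparison to dogs, cats have not undergone major changes.")
# → ['in', 'comparison', 'to', 'dogs', 'cats', 'have', 'not', 'undergone', 'major', 'changes']
```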
| Untitled13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!python -m pip install seaborn
# +
# # %load_ext autoreload
# # %autoreload 2
# -
import seaborn as sns
df = sns.load_dataset('titanic')
df
df.shape
df.info()
# survived, pclass, sibsp, parch, fare
X = df[['pclass','sibsp', 'parch', 'fare']]
Y = df[['survived']]
X.shape, Y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
from sklearn.linear_model import LogisticRegression
logR = LogisticRegression()
logR, type(logR)
logR.fit(X_train,Y_train)
logR.classes_
logR.coef_
logR.score(X_train, Y_train)
logR.predict(X_train)
logR.predict_proba(X_train)
logR.predict_proba(X_train[10:13])
logR.predict(X_train[10:13])
from sklearn import metrics
| titanic_classfication-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:duet]
# language: python
# name: conda-env-duet-py
# ---
# +
from importlib import reload
import astropy.units as u
import astropy.constants as const
from astroduet.duet_sensitivity import calc_snr
from astroduet.utils import get_neff
from astroduet.bbmag import bb_abmag_fluence, bb_abmag
import numpy as np
from matplotlib import pyplot as plt
from astropy.table import Table
import astroduet.config as config
reload(config)
from astroduet.background import background_pixel_rate
from astropy.visualization import quantity_support
import matplotlib
font = {'family' : 'sans',
'weight' : 'bold',
'size' : 18}
matplotlib.rc('font', **font)
# -
plt.rcParams['figure.figsize'] = [15,8]
# +
# Telescope setup
duet = config.Telescope()
read_noise = 3.*(2**0.5) # Read noise for two frames
print('Effective PSF size {}'.format(duet.psf_size))
# Get the number of effective background pixels
neff = get_neff(duet.psf_size, duet.pixel)
print('Number of effective bgd pixels: {}'.format(neff))
print()
# -
[bgd_band1, bgd_band2] = background_pixel_rate(duet, low_zodi = True, diag=True)
# +
# Other settings and background computation
texp = 300*u.s
dist0 = 10*u.pc
dist = [50,100,200]*u.Mpc
colerr = 0.1 # Error on color in magnitudes
ab_vega = 1.73 # AB-Vega offset for Swift magnitudes
# Get the bands to project into
bandone = duet.bandpass1
bandtwo = duet.bandpass2
print(bandone)
print(bandtwo)
# -
# Load Tony's lightcurves
shock_2e10 = np.loadtxt('../astroduet/data/shock_2.5e10.dat')
shock_5e10 = np.loadtxt('../astroduet/data/shock_5e10.dat')
shock_1e11 = np.loadtxt('../astroduet/data/shock_1e11.dat')
blukn_01 = np.loadtxt('../astroduet/data/kilonova_0.01.dat')
blukn_02 = np.loadtxt('../astroduet/data/kilonova_0.02.dat')
blukn_04 = np.loadtxt('../astroduet/data/kilonova_0.04.dat')
# +
# Calculate absolute ABmags and photon rates in DUET bands for both models:
def make_lc(model, meta_name):
    """Build a lightcurve table (mags and photon fluxes at 10 pc) from a model array."""
    n = len(model)
    lc = Table([(model[:, 0] * u.d).to(u.s),
                np.zeros(n) * u.ABmag, np.zeros(n) * u.ABmag,
                np.zeros(n) * 1 / (u.s * u.cm**2), np.zeros(n) * 1 / (u.s * u.cm**2)],
               names=('time', 'mag_D1', 'mag_D2', 'photflux_D1', 'photflux_D2'),
               meta={'name': meta_name})
    for k in range(n):
        bolflux = 10**model[k, 1] * (u.erg / u.s) / (4 * np.pi * dist0**2)
        band1_mag, band2_mag = bb_abmag(bbtemp=model[k, 2] * u.K, bandone=bandone,
                                        bandtwo=bandtwo, bolflux=bolflux, val=True)
        band1_fluence, band2_fluence = bb_abmag_fluence(bbtemp=model[k, 2] * u.K,
                                                        bolflux=bolflux)
        lc[k]['mag_D1'], lc[k]['mag_D2'] = band1_mag, band2_mag
        lc[k]['photflux_D1'], lc[k]['photflux_D2'] = band1_fluence.value, band2_fluence.value
    return lc

shock_2e10_lc = make_lc(shock_2e10, 'Shock model - radius 2.5e10 cm - mags and photon flux at 10pc')
shock_5e10_lc = make_lc(shock_5e10, 'Shock model - radius 5e10 cm - mags and photon flux at 10pc')
shock_1e11_lc = make_lc(shock_1e11, 'Shock model - radius 1e11 cm - mags and photon flux at 10pc')
blukn_01_lc = make_lc(blukn_01, 'Blue kilonova model - mass 0.01 Msun - mags and photon flux at 10pc')
blukn_02_lc = make_lc(blukn_02, 'Blue kilonova model - mass 0.02 Msun - mags and photon flux at 10pc')
blukn_04_lc = make_lc(blukn_04, 'Blue kilonova model - mass 0.04 Msun - mags and photon flux at 10pc')
# +
plt.plot((shock_2e10_lc['time']).to(u.d),shock_2e10_lc['mag_D1'], color='lightcoral', linestyle='-', linewidth=2, label='Shock model (2.5e10)')
plt.plot((shock_2e10_lc['time']).to(u.d),shock_2e10_lc['mag_D2'], color='lightcoral', linestyle='--', linewidth=2, label='_Shock model (2.5e10)')
plt.plot((shock_5e10_lc['time']).to(u.d),shock_5e10_lc['mag_D1'], color='crimson', linestyle='-', linewidth=2, label='Shock model (5e10)')
plt.plot((shock_5e10_lc['time']).to(u.d),shock_5e10_lc['mag_D2'], color='crimson', linestyle='--', linewidth=2, label='_Shock model (5e10)')
plt.plot((shock_1e11_lc['time']).to(u.d),shock_1e11_lc['mag_D1'], color='maroon', linestyle='-', linewidth=2, label='Shock model (1e11)')
plt.plot((shock_1e11_lc['time']).to(u.d),shock_1e11_lc['mag_D2'], color='maroon', linestyle='--', linewidth=2, label='_Shock model (1e11)')
plt.plot((blukn_01_lc['time']).to(u.d),blukn_01_lc['mag_D1'], color='deepskyblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.01)')
plt.plot((blukn_01_lc['time']).to(u.d),blukn_01_lc['mag_D2'], color='deepskyblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.01)')
plt.plot((blukn_02_lc['time']).to(u.d),blukn_02_lc['mag_D1'], color='royalblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.02)')
plt.plot((blukn_02_lc['time']).to(u.d),blukn_02_lc['mag_D2'], color='royalblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.02)')
plt.plot((blukn_04_lc['time']).to(u.d),blukn_04_lc['mag_D1'], color='navy', linestyle='-', linewidth=2, label='Blue Kilonova (0.04)')
plt.plot((blukn_04_lc['time']).to(u.d),blukn_04_lc['mag_D2'], color='navy', linestyle='--', linewidth=2, label='_Blue Kilonova (0.04)')
plt.axhline(y=22,xmin=0,xmax=1,color='black',linestyle=':')
plt.ylim(-10,-18)
plt.xlim(-0.1,2)
plt.legend(fontsize=16)
plt.xlabel('Time in days after merger')
plt.ylabel(r'ABmag')
plt.title('Light curve for shock and blue kilonova models (absolute magnitude)')
plt.show()
# +
plt.errorbar(shock_2e10_lc['time']/((u.d).to(u.s)),shock_2e10_lc['mag_D1']-shock_2e10_lc['mag_D2'],yerr=colerr, linewidth=2, color='lightcoral', label='Shock model')
plt.errorbar(shock_5e10_lc['time']/((u.d).to(u.s)),shock_5e10_lc['mag_D1']-shock_5e10_lc['mag_D2'],yerr=colerr, linewidth=2, color='crimson', label='Shock model')
plt.errorbar(shock_1e11_lc['time']/((u.d).to(u.s)),shock_1e11_lc['mag_D1']-shock_1e11_lc['mag_D2'],yerr=colerr, linewidth=2, color='maroon', label='Shock model')
plt.errorbar(blukn_01_lc['time']/((u.d).to(u.s)),blukn_01_lc['mag_D1']-blukn_01_lc['mag_D2'],yerr=colerr ,linewidth=2, color='lightskyblue', label='Blue kilonova')
plt.errorbar(blukn_02_lc['time']/((u.d).to(u.s)),blukn_02_lc['mag_D1']-blukn_02_lc['mag_D2'],yerr=colerr ,linewidth=2, color='royalblue', label='Blue kilonova')
plt.errorbar(blukn_04_lc['time']/((u.d).to(u.s)),blukn_04_lc['mag_D1']-blukn_04_lc['mag_D2'],yerr=colerr ,linewidth=2, color='navy', label='Blue kilonova')
plt.ylim(-1,2)
plt.xlim(-0.01,1)
#plt.legend()
plt.xlabel('Time in days after merger')
plt.ylabel(r'D1 - D2')
plt.title('Color evolution for shock and blue kilonova models')
plt.show()
# +
# Calculate S/N for both models:
def compute_snr(model, lc):
    """S/N in both DUET bands as a function of time, for each distance in `dist`."""
    snr_arr = np.zeros([len(dist), len(model[:, 0]), 3])
    snr_arr[:, :, 0] = model[:, 0]
    for j, distval in enumerate(dist):
        # scale the 10-pc photon fluxes to the target distance (inverse square law)
        band1_fluence = lc['photflux_D1'].quantity * (dist0.to(u.pc) / distval.to(u.pc))**2
        band2_fluence = lc['photflux_D2'].quantity * (dist0.to(u.pc) / distval.to(u.pc))**2
        band1_rate = duet.trans_eff * duet.eff_area * band1_fluence
        band2_rate = duet.trans_eff * duet.eff_area * band2_fluence
        snr_arr[j, :, 1] = calc_snr(texp, band1_rate, bgd_band1, read_noise, neff)
        snr_arr[j, :, 2] = calc_snr(texp, band2_rate, bgd_band2, read_noise, neff)
    return snr_arr

shock_2e10_snr = compute_snr(shock_2e10, shock_2e10_lc)
shock_5e10_snr = compute_snr(shock_5e10, shock_5e10_lc)
shock_1e11_snr = compute_snr(shock_1e11, shock_1e11_lc)
blukn_01_snr = compute_snr(blukn_01, blukn_01_lc)
blukn_02_snr = compute_snr(blukn_02, blukn_02_lc)
blukn_04_snr = compute_snr(blukn_04, blukn_04_lc)
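# The quantity `calc_snr` returns is, roughly, the standard CCD signal-to-noise:
# source counts over the quadrature sum of source, background, and read noise,
# with background and read noise summed over the effective number of pixels. A
# hedged pure-Python sketch of that form (the exact astroduet implementation may
# differ in detail):

```python
import math

def snr_sketch(texp, src_rate, bgd_rate_per_pix, read_noise, neff):
    # S / sqrt(S + B + neff * R^2), with counts accumulated over the exposure
    src = src_rate * texp
    bgd = bgd_rate_per_pix * texp * neff
    return src / math.sqrt(src + bgd + neff * read_noise**2)

snr_sketch(300.0, 1.0, 0.01, 3.0 * 2**0.5, 5.0)  # brighter sources and longer exposures raise S/N
```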
# +
plt.plot(shock_2e10_snr[0,:,0],shock_2e10_snr[0,:,1], color='lightcoral', linestyle='-', linewidth=2, label='Shock model (2.5e10)')
plt.plot(shock_2e10_snr[0,:,0],shock_2e10_snr[0,:,2], color='lightcoral', linestyle='--', linewidth=2, label='_Shock model (2.5e10)')
plt.plot(shock_5e10_snr[0,:,0],shock_5e10_snr[0,:,1], color='crimson', linestyle='-', linewidth=2, label='Shock model (5e10)')
plt.plot(shock_5e10_snr[0,:,0],shock_5e10_snr[0,:,2], color='crimson', linestyle='--', linewidth=2, label='_Shock model (5e10)')
plt.plot(shock_1e11_snr[0,:,0],shock_1e11_snr[0,:,1], color='maroon', linestyle='-', linewidth=2, label='Shock model (1e11)')
plt.plot(shock_1e11_snr[0,:,0],shock_1e11_snr[0,:,2], color='maroon', linestyle='--', linewidth=2, label='_Shock model (1e11)')
plt.plot(blukn_01_snr[0,:,0],blukn_01_snr[0,:,1], color='deepskyblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.01)')
plt.plot(blukn_01_snr[0,:,0],blukn_01_snr[0,:,2], color='deepskyblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.01)')
plt.plot(blukn_02_snr[0,:,0],blukn_02_snr[0,:,1], color='royalblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.02)')
plt.plot(blukn_02_snr[0,:,0],blukn_02_snr[0,:,2], color='royalblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.02)')
plt.plot(blukn_04_snr[0,:,0],blukn_04_snr[0,:,1], color='navy', linestyle='-', linewidth=2, label='Blue Kilonova (0.04)')
plt.plot(blukn_04_snr[0,:,0],blukn_04_snr[0,:,2], color='navy', linestyle='--', linewidth=2, label='_Blue Kilonova (0.04)')
plt.axhline(y=10,xmin=0,xmax=1,color='black',linestyle=':')
#plt.ylim(26,18)
#plt.xlim(-0.1,2)
plt.legend(fontsize=16)
plt.xlabel('Time in days after merger')
plt.ylabel(r'S/N')
plt.title('Signal-to-noise in 300s for shock and blue kilonova models at 50 Mpc (low zodi)')
plt.show()
# +
plt.plot(shock_2e10_snr[1,:,0],shock_2e10_snr[1,:,1], color='lightcoral', linestyle='-', linewidth=2, label='Shock model (2.5e10)')
plt.plot(shock_2e10_snr[1,:,0],shock_2e10_snr[1,:,2], color='lightcoral', linestyle='--', linewidth=2, label='_Shock model (2.5e10)')
plt.plot(shock_5e10_snr[1,:,0],shock_5e10_snr[1,:,1], color='crimson', linestyle='-', linewidth=2, label='Shock model (5e10)')
plt.plot(shock_5e10_snr[1,:,0],shock_5e10_snr[1,:,2], color='crimson', linestyle='--', linewidth=2, label='_Shock model (5e10)')
plt.plot(shock_1e11_snr[1,:,0],shock_1e11_snr[1,:,1], color='maroon', linestyle='-', linewidth=2, label='Shock model (1e11)')
plt.plot(shock_1e11_snr[1,:,0],shock_1e11_snr[1,:,2], color='maroon', linestyle='--', linewidth=2, label='_Shock model (1e11)')
plt.plot(blukn_01_snr[1,:,0],blukn_01_snr[1,:,1], color='deepskyblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.01)')
plt.plot(blukn_01_snr[1,:,0],blukn_01_snr[1,:,2], color='deepskyblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.01)')
plt.plot(blukn_02_snr[1,:,0],blukn_02_snr[1,:,1], color='royalblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.02)')
plt.plot(blukn_02_snr[1,:,0],blukn_02_snr[1,:,2], color='royalblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.02)')
plt.plot(blukn_04_snr[1,:,0],blukn_04_snr[1,:,1], color='navy', linestyle='-', linewidth=2, label='Blue Kilonova (0.04)')
plt.plot(blukn_04_snr[1,:,0],blukn_04_snr[1,:,2], color='navy', linestyle='--', linewidth=2, label='_Blue Kilonova (0.04)')
plt.axhline(y=10,xmin=0,xmax=1,color='black',linestyle=':')
#plt.ylim(26,18)
plt.xlim(-0.1,2)
plt.legend(fontsize=16)
plt.xlabel('Time in days after merger')
plt.ylabel(r'S/N')
plt.title('Signal-to-noise in 300s for shock and blue kilonova models at 100 Mpc (low zodi)')
plt.show()
# +
plt.plot(shock_2e10_snr[2,:,0],shock_2e10_snr[2,:,1], color='lightcoral', linestyle='-', linewidth=2, label='Shock model (2.5e10)')
plt.plot(shock_2e10_snr[2,:,0],shock_2e10_snr[2,:,2], color='lightcoral', linestyle='--', linewidth=2, label='_Shock model (2.5e10)')
plt.plot(shock_5e10_snr[2,:,0],shock_5e10_snr[2,:,1], color='crimson', linestyle='-', linewidth=2, label='Shock model (5e10)')
plt.plot(shock_5e10_snr[2,:,0],shock_5e10_snr[2,:,2], color='crimson', linestyle='--', linewidth=2, label='_Shock model (5e10)')
plt.plot(shock_1e11_snr[2,:,0],shock_1e11_snr[2,:,1], color='maroon', linestyle='-', linewidth=2, label='Shock model (1e11)')
plt.plot(shock_1e11_snr[2,:,0],shock_1e11_snr[2,:,2], color='maroon', linestyle='--', linewidth=2, label='_Shock model (1e11)')
plt.plot(blukn_01_snr[2,:,0],blukn_01_snr[2,:,1], color='deepskyblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.01)')
plt.plot(blukn_01_snr[2,:,0],blukn_01_snr[2,:,2], color='deepskyblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.01)')
plt.plot(blukn_02_snr[2,:,0],blukn_02_snr[2,:,1], color='royalblue', linestyle='-', linewidth=2, label='Blue Kilonova (0.02)')
plt.plot(blukn_02_snr[2,:,0],blukn_02_snr[2,:,2], color='royalblue', linestyle='--', linewidth=2, label='_Blue Kilonova (0.02)')
plt.plot(blukn_04_snr[2,:,0],blukn_04_snr[2,:,1], color='navy', linestyle='-', linewidth=2, label='Blue Kilonova (0.04)')
plt.plot(blukn_04_snr[2,:,0],blukn_04_snr[2,:,2], color='navy', linestyle='--', linewidth=2, label='_Blue Kilonova (0.04)')
plt.axhline(y=10,xmin=0,xmax=1,color='black',linestyle=':')
#plt.ylim(26,18)
plt.xlim(-0.1,2)
plt.legend(fontsize=16)
plt.xlabel('Time in days after merger')
plt.ylabel(r'S/N')
plt.title('Signal-to-noise in 300s for shock and blue kilonova models at 200 Mpc (low zodi)')
plt.show()
# -
# Remove ABmag units, write to fits tables
lightcurves = {
    'shock_2.5e10': shock_2e10_lc, 'shock_5e10': shock_5e10_lc, 'shock_1e11': shock_1e11_lc,
    'blukn_01': blukn_01_lc, 'blukn_02': blukn_02_lc, 'blukn_04': blukn_04_lc,
}
for lc in lightcurves.values():
    lc['mag_D1'].unit = None
    lc['mag_D2'].unit = None
for name, lc in lightcurves.items():
    lc.write('../astroduet/data/{}_lightcurve_DUET.fits'.format(name), format='fits', overwrite=True)
# Restore the ABmag units
for lc in lightcurves.values():
    lc['mag_D1'].unit = u.ABmag
    lc['mag_D2'].unit = u.ABmag
| notebooks/GWEM_lightcurves_and_sensitivities.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # How to build a social media sentiment analysis pipeline with scikit-learn
#
# *This is Part 2 of 5 in a series on building a sentiment analysis pipeline using scikit-learn. You can find Part 3 [here](./sentiment-pipeline-sklearn-3.ipynb), and the introduction [here](./sentiment-pipeline-sklearn-1.ipynb).*
#
# *Jump to:*
#
# * *[**Part 1 - Introduction and requirements**](./sentiment-pipeline-sklearn-1.ipynb)*
# * *[**Part 3 - Adding a custom function to a pipeline**](./sentiment-pipeline-sklearn-3.ipynb)*
# * *[**Part 4 - Adding a custom feature to a pipeline with FeatureUnion**](./sentiment-pipeline-sklearn-4.ipynb)*
# * *[**Part 5 - Hyperparameter tuning in pipelines with GridSearchCV**](./sentiment-pipeline-sklearn-5.ipynb)*
#
# # Part 2 - Building a basic scikit-learn pipeline
#
# In this post, we're going to build a very simple pipeline, consisting of a count vectorizer for feature extraction and a logistic regression for classification.
#
# # Get The Data
#
# First, we're going to read in a CSV that has the ids of 1000 tweets and the posts' labeled sentiments, then we're going to fetch the contents of those tweets via Twython and return a dataframe. This is defined in a function in `fetch_twitter_data.py`. You'll need to populate the variables for Twitter API app key and secret, plus a user token and secret at the top of that file in order to fetch from Twitter. You can create an app [here](https://apps.twitter.com/app/new).
# %%time
from fetch_twitter_data import fetch_the_data
df = fetch_the_data()
# If you're following along at home, a nonzero amount of those 1000 tweets will be unable to be fetched. That happens when the posts are deleted by the users. All we can do is fondly remember them and carry on. ¯\\_(ツ)_/¯
#
# We're going to take a quick peek at the head of the data and move on to building the model, since the emphasis of this post is on building a robust classifier pipeline. You're a great data scientist, though, so you already know the importance of getting really friendly with the contents of the data, right?
df.head()
# # Building a basic pipeline
# A quick note before we begin: I'm taking liberties with assessing the performance of the model, and we're in danger of overfitting the model to the testing data. We're looking at the results of a particular set to show how the model changes with new features, preprocessing, and hyperparameters. This is for illustrative purposes only. Always remember to practice safe model evaluation, using proper cross-validation.
#
# Okay, time for some fun - we're going to make our first pipeline that we'll continue to build upon.
#
# ## Train-test split
#
# First, we'll split up the data between training and testing sets.
# + format="row"
from sklearn.model_selection import train_test_split
X, y = df.text, df.sentiment
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# -
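# As the note above warns, a single train/test split gives a noisy estimate of
# performance; k-fold cross-validation averages over several such splits. A
# pure-Python sketch of how the folds are carved out (`kfold_indices` is an
# illustrative helper, not scikit-learn's API):

```python
def kfold_indices(n, k):
    # split range(n) into k contiguous folds; each fold serves once as the held-out set
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

kfold_indices(5, 2)  # → [[0, 1, 2], [3, 4]]
```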
# ## Tokenizer
#
# We're going to override sklearn's default tokenizer with NLTK's TweetTokenizer. This has the benefit of tokenizing hashtags and emoticons correctly, and shortening repeated characters (e.g., "stop ittttttttttt" -> "stop ittt")
import nltk
tokenizer = nltk.casual.TweetTokenizer(preserve_case=False, reduce_len=True) # Your mileage may vary on these arguments
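# The `reduce_len` behavior can be sketched in plain Python with a regex that
# collapses runs of three or more repeated characters down to three (an
# approximation of what the tokenizer does internally):

```python
import re

def reduce_lengthening(text):
    # collapse any character repeated 3+ times down to exactly 3 repeats
    return re.sub(r"(.)\1{2,}", r"\1\1\1", text)

reduce_lengthening("stop ittttttttttt")  # → 'stop ittt'
```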
# ## Pipeline steps
# Now let's initialize our [Count Vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) and our [Logistic Regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) for classification. Many models tend to use Naive Bayes approaches for text classification problems. However, I'm not going to for this example for a few reasons:
#
# 1. Later on, we're going to be adding continuous features to the pipeline, which is difficult to do with scikit-learn's implementation of NB. For example, Gaussian NB (the flavor which produces best results most of the time from continuous variables) requires dense matrices, but the output of a `CountVectorizer` is sparse. It's more work, and from my tests LR will still outperform even on this small of a dataset.
# 1. We're thinking big, creating a pipeline that will still be useful on a much bigger dataset than we have in front of us right now. While Naive Bayes tends to converge quicker (albeit with a higher error rate than Logistic Regression) on smaller datasets, LR should outperform NB in the long run with more data to learn from.
# 1. Finally, and probably most importantly, I've simply observed LR working better than NB on this kind of text data in actual production use cases. Data sparsity is likely our enemy here. Twitter data is short-form, so each example will have few "rows" to look up from our vectorized vocabulary, and our model may not see the same unigram/phrase enough times to really get its head around the true class probabilities for every word. sklearn's Logistic Regression's built-in regularization will handle these kinds of cases better than NB (again, based on my experience in this specific domain of social media). The experiences I've seen with LR vs. NB on social media posts probably warrant their own post.
#
# For more information, here's a nice book excerpt on Naive Bayes vs. Logistic Regression [here](http://www.cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf), a quick article on how to pick the right classifier [here](http://blog.echen.me/2011/04/27/choosing-a-machine-learning-classifier/), and a great walkthrough of how logistic regression works [here](http://nbviewer.jupyter.org/github/justmarkham/DAT8/blob/master/notebooks/12_logistic_regression.ipynb).
# +
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
count_vect = CountVectorizer(tokenizer=tokenizer.tokenize)
classifier = LogisticRegression()
# -
# And now to put them together in a pipeline...
sentiment_pipeline = Pipeline([
('vectorizer', count_vect),
('classifier', classifier)
])
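# Under the hood, `Pipeline.fit` chains `fit_transform` through every step but
# the last, then fits the final estimator; `predict` replays the transforms and
# calls the estimator. A minimal sketch with toy stand-ins (`Doubler` and
# `Threshold` are illustrative, not scikit-learn classes):

```python
class MiniPipeline:
    def __init__(self, steps):
        self.steps = steps  # list of (name, step) pairs, estimator last

    def fit(self, X, y):
        for _, step in self.steps[:-1]:
            X = step.fit_transform(X, y)  # each transformer learns, then transforms
        self.steps[-1][1].fit(X, y)       # final estimator fits on transformed data
        return self

    def predict(self, X):
        for _, step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1][1].predict(X)

class Doubler:
    def fit_transform(self, X, y=None): return [x * 2 for x in X]
    def transform(self, X): return [x * 2 for x in X]

class Threshold:
    def fit(self, X, y): self.t = sum(X) / len(X); return self
    def predict(self, X): return [int(x > self.t) for x in X]

pipe = MiniPipeline([("double", Doubler()), ("clf", Threshold())])
pipe.fit([1, 2, 3, 4], [0, 0, 1, 1])
pipe.predict([1, 4])  # → [0, 1]
```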
# Alright, we've got a pipeline. Let's unleash this beast all over the training data!
#
# We'll import a helper function to make training and testing a more pleasant experience. This function trains, predicts on test data, checks accuracy score against null accuracy, and displays a confusion matrix and classification report.
# +
from sklearn_helpers import train_test_and_evaluate
sentiment_pipeline, confusion_matrix = train_test_and_evaluate(sentiment_pipeline, X_train, y_train, X_test, y_test)
# -
# # All done
#
# That wasn't so bad, was it? Pipelines are great, as you can easily keep adding steps and tweaking the ones you have. In the [next lesson](./sentiment-pipeline-sklearn-3.ipynb) we're going to define a custom preprocessing function and add it as a step in the model. [Hit it!](./sentiment-pipeline-sklearn-3.ipynb)
#
# *This is Part 2 of 5 in a series on building a sentiment analysis pipeline using scikit-learn. You can find Part 3 [here](./sentiment-pipeline-sklearn-3.ipynb), and the introduction [here](./sentiment-pipeline-sklearn-1.ipynb).*
#
# *Jump to:*
#
# * *[**Part 1 - Introduction and requirements**](./sentiment-pipeline-sklearn-1.ipynb)*
# * *[**Part 3 - Adding a custom function to a pipeline**](./sentiment-pipeline-sklearn-3.ipynb)*
# * *[**Part 4 - Adding a custom feature to a pipeline with FeatureUnion**](./sentiment-pipeline-sklearn-4.ipynb)*
# * *[**Part 5 - Hyperparameter tuning in pipelines with GridSearchCV**](./sentiment-pipeline-sklearn-5.ipynb)*
| notebooks/sentiment-pipeline-sklearn-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# ! ls
def load_data_and_authors(data_path="Embedding_and_retrieval/papers.pkl",
authors_path="Embedding_and_retrieval/authors.csv"):
data = pd.read_pickle(data_path)
data.drop(63376, inplace=True)
authors = pd.read_csv(authors_path)
authors.drop("Unnamed: 0", inplace=True, axis=1)
return data, authors
data, authors = load_data_and_authors()
data_for_csv = data[["id", "title", "authors", "venue", "year", "n_citation", "page_start", "page_end", "doc_type", "publisher", "volume", "issue", "fos", "doi", "references", "abstract", "cleaned_abstract_sentences", "cleaned_title"]]
data_for_csv.to_csv("papers.csv", index=False)
authors.to_csv("authors.csv", index=False)
| Notebooks/create_data_csv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# So far, we've only studied word embeddings, where each word is represented by a vector of numbers. For instance, the word cat might be represented as
#
# ```python
# cat = [0.23, 0.10, -0.23, -0.01, 0.91, 1.2, 1.01, -0.92]
# ```
#
# But how would you represent a **sentence**? There are many different ways to represent sentences, but the simplest, and often very effective way is to **take the average of all the word embeddings of that sentence**.
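# Averaging is easy to sketch without any NLP library at all: given the word
# vectors, the sentence vector is just their element-wise mean (the tiny
# 2-dimensional vectors below are illustrative; real embeddings are much longer):

```python
def sentence_vector(word_vectors):
    # element-wise mean across all word embeddings in the sentence
    dim = len(word_vectors[0])
    n = len(word_vectors)
    return [sum(vec[i] for vec in word_vectors) / n for i in range(dim)]

sentence_vector([[1.0, 2.0], [3.0, 4.0]])  # → [2.0, 3.0]
```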
# ### Important Note:
#
# Before you start this next portion, download `en_core_web_md` for spacy - the `en_core_web_sm` model we used in class is good as a starter introduction to word embeddings, but won't give you as great results in the long run:
#
# >*To make them compact and fast, spaCy’s small models (all packages that end in `sm`) don’t ship with **true word vectors**, and only include context-sensitive tensors. This means you can still use the `similarity()` methods to compare documents, spans and tokens – but the result won’t be as good, and individual tokens won’t have any vectors assigned. So in order to use real word vectors, you need to download a larger model*.
#
# You can download the larger model from the command line with `python -m spacy download en_core_web_md`. From a Jupyter notebook cell, you can also run the command `!{sys.executable} -m spacy download en_core_web_md`.
# load in spacy
import numpy as np  # needed later for building the sentence embedding by hand
import spacy
import en_core_web_md
from scipy.spatial.distance import cosine
nlp = en_core_web_md.load()
# +
sentenceA = "I watched a movie with my friend."
sentenceB = "I viewed a film with my pal."  # assumed definition: a close paraphrase of sentence A, compared against it below
sentenceA_tokens = nlp(sentenceA)
print("\nSentence A:")
for token in nlp(sentenceA):  # only showing the first 6 values of each word embedding, but
    # remember that the embedding itself is usually 100-300 elements long (300 in en_core_web_md's case)
    print(f"{token}'s word embedding: {token.vector[:6]}")
print("\nSentence B:")
for token in nlp(sentenceB):
    print(f"{token}'s word embedding: {token.vector[:6]}")
# -
# Note that if you had used `en_core_web_sm`, spaCy generates context-sensitive vectors on the fly, so the same word (like `I`) can end up with slightly different embedding values in different sentences. With `en_core_web_md`, spaCy downloads and uses pre-trained embeddings that are fixed and more accurate.
# To find the sentence vector for sentence A, average the word vectors of sentence A:
# +
# how to find the sentence embedding of sentence A
# create a 300 length word embedding (spacy's en_core_web_md model uses 300-dimensional word embeddings)
running_total = np.zeros(300)
for token in nlp(sentenceA):
running_total += token.vector # add the word embeddings to the running total
# divide by the total number of words in sentence to get the "average embedding"
sentence_embedding = running_total / len(nlp(sentenceA))
# +
# these are the first 10 values of the 300-dimensional word embeddings in en_core_web_md for sentence A
sentence_embedding[:10]
# -
# There's actually an even easier way to do this in spacy:
tokens = nlp(sentenceA)
tokens.vector[:10] # the same as the above, when we got the sentence embeddings ourselves!
sentenceA_embedding = nlp(sentenceA).vector
sentenceB_embedding = nlp(sentenceB).vector
similarity = 1 - cosine(sentenceA_embedding, sentenceB_embedding)
print(f"The similarity between sentence A and sentence B is {similarity}")
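# Under the hood, `1 - cosine(u, v)` from scipy is just the normalized dot product. A self-contained sketch (the vectors below are made up):

```python
import numpy as np

def cosine_similarity(u, v):
    # dot product of the vectors divided by the product of their lengths
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(u, u))   # ~1.0 for identical vectors
print(cosine_similarity(u, -u))  # ~-1.0 for opposite vectors
```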
# +
sentenceC = "I drank a watermelon with my dog." # structurally, this is extremely similar to sentence A and B.
# however, semantically, it is extremely different! Let's prove that word embeddings can be used to tell that
# sentenceC is not as similar to A and B.
sentenceC_embedding = nlp(sentenceC).vector
similarity = 1 - cosine(sentenceC_embedding, sentenceA_embedding)
print(f"The similarity between sentence C and sentence A is {similarity}")
similarity = 1 - cosine(sentenceC_embedding, sentenceB_embedding)
print(f"The similarity between sentence C and sentence B is {similarity}")
# -
# What happens if we substitute in `pal` for `dog`? Our word-count models would not have picked up on any real difference, since `pal` is just another word to be counted. Semantically, however, `pal` is an informal name for a friend, and substituting in this new word will increase our similarity.
# +
sentenceC = "I drank a watermelon with my pal."
sentenceC_embedding = nlp(sentenceC).vector
similarity = 1 - cosine(sentenceC_embedding, sentenceA_embedding)
print(f"The similarity between sentence C and sentence A is {similarity}")
similarity = 1 - cosine(sentenceC_embedding, sentenceB_embedding)
print(f"The similarity between sentence C and sentence B is {similarity}")
# +
sentenceC = "I saw a watermelon with my pal."
sentenceC_embedding = nlp(sentenceC).vector
similarity = 1 - cosine(sentenceC_embedding, sentenceA_embedding)
print(f"The similarity between sentence C and sentence A is {similarity}")
similarity = 1 - cosine(sentenceC_embedding, sentenceB_embedding)
print(f"The similarity between sentence C and sentence B is {similarity}")
# Notice the even higher similarity after I substitute in "saw", a synonym for watched.
# -
| week5/Finding Sentence Vectors.ipynb |
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import RMSprop
from keras.datasets import mnist
from keras.utils import to_categorical
batch_size = 128
nb_classes = 10
nb_epoch = 100
# Load MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = to_categorical(y_train, nb_classes)
Y_test = to_categorical(y_test, nb_classes)
# +
# Deep Multilayer Perceptron model
model = Sequential()
model.add(Dense(625, input_dim=784, kernel_initializer='normal'))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(625, kernel_initializer='normal'))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10, kernel_initializer='normal'))
model.add(Activation('softmax'))
model.compile(optimizer=RMSprop(learning_rate=0.001, rho=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# -
history = model.fit(X_train, Y_train, epochs=nb_epoch, batch_size=batch_size, verbose=1)
# Evaluate
evaluation = model.evaluate(X_test, Y_test, verbose=1)
print('Summary: Loss over the test dataset: %.2f, Accuracy: %.2f' % (evaluation[0], evaluation[1]))
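# The one-hot encoding applied to the labels above can be sketched in plain NumPy; `to_one_hot` is my own helper name, not a Keras function:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # row i is all zeros except a 1 in column labels[i]
    one_hot = np.zeros((len(labels), num_classes), dtype='float32')
    one_hot[np.arange(len(labels)), labels] = 1.0
    return one_hot

print(to_one_hot([0, 2, 1], 3))
```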
| Workshop - 1/.ipynb_checkpoints/modern_dnn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] nbgrader={}
# # Numpy Exercise 4
# + [markdown] nbgrader={}
# ## Imports
# + nbgrader={}
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] nbgrader={}
# ## Complete graph Laplacian
# + [markdown] nbgrader={}
# In discrete mathematics a [Graph](http://en.wikipedia.org/wiki/Graph_%28mathematics%29) is a set of *vertices* or *nodes* that are connected to each other by *edges* or *lines*. If those *edges* don't have directionality, the graph is said to be *undirected*. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
#
# A [Complete Graph](http://en.wikipedia.org/wiki/Complete_graph), $K_n$ on $n$ nodes has an edge that connects each node to every other node.
#
# Here is $K_5$:
# + nbgrader={}
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
# + [markdown] nbgrader={}
# The [Laplacian Matrix](http://en.wikipedia.org/wiki/Laplacian_matrix) is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
#
# The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
# + nbgrader={"checksum": "00d28c9ea423c0f2985eda865ec5ccee", "solution": true}
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    # diagonal matrix with n-1 along the diagonal, zeros elsewhere
    return (n - 1) * np.eye(n, dtype=int)
# -
complete_deg(5)
# + deletable=false nbgrader={"checksum": "7f2a5f03b1a59c05f397ce1e4d9ae4a1", "grade": true, "grade_id": "numpyex04a", "points": 4}
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
# + [markdown] nbgrader={}
# The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
# + nbgrader={"checksum": "5285cd3c10582e2d30d4a93530092306", "solution": true}
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
    # ones everywhere except zeros along the diagonal
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
# -
complete_adj(5)
# + deletable=false nbgrader={"checksum": "658e2e7db6ac6b06f7349682477e75ce", "grade": true, "grade_id": "numpyex04b", "points": 4}
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
# + [markdown] nbgrader={}
# Use NumPy to explore the eigenvalues or *spectrum* of the Laplacian *L* of $K_n$. What patterns do you notice as $n$ changes? Create a *conjecture* about the general Laplace *spectrum* of $K_n$.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
m = 8
L = complete_deg(m) - complete_adj(m)
np.linalg.eigvals(L)
# + [markdown] deletable=false nbgrader={"checksum": "662bdfcc6fa217197b1ba6a46fc50211", "grade": true, "grade_id": "numpyex04c", "points": 2, "solution": true}
# The number of eigenvalues equals the number of nodes $n$ (the size of the Laplacian). One eigenvalue is always 0, and the remaining $n-1$ eigenvalues are all equal to $n$.
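# The conjecture can be checked numerically for several values of $n$ (a quick sanity check, not a proof):

```python
import numpy as np

# Laplacian of K_n is L = n*I - J, where J is the all-ones matrix; we
# expect one eigenvalue 0 and eigenvalue n with multiplicity n-1.
for n in range(2, 8):
    L = n * np.eye(n) - np.ones((n, n))
    eigs = np.sort(np.linalg.eigvalsh(L))  # L is symmetric, so eigvalsh applies
    assert abs(eigs[0]) < 1e-9          # smallest eigenvalue is 0
    assert np.allclose(eigs[1:], n)     # the rest all equal n
print("conjecture verified for n = 2..7")
```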
| assignments/assignment03/NumpyEx04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import model_afolu as ma
import model_circular_economy as mc
import model_ippu as mi
import numpy as np
import os, os.path
import data_structures as ds
import setup_analysis as sa
import support_functions as sf
import importlib
import warnings
importlib.reload(ds)
importlib.reload(sa)
#import data_structures as ds
#importlib.reload(ds)
# -
# +
model_ce = mc.CircularEconomy(sa.model_attributes)
model_afolu = ma.AFOLU(sa.model_attributes)
model_ippu = mi.IPPU(sa.model_attributes)
# -
# build AFOLU and SOCIOECONOMIC templates
if True:
    # get the last time period, used to build the min/max field names
    tp = max(sa.model_attributes.get_time_periods()[0])
f0, f1 = (f"min_{tp}", f"max_{tp}")
df_template_afolu = pd.read_csv(os.path.join(sa.dir_ref, "fake_data", "tmp_afolu_vars_with_mcmc_new.csv"))
df_template_afolu.drop([f0, f1], axis = 1, inplace = True)
df_sampling_range_defaults = sa.model_attributes.build_default_sampling_range_df()
df_template_afolu = pd.merge(df_template_afolu, df_sampling_range_defaults, how = "left", on = ["variable"])
df_template_afolu = df_template_afolu.dropna(subset = ["max_35", "min_35"], how = "any")
df_template_afolu.drop(["src"], axis = 1, inplace = True)
df_template_afolu["time_series_id"] = np.zeros(len(df_template_afolu)).astype(int)
df_template_afolu["strategy_id"] = np.zeros(len(df_template_afolu)).astype(int)
    df_template_afolu.loc[~df_template_afolu["uniform_scaling_q"].isna(), "uniform_scaling_q"] = df_template_afolu["uniform_scaling_q"].iloc[2]
# order
tp_max = max(sa.model_attributes.get_time_periods()[0])
fields_minmax = [f"min_{tp_max}", f"max_{tp_max}"]
fields_prep = [x for x in df_template_afolu.columns if not x.isnumeric() and (x not in fields_minmax)] + fields_minmax
fields_times = [x for x in df_template_afolu.columns if (x not in fields_prep)]
df_template_afolu = df_template_afolu[fields_prep + fields_times]
subsectors = [sa.model_attributes.dict_model_variable_to_subsector[sa.model_attributes.dict_variables_to_model_variables[x]] for x in df_template_afolu["variable"]]
df_template_afolu["subsector"] = subsectors
for sec in ["AFOLU", "Socioeconomic"]:
vars_keep = sa.model_attributes.get_variables_by_sector(sec, "input")
df_exp = df_template_afolu[df_template_afolu["variable"].isin(vars_keep)].copy().reset_index(drop = True).drop_duplicates().sort_values(by = ["subsector", "variable"])
dict_out = {
"strategy_id-0": df_exp,
"strategy_id-1": df_exp.iloc[0:0]
}
#sf.dict_to_excel(sa.excel_template_path(sec, "arg", "demo", True), dict_out)
else:
print("Not running AFOLU and SOCIOECONOMIC")
# build CIRCULAR ECONOMY template
if False:
##############################
# BUILD WASTE TEMPLATE #
##############################
sec = "Circular Economy"
l_vars = sa.model_attributes.get_variables_by_sector(sec, "input")
l_vars.sort()
df_base = df_exp.iloc[0:0].copy()
df_base["variable"] = l_vars
# add subsectors
subsectors = [sa.model_attributes.dict_model_variable_to_subsector[sa.model_attributes.dict_variables_to_model_variables[x]] for x in df_base["variable"]]
df_base["subsector"] = subsectors
df_base.drop(["min_35", "max_35"], axis = 1, inplace = True)
df_base = pd.merge(df_base, df_sampling_range_defaults)
df_safe = pd.read_excel(sa.excel_template_path(sec, "arg", "demo", True).replace("_demo.xlsx", "_demo_safe.xlsx"))
dft_out = pd.merge(df_base[[x for x in df_base.columns if (x != "0")]], df_safe[["variable", "0"]].dropna(subset = ["0"]).reset_index(drop = True), how = "left")
dft_out = dft_out[df_template_afolu.columns]
if True:
dict_out = {
"strategy_id-0": dft_out,
"strategy_id-1": dft_out.iloc[0:0]
}
sf.dict_to_excel(sa.excel_template_path(sec, "arg", "demo", True), dict_out)
else:
    print("Not running CIRCULAR ECONOMY")
# +
df_sampling_range_defaults = sa.model_attributes.build_default_sampling_range_df()
df_sampling_range_defaults_ippu = df_sampling_range_defaults[
df_sampling_range_defaults["variable"].isin(sa.model_attributes.build_varlist("IPPU"))
]
df_sampling_range_defaults_ippu.to_csv("/Users/jsyme/Desktop/ippu_tmp.csv", index = None, encoding = "UTF-8")
df_sampling_range_defaults_ippu
# -
# build IPPU template
if True:
df_sampling_range_defaults = sa.model_attributes.build_default_sampling_range_df()
df_sampling_range_defaults_ippu = df_sampling_range_defaults[
df_sampling_range_defaults["variable"].isin(sa.model_attributes.build_varlist("IPPU"))
]
##############################
# BUILD WASTE TEMPLATE #
##############################
sec = "IPPU"
l_vars = sa.model_attributes.get_variables_by_sector(sec, "input")
l_vars.sort()
df_base = df_exp.iloc[0:0].copy()
df_base["variable"] = l_vars
# add subsectors
subsectors = [sa.model_attributes.dict_model_variable_to_subsector[sa.model_attributes.dict_variables_to_model_variables[x]] for x in df_base["variable"]]
df_base["subsector"] = subsectors
df_base.drop(["min_35", "max_35"], axis = 1, inplace = True)
df_base = pd.merge(df_base, df_sampling_range_defaults)
df_safe = pd.read_excel(sa.excel_template_path(sec, "arg", "demo", True).replace("_demo.xlsx", "_demo_safe.xlsx"))
dft_out = pd.merge(df_base[[x for x in df_base.columns if (x != "0")]], df_safe[["variable", "0"]].dropna(subset = ["0"]).reset_index(drop = True), how = "left")
dft_out = dft_out[df_template_afolu.columns]
if True:
dict_out = {
"strategy_id-0": dft_out,
"strategy_id-1": dft_out.iloc[0:0]
}
sf.dict_to_excel(sa.excel_template_path(sec, "arg", "demo", True), dict_out)
else:
    print("Not running IPPU")
# +
subsectors = ["Industrial Energy", "Energy Fuels"]#sa.model_attributes.get_sector_subsectors("Energy")
vars_subsectors = set().union(*[set(sa.model_attributes.build_varlist(x)) for x in subsectors])
#subsectors = sa.model_attributes.get_sector_subsectors("Energy")
#vars_subsectors = set().union([set(sa.model_attributes.build_varlist("Energy"))])
df_sampling_range_defaults = sa.model_attributes.build_default_sampling_range_df()
df_sampling_range_defaults_energy = df_sampling_range_defaults[
df_sampling_range_defaults["variable"].isin(vars_subsectors)
].reset_index(drop = True)
df_sampling_range_defaults_energy.to_csv("/Users/jsyme/Desktop/ener_tmp.csv", index = None, encoding = "UTF-8")
# +
subsectors = ["Industrial Energy", "Energy Fuels"]#sa.model_attributes.get_sector_subsectors("Energy")
vars_subsectors = set().union(*[set(sa.model_attributes.build_varlist(x)) for x in subsectors])
#subsectors = sa.model_attributes.get_sector_subsectors("Energy")
#vars_subsectors = set().union([set(sa.model_attributes.build_varlist("Energy"))])
df_sampling_range_defaults = sa.model_attributes.build_default_sampling_range_df()
df_sampling_range_defaults_energy = df_sampling_range_defaults[
df_sampling_range_defaults["variable"].isin(vars_subsectors)
].reset_index(drop = True)
#df_sampling_range_defaults_energy.to_csv("/Users/jsyme/Desktop/ener_tmp.csv", index = None, encoding = "UTF-8")
##############################
# BUILD ENERGY TEMPLATE #
##############################
sec = "Energy"
l_vars = sa.model_attributes.get_variables_by_sector(sec, "input")
l_vars = list(vars_subsectors)
l_vars.sort()
df_base = df_exp.iloc[0:0].copy()
df_base["variable"] = l_vars
# add subsectors
subsectors = [sa.model_attributes.dict_model_variable_to_subsector[sa.model_attributes.dict_variables_to_model_variables[x]] for x in df_base["variable"]]
df_base["subsector"] = subsectors
df_base.drop(["min_35", "max_35"], axis = 1, inplace = True)
df_base = pd.merge(df_base, df_sampling_range_defaults)
#df_safe = pd.read_excel(sa.excel_template_path(sec, "arg", "demo", True).replace("_demo.xlsx", "_demo_safe.xlsx"))
#dft_out = pd.merge(df_base[[x for x in df_base.columns if (x != "0")]], df_safe[["variable", "0"]].dropna(subset = ["0"]).reset_index(drop = True), how = "left")
dft_out = df_base
dft_out = dft_out[df_template_afolu.columns].sort_values(by = ["subsector", "variable"]).reset_index(drop = True)
if True:
dict_out = {
"strategy_id-0": dft_out,
"strategy_id-1": dft_out.iloc[0:0]
}
sf.dict_to_excel(sa.excel_template_path(sec, "arg", "demo", True), dict_out)
else:
    print("Not running ENERGY")
# +
dict_1 = {"a": 34, "b": 4}
dict_2 = {"c": 34, "d": 41}
sf.check_keys(dict_1, ["a", "b"]) if (1 > 0) else sf.check_keys(dict_2, ["a", "b"]);
# +
#sa.model_attributes.dict_attributes["abbreviation_subsector"].table
# +
# expand later to integrate strategy dimension
def build_modvar_input_db_from_templates(
model_attributes: ds.ModelAttributes,
sectors: list,
region: str,
template_type: str,
repl_missing_with_base: bool = True
) -> pd.DataFrame:
    ## run some checks
# check region
region = region.lower()
if region not in model_attributes.dict_attributes["region"].key_values:
valid_regions = sf.format_print_list(model_attributes.dict_attributes["region"].key_values)
raise ValueError(f"Invalid region '{region}' specified. Valid regions are {valid_regions}")
# check sectors
sectors_drop = [x for x in sectors if (x not in model_attributes.all_sectors)]
if len(sectors_drop) > 0:
secs_drop = sf.format_print_list(sectors_drop)
raise ValueError(f"Invalid sectors {secs_drop} found. Valid sectors are {model_attributes.all_sectors}.")
##
## TEMP: 0 only
##
strat_base = 0
sheet_base = f"{model_attributes.dim_strategy_id}-{strat_base}"
strats_eval = [strat_base]
df_out = []
for sec in sectors:
fp_templ = sa.excel_template_path(sec, region, template_type, True)
if not os.path.exists(fp_templ):
raise ValueError(f"Error: path '{fp_templ}' to template not found.")
# check available sheets and ensure baseline is available
sheets_avail = pd.ExcelFile(fp_templ).sheet_names
if sheet_base not in sheets_avail:
raise ValueError(f"Baseline strategy sheet {sheet_base} not found in '{fp_templ}'. The template must have a sheet for the baseline strategy.")
for strat in strats_eval:
sheet = f"{model_attributes.dim_strategy_id}-{strat}"
if not sheet in sheets_avail:
msg = f"Sheet {sheet} not found in '{fp_templ}'. Check the template."
if repl_missing_with_base:
warnings.warn(f"{msg}. The baseline strategy will be used.")
sheet = sheet_base
else:
raise ValueError(msg)
#
df_tmp = pd.read_excel(fp_templ, sheet_name = sheet)
df_tmp[model_attributes.dim_strategy_id] = strat
#
# ADD CHECKS FOR TIME PERIODS
#
#
# ADD DIFFERENT STEPS FOR NON-BASELINE STRATEGY
#
if len(df_out) == 0:
df_out.append(df_tmp)
else:
df_out.append(df_tmp[df_out[0].columns])
df_out = pd.concat(df_out, axis = 0).sort_values(by = ["subsector", "variable"]).reset_index(drop = True)
return df_out
# function to convert a model variable input database into a simple projection input dataframe
def build_basic_df_from_modvar_inputs(
model_attributes: ds.ModelAttributes,
df_mv: pd.DataFrame
) -> pd.DataFrame:
"""
build_basic_df_from_modvar_inputs will take a model input database and transform it (under baseline assumptions) into a projection dataframe.
- model_attributes: a ModelAttributes class
- df_mv: a dataframe of model variables
"""
df_mv_out = df_mv[[str(x) for x in model_attributes.get_time_periods()[0]]].transpose().reset_index(drop = True)
var_fields = list(df_mv["variable"])
df_mv_out.rename(columns = dict(zip([(x) for x in range(len(df_mv_out.columns))], var_fields)), inplace = True)
df_mv_out[model_attributes.dim_time_period] = list(range(len(df_mv_out)))
var_fields.sort()
return df_mv_out[[model_attributes.dim_time_period] + var_fields]
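# The transpose-and-rename step in `build_basic_df_from_modvar_inputs` can be illustrated with a toy frame (the variable names and values below are invented):

```python
import pandas as pd

df_mv = pd.DataFrame({
    "variable": ["pop", "gdp"],
    "0": [10.0, 100.0],   # time period 0
    "1": [11.0, 110.0],   # time period 1
})

# time-period columns become rows, and each variable becomes a column
df_out = df_mv[["0", "1"]].transpose().reset_index(drop=True)
df_out.columns = list(df_mv["variable"])
df_out["time_period"] = range(len(df_out))
print(df_out)
```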
# everything
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic", "Circular Economy", "AFOLU", "IPPU", "Energy"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_complete.csv"), index = None, encoding = "UTF-8")
# afolu vars
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic", "AFOLU"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_afolu.csv"), index = None, encoding = "UTF-8")
# ce vars
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic", "Circular Economy"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_circular_economy.csv"), index = None, encoding = "UTF-8")
# ippu
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic", "IPPU"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_ippu.csv"), index = None, encoding = "UTF-8")
# energy
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic", "IPPU", "Energy"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_energy.csv"), index = None, encoding = "UTF-8")
# socioeconomic vars
df_mv_tmp = build_modvar_input_db_from_templates(sa.model_attributes, ["Socioeconomic"], "arg", "demo")
df_mv_out = build_basic_df_from_modvar_inputs(sa.model_attributes, df_mv_tmp)
df_mv_out.to_csv(os.path.join(sa.dir_ref, "fake_data", "fake_data_socioeconomic.csv"), index = None, encoding = "UTF-8")
# -
# +
# random work
df_safe = pd.read_excel(sa.excel_template_path(sec, "arg", "demo", True).replace("_demo.xlsx", "_demo_safe.xlsx"))
#dft = dict_out["strategy_id-0"]
#dft_out = pd.merge(dft[[x for x in dft.columns if (x != "0")]], df_safe[["variable", "0"]].dropna(subset = ["0"]).reset_index(drop = True), how = "left")
# +
dft = sa.model_attributes.dict_attributes["region"].table.copy()
dft["region"] = [x.upper() for x in dft["region"]]
df_wb_waste = pd.read_csv("/Users/jsyme/Documents/Projects/FY21/SWCHE131_1000/Data/waste_management_data/wb_whatawaste_country_level_data_0.csv")
df_wb_waste = pd.merge(df_wb_waste, dft[["region"]], left_on = ["iso3c"], right_on = ["region"], how = "inner")
[x for x in df_wb_waste.columns if "recy" in x]
np.mean(df_wb_waste["waste_treatment_recycling_percent"].dropna())
# +
arr = np.array(df_wb_waste[["total_msw_total_msw_generated_tons_year", "population_population_number_of_people"]])#.columns
np.dot(arr[:, 0], arr[:, 1])/(sum(arr[:, 1])*sum(arr[:, 0]))
sum((arr[:, 1]*(arr[:, 0]/arr[:, 1]))/sum(arr[:, 1]))
ind_waste = np.array(df_wb_waste[[x for x in df_wb_waste.columns if "special_waste_industrial_waste_tons_year" in x]].fillna(0).sum(axis = 1))
gdp = (np.array(df_wb_waste["gdp"])/1000)
np.mean(ind_waste/gdp)
# -
### importlib.reload(sf)
importlib.reload(ds)
importlib.reload(sa)
sa.excel_template_path("AFOLU", "arg", "demo", True)
| python/.ipynb_checkpoints/build_import_template_excel-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting with Python
#
# [back to main page](../index.ipynb)
#
# The de-facto standard library for plotting in Python is "Matplotlib".
#
# * [Getting Started with Matplotlib](matplotlib.ipynb)
#
# * [Default Values for Matplotlib's "inline" backend](matplotlib-inline-defaults.ipynb)
#
# * [Colormaps in Matplotlib](matplotlib-colormaps.ipynb)
#
# * [Properly Sized Color Bars in Matplotlib](matplotlib-colorbar.ipynb)
#
# * [Creating Animations with Matplotlib](matplotlib-animation.ipynb)
#
# * [Using the "notebook" backend](matplotlib-notebook-backend.ipynb)
#
# * [Zoomable Plots on Static HTML Notebooks](mpld3.ipynb)
#
# The following topics are work in progress:
#
# * [Creating Plots for LaTeX Documents](latex.ipynb)
#
# * [Waterfall Plots](waterfall.ipynb)
# ## External Links
#
# https://python-graph-gallery.com/
#
# ### matplotlib
#
# http://matplotlib.org/
#
# http://matplotlib.org/gallery.html
#
# http://matplotlib.org/faq/usage_faq.html
#
# http://matplotlib.org/users/pyplot_tutorial.html
#
# http://matplotlib.org/users/artists.html
#
# https://github.com/rasbt/matplotlib-gallery
#
# http://nbviewer.ipython.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb
#
# http://worksofscience.net/matplotlib/fig
#
# http://worksofscience.net/matplotlib/gridspec
#
# http://nbviewer.ipython.org/github/rasbt/matplotlib-gallery/blob/master/ipynb/publication.ipynb
#
# http://pbpython.com/effective-matplotlib.html
#
# http://www.labri.fr/perso/nrougier/teaching/matplotlib/matplotlib.html
#
# https://realpython.com/blog/python/python-matplotlib-guide/
# ### Interactive Plots
#
# `ipywidgets.interact`: https://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html
#
# https://www.nbinteract.com/
# ### seaborn
#
# http://stanford.edu/~mwaskom/software/seaborn/
# ### Animations
#
# https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
#
# https://github.com/jakevdp/JSAnimation $\to$ [example](http://nbviewer.ipython.org/github/jakevdp/JSAnimation/blob/master/animation_example.ipynb)
#
# sound field examples: http://blog.kaistale.com/?p=728
# ### Spectrogram
#
# https://github.com/LCAV/TimeDomainAcousticRakeReceiver/blob/master/pyroomacoustics/stft.py
# ### Volumetric Plots
#
# https://ipyvolume.readthedocs.io/
#
# https://github.com/maartenbreddels/ipyvolume
#
# https://pypi.python.org/pypi/ipyvolume
# ### Brunel
#
# https://github.com/Brunel-Visualization/Brunel
#
# http://brunelvis.org/
# ### Extract Data from Scanned Plots
#
# http://markummitchell.github.io/engauge-digitizer/
# <p xmlns:dct="http://purl.org/dc/terms/">
# <a rel="license"
# href="http://creativecommons.org/publicdomain/zero/1.0/">
# <img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
# </a>
# <br />
# To the extent possible under law,
# <span rel="dct:publisher" resource="[_:publisher]">the person who associated CC0</span>
# with this work has waived all copyright and related or neighboring
# rights to this work.
# </p>
| plotting/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Motif price generator - for items in inventory
# This code parses TTC item and price tables and computes prices per motif
#
# SLPP.py is copied below for ease of use (source is on GitHub). SLPP parses the Lua tables and gives us Python data structures.
import pandas as pd
from collections import Counter
pd.set_option('display.max_rows', None)
# ### SLPP code pasted below
# +
import re
import sys
from numbers import Number
import six
ERRORS = {
'unexp_end_string': u'Unexpected end of string while parsing Lua string.',
'unexp_end_table': u'Unexpected end of table while parsing Lua string.',
'mfnumber_minus': u'Malformed number (no digits after initial minus).',
'mfnumber_dec_point': u'Malformed number (no digits after decimal point).',
'mfnumber_sci': u'Malformed number (bad scientific format).',
}
def sequential(lst):
length = len(lst)
if length == 0 or lst[0] != 0:
return False
for i in range(length):
if i + 1 < length:
if lst[i] + 1 != lst[i+1]:
return False
return True
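# `sequential` is how SLPP decides whether a decoded Lua table should become a plain Python list (keys 0, 1, 2, ...). An equivalent, more compact sketch:

```python
def is_sequential(lst):
    # True iff lst is exactly [0, 1, ..., len(lst) - 1]
    return len(lst) > 0 and lst == list(range(len(lst)))

print(is_sequential([0, 1, 2]))  # True  -> decode the table as a list
print(is_sequential([0, 2, 3]))  # False -> keep the table as a dict
```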
class ParseError(Exception):
pass
class SLPP(object):
def __init__(self):
self.text = ''
self.ch = ''
self.at = 0
self.len = 0
self.depth = 0
        self.space = re.compile(r'\s', re.M)
        self.alnum = re.compile(r'\w', re.M)
self.newline = '\n'
self.tab = '\t'
def decode(self, text):
if not text or not isinstance(text, six.string_types):
return
# FIXME: only short comments removed
reg = re.compile('--.*$', re.M)
text = reg.sub('', text, 0)
self.text = text
self.at, self.ch, self.depth = 0, '', 0
self.len = len(text)
self.next_chr()
result = self.value()
return result
def encode(self, obj):
self.depth = 0
return self.__encode(obj)
def __encode(self, obj):
s = ''
tab = self.tab
newline = self.newline
if isinstance(obj, str):
s += '"%s"' % obj.replace(r'"', r'\"')
elif six.PY2 and isinstance(obj, unicode):
s += '"%s"' % obj.encode('utf-8').replace(r'"', r'\"')
elif six.PY3 and isinstance(obj, bytes):
s += '"{}"'.format(''.join(r'\x{:02x}'.format(c) for c in obj))
elif isinstance(obj, bool):
s += str(obj).lower()
elif obj is None:
s += 'nil'
elif isinstance(obj, Number):
s += str(obj)
elif isinstance(obj, (list, tuple, dict)):
self.depth += 1
if len(obj) == 0 or (not isinstance(obj, dict) and len([
x for x in obj
if isinstance(x, Number) or (isinstance(x, six.string_types) and len(x) < 10)
]) == len(obj)):
newline = tab = ''
dp = tab * self.depth
s += "%s{%s" % (tab * (self.depth - 2), newline)
if isinstance(obj, dict):
            key = '[%s]' if all(isinstance(k, six.integer_types) for k in obj.keys()) else '%s'
contents = [dp + (key + ' = %s') % (k, self.__encode(v)) for k, v in obj.items()]
s += (',%s' % newline).join(contents)
else:
s += (',%s' % newline).join(
[dp + self.__encode(el) for el in obj])
self.depth -= 1
s += "%s%s}" % (newline, tab * self.depth)
return s
def white(self):
while self.ch:
if self.space.match(self.ch):
self.next_chr()
else:
break
def next_chr(self):
if self.at >= self.len:
self.ch = None
return None
self.ch = self.text[self.at]
self.at += 1
return True
def value(self):
self.white()
if not self.ch:
return
if self.ch == '{':
return self.object()
if self.ch == "[":
self.next_chr()
if self.ch in ['"', "'", '[']:
return self.string(self.ch)
if self.ch.isdigit() or self.ch == '-':
return self.number()
return self.word()
def string(self, end=None):
s = ''
start = self.ch
if end == '[':
end = ']'
if start in ['"', "'", '[']:
while self.next_chr():
if self.ch == end:
self.next_chr()
if start != "[" or self.ch == ']':
return s
if self.ch == '\\' and start == end:
self.next_chr()
if self.ch != end:
s += '\\'
s += self.ch
raise ParseError(ERRORS['unexp_end_string'])
def object(self):
o = {}
k = None
idx = 0
numeric_keys = False
self.depth += 1
self.next_chr()
self.white()
if self.ch and self.ch == '}':
self.depth -= 1
self.next_chr()
return o # Exit here
else:
while self.ch:
self.white()
if self.ch == '{':
o[idx] = self.object()
idx += 1
continue
elif self.ch == '}':
self.depth -= 1
self.next_chr()
if k is not None:
o[idx] = k
if len([key for key in o if isinstance(key, six.string_types + (float, bool, tuple))]) == 0:
so = sorted([key for key in o])
if sequential(so):
ar = []
for key in o:
ar.insert(key, o[key])
o = ar
return o # or here
else:
if self.ch == ',':
self.next_chr()
continue
else:
k = self.value()
if self.ch == ']':
self.next_chr()
self.white()
ch = self.ch
if ch in ('=', ','):
self.next_chr()
self.white()
if ch == '=':
o[k] = self.value()
else:
o[idx] = k
idx += 1
k = None
raise ParseError(ERRORS['unexp_end_table']) # Bad exit here
words = {'true': True, 'false': False, 'nil': None}
def word(self):
s = ''
if self.ch != '\n':
s = self.ch
self.next_chr()
while self.ch is not None and self.alnum.match(self.ch) and s not in self.words:
s += self.ch
self.next_chr()
return self.words.get(s, s)
def number(self):
def next_digit(err):
n = self.ch
self.next_chr()
if not self.ch or not self.ch.isdigit():
raise ParseError(err)
return n
n = ''
try:
if self.ch == '-':
n += next_digit(ERRORS['mfnumber_minus'])
n += self.digit()
if n == '0' and self.ch in ['x', 'X']:
n += self.ch
self.next_chr()
n += self.hex()
else:
if self.ch and self.ch == '.':
n += next_digit(ERRORS['mfnumber_dec_point'])
n += self.digit()
if self.ch and self.ch in ['e', 'E']:
n += self.ch
self.next_chr()
if not self.ch or self.ch not in ('+', '-'):
raise ParseError(ERRORS['mfnumber_sci'])
n += next_digit(ERRORS['mfnumber_sci'])
n += self.digit()
except ParseError:
t, e = sys.exc_info()[:2]
print(e)
return 0
try:
return int(n, 0)
except:
pass
return float(n)
def digit(self):
n = ''
while self.ch and self.ch.isdigit():
n += self.ch
self.next_chr()
return n
def hex(self):
n = ''
while self.ch and (self.ch in 'ABCDEFabcdef' or self.ch.isdigit()):
n += self.ch
self.next_chr()
return n
slpp = SLPP()
__all__ = ['slpp']
# -
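# A quirk of the parser above worth noting: `object()` converts a decoded table to a
# Python list when its keys turn out to be the sequence 0..n-1 (the `sequential` check).
# A standalone sketch of that conversion rule (a hypothetical helper, not part of slpp itself):

```python
def lua_table_to_python(table):
    # Mirror slpp's rule: a dict whose keys are exactly 0..n-1 becomes a list
    if table and all(isinstance(k, int) for k in table):
        keys = sorted(table)
        if keys == list(range(len(keys))):
            return [table[k] for k in keys]
    return table
```

# So a Lua array literal like `{"a", "b"}` decodes to `['a', 'b']`, while keyed tables stay dicts.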
# ### Load InventoryInsight Data
iia = open(r"C:\Users\jtern\Documents\Elder Scrolls Online\live\SavedVariables\IIfA.lua",'r').read()
settings = slpp.decode(iia.split('IIfA_Settings =')[-1])
data = slpp.decode(iia.split('IIfA_Data =')[-1])
# ### Find all motifs in our current account/server
accounts = ['@TholosTB', '@TholosTB2','@TholosTB_3']
server = 'NA'
#rex = re.compile ("Crafting Motif (\d+)|Style")
rex = re.compile ("Crafting Motif (\d+)")
#rex = re.compile("(Mother's|Medusa)")
#rex = re.compile ("Opal .* Staff")
motifs = []
item_counter = Counter()
for account in accounts:
for key,entry in data['Default'][account]['$AccountWide']['Data'][server]['DBv3'].items():
item_counter[entry['itemName']] +=1
if rex.search(entry['itemName'] ):
print(key,entry)
for loc in entry['locations'].keys() :
print (loc)
charname = settings['Default'][account][loc]['$LastCharacterName'] if re.match('\d+',str(loc)) and loc in settings['Default'][account] else str(loc)
locref = entry['locations'][loc]
#qty = list(locref['bagSlot'].items())[0][1]
motifs.append({'item' : entry['itemName'],
'character' : f'{account}:{charname}',
'quality' : entry['itemQuality'],
'locations' : list(locref.items())
#,
#'qty' : qty
})
data['Default'].keys()
sorted(motifs,key=lambda x : int(rex.search(x['item']).group(1)))
item_counter.most_common()
len(motifs)
motif_df = pd.DataFrame(motifs)
motif_df['motif_num'] = motif_df['item'].str.extract(rex)
len(motif_df.item.unique())
motif_df.sort_values('item')
motif_df.sort_values('item').groupby(['item','character']).count()
# ### Load TTC price and item tables
item_prices = open(r"C:\Users\jtern\Documents\Elder Scrolls Online\live\AddOns\TamrielTradeCentre\PriceTable.lua",'r').read()
prices = slpp.decode(item_prices.split('self.PriceTable=')[-1])
item_lookup = open(r"C:\Users\jtern\Documents\Elder Scrolls Online\live\AddOns\TamrielTradeCentre\ItemLookUpTable_EN.lua",'r').read()
items = slpp.decode(item_lookup.split('self.ItemLookUpTable=')[-1])
# ### Get all motif keys from TTC items table for motifs in our inventory
motif_keys = [key for key in items.keys() for m in motifs if key.lower() == m['item'].lower()]
motif_keys
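# The nested comprehension above compares every TTC item key against every motif, which is
# quadratic; building a set of lowercased motif names first makes the scan linear. A
# self-contained sketch (the sample entries are hypothetical stand-ins for the real
# `items` and `motifs`):

```python
# hypothetical stand-ins for the TTC item table and the inventory motif list
sample_items = {"Crafting Motif 41: Celestial Boots": {"item_id": 12345},
                "Potion of Health": {"item_id": 99}}
sample_motifs = [{"item": "Crafting Motif 41: Celestial Boots"}]

wanted = {m["item"].lower() for m in sample_motifs}            # O(len(motifs)) set build
fast_keys = [key for key in sample_items if key.lower() in wanted]  # O(len(items)) scan
```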
# ### Iterate over all the motifs and build a new list with prices and names. If prices are not found, leave a blank entry
#
# +
data = []
for motif_key in motif_keys:
item = items[motif_key]
entry = {'motif' : motif_key}
entry['item_id'] = list(item.items())[0][1]
try:
price = prices['Data'][entry['item_id']]
except KeyError:
data.append(entry)
continue
qual = list(price.keys())[0]
level_dict=price[qual]
level = list(level_dict.keys())[0]
traits_dict = level_dict[level]
traits = list(traits_dict.keys())[0]
prices_dict = traits_dict[traits]
entry = {**entry,**prices_dict}
data.append(entry)
# -
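# The chain of `list(...keys())[0]` lookups above just descends into the first entry at each
# nesting level of the TTC price record (quality → level → traits); the same walk can be
# written once as a helper. A sketch with a hypothetical record shape:

```python
def first_leaf(nested, depth):
    # Follow the first key at each of `depth` nesting levels and return what's there
    for _ in range(depth):
        nested = nested[next(iter(nested))]
    return nested

# hypothetical price record: quality -> level -> traits -> price stats
sample_price = {2: {50: {"none": {"SuggestedPrice": 40000, "Avg": 42000}}}}
```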
motifs_in_inventory_df = pd.DataFrame(data)
motifs_in_inventory_df['motif_num'] = motifs_in_inventory_df['motif'].str.extract('crafting motif (\d+)').astype(int)
motifs_in_inventory_df.sort_values('SuggestedPrice')
motifs_in_inventory_df.loc[motifs_in_inventory_df['SuggestedPrice']<40000].sort_values('motif_num')['motif'].unique()
sorted([m for m in motifs if not re.search('21|45|35|42', m['item'])], key=lambda x: int(rex.search(x['item']).group(1)))
motifs_df2 = motifs_in_inventory_df.merge(motif_df)
savvy = ["crafting motif 41: celestial boots",
"crafting motif 47: buoyant armiger daggers",
"crafting motif 47: buoyant armiger swords"]
motif_df.loc[motif_df['item'].str.lower().isin(savvy)].sort_values('character')
motif_df.loc[motif_df['item'].str.lower().isin(savvy)].sort_values('item')
import base64
import struct
s = "CSSK29ф////+AAP/+43f/9f//b/7/////////////////////kNh///////+gAH///////7Ebf/9AAP//gAABJAABezb//whA//9KAP///////wAAP5d//73rf/+AEH////+t/T+/KUhSkf/+AhAABEAQAAH////+AAAAAf/+huwEAf/+AAAAAAZwAAAAAAAAwNwAAAAAAAAAAAAAAAAAAAAAAAAAAAAA1"
struct.unpack('HhL',bytearray(s,encoding='utf-8'))
bytearray(s,encoding='utf-8')
researched = {'From-Cool-Springs':"CSCR29ф//////////////////////////////////////////////////////0",
"Soloh'T-dar" : "CSCR29фxkSkgOAhkAgCyFIASSI0ADQyKSMLSSNRiiggAQAAAAUzcBWqlUC6nU0" }
styles = {'From-Cool-Springs' : "CSSK29ф////+AAP/+43//9f//b/7/////////////////////kNh///////+gAH///////7Ebf/9AAP//gAABJAABezb//whA//9KAP///////wAAP5d//73rf/+AEH////+t/T+/KUhSkf/+AhAABEAQAAH////+AAAAAf/+huwEAf/+AAAAAIb0AAAAAAAAwNyiAAAAAAAAAAAAAAAAAAAAAAAAAAAA1",
"Soloh'T-dar" : "CSSK29фtdAAAAAAAAAALXQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA1",
"Raf'O" : "CSSK29фAQAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA1"
}
for name,style_str in styles.items():
print(name,len(style_str))
''.join(format(i,'08b') for i in bytearray(styles["From-Cool-Springs"][7:],encoding='utf-8'))
styles['From-Cool-Springs'][6:]
len(styles['From-Cool-Springs'][6:])
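# The one-liner above expands each byte of the style string into its 8-bit binary
# representation (presumably a bitfield of known styles). As a small standalone helper:

```python
def to_bits(s):
    # Expand each byte of the UTF-8 encoding of s into an 8-character bit string
    return "".join(format(b, "08b") for b in s.encode("utf-8"))
```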
motif_df.loc[motif_df.item.str.contains('Stags')]
# motifs_InventoryInsight.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # The finite difference method: diffusion
# We will solve the 1D diffusion equation:
#
# $$\frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2}$$
#
# where $T$ is the temperature and $\alpha$ is a constant called the [thermal diffusivity](https://pt.wikipedia.org/wiki/Difusividade_t%C3%A9rmica).
# ## Setup
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# The cells below define functions that create the domain and the initial conditions.
def cria_dominios(tamanho, Nx, duração, Nt):
    """
    Creates the spatial domain and computes the time and space intervals.
    Returns the values of x, dx and dt.
    """
x = np.linspace(0, tamanho, Nx)
dx = x[1] - x[0]
dt = duração/(Nt - 1)
return x, dx, dt
x, dx, dt = cria_dominios(tamanho=1, Nx=51, duração=1, Nt=21)
print('dx =', dx, 'dt =', dt)
def cria_cond_inicial(x):
    """
    Creates an initial-condition vector T with a step function.
    """
    T = np.zeros(x.size)
    T[(x >= 0.3) & (x <= 0.7)] = 100
    return T
# +
cond_inicial = cria_cond_inicial(x)
plt.figure()
plt.plot(x, cond_inicial, '-k')
plt.xlabel('x (m)')
plt.ylabel('T (C)')
plt.title('Initial condition $T^0$')
plt.ylim(0, 150)
# -
# ## Task 1
#
# Complete the function below, which performs a single time step using forward differences in time.
def passo_no_tempo(T_passado, dx, dt, difusividade):
    """
    Performs 1 time step of the diffusion equation.
    Given the temperature at a past iteration T_passado,
    uses the finite difference method
    to compute the temperature after a single time step,
    T_futuro.
    NOTE: Does not include boundary conditions.
    """
    T_futuro = T_passado.copy()
    Nx = len(T_passado)
    # FTCS scheme: forward difference in time, centered difference in space
    for k in range(1, Nx - 1):
        T_futuro[k] = T_passado[k] + difusividade*dt/dx**2*(T_passado[k + 1] - 2*T_passado[k] + T_passado[k - 1])
    return T_futuro
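# One caveat worth checking before running: the explicit (FTCS) scheme used here is only
# numerically stable when $\alpha \Delta t / \Delta x^2 \le 1/2$. A small helper to verify
# this condition (a sketch, not part of the assignment):

```python
def checa_estabilidade(dx, dt, difusividade):
    # The explicit FTCS scheme is stable only when alpha*dt/dx**2 <= 1/2
    return difusividade * dt / dx**2 <= 0.5
```

# For the parameters used here (`tamanho=1, Nx=51, duração=1, Nt=20`), the ratio is about
# 0.13, comfortably within the stable region.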
# Use the cells below to check whether your function works. Let's try taking a single time step from an initial condition.
x, dx, dt = cria_dominios(tamanho=1, Nx=51, duração=1, Nt=20)
T0 = cria_cond_inicial(x)
T1 = passo_no_tempo(T0, dx, dt, difusividade=0.001)
plt.figure()
plt.plot(x, T0, '--r')
plt.plot(x, T1, '.-k')
plt.xlabel('x')
plt.ylabel('T')
plt.ylim(0, 150)
# ## Task 2
#
# Complete the function below, which imposes boundary conditions on our solution. The conditions are:
#
# * At x=0, the material is held at a constant temperature $T(x=0, t) = 0° C$
# * At x=1 (at the end of the domain), the material is thermally insulated. This means there is no spatial variation of temperature at x=1, i.e., $\frac{\partial T}{\partial x}(x=1, t) = 0$
def cond_contorno(T):
    """
    Imposes boundary conditions on the temperature distribution T.
    At x = 0, the temperature is constant and equal to 0°C.
    At x = 1, the spatial derivative of the temperature is 0.
    This function changes the values of the variable T and returns it.
    """
    T[0] = 0
    T[-1] = T[-2]  # zero spatial derivative at the insulated end
    return T
# Let's test the function by applying it to an arbitrary numpy.array that we will create. At positions 0 and -1 we place absurd values and check whether our `cond_contorno` function inserts the proper values.
T_teste = np.array([1000000, 1, 2, 3, 4, 5, 6, -1000000])
T_teste = cond_contorno(T_teste)
print(T_teste)
# ## Task 3
#
# Complete the function below, which runs a complete finite-difference simulation (using the functions defined above) for a given duration. The function must return a list with the temperature at each iteration of the method.
def simula(tamanho, Nx, duração, Nt, difusividade):
    """
    Runs a complete finite-difference simulation
    of the diffusion equation.
    1. Creates the domain and the initial condition
    2. Performs Nt time steps
    3. Returns the domain (x) and a list with the result
    of each time step (T).
    At each time step, imposes the boundary conditions:
    At x = 0, the temperature is constant and equal to 0°C.
    At x = 1, the spatial derivative of the temperature is 0.
    """
    x, dx, dt = cria_dominios(tamanho, Nx, duração, Nt)
    T_inicial = cria_cond_inicial(x)
    T = [T_inicial]
    T_passado = T_inicial
    for i in range(1, Nt):
        T_futuro = cond_contorno(passo_no_tempo(T_passado, dx, dt, difusividade))
        T.append(T_futuro)
        T_passado = T_futuro
    return x, T
# Use the cells below to check the result of your function.
x, T = simula(tamanho=1, Nx=50, duração=100, Nt=500, difusividade=0.001)
plt.figure()
plt.plot(x, T[0], '--r')
plt.plot(x, T[-1], '.-k')
plt.xlabel('x (m)')
plt.ylabel('T (C)')
plt.ylim(0, 150)
# ## Task 4
#
# Run the simulation with the parameters `tamanho=1, Nx=50, duração=100, Nt=600, difusividade=0.001`. Make a plot showing the temperature curves every 100 time steps. Add a legend to your figure indicating which curve corresponds to which time step.
# ## Bonus
#
# Generate an animation in gif format of the temperature evolution in the simulation of Task 4.
#
# * Each frame of the gif must be a plot of the temperature profile versus x.
# * The title of each frame must be the iteration number.
# * Generate one frame every 10 iterations.
# * Don't forget to close each figure with `plt.close()` before creating a new one.
# Your result should look similar to the following:
#
# 
# **Course website**: https://github.com/mat-esp/about
#
# **Note**: This notebook is part of the course "Matemática Especial I" of the [Universidade do Estado do Rio de Janeiro](http://www.uerj.br/). All content can be freely used and adapted under the terms of the
# [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
#
# 
# df-difusao.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Doing all major library imports
import matplotlib.pyplot as plt
import scikitplot as skplt
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import re
from sklearn import datasets, metrics
from sklearn.linear_model import LinearRegression, LogisticRegression,LogisticRegressionCV
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict, KFold
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge, Lasso, ElasticNet, LinearRegression, RidgeCV, LassoCV, ElasticNetCV
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import roc_curve, auc, precision_recall_curve, average_precision_score
from matplotlib.colors import ListedColormap
from sklearn.pipeline import Pipeline, make_pipeline
plt.style.use('fivethirtyeight')
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import scikitplot as skplt
from matplotlib.colors import ListedColormap
from sklearn.metrics import classification_report, confusion_matrix
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.options.display.float_format = '{:.2f}'.format
import wbdata as wb
import os, glob
# -
un = pd.read_csv ('un_compiled.csv')
un.head()
# +
cc = {
'Aruba':'ABW',
'Afghanistan':'AFG',
'Africa':'AFR',
'Angola':'AGO',
'Albania':'ALB',
'Andorra':'AND',
'Andean Region':'ANR',
'Arab World':'ARB',
'United Arab Emirates':'ARE',
'Argentina':'ARG',
'Armenia':'ARM',
'American Samoa':'ASM',
'Antigua and Barbuda':'ATG',
'Australia':'AUS',
'Austria':'AUT',
'Azerbaijan':'AZE',
'Burundi':'BDI',
'East Asia & Pacific (IBRD-only countries)':'BEA',
'Europe & Central Asia (IBRD-only countries)':'BEC',
'Belgium':'BEL',
'Benin':'BEN',
'Burkina Faso':'BFA',
'Bangladesh':'BGD',
'Bulgaria':'BGR',
'IBRD countries classified as high income':'BHI',
'Bahrain':'BHR',
'Bahamas, The':'BHS',
'Bosnia and Herzegovina':'BIH',
'Latin America & the Caribbean (IBRD-only countries)':'BLA',
'Belarus':'BLR',
'Belize':'BLZ',
'Middle East & North Africa (IBRD-only countries)':'BMN',
'Bermuda':'BMU',
'Bolivia':'BOL',
'Brazil':'BRA',
'Barbados':'BRB',
'Brunei Darussalam':'BRN',
'Sub-Saharan Africa (IBRD-only countries)':'BSS',
'Bhutan':'BTN',
'Botswana':'BWA',
'Sub-Saharan Africa (IFC classification)':'CAA',
'Central African Republic':'CAF',
'Canada':'CAN',
'East Asia and the Pacific (IFC classification)':'CEA',
'Central Europe and the Baltics':'CEB',
'Europe and Central Asia (IFC classification)':'CEU',
'Switzerland':'CHE',
'Channel Islands':'CHI',
'Chile':'CHL',
'China':'CHN',
'Cote d\'Ivoire':'CIV',
'Latin America and the Caribbean (IFC classification)':'CLA',
'Middle East and North Africa (IFC classification)':'CME',
'Cameroon':'CMR',
'Congo, Dem. Rep.':'COD',
'Congo, Rep.':'COG',
'Colombia':'COL',
'Comoros':'COM',
'Cabo Verde':'CPV',
'Costa Rica':'CRI',
'South Asia (IFC classification)':'CSA',
'Caribbean small states':'CSS',
'Cuba':'CUB',
'Curacao':'CUW',
'Cayman Islands':'CYM',
'Cyprus':'CYP',
'Czech Republic':'CZE',
'East Asia & Pacific (IDA-eligible countries)':'DEA',
'Europe & Central Asia (IDA-eligible countries)':'DEC',
'Germany':'DEU',
'IDA countries classified as Fragile Situations':'DFS',
'Djibouti':'DJI',
'Latin America & the Caribbean (IDA-eligible countries)':'DLA',
'Dominica':'DMA',
'Middle East & North Africa (IDA-eligible countries)':'DMN',
'IDA countries not classified as Fragile Situations':'DNF',
'Denmark':'DNK',
'IDA countries in Sub-Saharan Africa not classified as fragile situations ':'DNS',
'Dominican Republic':'DOM',
'South Asia (IDA-eligible countries)':'DSA',
'IDA countries in Sub-Saharan Africa classified as fragile situations ':'DSF',
'Sub-Saharan Africa (IDA-eligible countries)':'DSS',
'IDA total, excluding Sub-Saharan Africa':'DXS',
'Algeria':'DZA',
'East Asia & Pacific (excluding high income)':'EAP',
'Early-demographic dividend':'EAR',
'East Asia & Pacific':'EAS',
'Europe & Central Asia (excluding high income)':'ECA',
'Europe & Central Asia':'ECS',
'Ecuador':'ECU',
'Egypt, Arab Rep.':'EGY',
'Euro area':'EMU',
'Eritrea':'ERI',
'Spain':'ESP',
'Estonia':'EST',
'Ethiopia':'ETH',
'European Union':'EUU',
'Fragile and conflict affected situations':'FCS',
'Finland':'FIN',
'Fiji':'FJI',
'France':'FRA',
'Faroe Islands':'FRO',
'Micronesia, Fed. Sts.':'FSM',
'IDA countries classified as fragile situations, excluding Sub-Saharan Africa':'FXS',
'Gabon':'GAB',
'United Kingdom':'GBR',
'Georgia':'GEO',
'Ghana':'GHA',
'Gibraltar':'GIB',
'Guinea':'GIN',
'Gambia, The':'GMB',
'Guinea-Bissau':'GNB',
'Equatorial Guinea':'GNQ',
'Greece':'GRC',
'Grenada':'GRD',
'Greenland':'GRL',
'Guatemala':'GTM',
'Guam':'GUM',
'Guyana':'GUY',
'High income':'HIC',
'Hong Kong SAR, China':'HKG',
'Honduras':'HND',
'Heavily indebted poor countries (HIPC)':'HPC',
'Croatia':'HRV',
'Haiti':'HTI',
'Hungary':'HUN',
'IBRD, including blend':'IBB',
'IBRD only':'IBD',
'IDA & IBRD total':'IBT',
'IDA total':'IDA',
'IDA blend':'IDB',
'Indonesia':'IDN',
'IDA only':'IDX',
'Isle of Man':'IMN',
'India':'IND',
'Not classified':'INX',
'Ireland':'IRL',
'Iran, Islamic Rep.':'IRN',
'Iraq':'IRQ',
'Iceland':'ISL',
'Israel':'ISR',
'Italy':'ITA',
'Jamaica':'JAM',
'Jordan':'JOR',
'Japan':'JPN',
'Kazakhstan':'KAZ',
'Kenya':'KEN',
'Kyrgyz Republic':'KGZ',
'Cambodia':'KHM',
'Kiribati':'KIR',
'St. Kitts and Nevis':'KNA',
'Korea, Rep.':'KOR',
'Kuwait':'KWT',
'Latin America & Caribbean (excluding high income)':'LAC',
'Lao PDR':'LAO',
'Lebanon':'LBN',
'Liberia':'LBR',
'Libya':'LBY',
'St. Lucia':'LCA',
'Latin America & Caribbean ':'LCN',
'Latin America and the Caribbean':'LCR',
'Least developed countries: UN classification':'LDC',
'Low income':'LIC',
'Liechtenstein':'LIE',
'Sri Lanka':'LKA',
'Lower middle income':'LMC',
'Low & middle income':'LMY',
'Lesotho':'LSO',
'Late-demographic dividend':'LTE',
'Lithuania':'LTU',
'Luxembourg':'LUX',
'Latvia':'LVA',
'Macao SAR, China':'MAC',
'St. Martin (French part)':'MAF',
'Morocco':'MAR',
'Central America':'MCA',
'Monaco':'MCO',
'Moldova':'MDA',
'Middle East (developing only)':'MDE',
'Madagascar':'MDG',
'Maldives':'MDV',
'Middle East & North Africa':'MEA',
'Mexico':'MEX',
'Marshall Islands':'MHL',
'Middle income':'MIC',
'North Macedonia':'MKD',
'Mali':'MLI',
'Malta':'MLT',
'Myanmar':'MMR',
'Middle East & North Africa (excluding high income)':'MNA',
'Montenegro':'MNE',
'Mongolia':'MNG',
'Northern Mariana Islands':'MNP',
'Mozambique':'MOZ',
'Mauritania':'MRT',
'Mauritius':'MUS',
'Malawi':'MWI',
'Malaysia':'MYS',
'North America':'NAC',
'North Africa':'NAF',
'Namibia':'NAM',
'New Caledonia':'NCL',
'Niger':'NER',
'Nigeria':'NGA',
'Nicaragua':'NIC',
'Netherlands':'NLD',
'Non-resource rich Sub-Saharan Africa countries, of which landlocked':'NLS',
'Norway':'NOR',
'Nepal':'NPL',
'Non-resource rich Sub-Saharan Africa countries':'NRS',
'Nauru':'NRU',
'IDA countries not classified as fragile situations, excluding Sub-Saharan Africa':'NXS',
'New Zealand':'NZL',
'OECD members':'OED',
'Oman':'OMN',
'Other small states':'OSS',
'Pakistan':'PAK',
'Panama':'PAN',
'Peru':'PER',
'Philippines':'PHL',
'Palau':'PLW',
'Papua New Guinea':'PNG',
'Poland':'POL',
'Pre-demographic dividend':'PRE',
'Puerto Rico':'PRI',
"Korea, Dem. People's Rep.":'PRK',
'Portugal':'PRT',
'Paraguay':'PRY',
'West Bank and Gaza':'PSE',
'Pacific island small states':'PSS',
'Post-demographic dividend':'PST',
'French Polynesia':'PYF',
'Qatar':'QAT',
'Romania':'ROU',
'Resource rich Sub-Saharan Africa countries':'RRS',
'Resource rich Sub-Saharan Africa countries, of which oil exporters':'RSO',
'Russian Federation':'RUS',
'Rwanda':'RWA',
'South Asia':'SAS',
'Saudi Arabia':'SAU',
'Southern Cone':'SCE',
'Sudan':'SDN',
'Senegal':'SEN',
'Singapore':'SGP',
'Solomon Islands':'SLB',
'Sierra Leone':'SLE',
'El Salvador':'SLV',
'San Marino':'SMR',
'Somalia':'SOM',
'Serbia':'SRB',
'Sub-Saharan Africa (excluding high income)':'SSA',
'South Sudan':'SSD',
'Sub-Saharan Africa ':'SSF',
'Small states':'SST',
'Sao Tome and Principe':'STP',
'Suriname':'SUR',
'Slovak Republic':'SVK',
'Slovenia':'SVN',
'Sweden':'SWE',
'Eswatini':'SWZ',
'Sint Maarten (Dutch part)':'SXM',
'Sub-Saharan Africa excluding South Africa':'SXZ',
'Seychelles':'SYC',
'Syrian Arab Republic':'SYR',
'Turks and Caicos Islands':'TCA',
'Chad':'TCD',
'East Asia & Pacific (IDA & IBRD countries)':'TEA',
'Europe & Central Asia (IDA & IBRD countries)':'TEC',
'Togo':'TGO',
'Thailand':'THA',
'Tajikistan':'TJK',
'Turkmenistan':'TKM',
'Latin America & the Caribbean (IDA & IBRD countries)':'TLA',
'Timor-Leste':'TLS',
'Middle East & North Africa (IDA & IBRD countries)':'TMN',
'Tonga':'TON',
'South Asia (IDA & IBRD)':'TSA',
'Sub-Saharan Africa (IDA & IBRD countries)':'TSS',
'Trinidad and Tobago':'TTO',
'Tunisia':'TUN',
'Turkey':'TUR',
'Tuvalu':'TUV',
'Taiwan, China':'TWN',
'Tanzania':'TZA',
'Uganda':'UGA',
'Ukraine':'UKR',
'Upper middle income':'UMC',
'Uruguay':'URY',
'United States':'USA',
'Uzbekistan':'UZB',
'St. Vincent and the Grenadines':'VCT',
'Venezuela, RB':'VEN',
'British Virgin Islands':'VGB',
'Virgin Islands (U.S.)':'VIR',
'Vietnam':'VNM',
'Vanuatu':'VUT',
'World':'WLD',
'Samoa':'WSM',
'Kosovo':'XKX',
'Sub-Saharan Africa excluding South Africa and Nigeria':'XZN',
'Yemen, Rep.':'YEM',
'South Africa':'ZAF',
'Zambia':'ZMB',
'Zimbabwe':'ZWE',
'Afghanistan, Islamic Republic of': 'AFG',
#'Anguilla',
'Armenia, Republic of': 'ARM',
'Azerbaijan, Republic of': 'AZE',
'Bahrain, Kingdom of': 'BHR',
'China, P.R.: Hong Kong' : 'HKG',
'China, P.R.: Macao': 'MAC',
'China, P.R.: Mainland': 'CHN',
'Congo, Democratic Republic of' : 'COD',
'Congo, Republic of': 'COG',
"Côte d'Ivoire": 'CIV',
'Egypt': 'EGY',
#'Eswatini, Kingdom of',
'Iran, Islamic Republic of':'IRN',
'Korea, Republic of': 'KOR',
#'Kosovo, Republic of',
"Lao People's Democratic Republic" : 'LAO',
#'Marshall Islands, Republic of',
#'Micronesia, Federated States of',
#'Montserrat',
'North Macedonia, Republic of':'MKD',
'Serbia, Republic of': 'SRB',
'São Tomé and Príncipe': 'STP',
'Timor-Leste, Dem. Rep. of': 'TLS',
'Yemen, Republic of': 'YEM',
'Bosnia & Herzegovina':'BIH',
'Gambia':'GMB',
'Iran':'IRN',
#'North Korea': ,
'Trinidad & Tobago' : 'TTO',
'Venezuela' : 'VEN',
'Viet Nam': 'VNM',
'Yemen': 'YEM',
'Bolivia (Plurinational State of)':'BOL',
"Dem. People's Rep. Korea":'PRK',
"Lao People's Dem. Rep.":'LAO',
'Bahamas':'BHS',
'Bolivia (Plurin. State of)':'BOL',
'China':'CHN',
'Congo':'COG',
'Dem. Rep. of the Congo':'COD',
'Hong Kong SAR':'HKG',
'Iran (Islamic Rep. of)':'IRN',
'Iran (Islamic Republic of)':'IRN',
'Kyrgyzstan':'KGZ',
'Netherlands Antilles [former]':'NLD',
'Republic of Moldova':'MDA',
'Serbia and Monten. [former]':'SRB',
'Slovakia':'SVK',
'Sudan [former]':'SDN',
'TFYR of Macedonia':'MKD',
'U.S. Minor Outlying islands':'USA',
'United Rep. of Tanzania':'TZA',
'United States of America':'USA',
'United States Virgin Islands':'VIR',
'Venezuela (Boliv. Rep. of)':'VEN',
}
# -
un['country_code'] = un.Country.replace (cc)
un.head()
un[un.Country == un.country_code].Country.unique()
un_ah = pd.pivot_table (un, index =['Country', 'country_code', 'Year'], columns = 'Series', values = 'Value', aggfunc=np.sum)
un_ah.head()
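# `pivot_table` is doing a long-to-wide reshape here: one row per (Country, code, Year),
# one column per Series. The core of that reshape, in plain Python (the sample rows are
# hypothetical):

```python
def pivot_long_to_wide(rows):
    # rows: (country, year, series, value) tuples -> {(country, year): {series: value}}
    wide = {}
    for country, year, series, value in rows:
        wide.setdefault((country, year), {})[series] = value
    return wide

rows = [("Brazil", 2010, "GDP", 100.0), ("Brazil", 2010, "Population", 195.0)]
```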
un_ah.reset_index(inplace=True)
un_ah.head()
un_ah.rename (columns = {
'Year': 'date',
}, inplace=True)
un_ah.head()
un_ah.to_csv ('un_compiled_output.csv', index=False)
# raw_data/un_dataset/UN Dataset.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PyOpenCAP and Q-Chem
# In this tutorial, we post-process results from Q-Chem calculations to analyze eigenvalue trajectories for CAP/EOM-CC and CAP-ADC calculations on the ${}^2\Pi_g$ shape resonance of $N_2^-$. We use the pyopencap.analysis submodule to extract the matrices and analyze the trajectories.
from pyopencap.analysis import CAPHamiltonian
import matplotlib.pyplot as plt
import numpy as np
# # EOM-CC
# ## Option 1: Analysis of .fchk files
# For EOM-CC calculations, one-particle densities can be parsed from the checkpoint file (GUI=2). The snippet below can be used to post-process the outputs generated by Q-Chem in order to obtain the zeroth-order Hamiltonian and the CAP matrix.
# +
import pyopencap
sys_dict = {"molecule": "qchem_fchk",
"basis_file": "ref_outputs/qc_inp.fchk"
}
cap_dict = {
"cap_type": "box",
"cap_x":"2.76",
"cap_y":"2.76",
"cap_z":"4.88"
}
es_dict = { "package": "qchem",
"method" : "eom",
"qchem_output":"ref_outputs/qc_inp.out",
"qchem_fchk":"ref_outputs/qc_inp.fchk",
}
s = pyopencap.System(sys_dict)
pc = pyopencap.CAP(s,cap_dict,5)
pc.read_data(es_dict)
pc.compute_projected_cap()
W = pc.get_projected_cap()
H0 = pc.get_H()
# -
# ### Integration
# Analytical integrals are available for Box CAPs only, and are enabled by default. One can force the use of numerical integration by setting "do_numerical" to "true" in the CAP dictionary. This keyword is unnecessary for Voronoi CAPs as well as custom CAPs.
cap_dict = {
"cap_type": "box",
"cap_x":"2.76",
"cap_y":"2.76",
"cap_z":"4.88",
"do_numerical":"true"
}
s = pyopencap.System(sys_dict)
pc = pyopencap.CAP(s,cap_dict,5)
pc.read_data(es_dict)
pc.compute_projected_cap()
W = pc.get_projected_cap()
H0 = pc.get_H()
# ### Some other helpful functions:
# recompute cap in AO basis with different cap parameters
cap_dict = {"cap_type": "voronoi","r_cut":"3.00"}
pc.compute_ao_cap(cap_dict)
pc.compute_projected_cap()
# retrieve CAP matrix
ao_cap = pc.get_ao_cap(ordering="qchem")
print(np.shape(ao_cap))
# ### Custom CAPs
# One can also define a custom CAP function in Python. It must accept 4 1D array-like objects (x,y,z,weights) with identical shapes, 1 integer (number of points), and return an array-like of the same shape containing the values of the CAP at each coordinate.
def box_cap(x,y,z,w):
cap_values = []
cap_x = 3.00
cap_y = 3.00
cap_z = 3.00
for i in range(0,len(x)):
result = 0
if np.abs(x[i])>cap_x:
result += (np.abs(x[i])-cap_x) * (np.abs(x[i])-cap_x)
if np.abs(y[i])>cap_y:
result += (np.abs(y[i])-cap_y) * (np.abs(y[i])-cap_y)
if np.abs(z[i])>cap_z:
result += (np.abs(z[i])-cap_z) * (np.abs(z[i])-cap_z)
result = w[i]*result
cap_values.append(result)
return cap_values
cap_dict = {"cap_type":"custom"}
pc.compute_ao_cap(cap_dict,box_cap)
pc.compute_projected_cap()
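# The quadratic box CAP used by `box_cap` above is easy to sanity-check point-wise. A
# standalone re-implementation on plain floats (assumed form: zero inside the box, growing
# quadratically outside):

```python
def w_box(x, y, z, cap=3.0):
    # Soft-box potential: 0 wherever |q| <= cap in every direction,
    # otherwise the sum of squared overshoots beyond the box edges
    r = 0.0
    for q in (x, y, z):
        if abs(q) > cap:
            r += (abs(q) - cap) ** 2
    return r
```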
# There are two equivalent ways from here of generating the CAPHamiltonian object for analysis:
CAPH = CAPHamiltonian(H0=H0,W=W)
CAPH = CAPHamiltonian(pc=pc)
# ## Option 2: Post-processing Projected CAP-EOM-CC Q-Chem outputs
# Projected CAP-EOM-CC is implemented in Q-Chem (available as of Q-Chem 5.4), and can be requested using the PROJ_CAP=3 keyword in complex_ccman. PyOpenCAP is capable of performing analysis directly from outputs generated by this calculation.
from pyopencap.analysis import CAPHamiltonian
import matplotlib.pyplot as plt
import numpy as np
# Q-Chem output, can specify irrep
CAPH = CAPHamiltonian(output="ref_outputs/eomcc.out",irrep="B2g")
# OpenCAP output
CAPH = CAPHamiltonian(output="ref_outputs/n2_opencap.out")
# The eigenvalue trajectories are generated using run_trajectory, which requires a list of $\eta$ values. One can include or exclude states from the subspace projection using the include_states and exclude_states keyword arguments. For instance:
eta_list = np.linspace(0,5000,101)
eta_list = np.around(eta_list * 1E-5,decimals=5)
CAPH.run_trajectory(eta_list,include_states=[0,1,2,3])
CAPH.run_trajectory(eta_list,exclude_states=[4])
CAPH.run_trajectory(eta_list)
# ccsd energy of neutral
ref_energy = -109.36195558
# tracking options are energy and eigenvector overlap
traj = CAPH.track_state(1,tracking="energy")
uc_energies = traj.energies_ev(ref_energy=ref_energy)
corr_energies = traj.energies_ev(ref_energy=ref_energy,corrected=True)
plt.plot(np.real(uc_energies),np.imag(uc_energies),'-ro',label="Uncorrected")
plt.plot(np.real(corr_energies),np.imag(corr_energies),'-bo',label="Corrected")
plt.legend()
plt.show()
# Find optimal value of eta
uc_energy, eta_opt = traj.find_eta_opt(start_idx=10,ref_energy=ref_energy,units="eV")
# start_idx and end_idx for search use python slice notation (i.e. [start_idx:end_idx]).
corr_energy, corr_eta_opt = traj.find_eta_opt(corrected=True,start_idx=10,end_idx=-1,ref_energy=ref_energy,units="eV")
uc_energy_au = traj.get_energy(eta_opt,units="au")
corr_energy_au = traj.get_energy(eta_opt,units="au",corrected=True)
print("Uncorrected:")
print(uc_energy)
print(uc_energy_au)
print(eta_opt)
print("Corrected:")
print(corr_energy)
print(corr_eta_opt)
print(corr_energy_au)
plt.plot(np.real(uc_energies),np.imag(uc_energies),'-ro',label="Uncorrected")
plt.plot(np.real(corr_energies),np.imag(corr_energies),'-bo',label="Corrected")
plt.plot(np.real(uc_energy),np.imag(uc_energy),'g*',markersize=20)
plt.plot(np.real(corr_energy),np.imag(corr_energy),'g*',markersize=20)
plt.legend()
plt.show()
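# Under the hood, `find_eta_opt` looks for the stationary point of the trajectory, where
# the logarithmic velocity $|\eta\, dE/d\eta|$ is smallest. A discrete sketch of that
# criterion (central differences on a sampled trajectory; real-valued energies for brevity):

```python
def eta_opt_index(etas, energies):
    # Return the index i minimizing |eta_i * dE/deta|, with dE/deta estimated
    # by central differences over the sampled trajectory
    best_i, best_v = None, float("inf")
    for i in range(1, len(etas) - 1):
        dE = (energies[i + 1] - energies[i - 1]) / (etas[i + 1] - etas[i - 1])
        v = abs(etas[i] * dE)
        if v < best_v:
            best_i, best_v = i, v
    return best_i
```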
# ## ADC
# Starting from Q-Chem 5.4, CAP-ADC calculations can be performed using the ADC_CAP keyword. PyOpenCAP is capable of performing analysis directly from outputs generated by this calculation. Here, we analyze a CAP-EA-ADC(2) calculation.
# Q-Chem output, can specify onset
CAPH = CAPHamiltonian(output="ref_outputs/adc.out", onset="3000")
eta_list = np.linspace(0,3000,101)
eta_list = np.around(eta_list * 1E-5,decimals=5)
CAPH.run_trajectory(eta_list)
traj = CAPH.track_state(1,tracking="overlap")
uc_energies = traj.energies_ev(0.0)
corr_energies = traj.energies_ev(0.0,corrected=True)
plt.plot(np.real(uc_energies),np.imag(uc_energies),'-ro',label="Uncorrected")
plt.plot(np.real(corr_energies),np.imag(corr_energies),'-bo',label="Corrected")
plt.legend()
plt.show()
# Find optimal value of eta
uc_energy, eta_opt = traj.find_eta_opt(start_idx=10,units="eV")
corr_energy, corr_eta_opt = traj.find_eta_opt(corrected=True,start_idx=10,units="eV")
print("Uncorrected:")
print(uc_energy)
# one can also obtain energies by value of eta
print(traj.get_energy(eta_opt,units="eV"))
print(eta_opt)
print("Corrected:")
print(corr_energy)
print(corr_eta_opt)
plt.plot(np.real(uc_energies),np.imag(uc_energies),'-ro',label="Uncorrected")
plt.plot(np.real(corr_energies),np.imag(corr_energies),'-bo',label="Corrected")
plt.plot(np.real(uc_energy),np.imag(uc_energy),'g*',markersize=20)
plt.plot(np.real(corr_energy),np.imag(corr_energy),'g*',markersize=20)
plt.legend()
plt.show()
# ## Plotting logarithmic velocities:
# uncorrected
derivs = traj.get_logarithmic_velocities()
plt.plot(traj.etas,derivs)
plt.plot(eta_opt,derivs[traj.etas.index(eta_opt)],'g*',markersize=20)
plt.title("Uncorrected")
plt.show()
# corrected
derivs = traj.get_logarithmic_velocities(corrected=True)
plt.plot(traj.etas,derivs)
plt.plot(corr_eta_opt,derivs[traj.etas.index(corr_eta_opt)],'g*',markersize=20)
plt.title("Corrected")
plt.show()
# examples/analysis/N2/QChemTutorial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# +
# #! /usr/bin/env python
# -*- coding: utf-8 -*-
class arvbin():
    def __init__(self, Valor):
        self.Menor=None
        self.Maior=None
        self.valor=Valor
        self.conta=1  # occurrence count, incremented when a duplicate value is added
def add(self,valor):
if valor>self.valor:
self.addmaior(valor)
elif valor<self.valor:
self.addmenor(valor)
else:
self.conta=self.conta+1
def addmenor(self,valor):
if self.Menor:
self.Menor.add(valor)
else:
self.Menor=arvbin(valor)
def addmaior(self,valor):
if self.Maior:
self.Maior.add(valor)
else:
self.Maior=arvbin(valor)
def get(self, valor):
"""Retorna uma referência ao nó de chave 'valor'
"""
if self.valor == valor:
return self
node = self.Menor if valor < self.valor else self.Maior
if node is not None:
return node.get(valor)
# -
# +
# Populating the tree with arvbin objects
itens = list(range(35))
np.random.shuffle(itens)
raiz = arvbin(itens[-1])
itens.pop()
for n in itens:
raiz.add(n)
# -
# Following the symmetric (in-order) traversal, collecting the level of each node
def grafTree(arv, niveis, level=0):
if arv == None : return
grafTree(arv.Menor, niveis, level-100)
info=(level, arv.valor)
niveis.append(info)
grafTree(arv.Maior, niveis,level-100)
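# The in-order ("symmetric") traversal used by grafTree visits nodes in ascending key order. A minimal, self-contained sketch (with a simplified node class invented for this illustration, not the arvbin above):

```python
class Node:
    """Minimal BST node, used only for this illustration."""
    def __init__(self, value):
        self.left = None
        self.right = None
        self.value = value

    def add(self, value):
        if value < self.value:
            if self.left: self.left.add(value)
            else: self.left = Node(value)
        elif value > self.value:
            if self.right: self.right.add(value)
            else: self.right = Node(value)

def inorder(node, out):
    # left subtree, then the node itself, then right subtree -> sorted order
    if node is None:
        return
    inorder(node.left, out)
    out.append(node.value)
    inorder(node.right, out)

root = Node(8)
for v in [3, 10, 1, 6, 14, 4, 7]:
    root.add(v)
visited = []
inorder(root, visited)
print(visited)  # [1, 3, 4, 6, 7, 8, 10, 14]
```

# Because each recursive call handles the smaller subtree first, appending the node between the two calls yields the keys in sorted order.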
def putX(arv, valor, x):
"""graft an attr x(x will be coordinate in the scatter plot) in node
"""
if arv.valor == valor:
arv.X = x
        return (arv.valor, arv.Y)
node = arv.Menor if valor < arv.valor else arv.Maior
if node is not None:
return putX(node,valor,x)
def putY(arv, level=0):
"""graft an attr y(the level being de y coordinate of scatter plot) in the node
"""
if arv == None : return
putY(arv.Menor, level-1)
arv.Y=level*100
putY(arv.Maior,level-1)
putY(raiz)
lista_niveis=[]
grafTree(raiz, lista_niveis)
for x in range(len(lista_niveis)):
putX(raiz,lista_niveis[x][1],x)
levels=[]; valores = []
for x in lista_niveis:
levels.append(x[0]); valores.append(x[1])
# +
# Matplotlib's annotate needs the xy of the arrow's origin and the xy of the arrow's head.
# Here we traverse the tree collecting the xy of each node and the xy of its children (if they exist), and save these coordinates in "lista_arrows"
lista_arrows=[]
def arrows(arv, lista):
if arv == None : return
arrows(arv.Menor, lista)
if arv.Menor:
arrow=(arv.X, arv.Y,arv.Menor.X, arv.Menor.Y)
lista.append(arrow)
if arv.Maior:
arrow=(arv.X, arv.Y,arv.Maior.X, arv.Maior.Y)
lista.append(arrow)
arrows(arv.Maior, lista)
# -
arrows(raiz, lista_arrows)
# +
# %config InlineBackend.rc={'figure.figsize': (16, 9)}
# %matplotlib inline
# Plotting a dot for each node in the tree
w = range(len(lista_niveis))
l = levels
plt.scatter(w, l, lw=2, alpha = 0.5, s=2000)
# Printing the values
for val in range(len(valores)):
plt.text(x = val,
y = levels[val],
s = str(valores[val]),
fontsize=20)
# Plotting the arrows using the information generated in the cells above
for arrow in lista_arrows:
plt.annotate(u"",
xy=(arrow[2], arrow[3]),
xytext=(arrow[0], arrow[1]),
arrowprops=dict(facecolor ="grey",shrink=0.07, alpha = 0.3)
)
#plotting without ticks
plt.xticks([])
plt.yticks([])
plt.plot()
# Enable the line below to save the figure
#plt.savefig('arve.svg', format="svg")
plt.show()
# It is necessary to adjust the scale of the dot sizes, figure and text according to
# the chosen size of the tree (number of elements)
# -
# It is an interesting way to see branch degeneration (try inserting the elements without shuffling).
# References:
#
# > https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.annotate.html
#
# > https://pythonhelp.wordpress.com/2015/01/19/arvore-binaria-de-busca-em-python/
#
# > https://medeubranco.wordpress.com/2008/07/05/brincando-de-arvore-binaria-com-python/
| arvPlot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: openvino_env
# language: python
# name: openvino_env
# ---
# # Working with Open Model Zoo Models
#
# ## Model Downloader, Model Converter, Info Dumper and Benchmark Tool
# + [markdown] tags=["hide"]
# This demo shows how to download a model from Open Model Zoo, convert it to OpenVINO's IR format, show information about the model, and benchmark the model.
#
# | Tool | Command | Description |
# |------------------|-------------------|-----------------------------------------------------------------------------------|
# | Model Downloader | `omz_downloader` | Download models from Open Model Zoo |
# | Model Converter | `omz_converter` | Convert Open Model Zoo models that are not in OpenVINO's IR format to that format |
# | Info Dumper | `omz_info_dumper` | Print information about Open Model Zoo models |
# | Benchmark Tool | `benchmark_app` | Benchmark model performance by computing inference time |
# + [markdown] tags=["hide"]
# ## Preparation
#
# ### Model Name
#
# Set `model_name` to the name of the Open Model Zoo model to use in this notebook. Refer to the list of [public](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/index.md) and [Intel](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/index.md) models for names of models that can be used.
# -
# model_name = "resnet-50-pytorch"
model_name = "mobilenet-v2-pytorch"
# + [markdown] tags=["hide"]
# ### Imports
# +
import json
import os.path
import subprocess
import sys
from pathlib import Path
from IPython.display import Markdown
from openvino.inference_engine import IECore
sys.path.append("../utils")
from notebook_utils import DeviceNotFoundAlert, NotebookAlert
# + [markdown] tags=["hide"]
# ### Settings and Configuration
#
# Set the file and directory paths. By default, this demo notebook downloads models from Open Model Zoo to a directory `open_model_zoo_models` in your `$HOME` directory. On Windows, the $HOME directory is usually `c:\users\username`, on Linux `/home/username`. If you want to change the folder, change `base_model_dir` in the cell below.
#
# You can change the following settings:
#
# * `base_model_dir`: Models will be downloaded into the `intel` and `public` folders in this directory.
# * `omz_cache_dir`: Cache folder for Open Model Zoo. Specifying a cache directory is not required for Model Downloader and Model Converter, but it speeds up subsequent downloads.
# * `precision`: If specified, download only models with this precision.
# +
base_model_dir = Path("~/open_model_zoo_models").expanduser()
omz_cache_dir = Path("~/open_model_zoo_cache").expanduser()
precision = "FP16"
# Check if an iGPU is available on this system to use with Benchmark App
ie = IECore()
gpu_available = "GPU" in ie.available_devices
print(
f"base_model_dir: {base_model_dir}, omz_cache_dir: {omz_cache_dir}, gpu_availble: {gpu_available}"
)
# -
# ## Download Model from Open Model Zoo
#
#
# Specify, display and run the Model Downloader command to download the model.
# +
## Uncomment the next line to show omz_downloader's help which explains the command line options
# # !omz_downloader --help
# -
download_command = (
f"omz_downloader --name {model_name} --output_dir {base_model_dir} --cache_dir {omz_cache_dir}"
)
display(Markdown(f"Download command: `{download_command}`"))
display(Markdown(f"Downloading {model_name}..."))
# ! $download_command
# ## Convert Model to OpenVINO IR format
#
# Specify, display and run the Model Converter command to convert the model to IR format. Model Conversion may take a while. The output of the Model Converter command will be displayed. Conversion succeeded if the last lines of the output include `[ SUCCESS ] Generated IR version 10 model.` For downloaded models that are already in IR format, conversion will be skipped.
# +
## Uncomment the next line to show omz_converter's help which explains the command line options
# # !omz_converter --help
# +
convert_command = f"omz_converter --name {model_name} --precisions {precision} --download_dir {base_model_dir} --output_dir {base_model_dir}"
display(Markdown(f"Convert command: `{convert_command}`"))
display(Markdown(f"Converting {model_name}..."))
# ! $convert_command
# -
# ## Get Model Information
#
# The Info Dumper prints the following information for Open Model Zoo models:
#
# * Model name
# * Description
# * Framework that was used to train the model
# * License url
# * Precisions supported by the model
# * Subdirectory: the location of the downloaded model
# * Task type
#
# This information can be shown by running `omz_info_dumper --name model_name` in a terminal. The information can also be parsed and used in scripts.
#
# In the next cell, we run Info Dumper, and use json to load the information in a dictionary.
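# As a standalone illustration (the JSON below is a hypothetical, abridged sample of omz_info_dumper-style output, not the real tool's response), parsing it with the standard json module looks like:

```python
import json

# Hypothetical, abridged sample of omz_info_dumper output (for illustration only)
sample_json = '''[{"name": "mobilenet-v2-pytorch",
                   "framework": "pytorch",
                   "precisions": ["FP16", "FP32"],
                   "subdirectory": "public/mobilenet-v2-pytorch",
                   "task_type": "classification"}]'''
sample_info = json.loads(sample_json)
print(sample_info[0]["name"], sample_info[0]["subdirectory"])
```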
# +
# model_info_output = %sx omz_info_dumper --name $model_name
model_info = json.loads(model_info_output.get_nlstr())
if len(model_info) > 1:
NotebookAlert(
f"There are multiple IR files for the {model_name} model. The first model in the "
"omz_info_dumper output will be used for benchmarking. Change "
"`selected_model_info` in the cell below to select a different model from the list.",
"warning",
)
model_info
# -
# Having the model information in a JSON file allows us to extract the path to the model directory, and build the path to the IR file.
selected_model_info = model_info[0]
model_path = (
base_model_dir
/ Path(selected_model_info["subdirectory"])
/ Path(f"{precision}/{selected_model_info['name']}.xml")
)
print(model_path, "exists:", model_path.exists())
# ## Run Benchmark Tool
#
# By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and throughput (frames per second) values.
# +
## Uncomment the next line to show benchmark_app's help which explains the command line options
# # !benchmark_app --help
# +
benchmark_command = f"benchmark_app -m {model_path} -t 15"
display(Markdown(f"Benchmark command: `{benchmark_command}`"))
display(Markdown(f"Benchmarking {model_name} on CPU with async inference for 15 seconds..."))
# ! $benchmark_command
# -
# ### Benchmark with Different Settings
# `benchmark_app` displays logging information that is not always necessary. Below, we filter the output and show a more compact result.
#
# The following cells show some examples of `benchmark_app` with different parameters. Some useful parameters are:
#
# - `-d` Device to use for inference. For example: CPU, GPU, MULTI. Default: CPU
# - `-t` Time in number of seconds to run inference. Default: 60
# - `-api` Use asynchronous (async) or synchronous (sync) inference. Default: async
# - `-b` Batch size. Default: 1
#
#
# Run `! benchmark_app --help` to get an overview of all possible command line parameters.
#
# In the next cell, we define a `benchmark_model()` function that calls `benchmark_app`. This makes it easy to try different combinations. In the cell below that, we display the available devices on the system.
#
# > **NOTE**: In this notebook we run benchmark_app for 15 seconds to give a quick indication of performance. For more accurate performance, we recommended running inference for at least one minute by setting the `t` parameter to 60 or higher, and running `benchmark_app` in a terminal/command prompt after closing other applications. You can copy the _benchmark command_ and paste it in a command prompt where you have activated the `openvino_env` environment.
def benchmark_model(model, device="CPU", seconds=60, api="async", batch=1):
ie = IECore()
if ("GPU" in device) and ("GPU" not in ie.available_devices):
DeviceNotFoundAlert("GPU")
else:
benchmark_command = f"benchmark_app -m {model_path} -d {device} -t {seconds} -api {api} -b {batch}"
display(Markdown(f"**Benchmark {model_name} with {device} for {seconds} seconds with {api} inference**"))
display(Markdown(f"Benchmark command: `{benchmark_command}`"))
# benchmark_output = %sx $benchmark_command
benchmark_result = [line for line in benchmark_output if not (line.startswith(r"[") or line.startswith(" ") or line=="")]
print("\n".join(benchmark_result))
# +
ie = IECore()
# Show devices available for OpenVINO Inference Engine
for device in ie.available_devices:
device_name = ie.get_metric(device, "FULL_DEVICE_NAME")
print(f"{device}: {device_name}")
# -
benchmark_model(model_path, device="CPU", seconds=15, api="async")
benchmark_model(model_path, device="AUTO", seconds=15, api="async")
benchmark_model(model_path, device="GPU", seconds=15, api="async")
benchmark_model(model_path, device="MULTI:CPU,GPU", seconds=15, api="async")
| notebooks/104-model-tools/104-model-tools.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: brain-decoding_3.6
# language: python
# name: brain-decoding_3.6
# ---
# # Extracting ROI and Dimensionality Reduction
#
# In this notebook, we will parcellate the brain using the localizer runs in the experiment, in order to build masks eventually.
#
# ## Goals
# * Extract visual system masks from the localizer runs.
# * Explore dimensionality reduction.
from packages import *
# %matplotlib inline
from nilearn.decomposition import DictLearning
from nilearn.regions import Parcellations, RegionExtractor
from nilearn.image import mean_img, index_img
from scipy.ndimage.measurements import label
from collections import Counter
from _1_file_acquisition import get_subject_images
# We will need to write some new code to get the localizer files.
def get_localizer_files(subject, data_dir):
output = {}
output['localizer'] = localizer = {}
fmriprep_dir = os.path.join(data_dir, 'fmri', 'fmriprep', get_subject_dir(subject))
for ses in SESSIONS:
session = {}
ses_dir = os.path.join(fmriprep_dir, get_session_dir(ses), 'func')
files = os.listdir(ses_dir)
for file in files:
if 'localizer' not in file:
continue
if 'preproc' in file:
session['preproc'] = load_file(os.path.join(ses_dir, file))[0]
elif '-aseg' in file:
session['aseg'] = load_file(os.path.join(ses_dir, file))[0]
elif 'brainmask' in file:
session['brainmask'] = load_file(os.path.join(ses_dir, file))[0]
if len(session) != 0:
localizer[ses] = session
spm_anat = get_subject_images(subject=subject, data_dir=data_dir, spm_reqs='*', anat_image=True)
output['spm'] = spm_anat['spm']
output['anat'] = spm_anat['anat']
return output
images = get_localizer_files(subject=1, data_dir=DATA_DIR)
timeseries = [images['localizer'][i]['preproc'] for i in images['localizer']]
from nilearn.image import concat_imgs
combined_img = concat_imgs(timeseries)  # concatenate the localizer runs into a single 4D image
PLOTS['stat_map'](index_img(combined_img, 50), images['anat']['preproc'])
# ## Dictionary Learning
#
# Let's begin by running Dictionary Learning to decompose the whole brain into components.
n_comps = 17
dict_learn = DictLearning(n_components=n_comps, smoothing_fwhm=9.,standardize=True,
random_state=0, n_jobs=-2, memory='nilearn_cache', memory_level=1)
dict_learn.fit(combined_img)
components_img = dict_learn.components_img_
if save:
numpy_save(components_img.get_fdata(), 'comps_dict-learning_{}'.format(n_comps), os.path.join(get_subject_dir(subject), 'roi'))
comps = [index_img(components_img, i) for i in range(n_comps)]
p = None
C = {}
C_ = {}
for i in range(n_comps):
C[i] = Counter(comps[i].get_fdata().flatten())
print('All {} components together account for {}% of all voxels.'.format(n_comps, np.sum([np.product(shp) - C[i][0] for i in range(n_comps)])*100 / np.product(shp)))
PLOTS['prob_atlas'](components_img, images['anat']['preproc'], view_type='filled_contours',
title='Dictionary Learning w/ {} comps'.format(n_comps), cut_coords =(0, -70, 28))
# ### Investigating the components
def remove_disconnected_comps(mask, threshold = 0.90):
"""
@ labeled_data: array-like, the voxels labeled by group
@ threshold: double, percentage of non-zero data points that will not be modified
"""
mask_filtered = np.array(mask)
connection = np.ones((3,3,3))
labeled, ncomponents = label(mask_filtered, connection)
c = Counter(labeled.flatten())
c.pop(0, None) # get rid of 0s
N = sum(c.values()) # total data points
remaining = N # number of data points untouched
uncommon_to_common = sorted(c, key=c.get)
removed = 0
for group_label in uncommon_to_common[:]:
group_size = c.get(group_label)
if remaining - group_size < N*threshold:
break
mask_filtered[labeled == group_label] = 0
remaining-=group_size
removed+=1
    print('Removed {} out of {} non-zero voxels, eliminating {} out of {} connected components'.format(N-remaining, N, removed, ncomponents))
return mask_filtered
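# The pruning loop above drops the smallest connected components first, and stops as soon as removing another would leave fewer than `threshold` of the non-zero voxels. The same logic can be sketched with plain dictionaries (the group sizes below are made up for this illustration):

```python
def groups_to_drop(group_sizes, threshold=0.90):
    # group_sizes: mapping {label: voxel count}; drop the smallest groups
    # as long as at least `threshold` of all voxels remain.
    N = sum(group_sizes.values())
    remaining = N
    dropped = []
    for label in sorted(group_sizes, key=group_sizes.get):  # smallest first
        size = group_sizes[label]
        if remaining - size < N * threshold:
            break
        dropped.append(label)
        remaining -= size
    return dropped, remaining

# Four components of 80, 9, 6 and 5 voxels; keep at least 90% of 100 voxels.
dropped, remaining = groups_to_drop({1: 80, 2: 9, 3: 6, 4: 5}, threshold=0.90)
print(dropped, remaining)  # [4] 95
```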
# Running all of the components at the same time is RAM-heavy; let's do it one at a time below.
# +
# i = 14
# if run:
# p = PLOTS['html'](comps[i], anat_file, threshold ='auto', cut_coords=[4, -95, 10])
# else:
# p = PLOTS['roi'](comps[i], anat_file, threshold ='auto', cut_coords=[4, -95, 10])
# print('{}% of all voxels.'.format((np.product(shp) - C[i][0]) * 100 / np.product(shp)))
# p
# -
# ### Primary visual
primary_viz_i = 14
if run:
p = PLOTS['html'](comps[primary_viz_i], anat_file, threshold ='auto', cut_coords=[4, -95, 10])
else:
p = PLOTS['stat_map'](comps[primary_viz_i], anat_file, threshold ='auto', cut_coords=[4, -95, 10])
print('{}% of all voxels.'.format((np.product(shp) - C[primary_viz_i][0]) * 100 / np.product(shp)))
p
arr = np.array(comps[primary_viz_i].get_fdata())
arr[arr <= 0.] = np.nan
qth = np.nanpercentile(arr, 10)
arr = np.nan_to_num(arr)
arr[arr > qth] = 1.
arr[arr <= qth] = 0.
primary_viz_mask= arr.astype(int)
primary_viz_mask = remove_disconnected_comps(primary_viz_mask)
C_['primary_viz'] = Counter(primary_viz_mask.flatten())
print('{}% of primary visual voxels ({}) remain.'.format(100 * C_['primary_viz'][1] / (np.product(shp) - C[primary_viz_i][0]), C_['primary_viz'][1]))
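# The thresholding steps above keep roughly the top 90% of the positive component loadings. A miniature NumPy check of the same recipe, on made-up values:

```python
import numpy as np

demo = np.array([-1.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
demo_arr = demo.copy()
demo_arr[demo_arr <= 0.0] = np.nan          # ignore non-positive loadings
demo_qth = np.nanpercentile(demo_arr, 10)   # 10th percentile of the positive values
demo_arr = np.nan_to_num(demo_arr)          # NaNs back to 0 so they fall below threshold
demo_mask = (demo_arr > demo_qth).astype(int)
print(demo_qth, demo_mask.sum())  # 1.0 9
```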
if run:
    p = PLOTS['html'](nibabel.Nifti1Image(primary_viz_mask,affine), anat_file, title='Primary Visual Mask')
p
# ### Lateral and medial visual
# +
lateral_viz_i = 2
if run:
p = PLOTS['html'](comps[lateral_viz_i], anat_file, threshold ='auto', cut_coords=[4, -95, 10])
else:
p = PLOTS['stat_map'](comps[lateral_viz_i], anat_file, threshold ='auto', cut_coords=[40, -95, 10])
print('{}% of all voxels.'.format((np.product(shp) - C[lateral_viz_i][0]) * 100 / np.product(shp)))
p
# -
arr = np.array(comps[lateral_viz_i].get_fdata())
arr[arr <= 0.] = np.nan
qth = np.nanpercentile(arr, 10)
arr = np.nan_to_num(arr)
arr[arr > qth] = 1.
arr[arr <= qth] = 0.
lateral_viz_mask= arr.astype(int)
lateral_viz_mask = remove_disconnected_comps(lateral_viz_mask)
C_['lateral_viz'] = Counter(lateral_viz_mask.flatten())
print('{}% of lateral visual voxels ({}) remain.'.format(100 * C_['lateral_viz'][1] / (np.product(shp) - C[lateral_viz_i][0]), C_['lateral_viz'][1]))
if run:
p = PLOTS['html'](nibabel.Nifti1Image(lateral_viz_mask,affine), anat_file, title='Lateral Visual Mask')
p
# ### Gray matter mask
#
# Using the aseg_roi template, we can easily create a mask region for the gray matter areas and exclude the cerebellum.
labels = load_file(os.path.join(MISC_DIR, 'aseg_labels.tsv'))[0]
regions_wanted = ['Left-Cerebral-Cortex', 'Right-Cerebral-Cortex']
labels.head(5)
regions = list(labels[labels['LABEL'].isin(regions_wanted)].ID)
print('Will look for voxels with values in {}'.format(regions))
labels[labels['LABEL'].isin(regions_wanted)]
# +
# TO SEE WHAT THE ASEG FILE LOOKS LIKE
# if run:
# p = PLOTS['html'](aseg, anat_file, title='aseg_roi')
# p
# -
gm_mask = np.isin(aseg.get_fdata().astype(int), regions).astype(int)
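# The gray-matter mask above is just np.isin applied to the label volume. A tiny self-contained check, with made-up label IDs standing in for the aseg values:

```python
import numpy as np

# Toy label volume and two "wanted" region IDs (illustrative values only)
aseg_demo = np.array([[0, 3, 42],
                      [3, 17, 3]])
regions_demo = [3, 42]
mask = np.isin(aseg_demo, regions_demo).astype(int)
print(mask)  # 1 where the label is in regions_demo, else 0
```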
# +
if run:
p = PLOTS['html'](nibabel.Nifti1Image(gm_mask,affine), anat_file, title='GM mask')
else:
p = PLOTS['stat_map'](nibabel.Nifti1Image(gm_mask,affine), anat_file, title='GM mask')
p
# -
if save:
numpy_save(gm_mask, 'mask_gm', os.path.join(get_subject_dir(subject), 'roi'))
# ### Parcellations - ward
n_parcels = 20000
if run:
ward = Parcellations(method='ward', n_parcels=n_parcels,
standardize=False, smoothing_fwhm=6.,
memory='nilearn_cache',n_jobs=-2, memory_level=4)
ward.fit(timeseries)
ward_labels_img = ward.labels_img_
# Comparing the ward-reduced data to the functional data is a way to see if the mask we created is appropriate
comp_i = 0
mean_func_localizer_img = mean_img(timeseries[comp_i])
vmin = np.min(mean_func_localizer_img.get_fdata())
vmax = np.max(mean_func_localizer_img.get_fdata())
ward_reduced = ward.transform(timeseries)
ward_compressed = ward.inverse_transform(ward_reduced)
# The reduced representation flattens all of the parcels.
# The compressed representation returns to the original shape but compressed.
print(ward_compressed[comp_i].shape)
assert ward_compressed[comp_i].shape == timeseries[comp_i].shape
PLOTS['epi'](mean_img(timeseries[comp_i]),title= 'Original with {} voxels'.format(np.product(shp)),cut_coords=(-8,33,15),vmax=vmax, vmin = vmin)
PLOTS['epi'](mean_img(ward_compressed[comp_i]), title='Compressed to {} parcels'.format(n_parcels), cut_coords=(-8,33,15), vmax=vmax, vmin = vmin)
if save:
save_pickle(ward_labels_img, 'parcellation_ward_{}'.format(n_parcels), roi_dir)
| src/_3_extract_roi_dim_reduce.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
from time import time
from utils import plot_images, read_mnist_data
from os.path import exists
data, _ = read_mnist_data()
def generator(z, training=True):
x_gen = tf.layers.dense(z, 1024, tf.nn.relu, name='fc1')
x_gen = tf.layers.batch_normalization(x_gen, training=training, name='bn1')
print(x_gen.shape)
    x_gen = tf.layers.dense(x_gen, 7 * 7 * 128, tf.nn.relu, name='fc2')
x_gen = tf.layers.batch_normalization(x_gen, training=training, name='bn2')
print(x_gen.shape)
x_gen = tf.reshape(x_gen, (-1, 7, 7, 128))
print(x_gen.shape)
x_gen = tf.layers.conv2d_transpose(x_gen, 128, (5, 5), (2, 2), padding='same', activation=tf.nn.relu, name='tconv1')
print(x_gen.shape)
x_gen = tf.layers.conv2d_transpose(x_gen, 1, (5, 5), (2, 2), padding='same', activation=tf.nn.relu, name='tconv2')
print(x_gen.shape)
x_gen = tf.reshape(x_gen, (-1, 784))
return x_gen
def discriminator(x, training=True):
p = tf.reshape(x, (-1, 28, 28, 1))
print(p.shape)
p = tf.layers.conv2d(p, 11, (5, 5), (2, 2), padding='same', activation=tf.nn.relu, name='conv1')
print(p.shape)
p = tf.layers.conv2d(p, 75, (5, 5), (2, 2), padding='same', activation=tf.nn.relu, name='conv2')
print(p.shape)
p = tf.reshape(p, (-1, 7 * 7 * 75))
print(p.shape)
p = tf.layers.dense(p, 1024, tf.nn.relu, name='fc1')
p = tf.layers.batch_normalization(p, training=training, name='bn1')
print(p.shape)
p = tf.layers.dense(p, 1, tf.nn.sigmoid, name='fc2')
print(p.shape)
return p
# +
z_dim = 128
tf.reset_default_graph()
with tf.name_scope('inputs'):
x = tf.placeholder(tf.float32, (None, 784), 'x')
z = tf.placeholder(tf.float32, (None, z_dim), 'z')
with tf.variable_scope('generator'):
x_gen = generator(z)
with tf.variable_scope('discriminator'):
p_x = discriminator(x)
tf.get_variable_scope().reuse_variables()
p_g = discriminator(x_gen)
with tf.name_scope('optimizer'):
loss_g = 0.5 * tf.reduce_mean((p_g - 1)**2)
loss_d = 0.5 * (tf.reduce_mean((p_x - 1)**2) + tf.reduce_mean(p_g**2))
optimizer_g = tf.train.AdamOptimizer(1e-4).minimize(loss_g, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator'))
optimizer_d = tf.train.AdamOptimizer(1e-4).minimize(loss_d, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'discriminator'))
tf.summary.scalar('loss_g', loss_g)
tf.summary.scalar('loss_d', loss_d)
summ = tf.summary.merge_all()
# -
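# A quick NumPy sanity check of the least-squares GAN losses defined above: a perfect discriminator (real scores at 1, fake scores at 0) yields loss_d = 0 and leaves the generator at loss_g = 0.5.

```python
import numpy as np

def lsgan_losses(p_x, p_g):
    # Generator wants D(G(z)) -> 1; discriminator wants D(x) -> 1 and D(G(z)) -> 0.
    loss_g = 0.5 * np.mean((p_g - 1.0) ** 2)
    loss_d = 0.5 * (np.mean((p_x - 1.0) ** 2) + np.mean(p_g ** 2))
    return loss_g, loss_d

# Perfect discriminator: real samples scored 1, fakes scored 0
lg, ld = lsgan_losses(np.array([1.0, 1.0]), np.array([0.0, 0.0]))
print(lg, ld)  # 0.5 0.0
```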
def plot_generated_images(sess):
generated_images = sess.run(x_gen, feed_dict={z: np.random.randn(11, z_dim)})
plot_images(generated_images)
# +
batch_size = 256
batches_per_epoch = int(data.train.num_examples / batch_size)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
def optimize(epochs=1):
start_time = time()
writer = tf.summary.FileWriter('checkpoints/LSGAN', tf.get_default_graph())
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
for batch in range(batches_per_epoch):
x_batch, _ = data.train.next_batch(batch_size)
z_batch = np.random.randn(batch_size, z_dim)
for _ in range(2):
sess.run(optimizer_g, feed_dict={x: x_batch, z: z_batch})
sess.run(optimizer_d, feed_dict={x: x_batch, z: z_batch})
if batch % 1000 == 0:
writer.add_summary(sess.run(summ, feed_dict={x: x_batch, z: z_batch}), global_step=epoch * batches_per_epoch + batch)
print("{} / {} ({}%)".format(epoch + 1, epochs, np.round((epoch + 1) / epochs * 100, 2)))
plot_generated_images(sess)
saver.save(sess, 'checkpoints/LSGAN/LSGAN', write_meta_graph=False)
print("Time taken - {}s".format(np.round(time() - start_time, 2)))
# -
if exists('checkpoints/LSGAN/LSGAN.data-00000-of-00001'):
with tf.Session(config=config) as sess:
saver = tf.train.Saver()
saver.restore(sess, 'checkpoints/LSGAN/LSGAN')
for _ in range(10):
plot_generated_images(sess)
else:
optimize(10)
| Research-Papers/GANs/LSGAN/Notebooks/LSGAN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
file=open('test.txt',"w")
file.write("hello world\n"*5)
file.close()
file=open("test.txt","r+")
text_upper=file.read().upper()
file.seek(0)
file.write(text_upper)
file.close()
with open('test.txt',"r+") as file:
lines=file.readlines()
third_line=lines[2].capitalize()
file.seek(24)
file.write(third_line)
with open('test.txt', 'a') as file:
file.write("I Love Python")
# -
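# The hard-coded file.seek(24) above works because the first two uppercased lines ("HELLO WORLD\n") are 12 characters each. A sketch that computes the offset instead of hard-coding it (written to a temporary file so it does not touch test.txt):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("HELLO WORLD\n" * 5)

with open(path) as f:
    lines = f.readlines()
# Characters before the third line (a safe seek offset for this ASCII file)
offset = sum(len(line) for line in lines[:2])
print(offset)  # 24
```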
| Python_File_Task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Overview
#
# This a notebook that inspects the results of a WarpX simulation.
# Import statements
import yt ; yt.funcs.mylog.setLevel(50)
import numpy as np
import scipy.constants as scc
import matplotlib.pyplot as plt
# %matplotlib notebook
# ## Read data in the simulation frame
# # Instruction
#
# Enter the path of the data you wish to visualize below. Then execute the cells one by one, by selecting them with your mouse and typing `Shift + Enter`
plotfile = './diags/plotfiles/plt00001'
field = 'Ex'
species = 'electron'
ds = yt.load( plotfile ) # Load the plotfile
# ds.field_list # Print all available quantities
# ### Plot data with yt
sl = yt.SlicePlot(ds, 2, field, aspect=.2) # Create a sliceplot object
sl.annotate_particles(width=(10.e-6, 'm'), p_size=2, ptype=species, col='black')
sl.annotate_grids() # Show grids
sl.show() # Show the plot
# ### Store quantities in numpy arrays, and plot with matplotlib
# +
# Get field quantities
all_data_level_0 = ds.covering_grid(level=0,left_edge=ds.domain_left_edge, dims=ds.domain_dimensions)
Bx = all_data_level_0['boxlib', field].v.squeeze()
Dx = ds.domain_width/ds.domain_dimensions
extent = [ds.domain_left_edge[ds.dimensionality-1], ds.domain_right_edge[ds.dimensionality-1],
ds.domain_left_edge[0], ds.domain_right_edge[0] ]
# Get particle quantities
ad = ds.all_data()
x = ad[species, 'particle_position_x'].v
z = ad[species, 'particle_position_y'].v
# Plot image
plt.figure()
plt.imshow(Bx, extent=extent)
plt.scatter(z,x,s=.1,c='k')
# -
# ## Read data back-transformed to the lab frame when the simulation runs in the boosted frame (example: 2D run)
# read_raw_data.py is located in warpx/Tools.
import os, glob
import read_raw_data
# +
iteration = 1
snapshot = './lab_frame_data/' + 'snapshot' + str(iteration).zfill(5)
header = './lab_frame_data/Header'
allrd, info = read_raw_data.read_lab_snapshot(snapshot, header) # Read field data
F = allrd[field]
print( "Available info: ", *list(info.keys()) )
print("Available fields: ", info['field_names'])
nx = info['nx']
nz = info['nz']
x = info['x']
z = info['z']
xbo = read_raw_data.get_particle_field(snapshot, species, 'x') # Read particle data
ybo = read_raw_data.get_particle_field(snapshot, species, 'y')
zbo = read_raw_data.get_particle_field(snapshot, species, 'z')
uzbo = read_raw_data.get_particle_field(snapshot, species, 'uz')
plt.figure(figsize=(6, 3))
extent = np.array([info['zmin'], info['zmax'], info['xmin'], info['xmax']])
plt.imshow(F, aspect='auto', extent=extent, cmap='seismic')
plt.colorbar()
plt.plot(zbo, xbo, 'g.', markersize=1.)
# -
# ## Read back-transformed data with hdf5 format (example: 3D run)
import h5py
import matplotlib.pyplot as plt
f = h5py.File('HDF5_lab_frame_data/snapshot00003', 'r')
print( list(f.keys()) )
# plt.figure()
plt.imshow(f['Ey'][:, f['Ey'].shape[1] // 2, :])  # 2D slice of the 3D field; the mid-plane in y is an illustrative choice
| Tools/PostProcessing/Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import random
import tensorflow as tf
from tensorflow.keras import layers, losses
from tensorflow.keras.models import Model
import os, sys
import librosa as lb
from os import listdir
from os.path import isfile, join
# %matplotlib inline
class File_charge:
"""
    The File_charge class instantiates a DataFrame object containing the paths of the audio files
    contained in each subfolder of the dataset
"""
def __init__(self, path):
self.path = path
def load_file(self):
"""
        The load_file function returns a DataFrame made of the paths of the audio files contained
        in a subfolder of the dataset
"""
dirs = os.listdir(self.path)
df = list()
for dir in dirs:
df.append((self.path+"/"+dir))
df = pd.DataFrame(df, columns = ['audio_file'])
return df
# Import the data
machine_id="01"
machine_id_int=1
machine_id2="03"
machine_id_int2=3
nb_pixels=32
mel_bins=128
nb_img=10 # (computed in the "Conversion des fichiers audio en images" code)
"""
Change the names of the folders into which the normalized log-mel spectrogram images were imported
"""
train_path="C:/Users/Quasarlight/Desktop/Formation data/Projet/Donnees_sonores/ToyCar_train_img_normalized/id_"+machine_id
train_path2="C:/Users/Quasarlight/Desktop/Formation data/Projet/Donnees_sonores/Toycar_train_img_normalized/id_"+machine_id2
test_path="C:/Users/Quasarlight/Desktop/Formation data/Projet/Donnees_sonores/ToyCar_test_img_normalized/id_"+machine_id
File_train=pd.read_csv('ToyCar_train_img_normalized/fichier_ToyCar_train.csv')
File_test=pd.read_csv('ToyCar_test_img_normalized/fichier_ToyCar_test.csv')
File_train=File_train[(File_train['machine_id']==machine_id_int) | (File_train['machine_id']==machine_id_int2)]
File_train=File_train.set_index([pd.Index(np.arange(len(File_train)))])
File_test=File_test[File_test['machine_id']==machine_id_int]
File_test=File_test.set_index([pd.Index(np.arange(len(File_test)))])
df_train = File_charge(train_path)
df_train = df_train.load_file()
df_test = File_charge(test_path)
df_test = df_test.load_file()
df_train2 = File_charge(train_path2)
df_train2 = df_train2.load_file()
# Uncomment to combine the machine IDs
#df_train=pd.concat([df_train,df_train2],axis=0)
test_list=[]
erreur=0
red=2
X_train=np.ndarray((len(df_train)//red,nb_pixels,mel_bins))
X_test=np.ndarray((len(df_test)//red,nb_pixels,mel_bins))
# The training images can be selected at random because they all have the same label
for j in range(len(df_train)//red): # reduced dataset for testing the autoencoder
    i=random.choice(range(len(df_train))) # pick images at random
if (np.loadtxt(df_train.iloc[i,0], delimiter =', ').shape==(nb_pixels,mel_bins)):
X_train[j,:,:]=np.loadtxt(df_train.iloc[i,0], delimiter =', ')
else: erreur+=1
print('Train errors =', erreur)
erreur=0
# The test images are grouped by audio clip, because normal and anomalous images must not be mixed
for j in range(len(df_test)//red//nb_img): # reduced dataset for testing the autoencoder
    i=random.choice(range(len(df_test)//nb_img)) # pick a clip at random
test_list+=[i]
for t in range(nb_img):
if (np.loadtxt(df_test.iloc[nb_img*i,0], delimiter =', ').shape==(nb_pixels,mel_bins)):
X_test[(nb_img)*j+t,:,:]=np.loadtxt(df_test.iloc[nb_img*i+t,0], delimiter =', ')
else: erreur+=1
print('Erreurs test=', erreur)
# -
import seaborn as sns
plt.figure(figsize=(15,4))
plt.subplot(131)
X_train_plot = X_train.reshape([X_train.shape[0],nb_pixels,mel_bins])
sns.heatmap(np.rot90(X_train_plot[250,:,:]), cmap='inferno')
X_train.shape
# +
#Reshape the tensors for batching
X_train_batch = X_train.reshape([X_train.shape[0],nb_pixels,mel_bins,1])
X_test_batch = X_test.reshape([X_test.shape[0],nb_pixels,mel_bins,1])
batchsize=250
dataset_train = tf.data.Dataset.from_tensor_slices(X_train_batch)
dataset_train = dataset_train.batch(batch_size=batchsize)
dataset_test = tf.data.Dataset.from_tensor_slices(X_test_batch)
dataset_test = dataset_test.batch(batch_size=1)
# +
#Autoencoder model
class Autoencoder(Model):
def __init__(self):
super(Autoencoder, self).__init__()
self.encoder_decoder = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=64, kernel_size=(5,5),input_shape=(nb_pixels,mel_bins,1), activation='relu', padding='same'),
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'),
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Conv2D(filters=16, kernel_size=(3,3), activation='relu', padding='same'),
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=(3,3), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D(size=(2,2)),
tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=(3,3), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D(size=(2,2)),
tf.keras.layers.Conv2DTranspose(filters=64, kernel_size=(5,5),activation='relu', padding='same'),
tf.keras.layers.UpSampling2D(size=(2,2)),
tf.keras.layers.Conv2D(filters=1, kernel_size=(3, 3),activation='sigmoid', padding='same')
])
def call(self, input_features):
reconstructed = self.encoder_decoder(input_features)
return reconstructed
#Compute the reconstruction error
def loss(model, original):
reconstruction_error = tf.reduce_mean(tf.square(tf.subtract(tf.cast(model(original),dtype=tf.float64),tf.cast(original,dtype=tf.float64))))
return reconstruction_error
#Define the training step
def train(loss, model, opt, original):
with tf.GradientTape() as tape:
gradients = tape.gradient(loss(model, original), model.trainable_variables)
gradient_variables = zip(gradients, model.trainable_variables)
opt.apply_gradients(gradient_variables)
#Post-processing functions
def predict(model, data, threshold):
rec_loss = loss(model, data).numpy()
return tf.math.less(rec_loss, threshold)
def print_stats(predictions, labels):
print("Accuracy = ",np.round(accuracy_score(labels, predictions),3))
print("Precision = ",np.round(precision_score(labels, predictions),3))
print("Recall = ",np.round(recall_score(labels, predictions),3))
# +
#Training parameters
epochs=30
learning_rate=1e-4
#Instantiation
autoencoder = Autoencoder()
opt = tf.optimizers.Adam(learning_rate=learning_rate)
loss_e=[]
loss_v=[]
#The training loop is written out step by step here, but we could simply call fit since Autoencoder inherits from Model
for epoch in range(epochs):
print('epoch=',epoch)
for step, batch_features in enumerate(dataset_train):
train(loss, autoencoder, opt, batch_features)
loss_values = loss(autoencoder, batch_features)
loss_v+=[loss_values.numpy()]
if (step%10==0):
print('mean loss =', np.round(loss_values.numpy(),4))
print('deviation =',np.round(np.std(loss_v[-10:]),6))
loss_e+=[loss_values.numpy()]
plt.figure(figsize=(8,8))
plt.plot(range(0,epoch+1),loss_e)
autoencoder.save_weights('C:/Users/Quasarlight/Desktop/Formation data/Projet/Donnees_sonores/TrainingAE/checkpoints/'+machine_id+'/my_checkpoint')
# +
#Resume from a saved model
autoencoder = Autoencoder()
autoencoder.load_weights('C:/Users/Quasarlight/Desktop/Formation data/Projet/Donnees_sonores/TrainingAE/checkpoints/'+machine_id+'/my_checkpoint')
epochs=20
learning_rate=1e-3
#Instantiation
opt = tf.optimizers.Adam(learning_rate=learning_rate)
loss_e=[]
#The training loop is written out step by step here, but we could simply call fit since Autoencoder inherits from Model
for epoch in range(epochs):
print('epoch=',epoch)
for step, batch_features in enumerate(dataset_train):
train(loss, autoencoder, opt, batch_features)
loss_values = loss(autoencoder, batch_features)
if (step==1):
print('mean loss =', np.round(loss_values.numpy(),4))
loss_e+=[loss_values.numpy()]
plt.figure()
plt.plot(range(1,len(loss_e)+1),loss_e)
# +
#Compare inputs and outputs to check that the autoencoder works properly
reconstructions = autoencoder.predict(dataset_train)
X_train_plot = X_train.reshape([X_train.shape[0],nb_pixels,mel_bins])
reconstructions_plot = reconstructions.reshape([X_train.shape[0],nb_pixels,mel_bins])
i=random.choice(range(X_train.shape[0]))
plt.figure(figsize=(15,4))
plt.subplot(131)
sns.heatmap(np.rot90(X_train_plot[i,:,:]), cmap='inferno')
plt.title("Original image")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.subplot(132)
sns.heatmap(np.rot90(X_train_plot[i+1,:,:]), cmap='inferno')
plt.title("Original image")
plt.subplot(133)
sns.heatmap(np.rot90(X_train_plot[i+2,:,:]), cmap='inferno')
plt.title("Original image")
plt.show()
plt.figure(figsize=(15,4))
plt.subplot(131)
sns.heatmap(np.rot90(reconstructions_plot[i,:,:]), cmap='inferno')
plt.title("Reconstructed image")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.subplot(132)
sns.heatmap(np.rot90(reconstructions_plot[i+1,:,:]), cmap='inferno')
plt.title("Reconstructed image")
plt.subplot(133)
sns.heatmap(np.rot90(reconstructions_plot[i+2,:,:]), cmap='inferno')
plt.title("Reconstructed image")
plt.show()
train_loss=0
for i in range(reconstructions.shape[0]):
train_loss += tf.reduce_mean(tf.square(tf.subtract(tf.cast(reconstructions_plot[i,:,:],dtype=tf.float64),tf.cast(X_train_plot[i,:,:],dtype=tf.float64))))
train_loss/=reconstructions.shape[0]
print('Training loss = ',train_loss.numpy())
# +
from sklearn.metrics import accuracy_score,precision_score,recall_score
losses=[]
accuracy=[]
precision=[]
recall=[]
thresholds=[]
nb_test=10
test_labels=np.ndarray((len(X_test)//(nb_img)+1,1),bool)
#Extract the label vector
for i in range(len(test_list)):
test_labels[i]=File_test['label'][test_list[i]]
#Compute predictions and metrics for several threshold values
for x in range(0,10):
threshold=(train_loss)*(1+x/10)
thresholds+=[threshold]
preds_img=np.ndarray((len(X_test)+1,1),bool)
losses=[]
preds=np.ndarray(((len(X_test)//(nb_img)+1),1),bool)
i=0
for features in dataset_test:
preds_img[i] = predict(autoencoder,features,threshold)
losses+=[loss(autoencoder,features).numpy()]
i+=1
if (i%nb_test==0):
if(sum(preds_img[i-nb_test:i-1])>(nb_test-3)):
preds[i//nb_test-1]=True
else:
preds[i//nb_test-1]=False
print('Threshold',np.round(threshold.numpy(),4))
accuracy+=[np.round(accuracy_score(test_labels[:-1],preds[:-1]),3)]
precision+=[np.round(precision_score(test_labels[:-1],preds[:-1]),3)]
recall+=[np.round(recall_score(test_labels[:-1],preds[:-1]),3)]
#Plot the metrics
plt.figure(figsize=(15,8))
plt.subplot(131)
plt.plot(thresholds,accuracy)
plt.hlines(1-test_labels.sum()/len(test_labels),thresholds[0],thresholds[len(thresholds)-1],'g',label='proportion True/Total')
plt.hlines(np.max(accuracy),thresholds[0],thresholds[len(thresholds)-1],'r',linestyles= 'dashed',label='max accuracy')
plt.title('Accuracy')
plt.subplot(132)
plt.plot(thresholds,precision)
plt.title('Precision')
plt.subplot(133)
plt.plot(thresholds,recall)
plt.title('Recall')
# +
#Plot the distribution of reconstruction errors
test_labels_plot=[]
for n in range(len(test_labels)):
    if test_labels[n]==True:
        for i in range(nb_test):
            test_labels_plot+=['Normal']
    else:
        for i in range(nb_test):
            test_labels_plot+=['Anomalous']
sns.displot(x=losses,hue=test_labels_plot[0:len(losses)],kind="hist")
plt.title('Loss distribution')
plt.show();
# +
#Plot anomalies
reconstructions = autoencoder.predict(dataset_test)
X_test_plot = X_test.reshape([X_test.shape[0],nb_pixels,mel_bins])
reconstructions_plot = reconstructions.reshape([X_test.shape[0],nb_pixels,mel_bins])
anomalies=[]
for x in range(len(test_list)):
if (File_test['label'][test_list[x]]==False):
anomalies+=[x]
i=random.choice((anomalies))
plt.figure(figsize=(15,4))
plt.subplot(131)
sns.heatmap(np.rot90(X_test_plot[i,:,:]), cmap='inferno')
plt.title("Original image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
j=random.choice((anomalies))
plt.subplot(132)
sns.heatmap(np.rot90(X_test_plot[j,:,:]), cmap='inferno')
plt.title("Original image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
k=random.choice((anomalies))
plt.subplot(133)
sns.heatmap(np.rot90(X_test_plot[k,:,:]), cmap='inferno')
plt.title("Original image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.show()
plt.figure(figsize=(15,4))
plt.subplot(131)
sns.heatmap(np.rot90(reconstructions_plot[i,:,:]), cmap='inferno')
plt.title("Reconstructed image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.subplot(132)
sns.heatmap(np.rot90(reconstructions_plot[j,:,:]), cmap='inferno')
plt.title("Reconstructed image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.subplot(133)
sns.heatmap(np.rot90(reconstructions_plot[k,:,:]), cmap='inferno')
plt.title("Reconstructed image (anomaly)")
plt.xlabel('Time')
plt.ylabel('Frequency (mel)')
plt.show()
# +
threshold=train_loss*(1-12/100)
preds_img=np.ndarray((len(X_test)+1,1),bool)
preds=np.ndarray(((len(X_test)//(nb_img)+1),1),bool)
i=0
for features in dataset_test:
preds_img[i] = predict(autoencoder,features,threshold)
i+=1
if (i%nb_test==0):
if(sum(preds_img[i-nb_test:i-1])>3):
preds[i//nb_test-1]=True
else:
preds[i//nb_test-1]=False
print('accuracy =', np.round(accuracy_score(test_labels[:-1],preds[:-1]),3))
print('precision = ', np.round(precision_score(test_labels[:-1],preds[:-1]),3))
print('recall = ', np.round(recall_score(test_labels[:-1],preds[:-1]),3))
# -
for i in range(20):
print('Prediction ',preds[i],' Label ',test_labels[i])
| AutoEncodeur/Modele autoencodeur v1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Find the first Armstrong (narcissistic) number in the given range: a number
# equal to the sum of its digits, each raised to the number of digits.
for num in range(1042000,702648265):
    pw = len(str(num))
    temp = num
    digit_sum = 0  # renamed to avoid shadowing the built-in sum()
    while(temp != 0):
        r = temp % 10
        digit_sum = digit_sum + r**pw
        temp = temp // 10
    if (num == digit_sum):
        print("The first Armstrong number is ",num)
        break
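# The same digit test can be factored into a reusable predicate. This is a sketch; the helper name `is_armstrong` is introduced here and is not part of the original assignment.

```python
def is_armstrong(n: int) -> bool:
    """True if n equals the sum of its digits, each raised to the digit count."""
    digits = str(n)
    p = len(digits)
    return n == sum(int(d) ** p for d in digits)

# 1634 = 1**4 + 6**4 + 3**4 + 4**4 = 1 + 1296 + 81 + 256
print(is_armstrong(1634))  # → True
print(is_armstrong(1635))  # → False
```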
| day 4 python assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting and formatting graphs
# ## Objectives
#
# - Plot graphs of single-variable functions;
# - Understand the basics of the _artist_ class (_axes_, _figures_, _subplots_);
# - Change properties of lines, markers and legends;
# - Insert _labels_, titles and simple annotations;
# ## Introduction to data visualization
#
# Data visualization is a rather old field of knowledge that was brought back into the spotlight quite recently with the expansion of "Big Data". Its main goal is to represent data and information graphically through visual elements such as tables, charts, maps and infographics. Several tools are available to make data interpretation clearer, more understandable and more accessible.
#
# In the context of data analysis, data visualization is a fundamental component for building business reports, dashboards and multidimensional charts that apply to the most diverse disciplines, such as Economics, Political Science and, above all, the whole core of the exact sciences (Mathematics, Statistics and Computing).
#
# In his book _The Visual Display of Quantitative Information_, [[Edward Tufte]](https://www.edwardtufte.com/tufte/), known as the guru of design applied to data visualization, states that every year the world produces somewhere between 900 billion and 2 trillion printed images of charts. He points out that the design of a statistical graphic, for example, is a universal matter similar to Mathematics and is not tied to the unique features of a particular language. Therefore, learning data visualization to communicate data effectively is as important as learning Portuguese to write better.
#
# You can see a suggested list of good blogs and books on data visualization on the learning pages of the Tableau software [[TabelauBlogs]](https://www.tableau.com/learn/articles/best-data-visualization-blogs), [[TabelauBooks]](https://www.tableau.com/learn/articles/books-about-data-visualization).
#
# ## _Data storytelling_
#
# _Data storytelling_ is the process of "telling stories through data". [[<NAME>]](http://www.storytellingwithdata.com), a data engineer at Google, after noticing how poorly the amount of information produced in the world is sometimes read and communicated, wrote two *best-sellers* on this topic to help people communicate their data and quantitative products better. In her book *Storytelling with Data: A Data Visualization Guide for Business Professionals*, she argues that we are not inherently good at "telling a story" through data. Cole shows, in just a few lessons, what we must learn to communicate effectively through data visualization.
#
# ## Mathematical plotting
#
# _Plotting_ is the term commonly used for sketching graphs of mathematical functions with a computer. Plotting graphs is one of the tasks you will perform most often as a future data scientist or analyst. In this lesson, we introduce you to the world of two-dimensional plotting and show how you can easily visualize data with the *matplotlib* library. We will mainly give an overview of plotting mathematical functions using *arrays* and the vectorized-computation features of *numpy* already covered. Throughout the course, you will learn to produce more interesting plots of a statistical nature.
#
# ## The *matplotlib* library
#
# *Matplotlib* is the best-known Python library for 2D (two-dimensional) plotting of *arrays*. Its philosophy is simple: create simple plots with just a few commands, or just one. <NAME> [[History]](https://matplotlib.org/users/history.html), who passed away in 2012, was the author of this library. In 2008, he wrote that, while looking for a Python solution for 2D plotting, he wanted to have, among other things:
#
# - beautiful, publication-ready graphs;
# - the ability to embed plots in graphical interfaces for application development;
# - code that is easy to understand and handle.
#
# *Matplotlib* is a codebase divided into three parts:
#
# 1. The *pylab* interface: a set of predefined functions in the `matplotlib.pyplot` submodule.
# 2. The *frontend*: a set of classes responsible for creating figures, text, lines, graphs etc. In the *frontend*, all graphical elements are still abstract objects.
# 3. The *backend*: a set of renderers responsible for converting the graphs to devices where they can actually be displayed. [[Rendering]](https://pt.wikipedia.org/wiki/Renderização) is the final product of digital processing. For example, the PS backend is responsible for rendering [[PostScript]](https://www.adobe.com/br/products/postscript.html), while the SVG backend builds scalable vector graphics ([[Scalable Vector Graphics]](https://www.w3.org/Graphics/SVG/)).
#
# See the concept of [[Canvas]](https://en.wikipedia.org/wiki/Canvas_(GUI)).
#
#
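# The *backend* layer can be queried and switched at runtime. Below is a minimal sketch; it assumes a non-interactive environment, so matplotlib's bundled raster backend `Agg` is selected.

```python
import matplotlib

# select a non-interactive backend; best done before importing pyplot
matplotlib.use('Agg')

import matplotlib.pyplot as plt

# query which backend is currently rendering the figures
print(matplotlib.get_backend())  # e.g. 'agg'

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig('line.png')  # the Agg backend renders to a PNG file
```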
# ### Interactive *matplotlib* sessions
#
# Interactive *matplotlib* sessions are enabled through a [[magic command]](https://ipython.readthedocs.io/en/stable/interactive/magics.html):
#
# - In consoles, use `%matplotlib`;
# - In a Jupyter notebook, use `%matplotlib inline`.
#
# Recall that in the previous lesson we used the magic command `%timeit` to time operations.
#
# To use matplotlib fully in this lesson, we will use:
#
# ```python
# # %matplotlib inline
# from matplotlib import pyplot as plt
# ```
#
# The second statement can also be written as
#
# ```python
# import matplotlib.pyplot as plt
# ```
#
# where `plt` is an already standardized *alias*.
# standard call
# %matplotlib inline
import matplotlib.pyplot as plt
# ## Creating simple plots
#
# Let's import *numpy* to take advantage of vectorized computation and plot our first examples.
# +
import numpy as np
x = np.linspace(-10,10,50)
y = x
plt.plot(x,y); # line y = x
# -
# **Example:** plot the graph of the parabola $f(x) = ax^2 + bx + c$ for arbitrary values of $a,b,c$ on the interval $-20 \leq x \leq 20$.
# +
x = np.linspace(-20,20,50)
a,b,c = 2,3,4
y = a*x**2 + b*x + c # f(x)
plt.plot(x,y);
# -
# We can define a function to plot the parabola:
def plota_parabola(a,b,c):
x = np.linspace(-20,21,50)
y = a*x**2 + b*x + c
plt.plot(x,y)
# Now we can study what each coefficient does:
# +
# change the value of a and take b = 2, c = 1
for a in np.linspace(-2,3,10):
plota_parabola(a,2,1)
# +
# change the value of b and take a = 2, c = 1
for b in np.linspace(-2,3,20):
plota_parabola(2,b,1)
# +
# change the value of c and take a = 2, b = 1
for c in np.linspace(-2,3,10):
    plota_parabola(2,1,c) # why don't you see many changes?
# +
# change the values of a, b and c
valores = np.linspace(-2,3,5)
for a in valores:
for b in valores:
for c in valores:
plota_parabola(a,b,c)
# -
# **Example:** plot the graph of the function $g(t) = a\cos(bt + \pi)$ for arbitrary values of $a$ and $b$ on the interval $0 \leq t \leq 2\pi$.
# +
t = np.linspace(0,2*np.pi,50,endpoint=True) # t: angle
a, b = 1, 1
plt.plot(t,a*np.cos(b*t + np.pi));
b = 2
plt.plot(t,a*np.cos(b*t + np.pi));
b = 3
plt.plot(t,a*np.cos(b*t + np.pi));
# -
# The colors and markings on the graphs are all defaults. Let's see how to change all of this.
# ## Changing line properties and styles
# Change:
#
# - colors with `color` or `c`,
# - line width with `linewidth` or `lw`
# - line style with `linestyle` or `ls`
# - marker symbol with `marker`
# - marker edge width with `markeredgewidth` or `mew`
# - marker edge color with `markeredgecolor` or `mec`
# - marker face color with `markerfacecolor` or `mfc`
# - transparency with `alpha` in the range [0,1]
# +
g = lambda a,b: a*np.cos(b*t + np.pi) # assumes the previous t
# study each example
# from the 3rd argument on, the order may change
plt.plot(t,g(1,1),color='c',linewidth=5,linestyle='-.',alpha=.3)
plt.plot(t,g(1,2),c='g',ls='-',lw='.7',marker='s',mfc='y',ms=8)
plt.plot(t,g(1,3),c='#e26d5a',ls=':', marker='d',mec='k',mew=2.0);
# -
# Colors and line styles can be specified in shortened form and in different orders using a format specifier.
plt.plot(t,g(1,1),'yv') # yellow; triangle down;
plt.plot(t,g(1,2),':c+') # dotted; cyan; plus;
plt.plot(t,-g(2,2),'>-.r'); # triangle right; dash-dot; red;
# ### Multiple plotting
# The example above could be done as a multiple plot with 3 blocks of the form `(x,y,'fmt')`, where `x` and `y` are the coordinate-axis data and `fmt` is a format string.
plt.plot(t,g(1,1),'yv', t,g(1,2),':c+', t,-g(2,2),'>-.r'); # 3 sequential blocks
# To check all line property and style options, see `plt.plot?`.
# ### Figure specification
#
# Use `plt.figure` to create a figure environment and change:
#
# - the width and height (in inches) with `figsize = (width,height)`. The default is (6.4,4.8).
# - the resolution (in dots per inch) with `dpi`. The default is 100.
# - the *background* color with `facecolor`. The default is `w` (white).
# **Example:** Plot the graphs of $h_1(x) = a\sqrt{x}$ and $h_2(x) = be^{\frac{x}{c}}$ for free choices of a,b,c and the properties above.
# +
x = np.linspace(0,10,50,endpoint=True)
h1, h2 = lambda a: a*np.sqrt(x), lambda b,c: b*np.exp(x/c)
plt.figure(figsize=(8,6), dpi=200, facecolor='#e0eeee')
plt.plot(x,h1(.9),x,h2(1,9));
# -
# ### Changing axis limits and ticks
#
# Change:
#
# - the `x`-axis interval with `xlim`
# - the `y`-axis interval with `ylim`
# - the `x`-axis tick marks with `xticks`
# - the `y`-axis tick marks with `yticks`
plt.plot(x,h1(.9),x,h2(1,9)); plt.xlim(1.6,9.2); plt.ylim(1.0,2.8);
# +
plt.figure(figsize=(10,8))
plt.plot(t,g(1,3),c=[0.1,0.4,0.5],marker='s',mfc='w',mew=2.0);
plt.plot(t,g(1.2,2),c=[1.0,0.5,0.0],ls='--',marker='>',mfc='c',mew=1.0,ms=10);
plt.xticks([0, np.pi/2,np.pi,3*np.pi/2,2*np.pi]); # list of multiples of pi
plt.yticks([-1, 0, 1]); # 3 values on y
# -
# ### Specifying tick label text
#
# We can change the tick marks by passing indicative text. In the previous case, something like this would be better:
# +
plt.figure(figsize=(10,8))
plt.plot(t,g(1,3),c=[0.1,0.4,0.5],marker='s',mfc='w',mew=2.0);
plt.plot(t,g(1.2,2),c=[1.0,0.5,0.0],ls='--',marker='>',mfc='c',mew=1.0,ms=10);
# the $...$ pairs format the numbers in the TeX language
plt.xticks([0, np.pi/2,np.pi,3*np.pi/2,2*np.pi], ['$0$','$\pi/2$','$\pi$','$3/2\pi$','$2\pi$']);
plt.yticks([-1, 0, 1], ['$y = -1$', '$y = 0$', '$y = +1$']);
# -
# ### Moving the main axes
#
# The main axes can be moved to other arbitrary positions, and the plot-area borders can be turned off, using `spines`.
# +
# plot the function
x = np.linspace(-3,3)
plt.plot(x,x**1/2*np.sin(x)-0.5); # f(x) = (x/2)*sin(x) - 1/2 (note: x**1/2 is (x**1)/2, not a square root)
ax = plt.gca()
ax.spines['right'].set_color('none') # remove the right border
ax.spines['top'].set_color('none') # remove the top border
ax.spines['bottom'].set_position(('data',0)) # move the axis to x = 0
ax.spines['left'].set_position(('data',0)) # move the axis to y = 0
ax.xaxis.set_ticks_position('top') # move tick marks to the top
ax.yaxis.set_ticks_position('right') # move tick marks to the right
plt.xticks([-2,0,2]) # change x ticks
ax.set_xticklabels(['left','zero','right']) # change x ticklabels
plt.yticks([-0.4,0,0.4]) # change y ticks
ax.set_yticklabels(['upper','zero','lower']); # change y ticklabels
# -
# ### Inserting legends
#
# To create:
#
# - a legend for the plots, we use `legend`.
# - a label for the x axis, we use `xlabel`
# - a label for the y axis, we use `ylabel`
# - a title for the plot, we use `title`
# **Example:** plot the graph of the line $f_1(x) = x + 1$ and of the line $f_2(x) = 1 - x$ and add a legend with blue and orange colors.
plt.plot(x, x + 1,'-b', label = 'y = x + 1' )
plt.plot(x, 1-x, c = [1.0,0.5,0.0], label = 'y = 1 - x'); # orange: 100% red, 50% green
plt.legend(loc = 'best') # 'loc=best': best legend location
plt.xlabel('x'); plt.ylabel('y'); plt.title('Graph of two lines');
# #### Legend placement
#
# Use `loc=value` to specify where to place the legend. Use `plt.legend?` to check the positions available for `value`. See the table of `Location String` and `Location Code` values.
plt.plot(np.nan,np.nan,label='upper right'); # nan: not a number
plt.legend(loc=1); # using the number
plt.plot(np.nan,np.nan,label='loc=1');
plt.legend(loc='upper right'); # using the corresponding string
# ### Changing the font size
#
# To change the font size of labels and legends, use `fontsize`.
# +
plt.plot(np.nan,np.nan,label='legend');
FSx, FSy, FSleg, FStit = 10, 20, 30, 40
plt.xlabel('x axis',c='b', fontsize=FSx)
plt.ylabel('y axis',c='g', fontsize=FSy)
plt.legend(loc='center', fontsize=FSleg);
plt.title('Title', c='c', fontsize=FStit);
# -
# ### Simple annotations
#
# We can add annotations to plots with the function `annotate(text,(xref,yref))`
plt.plot(np.nan,np.nan);
plt.annotate('P (0.5,0.5)',(0.5,0.5));
plt.annotate('Q (0.1,0.8)',(0.1,0.8));
# **Example**: generate a set of 10 random points $(x,y)$ with $0.2 < x,y < 0.8$ and annotate them on the plane.
# +
# generate a list of 10 points satisfying the condition
P = []
while len(P) != 10:
    xy = np.round(np.random.rand(2),1)
    test = np.all( (xy > 0.2) & (xy < 0.8) )
    if test:
        P.append(tuple(xy))
# plot the plane
plt.figure(figsize=(8,8))
plt.xlim(0,1)
plt.ylim(0,1)
for ponto in P:
plt.plot(ponto[0],ponto[1],'o')
plt.annotate(f'({ponto[0]},{ponto[1]})',ponto,fontsize=14)
# -
# **Problem:** the code above has an issue. Check that `len(P) = 10`, yet it does not plot the 10 points as we would like to see. Find out what is happening and propose a solution.
# ## Multiple plots and axes
#
# In matplotlib, we can work with the function `subplot(m,n,p)` to create multiple figures and independent axes, as if each figure were an element of a large "matrix of figures" with `m` rows and `n` columns, while `p` is the figure index (this value is at most the product `m×n`). The function works as follows.
#
# - Example 1: suppose you want to create 3 figures and lay them out in a single row. In this case, `m = 1`, `n = 3` and `p` varies from 1 to 3, since `m×n = 3`.
#
# - Example 2: suppose you want to create 6 figures and lay them out in 2 rows and 3 columns. In this case, `m = 2`, `n = 3` and `p` varies from 1 to 6, since `m×n = 6`.
#
# - Example 3: suppose you want to create 12 figures and lay them out in 4 rows and 3 columns. In this case, `m = 4`, `n = 3` and `p` varies from 1 to 12, since `m×n = 12`.
#
# Each plot has its own axes independently of the others.
# **Example 1:** graphs of a line, a parabola and a cubic polynomial side by side.
# +
x = np.linspace(-5,5,20)
plt.figure(figsize=(15,4))
# here p = 1
plt.subplot(1,3,1) # plt.subplot(131) is also valid
plt.plot(x,2*x-1,c='r',marker='^')
plt.title('$y=2x-1$')
# here p = 2
plt.subplot(1,3,2) # plt.subplot(132) is also valid
plt.plot(x,3*x**2 - 2*x - 1,c='g',marker='o')
plt.title('$y=3x^2 - 2x - 1$')
# here p = 3
plt.subplot(1,3,3) # plt.subplot(133) is also valid
plt.plot(x,1/2*x**3 + 3*x**2 - 2*x - 1,c='b',marker='*')
plt.title('$y=1/2x^3 + 3x^2 - 2x - 1$');
# -
# **Example 2:** graphs of {$\sin(x)$, $\sin(2x)$, $\sin(3x)$} and {$\cos(x)$, $\cos(2x)$, $\cos(3x)$} arranged in a 2×3 grid.
# +
plt.figure(figsize=(15,4))
plt.subplots_adjust(top=2.5,right=1.2) # adjust the separation between the individual plots
def sencosx(p):
x = np.linspace(0,2*np.pi,50)
plt.subplot(2,3,p)
if p <= 3:
plt.plot(x,np.sin(p*x),c=[p/4,p/5,p/6],label=f'$sen({p}x)$')
plt.title(f'subplot(2,3,{p})');
else:
plt.title(f'subplot(2,3,{p})');
p-=3 #
plt.plot(x,np.cos(p*x),c=[p/9,p/7,p/8],label=f'$cos({p}x)$')
plt.legend(loc=0,fontsize=8)
plt.xlabel('x'); plt.ylabel('y');
# plotting
for p in range(1,7):
sencosx(p)
# -
# **Example 3:** graphs of an isolated point in a 4×3 grid.
# +
plt.figure(figsize=(15,4))
m,n = 4,3
def star(p):
    plt.subplot(m,n,p)
    plt.axis('off') # turn off the axes
    plt.plot(0.5,0.5,marker='*',c=list(np.random.rand(3)),ms=p*2)
    plt.annotate(f'subplot({m},{n},{p})',(0.5,0.5),c='g',fontsize=10)
for p in range(1,m*n+1):
star(p);
# -
# ## Plots with grid lines
#
# We can enable the grid using `grid(b,which,axis)`.
#
# To specify the grid:
#
# - on both axes, use `b=True` or `b=False`.
# - major, minor or both: use `which='major'`, `which='minor'` or `which='both'`.
# - on the x axis, the y axis or both: use `axis='x'`, `axis='y'` or `axis='both'`.
x = np.linspace(-10,10)
plt.plot(x,x)
plt.grid(True)
plt.plot(x,x)
plt.grid(True,which='major',axis='x')
plt.plot(x,x)
plt.grid(True,which='major',axis='y')
# **Example:** plotting a grid.
#
# In this example, an abstract axes object is added over the figure (created directly) with origin at the point (0.025,0.025), width 0.95 and height 0.95.
# +
ax = plt.axes([0.025, 0.025, 0.95, 0.95])
ax.set_xlim(0,4)
ax.set_ylim(0,3)
# MultipleLocator sets reference points for dividing the grid
ax.xaxis.set_major_locator(plt.MultipleLocator(1.0)) # major divisor on X
ax.xaxis.set_minor_locator(plt.MultipleLocator(0.2)) # minor divisor on X
ax.yaxis.set_major_locator(plt.MultipleLocator(1.0)) # major divisor on Y
ax.yaxis.set_minor_locator(plt.MultipleLocator(0.1)) # minor divisor on Y
# line properties
ax.grid(which='major', axis='x', linewidth=0.75, linestyle='-', color='r')
ax.grid(which='minor', axis='x', linewidth=0.5, linestyle=':', color='b')
ax.grid(which='major', axis='y', linewidth=0.75, linestyle='-', color='r')
ax.grid(which='minor', axis='y', linewidth=0.5, linestyle=':', color='g')
# to remove the tick labels, uncomment the lines below
#ax.set_xticklabels([])
#ax.set_yticklabels([]);
plt.plot(x,x,'k')
plt.plot(x,-x+4,'k')
# -
# ## Plots with filled areas
#
# We can use `fill_between` to create area fills in plots.
# +
x = np.linspace(-np.pi, np.pi, 60)
y = np.sin(2*x)*np.cos(x/2)
plt.fill_between(x,y,alpha=0.5);
# +
x = np.linspace(-np.pi, np.pi, 60)
f1 = np.sin(2*x)
f2 = 0.5*np.sin(2*x)
plt.plot(x,f1,c='r');
plt.plot(x,f2,c='k');
plt.fill_between(x,f1,f2,color='g',alpha=0.2);
| _build/html/_sources/ipynb/09-plotagem-matplotlib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to use it
class MyClass(object):
@property
def my_property(self):
# In reality, this might represent a database call or time
# intensive task like calling a third-party API.
print('Computing my_property...')
return 42
my_object = MyClass()
my_object.my_property
my_object.my_property
# +
from cached_property import cached_property
class MyClass(object):
@cached_property
def my_cached_property(self):
print('Computing my_cached_property...')
return 42
# -
my_object = MyClass()
my_object.my_cached_property
my_object.my_cached_property
# # Inspecting the cache
from cached_property import cached_property, cached_properties, is_cached
class MyClass(object):
@cached_property
def my_cached_property(self):
print('Computing my_cached_property...')
return 42
@cached_property
def my_second_cached_property(self):
print('Computing my_second_cached_property...')
return 51
my_object = MyClass()
for property_name in cached_properties(my_object):
print(property_name)
my_object.my_cached_property
is_cached(my_object, 'my_cached_property')
is_cached(my_object, 'my_second_cached_property')
# # Invalidating the cache
from cached_property import cached_property, un_cache, delete_cache
class MyClass(object):
@cached_property
def my_cached_property(self):
print('Computing my_cached_property...')
return 42
@cached_property
def my_second_cached_property(self):
print('Computing my_second_cached_property...')
return 51
my_object = MyClass()
my_object.my_cached_property
my_object.my_second_cached_property
un_cache(my_object, 'my_cached_property')
my_object.my_cached_property
my_object.my_second_cached_property
delete_cache(my_object)
my_object.my_cached_property
my_object.my_second_cached_property
# # Property deleting the cache
from cached_property import cached_property, property_deleting_cache
class MyClass(object):
def __init__(self, my_parameter):
self.my_parameter = my_parameter
@property_deleting_cache
def my_parameter(self):
"A parameter that deletes the cache when set or deleted."
print('Accessing my_parameter...')
@cached_property
def my_cached_property(self):
print('Computing my_cached_property...')
return self.my_parameter + 1
my_object = MyClass(my_parameter=41)
my_object.my_cached_property
my_object.my_cached_property
my_object.my_parameter = 50
my_object.my_cached_property
my_object.my_cached_property
# # Working with Threads
# +
from cached_property import threaded_cached_property
class MyClass(object):
@threaded_cached_property
def my_cached_property(self):
print('Computing my_cached_property...')
return 42
# +
from threading import Thread
my_object = MyClass()
threads = []
for x in range(10):
thread = Thread(target=lambda: my_object.my_cached_property)
thread.start()
threads.append(thread)
for thread in threads:
thread.join()
# -
# # Working with async/await (Python 3.5+)
# This is just a trick to make asyncio work in jupyter.
# Cf. https://markhneedham.com/blog/2019/05/10/jupyter-runtimeerror-this-event-loop-is-already-running/
import nest_asyncio
nest_asyncio.apply()
# +
from cached_property import cached_property
class MyClass(object):
@cached_property
async def my_cached_property(self):
print('Computing my_cached_property...')
return 42
# -
async def print_my_cached_property():
my_object = MyClass()
print(await my_object.my_cached_property)
print(await my_object.my_cached_property)
print(await my_object.my_cached_property)
import asyncio
asyncio.get_event_loop().run_until_complete(print_my_cached_property())
# # Timing out the cache
# +
import random
from cached_property import cached_property_with_ttl
class MyClass(object):
@cached_property_with_ttl(ttl=2) # cache invalidates after 2 seconds
def my_cached_property(self):
return random.random()
# -
my_object = MyClass()
my_object.my_cached_property
my_object.my_cached_property
from time import sleep
sleep(3) # Sleeps long enough to expire the cache
my_object.my_cached_property
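The TTL behavior above can be understood as caching a `(value, timestamp)` pair and recomputing once the entry is older than `ttl`. Here is a minimal sketch of that idea (an assumption about the mechanism, not the library's actual implementation):

```python
import time


class ttl_cached_property:
    """Sketch of a TTL-cached property: cache a (value, timestamp) pair."""
    def __init__(self, ttl):
        self.ttl = ttl

    def __call__(self, func):
        self.func = func
        return self

    def __get__(self, obj, cls):
        if obj is None:
            return self
        # Store under a private key so the cached tuple does not shadow
        # this non-data descriptor in the instance __dict__.
        key = '_ttl_cache_' + self.func.__name__
        now = time.monotonic()
        cached = obj.__dict__.get(key)
        if cached is None or now - cached[1] > self.ttl:
            # Expired (or never computed): recompute and restamp.
            cached = (self.func(obj), now)
            obj.__dict__[key] = cached
        return cached[0]


class Demo(object):
    computations = 0

    @ttl_cached_property(ttl=0.1)  # cache expires after 0.1 seconds
    def answer(self):
        Demo.computations += 1
        return 42


demo = Demo()
assert demo.answer == 42 and demo.answer == 42
assert Demo.computations == 1   # cached within the TTL window
time.sleep(0.2)
assert demo.answer == 42
assert Demo.computations == 2   # recomputed after expiry
```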
# Source: .ipynb_checkpoints/readme_companion-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Eligibility trace calculation using both the forward view and the backward view in an efficient way
import random as rand
import numpy as np
import matplotlib.pyplot as plt
import time
import math
# #### create random states and Temporal Difference Error
data_size = 50
States = np.arange(0,data_size)
Q = np.random.randint(0,10,[1,data_size+1])
reward = np.random.randint(0,2,[1,data_size])
lambda_return = 0.7
discount_factor = 0.9
Q[0,data_size] = 0
# #### First, calculate the lambda-return values using the forward view, which is an offline method
R_lambda = np.zeros([1,data_size])
# +
R_lambda[0,data_size-1] = reward[0,data_size-1]
for i in range(data_size-2,-1,-1):
R_lambda[0,i] = reward[0,i] + discount_factor * (lambda_return * R_lambda[0,i+1] +(1-lambda_return)*Q[0,i+1])
# -
plt.plot(R_lambda[0,:],'*')
plt.show()
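As a sanity check, the backward-in-time recursion above can be compared against the lambda-return computed directly from its definition: a weighted sum of n-step returns that bootstrap with Q, truncated at the end of the episode. The two should agree term by term. A small self-contained check (assuming the same terminal convention Q[T] = 0 used above):

```python
import numpy as np

T = 6                                   # short episode for the check
rng = np.random.default_rng(0)
reward = rng.integers(0, 2, T).astype(float)
Q = rng.integers(0, 10, T + 1).astype(float)
Q[T] = 0.0                              # terminal convention, as above
lam, gamma = 0.7, 0.9

# Backward-in-time recursion (same form as the forward-view cell above).
R = np.zeros(T)
R[T - 1] = reward[T - 1]
for i in range(T - 2, -1, -1):
    R[i] = reward[i] + gamma * (lam * R[i + 1] + (1 - lam) * Q[i + 1])

def n_step_return(t, k):
    """k-step return from time t, bootstrapping with Q unless the episode ends."""
    g = sum(gamma ** j * reward[t + j] for j in range(k))
    if t + k < T:
        g += gamma ** k * Q[t + k]
    return g

def lambda_return(t):
    """Direct definition: (1-lam)-weighted sum of n-step returns, truncated at T."""
    steps = T - t
    g = (1 - lam) * sum(lam ** (k - 1) * n_step_return(t, k) for k in range(1, steps))
    g += lam ** (steps - 1) * n_step_return(t, steps)
    return g

R_direct = np.array([lambda_return(t) for t in range(T)])
assert np.allclose(R, R_direct)
```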
# #### Now calculate the same returns using the backward view
class Node:
def __init__(self,States=None,reward=None,Q=None,value=0,Time=None,cache=0,repeat=None,Next=None):
self.States = States
self.reward = reward
self.Q = Q
self.value = value
self.Time = Time
self.cache = cache
        self.repeat = repeat
self.next = Next
class Linklist:
def __init__(self,head=None,tail=None):
self.head = head
self.tail = tail
def insert(self,data):
if not self.head:
self.head = data
self.tail = self.head
else :
self.tail.next = data
self.tail = self.tail.next
def delete(self):
if self.head:
if self.head == self.tail:
self.head = None
self.tail = None
else:
self.head = self.head.next
    def empty(self):
        return self.head is None and self.tail is None
def print_(self):
temp = self.head
while temp:
print(temp.States, end = "->")
temp = temp.next
memory = Linklist()
length = -1
maximum_length = int(-2/math.log10(lambda_return))
j = 0
token = 0
for i in range(data_size+maximum_length):
if i < data_size:
#Append the state
address = Node(States=States[i],Q=Q[0,i+1],reward=reward[0,i],Time=i)
memory.insert(address)
#increase the length of queue
length = length + 1
#calculate and set the Front node values
memory.head.value = memory.head.value + ((discount_factor*lambda_return)**(length))*(reward[0,i]+ discount_factor*(1-lambda_return)*Q[0,i+1])
token = 0
else :
token = 1
#Check for set the value
if (length >= maximum_length or token) and (not memory.empty()):
#calculate value for next state
next_state_lambda_reward = ((memory.head.value - memory.head.reward)/(lambda_return*discount_factor) )-(((1-lambda_return)* memory.head.Q)/(lambda_return))
if memory.tail:
#store the lambda return value in array
R_lambda[0,j] = memory.head.value
else :
R_lambda[0,j] = memory.head.reward
print(memory.head.value)
#pop the states node out of queue
memory.delete()
length = length - 1
if memory.tail:
#store the value in the next state
memory.head.value = next_state_lambda_reward
j=j+1
plt.plot(R_lambda[0,:],'*')
plt.show()
# Source: Basic_gridWorld/ET_backward_and_forwardview.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # run this with
from __future__ import print_function
from IPython.display import FileLink, FileLinks
from ipywidgets import interact, interactive, fixed, interact_manual, IntSlider, FloatSlider, HBox, VBox, interactive_output
import ipywidgets as widgets
from PIL import Image
from PIL import ImageDraw
import numpy as np
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
def draw_tree0(start, lam, decay, l0, theta, theta_decay, spread, d, depth=0, max_depth=40, max_n=1000):
# d.text(xy=start, text=str(depth), fill=(0,0,0,255))
if depth>max_depth:
return max_n
n_children=np.random.poisson(lam=lam/(1+depth))
if n_children == 0:
return max_n
thetas = theta-spread/2+spread*np.random.random(size=n_children)
for new_theta in thetas:
end= (start[0]+l0*np.sin(new_theta),start[1]+l0*np.cos(new_theta))
d.line(xy=(start,end),fill=(0,0,0,255))
#print(start, end)
if end[0]>=0 and end[0]<512 and end[1]>=0 and end[1]<512:
            max_n-=draw_tree0(start=end,
lam=lam,
decay=decay,
l0=l0*decay,
theta=new_theta,
theta_decay=theta_decay,
spread=spread*theta_decay,
d=d,
depth=depth+1)
return max_n
# +
def draw_tree(start, lam, decay, l0, theta, theta_decay, spread, d, depth=0, max_depth=40, max_n=1000):
# d.text(xy=start, text=str(depth), fill=(0,0,0,255))
if depth>max_depth:
return max_n
n_children=np.random.poisson(lam=lam/(1+depth))
if n_children == 0:
return max_n
# 3d gaussian
new_branches= np.random.randn(n_children,3)
# project on sphere (normalize)
new_branches/=np.linalg.norm(new_branches, axis=1, keepdims=True)
# and keep only 2 first dimensions
new_branches=new_branches[:,:2]
for new_branch in new_branches:
end= start+l0*new_branch
d.line(xy=(tuple(start),tuple(end)),fill=(0,0,0,255))
if end[0]>=0 and end[0]<512 and end[1]>=0 and end[1]<512:
max_n-=draw_tree(start=end,
lam=lam,
decay=decay,
l0=l0*decay,
theta=0,
theta_decay=theta_decay,
spread=spread*theta_decay,
d=d,
depth=depth+1)
return max_n
def make_tree(start_x=256,
start_y=256,
lam=9.5,
decay=0.75,
theta_decay=0.75,
l0=60,
theta=np.pi,
spread=np.pi):
im = Image.new('RGBA', (512,512), (255,255,255,255))
d=ImageDraw.Draw(im)
draw_tree(start=np.array([start_x,start_y]),
lam=lam,
decay=decay,
theta_decay=theta_decay,
l0=l0,
theta=theta,
spread=theta,
d=d)
return im
interact(make_tree)
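The branch-direction sampling in `draw_tree` relies on a standard fact: because the multivariate Gaussian density is rotationally symmetric, normalizing 3-D Gaussian samples yields points distributed uniformly on the unit sphere. A quick empirical check of that claim:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.standard_normal((10000, 3))                   # 3-D Gaussian samples
samples /= np.linalg.norm(samples, axis=1, keepdims=True)   # project onto the sphere

# Every point sits on the unit sphere...
assert np.allclose(np.linalg.norm(samples, axis=1), 1.0)
# ...and the empirical mean direction is close to zero, as expected
# for a uniform distribution over the sphere.
assert np.all(np.abs(samples.mean(axis=0)) < 0.05)
```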
# +
def draw_tree1(start=[0,0],
wind=[0,0],
wind_decay=0.75,
lambda_=8,
lambda_decay=0.75,
radius=50,
radius_decay=0.75,
d=None,
depth=0,
max_depth=40,
max_n=1000):
# d.text(xy=start, text=str(depth), fill=(0,0,0,255))
if depth>max_depth:
return max_n
n_children=np.random.poisson(lam = lambda_)
if n_children == 0:
return max_n
winds = np.array(wind)+radius*np.random.randn(n_children,2)
for new_wind in winds:
child = np.array(start)+new_wind
d.line(xy=((start[0],start[1]),
(child[0],child[1])),
fill=(0,0,0,255))
max_n-=1
#print(start, end)
if max_n>=0 and child[0]>=0 and child[0]<512 and child[1]>=0 and child[1]<512:
            max_n=draw_tree1(start=child,
wind=wind*wind_decay,
wind_decay=wind_decay,
lambda_=lambda_*lambda_decay,
radius=radius*radius_decay,
radius_decay=radius_decay,
d=d,
depth=depth+1,
max_depth=max_depth,
max_n=max_n)
return max_n
def make_tree(wind_0=25,
wind_decay=0.75,
lambda_=8,
lambda_decay=0.75,
radius=50,
radius_decay=0.75,
max_depth=40,
max_n=1000):
im = Image.new('RGBA', (512,512), (255,255,255,255))
d=ImageDraw.Draw(im)
    draw_tree1(start=np.array([256,256]),
wind=np.array([wind_0,0]),
wind_decay=wind_decay,
lambda_=lambda_,
lambda_decay=lambda_decay,
radius=radius,
radius_decay=radius_decay,
depth=0,
max_depth=max_depth,
max_n=max_n,
d=d)
display(im)
interact(make_tree)
# -
interact(make_tree)
def make_tree(start=(256,256),
lam=9.5,
decay=0.75,
theta_decay=0.75,
l0=60,
theta=np.pi,
spread=np.pi):
im = Image.new('RGBA', (512,512), (255,255,255,255))
d=ImageDraw.Draw(im)
draw_tree(start=start,
lam=lam,
decay=decay,
theta_decay=theta_decay,
l0=l0,
theta=theta,
              spread=spread,
d=d)
return im
# ?draw_tree
def make_and_display_tree(start_x=256, start_y=256, lam=9.5, decay=0.75,
theta_decay=0.75, l0=60, theta=np.pi/2, spread=np.pi, dummy_arg=False):
tree_im=make_tree(start=(start_x,start_y),
lam=lam,
decay=decay,
theta_decay=theta_decay,
l0=l0,
theta=theta,
                      spread=spread)
display(tree_im)
interact(make_and_display_tree)
# +
start_x = IntSlider(description=r'\(x_0\)', min=0, max=512, value=256, continuous_update=False)
start_y = IntSlider(description=r'\(y_0\)', min=0, max=512, value=256, continuous_update=False)
lam = FloatSlider(description=r'\(\lambda\)', min=0, max=15, value=9.5, continuous_update=False)
decay = FloatSlider(description=r'\(\delta\)', min=0, max=1.0, value=0.75, continuous_update=False)
theta_decay = FloatSlider(description=r'\(\delta_{\theta}\)', min=0, max=1.0, value=0.75, continuous_update=False)
l0 = IntSlider(description=r'\(l_0\)', min=0,max=100,value=60, continuous_update=False)
theta = FloatSlider(description=r'\(\theta\)', min=0, max=2*np.pi, value=np.pi, continuous_update=False)
spread = FloatSlider(description='spread', min=0, max=2*np.pi, value=np.pi, continuous_update=False)
#toggle_to_redraw = widgets.Checkbox(descritption='new', value=False)
toggle_to_redraw=widgets.ToggleButton(
description='Redraw',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='redraw a tree',
icon='check'
)
ui=VBox([
HBox([start_x, start_y, lam, decay]),
HBox([decay, theta_decay, theta, l0, spread]),
toggle_to_redraw
])
out= widgets.interactive_output(f=make_and_display_tree,
controls={'start_x':start_x,
'start_y':start_y,
'lam':lam,
'decay':decay,
'theta':theta,
'theta_decay':theta_decay,
'l0':l0,
'spread':spread,
'dummy_arg':toggle_to_redraw
}
#,layout=widgets.Layout(width='50%', height='80px')
)
# -
display(out)
display(ui)
FileLink('./simple interactive.ipynb')
FileLinks('.', recursive=False)
# Source: simple interactive.1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to NumPy
# This material is inspired from different sources:
#
# * https://github.com/SciTools/courses
# * https://github.com/paris-saclay-cds/python-workshop/blob/master/Day_1_Scientific_Python/01-numpy-introduction.ipynb
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# ## 1. Create numpy array
# So we can easily create a NumPy array from scratch using the function `np.array`.
np.array([0, 1, 2, 3])
# Sometimes, we want our array to be created in a particular way: only zeros (`np.zeros`), only ones (`np.ones`), equally spaced (`np.linspace`), or logarithmically spaced (`np.logspace`), etc.
# ### Exercise
# Try out some of these ways of creating NumPy arrays. See if you can:
# * create a NumPy array from a list of integer numbers. Use the function [`np.array()`](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.array.html) and pass the Python list. You can refer to the example from the documentation.
# +
# # %load solutions/01_solutions.py
# -
# While checking the documentation of [np.array](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.array.html), an interesting parameter to pay attention to is ``dtype``. This parameter can force the data type inside the array.
arr.dtype
# * create a 3-dimensional NumPy array filled with all zeros or ones numbers. You can check the documentation of [np.zeros](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.zeros.html) and [np.ones](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ones.html).
# +
# # %load solutions/03_solutions.py
# -
# * a NumPy array filled with a constant value -- not 0 or 1. (Hint: this can be achieved using the last array you created, or you could use [np.empty](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.empty.html) and find a way of filling the array with a constant value),
# +
# # %load solutions/04_solutions.py
# -
# * a NumPy array of 8 elements with a range of values starting from 0 and a spacing of 3 between each element (Hint: check the function [np.arange](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.arange.html)), and
# +
# # %load solutions/05_solutions.py
# -
# ## 2. Manipulating NumPy array
# ### 2.1 Indexing
# Note that the NumPy arrays are zero-indexed:
data = np.random.randn(10000, 5)
data[0, 0]
# It means that the third element in the first row has an index of [0, 2]:
data[0, 2]
# We can also assign the element with a new value:
data[0, 2] = 100.
print(data[0, 2])
# NumPy (and Python in general) checks the bounds of the array:
print(data.shape)
data[60, 10]
# Finally, we can ask for several elements at once:
data[0, [0, 3]]
# You can even pass a negative index. It will go from the end of the array.
data[-1, -1]
# ### 2.2 Slices
# We can reuse slicing, as with Python lists or Pandas dataframes, to get elements along one of the axes.
data[0, 0:2]
# Note that the returned array does not include the third column (with index 2).
#
# You can skip the first or last index (which means, take the values from the beginning or to the end):
data[0, :2]
# If you omit both indices in the slice leaving out only the colon (:), you will get all columns of this row:
data[0, :]
# ### 2.3 Filtering data
data
# We can produce a boolean array when using comparison operators.
data > 0
# This mask can be used to select some specific data.
data[data > 0]
# It can also be used to affect some new values
data[data > 0] = np.inf
data
# ### 2.4 Quiz
# Answer the following quiz:
data = np.random.randn(20, 20)
# * Print the element in the $1^{st}$ row and $10^{th}$ column of the data.
# +
# # %load solutions/08_solutions.py
# -
# * Print the elements in the $3^{rd}$ row and the $3^{rd}$ and $15^{th}$ columns.
# +
# # %load solutions/09_solutions.py
# -
# * Print the elements in the $4^{th}$ row and columns from $3^{rd}$ to $15^{th}$.
# +
# # %load solutions/10_solutions.py
# -
# * Print all the elements in column $15$ whose value is above 0.
# +
# # %load solutions/11_solutions.py
# -
# ## 3. Numerical analysis
# Vectorizing code is the key to writing efficient numerical calculations with Python/NumPy. That means that as much of a program as possible should be formulated in terms of matrix and vector operations.
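To make this concrete, here is a small illustrative comparison of a Python-level loop against the equivalent vectorized expression. Both compute the same result, but the vectorized form runs in compiled code and is typically much faster:

```python
import time

import numpy as np

x = np.arange(1_000_000, dtype=float)

# Python-level loop: one interpreted operation per element.
t0 = time.perf_counter()
looped = np.array([v * 2.0 + 1.0 for v in x])
t_loop = time.perf_counter() - t0

# Vectorized: the whole computation happens in compiled code.
t0 = time.perf_counter()
vectorized = x * 2.0 + 1.0
t_vec = time.perf_counter() - t0

assert np.allclose(looped, vectorized)
print('loop: {:.3f}s  vectorized: {:.3f}s'.format(t_loop, t_vec))
```

The exact timings depend on the machine, but the vectorized version usually wins by one to two orders of magnitude.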
# ### 3.1 Scalar-array operations
# We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
# + run_control={"frozen": false, "read_only": false}
v1 = np.arange(0, 5)
# + run_control={"frozen": false, "read_only": false}
v1 * 2
# + run_control={"frozen": false, "read_only": false}
v1 + 2
# + run_control={"frozen": false, "read_only": false}
A = np.arange(1, 10).reshape(3, 3)  # define a small matrix to demonstrate
np.sin(A)  # np.log(A), np.arctan(A), ...
# -
# ### 3.2 Element-wise array-array operations
# When we add, subtract, multiply and divide arrays with each other, the default behaviour is **element-wise** operations:
# + run_control={"frozen": false, "read_only": false}
A * A # element-wise multiplication
# + run_control={"frozen": false, "read_only": false}
v1 * v1
# -
# ### 3.3 Calculations
# Often it is useful to store datasets in NumPy arrays. NumPy provides a number of functions to calculate statistics of datasets in arrays.
# + run_control={"frozen": false, "read_only": false}
a = np.random.random(40)
# -
# Different frequently used operations can be done:
# + run_control={"frozen": false, "read_only": false}
print('Mean value is', np.mean(a))
print('Median value is', np.median(a))
print('Std is', np.std(a))
print('Variance is', np.var(a))
print('Min is', a.min())
print('Index of the minimum value is', a.argmin())
print('Max is', a.max())
print('Sum is', np.sum(a))
print('Prod is', np.prod(a))
print('Cumsum (last element) is', np.cumsum(a)[-1])
print('Cumprod of the 5 first elements is', np.cumprod(a)[4])
print('Unique values in this array are:', np.unique(np.random.randint(1, 6, 10)))
print('85% percentile value is:', np.percentile(a, 85))
# + run_control={"frozen": false, "read_only": false}
a = np.random.random(40)
print(a.argsort())
a.sort() #sorts in place!
print(a.argsort())
# -
# #### Calculations with higher-dimensional data
# When functions such as `min`, `max`, etc., are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the `axis` argument we can specify how these functions should behave:
# + run_control={"frozen": false, "read_only": false}
m = np.random.rand(3, 3)
m
# + run_control={"frozen": false, "read_only": false}
# global max
m.max()
# + run_control={"frozen": false, "read_only": false}
# max in each column
m.max(axis=0)
# + run_control={"frozen": false, "read_only": false}
# max in each row
m.max(axis=1)
# -
# Many other functions and methods in the `array` and `matrix` classes accept the same (optional) `axis` keyword argument.
# ## 4. Data reshaping and merging
# * How could you change the shape of the 8-element array you created previously to have shape (2, 2, 2)? Hint: this can be done without creating a new array.
arr = np.arange(8)
# +
# # %load solutions/07_solutions.py
# -
# * Could you reshape the same 8-element array to a column vector. Do the same, to get a row vector. You can use `np.reshape` or `np.newaxis`.
# +
# # %load solutions/22_solutions.py
# -
# * Stack vertically two 1D NumPy array of size 10. Then, stack them horizontally. You can use the function [np.hstack](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.hstack.html) and [np.vstack](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.vstack.html). Repeat those two operations using the function [np.concatenate](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.concatenate.html) with two 2D NumPy arrays of size 5 x 2.
# +
# # %load solutions/20_solutions.py
# +
# # %load solutions/21_solutions.py
# Source: 02_numpy/notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy
#
# ### Demo of netCDF and shapefile export capabilities within the flopy export module.
# +
import os
import sys
import datetime
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('flopy version: {}'.format(flopy.__version__))
# -
# Load our old friend...the Freyberg model
nam_file = "freyberg.nam"
model_ws = os.path.join("..", "data", "freyberg_multilayer_transient")
ml = flopy.modflow.Modflow.load(nam_file, model_ws=model_ws, check=False)
# We can see the ``Modelgrid`` instance has generic entries, as does ``start_datetime``
ml.modelgrid
ml.modeltime.start_datetime
# Setting the attributes of the ``ml.modelgrid`` is easy:
proj4_str = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"
ml.modelgrid.set_coord_info(xoff=123456.7, yoff=765432.1, angrot=15.0, proj4=proj4_str)
ml.dis.start_datetime = '7/4/1776'
ml.modeltime.start_datetime
# ### Some netCDF export capabilities:
#
# #### Export the whole model (inputs and outputs)
# make directory
pth = os.path.join('data', 'netCDF_export')
if not os.path.exists(pth):
os.makedirs(pth)
fnc = ml.export(os.path.join(pth, ml.name+'.in.nc'))
hds = flopy.utils.HeadFile(os.path.join(model_ws,"freyberg.hds"))
flopy.export.utils.output_helper(os.path.join(pth, ml.name+'.out.nc'), ml, {"hds":hds})
# #### export a single array to netcdf or shapefile
# export a 2d array
ml.dis.top.export(os.path.join(pth, 'top.nc'))
ml.dis.top.export(os.path.join(pth, 'top.shp'))
# #### sparse export of stress period data for a boundary condition package
# * excludes cells that aren't in the package (aren't in `package.stress_period_data`)
# * by default, stress periods with duplicate parameter values (e.g., stage, conductance, etc.) are omitted
# (`squeeze=True`); only stress periods with different values are exported
# * pass `squeeze=False` to export all stress periods
ml.drn.stress_period_data.export(os.path.join(pth, 'drn.shp'), sparse=True)
# #### Export a 3d array
#export a 3d array
ml.upw.hk.export(os.path.join(pth, 'hk.nc'))
ml.upw.hk.export(os.path.join(pth, 'hk.shp'))
# #### Export a number of things to the same netCDF file
# +
# export lots of things to the same nc file
fnc = ml.dis.botm.export(os.path.join(pth, 'test.nc'))
ml.upw.hk.export(fnc)
ml.dis.top.export(fnc)
# export transient 2d
ml.rch.rech.export(fnc)
# -
# ### Export whole packages to a netCDF file
# export mflist
fnc = ml.wel.export(os.path.join(pth, 'packages.nc'))
ml.upw.export(fnc)
fnc.nc
# ### Export the whole model to a netCDF
fnc = ml.export(os.path.join(pth, 'model.nc'))
fnc.nc
# ## Export output to netcdf
#
# FloPy has utilities to export model outputs to a netcdf file. Valid output types for export are MODFLOW binary head files, formatted head files, cell budget files, seawat concentration files, and zonebudget output.
#
# Let's use output from the Freyberg model as an example of these functions
# +
# load binary head and cell budget files
fhead = os.path.join(model_ws, 'freyberg.hds')
fcbc = os.path.join(model_ws, 'freyberg.cbc')
hds = flopy.utils.HeadFile(fhead)
cbc = flopy.utils.CellBudgetFile(fcbc)
export_dict = {"hds": hds,
"cbc": cbc}
# export head and cell budget outputs to netcdf
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output.nc"), ml, export_dict)
fnc.nc
# -
# ### Exporting zonebudget output
#
# zonebudget output can be exported with other modflow outputs, and is placed in a separate group, which allows the user to post-process the zonebudget output before exporting.
#
# Here are two examples on how to export zonebudget output with a binary head and cell budget file
#
# __Example 1__: No postprocessing of the zonebudget output
# +
# load the zonebudget output file
zonbud_ws = os.path.join("..", "data", "zonbud_examples")
fzonbud = os.path.join(zonbud_ws, "freyberg_mlt.2.csv")
zon_arrays = flopy.utils.zonbud.read_zbarray(os.path.join(zonbud_ws, "zonef_mlt.zbr"))
zbout = flopy.utils.ZoneBudgetOutput(fzonbud, ml.dis, zon_arrays)
zbout
# +
export_dict = {'hds': hds,
'cbc': cbc}
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output_with_zonebudget.nc"),
ml, export_dict)
fnc = zbout.export(fnc, ml)
fnc.nc
# -
# A budget_zones variable has been added to the root group and a new zonebudget group has been added to the netcdf file which hosts all of the budget data
# __Example 2__: postprocessing zonebudget output then exporting
# load the zonebudget output and get the budget information
zbout = flopy.utils.ZoneBudgetOutput(fzonbud, ml.dis, zon_arrays)
df = zbout.dataframe
df
# Let's calculate a yearly volumetric budget from the zonebudget data
# +
# get a dataframe of volumetric budget information
vol_df = zbout.volumetric_flux()
# add a year field to the dataframe using datetime
start_date = ml.modeltime.start_datetime
start_date = datetime.datetime.strptime(start_date, "%m/%d/%Y")
nzones = len(zbout.zones) - 1
year = [start_date.year] * nzones
for totim in vol_df.totim.values[:-nzones]:
t = start_date + datetime.timedelta(days=totim)
year.append(t.year)
vol_df['year'] = year
# calculate yearly volumetric change using pandas
totim_df = vol_df.groupby(['year', 'zone'], as_index=False)['totim'].max()
yearly = vol_df.groupby(['year', 'zone'], as_index=False)[['storage', 'constant head', 'other zones',
'zone 1', 'zone 2', 'zone 3']].sum()
yearly['totim'] = totim_df['totim']
yearly
# -
# And finally, export the pandas dataframe to netcdf
# +
# process the new dataframe into a format that is compatible with netcdf exporting
zbncf = zbout.dataframe_to_netcdf_fmt(yearly, flux=False)
# export to netcdf
export_dict = {"hds": hds,
"cbc": cbc,
"zbud": zbncf}
fnc = flopy.export.utils.output_helper(os.path.join(pth, "output_with_zonebudget.2.nc"),
ml, export_dict)
fnc.nc
# Source: examples/Notebooks/other/flopy3_export.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.6 64-bit (''pt120'': conda)'
# name: python366jvsc74a57bd046fb56778051a31a357734d34bff51ce894a75a92d527fd7661abbb023d1a439
# ---
# ## Check the files for duplicate or missing labels
import numpy as np
import os
import pandas as pd
folder_dir = '/home/sr365/Gaia/labels'
img_folder = '/home/sr365/Gaia/rti_rwanda_cut_tiles_ps_8000'
for folder in os.listdir(folder_dir):
    cur_folder = os.path.join(folder_dir, folder)
    if not os.path.isdir(cur_folder):
        continue
    for subfolder in os.listdir(cur_folder):
        cur_subfolder = os.path.join(cur_folder, subfolder)
        if not os.path.isdir(cur_subfolder):
continue
for file in os.listdir(cur_subfolder):
if '.png' not in file:
continue
# Check for the same name in another folder
new_name = os.path.join(img_folder, folder + '_' + subfolder, file)
if not os.path.exists(new_name):
                print('There is only a label but no image! {}'.format(file))
            # # Check for .png
            # if '.png' in file:
            #     # Check whether the image is there
            #     if not os.path.exists(os.path.join(folder_dir, file.replace('.png', '.JPG'))):
            #         print('There is only a label but no image! {}'.format(file))
            # elif '.JPG' in file:
            #     # Check whether the label is there
            #     if not os.path.exists(os.path.join(folder_dir, file.replace('.JPG', '.png'))):
            #         print('There is only an image but no label! {}'.format(file))
print('Finished; all checked labels have matching images!')
lbl_folder = 'rti_rwanda_crop_type_raw_Cynmpirita_Processed/Phase2'
img_folder = 'rti_rwanda_crop_type_raw_Cyampirita_Processed_Phase1'
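A more direct way to express this kind of check (sketched here with hypothetical directory arguments) is to compare the sets of file stems in the two trees, which reports both missing images and missing labels at once:

```python
import os
import tempfile

def find_mismatches(label_dir, image_dir, label_ext='.png', image_ext='.JPG'):
    """Return (labels_without_images, images_without_labels) by comparing file stems."""
    def stems(d, ext):
        return {os.path.splitext(f)[0] for f in os.listdir(d) if f.endswith(ext)}
    labels = stems(label_dir, label_ext)
    images = stems(image_dir, image_ext)
    return sorted(labels - images), sorted(images - labels)

# Tiny demo on a throwaway directory tree.
root = tempfile.mkdtemp()
label_dir = os.path.join(root, 'labels')
image_dir = os.path.join(root, 'images')
os.makedirs(label_dir)
os.makedirs(image_dir)
for name in ('a.png', 'b.png'):
    open(os.path.join(label_dir, name), 'w').close()
open(os.path.join(image_dir, 'a.JPG'), 'w').close()

missing_images, missing_labels = find_mismatches(label_dir, image_dir)
assert missing_images == ['b']   # label 'b.png' has no matching image
assert missing_labels == []
```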
# Source: check_files.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # An interactive Git Tutorial: the tool you didn't know you needed
#
# ## From personal workflows to open collaboration
# -
# **Note:** this tutorial was modeled on, and therefore owes a lot to, the excellent materials offered in:
#
# - "Git for Scientists: A Tutorial" by <NAME> (no link as this tutorial seems to have disappeared from the internet).
# - <NAME>'s lecture notes and exercises from the G-Node summer school on [Advanced Scientific Programming in Python](https://python.g-node.org/wiki/schedule).
#
# In particular I've reused the excellent images from the [Pro Git book](http://git-scm.com/book) that John had already selected and downloaded, as well as some of his outline. But this version of the tutorial aims to be 100% reproducible by being executed directly as an IPython notebook and is hosted itself on github so that others can more easily make improvements to it by collaborating on Github. Many thanks to John and Emanuele for making their materials available online.
#
# After writing this document, I discovered [<NAME>](https://github.com/jrjohansson)'s [tutorial on version control](http://nbviewer.ipython.org/urls/raw.github.com/jrjohansson/scientific-python-lectures/master/Lecture-7-Revision-Control-Software.ipynb) that is also written as a fully reproducible notebook and is also aimed at a scientific audience. It has a similar spirit to this one, and is part of his excellent series [Lectures on Scientific Computing with Python](https://github.com/jrjohansson/scientific-python-lectures) that is entirely available as Jupyter Notebooks.
# ## Wikipedia
# “Revision control, also known as version control, source control
# or software configuration management (SCM), is the
# **management of changes to documents, programs, and other
# information stored as computer files.**”
# **Reproducibility?**
#
# * Tracking and recreating every step of your work
# * In the software world: it's called *Version Control*!
#
# What do (good) version control tools give you?
#
# * Peace of mind (backups)
# * Freedom (exploratory branching)
# * Collaboration (synchronization)
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Git is an enabling technology: Use version control for everything
# -
# * Paper writing (never get `paper_v5_john_jane_final_oct22_really_final.tex` by email again!)
# * Grant writing
# * Everyday research
# * Teaching (never accept an emailed homework assignment again!)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Teaching courses with Git
# -
# 
#
# <!-- offline:
# <img src="files/images/indefero_projects_notes.png" width="100%">
# <img src="https://raw.github.com/fperez/reprosw/master/fig/indefero_projects_notes.png" width="100%">
# -->
# + [markdown] slideshow={"slide_type": "slide"}
# ## Annotated history of each student's workflow (and backup!)
# -
# <!-- offline:
# <img src="files/images/indefero_projects1.png" width="100%">
# <img src="https://raw.github.com/fperez/reprosw/master/fig/indefero_projects1.png" width="100%">
# -->
#
# 
# ## The plan for this tutorial
# This tutorial is structured in the following way: we will begin with a brief overview of key concepts you need to understand in order for git to really make sense. We will then dive into hands-on work: after a brief interlude into necessary configuration we will discuss 5 "stages of git" with scenarios of increasing sophistication and complexity, introducing the necessary commands for each stage:
#
# 1. Local, single-user, linear workflow
# 2. Single local user, branching
# 3. Using remotes as a single user
# 4. Remotes for collaborating in a small team
# 5. Full-contact github: distributed collaboration with large teams
#
# In reality, this tutorial only covers stages 1-4, since for #5 there are many software development-oriented tutorials and documents of very high quality online. But most scientists start working alone with a few files or with a small team, so I feel it's important to first build the key concepts and practices based on problems scientists encounter in their everyday life, without the jargon of the software world. Once you've become familiar with 1-4, the excellent tutorials that exist about collaborating on github on open-source projects should make sense.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Very high level picture: an overview of key concepts
# -
# The **commit**: *a snapshot of work at a point in time*
#
# <!-- offline:
# 
#
# <img src="https://raw.github.com/fperez/reprosw/master/fig/commit_anatomy.png">
# -->
#
# 
#
# Credit: ProGit book, by <NAME>, CC License.
# + [markdown] slideshow={"slide_type": "slide"}
# A **repository**: a group of *linked* commits
#
# <!-- offline:
# 
#
# <img src="https://raw.github.com/fperez/reprosw/master/fig/threecommits.png" >
# -->
#
# 
#
# Note: these form a Directed Acyclic Graph (DAG), with nodes identified by their *hash*.
# + [markdown] slideshow={"slide_type": "slide"}
# A **hash**: a fingerprint of the content of each commit *and its parent*
# +
from hashlib import sha1
# Our first commit
data1 = b'This is the start of my paper.'
meta1 = b'date: 1/1/17'
hash1 = sha1(data1 + meta1).hexdigest()
print('Hash:', hash1)
# -
# Our second commit, linked to the first
data2 = b'Some more text in my paper...'
meta2 = b'date: 1/2/17'
# Note we add the parent hash here!
hash2 = sha1(data2 + meta2 + hash1.encode()).hexdigest()
print('Hash:', hash2)
# And this is pretty much the essence of Git!
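# Because each hash covers its parent's hash, the chain is tamper-evident: altering any earlier commit invalidates every hash downstream. A small sketch reusing the same `sha1` recipe as above (hypothetical contents and dates):

```python
from hashlib import sha1

def commit_hash(data, meta, parent=b''):
    # A commit's hash covers its content, its metadata, and its parent's hash
    return sha1(data + meta + parent).hexdigest()

h1 = commit_hash(b'This is the start of my paper.', b'date: 1/1/17')
h2 = commit_hash(b'Some more text in my paper...', b'date: 1/2/17', h1.encode())

# Tamper with the first commit: its hash changes...
h1_tampered = commit_hash(b'This is the START of my paper.', b'date: 1/1/17')
# ...so recomputing the second commit on top of it no longer reproduces h2
h2_recomputed = commit_hash(b'Some more text in my paper...', b'date: 1/2/17',
                            h1_tampered.encode())
print(h2 == h2_recomputed)  # False: the chain exposes the tampering
```

# This is why rewriting history changes every subsequent commit id.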
# ## First things first: git must be configured before first use
# The minimal amount of configuration for git to work without pestering you is to tell it who you are. You should run a version of these commands in your shell:
#
# ```bash
# git config --global user.name "<NAME>"
# git config --global user.email "<EMAIL>"
# ```
#
# And while we're at it, we also turn on the use of color, which is very useful
#
# ```bash
# git config --global color.ui "auto"
# ```
#
# Set git to use the credential memory cache so we don't have to retype passwords too frequently.
#
# Github offers in its help pages instructions on how to configure the credentials helper for [Mac OSX](https://help.github.com/articles/caching-your-github-password-in-git/#platform-mac), [Windows](https://help.github.com/articles/caching-your-github-password-in-git/#platform-windows) and [Linux](https://help.github.com/articles/caching-your-github-password-in-git/#platform-linux).
# + [markdown] slideshow={"slide_type": "slide"}
# ## Stage 1: Local, single-user, linear workflow
# -
# Type `git` to see a full list of all the 'core' commands. We'll now go through most of these via small practical exercises:
# !git
# + [markdown] slideshow={"slide_type": "slide"}
# ### `git init`: create an empty repository
# -
# %pwd
# + language="bash"
# rm -rf test
# git init test
# -
# **Note:** all these cells below are meant to be run by you in a terminal where you change *once* to the `test` directory and continue working there.
#
# Since we are putting all of them here in a single notebook for the purposes of the tutorial, they will all be prepended with the first two lines:
#
# # %%bash
# cd test
#
# that tell IPython to do that each time. But you should ignore those two lines and type the rest of each cell yourself in your terminal.
# Let's look at what git did:
# + language="bash"
# cd test
#
# ls
# + language="bash"
# cd test
#
# ls -la
# + language="bash"
# cd test
#
# ls -l .git
# -
# Now let's edit our first file in the test directory with a text editor... I'm doing it programmatically here for automation purposes, but you'd normally be editing by hand.
# + language="bash"
# cd test
#
# echo "My first bit of text" > file1.txt
# + [markdown] slideshow={"slide_type": "slide"}
# ### `git add`: tell git about this new file
# + language="bash"
# cd test
#
# git add file1.txt
# -
# We can now ask git about what happened with `status`:
# + language="bash"
# cd test
#
# git status
# + [markdown] slideshow={"slide_type": "slide"}
# ### `git commit`: permanently record our changes in git's database
# -
# For now, we are *always* going to call `git commit` either with the `-a` option *or* with specific filenames (`git commit file1 file2...`). This delays the discussion of an aspect of git called the *index* (often referred to also as the 'staging area') that we will cover later. Most everyday work in regular scientific practice doesn't require understanding the extra moving parts that the index involves, so on a first round we'll bypass it. Later on we will discuss how to use it to achieve more fine-grained control of what and how git records our actions.
# + language="bash"
# cd test
#
# git commit -a -m"This is our first commit"
# -
# In the commit above, we used the `-m` flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string: git only tries to facilitate best practices, it's not your nanny).
# + [markdown] slideshow={"slide_type": "slide"}
# ### `git log`: what has been committed so far
# + language="bash"
# cd test
#
# git log
# -
# ### `git diff`: what have I changed?
# Let's do a little bit more work... Again, in practice you'll be editing the files by hand, here we do it via shell commands for the sake of automation (and therefore the reproducibility of this tutorial!)
# + jupyter={"outputs_hidden": true} language="bash"
# cd test
#
# echo "And now some more text..." >> file1.txt
# -
# And now we can ask git what is different:
# + language="bash"
# cd test
#
# git diff
# -
# The format of the output above is well explained in detail in [this Stack Overflow post](https://stackoverflow.com/questions/2529441/how-to-read-the-output-from-git-diff). But we can provide a brief summary here:
#
# ```
# diff --git a/file1.txt b/file1.txt
# ```
#
# This tells us which files changed overall, with 'a' representing the old path and 'b' the new one (in this case it's the same file, though if a file had been renamed it would be different).
#
# ```
# index ce645c7..4baa979 100644
# ```
# These are hashes of the file at the two stages, needed by git itself for other operations with the diff output.
#
# The next block shows the actual changes. The first two lines show which paths are being compared (in this case the same file, `file1.txt`):
#
#
# ```
# --- a/file1.txt
# +++ b/file1.txt
# ```
#
# The next line indicates where the changes happened. The format is `@@ from-file-range to-file-range @@`, where there is one more `@` character than there are parents in the file comparison (git can handle multi-way diffs/merges), and the file range format is `-/+<start line>,<# of lines>`, with `-` for the `from-file` and `+` for the `to-file`:
#
# ```
# @@ -1 +1,2 @@
# ```
#
# Lines prepended with `-` correspond to deletions (none in this case), and lines with `+` to additions. A few lines around deletions/additions are shown for context:
#
# ```
# My first bit of text
# +And now some more text...
# ```
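# The hunk header is regular enough to parse mechanically. As a hedged illustration of the format (not how git itself parses it), for the common two-way case:

```python
import re

def parse_hunk_header(header):
    # Matches '@@ -<start>[,<count>] +<start>[,<count>] @@' (two-way diffs)
    m = re.match(r'@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@', header)
    old_start, old_count, new_start, new_count = m.groups()
    # In the unified diff format a missing count defaults to 1
    return (int(old_start), int(old_count or 1),
            int(new_start), int(new_count or 1))

print(parse_hunk_header('@@ -1 +1,2 @@'))  # (1, 1, 1, 2)
```

# So the hunk above covers line 1 of the old file (1 line) and lines 1-2 of the new file.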
# ### The cycle of git virtue: work, commit, work, commit, ...
# + language="bash"
# cd test
#
# git commit -a -m"I have made great progress on this critical matter."
# -
# ### `git log` revisited
# First, let's see what the log shows us now:
# + language="bash"
# cd test
#
# git log
# -
# Sometimes it's handy to see a very summarized version of the log:
# + language="bash"
# cd test
#
# git log --oneline --topo-order --graph
# -
# Git supports *aliases:* new names given to command combinations. Let's make this handy shortlog an alias, so we only have to type `git slog` and see this compact log:
# + language="bash"
# cd test
#
# # We create our alias (this saves it in git's permanent configuration file):
# git config --global alias.slog "log --oneline --topo-order --graph"
#
# # And now we can use it
# git slog
# -
# ### `git mv` and `rm`: moving and removing files
# While `git add` is used to add files to the list git tracks, we must also tell it if we want their names to change or for it to stop tracking them. In familiar Unix fashion, the `mv` and `rm` git commands do precisely this:
# + language="bash"
# cd test
#
# git mv file1.txt file-newname.txt
# git status
# -
# Note that these changes must be committed too, to become permanent! In git's world, until something has been committed, it isn't permanently recorded anywhere.
# + language="bash"
# cd test
#
# git commit -a -m"I like this new name better"
# echo "Let's look at the log again:"
# git slog
# -
# And `git rm` works in a similar fashion.
# ### Exercise
# Add a new file `file2.txt`, commit it, make some changes to it, commit them again, and then remove it (and don't forget to commit this last step!).
# + [markdown] slideshow={"slide_type": "slide"}
# ## Local user, branching
# -
# What is a branch? Simply a *label for the 'current' commit in a sequence of ongoing commits*:
#
# 
#
# <!-- offline:
# <img src="https://raw.github.com/fperez/reprosw/master/fig/masterbranch.png" >
# -->
# There can be multiple branches alive at any point in time; the working directory reflects whatever commit a special pointer called HEAD points to. In this example there are two branches, *master* and *testing*, and *testing* is the currently active branch since it's what HEAD points to:
#
# 
#
# <!-- offline:
# <img src="https://raw.github.com/fperez/reprosw/master/fig/HEAD_testing.png" >
# -->
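# Conceptually, a branch is just an entry in a name-to-hash table, and HEAD names the currently active entry. A toy sketch with hypothetical hashes:

```python
# Branches are movable labels pointing at commits
branches = {'master': 'f30ab', 'testing': 'f30ab'}
HEAD = 'testing'  # the currently checked-out branch

def commit(new_hash):
    # A new commit moves only the label that HEAD names
    branches[HEAD] = new_hash

commit('87ab2')
print(branches)  # {'master': 'f30ab', 'testing': '87ab2'}
```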
# Once new commits are made on a branch, HEAD and the branch label move with the new commits:
#
# 
# This allows the history of both branches to diverge:
#
# 
# But based on this graph structure, git can compute the necessary information to merge the divergent branches back and continue with a unified line of development:
#
# 
# Let's now illustrate all of this with a concrete example. Let's get our bearings first:
# + language="bash"
# cd test
#
# git status
# ls
# -
# We are now going to try two different routes of development: on the `master` branch we will add one file and on the `experiment` branch, which we will create, we will add a different one. We will then merge the experimental branch into `master`.
# + language="bash"
# cd test
#
# git branch experiment
# git checkout experiment
# + language="bash"
# cd test
#
# echo "Some crazy idea" > experiment.txt
# git add experiment.txt
# git commit -a -m"Trying something new"
# git slog
# + language="bash"
# cd test
#
# git checkout master
# git slog
# + language="bash"
# cd test
#
# echo "All the while, more work goes on in master..." >> file-newname.txt
# git commit -a -m"The mainline keeps moving"
# git slog
# -
# By default, all variations of the git `log` commands only show the currently active branch. If we want to see *all* branches, we can ask for them with the `--all` flag:
# + language="bash"
# cd test
#
# git slog --all
# -
# Above, we can see the commit whose message is `Trying something new`, which comes from the `experiment` branch.
# + language="bash"
# cd test
#
# ls
# + language="bash"
# cd test
#
# git merge experiment
# git slog
# + [markdown] slideshow={"slide_type": "slide"}
# ## Using remotes as a single user
# -
# We are now going to introduce the concept of a *remote repository*: a pointer to another copy of the repository that lives on a different location. This can be simply a different path on the filesystem or a server on the internet.
#
# For this discussion, we'll be using remotes hosted on the [GitHub.com](http://github.com) service, but you can equally use other services like [BitBucket](http://bitbucket.org) or [Gitorious](http://gitorious.org) as well as host your own.
# + language="bash"
# cd test
#
# ls
# echo "Let's see if we have any remote repositories here:"
# git remote -v
# -
# Since the above cell didn't produce any output after the `git remote -v` call, it means we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub, go to the [new repository page](https://github.com/new) and make a repository called `test`. Do **not** check the box that says `Initialize this repository with a README`, since we already have an existing repository here. That option is useful when you're starting out on GitHub and don't already have a repository on a local computer.
#
# We can now follow the instructions from the next page:
# + language="bash"
# cd test
#
# git remote add origin https://github.com/fperez/test.git
# git push -u origin master
# -
# Let's see the remote situation again:
# + language="bash"
# cd test
#
# git remote -v
# -
# We can now [see this repository publicly on github](https://github.com/fperez/test).
#
# Let's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory...
# + language="bash"
#
# # Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer
# git clone https://github.com/fperez/test.git test2
# cd test2
# pwd
# git remote -v
# -
# Let's now make some changes in one 'computer' and synchronize them on the second.
# + language="bash"
# cd test2 # working on computer #2
#
# echo "More new content on my experiment" >> experiment.txt
# git commit -a -m"More work, on machine #2"
# -
# Now we put this new work up on the github server so it's available from the internet
# + language="bash"
# cd test2
#
# git push
# -
# Now let's fetch that work from machine #1:
# + language="bash"
# cd test
#
# git pull
# -
# ### An important aside: conflict management
# While git is very good at merging, if two different branches modify the same file in the same location, it simply can't decide which change should prevail. At that point, human intervention is necessary to make the decision. Git will help you by marking the location in the file that has a problem, but it's up to you to resolve the conflict. Let's see how that works by intentionally creating a conflict.
#
# We start by creating a branch and making a change to our experiment file:
# + language="bash"
# cd test
#
# git branch trouble
# git checkout trouble
# echo "This is going to be a problem..." >> experiment.txt
# git commit -a -m"Changes in the trouble branch"
# -
# And now we go back to the master branch, where we change the *same* file:
# + language="bash"
# cd test
#
# git checkout master
# echo "More work on the master branch..." >> experiment.txt
# git commit -a -m"Mainline work"
# -
# So now let's see what happens if we try to merge the `trouble` branch into `master`:
# + language="bash"
# cd test
#
# git merge trouble
# -
# Let's see what git has put into our file:
# + language="bash"
# cd test
#
# cat experiment.txt
# -
# At this point, we go into the file with a text editor, decide which changes to keep, and make a new commit that records our decision. I've now made the edits, in this case I decided that both pieces of text were useful, but integrated them with some changes:
# + language="bash"
# cd test
#
# cat experiment.txt
# -
# Let's then make our new commit:
# + language="bash"
# cd test
#
# git commit -a -m"Completed merge of trouble, fixing conflicts along the way"
# git slog
# -
# *Note:* While it's a good idea to understand the basics of fixing merge conflicts by hand, in some cases you may find the use of an automated tool useful. Git supports multiple [merge tools](https://www.kernel.org/pub/software/scm/git/docs/git-mergetool.html): a merge tool is a piece of software that conforms to a basic interface and knows how to merge two files into a new one. These are typically graphical tools, with various options available on the different operating systems, and as long as they obey a basic command structure, git can work with any of them.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Collaborating on github with a small team
# -
# Single remote with shared access: we are going to set up a shared collaboration with one partner (the person sitting next to you). This will show the basic workflow of collaborating on a project with a small team where everyone has write privileges to the same repository.
#
# Note for SVN users: this is similar to the classic SVN workflow, with the distinction that commit and push are separate steps. SVN, having no local repository, commits directly to the shared central resource, so to a first approximation you can think of `svn commit` as being synonymous with `git commit; git push`.
#
# We will have two people, let's call them Alice and Bob, sharing a repository. Alice will be the owner of the repo and she will give Bob write privileges.
# We begin with a simple synchronization example, much like we just did above, but now between *two people* instead of one person. Otherwise it's the same:
#
# - Bob clones Alice's repository.
# - Bob makes changes to a file and commits them locally.
# - Bob pushes his changes to github.
# - Alice pulls Bob's changes into her own repository.
# Next, we will have both parties make non-conflicting changes each, and commit them locally. Then both try to push their changes:
#
# - Alice adds a new file, `alice.txt` to the repo and commits.
# - Bob adds `bob.txt` and commits.
# - Alice pushes to github.
# - Bob tries to push to github. What happens here?
#
# The problem is that Bob's changes create a commit that conflicts with Alice's, so git refuses to apply them. It forces Bob to first do the merge on his machine, so that if there is a conflict in the merge, Bob deals with the conflict manually (git could try to do the merge on the server, but in that case if there's a conflict, the server repo would be left in a conflicted state without a human to fix things up). The solution is for Bob to first pull the changes (pull in git is really fetch+merge), and then push again.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Full-contact github: distributed collaboration with large teams
# -
# Multiple remotes and merging based on pull request workflow: this is beyond the scope of this brief tutorial, so we'll simply discuss how it works very briefly, illustrating it with the activity on the [IPython github repository](http://github.com/ipython/ipython).
# ## Other useful commands
# - [show](http://www.kernel.org/pub/software/scm/git/docs/git-show.html)
# - [reflog](http://www.kernel.org/pub/software/scm/git/docs/git-reflog.html)
# - [rebase](http://www.kernel.org/pub/software/scm/git/docs/git-rebase.html)
# - [tag](http://www.kernel.org/pub/software/scm/git/docs/git-tag.html)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Git resources
# -
# ### Introductory materials
# There are lots of good tutorials and introductions for Git, which you
# can easily find yourself; this is just a short list of things I've found
# useful. For a beginner, I would recommend the following 'core' reading list, and
# below I mention a few extra resources:
#
# 1. The smallest, and in the style of this tutorial: [git - the simple guide](http://rogerdudler.github.com/git-guide)
# contains 'just the basics'. Very quick read.
#
# 1. In my own experience, the most useful resource was [Understanding Git
# Conceptually](http://www.sbf5.com/~cduan/technical/git).
# Git has a reputation for being hard to use, but I have found that with a
# clear view of what is actually a *very simple* internal design, its
# behavior is remarkably consistent, simple and comprehensible.
#
# 1. For more detail, see the start of the excellent [Pro Git book](http://book.git-scm.com).
#
# 1. You can also [try Git in your browser](https://try.github.io) thanks to GitHub's interactive tutorial.
#
# If you are really impatient and just want a quick start, this [visual git tutorial](http://www.ralfebert.de/blog/tools/visual_git_tutorial_1)
# may be sufficient. It is nicely illustrated with diagrams that show what happens on the filesystem.
#
# For windows users, [an Illustrated Guide to Git on Windows](http://nathanj.github.com/gitguide/tour.html) is useful in that
# it contains also some information about handling SSH (necessary to interface with git hosted on remote servers when collaborating) as well
# as screenshots of the Windows interface.
#
# Cheat sheets: a useful [summary of common commands](https://github.com/nerdgirl/git-cheatsheet-visual/blob/master/gitcheatsheet.pdf) in PDF format that can be printed for frequent reference. [Another nice PDF one](https://services.github.com/on-demand/downloads/github-git-cheat-sheet.pdf).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Beyond the basics
# -
# At some point, it will pay off to understand how git itself is *built*. These two documents, written in a similar spirit, are probably the most useful descriptions of the Git architecture short of diving into the actual implementation. They walk you through
# how you would go about building a version control system with a little story. By the end you realize that Git's model is almost an inevitable outcome of the proposed constraints:
#
# The [Git parable](http://tom.preston-werner.com/2009/05/19/the-git-parable.html) by <NAME>.
#
# [Git foundations](http://matthew-brett.github.com/pydagogue/foundation.html) by <NAME>.
#
# [Git ready](http://www.gitready.com): A great website of posts on specific git-related topics, organized by difficulty.
#
# [QGit](http://sourceforge.net/projects/qgit/): an excellent Git GUI.
#
# Git ships by default with gitk and git-gui, a pair of Tk graphical clients to browse a repo and to operate in it. I personally have found [qgit](http://sourceforge.net/projects/qgit/) to be nicer and easier to use. It is available on modern linux distros, and since it is based on Qt, it should run on OSX and Windows.
#
# [Git Magic](http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html)
#
# Another book-size guide that has useful snippets.
#
# A [port](http://cworth.org/hgbook-git/tour) of the Hg book's beginning
#
# The [Mercurial book](http://hgbook.red-bean.com) has a reputation for clarity, so <NAME> decided to [port](http://cworth.org/hgbook-git/tour) its introductory chapter to Git. It's a nicely written intro, which is possible in good measure because of how similar the underlying models of Hg and Git ultimately are.
#
#
# Finally, if you prefer a video presentation, this 1-hour tutorial prepared by the GitHub educational team will walk you through the entire process:
from IPython.display import YouTubeVideo
YouTubeVideo('U8GBXvdmHT4')
# ### A few useful tips for common tasks
# #### Better shell support
#
# Adding git branch info to your bash prompt and tab completion for git commands and branches is extremely useful. I suggest you at least copy:
#
# - [git-completion.bash](https://github.com/git/git/blob/master/contrib/completion/git-completion.bash)
# - [git-prompt.sh](https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh)
#
# You can then source both of these files in your `~/.bashrc` and then set your prompt (I'll assume you named them as the originals but starting with a `.` at the front of the name):
#
# source $HOME/.git-completion.bash
# source $HOME/.git-prompt.sh
# PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ ' # adjust this to your prompt liking
#
# See the comments in both of those files for lots of extra functionality they offer.
# #### Embedding Git information in LaTeX documents
#
# (Sent by [<NAME>](http://www.onerussian.com))
#
# I use a Make rule:
#
# # Helper if interested in providing proper version tag within the manuscript
# revision.tex: ../misc/revision.tex.in ../.git/index
# GITID=$$(git log -1 | grep -e '^commit' -e '^Date:' | sed -e 's/^[^ ]* *//g' | tr '\n' ' '); \
# echo $$GITID; \
# sed -e "s/GITID/$$GITID/g" $< >| $@
#
# in the top level `Makefile.common` which is included in all
# subdirectories which actually contain papers (hence all those
# `../.git`). The `revision.tex.in` file is simply:
#
# % Embed GIT ID revision and date
# \def\revision{GITID}
#
# The corresponding `paper.pdf` depends on `revision.tex` and includes the
# line `\input{revision}` to load up the actual revision mark.
# #### git export
#
# Git doesn't have a native export command, but this works just fine:
#
# git archive --prefix=fperez.org/ master | gzip > ~/tmp/source.tgz
| 01-Git-Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from datetime import datetime
import numpy as np
import pandas as pd
from data import Data
from minisom import MiniSom
import matplotlib.pyplot as plt
# +
quasi_identifiers = ['age', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week']
data = Data()  # load the data before selecting features
features = data.X[quasi_identifiers]
target = data.y
# Hyperparameters to be tuned
width=10
height=10
sigma=.9
lr=.2
#epochs=1e5
epochs = 10
verbose=True
log = 1000
# -
# # Data + SOM
# +
som_start=datetime.now()
data = Data()
som = MiniSom(width, height, features.shape[1], sigma, lr)
som.train_random(features.values, int(epochs), verbose=True)
out = []
for step, (X, y) in enumerate(zip(features.values, target)):
new_X = som.winner(X)
out.append((new_X, X, y))
if(verbose == True and step % log == 0):
print(f'*Creating SOM: [{step}/{features.shape[0]}]')
som_data = np.array(out)
new_data = []
new_X = []
X = []
y = []
for i in range(0, len(som_data[:,0])):
new_X.append(np.asarray(som_data[:,0][i]))
X.append(np.asarray(som_data[:,1][i]))
y.append(np.asarray(som_data[:,2][i]))
new_data = (new_X, X, y)
print("Time required for SOM: " + str(datetime.now()-som_start))
new_data
# +
from sklearn.cluster import KMeans
from collections import Counter, defaultdict
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras import Sequential
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, roc_auc_score
# -
class TrainingModel:
def __init__(self, input_shape):
self.model = Sequential()
self.model.add(Dense(64, activation='relu', input_shape=input_shape))
self.model.add(Dropout(0.3))
self.model.add(Dense(128, activation='relu'))
self.model.add(Dropout(0.3))
self.model.add(Dense(128, activation='relu'))
self.model.add(Dense(1, activation='sigmoid'))
self.model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
def fit(self, data, label):
self.model.fit(data, label, epochs=1, batch_size=128, verbose=0)
    def predict(self, data):
        # predict_classes was removed from recent Keras; threshold the sigmoid output instead
        return (self.model.predict(data) > 0.5).astype('int32')
def evaluate(self, X_test, y_test, print_report=True):
y_predicted = self.predict(X_test)
        y_predicted_probs = self.model.predict(X_test)  # predict_proba was removed; predict returns the sigmoid probabilities
if print_report:
self.print_report(y_test, y_predicted, y_predicted_probs)
else:
accuracy = accuracy_score(y_test, y_predicted)
report = classification_report(y_test, y_predicted, output_dict=True)
auc_score = roc_auc_score(y_test, y_predicted_probs)
matrix = confusion_matrix(y_test, y_predicted)
return {
'accuracy': accuracy,
'auc_score': auc_score,
**report['weighted avg'],
}
def print_report(self, test, predicted, predicted_probs):
accuracy = accuracy_score(test, predicted)
report = classification_report(test, predicted)
matrix = confusion_matrix(test, predicted)
print('Accuracy score: {:.5f}'.format(accuracy))
print('-' * 20)
print('Confusion Matrix:')
print(matrix)
print('-' * 20)
print(report)
print('-' * 20)
print('AUC score: {:.5f}'.format(roc_auc_score(test, predicted_probs)))
# # K means + Model
# +
# Feed the SOM-mapped data set into K-means.
# The number of K-means clusters depends on k (the k of k-anonymity):
# start from K = n/k and decrease until every cluster holds at least k records.
# Also measure the running time.
sizes = [5, 10, 15, 20, 30, 50]
#k = 5
#K = int(len(new_data[0])/k)
#cluster = True
#c = Counter()
for k in sizes:
perturb_start=datetime.now()
print("K-Anonymity: k=" + str(k))
K = int(len(new_data[0])/k)
cluster = True
while cluster:
clf = KMeans(n_clusters=K)
clf.fit(new_data[0])
c = Counter(clf.labels_)
print("K=" + str(K))
for i in range(0,K):
print("cluster " + str(i) + " has " + str(c[i]) + " data points")
if c[i]<k:
break
else:
if i == K-1:
cluster = False
else:
pass
K = K-1
K = K+1
print("The Resulting number of clusters: " + str(K))
# Map the clustered records back onto the original feature values
# so they can be fed to the neural network.
data = pd.concat([pd.DataFrame(new_data[1],columns=['age', 'educational-num','capital-gain', 'capital-loss', 'hours-per-week']),
pd.DataFrame(new_data[2],columns=['target']), pd.DataFrame(clf.labels_,columns=['cluster'])], axis = 1)
columns = ['age', 'educational-num','capital-gain', 'capital-loss', 'hours-per-week','target','cluster']
index = range(0,len(data))
data_perturbed = pd.DataFrame(index = index, columns=columns)
for i in range(0, len(data)):
for j in range(0,K+1):
if data['cluster'][i] == j:
data_perturbed[columns[0]][i] = float(data.groupby(by='cluster').mean()[columns[0]][j])
data_perturbed[columns[1]][i] = float(data.groupby(by='cluster').mean()[columns[1]][j])
data_perturbed[columns[2]][i] = float(data.groupby(by='cluster').mean()[columns[2]][j])
data_perturbed[columns[3]][i] = float(data.groupby(by='cluster').mean()[columns[3]][j])
data_perturbed[columns[4]][i] = float(data.groupby(by='cluster').mean()[columns[4]][j])
data_perturbed['target'][i] = data['target'][i]
data_perturbed['cluster'][i] = data['cluster'][i]
data_sorted = data_perturbed.sort_values(by=['cluster'])
data_ready = data_sorted.drop('cluster', axis=1)
print("Time required for perturbation: " + str(datetime.now()-perturb_start))
X_train, X_test, y_train, y_test = train_test_split(data_ready.iloc[:,0:5], data_ready.iloc[:,5:], test_size=0.2)
model = TrainingModel((5,))
model.fit(X_train, y_train)
print(model.predict(X_test))
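# The stopping rule above (every cluster must hold at least k records) is exactly the k-anonymity condition on cluster sizes. A standalone sketch of that check (an illustrative helper, not part of the notebook's own code):

```python
from collections import Counter

def satisfies_k_anonymity(labels, k):
    # Every equivalence class (cluster) must have at least k members
    counts = Counter(labels)
    return min(counts.values()) >= k

labels = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
print(satisfies_k_anonymity(labels, 3))  # True
print(satisfies_k_anonymity(labels, 4))  # False
```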
| Final_Project_Data+SOM+Kmeans+Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example 6 : TS with Geopsy Profiles
#
# Time series analysis to compute surface response spectrum and site
# amplification functions using velocity profiles from [geopsy](http://www.geopsy.org/).
# +
import re
import matplotlib.pyplot as plt
import numpy as np
import pysra
# %matplotlib inline
# -
# Increased figure sizes
plt.rcParams['figure.dpi'] = 120
# ## Function to parse the geospy files
def iter_geopsy_profiles(fname):
"""Read a Geopsy formatted text file created by gpdcreport."""
with open(fname) as fp:
next(fp)
while True:
try:
line = next(fp)
except StopIteration:
break
m = re.search(r'Layered model (\d+): value=([0-9.]+)', line)
id, score = m.groups()
count = int(next(fp))
d = {
'id': id,
'score': score,
'layers': [],
}
cols = ['thickness', 'vel_comp', 'vel_shear', 'density']
for _ in range(count):
values = [float(p) for p in next(fp).split()]
d['layers'].append(dict(zip(cols, values)))
yield d
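# The regex above expects header lines like `Layered model 3: value=0.75` from `gpdcreport`; a quick sanity check of the pattern:

```python
import re

line = 'Layered model 3: value=0.75'
m = re.search(r'Layered model (\d+): value=([0-9.]+)', line)
print(m.groups())  # ('3', '0.75')
```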
# ## Create the input motion
fname = 'data/NIS090.AT2'
ts = pysra.motion.TimeSeriesMotion.load_at2_file(fname)
ts.accels
# ## Create the site response calculator
calc = pysra.propagation.LinearElasticCalculator()
# ## Specify the output
# +
freqs = np.logspace(-1, 2, num=500)
outputs = pysra.output.OutputCollection([
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation('outcrop', index=0),
# Damping
0.05),
pysra.output.ResponseSpectrumRatioOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation('outcrop', index=-1),
# Location out (numerator)
pysra.output.OutputLocation('outcrop', index=0),
# Damping
0.05),
])
# -
# ## Create site profiles
#
# Iterate over the geopsy profiles and create a site profile. For this example, we just use linear elastic properties.
# +
fname = 'data/best100_GM_linux.txt'
for geopsy_profile in iter_geopsy_profiles(fname):
profile = pysra.site.Profile([
pysra.site.Layer(
pysra.site.SoilType(
'soil-%d' % i, l['density'] / pysra.site.GRAVITY,
damping=0.05), l['thickness'], l['vel_shear'])
for i, l in enumerate(geopsy_profile['layers'])
])
# Use 1% damping for the half-space
profile[-1].soil_type.damping = 0.01
# Compute the waves from the last layer
calc(ts, profile, profile.location('outcrop', index=-1))
# Compute the site amplification
outputs(calc)
# -
# ## Plot the outputs
#
# Create a few plots of the output.
for o in outputs:
fig, ax = plt.subplots()
ax.plot(o.refs, o.values, color='C0', alpha=0.6, linewidth=0.2)
ax.set(xlabel=o.xlabel, xscale='log', ylabel=o.ylabel)
fig.tight_layout();
| examples/example-06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:miniconda-ctsm]
# language: python
# name: conda-env-miniconda-ctsm-py
# ---
# # Analyse atm variables of the FHIST f09_f09 simulations (RES-CTL)
# ## 1. Settings
# ### 1.1 Import the necessary python libraries
# +
from __future__ import print_function
import sys
import os
from getpass import getuser
import string
import subprocess
import numpy as np
import matplotlib.pyplot as plt
import netCDF4 as netcdf4
import xarray as xr
import pandas
import warnings
import itertools
from iv_utils import *
# -
# ### 1.2 General Settings
# +
# set directories
outdir = '/glade/work/ivanderk/'
# Define directory where processing is done -- subject to change
procdir = outdir + 'postprocessing/'
# go to processing directory
os.chdir(procdir)
# ignore all runtime warnings
warnings.filterwarnings('ignore')
# -
# ### 1.3 User settings
# +
# set case name
case_res = 'f.FHIST.f09_f09_mg17.CTL'
case_nores = 'f.FHIST.f09_f09_mg17.NORES'
# set number of ensemble members
n_ens = 5
# set individual case names for reference
case_res_ind = 'f.FHIST.f09_f09_mg17.CTL.001'
case_nores_ind = 'f.FHIST.f09_f09_mg17.NORES.001'
case = 'f.FHIST.f09_f09_mg17.CTL.001'
# run settings -- change this to terms directly?
block = 'atm'    # data block: 'lnd', 'atm' or 'rof'
stream = 'h0'    # output stream: 'h0', 'h1', 'h2', or 'xtrm' (calculated, annual)
# define start and end year
spstartyear = '1979' # spin up start year
startyear = '1984' # start year, spin up excluded
endyear = '2014' # last year of the simulation
# -
# ## Analysis of the number of ensemble members (detection of the forced response)
# ## Plot 5 Members in violinplots
def plot_violins_enssize_response_5m(var, ax=False, ylims = False, panel_label = False):
n_ens = 5
resmask = get_resmask(threshold=0)
# open all ensemble members and calculate temporal and spatial mean
da_res_em = open_da_ens(var,case=case_res, mode='all').where(resmask)
da_nores_em = open_da_ens(var,case=case_nores, mode='all').where(resmask)
da_res_em_mean = da_res_em.mean(dim=('time','lat','lon'))
da_nores_em_mean = da_nores_em.mean(dim=('time','lat','lon'))
# calculate combinations with 1 ensemble member and calculate average over those ensemble members
mean_1_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 1))).mean(axis=1)
mean_1_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 1))).mean(axis=1)
[meshgrid_1_nores, meshgrid_1_res] = np.meshgrid(mean_1_nores, mean_1_res)
delta_1m = (meshgrid_1_res - meshgrid_1_nores).flatten()
# calculate combinations with 2 ensemble members and calculate average over those ensemble members
mean_2_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 2))).mean(axis=1)
mean_2_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 2))).mean(axis=1)
[meshgrid_2_nores, meshgrid_2_res] = np.meshgrid(mean_2_nores, mean_2_res)
delta_2m = (meshgrid_2_res - meshgrid_2_nores).flatten()
print('mean_2_nores')
print(mean_2_nores.shape)
# calculate combinations with 3 ensemble members and calculate average over those ensemble members
mean_3_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 3))).mean(axis=1)
mean_3_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 3))).mean(axis=1)
[meshgrid_3_nores, meshgrid_3_res] = np.meshgrid(mean_3_nores, mean_3_res)
delta_3m = (meshgrid_3_res - meshgrid_3_nores).flatten()
# calculate combinations with 4 ensemble members and calculate average over those ensemble members
mean_4_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 4))).mean(axis=1)
mean_4_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 4))).mean(axis=1)
[meshgrid_4_nores, meshgrid_4_res] = np.meshgrid(mean_4_nores, mean_4_res)
delta_4m = (meshgrid_4_res - meshgrid_4_nores).flatten()
# calculate combinations with 5 ensemble members and calculate average over those ensemble members
mean_5_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 5))).mean(axis=1)
mean_5_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 5))).mean(axis=1)
[meshgrid_5_nores, meshgrid_5_res] = np.meshgrid(mean_5_nores, mean_5_res)
delta_5m = (meshgrid_5_res - meshgrid_5_nores).flatten()
data = np.asarray([delta_1m, delta_2m, delta_3m, delta_4m, delta_5m])
print(var+' range 1 ens member: '+str(delta_1m.min()) +' - '+ str(delta_1m.max()))
print()
# VIOLIN PLOT
if not ax:
fig , ax = plt.subplots()
# create dummy violinplot to have white left sides
parts1 = ax.violinplot(data, showmeans=False, showmedians=False,
showextrema=False)
for pc in parts1['bodies']:
pc.set_facecolor('palegoldenrod')
pc.set_edgecolor('darkgoldenrod')
pc.set_alpha(1)
# remove the left half of plot
# get the center
m = np.mean(pc.get_paths()[0].vertices[:, 0])
# modify the paths to not go further right than the center
pc.get_paths()[0].vertices[:, 0] = np.clip(pc.get_paths()[0].vertices[:, 0], m, np.inf)
pc.set_color('w')
# this is the real violinplot
parts2 = ax.violinplot(data, showmeans=False, showmedians=False,
showextrema=False, widths=1)
for pc in parts2['bodies']:
m = np.mean(pc.get_paths()[0].vertices[:, 0])
# modify the paths to not go further right than the center
pc.get_paths()[0].vertices[:, 0] = np.clip(pc.get_paths()[0].vertices[:, 0], m, np.inf)
pc.set_facecolor('palegoldenrod')
pc.set_edgecolor('tan')
pc.set_alpha(1)
# remove the line in the fifth violinplot
parts2['bodies'][-1].set_edgecolor('w')
# plot medians and IQRs
# helper function
def adjacent_values(vals, q1, q3):
upper_adjacent_value = q3 + (q3 - q1) * 1.5
upper_adjacent_value = np.clip(upper_adjacent_value, q3, vals[-1])
lower_adjacent_value = q1 - (q3 - q1) * 1.5
lower_adjacent_value = np.clip(lower_adjacent_value, vals[0], q1)
return lower_adjacent_value, upper_adjacent_value
medians = []
whiskers_min = []
whiskers_max = []
quartiles1 = []
quartiles3 = []
for ensmember in data:
(quartile1, median, quartile3) = np.percentile(ensmember, [25, 50, 75])
whisker = np.array([adjacent_values(ensmember, quartile1, quartile3)])
whisker_min, whisker_max = whisker[:, 0], whisker[:, 1]
medians = np.append(medians,median)
whiskers_min = np.append(whiskers_min, whisker_min)
whiskers_max = np.append(whiskers_max, whisker_max)
quartiles1 = np.append(quartiles1, quartile1)
quartiles3 = np.append(quartiles3, quartile3)
inds = np.arange(1, len(medians) + 1.0)
inds = inds + 0.02
ax.scatter(inds, medians, marker='o', color='maroon', s=20, zorder=3)
ax.vlines(inds, quartiles1, quartiles3, color='darkgoldenrod', linestyle='-', lw=4)
#ax.vlines(inds, whiskers_min, whiskers_max, color='goldenrod', linestyle='-', lw=3)
# do axes settings
ax.set_xticks([1,2,3, 4, 5])
ax.set_xlim((0.5,5.5))
if ylims != False:
ax.set_ylim(ylims)
if panel_label != False:
ax.text(0, 1.02, panel_label, color='dimgrey', fontsize=12, transform=ax.transAxes, weight = 'bold')
ax.set_xlabel('Number of ensemble members')
ax.set_ylabel(da_res_em.units);
if var == ('TSMN'): da_res_em.name = 'TNn'
if var == ('TSMX'): da_res_em.name = 'TXx'
if var == ('TREFHT'): da_res_em.name = 'T$_{2m}$'
ax.set_title('$\Delta$ '+da_res_em.name, loc='right', size=12)
ax.axhline(0, color='lightgray', linewidth=1)
return ax
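The subset logic above, averaging every possible $k$-member combination of each ensemble and then differencing every RES/NORES pairing via `meshgrid`, can be sketched in isolation with toy numbers (the ensemble means below are hypothetical, not from the simulations):

```python
import itertools

import numpy as np

# Hypothetical ensemble-mean values for 5 members of each experiment
res_means = np.array([1.00, 1.20, 0.90, 1.10, 1.05])
nores_means = np.array([0.80, 1.00, 0.95, 0.90, 1.10])

def subset_mean_deltas(res, nores, k):
    """Differences between the means of every k-member subset pairing."""
    m_res = np.asarray(list(itertools.combinations(res, k))).mean(axis=1)
    m_nores = np.asarray(list(itertools.combinations(nores, k))).mean(axis=1)
    # meshgrid pairs every RES subset with every NORES subset
    grid_nores, grid_res = np.meshgrid(m_nores, m_res)
    return (grid_res - grid_nores).flatten()

# C(5, k)^2 pairings per subset size k
for k in range(1, 6):
    print(k, subset_mean_deltas(res_means, nores_means, k).shape)
```

For 5 members this yields C(5,k)^2 deltas per subset size, which is why the violins narrow as k grows: with k = 5 only a single delta (the full-ensemble response) remains.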
# +
# 5 members
set_plot_param()
fig, axes = plt.subplots(2,2, figsize=(8,6))
ylims = [-0.35,0.5]
axes = axes.flatten()
plot_violins_enssize_response_5m('TREFHT', ax = axes[0], ylims = ylims, panel_label='a')
plot_violins_enssize_response_5m( 'DTR' , ax = axes[1], ylims = ylims, panel_label='b')
plot_violins_enssize_response_5m( 'TSMX' , ax = axes[2], ylims = ylims, panel_label='c')
plot_violins_enssize_response_5m( 'TSMN' , ax = axes[3], ylims = ylims, panel_label='d')
fig.tight_layout()
fig.savefig('./plots/ens_size.png', bbox_inches='tight')
# -
# ### 3 ensemble members
# +
# set parameters for plotting
set_plot_param()
# 3 members
def plot_enssize_response_3m(var):
n_ens = 3
resmask = get_resmask(threshold=0)
# open all ensemble members and calculate temporal and spatial mean
da_res_em = open_da_ens(var,case=case_res, mode='all').where(resmask)
da_nores_em = open_da_ens(var,case=case_nores, mode='all').where(resmask)
da_res_em_mean = da_res_em.mean(dim=('time','lat','lon'))
da_nores_em_mean = da_nores_em.mean(dim=('time','lat','lon'))
# calculate combinations with 1 ensemble member and calculate average over those ensemble members
mean_1_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 1))).mean(axis=1)
mean_1_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 1))).mean(axis=1)
[meshgrid_1_nores, meshgrid_1_res] = np.meshgrid(mean_1_nores, mean_1_res)
delta_1m = (meshgrid_1_res - meshgrid_1_nores).flatten()
# calculate combinations with 2 ensemble members and calculate average over those ensemble members
mean_2_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 2))).mean(axis=1)
mean_2_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 2))).mean(axis=1)
[meshgrid_2_nores, meshgrid_2_res] = np.meshgrid(mean_2_nores, mean_2_res)
delta_2m = (meshgrid_2_res - meshgrid_2_nores).flatten()
# calculate combinations with 3 ensemble members and calculate average over those ensemble members
mean_3_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 3))).mean(axis=1)
mean_3_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 3))).mean(axis=1)
[meshgrid_3_nores, meshgrid_3_res] = np.meshgrid(mean_3_nores, mean_3_res)
delta_3m = (meshgrid_3_res - meshgrid_3_nores).flatten()
# plot values per possible combination of ensemble members
fig , ax = plt.subplots()
ax.plot(np.ones(delta_1m.shape)* 1,delta_1m, 'ro', color='goldenrod')
ax.plot(np.ones(delta_2m.shape)* 2,delta_2m, 'ro', color='goldenrod')
ax.plot(np.ones(delta_3m.shape)* 3,delta_3m,'ro', color='goldenrod')
ax.set_xticks([1,2,3])
ax.set_xlim((0.5,3.5))
ax.set_xlabel('Number of ensemble members')
ax.set_ylabel('$\Delta$ '+da_res_em.name+' ('+da_res_em.units+')');
ax.set_title(da_res_em.long_name, loc='right', size=12)
ax.axhline(0, color='lightgray', linewidth=1)
# for 5 members
def plot_enssize_response_5m(var, ax = False):
n_ens = 5
resmask = get_resmask(threshold=0)
# open all ensemble members and calculate temporal and spatial mean
da_res_em = open_da_ens(var,case=case_res, mode='all')#.where(resmask)
da_nores_em = open_da_ens(var,case=case_nores, mode='all')#.where(resmask)
da_res_em_mean = da_res_em.mean(dim=('time','lat','lon'))
da_nores_em_mean = da_nores_em.mean(dim=('time','lat','lon'))
# calculate combinations with 1 ensemble member and calculate average over those ensemble members
mean_1_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 1))).mean(axis=1)
mean_1_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 1))).mean(axis=1)
[meshgrid_1_nores, meshgrid_1_res] = np.meshgrid(mean_1_nores, mean_1_res)
delta_1m = (meshgrid_1_res - meshgrid_1_nores).flatten()
# calculate combinations with 2 ensemble members and calculate average over those ensemble members
mean_2_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 2))).mean(axis=1)
mean_2_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 2))).mean(axis=1)
[meshgrid_2_nores, meshgrid_2_res] = np.meshgrid(mean_2_nores, mean_2_res)
delta_2m = (meshgrid_2_res - meshgrid_2_nores).flatten()
# calculate combinations with 3 ensemble members and calculate average over those ensemble members
mean_3_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 3))).mean(axis=1)
mean_3_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 3))).mean(axis=1)
[meshgrid_3_nores, meshgrid_3_res] = np.meshgrid(mean_3_nores, mean_3_res)
delta_3m = (meshgrid_3_res - meshgrid_3_nores).flatten()
# calculate combinations with 4 ensemble members and calculate average over those ensemble members
mean_4_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 4))).mean(axis=1)
mean_4_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 4))).mean(axis=1)
[meshgrid_4_nores, meshgrid_4_res] = np.meshgrid(mean_4_nores, mean_4_res)
delta_4m = (meshgrid_4_res - meshgrid_4_nores).flatten()
# calculate combinations with 5 ensemble members and calculate average over those ensemble members
mean_5_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 5))).mean(axis=1)
mean_5_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 5))).mean(axis=1)
[meshgrid_5_nores, meshgrid_5_res] = np.meshgrid(mean_5_nores, mean_5_res)
delta_5m = (meshgrid_5_res - meshgrid_5_nores).flatten()
# plot values per possible combination of ensemble members
if not ax:
fig , ax = plt.subplots()
ax.plot(np.ones(delta_1m.shape)* 1,delta_1m, 'ro', color='goldenrod')
ax.plot(np.ones(delta_2m.shape)* 2,delta_2m, 'ro', color='goldenrod')
ax.plot(np.ones(delta_3m.shape)* 3,delta_3m,'ro', color='goldenrod')
ax.plot(np.ones(delta_4m.shape)* 4,delta_4m,'ro', color='goldenrod')
ax.plot(np.ones(delta_5m.shape)* 5,delta_5m,'ro', color='goldenrod')
ax.set_xticks([1,2,3, 4, 5])
ax.set_xlim((0.5,5.5))
ax.set_xlabel('Number of ensemble members')
ax.set_ylabel('$\Delta$ '+da_res_em.name+' ('+da_res_em.units+')');
ax.set_title(da_res_em.long_name, loc='right', size=12)
ax.axhline(0, color='lightgray', linewidth=1)
# -
plot_enssize_response_3m('TREFHT')
plot_enssize_response_3m('TREFHTMX')
plot_enssize_response_3m('TREFHTMN')
plot_enssize_response_3m('PRECT')
# ## Plot 5 Members
# +
# 5 members
fig, axes = plt.subplots(2,2, figsize=(8,6))
axes = axes.flatten()
plot_enssize_response_5m('TREFHT', ax = axes[0])
plot_enssize_response_5m( 'TSMN', ax = axes[2])
plot_enssize_response_5m( 'TSMX', ax = axes[3])
plot_enssize_response_5m( 'DTR', ax = axes[1])
fig.tight_layout()
# +
var ='TREFHT'
ax=False
ylims = False
panel_label = False
n_ens = 5
resmask = get_resmask(threshold=0)
# open all ensemble members and calculate temporal and spatial mean
da_res_em = open_da_ens(var,case=case_res, mode='all').where(resmask)
da_nores_em = open_da_ens(var,case=case_nores, mode='all').where(resmask)
da_res_em_mean = da_res_em.mean(dim=('time','lat','lon'))
da_nores_em_mean = da_nores_em.mean(dim=('time','lat','lon'))
# calculate combinations with 1 ensemble member and calculate average over those ensemble members
mean_1_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 1))).mean(axis=1)
mean_1_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 1))).mean(axis=1)
[meshgrid_1_nores, meshgrid_1_res] = np.meshgrid(mean_1_nores, mean_1_res)
delta_1m = (meshgrid_1_res - meshgrid_1_nores).flatten()
# calculate combinations with 2 ensemble members and calculate average over those ensemble members
mean_2_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 2))).mean(axis=1)
mean_2_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 2))).mean(axis=1)
[meshgrid_2_nores, meshgrid_2_res] = np.meshgrid(mean_2_nores, mean_2_res)
delta_2m = (meshgrid_2_res - meshgrid_2_nores).flatten()
print('mean_2_nores')
print(mean_2_nores.shape)
# calculate combinations with 3 ensemble members and calculate average over those ensemble members
mean_3_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 3))).mean(axis=1)
mean_3_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 3))).mean(axis=1)
[meshgrid_3_nores, meshgrid_3_res] = np.meshgrid(mean_3_nores, mean_3_res)
delta_3m = (meshgrid_3_res - meshgrid_3_nores).flatten()
# calculate combinations with 4 ensemble members and calculate average over those ensemble members
mean_4_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 4))).mean(axis=1)
mean_4_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 4))).mean(axis=1)
[meshgrid_4_nores, meshgrid_4_res] = np.meshgrid(mean_4_nores, mean_4_res)
delta_4m = (meshgrid_4_res - meshgrid_4_nores).flatten()
# calculate combinations with 5 ensemble members and calculate average over those ensemble members
mean_5_nores = np.asarray(list(itertools.combinations(da_nores_em_mean.values, 5))).mean(axis=1)
mean_5_res = np.asarray(list(itertools.combinations(da_res_em_mean.values, 5))).mean(axis=1)
[meshgrid_5_nores, meshgrid_5_res] = np.meshgrid(mean_5_nores, mean_5_res)
delta_5m = (meshgrid_5_res - meshgrid_5_nores).flatten()
data = np.asarray([delta_1m, delta_2m, delta_3m, delta_4m, delta_5m])
print(var+' range 1 ens member: '+str(delta_1m.min()) +' - '+ str(delta_1m.max()))
print()
# VIOLIN PLOT
if not ax:
fig , ax = plt.subplots()
# create dummy violinplot to have white left sides
parts1 = ax.violinplot(data, showmeans=False, showmedians=False,
showextrema=False)
for pc in parts1['bodies']:
pc.set_facecolor('palegoldenrod')
pc.set_edgecolor('darkgoldenrod')
pc.set_alpha(1)
# remove the left half of plot
# get the center
m = np.mean(pc.get_paths()[0].vertices[:, 0])
# modify the paths to not go further right than the center
pc.get_paths()[0].vertices[:, 0] = np.clip(pc.get_paths()[0].vertices[:, 0], m, np.inf)
pc.set_color('w')
# this is the real violinplot
parts2 = ax.violinplot(data, showmeans=False, showmedians=False,
showextrema=False, widths=1)
for pc in parts2['bodies']:
m = np.mean(pc.get_paths()[0].vertices[:, 0])
# modify the paths to not go further right than the center
pc.get_paths()[0].vertices[:, 0] = np.clip(pc.get_paths()[0].vertices[:, 0], m, np.inf)
pc.set_facecolor('palegoldenrod')
pc.set_edgecolor('tan')
pc.set_alpha(1)
# remove the line in the fifth violinplot
parts2['bodies'][-1].set_edgecolor('w')
# plot medians and IQRs
# helper function
def adjacent_values(vals, q1, q3):
upper_adjacent_value = q3 + (q3 - q1) * 1.5
upper_adjacent_value = np.clip(upper_adjacent_value, q3, vals[-1])
lower_adjacent_value = q1 - (q3 - q1) * 1.5
lower_adjacent_value = np.clip(lower_adjacent_value, vals[0], q1)
return lower_adjacent_value, upper_adjacent_value
medians = []
whiskers_min = []
whiskers_max = []
quartiles1 = []
quartiles3 = []
for ensmember in data:
(quartile1, median, quartile3) = np.percentile(ensmember, [25, 50, 75])
whisker = np.array([adjacent_values(ensmember, quartile1, quartile3)])
whisker_min, whisker_max = whisker[:, 0], whisker[:, 1]
medians = np.append(medians,median)
whiskers_min = np.append(whiskers_min, whisker_min)
whiskers_max = np.append(whiskers_max, whisker_max)
quartiles1 = np.append(quartiles1, quartile1)
quartiles3 = np.append(quartiles3, quartile3)
inds = np.arange(1, len(medians) + 1.0)
inds = inds + 0.02
ax.scatter(inds, medians, marker='o', color='maroon', s=20, zorder=3)
ax.vlines(inds, quartiles1, quartiles3, color='darkgoldenrod', linestyle='-', lw=4)
#ax.vlines(inds, whiskers_min, whiskers_max, color='goldenrod', linestyle='-', lw=3)
# do axes settings
ax.set_xticks([1,2,3, 4, 5])
ax.set_xlim((0.5,5.5))
if ylims != False:
ax.set_ylim(ylims)
if panel_label != False:
ax.text(0, 1.02, panel_label, color='dimgrey', fontsize=12, transform=ax.transAxes, weight = 'bold')
ax.set_xlabel('Number of ensemble members')
ax.set_ylabel(da_res_em.units);
if var == ('TSMN'): da_res_em.name = 'TNn'
if var == ('TSMX'): da_res_em.name = 'TXx'
if var == ('TREFHT'): da_res_em.name = 'T$_{2m}$'
ax.set_title('$\Delta$ '+da_res_em.name, loc='right', size=12)
ax.axhline(0, color='lightgray', linewidth=1)
# -
delta_2m.shape
# (source file: .ipynb_checkpoints/pp_FHIST_enssize-checkpoint.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: icu
# language: python
# name: icu
# ---
import numpy as np
# 3 runs with random initialization and training-set sampling; the test set is fixed
scores = np.array([0.6843, 0.6475, 0.5761])
print("decompensation ap")
scores.std(), scores.mean()
scores = np.array([0.4868, 0.5236, 0.5142])
print("phenotyping ap-macro")
scores.std(), scores.mean()
scores = np.array([0.6596, 0.6782, 0.6744])
print("in-hospital-mortality ap")
scores.std(), scores.mean()
scores = np.array([53.755, 52.064, 53.724])
print("los regression")
scores.std(), scores.mean()
scores = np.array([0.7729, 0.7563, 0.7367])
print("los classification")
scores.std(), scores.mean()
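Each block above repeats the same `std`/`mean` call; a small helper makes the reporting explicit (a sketch: the `report` helper is not part of the original notebook):

```python
import numpy as np

def report(task, scores):
    """Format a metric as mean +/- standard deviation over runs."""
    s = np.asarray(scores)
    return f"{task}: {s.mean():.4f} +/- {s.std():.4f}"

print(report("decompensation ap", [0.6843, 0.6475, 0.5761]))
print(report("in-hospital-mortality ap", [0.6596, 0.6782, 0.6744]))
```

Note that `np.std` defaults to the population standard deviation (`ddof=0`), matching the cells above; pass `ddof=1` for the sample estimate.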
# (source file: notebooks/AMIA Paper.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="SzKwuqYESWwm"
# ##### Copyright 2020 The Cirq Developers
# + cellView="form" id="4yPUsdJxSXFq"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="dVkNQc0WSIwk"
# # Quantum variational algorithm
# + [markdown] id="zC1qlUJoSXhm"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://quantumai.google/cirq/tutorials/variational_algorithm"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="y0TUn0KcBcSO"
# In this tutorial, we use the [variational quantum eigensolver](https://arxiv.org/abs/1304.3061) (VQE) in Cirq to optimize a simple Ising model.
# + id="bd9529db1c0b"
try:
import cirq
except ImportError:
print("installing cirq...")
# !pip install --quiet cirq
print("installed cirq.")
# + [markdown] id="xcn4Ncad_5pT"
# ## Background: Variational Quantum Algorithm
#
# The [variational method](https://en.wikipedia.org/wiki/Variational_method_(quantum_mechanics)) in quantum theory is a classical method for finding low energy states of a quantum system. The rough idea of this method is that one defines a trial wave function (sometimes called an *ansatz*) as a function of some parameters, and then one finds the values of these parameters that minimize the expectation value of the energy with respect to these parameters. This minimized ansatz is then an approximation to the lowest energy eigenstate, and the expectation value serves as an upper bound on the energy of the ground state.
#
# In the last few years (see [arXiv:1304.3061](https://arxiv.org/abs/1304.3061) and [arXiv:1507.08969](https://arxiv.org/abs/1507.08969), for example), it has been realized that quantum computers can mimic the classical technique and that a quantum computer does so with certain advantages. In particular, when one applies the classical variational method to a system of $n$ qubits, an exponential number (in $n$) of complex numbers is necessary to generically represent the wave function of the system. However, with a quantum computer, one can directly produce this state using a parameterized quantum circuit, and then by repeated measurements estimate the expectation value of the energy.
#
# This idea has led to a class of algorithms known as variational quantum algorithms. Indeed this approach is not just limited to finding low energy eigenstates, but minimizing any objective function that can be expressed as a quantum observable. It is an open question to identify under what conditions these quantum variational algorithms will succeed, and exploring this class of algorithms is a key part of the research for [noisy intermediate scale quantum computers](https://arxiv.org/abs/1801.00862).
#
# The classical problem we will focus on is the 2D +/- Ising model with transverse field ([ISING](http://iopscience.iop.org/article/10.1088/0305-4470/15/10/028/meta)). This problem is NP-complete. So it is highly unlikely that quantum computers will be able to efficiently solve it across all instances. Yet this type of problem is illustrative of the general class of problems that Cirq is designed to tackle.
#
#
# Consider the energy function
#
# $E(s_1,\dots,s_n) = \sum_{\langle i,j \rangle} J_{i,j}s_i s_j + \sum_i h_i s_i$
#
# where each $s_i$, $J_{i,j}$, and $h_i$ is either +1 or -1. Each index $i$ is associated with a bit on a square lattice, and the $\langle i,j \rangle$ notation means a sum over neighboring bits on this lattice. The problem we would like to solve is: given $J_{i,j}$ and $h_i$, find an assignment of $s_i$ values that minimizes $E$.
#
# How does a variational quantum algorithm work for this? One approach is to consider $n$ qubits and associate them with each of the bits in the classical problem. This maps the classical problem onto the quantum problem of minimizing the expectation value of the observable
#
# $H=\sum_{\langle i,j \rangle} J_{i,j} Z_i Z_j + \sum_i h_iZ_i$
#
# Then one defines a set of parameterized quantum circuits, i.e., a quantum circuit where the gates (or more general quantum operations) are parameterized by some values. This produces an ansatz state
#
# $|\psi(p_1, p_2, \dots, p_k)\rangle$
#
# where $p_i$ are the parameters that produce this state (here we assume a pure state, but mixed states are of course possible).
#
# The variational algorithm then works by noting that one can obtain the value of the objective function for a given ansatz state by
#
# 1. Prepare the ansatz state.
# 2. Make a measurement which samples from some terms in H.
# 3. Goto 1.
#
# Note that one cannot always measure $H$ directly (without the use of quantum phase estimation). So one often relies on the linearity of expectation values to measure parts of $H$ in step 2. One always needs to repeat the measurements to obtain an estimate of the expectation value. How many measurements are needed to achieve a given accuracy is beyond the scope of this tutorial, but Cirq can help investigate this question.
#
# The above shows that one can use a quantum computer to obtain estimates of the objective function for the ansatz. This can then be used in an outer loop to try to obtain parameters for the lowest value of the objective function. For these values, one can then use that best ansatz to produce samples of solutions to the problem, which obtain a hopefully good approximation for the lowest possible value of the objective function.
#
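The classical side of the problem can be made concrete with a small evaluator for $E(s_1,\dots,s_n)$ on a square grid. This is a sketch (the `energy` helper and the toy 2x2 instance are illustrative, not part of the tutorial), using the row-link/column-link representation that the tutorial introduces below:

```python
import numpy as np

def energy(s, h, jr, jc):
    """E(s) = sum_<i,j> J_ij s_i s_j + sum_i h_i s_i on a square grid.

    s, h: n x n arrays of +/-1; jr: (n-1) x n links between adjacent rows;
    jc: n x (n-1) links within each row.
    """
    s = np.asarray(s)
    e = np.sum(np.asarray(h) * s)
    e += np.sum(np.asarray(jr) * s[:-1, :] * s[1:, :])   # vertical neighbours
    e += np.sum(np.asarray(jc) * s[:, :-1] * s[:, 1:])   # horizontal neighbours
    return e

# Toy 2x2 instance
s = [[+1, -1], [+1, +1]]
h = [[+1, +1], [-1, +1]]
jr = [[+1, -1]]          # links between the two rows
jc = [[-1], [+1]]        # links within each row
print(energy(s, h, jr, jc))
```

Minimizing this function over the $2^{n^2}$ spin assignments is the classical optimization task that the variational quantum algorithm targets.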
# + [markdown] id="cfsYdZw6_5pU"
# ## Create a circuit on a Grid
#
# To build the above variational quantum algorithm using Cirq, one begins by building the appropriate circuit. Because the problem we have defined has a natural structure on a grid, we will use Cirq’s built-in `GridQubits` as our qubits. We will demonstrate some of how this works in an interactive Python environment; the following code can be run in series in a Python environment where you have Cirq installed. For more about circuits and how to create them, see the [Tutorial](basics.ipynb) or the [Circuits](../circuits.ipynb) page.
# + id="5TV_rMxX_5pW"
import cirq
# define the length of the grid.
length = 3
# define qubits on the grid.
qubits = cirq.GridQubit.square(length)
print(qubits)
# + [markdown] id="D3obTsFs_5pa"
# Here we see that we've created a bunch of `GridQubits`, which have a row and column, indicating their position on a grid.
#
# Now that we have some qubits, let us construct a `Circuit` on these qubits. For example, suppose we want to apply the Hadamard gate `H` to every qubit whose row index plus column index is even, and an `X` gate to every qubit whose row index plus column index is odd. To do this, we write:
# + id="dy3VFNMx_5pa"
circuit = cirq.Circuit()
circuit.append(cirq.H(q) for q in qubits if (q.row + q.col) % 2 == 0)
circuit.append(cirq.X(q) for q in qubits if (q.row + q.col) % 2 == 1)
print(circuit)
# + [markdown] id="_iFQ7Zwu_5pi"
# ## Creating the Ansatz
#
# One convenient pattern is to use a python [Generator](https://wiki.python.org/moin/Generators) for defining sub-circuits or layers in our algorithm. We will define a function that takes in the relevant parameters and then yields the operations for the sub-circuit, and then this can be appended to the `Circuit`:
# + id="rLayzy4P_5pj"
def rot_x_layer(length, half_turns):
"""Yields X rotations by half_turns on a square grid of given length."""
# Define the gate once and then re-use it for each Operation.
rot = cirq.XPowGate(exponent=half_turns)
# Create an X rotation Operation for each qubit in the grid.
for i in range(length):
for j in range(length):
yield rot(cirq.GridQubit(i, j))
# Create the circuit using the rot_x_layer generator
circuit = cirq.Circuit()
circuit.append(rot_x_layer(2, 0.1))
print(circuit)
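The layer above uses `XPowGate(exponent=0.1)`, i.e. $X^{0.1}$. As a sanity check (a sketch; only numpy is assumed), $X^t$ computed by eigendecomposition matches $e^{i\pi t/2}\left[\cos(\pi t/2)\, I - i \sin(\pi t/2)\, X\right]$, the half-turns rotation described next:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.1  # half turns, as in rot_x_layer(2, 0.1)

# X^t via eigendecomposition: eigenvalue +1 stays 1, eigenvalue -1 becomes exp(i*pi*t)
vals, vecs = np.linalg.eigh(X)
x_pow_t = vecs @ np.diag(np.exp(1j * np.pi * t * (vals < 0))) @ vecs.conj().T

# up to the global phase exp(i*pi*t/2), this is cos(pi*t/2) I - i sin(pi*t/2) X
expected = np.exp(1j * np.pi * t / 2) * (
    np.cos(np.pi * t / 2) * np.eye(2) - 1j * np.sin(np.pi * t / 2) * X
)
assert np.allclose(x_pow_t, expected)
```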
# + [markdown] id="DrO5W2ie_5pl"
# Another important concept here is that the rotation gate is specified in *half turns* ($ht$). For a rotation about `X`, the gate is, up to a global phase:
#
# $\cos(ht \cdot \pi / 2)\, I - i \sin(ht \cdot \pi / 2)\, X$
#
# There is a lot of freedom defining a variational ansatz. Here we will do a variation on a [QAOA strategy](https://arxiv.org/abs/1411.4028) and define an ansatz related to the problem we are trying to solve.
#
# First, we need to choose how the instances of the problem are represented. These are the values $J$ and $h$ in the Hamiltonian definition. We represent them as two-dimensional arrays (lists of lists). For $J$ we use two such lists, one for the row links and one for the column links.
#
# Here is a snippet that we can use to generate random problem instances:
# + id="6E5BjSxF_5pm"
import random
def rand2d(rows, cols):
return [[random.choice([+1, -1]) for _ in range(cols)] for _ in range(rows)]
def random_instance(length):
# transverse field terms
h = rand2d(length, length)
# links within a row
jr = rand2d(length - 1, length)
# links within a column
jc = rand2d(length, length - 1)
return (h, jr, jc)
h, jr, jc = random_instance(3)
print('transverse fields: {}'.format(h))
print('row j fields: {}'.format(jr))
print('column j fields: {}'.format(jc))
# + [markdown] id="zsq_177Q_5po"
# In the code above, the actual values will be different for each individual run because they are generated with `random.choice`.
#
# Given this definition of the problem instance, we can now introduce our ansatz. It will consist of one step of a circuit made up of:
#
# 1. Apply an `XPowGate` for the same parameter for all qubits. This is the method we have written above.
# 2. Apply a `ZPowGate` for the same parameter for all qubits where the transverse field term $h$ is $+1$.
# + id="XtYIZSef_5po"
def rot_z_layer(h, half_turns):
"""Yields Z rotations by half_turns conditioned on the field h."""
gate = cirq.ZPowGate(exponent=half_turns)
for i, h_row in enumerate(h):
for j, h_ij in enumerate(h_row):
if h_ij == 1:
yield gate(cirq.GridQubit(i, j))
# + [markdown] id="iSizAkjE_5pq"
# 3. Apply a `CZPowGate` for the same parameter between all qubits where the coupling field term $J$ is $+1$. If the field is $-1$, apply `CZPowGate` conjugated by $X$ gates on all qubits.
# + id="jo9pqBlJ_5pq"
def rot_11_layer(jr, jc, half_turns):
"""Yields rotations about |11> conditioned on the jr and jc fields."""
cz_gate = cirq.CZPowGate(exponent=half_turns)
for i, jr_row in enumerate(jr):
for j, jr_ij in enumerate(jr_row):
q = cirq.GridQubit(i, j)
q_1 = cirq.GridQubit(i + 1, j)
if jr_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
yield cz_gate(q, q_1)
if jr_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
for i, jc_row in enumerate(jc):
for j, jc_ij in enumerate(jc_row):
q = cirq.GridQubit(i, j)
q_1 = cirq.GridQubit(i, j + 1)
if jc_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
yield cz_gate(q, q_1)
if jc_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
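The `X`-conjugation used in `rot_11_layer` moves the controlled phase from $|11\rangle$ to $|00\rangle$. A minimal numpy check (a sketch, assuming only numpy; it relies on `CZPowGate(exponent=t)` acting as $\mathrm{diag}(1,1,1,e^{i\pi t})$):

```python
import numpy as np

t = 0.3  # half turns
X = np.array([[0, 1], [1, 0]])

cz_t = np.diag([1, 1, 1, np.exp(1j * np.pi * t)])  # phase on |11>
xx = np.kron(X, X)                                  # X on both qubits
conjugated = xx @ cz_t @ xx

# the phase now sits on |00> instead of |11>
assert np.allclose(conjugated, np.diag([np.exp(1j * np.pi * t), 1, 1, 1]))
```

Conjugating by `X` on both qubits permutes the computational basis ($|00\rangle \leftrightarrow |11\rangle$, $|01\rangle \leftrightarrow |10\rangle$), which is exactly what the extra `cirq.X` operations around `cz_gate` accomplish when a coupling is $-1$.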
# + [markdown] id="DI7wQure_5ps"
# Putting this all together, we can create a step that uses just three parameters. Below is the code, which uses a generator for each of the layers (note to advanced Python users: this code does not contain a bug in its use of `yield` thanks to the automatic flattening of the `OP_TREE` concept. Typically, one would want to use `yield from` here, but it is not necessary):
# + id="M6Z2hxsG_5pt"
def one_step(h, jr, jc, x_half_turns, h_half_turns, j_half_turns):
length = len(h)
yield rot_x_layer(length, x_half_turns)
yield rot_z_layer(h, h_half_turns)
yield rot_11_layer(jr, jc, j_half_turns)
h, jr, jc = random_instance(3)
circuit = cirq.Circuit()
circuit.append(one_step(h, jr, jc, 0.1, 0.2, 0.3),
strategy=cirq.InsertStrategy.EARLIEST)
print(circuit)
# + [markdown] id="y5E_9AYw_5pv"
# Here we see that we have chosen particular parameter values $(0.1, 0.2, 0.3)$.
# + [markdown] id="zAwTXwc7_5pv"
# ## Simulation
#
# In Cirq, the simulators make a distinction between a *run* and a *simulation*. A *run* only allows for a simulation that mimics the actual quantum hardware. For example, it does not allow for access to the amplitudes of the wave function of the system, since that is not experimentally accessible. *Simulate* commands, however, are broader and allow different forms of simulation. When prototyping small circuits, it is useful to execute *simulate* methods, but one should be wary of relying on them when running against actual hardware.
#
# Currently, Cirq ships with a simulator tied strongly to the gate set of the **Google xmon architecture**. However, for convenience, the simulator attempts to automatically convert unknown operations into `XmonGates` (as long as the operation specifies a matrix or a decomposition into `XmonGates`). This, in principle, allows us to simulate any circuit that has gates that implement one and two qubit `KnownMatrix` gates. Future releases of Cirq will expand these simulators.
#
# Because that simulator is tied to the **xmon gate set**, it lives, in contrast to core Cirq, in the `cirq_google` module. For this tutorial, however, the general-purpose `cirq.Simulator` suffices: we create a simulator and pass it the circuit.
# + id="PXpn3xvT_5pv"
simulator = cirq.Simulator()
# Define the qubits we measure: the 3x3 grid used by random_instance(3).
qubits = [cirq.GridQubit(i, j) for i in range(3) for j in range(3)]
circuit = cirq.Circuit()
circuit.append(one_step(h, jr, jc, 0.1, 0.2, 0.3))
circuit.append(cirq.measure(*qubits, key='x'))
results = simulator.run(circuit, repetitions=100)
print(results.histogram(key='x'))
# + [markdown] id="DEIjXRgt_5px"
# Note that we have run the simulation 100 times and produced a histogram of the counts of the measurement results. What are the keys in the histogram counter? The order in which the qubits were passed to `measure` is used to translate the measurement results into a register value using a [big-endian representation](https://en.wikipedia.org/wiki/Endianness).
#
# For our optimization problem, we want to calculate the value of the objective function for a given run. One way to do this is to use the raw measurement data from the result of `simulator.run`. Another is to provide the histogram with a method that calculates the objective: its value is then used as the key for the returned `Counter`.
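# To make the big-endian convention concrete, here is a minimal sketch (independent of Cirq) of how a list of measured bits, ordered as the qubits were passed to `measure`, maps to the integer keys seen in the histogram:

```python
def bits_to_key(bits):
    """Interpret measurement bits big-endian: first qubit -> most significant bit."""
    key = 0
    for b in bits:
        key = (key << 1) | b
    return key

print(bits_to_key([1, 0, 1]))  # 0b101 == 5
```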
# + id="Loy-K3YY_5py"
import numpy as np
def energy_func(length, h, jr, jc):
def energy(measurements):
# Reshape measurement into array that matches grid shape.
meas_list_of_lists = [measurements[i * length:(i + 1) * length]
for i in range(length)]
# Convert true/false to +1/-1.
pm_meas = 1 - 2 * np.array(meas_list_of_lists).astype(np.int32)
tot_energy = np.sum(pm_meas * h)
for i, jr_row in enumerate(jr):
for j, jr_ij in enumerate(jr_row):
tot_energy += jr_ij * pm_meas[i, j] * pm_meas[i + 1, j]
for i, jc_row in enumerate(jc):
for j, jc_ij in enumerate(jc_row):
tot_energy += jc_ij * pm_meas[i, j] * pm_meas[i, j + 1]
return tot_energy
return energy
print(results.histogram(key='x', fold_func=energy_func(3, h, jr, jc)))
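# As a cross-check (a sketch, not code from the original notebook): the nested loops in `energy` are equivalent to a vectorized NumPy computation over a ±1 spin grid. Here it is evaluated on a hand-picked example instance rather than the random one above:

```python
import numpy as np

def ising_energy(spins, h, jr, jc):
    # spins: (L, L) array of +/-1 values decoded from the measurements
    energy = np.sum(spins * h)                           # field term
    energy += np.sum(jr * spins[:-1, :] * spins[1:, :])  # row couplings (shape (L-1, L))
    energy += np.sum(jc * spins[:, :-1] * spins[:, 1:])  # column couplings (shape (L, L-1))
    return energy

spins = np.array([[1, -1, 1], [1, 1, -1], [-1, 1, 1]])
h_ex = np.ones((3, 3))
jr_ex = np.ones((2, 3))
jc_ex = np.ones((3, 2))
print(ising_energy(spins, h_ex, jr_ex, jc_ex))  # -1
```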
# + [markdown] id="X_kzMzz1_5pz"
# One can then calculate the expectation value over all repetitions:
# + id="GH8_ww5a_5p0"
def obj_func(result):
energy_hist = result.histogram(key='x', fold_func=energy_func(3, h, jr, jc))
return np.sum([k * v for k,v in energy_hist.items()]) / result.repetitions
print('Value of the objective function {}'.format(obj_func(results)))
# + [markdown] id="9vMxRCBD_5p2"
# ### Parameterizing the Ansatz
#
# Now that we have constructed a variational ansatz and shown how to simulate it using Cirq, we can think about optimizing the value.
#
# On quantum hardware, one would most likely want to have the optimization code as close to the hardware as possible. As the classical hardware that is allowed to inter-operate with the quantum hardware becomes better specified, this interface will be better defined. Even without that specification, however, Cirq provides a useful concept for optimizing the looping in many optimization algorithms: many of the values in the gate sets can, instead of being specified by a float, be specified by a `Symbol`, and this `Symbol` can be substituted with a value supplied at execution time.
#
# Luckily for us, we have written our code so that using parameterized values is as simple as passing `Symbol` objects where we previously passed float values.
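# The substitution mechanism itself is plain SymPy. A minimal sketch (conceptually similar to what a `ParamResolver` does, not the Cirq internals) of resolving a symbolic exponent at execution time:

```python
import sympy

alpha = sympy.Symbol('alpha')
exponent = 2 * alpha            # e.g. a gate exponent expressed in terms of alpha
resolved = exponent.subs({'alpha': 0.1})
print(float(resolved))  # 0.2
```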
# + id="D49TnPrt_5p2"
import sympy
circuit = cirq.Circuit()
alpha = sympy.Symbol('alpha')
beta = sympy.Symbol('beta')
gamma = sympy.Symbol('gamma')
circuit.append(one_step(h, jr, jc, alpha, beta, gamma))
circuit.append(cirq.measure(*qubits, key='x'))
print(circuit)
# + [markdown] id="StwKTU7R_5p5"
# Note now that the circuit's gates are parameterized.
#
# Parameters are specified at runtime using a `ParamResolver`, which is just a dictionary from `Symbol` keys to runtime values.
#
# For instance:
# + id="XOmpCqRq_5p5"
resolver = cirq.ParamResolver({'alpha': 0.1, 'beta': 0.3, 'gamma': 0.7})
resolved_circuit = cirq.resolve_parameters(circuit, resolver)
# + [markdown] id="DEKrxQrL_5p7"
# resolves the parameters to actual values in the circuit.
#
# Cirq also has the concept of a *sweep*. A sweep is a collection of parameter resolvers. This runtime information is very useful when one wants to run many circuits for many different parameter values. Sweeps can be created by specifying values directly (this is one way to get classical information into a circuit) or via a variety of helper methods. For example, suppose we want to evaluate our circuit over an equally spaced grid of parameter values. We can easily create this using `cirq.Linspace`.
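# Conceptually, the `*` product of three `Linspace` sweeps is just the cartesian product of the individual value lists (a sketch using plain `itertools`, not the Cirq API):

```python
import itertools
import numpy as np

alphas = np.linspace(0.1, 0.9, 5)
betas = np.linspace(0.1, 0.9, 5)
gammas = np.linspace(0.1, 0.9, 5)

# Every combination of the three parameters, i.e. 5 * 5 * 5 settings.
grid = list(itertools.product(alphas, betas, gammas))
print(len(grid))  # 125
```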
# + id="z43HpXbX_5p7"
sweep = (cirq.Linspace(key='alpha', start=0.1, stop=0.9, length=5)
* cirq.Linspace(key='beta', start=0.1, stop=0.9, length=5)
* cirq.Linspace(key='gamma', start=0.1, stop=0.9, length=5))
results = simulator.run_sweep(circuit, params=sweep, repetitions=100)
for result in results:
print(result.params.param_dict, obj_func(result))
# + [markdown] id="10JkH8Ka_5p9"
# ### Finding the Minimum
#
# Now that we have all the code, we can do a simple grid search over parameter values to find a minimum. Grid search is not the best optimization algorithm; it is used here purely for illustration.
# + id="oFtTLBDq_5p-"
sweep_size = 10
sweep = (cirq.Linspace(key='alpha', start=0.0, stop=1.0, length=sweep_size)
* cirq.Linspace(key='beta', start=0.0, stop=1.0, length=sweep_size)
* cirq.Linspace(key='gamma', start=0.0, stop=1.0, length=sweep_size))
results = simulator.run_sweep(circuit, params=sweep, repetitions=100)
min_value = None
min_params = None
for result in results:
    value = obj_func(result)
    if min_value is None or value < min_value:
        min_value = value
        min_params = result.params
print('Minimum objective value is {}.'.format(min_value))
# + [markdown] id="Rjg59AG5_5p_"
# We've created a simple variational quantum algorithm using Cirq. Where to go next? Perhaps you can play around with the above code and work on analyzing the algorithm's performance. Add new parameterized circuits and build an end-to-end program for analyzing these circuits.
| docs/tutorials/variational_algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import lipd
import pandas as pd
import os
# +
input_path = 'D:\\annotating_paleoclimate_data\\recommender\\download_files\\linked_earth_wiki_files_new'
# f = open('file-name.txt')
tup = list(os.walk(input_path))
root, dir_names, file_names = tup[0][0], tup[0][1], tup[0][2]
table = pd.DataFrame(columns = ['filename','archiveType', 'inferredVariableType','units','proxyObservationType','interpretation/variable','interpretation/variableDetail'])
# -
len(file_names)
for line in file_names:
print(line)
if '.lpd' in line:
filen = line.strip()
line = os.path.join(root, line.strip())
d= lipd.readLipd(line)
if 'paleoData' not in d:
continue
path = d['paleoData']['paleo0']['measurementTable']['paleo0measurement0']['columns']
# print(path)
archive = d['archiveType']
print('archive: ', archive)
for key in path.keys() :
unit = 'NA'
inferredVType = 'NA'
proxyOType = 'NA'
intVariable = 'NA'
intVarDet = 'NA'
if 'inferredVariableType' in path[key].keys() :
#print('---inferredVariableType---')
#print('variableName: ', key)
if 'units' in path[key].keys() :
unit = path[key]['units']
#print('units: ', unit)
#print('inferredVariableType: ', path[key]['inferredVariableType'])
inferredVType = path[key]['inferredVariableType']
if 'proxyObservationType' in path[key].keys() :
#print('---proxyObservationType')
#print('variableName: ', key)
#print('units: ', path[key]['units'])
if 'units' in path[key].keys() :
unit = path[key]['units']
#print('proxyObservationType: ',path[key]['proxyObservationType'])
proxyOType = path[key]['proxyObservationType']
if 'interpretation' in path[key].keys() :
#print('---interpretation---')
for inter in path[key]['interpretation']:
#print('variable: ', inter['variable'])
if type(inter) is not str :
if 'variable' in inter.keys() :
intVariable = inter['variable']
#print('variableDetail: ', inter['variableDetail'])
if 'variableDetail' in inter.keys() :
intVarDet = inter['variableDetail']
if unit != 'NA' or inferredVType != 'NA' or proxyOType != 'NA' or intVariable != 'NA' or intVarDet != 'NA' :
df = pd.DataFrame({'filename':[filen], 'archiveType': [archive],'units':[unit],'inferredVariableType': [inferredVType],'proxyObservationType':[proxyOType],'interpretation/variable':[intVariable],'interpretation/variableDetail':[intVarDet]})
#print(df)
                table = pd.concat([table, df], ignore_index=True)
table_com=table.explode('proxyObservationType').explode('units').explode('inferredVariableType').explode('interpretation/variable').explode('interpretation/variableDetail').reset_index()
#print(table_com)
table_com = table_com.drop(columns = ['index'])
table_com.to_csv('D:\\annotating_paleoclimate_data\\recommender\\download_files\\raw-data-table.csv', sep = ',', encoding = 'utf-8',index = False)
| background_material/download_files/table_com_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Predictive maintenance for turbofan engine example
#
# ## Part 2: Linear Regression
#
#
# Based on open dataset provided by NASA at:
# https://data.nasa.gov/widgets/vrks-gjie
#
# dataset can be downloaded at: http://ti.arc.nasa.gov/c/6/
# +
import os, time
import datetime
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
# Load the TensorBoard notebook extension (optional, can be started from the command line)
# #%load_ext tensorboard
# Select a plotting style
#plt.style.use('dark_background')
plt.style.use('seaborn')
#plt.style.available
SCALE = 1
SEED = 1
EPOCHS = 20
# -
# ### Data Preparation
# +
# Load the data
data_root = 'data/'
original_dir = data_root + 'original/'
dataset_dir = data_root + 'dataset/'
train_data = pd.read_csv(dataset_dir + 'train_data.csv')
test_data = pd.read_csv(dataset_dir + 'test_data.csv')
train_data
# -
# ### Quick EDA
# +
# Plot the lifecycles
one_engine = []
for i, r in train_data.iterrows():
rul = r['RUL']
one_engine.append(rul)
if rul == 0:
plt.plot(one_engine)
one_engine = []
#plt.grid()
plt.xlabel('Cycles')
plt.ylabel('RUL')
# -
# ## Machine Learning Application
#
# We will split the data into 4 parts: x_train, y_train, x_test, y_test.
#
# (actually the dataset is already split)
# - x is for the sensor data
# - y is for the known Remaining Useful Life
# - train is for data we will use to train the model (we will use the known RUL in the training)
# - test is for data validation... we will apply predictions and compute the model's performance metrics using the known RUL
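# A toy illustration of the normalization step used below: the mean and standard deviation are computed on the training set only and then applied to the test set, so no information leaks from test data into preprocessing (hypothetical sensor values, not the NASA data):

```python
import pandas as pd

x_train = pd.DataFrame({'s1': [1.0, 2.0, 3.0], 's2': [10.0, 20.0, 30.0]})
x_test = pd.DataFrame({'s1': [2.0], 's2': [25.0]})

mean, std = x_train.mean(), x_train.std()   # train-set statistics only
x_test_norm = (x_test - mean) / std
print(x_test_norm)
```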
# +
# Shuffle train data frame and apply scaling factor
train_data = train_data.sample(frac=SCALE, random_state=SEED).reset_index(drop=True)
# prepare a x frame with useful data and a y frame with RUL value
x_train = train_data.drop(columns=['Unit', 'Cycle', 'RUL'])
y_train = train_data['RUL']
x_test = test_data.drop(columns=['Cycle', 'RUL'])
y_test = test_data['RUL']
# +
# data normalization
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
x_train = x_train.dropna(axis=1, how='any')
x_test = x_test.dropna(axis=1, how='any')
#x_test = np.asarray(x_test).astype('float32')
# what's the shape now we dropped some columns? create a variable to use in
# get_model_v1 function call
(lines,shape) = x_train.shape
# +
# Build a ML model
def get_model_v1(shape):
model = keras.models.Sequential()
model.add(keras.layers.Input(shape, name='input_layer'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n1'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n2'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n3'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n4'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n5'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n6'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n7'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n8'))
model.add(keras.layers.Dense(128, activation='relu', name='dense_n9'))
model.add(keras.layers.Dense(1, name='output'))
model.compile(optimizer = 'adam',
loss = 'mse',
metrics = ['mae', 'mse'],
)
return model
# Instantiate the model
model = get_model_v1((shape,))
model.summary()
# +
# Train the model
# Configure callback for visualization of the training data in TensorBoard
if not os.path.exists('logs/'):
os.mkdir('logs')
log_dir = 'logs/fit/' + f'S{SCALE}_E{EPOCHS}_' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
# #%tensorboard --logdir ./logs
start_time = time.perf_counter()
history = model.fit(x_train,
y_train,
epochs = EPOCHS,
batch_size = 20,
verbose = 1,
validation_data = (x_test, y_test),
callbacks = [tensorboard_callback],)
end_time = time.perf_counter()
print(f"\n\nTraining time: {end_time-start_time}")
# -
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=1)
# ## Training History
df = pd.DataFrame(data=history.history)
display(df)
print("min(val_mae) : {:.4f}".format(min(history.history['val_mae'])))
# +
def plot_history(history, figsize=(8,6),
plot={"Accuracy":['accuracy','val_accuracy'], 'Loss':['loss', 'val_loss']},
save_as='auto'):
"""
Show history
args:
history: history
figsize: fig size
plot: list of data to plot : {<title>:[<metrics>,...], ...}
"""
fig_id=0
for title,curves in plot.items():
plt.figure(figsize=figsize)
plt.title(title)
plt.ylabel(title)
plt.xlabel('Epoch')
for c in curves:
plt.plot(history.history[c])
plt.legend(curves, loc='upper left')
plt.show()
plot_history(history, plot={'MSE' :['mse', 'val_mse'],
'MAE' :['mae', 'val_mae'],
'LOSS':['loss','val_loss']}, save_as='01-history')
# -
# ## Make a prediction
# +
# Make a prediction
selection = 56
engine = x_test.iloc[selection]
engine_rul = y_test.iat[selection]
print('Data (denormalized):\n\n', engine.dropna(axis=0, how='any') * std + mean, '\n\n')
print('RUL = ', engine_rul)
engine = np.array(engine).reshape(1, shape)
print('\n\n---\n\n')
predictions = model.predict(engine)
print('Prediction : {:.0f} Cycles'.format(predictions[0][0]))
print('Real RUL : {:.0f} Cycles'.format(engine_rul))
# +
# TODO confusion matrix
predictions = []
for i in range(len(x_test)):
engine = x_test.iloc[i]
engine = np.array(engine).reshape(1, shape)
prediction = model.predict(engine)
predictions.append(prediction[0][0])
# +
plt.figure(figsize=(12,12))
plt.scatter(predictions, y_test);
# Add a line
x = [0, 150]
y = x
plt.plot(x,y, color='lightgreen');
# Layout
plt.xlabel('Predictions');
plt.ylabel('Reality');
# +
# Obviously the ML algo doesn't do much... but this was for benchmarking the DOKS infrastructures anyway :)
| notebooks/1_ML.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Image-Based Flower Classification
# ### <NAME>
#
# <EMAIL>
#
# ---
# ## Relevant Libraries
# +
# Relevant libraries
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import LinearSVC
import matplotlib.pyplot as plt
import tensorflow as tf
import seaborn as sns
import numpy as np
import tarfile
import urllib
import cv2
import os
import re
# %matplotlib inline
#
# -
# ## Data Retrieval
# +
# Data retrieval (Desktop will be used as workspace.)
def get_data():
url = "http://www.robots.ox.ac.uk/~vgg/data/flowers/17/17flowers.tgz"
print ("\ndownloading flower images...")
filename, headers = urllib.request.urlretrieve(
url,filename=os.path.expanduser("~/Desktop/17flowers.tgz"))
print ("download complete!")
os.chdir(os.path.expanduser("~/Desktop/"))
print ("extracting flower images...")
tar = tarfile.open(os.path.expanduser("~/Desktop/17flowers.tgz"), "r:gz")
tar.extractall()
tar.close()
print ("extract complete!")
print ("downloading tensorflow component...")
urllib.request.urlretrieve("https://raw.githubusercontent.com/tensorflow/models/master/tutorials/image/imagenet/classify_image.py",
filename=os.path.expanduser("~/Desktop/classify_image.py"))
print ("download complete!")
os.chdir(os.path.expanduser("~/Desktop/"))
print ("generating graph...")
os.system("python classify_image.py --model_dir ~/Desktop/graph/")
print ("graph complete!\n")
get_data()
#
# -
# ## Data Prep
# +
# Classes
classes = ['Daffodil','Snowdrop', 'Lily Valley', 'Bluebell',
'Crocus', 'Iris', 'Tigerlily', 'Tulip',
'Fritillary', 'Sunflower', 'Daisy', 'Colts Foot',
           'Dandelion', 'Cowslip', 'Buttercup', 'Windflower',
'Pansy']
y = np.repeat(classes, 80)
#
# Image files
images = []
loc = os.path.expanduser("~/Desktop/jpg")
for filename in sorted(os.listdir(loc)):
img = cv2.imread(os.path.join(loc,filename))
if img is not None:
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
images.append(img)
images = np.asarray(images)
#
# -
# ## Tensor Prep
# +
# Tensor prep
path = os.path.expanduser("~/Desktop/jpg")
files = sorted(os.listdir(path))
i = 1
for file in files: # Rename image files
if re.search("jpg", file):
os.rename(os.path.join(path, file), os.path.join(path, y[i-1]+'_'+str(i)+'.jpg'))
i = i+1
#
# Tensorflow
model_dir = os.path.expanduser("~/Desktop/graph/")
os.chdir(os.path.expanduser("~/Desktop/"))
images_dir = "jpg/"
list_images = [images_dir+f for f in os.listdir(images_dir) if re.search("jpg", f)] # List of new image filenames
tmp = []
for i in range(len(list_images)): # Retrieve numbers from filenames
t = re.findall(r"\d+", list_images[i])
tmp.append(int(t[0]))
list_images = [x for (y,x) in sorted(zip(tmp,list_images))] # Order image files by number
def create_graph(): # Create instance of the trained model
with tf.gfile.FastGFile(os.path.join(model_dir, "classify_image_graph_def.pb"), "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
def extract_features(list_images):
nb_features = 2048
features = np.empty((len(list_images), nb_features))
labels = []
create_graph() # Initiate instance of the trained model
sess = tf.Session() # Open session
penultimate_tensor = sess.graph.get_tensor_by_name('pool_3:0') # Retrieve penultimate layer
for i in range(len(list_images)): # Feed image into layer and retrieve features and label
if (i%100 == 0):
print("Processing %s..." % (list_images[i]))
preds = sess.run(penultimate_tensor,
{'DecodeJpeg:0': images[i]})
features[i,:] = np.squeeze(preds)
labels.append(re.split("_\d+",list_images[i].split("/")[1])[0])
sess.close() # Close session
return features, labels
features, labels = extract_features(list_images)
#
# -
# ## Model Selection
# +
#
model = LinearSVC(C=1, loss='squared_hinge', penalty='l2',multi_class='ovr')
#
# -
# ## Model Performance
# +
# Linear SVC performance/results
Xtrain, Xtest, ytrain, ytest = train_test_split(features, labels,
random_state = 7,
test_size = 0.3
)
model.fit(Xtrain, ytrain)
ypred = model.predict(Xtest)
print("\nLinear SVC Accuracy (Ten-Fold CV):", cross_val_score(model, features, labels, cv=10).mean(), "\n")
print("Linear SVC Accuracy (Holdout Set):", accuracy_score(ytest, ypred), "\n")
print("Linear SVC Classification Report:", "\n")
print(classification_report(ytest, model.predict(Xtest),
target_names = classes))
plt.figure(figsize=(8, 8))
mat = confusion_matrix(ytest, ypred)
ax = sns.heatmap(mat.T, square = True, annot = True, fmt='d', cbar=False,
xticklabels = classes, yticklabels= classes)
plt.xlabel('true label')
plt.ylabel('pred label')
plt.title('Linear SVC Heatmap')
plt.show();
#
| flowers17.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Some Classic Paleoclimate Figures
# ## LR04 Stack
# The LR04 benthic isotope stack (Lisiecki and Raymo, 2005) is one of the most iconic datasets in paleoclimate. It documents the long-term increase in $\delta$<sup>18</sup>O, which is a proxy for temperature and ice volume. It also highlights the change from domination by ~41 thousand year periodicity before 1.25 Ma to domination by ~100 thousand year periodicity since 700 ka.
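# A sketch (not part of the original notebook) of how one can recover the dominant periodicity of a record sampled every 1 kyr with a simple periodogram — the kind of check behind the ~41 kyr vs ~100 kyr statement above, shown here on a synthetic 41 kyr cycle:

```python
import numpy as np

t = np.arange(41 * 25)                 # 1025 kyr of synthetic record, 1 kyr steps
record = np.sin(2 * np.pi * t / 41.0)  # pure 41 kyr obliquity-like cycle

freqs = np.fft.rfftfreq(t.size, d=1.0)  # cycles per kyr
power = np.abs(np.fft.rfft(record)) ** 2
# Skip the zero-frequency bin when locating the dominant peak.
dominant_period = 1.0 / freqs[np.argmax(power[1:]) + 1]
print(round(dominant_period))  # 41
```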
#Need to import a lot of modules... this will take a while
# Large, general packages
import numpy as np
import pylab as plt
import pandas as pd
# Specific packages
import scipy.ndimage as ndimage
import scipy.signal as signal
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, mark_inset
import statsmodels.api as sm
import matplotlib.patches as patches
import matplotlib.ticker as ticker
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
#Import the LR04 stack data
lr2004=np.genfromtxt('/homes/dcw32/Obs/lr2004/stack.txt',skip_header=5)
print(lr2004.shape)
print(lr2004[-1, 0])
# Extract required data from the data array
ages=lr2004[:,0]/1000 # Convert to millions of years ago
d18O=lr2004[:,1] #d18O data
err=lr2004[:,2] #standard error
t1=1.25
t2=0.7
t3=0.43
t4=t3-0.10
t5=1.903
t6=t5-0.041
fig=plt.figure(figsize=(12,5))
#plt.plot(ages,d18O,c='k',linewidth=1,linestyle=':')
ax=fig.add_subplot(111)
d18O_fil=ndimage.filters.gaussian_filter1d(d18O, 10.0)
d18O_fil2=ndimage.filters.gaussian_filter1d(d18O, 50.0)
#d18O_fil=signal.savgol_filter(d18O,1251,3)
#plt.plot(ages,d18O_fil,c='k',linewidth=1,linestyle='--')
#plt.yticks(visible=False)
#plt.xticks(visible=False)
plt.plot(ages,d18O_fil2,c='k',linewidth=4)
plt.fill_between(ages[ages>=t1], (d18O-err)[ages>=t1], (d18O+err)[ages>=t1], color='#CE2029', alpha=0.7)
plt.fill_between(ages[np.logical_and(ages > t2, ages < t1)], (d18O-err)[np.logical_and(ages > t2, ages < t1)], (d18O+err)[np.logical_and(ages > t2, ages < t1)], color='#856088', alpha=0.9)
plt.fill_between(ages[ages<=t2], (d18O-err)[ages<=t2], (d18O+err)[ages<=t2], color='#191970', alpha=0.7)
#plt.errorbar((t1+t2)/2.,2.95,xerr=(t1-t2)/2.,color='k',linewidth=2,capthick=2,xuplims=True,xlolims=True)
#plt.errorbar((t1+t2)/2.,2.95,xuplims=t1,xlolims=t2,color='k')
plt.annotate(
'', xy=(t1+0.02, 2.9), xycoords='data',
xytext=(t2-0.02, 2.9), textcoords='data',
arrowprops=dict(arrowstyle='<->',facecolor='black',lw=2)
)
plt.errorbar((t3+t4)/2.,5.2,xerr=(t3-t4)/2.,color='k',linewidth=2,capthick=2)
plt.xlabel('Age / Ma',fontsize=14)
plt.ylabel(r'Benthic $\mathregular{\delta ^{18}O\>(\perthousand)}$',fontsize=14)
ax.annotate('Middle Pleistocene Transition',xy=((t1+t2)/2.,2.80),horizontalalignment='center',fontsize=14)
ax.annotate('100 kyr',xy=((t3+t4)/2.,5.4),horizontalalignment='center',fontsize=14)
#ax.annotate('LR04 stack (Lisiecki and Raymo, 2005)',xy=(1.0,1.02),horizontalalignment='right',xycoords='axes fraction',fontsize=14)
ax.tick_params(axis='both', which='major', labelsize=14)
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.2))
plt.xlim(0,5.32)
plt.ylim(2.5,5.5)
plt.gca().invert_yaxis()
axins = inset_axes(ax, 3.5,1.8 , loc=2,bbox_to_anchor=(0.57, 0.56),bbox_transform=ax.figure.transFigure) # no zoom
#axins = zoomed_inset_axes(ax, 2.5, loc=2)
axins.set_xlim(1.8,2.0)
axins.set_ylim(3.1,4.3)
axins.fill_between(ages[ages>=t1], (d18O-err)[ages>=t1], (d18O+err)[ages>=t1], color='#CE2029', alpha=0.7)
axins.errorbar((t5+t6)/2.,3.4,xerr=(t5-t6)/2.,color='k',linewidth=2,capthick=2)
axins.annotate('41 kyr',xy=((t5+t6)/2.,3.30),horizontalalignment='center',fontsize=14)
axins.xaxis.set_minor_locator(ticker.MultipleLocator(0.02))
plt.gca().invert_yaxis()
plt.yticks([3.2,3.7,4.2])
plt.xticks([1.8,1.9,2.0],['1.8','1.9','2.0'])
axins.yaxis.tick_right()
axins.yaxis.set_ticks_position('both')
new=mark_inset(ax, axins, loc1=1, loc2=3, fc="none", ec="0.5")
plt.savefig('/homes/dcw32/figures/lr2004.png',dpi=200,bbox_inches='tight')
plt.show()
# ###
# ## The Zachos Curve
#Read in the Zachos data--> Use pandas because it's more flexible to read in
#However, have to set specific widths for columns - would suggest reading the documentation for this!
za2001=pd.read_fwf(r'/homes/dcw32/Obs/zachos2001/zachos2001.txt',skiprows=88,colspecs=[(0,8),(9,22),(23,38),(39,50),(51,61),(62,72),(73,83)])
#SITE, AGE(Ma), Genus, d18O(adj), d13C, d18O(5pt running mean), d13C(5pt running mean)
# +
print(za2001)
# -
zatoplot = za2001.values[:14887, :]
def d18o2T(input):
T=16.5-4.3*input+0.14*input**2
return T
def color_y_axis(ax, color):
"""Color your axes."""
for t in ax.get_yticklabels():
t.set_color(color)
return None
bounds=[66.0,54.9,33.6,23.03,5.333,2.58,0.]
mids=np.zeros(len(bounds)-1)
for i in range(len(mids)):
mids[i]=(0.5*(bounds[i]+bounds[i+1]))/70.
labs=['Paleocene','Eocene','Oligocene','Miocene','Plio.','Plt.']
xlo=0.
xup=70.
ylo=-2.
yup=5.
const=0.06*(yup-ylo)
const2=0.03*(yup-ylo)
#Start plotting
#fig=plt.figure(figsize=(12,4))
#ax=fig.add_subplot(111)
#ax=host_subplot(111,axes_class=AA.Axes)
fig,ax=plt.subplots(figsize=(12,4))
#
ax2 = ax.twinx()
ax2.set_ylim(d18o2T(yup),d18o2T(ylo))
ax2.set_yticks([5.,10.,15.,20.])
color_y_axis(ax2, '#CE2029')
ax2.get_yaxis().set_tick_params(direction='out',width=2,length=6,colors='#CE2029')
t_ap=ax2.set_ylabel(r'Ice-Free Temperature / $\mathregular{^{o}}$C',fontdict={'color':'#CE2029'},labelpad=15)
t_ap.set_rotation(270.)
#
plt.sca(ax)
for i in range(len(bounds)-1):
plt.axvline(bounds[i],c='lightgray',linestyle='--',zorder=1)
vals=zatoplot[:,3]
vals=vals.astype(float)
tims=zatoplot[:,1]
print(vals.dtype)
#d18O_fil=ndimage.filters.gaussian_filter1d(vals, 500.0)
#d18O_fil=signal.savgol_filter(vals,151,3)
#lowess1 = sm.nonparametric.lowess(vals[tims<25.], tims[tims<25.], frac=0.1)
lowess1 = sm.nonparametric.lowess(vals, tims, frac=0.01,delta=0.6)
lowess2 = sm.nonparametric.lowess(vals[tims>25.], tims[tims>25.], frac=0.05)
#d18O_fil=lowess(zatoplot[:-1,1],zatoplot[:-1,3])
plt.scatter(zatoplot[2:-2,1],zatoplot[2:-2,5],marker='.',c='#856088',alpha=0.5,edgecolors='none',zorder=999)
plt.axis([xlo, xup, ylo, yup])
#plt.hexbin(zatoplot[2:-2,1], zatoplot[2:-2,5], cmap=plt.cm.get_cmap('viridis_r'), mincnt=1, gridsize=300,bins='log',extent=(0,65,-1,5))
#plt.hexbin(zatoplot[2:-2,1], zatoplot[2:-2,5], color='r', mincnt=1, gridsize=300,bins='log',extent=(0,65,-1,5))
#plt.plot(zatoplot[:-1,1],d18O_fil,c='k')
plt.plot(lowess1[:,0],lowess1[:,1],c='k',linewidth=3,zorder=1000)
#plt.plot(lowess2[:,0],lowess2[:,1],c='k',linewidth=2)
#plt.ylim(-1.,5.)
#plt.xlim(0.,67.5172)
ax.yaxis.set_minor_locator(ticker.MultipleLocator(0.2))
plt.yticks([-1,0,1,2,3,4])
plt.gca().invert_yaxis()
plt.xlabel('Age / Ma',fontsize=14)
plt.ylabel(r'Benthic $\mathregular{\delta ^{18}O\>(\perthousand)}$',fontsize=14)
#ax.annotate('Zachos et al, 2001',xy=(1.0,1.08),horizontalalignment='right',xycoords='axes fraction',fontsize=14)
ax.tick_params(axis='both', which='major', labelsize=14)
ax.xaxis.set_minor_locator(ticker.MultipleLocator(2.))
#
########################
#Arrows and Labels
########################
ax.annotate('K-Pg Exinction', xy=(bounds[0], 1.5), xytext=(bounds[0], 2.5), color='#191970',
arrowprops=dict(color='#191970',width=1,headwidth=5,headlength=5),
horizontalalignment='center', verticalalignment='top')
ax.annotate('Paleocene-Eocene\nThermal Maximum', xy=(bounds[1], -0.5), xytext=(bounds[1], -1.7),
arrowprops=dict(color='#191970',width=1,headwidth=5,headlength=5), color='#191970',
horizontalalignment='left', verticalalignment='top')
ax.annotate('Oi-1 Glaciation', xy=(bounds[2], 1.5), xytext=(bounds[2], 0.),
arrowprops=dict(color='#191970',width=1,headwidth=5,headlength=5), color='#191970',
horizontalalignment='center', verticalalignment='top')
ax.annotate('Mi-1 Glaciation', xy=(bounds[3], 1.5), xytext=(bounds[3], 0.),
arrowprops=dict(color='#191970',width=1,headwidth=5,headlength=5), color='#191970',
horizontalalignment='center', verticalalignment='top')
ax.add_patch(patches.Rectangle((50, -1.),3.5,0.02,color='#191970',lw=2))
ax.annotate('E. Eocene Climatic Optimum', xy=(53.5, -1.5),color='#191970',horizontalalignment='right', verticalalignment='top')
ax.add_patch(patches.Rectangle((14, 3.),1.5,0.02,color='#191970',lw=2))
ax.annotate('Mid-Miocene\nClimatic Optimum', xy=(14.75, 3.2),color='#191970',horizontalalignment='center', verticalalignment='top')
ax.add_patch(patches.Rectangle((24., 3.5),2.,0.02,color='#191970',lw=2))
ax.annotate('Late Oligocene\nWarming', xy=(25., 3.7),color='#191970',horizontalalignment='center', verticalalignment='top')
#Antarctic Glaciation
ax.add_patch(patches.Rectangle((0, -1.5),13,0.2,facecolor='k',clip_on=False,zorder=2))
ax.add_patch(patches.Rectangle((13, -1.5),13,0.2,hatch='////',facecolor='w',clip_on=False,zorder=2))
ax.add_patch(patches.Rectangle((26, -1.5),bounds[2]-26,0.2,facecolor='k',clip_on=False,zorder=2))
ax.add_patch(patches.Rectangle((bounds[2], -1.5),3.,0.2,hatch='////',facecolor='w',clip_on=False,zorder=2))
ax.annotate('Antarctic Glaciation', xy=(1.0, -1.51),color='k',horizontalalignment='left', verticalalignment='bottom')
#N Hemi
ax.add_patch(patches.Rectangle((0, -0.9),3.3,0.2,facecolor='k',clip_on=False,zorder=2))
ax.add_patch(patches.Rectangle((3.3, -0.9),8.,0.2,hatch='////',facecolor='w',clip_on=False,zorder=2))
ax.annotate('Northern Hemisphere Glaciation', xy=(1.0, -0.91),color='k',horizontalalignment='left', verticalalignment='bottom')
#Add the Epoch names
ax.add_patch(patches.Rectangle((xlo, ylo-const),xup-xlo,const,clip_on=False,fill=False))
#Legend
ax.add_patch(patches.Rectangle((7.5, -0.35),7.5,1,clip_on=False,fill=False))
ax.add_patch(patches.Rectangle((8, -0.2),1.,0.3,facecolor='k',clip_on=False,zorder=2))
ax.add_patch(patches.Rectangle((8, .25),1.,0.3,hatch='////',facecolor='w',clip_on=False,zorder=2))
ax.annotate('Full Scale', xy=(9.3, -0.2),color='k',horizontalalignment='left', verticalalignment='top',fontsize=10)
ax.annotate('Partial', xy=(9.3, .25),color='k',horizontalalignment='left', verticalalignment='top',fontsize=10)
for i in range(len(mids)):
ax.annotate(labs[i],xy=(mids[i],1.015),xycoords='axes fraction',horizontalalignment='center',fontsize=8)
ax.add_patch(patches.Rectangle((bounds[i], ylo-const),0.,const,clip_on=False,fill=False))
########################
#Now add the EOT inset
########################
axins = inset_axes(ax, 1.5,1.3 , loc=2,bbox_to_anchor=(0.63, 0.53),bbox_transform=ax.figure.transFigure) # no zoom
axins.set_xlim(32.8,34.3)
axins.set_ylim(1.4,3.2)
axins.scatter(zatoplot[2:-2,1], zatoplot[2:-2,5],marker='.',color='#856088',alpha=1.0)
plt.gca().invert_yaxis()
plt.xticks([33,34])
axins.yaxis.tick_right()
axins.yaxis.set_ticks_position('both')
new=mark_inset(ax, axins, loc1=1, loc2=3, fc="none", ec="0.5")
#
# Now add the dummy T axis
#
#par2.set_yticks([5.,10.,15.,20.])
#Save
#plt.gca().get_xticklabels().set_color('red')
plt.savefig('/homes/dcw32/figures/zachos.png',dpi=200,bbox_inches='tight')
plt.show()
fig=plt.figure(figsize=(12,5))
ax=fig.add_subplot(111)
plt.scatter(zatoplot[2:-2,1], zatoplot[2:-2,5],color='white',marker='.')
plt.xlim(0,70)
plt.gca().invert_yaxis()
plt.axis('off')
plt.savefig('/homes/dcw32/figures/cover.png',transparent=True)
plt.show()
fig=plt.figure(figsize=(12,4))
ax=fig.add_subplot(111)
#plt.plot(ages,d18O,c='k',linewidth=1)
plt.scatter(zatoplot[2:-2,1], zatoplot[2:-2,5],color='k')
#plt.xlim(0,70)
plt.gca().invert_yaxis()
plt.axis('off')
plt.savefig('/homes/dcw32/figures/cover2.png',transparent=True)
plt.show()
#Import the Petit d18O data
petit_d18o=np.genfromtxt('/homes/dcw32/Obs/petit1999/o18nat.txt',skip_header=155)
petit_co2=np.genfromtxt('/homes/dcw32/Obs/petit1999/co2nat.txt',skip_header=155)
petit_ch4=np.genfromtxt('/homes/dcw32/Obs/petit1999/ch4nat.txt',skip_header=86)
petit_dnat=np.genfromtxt('/homes/dcw32/Obs/petit1999/deutnat.txt',skip_header=111)
# def overlap_plot(xs, ys, overlap=0.1):  # stub was never implemented; only the commented sketch below remains
d18o_gage=petit_d18o[:,0]/1000.
d18o_vals=petit_d18o[:,1]
co2_gage=petit_co2[:,0]/1000.
co2_vals=petit_co2[:,1]
ch4_gage=petit_ch4[:,0]/1000.
ch4_vals=petit_ch4[:,1]
dnat_age=petit_dnat[:,1]/1000. #kyr
dnat_deu=petit_dnat[:,2]
dnat_ts=petit_dnat[:,3]
# +
#nvals=5
#overlap=0.1
#for i in range(nvals):
# minval=i-overlap
# maxval=i+overlap
#
fig=plt.figure()
gs1=gridspec.GridSpec(3,1)
gs1.update(hspace=-0.0)
ax0=plt.subplot(gs1[0])
ax0.spines['bottom'].set_visible(False)
ax0.xaxis.set_ticks_position('top')
ax0.xaxis.set_ticklabels([])
ax1=ax0.twinx()
#ax1=plt.subplot(gs1[1])
ax1.spines['top'].set_visible(False)
ax1.spines['bottom'].set_visible(False)
ax1.xaxis.set_ticks_position('none')
ax1.xaxis.set_ticklabels([])
plt.gca().invert_yaxis()
#ax2=plt.subplot(gs1[2])
ax2=plt.subplot(gs1[1])
ax2.spines['top'].set_visible(False)
ax2.spines['bottom'].set_visible(False)
ax2.xaxis.set_ticks_position('none')
ax2.xaxis.set_ticklabels([])
ax3=ax2.twinx()
#ax3=plt.subplot(gs1[3])
ax3.spines['top'].set_visible(False)
ax3.spines['bottom'].set_visible(False)
ax3.xaxis.set_ticks_position('none')
ax3.xaxis.set_ticklabels([])
#ax4=plt.subplot(gs1[4])
ax4=plt.subplot(gs1[2])
ax4.spines['top'].set_visible(False)
ax4.xaxis.set_ticks_position('bottom')
ax0.plot(dnat_age,dnat_deu,c='red',clip_on=False,zorder=2,alpha=0.75)
ax4.plot(dnat_age,dnat_ts,c='orange',clip_on=False,zorder=2,alpha=0.75)
ax2.plot(co2_gage,co2_vals,c='black',clip_on=False,zorder=2,alpha=0.75)
ax3.plot(ch4_gage,ch4_vals,c='purple',clip_on=False,zorder=2,alpha=0.75)
ax1.plot(d18o_gage,d18o_vals,c='pink',clip_on=False,zorder=2,alpha=0.75)
#ax1.set_ylim(-9,0)
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
def two_scales(ax1, time, data1, data2, c1, c2):
"""
Parameters
----------
ax : axis
Axis to put two scales on
time : array-like
x-axis values for both datasets
data1: array-like
Data for left hand scale
data2 : array-like
Data for right hand scale
c1 : color
Color for line 1
c2 : color
Color for line 2
Returns
-------
ax : axis
Original axis
ax2 : axis
New twin axis
"""
ax2 = ax1.twinx()
ax1.plot(time, data1, color=c1)
ax1.set_xlabel('time (s)')
ax1.set_ylabel('exp')
ax2.plot(time, data2, color=c2)
ax2.set_ylabel('sin')
return ax1, ax2
# Create some mock data
t = np.arange(0.01, 10.0, 0.01)
s1 = np.exp(t)
s2 = np.sin(2 * np.pi * t)
# Create axes
fig, ax = plt.subplots()
ax1, ax2 = two_scales(ax, t, s1, s2, 'r', 'b')
# Change color of each axis
def color_y_axis(ax, color):
"""Color your axes."""
for t in ax.get_yticklabels():
t.set_color(color)
return None
color_y_axis(ax1, 'r')
color_y_axis(ax2, 'b')
plt.show()
# -
| classic_paleo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mathematical Functions, Strings, and Objects
# ## This chapter introduces Python functions for common mathematical operations
# - A function is a group of statements that accomplishes a specific task -- think of it as one small unit of functionality. In practice, a function body is best kept to no more than one screen in length
# - Python's built-in functions can be used without an import
# <img src="../Photo/15.png"></img>
# ## Try out Python's built-in functions
# ## Python's math module provides many mathematical functions
# <img src="../Photo/16.png"></img>
# <img src="../Photo/17.png"></img>
#
num,num1,num2 = eval(input("Enter three numbers separated by commas: "))
choose_method = input('>>')
if choose_method=='max':
max_=max(num,num1,num2)
print(max_)
elif choose_method=='min':
min_=min(num,num1,num2)
print(min_)
elif choose_method == 'pow2_sum':
pow_=pow(num,2)+pow(num1,2)+pow(num2,2)
print(pow_)
import math
import matplotlib.pyplot as plt
list_ = []
for z in range(-50,50):
res=1./ (1. + math.exp(-z))
list_.append(res)
plt.plot(list_)
plt.show()
import random,math
sj=random.randint(-50,50)
res=1./(1. + math.exp(sj))
round(res)
# ## Two mathematical constants, pi and e, are available as math.pi and math.e
# ## EP:
# - Using the math library, write a program where the user enters three vertices (x, y) and the three angles of the triangle are returned
# - Note: Python trigonometric functions work in radians, so convert the results to degrees
# <img src="../Photo/18.png">
import math
x1,y1,x2,y2,x3,y3 = eval(input("Enter the coordinates of the three vertices: "))
a=math.sqrt((x2-x3)**2+(y2-y3)**2)
b=math.sqrt((x1-x3)**2+(y1-y3)**2)
c=math.sqrt((x2-x1)**2+(y1-y2)**2)
A=math.acos((a*a - b*b - c*c)/(-2 * b * c))
B=math.acos((b*b - a*a - c*c)/(-2 * a * c))
C=math.acos((c*c - b*b - a*a)/(-2 * b * a))
print(math.degrees(A),math.degrees(B),math.degrees(C))
# ## Strings and characters
# - In Python, strings must be enclosed in single or double quotes; triple quotes (""") can be used for strings spanning multiple lines
# - When a triple-quoted block is assigned to a variable it becomes a string; otherwise it acts as a multi-line comment
# ## ASCII and Unicode
# - <img src="../Photo/19.png"></img>
# - <img src="../Photo/20.png"></img>
# - <img src="../Photo/21.png"></img>
ord('经')  # returns the decimal (Unicode) code point
chr(32463)
# ## The functions ord and chr
# - ord returns the character's code value
# - chr returns the character for a code value
# ## EP:
# - Use ord and chr to implement simple email obfuscation
# +
email="<EMAIL>"
result=''
for i in email:
print(chr(ord(i) + 10),end='')
for j in email:
result = result + chr(ord(j)+10)
print(result)
# -
import hashlib
# The message to hash
str_ = 'this is a md5 test '
# Create an md5 object
h1 = hashlib.md5()
# Tips
# The string must be encoded here;
# calling h1.update(str_) directly raises: Unicode-objects must be encoded before hashing
h1.update(str_.encode(encoding='utf-8'))
print('Before MD5 hashing: ' + str_)
print('After MD5 hashing: ' + h1.hexdigest())
import hashlib
dataset= '49200529253ebe8126e9dc892d5e4945'
password = input('>>')
h1 = hashlib.md5()
h1.update(password.encode(encoding='utf-8'))
if dataset==h1.hexdigest():
    print('success')
else:
    print('failed')
# ## Escape sequences \
# - a = "He said, \"John's program is easy to read\""
# - Escaping removes a character's special meaning
# - Generally, escaping is only needed when a character clashes with the syntax's default meaning
# ## print, advanced usage
# - The end parameter controls how the printed line is terminated
# - By default, print ends with a newline
print('a','b',sep='',end='!')
# ## The str function
# - Casts a value to a string
# - Other types will be covered later (list, set, tuple...)
# ## String concatenation
# - Use "+" directly
# - Or the join() function
# ## EP:
# - Concatenate "Welcome" "to" "Python"
# - Concatenate the int 100 with "joker is a bad man"
# - Read a string from the console
# > Read a name and print a compliment for that person
# %time " ".join(["Welcome", "to", "Python"])
# %time str(100) + " joker is a bad man"
name = input(">>")
print(name + ", hello!")
# ## Case study: minimum number of coins
# - Write a program that asks the user for a total amount, a floating-point value in dollars and cents, and reports the number of dollars, quarters, dimes, nickels, and pennies
# <img src="../Photo/22.png"></img>
# - A weak spot of Python: floating-point handling is imprecise; for serious numeric work, NumPy types are used
# <img src="../Photo/23.png"></img>
# num = 1
# while num !=0 :
# num
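# The commented `while` skeleton above was left unfinished. A minimal sketch of the coin-change computation -- working in integer cents to sidestep the floating-point imprecision the notebook warns about; the function name and sample amount are illustrative:

```python
# Minimum-coin breakdown: convert the dollar amount to integer cents first to
# avoid floating-point rounding errors, then peel off each denomination in turn.
def coin_breakdown(amount):
    cents = int(round(amount * 100))
    dollars, cents = divmod(cents, 100)
    quarters, cents = divmod(cents, 25)
    dimes, cents = divmod(cents, 10)
    nickels, pennies = divmod(cents, 5)
    return dollars, quarters, dimes, nickels, pennies

print(coin_breakdown(11.56))  # (11, 2, 0, 1, 1)
```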
# ## id and type
# - id shows an object's memory address; useful in identity comparisons
# - type shows an element's type
a = 1
id(a)
type(a)
# ## See the book for other formatting statements
print('aaa{}'.format(100))
print('aaa%d' % 100)
# # Homework
# - 1
# <img src="../Photo/24.png"><img>
# <img src="../Photo/25.png"><img>
import math
r = eval(input("Enter the distance from the center of the pentagon to a vertex: "))
s=2*r*math.sin(math.pi/5)
area = 5*s*s/(4*math.tan(math.pi/5))
print(area)
# - 2
# <img src="../Photo/26.png"><img>
import math
x1,y1,x2,y2=eval(input("Enter the latitudes and longitudes of the two points: "))
radius=6371.01
d=radius*math.acos(math.sin(math.radians(x1))*math.sin(math.radians(x2))+math.cos(math.radians(x1))*math.cos(math.radians(x2))*math.cos(math.radians(y1)-math.radians(y2)))
print(d)
# - 3
# <img src="../Photo/27.png"><img>
import math
s = eval(input("Enter the side length of the pentagon: "))
area = (5*(s**2))/(4*math.tan(math.pi/5))
print(area)
# - 4
# <img src="../Photo/28.png"><img>
import math
n = eval(input("Enter the number of sides of the polygon: "))
s = eval(input("Enter the side length of the polygon: "))
area = n*s**2/(4*math.tan(math.pi/n))
print(area)
# - 5
# <img src="../Photo/29.png"><img>
# <img src="../Photo/30.png"><img>
num = eval(input("Enter an ASCII value between 0 and 127: "))
chr(num)
# - 6
# <img src="../Photo/31.png"><img>
name = input("Please enter employee's name:")
time = eval(input("Please enter number of hours worked in a week:"))
money = eval(input("Please enter hourly pay rate:"))
federalrate = eval(input("Please enter federal tax withholding rate:"))
rate = eval(input("Please enter state tax withholding rate:"))
gross = money*time
federal = gross*federalrate
state = gross*rate
total = federal+state
net = gross-total
print("Name: "+name,"Hours worked per week: "+str(time),"Hourly pay rate: "+str(money),"Gross pay: "+str(gross),"Federal withholding: "+str(federal),"State withholding: "+str(state),"Total deduction: "+str(total),"Net pay: "+str(net))
# - 7
# <img src="../Photo/32.png"><img>
num = eval(input("Enter a four-digit integer: "))
num1 = num%10
num2 = num//10%10
num3 = (num//100)%10
num4 = num//1000
print(str(num1) + str(num2) + str(num3) + str(num4))
# - 8 Advanced:
# > Encrypt a piece of text, then write the decrypted result to a local file
import hashlib
num = input(">>")
h1 = hashlib.md5()
h1.update(num.encode(encoding='utf-8'))
print('After MD5 hashing: ' + h1.hexdigest())
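# MD5 is a one-way hash, so it cannot satisfy the "write the decrypted file" half of the exercise. A reversible sketch using the same ord/chr shifting as the earlier email example; the shift value and output filename are illustrative:

```python
# Reversible Caesar-style shift built on ord/chr, matching the email
# obfuscation example above; the decrypted text is saved to a local file.
def encrypt(text, shift=10):
    return ''.join(chr(ord(ch) + shift) for ch in text)

def decrypt(text, shift=10):
    return ''.join(chr(ord(ch) - shift) for ch in text)

secret = encrypt('this is a test')
with open('decrypted.txt', 'w') as f:
    f.write(decrypt(secret))
```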
| 9.11.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ATACseq_SnapATAC]
# language: python
# name: conda-env-ATACseq_SnapATAC-py
# ---
import itertools
import pandas as pd
import numpy as np
import os
metadata = pd.read_csv('../../input/metadata.tsv',sep='\t',index_col=0)
metadata.shape
metadata.head()
path = '../../input/sc-bams_nodup/'
bamfile = [os.path.join(path,f) for f in os.listdir(path) if f.endswith(".bam")]
len(bamfile)
# #### Merge all the bam files
# #### Split the bamfiles into chunks
with open('list_bamfiles.txt', 'w') as f:
for item in bamfile:
f.write("%s\n" % item)
def chunkIt(seq, num):
avg = len(seq) / float(num)
out = []
last = 0.0
while last < len(seq):
out.append(seq[int(last):int(last + avg)])
last += avg
return out
bamfile_chunks = chunkIt(bamfile,10)
for i,x in enumerate(bamfile_chunks):
with open('list_bamfiles_'+str(i)+'.txt', 'w') as f:
for item in x:
f.write("%s\n" % item)
# `bash run_merge_sort.sh`
| Real_Data/Cusanovich_2018/run_methods/SnapATAC/SnapATAC_cusanovich2018_preprocess.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## All in - Conditional Statements, Functions, and Loops
# *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*
# You are provided with the 'nums' list. Complete the code in the cell that follows. Use a while loop to count the number of values lower than 20. <br/>
# *Hint: This exercise is similar to what we did in the video lecture. You might prefer using the x[item] structure for indicating the value of an element from the list.*
nums = [1,35,12,24,31,51,70,100]
def count(numbers):
numbers = sorted(numbers)
tot = 0
    while tot < len(numbers) and numbers[tot] < 20:
tot += 1
return tot
count(nums)
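# One possible refactoring of the solution above (equivalent result, though not the while loop the exercise asks for): count matches in a single pass with a generator expression, with no sorting or indexing needed.

```python
# Count values below the threshold in one pass over the list.
def count_below(numbers, threshold=20):
    return sum(1 for n in numbers if n < threshold)

count_below([1, 35, 12, 24, 31, 51, 70, 100])  # 2
```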
| course_2/course_material/Part_4_Python/S29_L175/Python 2/All In - Solution_Py2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import scipy.io
import pandas as pd
import numpy as np
import sys
import os
import ast
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path+"/lib")
from get_dob import datenum_to_datetime
from get_face_array import save_array
# -
mat_wiki = scipy.io.loadmat('./wiki_crop/wiki.mat')
columns = ["dob", "photo_taken", "full_path", "gender", "name", "face_location",
"face_score", "second_face_score", 'celeb_names', 'celeb_id']
# +
instances_wiki = mat_wiki['wiki'][0][0][0].shape[1]
df_wiki = pd.DataFrame(index = range(0,instances_wiki), columns=columns)
# -
for i in mat_wiki["wiki"]:
current_array = i[0]
for j in range(len(current_array)):
df_wiki[columns[j]] = pd.DataFrame(current_array[j][0])
df_wiki.head()
df_wiki['dob'] = df_wiki['dob'].apply(datenum_to_datetime)
df_wiki['full_path'] = df_wiki['full_path'].str.get(0)
df_wiki.to_csv('wiki.csv', index=False)
save_array()
test = pd.read_csv('./face_nparray.csv')
test[~test["face_nparray"].isna()]
# +
arr = ast.literal_eval(test[~test["face_nparray"].isna()]["face_nparray"][7066])
b = np.array(arr)
plt.figure(figsize=(16,16))
plt.imshow(b)
plt.show()
# -
| analysis/01_image_extraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="QauupZPJZpDX" executionInfo={"status": "ok", "timestamp": 1618477999003, "user_tz": -330, "elapsed": 25672, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="8fbb39ee-b4ed-4a90-9cd9-529864d8ff7b"
from google.colab import drive
drive.mount("/content/drive")
# + id="VXi-xyS2u35H" executionInfo={"status": "ok", "timestamp": 1618478004275, "user_tz": -330, "elapsed": 3671, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}}
import tensorflow.keras, os
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
from tensorflow.keras.layers import Conv2D, Flatten, Dense, MaxPool2D
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
import sklearn.metrics as met
from sklearn.metrics import multilabel_confusion_matrix as mcm
import itertools
# + colab={"base_uri": "https://localhost:8080/"} id="3YTYJd1mu-Xe" executionInfo={"status": "ok", "timestamp": 1618478069340, "user_tz": -330, "elapsed": 60695, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="57717bfb-30a9-4027-f766-54f6306c58cb"
trdata = ImageDataGenerator(rescale = 1./255)
traindata = trdata.flow_from_directory(directory="/content/drive/MyDrive/Covid-Train",target_size=(299,299))
tsdata = ImageDataGenerator(rescale = 1./255)
testdata = tsdata.flow_from_directory(directory="/content/drive/MyDrive/Covid-Test", target_size=(299,299))
teacher = ImageDataGenerator(rescale = 1./255)
teacherData = teacher.flow_from_directory(directory="/content/drive/MyDrive/Teacher", target_size=(299,299))
# + colab={"base_uri": "https://localhost:8080/"} id="2Izvzn_nvWIB" executionInfo={"status": "ok", "timestamp": 1618478130306, "user_tz": -330, "elapsed": 2211, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="413b582e-6e27-4aea-ad26-59a42d226e7a"
VGG = tensorflow.keras.applications.VGG16(input_shape=(299, 299, 3), include_top= False, weights='imagenet')
VGG.trainable = False
model = tensorflow.keras.Sequential([
VGG,
tensorflow.keras.layers.Flatten(),
tensorflow.keras.layers.Dense(units=256, activation='relu'),
tensorflow.keras.layers.Dense(units=256, activation='relu'),
tensorflow.keras.layers.Dense(units=4, activation='softmax')
])
model.compile(optimizer = 'adam', loss = tensorflow.keras.losses.categorical_crossentropy, metrics=['accuracy', 'mse'])
model.summary()
# + id="BoaUwWOAy72i" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1618416676563, "user_tz": -330, "elapsed": 190812, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="0b24d064-3dc1-48e1-a3b0-5eb09eb7f978"
hist = model.fit(traindata, steps_per_epoch=100, epochs=12, validation_data= testdata, validation_steps=10)
model.save('/content/drive/MyDrive/SaveState3.h5')
plot1=plt.figure(1)
plt.title('Loss')
plt.plot(hist.history['loss'], label='Training')
plt.plot(hist.history['val_loss'], label='Validation')
plt.legend()
plot2=plt.figure(2)
plt.title('Accuracy')
plt.plot(hist.history['accuracy'], label='Training')
plt.plot(hist.history['val_accuracy'], label='Validation')
plt.legend()
plt.show()
# + id="G5yeVd0G37IT" executionInfo={"status": "ok", "timestamp": 1618480880296, "user_tz": -330, "elapsed": 2737945, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}}
model=load_model('/content/drive/MyDrive/SaveState3.h5')
yhat_probs = model.predict(testdata, verbose=0)
# + id="5ixvWlZa_i4N" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1618483966650, "user_tz": -330, "elapsed": 2723902, "user": {"displayName": "minipor<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="a61874a8-58c5-4652-9354-0219dc8f4393"
yhat_classes = model.predict_classes(testdata, verbose=0)
# + id="i0oVcsLblPpQ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1618484113979, "user_tz": -330, "elapsed": 1374, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}} outputId="f7f49219-19cd-496d-9d70-e3a3f2edf800"
#yhat_probs = yhat_probs[:, 0]
#yhat_classes = yhat_classes[:, 0]
testY = testdata.classes
accuracy = met.accuracy_score(testY, yhat_classes)
print('Accuracy: %f' % accuracy)
precision = met.precision_score(testY, yhat_classes, average='weighted')
print('Precision: %f' % precision)
recall = met.recall_score(testY, yhat_classes, average='weighted')
print('Recall: %f' % recall)
f1 = met.f1_score(testY, yhat_classes, average='weighted')
print('F1 score: %f' % f1)
matrix = met.confusion_matrix(testY, yhat_classes)
print(matrix)
# + id="ti7y6a9WE-xN" executionInfo={"status": "ok", "timestamp": 1618484138317, "user_tz": -330, "elapsed": 1246, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10081176100440709596"}}
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 467} id="c735wSS0GMEU" executionInfo={"status": "ok", "timestamp": 1618484142419, "user_tz": -330, "elapsed": 1925, "user": {"displayName": "miniporj vce", "photoUrl": "", "userId": "10081176100440709596"}} outputId="ae41955c-db86-48bb-ac37-793e29890877"
plot_confusion_matrix(cm = matrix, normalize = True, target_names = ["COVID-19", "Ground Glass Opacity", "Normal", "Viral Pneumonia"], title = "Confusion Matrix of Test Data")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="sqnjYy241608" executionInfo={"status": "ok", "timestamp": 1618484163824, "user_tz": -330, "elapsed": 11854, "user": {"displayName": "miniporj vce", "photoUrl": "", "userId": "10081176100440709596"}} outputId="78c27e6f-ce3a-450d-eb0d-9df7517ddcb6"
def getOutput(stri):
img= image.load_img(stri, target_size=(299,299))
img=np.asarray(img)
plt.imshow(img)
img=np.expand_dims(img,axis=0)
saved_model=load_model('/content/drive/MyDrive/SaveState3.h5')
output=saved_model.predict(img)
print(output)
maxi=0
for i in range(4):
if(output[0][i]>output[0][maxi]):
maxi=i
diction = {0:'Diagnosed with COVID-19!', 1:'Diagnosed with Ground Glass Opacity!', 2: 'Healthy', 3: 'Diagnosed with Viral Pneumonia!'}
print(diction[maxi])
plot1=plt.figure(1)
getOutput('/content/drive/MyDrive/COVID-3403.png')
plot2=plt.figure(2)
getOutput('/content/drive/MyDrive/Lung_Opacity-6004.png')
plot3=plt.figure(3)
getOutput('/content/drive/MyDrive/Normal-10182.png')
plot4=plt.figure(4)
getOutput('/content/drive/MyDrive/Viral Pneumonia-1339.png')
| Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Refactoring Python Code
# + [markdown] slideshow={"slide_type": "fragment"}
#
# ---
#
# ### <NAME>
#
# * Twitter/Github: [@littlepea](https://twitter.com/littlepea12)
# * Email: [<EMAIL>](mailto:<EMAIL>)
# + [markdown] slideshow={"slide_type": "slide"}
# # Scope
#
# * What is refactoring?
# * Why should we refactor?
# * When to refactor?
# * Refactoring examples (how to refactor?)
# + [markdown] slideshow={"slide_type": "slide"}
# # What is Refactoring?
# + [markdown] slideshow={"slide_type": "fragment"}
# > "Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior." -- © <NAME>
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Or in other words:
#
# > Refactoring is a process of improving your code without writing any new functionality.
# + [markdown] slideshow={"slide_type": "slide"}
# # Why should we refactor?
# + [markdown] slideshow={"slide_type": "fragment"}
# * Make the code easier to read and maintain
# * Reduce complexity
# * Improve performance
# * Make the code more reusable and flexible
# * Pay off technical debt
# * Keep the "cost of change" low
# + [markdown] slideshow={"slide_type": "slide"}
# # When to refactor?
# + [markdown] slideshow={"slide_type": "fragment"}
# * Ideally: as you code...
# * Otherwise:
# * When you find yourself duplicating functionality
# * When working with legacy code
# * When adding features
# * When fixing bugs
#
# ### The boy-scout rule:
#
# > "always leave the code behind in a better state than you found it." -- © <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# # How to refactor?
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Types of Refactoring:
#
# * Improve form (names, functions size, nesting)
# * Improve style (PEP8/PyLint, pythonic code)
# * Reduce duplication (DRY) and complexity (KISS)
# * Apply design patterns (such as "Ports and Adapters")
# * Apply SOLID principles (such as "Single Responsibility Principle")
# * Improve code structure (modularity and composability)
# + [markdown] slideshow={"slide_type": "slide"}
# # Refactoring examples
#
# We're going to refactor a sample console movie reviews aggregation utility to demonstrate some examples of all kinds of refactoring.
#
# We'll start with a working application that aggregates reviews from Twitter and IMDB using the movie name as input, and we'll apply different refactoring patterns one by one to make the code more readable and maintainable.
# + [markdown] slideshow={"slide_type": "slide"}
# # Code "Before Refactoring"
#
# [movie_reviews.py](https://github.com/littlepea/python-refactoring-talk/blob/master/before/movie_reviews.py)
# + slideshow={"slide_type": "subslide"}
"""Movie Reviews.
Usage: movie_reviews.py <title>
movie_reviews.py (-h | --help)
movie_reviews.py --version
Arguments:
<title> Movie title
Options:
-h --help Show this screen.
--version Show version.
"""
from docopt import docopt
from TwitterSearch import *
from dateutil import parser
from imdbpie import Imdb
import logging
import os
def main(title):
reviews = []
# Search tweets
ts = TwitterSearch(
consumer_key = os.environ.get('TWITTER_CONSUMER_KEY'),
consumer_secret = os.environ.get('TWITTER_CONSUMER_SECRET'),
access_token = os.environ.get('TWITTER_ACCESS_TOKEN'),
access_token_secret = os.environ.get('TWITTER_TOKEN_SECRET')
)
try:
ts.connect()
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.setKeywords(['#' + title + 'Movie']) # let's define all words we would like to have a look for
tso.setLanguage('en') # we want to see German tweets only
tso.setIncludeEntities(False) # and don't give us all those entity information
# add tweets to reviews list
results = ts.getSearchResults(tso)
except TwitterSearchException as e: # take care of all those ugly errors if there are some
logging.exception(str(e))
ts.cleanUp()
else:
for offset in range(results.getSize()):
if offset > 9:
break
tweet = results.getTweetByIndex(offset)
reviews.append({
'author': tweet.getUserName(),
'summary': tweet.getText(),
'text': tweet.getText(),
'date': parser.parse(tweet.getCreatedDate(), ignoretz=True),
'source': 'Twitter'
})
finally:
ts.disconnect()
# Search Imdb
imdb = Imdb()
try:
response = imdb.search_for_title(title)[0]
title_id = response['imdb_id']
response = imdb.get_title_reviews(title_id, max_results=10)
except IndexError as e:
logging.exception(str(e))
else:
for review in response:
reviews.append({
'author': review.username,
'summary': review.summary,
'text': review.text,
'date': parser.parse(review.date, ignoretz=True),
'source': 'IMDB'
})
# Sort reviews by date
reviews.sort(cmp=_cmprev)
# Print reviews
for review in reviews:
print('(%s) @%s: %s [Source: %s]' % ( review['date'].strftime('%Y-%m-%d'), review['author'], review['summary'], review['source'] ) )
def _cmprev(r1, r2):
if r1['date']>r2['date']:
return -1
elif r1['date']<r2['date']:
return 1
else:
return 0
# + [markdown] slideshow={"slide_type": "slide"}
# # Improve Style
#
# ## PEP8/PyLint:
#
# * Imports rules/order
# * Whitespace rules
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Imports: BEFORE
# + slideshow={"slide_type": "fragment"}
from docopt import docopt
from TwitterSearch import *
from dateutil import parser
from imdbpie import Imdb
import logging
import os
# + [markdown] slideshow={"slide_type": "notes"}
# ### Problems:
#
# * 3rd-party packages before the standard library
# * Wildcard imports
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Imports: AFTER
#
# [commit 1fa2de3](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/1fa2de3)
# + slideshow={"slide_type": "fragment"}
import logging
import os
from docopt import docopt
from dateutil import parser
from imdbpie import Imdb
from TwitterSearch import TwitterSearch, TwitterSearchOrder, TwitterSearchException
# + [markdown] slideshow={"slide_type": "slide"}
# # Improve Names
#
# * Explicit better than implicit
# * Names should reflect the purpose
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Sorting function: BEFORE
# + slideshow={"slide_type": "fragment"}
def _cmprev(r1, r2):
if r1['date']>r2['date']:
return -1
elif r1['date']<r2['date']:
return 1
else:
return 0
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Sorting function: AFTER
#
# [commit 80fc3d6](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/80fc3d6)
# + slideshow={"slide_type": "fragment"}
def _sort_by_date_desc(first, second):
first_date = first['date']
second_date = second['date']
if first_date > second_date:
return -1
elif first_date < second_date:
return 1
return 0
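# A possible further refactoring (not one of the talk's commits): replace the comparison function with a key function. `sorted` with `key=` and `reverse=True` gives the same newest-first ordering and also works on Python 3, where the `cmp` argument to `sort` was removed. The sample dicts here are stand-ins for the review dicts:

```python
from operator import itemgetter

# Newest-first ordering via a key function instead of a cmp callback;
# this form works on both Python 2 and Python 3.
reviews = [{'date': 2}, {'date': 3}, {'date': 1}]
reviews_sorted = sorted(reviews, key=itemgetter('date'), reverse=True)
# [{'date': 3}, {'date': 2}, {'date': 1}]
```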
# + [markdown] slideshow={"slide_type": "slide"}
# # Create Adapters for 3rd-party code
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Benefits:
#
# * Decouple the dependencies on the application boundary
# * Only expose the functionality you need
# * Expose 3rd-party functionality via a pythonic API
# * Easy to mock/test
# * Easy to swap
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Twitter Search: BEFORE
# + slideshow={"slide_type": "fragment"}
def main(title):
reviews = []
# Search tweets
ts = TwitterSearch(
consumer_key = os.environ.get('TWITTER_CONSUMER_KEY'),
consumer_secret = os.environ.get('TWITTER_CONSUMER_SECRET'),
access_token = os.environ.get('TWITTER_ACCESS_TOKEN'),
access_token_secret = os.environ.get('TWITTER_TOKEN_SECRET')
)
try:
ts.connect()
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.setKeywords(['#' + title + 'Movie']) # let's define all words we would like to have a look for
tso.setLanguage('en') # we want to see German tweets only
tso.setIncludeEntities(False) # and don't give us all those entity information
# add tweets to reviews list
results = ts.getSearchResults(tso)
except TwitterSearchException as e: # take care of all those ugly errors if there are some
logging.exception(str(e))
ts.cleanUp()
else:
for offset in range(results.getSize()):
if offset > 9:
break
tweet = results.getTweetByIndex(offset)
reviews.append({
'author': tweet.getUserName(),
'summary': tweet.getText(),
'text': tweet.getText(),
'date': parser.parse(tweet.getCreatedDate(), ignoretz=True),
'source': 'Twitter'
})
finally:
ts.disconnect()
# + [markdown] slideshow={"slide_type": "subslide"}
# ### TwitterReviews adapter
#
# [commit 3156496](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/3156496)
# + slideshow={"slide_type": "fragment"}
import logging
import os
from dateutil import parser
from TwitterSearch import TwitterSearch, TwitterSearchOrder, TwitterSearchException
class TwitterReviews(object):
def __init__(self, movie, limit=10, language='en'):
self.movie = movie
        self.limit = limit
self.language = language
self.client = TwitterSearch(
consumer_key=os.environ.get('TWITTER_CONSUMER_KEY'),
consumer_secret=os.environ.get('TWITTER_CONSUMER_SECRET'),
access_token=os.environ.get('TWITTER_ACCESS_TOKEN'),
access_token_secret=os.environ.get('TWITTER_TOKEN_SECRET')
)
def __enter__(self):
self.client.connect()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type == TwitterSearchException:
logging.exception(str(exc_val))
self.client.cleanUp()
self.client.disconnect()
@property
def reviews(self):
return Reviews(self._get_results(), limit=self.limit)
def _prepare_request(self):
tso = TwitterSearchOrder()
tso.setKeywords(self._get_keywords())
tso.setLanguage(self.language)
tso.setIncludeEntities(False)
return tso
def _get_keywords(self):
return ['#' + self.movie + 'Movie']
def _get_results(self):
request = self._prepare_request()
return self.client.getSearchResults(request)
class Reviews(object):
def __init__(self, tweets, limit=10):
self.limit = limit
self.tweets = tweets
def __len__(self):
size = self.tweets.getSize()
return size if size < self.limit else self.limit
def __getitem__(self, item):
if item >= len(self):
raise IndexError
tweet = self.tweets.getTweetByIndex(item)
return Review(tweet)
class Review(object):
def __init__(self, review):
self.review = review
@property
def author(self):
return self.review.getUserName()
@property
def summary(self):
return self.review.getText()
@property
def text(self):
return self.review.getText()
@property
def date(self):
return parser.parse(self.review.getCreatedDate(), ignoretz=True)
@property
def source(self):
return 'Twitter'
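The `Reviews` wrapper above relies on Python's sequence protocol: implementing `__len__` and `__getitem__` (raising `IndexError` past the end) is all a class needs to support `for`-loops. A minimal stand-in (a hypothetical `Capped` class, not part of the talk) shows why iterating over `reviews_backend.reviews` works:

```python
class Capped:
    """Wrap a list, exposing at most `limit` items via the sequence protocol."""
    def __init__(self, items, limit=10):
        self.items = items
        self.limit = limit

    def __len__(self):
        return min(len(self.items), self.limit)

    def __getitem__(self, index):
        if index >= len(self):
            raise IndexError  # signals the end of iteration to for-loops
        return self.items[index]

capped = Capped(list(range(100)), limit=3)
print(list(capped))  # → [0, 1, 2]
```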
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Twitter Search: AFTER
#
# [commit d0a40a4](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/d0a40a4)
# + slideshow={"slide_type": "fragment"}
def main(title, year=None):
reviews = []
with TwitterReviews(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
# + [markdown] slideshow={"slide_type": "slide"}
# # Remove Duplication (DRY)
#
# * Duplicated code should be extracted into separate functions, methods and classes
# * But the purpose is reusability, not just "not allowing any duplication"
# + [markdown] slideshow={"slide_type": "subslide"}
# ### IMDB Reviews: BEFORE
# + slideshow={"slide_type": "fragment"}
def main(title, year=None):
reviews = []
with TwitterReviews(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
# Search Imdb
imdb = Imdb()
try:
response = imdb.search_for_title(title)[0]
title_id = response['imdb_id']
response = imdb.get_title_reviews(title_id, max_results=10)
except IndexError as e:
logging.exception(str(e))
else:
for review in response:
reviews.append({
'author': review.username,
'summary': review.summary,
'text': review.text,
'date': parser.parse(review.date, ignoretz=True),
'source': 'IMDB'
})
# + [markdown] slideshow={"slide_type": "subslide"}
# ### IMDB Reviews adapter
#
# [commit aefc54e](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/aefc54e)
# + slideshow={"slide_type": "fragment"}
import logging
from dateutil import parser
from imdbpie import Imdb
class IMDBReviews(object):
def __init__(self, movie, limit=10, language='en'):
self.movie = movie
self.limit = limit
self.language = language
self.client = Imdb()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_val:
logging.exception(str(exc_val))
@property
def reviews(self):
return [Review(result) for result in self._get_results()]
def _get_results(self):
try:
response = self.client.search_for_title(self.movie)[0]
title_id = response['imdb_id']
response = self.client.get_title_reviews(title_id, max_results=10)
return response
except IndexError as e:
logging.exception(str(e))
return []
class Review(object):
def __init__(self, review):
self.review = review
@property
def author(self):
return self.review.username
@property
def summary(self):
return self.review.summary
@property
def text(self):
return self.review.text
@property
def date(self):
return parser.parse(self.review.date, ignoretz=True)
@property
def source(self):
return 'IMDB'
# + [markdown] slideshow={"slide_type": "subslide"}
# ### IMDB Reviews: AFTER
#
# [commit 23888d9](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/23888d9)
# + slideshow={"slide_type": "fragment"}
def main(title):
reviews = []
for backend_class in (TwitterReviews, IMDBReviews):
with backend_class(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### BaseReview class
#
# [commit 4a314cd](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/4a314cd)
# + slideshow={"slide_type": "fragment"}
class BaseReview(object):
def __init__(self, review):
self.review = review
@property
def author(self):
raise NotImplementedError
@property
def summary(self):
raise NotImplementedError
@property
def text(self):
raise NotImplementedError
@property
def date(self):
raise NotImplementedError
@property
def source(self):
raise NotImplementedError
def display(self):
        print('(%s) @%s: %s [Source: %s]' % (
            self.date.strftime('%Y-%m-%d'),
            self.author,
            self.summary,
            self.source))
# + [markdown] slideshow={"slide_type": "slide"}
# # Adding more backends
#
# * Now that we've refactored using new abstractions we can easily add more review sources
# * Let's add NY Times movie reviews to the mix
#
# [commit 801f26c](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/801f26c)
# + slideshow={"slide_type": "subslide"}
import logging
import os
import requests
from dateutil import parser
BaseReview = object # ignore this line, it's just for Jupyter notebook
class NYTimesReviews(object):
def __init__(self, movie, limit=10, language='en'):
self.movie = movie
        self.limit = limit
self.language = language
self.url = 'https://api.nytimes.com/svc/movies/v2/reviews/search.json'
self.api_key = os.environ.get('NY_TIMES_API_KEY')
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_val:
logging.exception(str(exc_val))
@property
def reviews(self):
return [Review(result) for result in self._get_results()]
def _get_results(self):
response = requests.get(self.url, self._prepare_request_data())
return response.json()['results']
def _prepare_request_data(self):
data = {
'query': self.movie,
'api-key': self.api_key
}
return data
class Review(BaseReview):
@property
def author(self):
return self.review['byline']
@property
def summary(self):
return self.review['headline']
@property
def text(self):
return self.review['summary_short']
@property
def date(self):
return parser.parse(self.review['date_updated'], ignoretz=True)
@property
def source(self):
return 'NYTimes'
# + [markdown] slideshow={"slide_type": "subslide"}
# ### NY Times Reviews: BEFORE
# + slideshow={"slide_type": "fragment"}
def main(title):
reviews = []
for backend_class in (TwitterReviews, IMDBReviews):
with backend_class(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
# Search NYTimes
url = "https://api.nytimes.com/svc/movies/v2/reviews/search.json"
data = {
'query': title,
'api-key': os.environ.get('NY_TIMES_API_KEY')
}
response = requests.get(url, data)
count = 0
for review in response.json()['results']:
if count > 9:
break
reviews.append({
'author': review['byline'],
'summary': review['headline'],
'text': review['summary_short'],
'date': parser.parse(review['date_updated'], ignoretz=True),
'source': 'NYTimes'
})
count += 1
# + [markdown] slideshow={"slide_type": "subslide"}
# ### NY Times Reviews: AFTER
#
# [commit 49079b6](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/49079b6)
# + slideshow={"slide_type": "fragment"}
def main(title):
reviews = []
for backend_class in (TwitterReviews, IMDBReviews, NYTimesReviews):
with backend_class(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
# + [markdown] slideshow={"slide_type": "slide"}
# # Fix sorting
#
# * `cmp` functions are deprecated in Python 3 and are hard to read
# * Let's use `key` function instead
# * And separate getting sorted reviews into a service function
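As a quick illustration (not from the talk), `functools.cmp_to_key` can adapt the old comparator under Python 3, but a plain `key` with `reverse=True` expresses the same newest-first order more directly:

```python
from functools import cmp_to_key
from datetime import datetime

reviews = [{'date': datetime(2020, 1, day)} for day in (2, 3, 1)]

def _cmprev(r1, r2):
    # old-style comparator: newer dates sort first
    if r1['date'] > r2['date']:
        return -1
    elif r1['date'] < r2['date']:
        return 1
    return 0

by_cmp = sorted(reviews, key=cmp_to_key(_cmprev))
by_key = sorted(reviews, key=lambda r: r['date'], reverse=True)
assert by_cmp == by_key  # both yield days 3, 2, 1
```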
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Sorting: BEFORE
# + slideshow={"slide_type": "fragment"}
def _cmprev(r1, r2):
if r1['date']>r2['date']:
return -1
elif r1['date']<r2['date']:
return 1
else:
return 0
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Sorting and Services: AFTER
#
# [commit fa75a8f](https://github.com/littlepea/python-refactoring-talk/pull/1/commits/fa75a8f)
# + slideshow={"slide_type": "fragment"}
def get_reviews(title):
reviews = []
for backend_class in (TwitterReviews, IMDBReviews, NYTimesReviews):
with backend_class(title) as reviews_backend:
for review in reviews_backend.reviews:
reviews.append(review)
return reversed(sorted(reviews, key=_get_review_date))
def _get_review_date(review):
return review.date
# + [markdown] slideshow={"slide_type": "slide"}
# ### Final: BEFORE
#
# [movie_reviews.py](https://github.com/littlepea/python-refactoring-talk/blob/master/before/movie_reviews.py)
# + slideshow={"slide_type": "fragment"}
"""Movie Reviews.
Usage: movie_reviews.py <title>
movie_reviews.py (-h | --help)
movie_reviews.py --version
Arguments:
<title> Movie title
Options:
-h --help Show this screen.
--version Show version.
"""
from docopt import docopt
from TwitterSearch import *
from dateutil import parser
from imdbpie import Imdb
import logging
import os
def main(title):
reviews = []
# Search tweets
ts = TwitterSearch(
consumer_key = os.environ.get('TWITTER_CONSUMER_KEY'),
consumer_secret = os.environ.get('TWITTER_CONSUMER_SECRET'),
access_token = os.environ.get('TWITTER_ACCESS_TOKEN'),
access_token_secret = os.environ.get('TWITTER_TOKEN_SECRET')
)
try:
ts.connect()
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.setKeywords(['#' + title + 'Movie']) # let's define all words we would like to have a look for
        tso.setLanguage('en') # we want to see English tweets only
tso.setIncludeEntities(False) # and don't give us all those entity information
# add tweets to reviews list
results = ts.getSearchResults(tso)
except TwitterSearchException as e: # take care of all those ugly errors if there are some
logging.exception(str(e))
ts.cleanUp()
else:
for offset in range(results.getSize()):
if offset > 9:
break
tweet = results.getTweetByIndex(offset)
reviews.append({
'author': tweet.getUserName(),
'summary': tweet.getText(),
'text': tweet.getText(),
'date': parser.parse(tweet.getCreatedDate(), ignoretz=True),
'source': 'Twitter'
})
finally:
ts.disconnect()
# Search Imdb
imdb = Imdb()
try:
response = imdb.search_for_title(title)[0]
title_id = response['imdb_id']
response = imdb.get_title_reviews(title_id, max_results=10)
except IndexError as e:
logging.exception(str(e))
else:
for review in response:
reviews.append({
'author': review.username,
'summary': review.summary,
'text': review.text,
'date': parser.parse(review.date, ignoretz=True),
'source': 'IMDB'
})
# Search NYTimes
url = "https://api.nytimes.com/svc/movies/v2/reviews/search.json"
data = {
'query': title,
'api-key': os.environ.get('NY_TIMES_API_KEY')
}
response = requests.get(url, data)
count = 0
for review in response.json()['results']:
if count > 9:
break
reviews.append({
'author': review['byline'],
'summary': review['headline'],
'text': review['summary_short'],
'date': parser.parse(review['date_updated'], ignoretz=True),
'source': 'NYTimes'
})
count += 1
# Sort reviews by date
reviews.sort(cmp=_cmprev)
# Print reviews
for review in reviews:
print('(%s) @%s: %s [Source: %s]' % ( review['date'].strftime('%Y-%m-%d'), review['author'], review['summary'], review['source'] ) )
def _cmprev(r1, r2):
if r1['date']>r2['date']:
return -1
elif r1['date']<r2['date']:
return 1
else:
return 0
# + [markdown] slideshow={"slide_type": "slide"}
# # Final `main()` program: AFTER
#
# [movie_reviews.py](https://github.com/littlepea/python-refactoring-talk/blob/develop/after/movie_reviews.py)
# + slideshow={"slide_type": "fragment"}
def main(title):
for review in get_reviews(title):
review.display()
# + [markdown] slideshow={"slide_type": "subslide"}
# # Final code structure
# + [markdown] slideshow={"slide_type": "fragment"}
# * [backends/](https://github.com/littlepea/python-refactoring-talk/tree/develop/after/backends)
# * base.py
# * imdb.py
# * ny_times.py
# * twitter.py
# * [movie_reviews.py](https://github.com/littlepea/python-refactoring-talk/tree/develop/after/movie_reviews.py)
# * [services.py](https://github.com/littlepea/python-refactoring-talk/tree/develop/after/services.py)
# + [markdown] slideshow={"slide_type": "fragment"}
# ---
#
# ### References:
#
# * [Pull Request](https://github.com/littlepea/python-refactoring-talk/pull/1/files?diff=unified)
# * [Commits](https://github.com/littlepea/python-refactoring-talk/pull/1/commits)
# + [markdown] slideshow={"slide_type": "slide"}
# # Q & A
#
# ---
#
# ### <NAME>
#
# * Twitter/Github: [@littlepea](https://twitter.com/littlepea12)
# * Email: [<EMAIL>](mailto:<EMAIL>)
| refactoring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# <a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Languages/Pronunciation/pronunciation.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# # Language Pronunciation
#
# As we are learning to speak additional languages, we need to listen to how words and phrases are pronounced. We will use the [Google Translate](https://pypi.org/project/googletrans) and [Google Text-to-Speech](https://pypi.org/project/gTTS) code libraries for this.
#
# In the cell below, you can change what the `translate_this` variable is equal to, for example `translate_this = 'goodbye'`, then click the `►Run` button to display and speak the translation.
# +
translate_this = 'Hello, how are you today?'
source = 'en'
destination = 'fr'
from googletrans import Translator
translator = Translator()
result = translator.translate(translate_this, src=source, dest=destination)
print(translate_this)
print(result.text)
import gtts
gtts.gTTS(result.text, lang=destination, slow=True).save('pronunciation.wav')
import IPython.display as ipd
ipd.display(ipd.Audio('pronunciation.wav'))
# -
# ## Available Languages
#
# To list the languages that are available for translation, `►Run` the following code cell.
gtts.lang.tts_langs()
# ## Speaking a Word or Phrase
#
# If we have a word or phrase that has already been translated, we can use the following code cell to listen to its pronunciation.
#
# To speed up the speaking, change `slow=True` to `slow=False`.
# +
speak_this = 'Dar la vuelta a la tortilla'
language = 'es'
gtts.gTTS(speak_this, lang=language, slow=True).save('pronunciation.wav')
ipd.display(ipd.Audio('pronunciation.wav'))
# -
# ## Conclusion
#
# When learning languages it is important to listen to pronunciation. In this notebook we used code libraries to produce machine-generated audio files so we could hear what words and phrases sound like.
# [](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| Languages/Pronunciation/pronunciation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Transfer Learning
# Adapted from https://www.mlq.ai/transfer-learning-tensorflow-2-0/
# and https://www.tensorflow.org/tutorials/images/transfer_learning
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
import random
import tensorflow as tf
# -
## Import model
model = tf.keras.applications.ResNet50(weights='imagenet')
# +
## include_top = False : exclude the end of the network
base_model = tf.keras.applications.ResNet50(weights='imagenet', include_top = False)
print(base_model.summary())
# -
## visualize the various layers
for i, layer in enumerate(base_model.layers):
print(i, layer.name)
## take output from base model and perform GlobalAveragePooling2D
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
## add a dense network at the end
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(512, activation='relu')(x)
preds = tf.keras.layers.Dense(2, activation ='softmax')(x)
model = tf.keras.models.Model(inputs=base_model.input, outputs=preds)
print(model.summary())
for i, layer in enumerate(model.layers):
print(i, layer.name)
# +
## freeze the layers that have already been trained
for layer in model.layers[:175]:
layer.trainable = False
for layer in model.layers[175:]:
layer.trainable = True
# -
## apply to new data: preprocess the new data
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
# +
# train_generator = train_datagen.flow_from_directory(
# '/content/drive/My Drive/Colab Notebooks/TF 2.0 Advanced/Transfer Learning Data/train/',
# target_size = (224, 224),
# color_mode = 'rgb',
# batch_size = 32,
# class_mode = 'categorical',
# shuffle = True)
# -
# compile the model
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Note: `train_generator` comes from the commented-out cell above; `fit_generator`
# is deprecated in TF 2.x, where `model.fit` accepts generators directly.
history = model.fit(train_generator,
                    steps_per_epoch=train_generator.n//train_generator.batch_size, epochs=5)
# +
## Evaluate the model
acc = history.history['accuracy']
loss = history.history['loss']
plt.figure()
plt.plot(acc, label='Training Accuracy')
plt.ylabel('Accuracy')
plt.title('Training Accuracy')
plt.figure()
plt.plot(loss, label='Training Loss')
plt.ylabel('Loss')
plt.title('Training Loss')
plt.xlabel('epoch')
plt.show()
# -
# Load data, use https://www.tensorflow.org/tutorials/load_data/images if loading data from a local computer.
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
(raw_train, raw_validation, raw_test), metadata = tfds.load('cats_vs_dogs',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True,
as_supervised=True)
print(raw_train)
print(raw_validation)
print(raw_test)
## plot some images and labels
get_label_name = metadata.features['label'].int2str
for image, label in raw_train.take(4):
plt.figure()
plt.imshow(image)
plt.title(get_label_name(label))
# +
# resize all the images to 160x160 and rescale the input values to [-1,1]:
IMG_SIZE = 160 # All images will be resized to 160x160
def format_example(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
# -
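The `(image/127.5) - 1` step maps raw pixel values from [0, 255] onto [-1, 1], the input range this preprocessing targets; a quick NumPy check (illustrative only):

```python
import numpy as np

# endpoints and midpoint of the raw pixel range
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
scaled = pixels / 127.5 - 1
print(scaled)  # → [-1.  0.  1.]
```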
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)
# +
# shuffle and batch the data
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
# +
# inspect a batch of data
for image_batch, label_batch in train_batches.take(1):
pass
print(image_batch.shape)
# -
| Transfer-Learning/transfer-learning-ResNet50.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# +
T={}
def generateTable(data,k=4):
for i in range(len(data)-k):
X=data[i:i+k]
Y=data[i+k]
if T.get(X) is None:
T[X]={}
T[X][Y]=1
else:
if T[X].get(Y) is None:
T[X][Y]=1
else:
T[X][Y]+=1
return T
def ConvertFreqIntoProb(T):
for kx in T.keys():
s=float(sum(T[kx].values()))
for k in T[kx].keys():
T[kx][k]=T[kx][k]/s
return T
# -
T=generateTable("hello hello helli ")
T
T=ConvertFreqIntoProb(T)
# +
text_path="english_speech_2.txt"
def load_text(filename):
with open(filename,encoding='utf-8') as f:
return f.read().lower()
text=load_text(text_path)
# -
print(text[:1000])
def TrainMarkovChain(text,k=5):
T=generateTable(text,k)
T=ConvertFreqIntoProb(T)
return T
model=TrainMarkovChain(text)
model
# # Generate Text at Test Time
def sample_next(ctx, model, k):
    # use the passed-in model (the original read the global T and ignored this argument)
    ctx = ctx[-k:]
    if model.get(ctx) is None:
        return " "
    possible_chars = list(model[ctx].keys())
    possible_values = list(model[ctx].values())
    return np.random.choice(possible_chars, p=possible_values)
sample_next("my de",model,5)
def generate_Text(starting_sent, k=5, max_len=1000):
    # k must match the k used in TrainMarkovChain (5 here),
    # otherwise no context ever matches the table
    sentence = starting_sent
    ctx = starting_sent[-k:]
    for kx in range(max_len):
        next_prediction = sample_next(ctx, model, k)
        sentence += next_prediction
        ctx = sentence[-k:]
    return sentence
text= generate_Text("dear",max_len=200)
text
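Since `english_speech_2.txt` may not be available, here is a self-contained rerun of the same idea on an inline corpus (a sketch mirroring the functions above, not an exact copy):

```python
def build_table(data, k=4):
    """Map each k-character context to counts of the character that follows it."""
    table = {}
    for i in range(len(data) - k):
        ctx, nxt = data[i:i + k], data[i + k]
        table.setdefault(ctx, {})
        table[ctx][nxt] = table[ctx].get(nxt, 0) + 1
    return table

def to_probs(table):
    """Normalize each context's counts into a probability distribution."""
    for counts in table.values():
        total = float(sum(counts.values()))
        for ch in counts:
            counts[ch] /= total
    return table

corpus = "the cat sat on the mat. the cat sat on the hat. "
chain = to_probs(build_table(corpus, k=4))
# every context's next-char probabilities sum to 1
```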
| DeepLearning/Markov-chain/MarkovChaains.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This cell is added by sphinx-gallery
# !pip install mrsimulator --quiet
# %matplotlib inline
import mrsimulator
print(f'You are using mrsimulator v{mrsimulator.__version__}')
# -
#
# # ⁸⁷Rb 2D 3QMAS NMR of RbNO₃
#
# The following is a 3QMAS fitting example for $\text{RbNO}_3$. The dataset was
# acquired and shared by <NAME>.
#
#
# +
import numpy as np
import csdmpy as cp
import matplotlib.pyplot as plt
from lmfit import Minimizer
from mrsimulator import Simulator
from mrsimulator.methods import ThreeQ_VAS
from mrsimulator import signal_processing as sp
from mrsimulator.utils import spectral_fitting as sf
from mrsimulator.utils import get_spectral_dimensions
from mrsimulator.utils.collection import single_site_system_generator
# -
# ## Import the dataset
#
#
# +
filename = "https://sandbox.zenodo.org/record/814455/files/RbNO3_MQMAS.csdf"
experiment = cp.load(filename)
# standard deviation of noise from the dataset
sigma = 175.5476
# For spectral fitting, we only focus on the real part of the complex dataset
experiment = experiment.real
# Convert the coordinates along each dimension from Hz to ppm.
_ = [item.to("ppm", "nmr_frequency_ratio") for item in experiment.dimensions]
# plot of the dataset.
max_amp = experiment.max()
levels = (np.arange(24) + 1) * max_amp / 25 # contours are drawn at these levels.
options = dict(levels=levels, alpha=0.75, linewidths=0.5) # plot options
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.contour(experiment, colors="k", **options)
ax.set_xlim(-20, -50)
ax.set_ylim(-45, -65)
plt.grid()
plt.tight_layout()
plt.show()
# -
# ## Create a fitting model
# **Guess model**
#
# Create a guess list of spin systems.
#
#
# +
shifts = [-26.4, -28.5, -31.3] # in ppm
Cq = [1.7e6, 2.0e6, 1.7e6] # in Hz
eta = [0.2, 1.0, 0.6]
abundance = [33.33, 33.33, 33.33] # in %
spin_systems = single_site_system_generator(
isotope="87Rb",
isotropic_chemical_shift=shifts,
quadrupolar={"Cq": Cq, "eta": eta},
abundance=abundance,
)
# -
# **Method**
#
# Create the 3QMAS method.
#
#
# +
# Get the spectral dimension parameters from the experiment.
spectral_dims = get_spectral_dimensions(experiment)
MQMAS = ThreeQ_VAS(
channels=["87Rb"],
magnetic_flux_density=9.395, # in T
spectral_dimensions=spectral_dims,
experiment=experiment, # add the measurement to the method.
)
# Optimize the script by pre-setting the transition pathways for each spin system
# from the MQMAS method.
for sys in spin_systems:
sys.transition_pathways = MQMAS.get_transition_pathways(sys)
# -
# **Guess Spectrum**
#
#
# +
# Simulation
# ----------
sim = Simulator(spin_systems=spin_systems, methods=[MQMAS])
sim.config.number_of_sidebands = 1
sim.run()
# Post Simulation Processing
# --------------------------
processor = sp.SignalProcessor(
operations=[
# Gaussian convolution along both dimensions.
sp.IFFT(dim_index=(0, 1)),
sp.apodization.Gaussian(FWHM="0.08 kHz", dim_index=0),
sp.apodization.Gaussian(FWHM="0.1 kHz", dim_index=1),
sp.FFT(dim_index=(0, 1)),
sp.Scale(factor=1e7),
]
)
processed_data = processor.apply_operations(data=sim.methods[0].simulation).real
# Plot of the guess Spectrum
# --------------------------
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.contour(experiment, colors="k", **options)
ax.contour(processed_data, colors="r", linestyles="--", **options)
ax.set_xlim(-20, -50)
ax.set_ylim(-45, -65)
plt.grid()
plt.tight_layout()
plt.show()
# -
# ## Least-squares minimization with LMFIT
# Use the :func:`~mrsimulator.utils.spectral_fitting.make_LMFIT_params` for a quick
# setup of the fitting parameters.
#
#
params = sf.make_LMFIT_params(sim, processor)
print(params.pretty_print(columns=["value", "min", "max", "vary", "expr"]))
# **Run the least-squares minimization with LMFIT**
#
#
minner = Minimizer(sf.LMFIT_min_function, params, fcn_args=(sim, processor, sigma))
result = minner.minimize()
result
# ## The best fit solution
#
#
# +
best_fit = sf.bestfit(sim, processor)[0]
# Plot the spectrum
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.contour(experiment, colors="k", **options)
ax.contour(best_fit, colors="r", linestyles="--", **options)
ax.set_xlim(-20, -50)
ax.set_ylim(-45, -65)
plt.grid()
plt.tight_layout()
plt.show()
# -
# ## Image plots with residuals
#
#
# +
residuals = sf.residuals(sim, processor)[0]
fig, ax = plt.subplots(
1, 3, sharey=True, figsize=(10, 3.0), subplot_kw={"projection": "csdm"}
)
vmax, vmin = experiment.max(), experiment.min()
for i, dat in enumerate([experiment, best_fit, residuals]):
ax[i].imshow(dat, aspect="auto", cmap="gist_ncar_r", vmax=vmax, vmin=vmin)
ax[i].set_xlim(-20, -50)
ax[0].set_ylim(-45, -65)
plt.tight_layout()
plt.show()
| docs/notebooks/fitting/2D_fitting/plot_3_RbNO3_MQMAS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Key Drivers Analysis with Kano model and Shapley values
# The Kano Model is an approach to prioritize attributes on a product based on the degree to which they are likely to satisfy customers.
#
# The Shapley value is a concept from game theory that quantifies how much each player contributes to the game outcome. The concept, however, can be used in a wide variety of decomposition problems. The algorithm for the Shapley value decomposition is always the same – it is just the value function that varies.
#
# We use Kano theory and Shapley value to identify key drivers of overall satisfaction and/or dissatisfaction.
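To make the Shapley idea concrete, here is a tiny hedged sketch (independent of the ShapKa package) that computes exact Shapley values by averaging each player's marginal contribution over all orderings; the coalition value function `v` uses made-up numbers standing in for, e.g., a satisfaction score:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    contrib = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            contrib[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: c / len(orders) for p, c in contrib.items()}

# Toy value function over two drivers 'a' and 'b' (made-up numbers)
v = {frozenset(): 0.0, frozenset({'a'}): 0.2,
     frozenset({'b'}): 0.3, frozenset({'a', 'b'}): 0.6}
phi = shapley_values(['a', 'b'], v.__getitem__)
print({p: round(x, 10) for p, x in phi.items()})  # → {'a': 0.25, 'b': 0.35}
```

The values sum to the full-coalition value (the "efficiency" property), which is what makes the decomposition useful for attributing overall satisfaction to individual drivers.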
# #### Install the python package
# !pip install ShapKa
# #### Load an example dataset
# +
import pandas as pd
df = pd.read_csv('../data/example_03.csv')
y_varname = 'Overall'
weight_varname = 'Weight'
X_varnames = df.columns.values.tolist()
X_varnames.remove(y_varname)
X_varnames.remove(weight_varname)
# -
# #### Key Dissatisfaction Analysis (KDA)
# +
from ShapKa.kanomodel import KanoModel
model = KanoModel(df,
y_varname, X_varnames,
analysis = 'kda',
y_dissat_upperbound = 6, y_sat_lowerbound = 9,
X_dissat_upperbound = 6, X_sat_lowerbound = 9,
weight_varname = weight_varname)
kda = model.key_drivers() ;kda
# -
# #### Key Enhancers Analysis (KEA)
# +
X_dissat_upperbound = 6
max_objective_idx = kda.index[kda['Objective'] == max(kda['Objective'])].tolist()
key_dissat_driver = kda.iloc[0:max_objective_idx[0]+1].Attribute.tolist()
df_without_failure_on_kda = df.loc[~(df[key_dissat_driver] <= X_dissat_upperbound).any(axis=1)]
model = KanoModel(df_without_failure_on_kda,
y_varname, X_varnames,
analysis = 'kea',
y_dissat_upperbound = 6, y_sat_lowerbound = 9,
X_dissat_upperbound = 6, X_sat_lowerbound = 9,
weight_varname = weight_varname)
kea = model.key_drivers() ;kea
# -
| notebook/Key Drivers Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib notebook
# %matplotlib inline
import plot_param_explor
from plot_param_explor import *
import numpy as np
# -
# # Plot Params
#
# The plot below shows that the parameters we have generated vary Axonal Sodium and Axonal Potassium channel parameters between [0,20] and [0,10] respectively.
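`gen_params` is defined in `plot_param_explor` and isn't shown here; as an illustration only, a grid covering those two ranges could be built like this (the 101-point resolution is an assumption):

```python
import numpy as np

k_vals = np.linspace(0, 10, 101)    # axonal potassium conductance range
na_vals = np.linspace(0, 20, 101)   # axonal sodium conductance range
K, Na = np.meshgrid(k_vals, na_vals)
grid = np.column_stack([K.ravel(), Na.ravel()])
print(grid.shape)  # → (10201, 2)
```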
params = np.array(gen_params())
# narrow parameters to just the ones we will vary below
params_varied = params[:,[0,2]]
plt.plot(sorted(np.unique(params_varied[:,1])), label='Axonal Sodium')
plt.plot(sorted(np.unique(params_varied[:,0])), label='Axonal Potassium')
plt.legend()
scores = calc_scores(vs_fn)
[z,X,Y] =plot_scores(scores)
plt.ylabel('Relative Conductance Axonal Sodium [0,20]')
plt.xlabel('Relative Conductance Axonal Potassium [0,10]')
fig, ax = plt.subplots()
#implot = ax.imshow(z,extent=[0,5,0,5])
implot = ax.imshow(z,origin='lower')
implot.set_cmap(plt.cm.get_cmap('Greys',7))
#implot.set_cmap(mygreens1)
fig.colorbar(implot,ax=ax)
plt.xticks(list(range(0,101,25)))
plt.yticks(list(range(0,101,25)))
plt.savefig('paramExplor.eps', format='eps', dpi=100)
plt.ylabel('Relative Conductance Axonal Sodium [0,20]')
plt.xlabel('Relative Conductance Axonal Potassium [0,10]')
plt.savefig('paramExplorzoom.eps', format='eps', dpi=100)
znp = np.array(z)  # numpy was imported as np above
np.savetxt('allscore.csv', znp, delimiter=',')
| GUI/Show param explor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from datetime import datetime, timedelta
import gc
import numpy as np, pandas as pd
import lightgbm as lgb
# > This notebook aims to push the public LB under 0.50. Certainly, the competition is not yet at its peak and there clearly remains room for improvement.
# # Credits and comments on changes
#
# This notebook is based on [m5-first-public-notebook-under-0-50](https://www.kaggle.com/kneroma/m5-first-public-notebook-under-0-50) v.6 by @kkiller
#
# Presently its sole purpose is to test an accelerated prediction stage (vs. the original notebook) where I generate lag features only for the days that need sales forecasts. Everything else is unchanged vs. the original _kkiller's_ notebook (as in version 6).
CAL_DTYPES={"event_name_1": "category", "event_name_2": "category", "event_type_1": "category",
"event_type_2": "category", "weekday": "category", 'wm_yr_wk': 'int16', "wday": "int16",
"month": "int16", "year": "int16", "snap_CA": "float32", 'snap_TX': 'float32', 'snap_WI': 'float32' }
PRICE_DTYPES = {"store_id": "category", "item_id": "category", "wm_yr_wk": "int16","sell_price":"float32" }
pd.options.display.max_columns = 50
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
h = 28
max_lags = 57
tr_last = 1913
fday = datetime(2016,4, 25)
fday
# -
def create_dt(is_train = True, nrows = None, first_day = 1200):
prices = pd.read_csv("../input/m5-forecasting-accuracy/sell_prices.csv", dtype = PRICE_DTYPES)
for col, col_dtype in PRICE_DTYPES.items():
if col_dtype == "category":
prices[col] = prices[col].cat.codes.astype("int16")
prices[col] -= prices[col].min()
cal = pd.read_csv("../input/m5-forecasting-accuracy/calendar.csv", dtype = CAL_DTYPES)
cal["date"] = pd.to_datetime(cal["date"])
for col, col_dtype in CAL_DTYPES.items():
if col_dtype == "category":
cal[col] = cal[col].cat.codes.astype("int16")
cal[col] -= cal[col].min()
start_day = max(1 if is_train else tr_last-max_lags, first_day)
numcols = [f"d_{day}" for day in range(start_day,tr_last+1)]
catcols = ['id', 'item_id', 'dept_id','store_id', 'cat_id', 'state_id']
dtype = {numcol:"float32" for numcol in numcols}
dtype.update({col: "category" for col in catcols if col != "id"})
dt = pd.read_csv("../input/m5-forecasting-accuracy/sales_train_validation.csv",
nrows = nrows, usecols = catcols + numcols, dtype = dtype)
for col in catcols:
if col != "id":
dt[col] = dt[col].cat.codes.astype("int16")
dt[col] -= dt[col].min()
if not is_train:
for day in range(tr_last+1, tr_last+ 28 +1):
dt[f"d_{day}"] = np.nan
dt = pd.melt(dt,
id_vars = catcols,
value_vars = [col for col in dt.columns if col.startswith("d_")],
var_name = "d",
value_name = "sales")
dt = dt.merge(cal, on= "d", copy = False)
dt = dt.merge(prices, on = ["store_id", "item_id", "wm_yr_wk"], copy = False)
return dt
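The key reshaping step in `create_dt` is `pd.melt`, which turns one column per day into one row per (item, day); a toy illustration:

```python
import pandas as pd

wide = pd.DataFrame({'id': ['x', 'y'], 'd_1': [3, 5], 'd_2': [4, 6]})
long = pd.melt(wide, id_vars=['id'],
               value_vars=['d_1', 'd_2'],
               var_name='d', value_name='sales')
print(long.shape)  # → (4, 3): one row per (id, day)
```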
# +
def create_fea(dt):
lags = [7, 28]
lag_cols = [f"lag_{lag}" for lag in lags ]
for lag, lag_col in zip(lags, lag_cols):
dt[lag_col] = dt[["id","sales"]].groupby("id")["sales"].shift(lag)
wins = [7, 28]
for win in wins :
for lag,lag_col in zip(lags, lag_cols):
dt[f"rmean_{lag}_{win}"] = dt[["id", lag_col]].groupby("id")[lag_col].transform(lambda x : x.rolling(win).mean())
date_features = {
"wday": "weekday",
"week": "weekofyear",
"month": "month",
"quarter": "quarter",
"year": "year",
"mday": "day",
# "ime": "is_month_end",
# "ims": "is_month_start",
}
# dt.drop(["d", "wm_yr_wk", "weekday"], axis=1, inplace = True)
for date_feat_name, date_feat_func in date_features.items():
if date_feat_name in dt.columns:
dt[date_feat_name] = dt[date_feat_name].astype("int16")
else:
dt[date_feat_name] = getattr(dt["date"].dt, date_feat_func).astype("int16")
# -
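The per-id lag and rolling-mean pattern used in `create_fea` can be sketched on a toy frame (the `toy` data below is invented for illustration):

```python
import pandas as pd

# Made-up long-format sales for two ids.
toy = pd.DataFrame({
    "id": ["A"] * 4 + ["B"] * 4,
    "sales": [1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0],
})

# Same pattern as create_fea: per-id lag, then a rolling mean over the lagged column.
toy["lag_1"] = toy.groupby("id")["sales"].shift(1)
toy["rmean_1_2"] = toy.groupby("id")["lag_1"].transform(lambda x: x.rolling(2).mean())
```

The `shift` keeps each id's history separate, so id "B" starts with a NaN lag instead of borrowing id "A"'s last value.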
FIRST_DAY = 350 # set to 1 to load the entire dataset, at a serious risk of running out of memory!
# +
# %%time
df = create_dt(is_train=True, first_day= FIRST_DAY)
df.shape
# -
df.head()
df.info()
# +
# %%time
create_fea(df)
df.shape
# -
df.info()
df.head()
df.dropna(inplace = True)
df.shape
# +
cat_feats = ['item_id', 'dept_id','store_id', 'cat_id', 'state_id'] + ["event_name_1", "event_name_2", "event_type_1", "event_type_2"]
useless_cols = ["id", "date", "sales","d", "wm_yr_wk", "weekday"]
train_cols = df.columns[~df.columns.isin(useless_cols)]
X_train = df[train_cols]
y_train = df["sales"]
# +
# %%time
np.random.seed(777)
# Subsample 2M rows to keep memory manageable; despite the name, these rows become the training set.
fake_valid_inds = np.random.choice(X_train.index.values, 2_000_000, replace=False)
train_inds = np.setdiff1d(X_train.index.values, fake_valid_inds)
X_train = X_train.loc[fake_valid_inds]
y_train = y_train.loc[fake_valid_inds]
# +
#del df, X_train, y_train, fake_valid_inds,train_inds ; gc.collect()
# +
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import LabelEncoder
# ML algorithms
from sklearn.linear_model import ElasticNetCV, LassoCV, RidgeCV
import sklearn.linear_model as linear_model
from sklearn.svm import SVR
from lightgbm import LGBMRegressor
from sklearn.ensemble import GradientBoostingRegressor,RandomForestRegressor
from xgboost import XGBRegressor
from mlxtend.regressor import StackingCVRegressor
# +
kf = KFold(n_splits=12, random_state=42, shuffle=True)
# Define the error metric: cross-validated RMSE
def cv_rmse(model, X=X_train):
    rmse = np.sqrt(-cross_val_score(model, X, y_train, scoring="neg_mean_squared_error", cv=kf))
    return rmse
# +
ridge_alphas = [1e-15, 1e-10, 1e-8, 9e-4, 7e-4, 5e-4, 3e-4, 1e-4, 1e-3, 5e-2, 1e-2, 0.1, 0.3, 1, 3, 5, 10, 15, 18, 20, 30, 50, 75, 100]
ridge = make_pipeline(RobustScaler(), RidgeCV(alphas=ridge_alphas, cv=kf))
# Support Vector Regressor
#svr = make_pipeline(RobustScaler(), SVR(C= 5, epsilon= 0.008, gamma=0.0003))
# Gradient Boosting Regressor
gbr = GradientBoostingRegressor(n_estimators=100,
learning_rate=0.075)
rf=RandomForestRegressor(n_estimators=10)
lightgbm1 = LGBMRegressor(objective='poisson',
metric ='rmse',
learning_rate = 0.075,
sub_row = 0.75,
bagging_freq = 1,
lambda_l2 = 0.1,
verbosity= 1,
n_estimators = 200,
num_leaves= 128,
min_data_in_leaf= 100)
lightgbm2 = LGBMRegressor(objective='tweedie',
metric ='rmse',
learning_rate = 0.075,
sub_row = 0.75,
bagging_freq = 1,
lambda_l2 = 0.1,
verbosity= 1,
n_estimators = 200,
num_leaves= 128,
min_data_in_leaf= 100)
xgboost = XGBRegressor(objective='count:poisson',
learning_rate=0.075,
n_estimators=100,
min_child_weight=50)
stackReg = StackingCVRegressor(regressors=(lightgbm1,lightgbm2),
meta_regressor=(xgboost),
use_features_in_secondary=True,
random_state=42)
# +
model_score = {}
score = cv_rmse(lightgbm1)
lgb_model1_full_data = lightgbm1.fit(X_train, y_train)
print("lightgbm1: {:.4f}".format(score.mean()))
model_score['lgb1'] = score.mean()
# -
score = cv_rmse(lightgbm2)
lgb_model2_full_data = lightgbm2.fit(X_train, y_train)
print("lightgbm2: {:.4f}".format(score.mean()))
model_score['lgb2'] = score.mean()
score = cv_rmse(xgboost)
xgboost_full_data = xgboost.fit(X_train, y_train)
print("xgboost: {:.4f}".format(score.mean()))
model_score['xgb'] = score.mean()
score = cv_rmse(ridge)
ridge_full_data = ridge.fit(X_train, y_train)
print("ridge: {:.4f}".format(score.mean()))
model_score['ridge'] = score.mean()
# +
# score = cv_rmse(svr)
# svr_full_data = svr.fit(X_train, y_train)
# print("svr: {:.4f}".format(score.mean()))
# model_score['svr'] = score.mean()
# -
score = cv_rmse(gbr)
gbr_full_data = gbr.fit(X_train, y_train)
print("gbr: {:.4f}".format(score.mean()))
model_score['gbr'] = score.mean()
score = cv_rmse(rf)
rf_full_data = rf.fit(X_train, y_train)
print("rf: {:.4f}".format(score.mean()))
model_score['rf'] = score.mean()
score = cv_rmse(stackReg)
stackReg_full_data = stackReg.fit(X_train, y_train)
print("stackReg: {:.4f}".format(score.mean()))
model_score['stackReg'] = score.mean()
def rmsle(y, y_pred):
    # note: despite the name, this computes plain RMSE (no log transform is applied)
    return np.sqrt(mean_squared_error(y, y_pred))
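A quick numeric sanity check of the metric above (the helper is restated here so the snippet is self-contained; note it is plain RMSE, since no log transform is applied):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(y, y_pred):
    return np.sqrt(mean_squared_error(y, y_pred))

# Symmetric errors of magnitude 1 on every point give an RMSE of exactly 1.
err = rmse(np.array([2.0, 4.0]), np.array([3.0, 3.0]))
```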
def blended_predictions(X_train,weight):
return ((weight[0] * ridge_full_data.predict(X_train)) + \
(weight[1] * rf_full_data.predict(X_train)) + \
(weight[2] * gbr_full_data.predict(X_train)) + \
(weight[3] * xgboost_full_data.predict(X_train)) + \
(weight[4] * lgb_model1_full_data.predict(X_train)) + \
(weight[5] * stackReg_full_data.predict(np.array(X_train))))
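The blend is a convex combination: the six weights passed below sum to 1, so blending identical predictions leaves them unchanged. A minimal numeric check with made-up constant predictions:

```python
import numpy as np

weights = [0.15, 0.2, 0.18, 0.1, 0.27, 0.1]  # the weights used in the call below
preds = [np.full(3, 5.0) for _ in weights]   # six hypothetical models all predicting 5.0

blend = sum(w * p for w, p in zip(weights, preds))
```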
# Blended model predictions
blended_score = rmsle(y_train, blended_predictions(X_train,[0.15,0.2,0.18,0.1,0.27,0.1]))
print("blended score: {:.4f}".format(blended_score))
model_score['blended_model'] = blended_score
model_score
#my_model = stacked_ensemble(X_train,y_train)
import warnings
warnings.filterwarnings("default")
# %%time
# blend= blended_predictions(X_train,[0.15,0.2,0.1,0.18,0.1,0.27])
# # Prediction stage
# (updated vs original)
def create_lag_features_for_test(dt, day):
    # create lag features for a single day only (faster)
lags = [7, 28]
lag_cols = [f"lag_{lag}" for lag in lags]
for lag, lag_col in zip(lags, lag_cols):
dt.loc[dt.date == day, lag_col] = \
dt.loc[dt.date ==day-timedelta(days=lag), 'sales'].values # !!! main
windows = [7, 28]
for window in windows:
for lag in lags:
df_window = dt[(dt.date <= day-timedelta(days=lag)) & (dt.date > day-timedelta(days=lag+window))]
df_window_grouped = df_window.groupby("id").agg({'sales':'mean'}).reindex(dt.loc[dt.date==day,'id'])
dt.loc[dt.date == day,f"rmean_{lag}_{window}"] = \
df_window_grouped.sales.values
def create_date_features_for_test(dt):
# copy of the code from `create_dt()` above
date_features = {
"wday": "weekday",
"week": "weekofyear",
"month": "month",
"quarter": "quarter",
"year": "year",
"mday": "day",
}
for date_feat_name, date_feat_func in date_features.items():
if date_feat_name in dt.columns:
dt[date_feat_name] = dt[date_feat_name].astype("int16")
else:
dt[date_feat_name] = getattr(
dt["date"].dt, date_feat_func).astype("int16")
# +
# %%time
alphas = [1.028, 1.023, 1.018]
weights = [1/len(alphas)]*len(alphas) # equal weights
te0 = create_dt(False) # create master copy of `te`
create_date_features_for_test (te0)
for icount, (alpha, weight) in enumerate(zip(alphas, weights)):
te = te0.copy() # just copy
# te1 = te0.copy()
cols = [f"F{i}" for i in range(1, 29)]
for tdelta in range(0, 28):
day = fday + timedelta(days=tdelta)
print(tdelta, day.date())
tst = te[(te.date >= day - timedelta(days=max_lags))
& (te.date <= day)].copy()
# tst1 = te1[(te1.date >= day - timedelta(days=max_lags))
# & (te1.date <= day)].copy()
# create_fea(tst) # correct, but takes much time
create_lag_features_for_test(tst, day) # faster
tst = tst.loc[tst.date == day, train_cols]
te.loc[te.date == day, "sales"] = \
alpha * blended_predictions(tst,[0.15,0.2,0.18,0.1,0.27,0.1]) # magic multiplier by kyakovlev
# create_lag_features_for_test(tst1, day) # faster
# tst1 = tst1.loc[tst1.date == day, train_cols]
# te1.loc[te1.date == day, "sales"] = \
# alpha * m_lgb1.predict(tst1) # magic multiplier by kyakovlev
te_sub = te.loc[te.date >= fday, ["id", "sales"]].copy()
# te_sub1 = te1.loc[te1.date >= fday, ["id", "sales"]].copy()
te_sub["F"] = [f"F{rank}" for rank in te_sub.groupby("id")[
"id"].cumcount()+1]
# te_sub1["F"] = [f"F{rank}" for rank in te_sub1.groupby("id")[
# "id"].cumcount()+1]
te_sub = te_sub.set_index(["id", "F"]).unstack()[
"sales"][cols].reset_index()
# te_sub1 = te_sub1.set_index(["id", "F"]).unstack()[
# "sales"][cols].reset_index()
te_sub.fillna(0., inplace=True)
# te_sub1.fillna(0., inplace=True)
te_sub.sort_values("id", inplace=True)
# te_sub1.sort_values("id", inplace=True)
te_sub.reset_index(drop=True, inplace=True)
# te_sub1.reset_index(drop=True, inplace=True)
te_sub.to_csv(f"submission_{icount}.csv", index=False)
# te_sub1.to_csv(f"submission1_{icount}.csv", index=False)
if icount == 0:
sub = te_sub
sub[cols] *= weight
# sub1 = te_sub1
# sub1[cols] *= weight
else:
sub[cols] += te_sub[cols]*weight
# sub1[cols] += te_sub1[cols]*weight
print(icount, alpha, weight)
# -
sub.head(10)
sub.id.nunique(), sub["id"].str.contains("validation$").sum()
# sub1.id.nunique(), sub1["id"].str.contains("validation$").sum()
sub.shape
# sub1.shape
sub2 = sub.copy()
sub2["id"] = sub2["id"].str.replace("validation$", "evaluation")
sub = pd.concat([sub, sub2], axis=0, sort=False)
sub.to_csv("submissionp.csv",index=False)
# +
# sub3 = sub1.copy()
# sub3["id"] = sub3["id"].str.replace("validation$", "evaluation")
# sub1 = pd.concat([sub1, sub3], axis=0, sort=False)
# sub.to_csv("submissiont.csv",index=False)
# +
# poisson = sub.sort_values(by = 'id').reset_index(drop = True)
# tweedie = sub1.sort_values(by = 'id').reset_index(drop = True)
# sub5 = poisson.copy()
# for i in sub5.columns :
# if i != 'id' :
# sub5[i] = 0.5*poisson[i] + 0.5*tweedie[i]
# sub5.to_csv('submissionavg.csv', index = False)
| Ensemble.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
import sys
sys.path.append('../wavelets')
from wavelets import *
# +
#Let's simulate some data, starting with a sine wave with a frequency of 0.1, amplitude of 1, observed over 100 days
frequency1 = 0.1
amp1 = 1.0
baseline = 100.0
t_mod = np.linspace(0,baseline,10000)
mod = amp1*np.sin(2*np.pi*frequency1*t_mod)
plt.plot(t_mod,mod)
# +
#Let's randomly sample that sine wave with different numbers of observations, adding noise at 5% of the amplitude
N_obs = [10000,1000,100]
for N in N_obs:
t_obs = np.random.uniform(high=baseline,size=N)
t_obs.sort()
f_obs = amp1*np.sin(2*np.pi*frequency1*t_obs)
f_obs += 0.05*amp1*np.random.randn(N)
plt.scatter(t_obs,f_obs,s=10,alpha=0.5,label=f'$N={N}$')
np.savetxt(f'one_sin_0p05_noise_{N}_obs.txt',np.array([t_obs,f_obs]).T)
plt.legend(loc='lower left')
# +
#Let's also add a second frequency component with half the amplitude, a frequency of 1/pi, and a phase difference of 0.2 radians.
amp2 = 0.5*amp1
frequency2 = 1/np.pi
delta_phase = 0.2
N_obs = [10000,1000,100]
for N in N_obs:
t_obs = np.random.uniform(high=baseline,size=N)
t_obs.sort()
f_obs = amp1*np.sin(2*np.pi*frequency1*t_obs)
f_obs += amp2*np.sin(2*np.pi*frequency2*t_obs - delta_phase)
f_obs += 0.05*amp1*np.random.randn(N)
plt.scatter(t_obs,f_obs,s=10,alpha=0.5,label=f'$N={N}$')
np.savetxt(f'two_sin_0p05_noise_{N}_obs.txt',np.array([t_obs,f_obs]).T)
plt.legend(loc='lower left')
# -
| data/benchmarking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
class AdjacencyGraph(object):
def __init__(self, root):
self.graph = {}
self.graph[root] = []
self.visited = {}
self.order = []
self.cycleStack = {}
self.isCycle = False
self.topoStack = []
def insert(self, source, destination):
if source not in self.graph:
self.graph[source] = []
if destination is not None:
if destination not in self.graph:
self.graph[destination] = []
self.graph[source].append(destination)
def bfs(self, start):
queue = [start]
visited = {}
order = []
while queue:
current = queue.pop(0)
order.append(current)
visited[current] = True
if self.graph[current] is not None:
for node in self.graph[current]:
if node not in visited:
visited[node] = True
queue.append(node)
return order
def dfs(self, start):
if start in self.cycleStack:
self.isCycle = True
self.cycleStack[start] = True
if start not in self.visited:
self.order.append(start)
self.visited[start] = True
if self.graph[start]:
for node in self.graph[start]:
if node not in self.visited:
self.dfs(node)
elif node in self.cycleStack:
self.isCycle = True
return (self.order,self.isCycle)
def topoSort(self):
for node in self.graph:
if node not in self.visited:
self.topoUtility(node)
return self.topoStack
def topoUtility(self,node):
self.visited[node] = True
for neighbour in self.graph[node]:
if neighbour not in self.visited:
self.topoUtility(neighbour)
self.topoStack.insert(0, node)
class MatrixGraph(object):
def __init__(self, size):
self.size = size
self.graph = []
self.visited = {}
self.order = []
for i in range(self.size):
self.graph.append([0]*self.size)
def insert(self, source, destination):
if source < self.size and destination < self.size:
self.graph[source][destination] = 1
else:
return "Enter source and destination values between 0 and", self.size
def bfs(self, start):
queue = [start]
visited = {}
order = []
while queue:
current = queue.pop(0)
order.append(current)
visited[current] = True
for idx, node in enumerate(self.graph[current]):
if node == 1 and idx not in visited:
visited[idx] = True
queue.append(idx)
return order
def dfs(self, start):
if start not in self.visited:
self.order.append(start)
self.visited[start] = True
for idx, node in enumerate(self.graph[start]):
if node == 1 and idx not in self.visited:
self.dfs(idx)
return self.order
# +
def main():
g = AdjacencyGraph(0)
m = MatrixGraph(13)
# print m.graph
# m.insert(0,1)
# m.insert(0,4)
# m.insert(1,0)
# m.insert(1,2)
# m.insert(1,5)
# m.insert(2,1)
# m.insert(2,3)
# m.insert(2,6)
# m.insert(3,2)
# m.insert(3,7)
# m.insert(4,0)
# m.insert(4,8)
# m.insert(5,1)
# m.insert(5,6)
# m.insert(5,10)
# m.insert(6,2)
# m.insert(6,5)
# m.insert(6,11)
# m.insert(7,3)
# m.insert(7,12)
# m.insert(8,4)
# m.insert(8,9)
# m.insert(9,8)
# m.insert(9,10)
# m.insert(10,5)
# m.insert(10,9)
# m.insert(10,11)
# m.insert(11,6)
# m.insert(11,10)
# m.insert(11,12)
# m.insert(12,7)
# m.insert(12,11)
# g.insert(0,1)
# g.insert(0,2)
# g.insert(1,2)
# g.insert(1,3)
# g.insert(2,3)
# g.insert(2,5)
# g.insert(3,4)
# g.insert(1,2)
# g.insert(2,3)
# g.insert(3,1)
# g.insert(0, 1)
# g.insert(0, 2)
# g.insert(1, 2)
# g.insert(2, 0)
# g.insert(2, 3)
# g.insert(3, 3)
# g.insert(0, 1)
# g.insert(0, 4)
# g.insert(1, 2)
# g.insert(2, 3)
    g.insert(5, 2)
    g.insert(5, 0)
    g.insert(4, 0)
    g.insert(4, 1)
    g.insert(2, 3)
    g.insert(3, 1)
# print g.graph
# print g.bfs(5)
# print g.dfs(5)
print g.topoSort()
# print m.bfs(0)
# print m.dfs(0)
# -
if __name__ == "__main__":
main()
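For reference, the depth-first topological sort above can be restated in Python 3 as a self-contained sketch (the notebook itself targets a Python 2 kernel), exercised on the same DAG that `main()` builds:

```python
# Python 3 restatement of AdjacencyGraph.topoSort, exercised on the DAG from main().
def topo_sort(graph):
    visited, order = set(), []
    def visit(node):
        visited.add(node)
        for nb in graph[node]:
            if nb not in visited:
                visit(nb)
        order.insert(0, node)  # prepend after all descendants are placed
    for node in list(graph):
        if node not in visited:
            visit(node)
    return order

edges = [(5, 2), (5, 0), (4, 0), (4, 1), (2, 3), (3, 1)]
graph = {}
for s, d in edges:
    graph.setdefault(s, []).append(d)
    graph.setdefault(d, [])

order = topo_sort(graph)
```

Any valid result places every edge's source before its destination.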
| Haseeb DS/Graph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:env]
# language: python
# name: conda-env-env-py
# ---
train = get_train()
test = get_test()
train_group = train["Скважина"]
y_train = train["Нефть, т"]
y_train = y_train.fillna(0).apply(get_float)
y_train = y_train[y_train!=0]
# +
from scipy.stats import boxcox, normaltest
trf, lamb = boxcox(y_train)
normaltest(trf)
from scipy.special import inv_boxcox
plt.hist(inv_boxcox(trf, lamb), bins=100)
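The Box-Cox transform used above is invertible; a small self-contained round-trip check on made-up positive data:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

data = np.array([1.0, 2.0, 5.0, 10.0])  # hypothetical positive sample
tf, lam = boxcox(data)                   # fit lambda and transform
back = inv_boxcox(tf, lam)               # invert the transform
```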
import numpy as np
import scipy as sp
import scipy.stats
def mean_confidence_interval(data, confidence=0.95):
a = 1.0*np.array(data)
n = len(a)
m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)  # use the public ppf, not the private _ppf
return m, m-h, m+h
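Usage of the helper on a known sample (restated here with the public `scipy.stats.t.ppf` so the snippet runs on its own); the interval is symmetric around the sample mean:

```python
import numpy as np
import scipy.stats

def mean_ci(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return m, m - h, m + h

m, low, high = mean_ci([1.0, 2.0, 3.0, 4.0, 5.0])
```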
y_train_clean.std()*2+y_train_clean.mean()
mean_confidence_interval(y_train_clean)
from scipy.stats import normaltest
normaltest(y_train_clean.apply(np.log1p))
y_train.apply(np.log1p).hist(bins =100, figsize=(20,20))
train_cont, y_train_clean, test_cont, train_group = get_clean_data(train, test)
y_train.shape
y_train.hist(bins =100, figsize=(20,20))
y_train[y_train>2000]
dates_cont, dates_cont, dates_cat, _ = dates_transform_pipeline(train, test, train_group)
y_train_clean[train[train["Скважина"]== "6b68ae1be719488f259414bcb925ce37"].index]
y_train.hist(bins =100, figsize=(20,20))
# the distribution is skewed toward zero; in reality the values are about twice as large
y_train.unique()
len(y_train[y_train==0])
len(dates_cat)
from matplotlib import pyplot as plt
plt.figure(figsize=(20, 30), dpi=80)
plt.scatter(dates_cat.loc[y_train.index],y_train)
low_orders = dates_cat[dates_cat<6]
plt.scatter(dates_cat.loc[low_orders.index],y_train.loc[low_orders.index])
train = get_train()
test = get_test()
train_group = train["Скважина"]
y_train = train["Нефть, т"]
dates_cont, dates_cont, dates_cat, _ = dates_transform_pipeline(train, test, train_group)
y_train = y_train.fillna(0).apply(get_float)
low_orders = dates_cat[dates_cat<6]
plt.scatter(dates_cat.loc[y_train.index],y_train)
y_train = y_train[low_orders.index]
y_train = y_train[y_train!=0]
y_train.mean()
y_train.hist(bins =100, figsize=(20,20))
plt.figure(figsize=(20, 30), dpi=80)
plt.boxplot(y_train)
means = []
for i in range(6):
means.append(y_train[(low_orders==i)].mean())
plt.plot(means)
y_train.mean()
np.percentile(y_train, 97)
from scipy.stats import normaltest
normaltest(y_train)
for i in range(6):
print(normaltest(y_train[(low_orders==i)]))
y_train[(low_orders==i)].hist()
["Закачка, м3","ГП(ИДН) Прирост дефита нефти","Вязкость нефти в пластовых условия","Закачка, м3","ГП(ИДН) Дебит жидкости скорр-ый",]
coef = pd.concat([train_cont, y_train_clean], axis=1).corr()
coef[coef>0.8]
coef["Нефть, т"][(coef["Нефть, т"]>=0.1)|(coef["Нефть, т"]<=-0.1)].index
coef["Нефть, т"]
len(train_cont.columns)
y_train_clean.shape
# !pip install seaborn
import seaborn as sns
sns.set_style("whitegrid")
cats = ["Тип испытания",
"Тип скважины",
"Неустановившийся режим",
"ГТМ",
"Метод",
"Характер работы",
"Состояние",
"Пласт МЭР",
"Способ эксплуатации",
"Тип насоса",
"Состояние на конец месяца",
"Номер бригады",
"Фонтан через насос",
"Нерентабельная",
"Назначение по проекту",
"Группа фонда",
"Тип дополнительного оборудования",
"Марка ПЭД",
"Тип ГЗУ",
"ДНС",
"КНС",
#useless potentially
"Диаметр плунжера",
"Природный газ, м3",
"Конденсат, т",
"Длина хода плунжера ШГН",
"Коэффициент подачи насоса",
"Дебит конденсата",
"Вязкость воды в пластовых условиях",
"Газ из газовой шапки, м3",
"Число качаний ШГН",
"Коэффициент сепарации",
"SKIN",
"КН закрепленный",
# radically different
"Время в работе",
"Радиус контура питания",
"Время в накоплении",
"Время накопления",
"Агент закачки",
# text converted
"Мероприятия",
"Проппант",
"Куст",
"Состояние на конец месяца",
"Причина простоя.1",
]
for c in cats:
data = pd.concat([train.iloc[y_train.index][c].fillna("NaN"), y_train], axis=1)
    ax = sns.catplot(x=c, y="Нефть, т", data=data, palette="Set3", kind="box", height=8)  # 'size' was renamed to 'height' in seaborn 0.9
#compare distributions, test, train, categorical, continious
#compare first day distribution of test and train
#prepare SVD solution
cont_columns = [ 'Высота перфорации',
'объемный коэффициент',
'Нефтенасыщенная толщина',
'Плотность нефти',
'ТП - SKIN',
'Динамическая высота',
'Вязкость жидкости в пласт. условиях',
'Глубина текущего забоя',
'Вязкость нефти в пластовых условиях',
'Ноб',
'Газовый фактор',
'Плотность воды',
'Давление на приеме',
'Замерное забойное давление',
'Частота',
'Дебит попутного газа, м3/сут',
'Добыча растворенного газа, м3',
'Конц',
'Забойное давление',
'Плотность раствора глушения',
'Диаметр штуцера',
'V гель',
'Попутный газ, м3',
'Глубина спуска.1',
'Наклон',
'ТП - JD опт.',
'КН закрепленный',
'Удельный коэффициент',
'Pпл',
'Диаметр дополнительного оборудования',
'Коэффициент продуктивности',
'Гель',
'Давление пластовое',
'k',
'Давление наcыщения',
'ГП(ИДН) Дебит жидкости',
'Нэф',
'V под',
'Температура пласта',
'Глубина спуска доп. оборудования',
'Время работы, ч',
'Характеристический дебит жидкости',
'КВЧ',
'Удлинение',
'Время до псевдоуст-ся режима',
'Дата пуска',
'Дата ГРП',
'Дата останова']
squared = []
for c1 in cont_columns:
for c2 in cont_columns:
squared.append(train_cont[c1].multiply(train_cont[c2]))
squared = pd.concat(squared, axis = 1)
#analyze squared correlation
coef = pd.concat([squared, y_train_clean], axis=1).corr()
coef["Нефть, т"][coef["Нефть, т"]>0.4]
train["Дебит попутного газа, м3/сут"]
def sqrt(x):
if np.all(x>0):
return np.sqrt(x)
return 0
def reverse(x):
if np.all(x!=0):
return 1/x
return 0
def log(x):
if np.all(x>0):
return np.log(x)
return 0
transformations = {"log":log,
"exp":np.exp,
"sqrt":sqrt,
"sq":lambda x: x**2,
"cube":lambda x:x**3,
"reverse":reverse,
"orig":lambda x:x}
def get_max_correlation(x,y):
corr_coefs = []
max_corr = 0
max_corr_fn = ""
for n,tf in transformations.items():
x_tf = x.apply(tf)
corr = y.corr(x_tf)
if corr>max_corr:
max_corr = corr
max_corr_fn = n
corr_coefs.append((n, corr))
return max_corr, max_corr_fn
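A condensed, self-contained version of the search over transformations, run on synthetic data where the squared transform is the exact relationship and therefore wins:

```python
import numpy as np
import pandas as pd

# Subset of the transformations used above.
transformations = {"log": np.log, "sq": lambda v: v ** 2, "orig": lambda v: v}

def get_max_correlation(x, y):
    max_corr, max_corr_fn = 0.0, ""
    for name, tf in transformations.items():
        corr = y.corr(x.apply(tf))
        if corr > max_corr:
            max_corr, max_corr_fn = corr, name
    return max_corr, max_corr_fn

x = pd.Series(np.linspace(1, 10, 50))
y = x ** 2  # y is exactly the square of x, so the "sq" transform correlates perfectly
best, best_fn = get_max_correlation(x, y)
```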
get_max_correlation(train_cont["Плотность воды"], y_train_clean)
for c in train_cont.columns:
print(c)
print(get_max_correlation(train_cont[c], y_train_clean))
test["Добыча растворенного газа, м3"]
| src/eda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="D3DSE_vLVJxg" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640322524583, "user_tz": -540, "elapsed": 141563, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="f9c635ca-5008-4e57-99d4-013c0d6e19f8"
# !pip uninstall lightgbm -y
# !pip install lightgbm --install-option=--gpu
# + id="fBvZMflSU-zr" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640322553183, "user_tz": -540, "elapsed": 28607, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="d5211bea-3cb7-4d0e-ce26-cffa679d9ddc"
from google.colab import drive
drive.mount('/gdrive')
# + id="rw0-hDTFU-Qj" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640322554228, "user_tz": -540, "elapsed": 1058, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="47731954-fa58-4277-ff44-724fa3322f71"
# cd '../gdrive/MyDrive/SSAC/3조'
# + id="ad7c8f1a"
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import os
from datetime import datetime, date, time
import lightgbm as lgb
from sklearn import preprocessing
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
pd.options.display.max_info_columns =200
pd.options.display.max_columns = 200
pd.options.display.max_info_rows =999
pd.options.display.max_rows = 999
import warnings
warnings.filterwarnings('ignore')
# + id="2851a97b"
df1 = pd.read_csv("data/total_data.csv")
# + id="178246e3" outputId="43d94ed5-c4ff-4b75-d566-484f5e0dc50d" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640322607475, "user_tz": -540, "elapsed": 3, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}}
# Click-through rate
grouped_label = df1.groupby('label').size()
average_ctr = float(grouped_label[1]/grouped_label.sum())
average_ctr
# + [markdown] id="d4a2e8ea"
# # Evaluation metric function
# + id="25db720e"
# Evaluation metric function (relative information gain, RIG)
def get_rig(train_y, test_y, pred):
avg_ctr = average_ctr
prior = log_loss(train_y, [avg_ctr]*len(train_y))
classifier = log_loss(test_y, pred)
rig = (prior - classifier) / prior
return rig
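The RIG (relative information gain) computation can be traced on a tiny made-up example: the prior is the log loss of always predicting the average CTR, and RIG is the classifier's relative improvement over that prior (1.0 would mean a perfect classifier):

```python
from sklearn.metrics import log_loss

# Hypothetical tiny example: 4 training labels, 4 test labels, 4 predictions.
avg_ctr = 0.25
train_y = [0, 0, 0, 1]
test_y = [0, 1, 0, 0]
pred = [0.1, 0.8, 0.2, 0.1]

prior = log_loss(train_y, [avg_ctr] * len(train_y))  # baseline: always predict the average CTR
classifier = log_loss(test_y, pred)                  # the model's log loss
rig = (prior - classifier) / prior                   # relative improvement over the prior
```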
# + [markdown] id="903311b0"
# # Preprocessing function for training
# + id="97387528"
# Preprocessing function
def process_missing_values(df):
df_pre = df.copy()
    for categorical_col in categorical:
        # fill missing values before casting to str; after astype(str), NaN would
        # already have become the literal string 'nan' and fillna would do nothing
        df_pre[categorical_col] = df_pre[categorical_col].fillna('0')
        df_pre[categorical_col] = df_pre[categorical_col].astype(str)
        df_pre[categorical_col] = preprocessing.LabelEncoder().fit_transform(df_pre[categorical_col])
for continuous_col in continuous:
df_pre[continuous_col] = df_pre[continuous_col].fillna(0)
return df_pre
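The fill-then-encode pattern on a single toy column (`LabelEncoder` assigns codes in sorted order of the string values, so the fill value `'0'` sorts first):

```python
import pandas as pd
from sklearn import preprocessing

# Toy categorical column with a missing value, processed the same way as above:
# fill first, then cast to str, then label-encode.
s = pd.Series(["a", None, "b"]).fillna("0").astype(str)
codes = preprocessing.LabelEncoder().fit_transform(s)
```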
# + [markdown] id="10fa5673"
# # Define the LGB model training functions
# + id="f330a7e6"
# Train_test_split
def split_dataset(df, features):
train_test_df = df[['label'] + features]
train, test = train_test_split(train_test_df, test_size = 0.2, random_state=47)
X_train = train[features]
y_train = train['label']
X_test = test[features]
y_test = test['label']
return X_train, y_train, X_test, y_test
# + id="12358533"
# Training
def train_lgb(X_train, y_train):
model = lgb.LGBMClassifier(n_estimators=50,
random_state=47,
learning_rate=0.1,
num_leaves=127,
max_depth=15,
zero_as_missing=True,
n_jobs=os.cpu_count(),
objective='binary')
print('start training')
model.fit(X_train, y_train)
return model
# Prediction
def evaluate_lgb(model, X_test):
print('predicting')
pred = model.predict_proba(X_test)[:,1]
print(f'auc : {roc_auc_score(y_test, pred)}, rig: {get_rig(y_train, y_test, pred)}')
# + [markdown] id="fee0af72"
# # Adjust training features
# + id="0eb55eb7"
categorical = [
'viewer_gender',
'content_used',
'content_cat_1',
# 'content_cat_2',
# 'content_cat_3',
# 'content_b_pay',
#"content_status",
'content_delivery_fee']
continuous = [
'bid_price',
'content_price',
'content_emergency_count',
'content_comment_count',
'content_views',
'content_likes',
'adv_follower_count',
'adv_grade',
'adv_item_count',
'adv_views',
'adv_review_count',
'adv_comment_count',
'adv_pay_count',
'adv_parcel_post_count',
'adv_transfer_count',
# 'adv_chat_count',
'viewer_age',
# 'viewer_age_ch',
'viewer_following_count',
'viewer_pay_count',
#"viewer_trans_pay_count",
'viewer_transfer_count']
#'viewer_chat_count']
features = categorical + continuous
df = process_missing_values(df1)
# + id="3a1e7008"
# only the 'features' list needs to be modified
X_train, y_train, X_test, y_test = split_dataset(df, features)
# + [markdown] id="YJ6eCcX6vHR0"
# # Specify parameters
# + id="VRJT0e2zrSqp"
params = {"boosting_type": ["dart"],
"n_estimators": [200],
"learning_rate": [0.1, 0.01],
"random_state": [47],
"num_leaves": [31, 63, 127, 255],
"max_depth": [-1, 10, 15, 20],
"min_data_in_leaf": [200],
"objective": ["binary"],
"device": ["gpu"]}
# + [markdown] id="NVH14cMtvJrc"
# # GridSearch and checking the results
# + id="MU2NOrYM2k1m"
from sklearn.model_selection import GridSearchCV
# + id="kLGw5bj21_St"
grid = GridSearchCV(lgb.LGBMClassifier(), params, cv=5)
grid.fit(X_train, y_train)
best = grid.best_estimator_
y_pred = best.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="mgMjZGid2j1-" executionInfo={"status": "ok", "timestamp": 1640334132110, "user_tz": -540, "elapsed": 389, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="6461d84d-bc7d-446c-ccef-3bdfac75506b"
best
# + colab={"base_uri": "https://localhost:8080/"} id="4zm8yIttiMF_" executionInfo={"status": "ok", "timestamp": 1640334203242, "user_tz": -540, "elapsed": 5587, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="1b984b4f-4537-47be-e142-55aaa05ced71"
evaluate_lgb(best, X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="O-igk204i0JG" executionInfo={"status": "ok", "timestamp": 1640334970821, "user_tz": -540, "elapsed": 384, "user": {"displayName": "\uae40\ub3d9\uc601", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "17835880286856113214"}} outputId="01b8191f-ee51-49ee-aaaf-064624b6ed04"
grid.cv_results_
# + id="cb21cnE0lAgc"
# Show the best model (parameters) found
best
# Evaluate the best result
evaluate_lgb(best, X_test)
# Save the results so far (replace '경로' with the output path)
pd.DataFrame(grid.cv_results_).to_csv('경로', index=False)
# + id="NlBWscLzlAi9"
| notebooks/CTR Prediction/LightGBM/GridSearch/LGB_GridSearch_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### What is a module?
#
# > 1. A module is a library file that groups related functions and variables<br>
#
# > 2. How to use a module <br>
# import filename <br>
# from filename import function_name<br>
#
# ### Import by file name only, then call as filename.function_name
# +
import fibo
fibo.fib(100)
# -
# ### Import a specific function with "from file import function"
# +
from fibo import fib
fib(10)
# -
from fibo import *
fib(20)
# ### Create a standalone Python file with the %%writefile magic - module development (creates a .py file)
# +
# %%writefile mymath.py
mypi = 3.14
def add(a,b):
return a+b
def area(r):
return r*r*mypi
# +
import mymath
print(mymath.mypi)
print(mymath.add(3, 4))
# -
from mymath import *
print(mypi)
print(add(3,4))
| Python/basic/step06Module.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve as skcalibration_curve
def calibration_curve(y_true, y_prob, normalize=False, n_bins=5):
if normalize:
y_prob = (y_prob - y_prob.min()) / (y_prob.max() - y_prob.min())
bins = np.linspace(0, 1 + 1e-8, n_bins + 1)
binids = np.digitize(y_prob, bins) - 1
bin_sums = np.bincount(binids, weights=y_prob, minlength=len(bins))
bin_true = np.bincount(binids, weights=y_true, minlength=len(bins))
bin_total = np.bincount(binids, minlength=len(bins))
nonzero = bin_total != 0
prob_true = (bin_true[nonzero] / bin_total[nonzero])
prob_pred = (bin_sums[nonzero] / bin_total[nonzero])
return prob_true, prob_pred
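The binning step above relies on `np.digitize`; the `1e-8` padding on the upper edge keeps probabilities equal to 1.0 inside the last bin rather than spilling into an extra one. A small standalone check of that behavior:

```python
import numpy as np

# np.digitize assigns each probability to a bin index in [0, n_bins - 1];
# the 1e-8 padding means y_prob == 1.0 still lands in the last bin.
n_bins = 5
bins = np.linspace(0, 1 + 1e-8, n_bins + 1)
binids = np.digitize(np.array([0.0, 0.19, 0.2, 0.99, 1.0]), bins) - 1
```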
X, y = load_iris(return_X_y=True)
X, y = X[y != 2], y[y != 2]
clf = LogisticRegression(max_iter=10000).fit(X, y)
y_prob = clf.predict_proba(X)[:, 1]
prob_true1, prob_pred1 = calibration_curve(y, y_prob)
prob_true2, prob_pred2 = skcalibration_curve(y, y_prob)
X, y = load_iris(return_X_y=True)
X, y = X[y != 2], y[y != 2]
clf = LinearSVC(max_iter=10000).fit(X, y)
y_prob = clf.decision_function(X)
prob_true1, prob_pred1 = calibration_curve(y, y_prob, normalize=True)
prob_true2, prob_pred2 = skcalibration_curve(y, y_prob, normalize=True)
| calibration/calibration_curve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Prerequisites: install pandas!
# # Input
# +
import numpy
selling_prices = list(numpy.random.randint(1, 1100, size=20000000))
costs_per_unit = list(numpy.random.randint(1, 1000, size=20000000))
units_sold_counts = list(numpy.random.randint(1, 2000, size=20000000))
# -
# # "Exploratory" computation
# +
import pandas
# This is completely dumb, but just for exaggeration purposes...
df = pandas.DataFrame()
df.loc[:, "selling_prices"] = selling_prices
df.loc[:, "costs_per_unit"] = costs_per_unit
df.loc[:, "units_sold_counts"] = units_sold_counts
df.head()
# +
profit_per_fruit = df.apply(
lambda x: (x["selling_prices"] - x["costs_per_unit"]) * x["units_sold_counts"],
axis=1,
)
total_profit = sum(profit_per_fruit)
print(total_profit)
# -
# # "Simplify"
# +
# Create dataframe in one go + broadcasting
df = pandas.DataFrame(
{
"selling_prices": selling_prices,
"costs_per_unit": costs_per_unit,
"units_sold_counts": units_sold_counts,
}
)
# Compute
profit_per_fruit = (df["selling_prices"] - df["costs_per_unit"]) * df["units_sold_counts"]
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# +
# Without pandas
profit_per_fruit = []
for price, cost, unit in zip(selling_prices, costs_per_unit, units_sold_counts):
profit = (price - cost) * unit
profit_per_fruit.append(profit)
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# +
# With numpy
price_array = numpy.array(selling_prices)
cost_array = numpy.array(costs_per_unit)
units_array = numpy.array(units_sold_counts)
profit_per_fruit = (price_array - cost_array) * units_array
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# -
# # Race!
# +
# %%timeit
# Dumbest way
df = pandas.DataFrame()
df.loc[:, "selling_prices"] = selling_prices
df.loc[:, "costs_per_unit"] = costs_per_unit
df.loc[:, "units_sold_counts"] = units_sold_counts
profit_per_fruit = df.apply(
lambda x: (x["selling_prices"] - x["costs_per_unit"]) * x["units_sold_counts"],
axis=1
)
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# +
# %%timeit
# Still using pandas (but slightly improved)
df = pandas.DataFrame({
"selling_prices": selling_prices,
"costs_per_unit": costs_per_unit,
"units_sold_counts": units_sold_counts,
})
profit_per_fruit = (df["selling_prices"] - df["costs_per_unit"]) * df["units_sold_counts"]
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# +
# %%timeit
# Without pandas
profit_per_fruit = []
for price, cost, units_sold in zip(selling_prices, costs_per_unit, units_sold_counts):
profit = (price - cost) * units_sold
profit_per_fruit.append(profit)
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# +
# %%timeit
# With numpy
price_array = numpy.array(selling_prices)
cost_array = numpy.array(costs_per_unit)
units_array = numpy.array(units_sold_counts)
profit_per_fruit = (price_array - cost_array) * units_array
assert sum(profit_per_fruit) == total_profit # Just to check for consistency
# -
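# Outside IPython, where `%%timeit` is unavailable, the same race can be run with
# `time.perf_counter`. A sketch with a smaller, hypothetical row count (so the pure-Python
# loop finishes quickly); the relative ordering matches the `%%timeit` cells above:

```python
import time
import numpy as np

n = 200_000  # much smaller than the notebook's 20M rows, chosen so the loop stays fast
prices = np.random.randint(1, 1100, size=n).astype(np.int64)
costs = np.random.randint(1, 1000, size=n).astype(np.int64)
units = np.random.randint(1, 2000, size=n).astype(np.int64)

# Convert to plain lists up front so only the loop itself is timed
p_list, c_list, u_list = prices.tolist(), costs.tolist(), units.tolist()

t0 = time.perf_counter()
loop_profit = [(p - c) * u for p, c, u in zip(p_list, c_list, u_list)]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_profit = (prices - costs) * units  # single vectorized expression
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.4f}s  numpy: {t_vec:.5f}s")
```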
| fictional_scenarios/fictional_sales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Print Formatting Practice (Numbers and Strings)
print("My number %s" %("hello"))
# +
s=13.4576
print('Floating point: %1.3f' %(s))
print('Floating point value appending zeros: %1.8f' %(s))
print('Floating point with filled in spaces: %25.4f' %(s))
# -
# # In the %x.yf format, x is the minimum field width and y the number of digits after the decimal point.
# ## Multiple formatting
# You can print multiple objects or values at the same time.
print('First: %s, Second: %1.1f and Third: %s' %(1,2.0,"three"))
# The values will follow order of appearance (same as in C's printf())
# # Conversion methods
# # %s and %r are two conversion methods that convert any Python object to a string.
#
# # %s uses str() and %r uses repr().
print('This is object to string using %s & %s' %(1234,"String here!"))
print('This is object to string using %r & %r' %(4321,"String here!"))
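# The numbers above print identically; the difference only shows for strings, where
# repr() keeps the quotes and escape characters. A quick illustration:

```python
greeting = "hello\tworld"
print('%s' % greeting)   # the tab is rendered as whitespace
print('%r' % greeting)   # quotes and the \t escape are kept
```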
# # String .format() method
a=2
b=3
c=a+b
print('The sum is: {res}'.format(res=c))
print('First: {x}, Second: {y} and Third: {z}'.format(x=1, y=2.0, z="three"))
# You can print the same value more than once.
print('First: {x}, Second: {y} and again First: {x}'.format(x=1, y=2))
# No need to worry about the order here.
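# A more modern alternative (Python 3.6+) is the f-string, which inlines expressions
# directly and accepts the same format specifiers as .format(). A short sketch:

```python
a, b = 2, 3
print(f'The sum is: {a + b}')                   # expressions go straight inside the braces
print(f'First: {1}, Second: {2.0} and again First: {1}')
print(f'Pi to 3 places: {3.14159:.3f}')         # same format specs as .format()
```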
| PrintFormatting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample notebook showing how to work with DSM2 Hydro h5 output
#
import pandas as pd
import pydsm.hydroh5
hydro=pydsm.hydroh5.HydroH5('../../tests/historical_v82.h5')
# ## Load channels and channel locations from h5
display(hydro.get_channels().head())
display(hydro.get_channels()[0][0])
display(hydro.get_channel_locations().head())
hydro.channel_number2index['441']
# ## Load reservoir and reservoir node connections from h5
display(hydro.get_reservoirs().head())
display(hydro.get_reservoirs()[0][0])
display(hydro.reservoir_node_connections.head())
display(hydro.reservoir_node_connections['res_name'][0])
qext_table=hydro.get_qext()
#display(qext_table)
#qext_table.iloc[:,:].str.strip()
qext_sac=qext_table[qext_table['name']=='sac']
qext_sac.iloc[0]['attach_obj_name']
print(hydro.get_data_tables())
channel_ids=[330,441,1,17]
data=hydro.h5.get('/hydro/data/channel flow')
type(data)
| docs/html/notebooks/sample_hydroh5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Forecasting models
import sys
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from qgis.core import *
from qgis.PyQt.QtGui import *
from qgis.PyQt.QtCore import *
from IPython.display import Image
QgsApplication.setPrefixPath(r'C:\\OSGeo4W64\\apps\\qgis', True)
qgs = QgsApplication([], True)
qgs.initQgis()
sys.path.append(r'C:\OSGeo4W64\apps\qgis\python\plugins')
project = QgsProject.instance()
# +
path = "C:\\OSGeo4W64\\bin\\SIG\\Projeto_Italy\\Mapa\\ITA_adm1.shp"
map_layer = QgsVectorLayer(path, 'Italy map', 'ogr')
if not map_layer.isValid():
print("Failed to load the layer!")
else:
project.addMapLayer(map_layer)
    print("Success")
# +
csv_path = "file:///C:/OSGeo4W64/bin/SIG/Projeto_Italy/Dataset/covid_italy.csv?delimiter=,"
csv_layer = QgsVectorLayer(csv_path, 'Data', 'delimitedtext')
if not csv_layer.isValid():
print('Layer failed to load!')
else:
project.addMapLayer(csv_layer)
    print("Success")
# +
joinName = 'name_region'
targetName = 'NAME_1'
joinObject = QgsVectorLayerJoinInfo()
joinObject.setJoinFieldName(joinName)
joinObject.setTargetFieldName(targetName)
joinObject.setJoinLayerId(csv_layer.id())
joinObject.setUsingMemoryCache(True)
joinObject.setJoinLayer(csv_layer)
flag = map_layer.addJoin(joinObject)
# -
## For data
import pandas as pd
import numpy as np
## For plotting
import matplotlib.pyplot as plt
## For parametric fitting
from scipy import optimize
# **Lombardia**
# Most affected region of Italy
# +
values_case = []
values_deaths = []
values_new = []
datas = []
lowerTotal = 0
i = 0
for feature in csv_layer.getFeatures():
if feature['name_region']=="Lombardia":
if feature['total_positive']>0:
values_case.append(feature['total_case'])
values_deaths.append(feature['deaths'])
values_new.append(feature['new_positive'])
datas.append(feature['data'])
# +
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(13,7))
y_pos = np.arange(len(datas))
ax[0].scatter(datas, values_case, color="indigo")
plt.xticks(y_pos, datas, rotation='vertical')
ax[0].set(title="Total cases")
ax[1].bar(datas, values_deaths, color="rebeccapurple")
ax[1].set(title="Deaths")
plt.savefig('../Imagens/Prevision_lombardia_totalcase_deaths.png', dpi=300, format='png')
plt.show()
# -
# **Number of cases**
# +
'''
Linear function: f(x) = a + b*x
'''
def f(x):
return 10 + 1500*x
y_linear = f(x=np.arange(len(values_case)))
'''
Exponential function: f(x) = a + b^x
'''
def f(x):
return 10 + 1.18**x
y_exponential = f(x=np.arange(len(values_case)))
'''
Logistic function: f(x) = a / (1 + e^(-b*(x-c)))
'''
def f(x):
return 90000 / (1 + np.exp(-0.5*(x-20)))
y_logistic = f(x=np.arange(len(values_case)))
# +
fig, ax = plt.subplots(figsize=(13,5))
y_pos = np.arange(len(datas))
ax.scatter(y_pos, values_case, alpha=0.5, color='black')
plt.xticks(y_pos, datas, rotation='vertical')
ax.plot(y_pos, y_linear, label="linear", color="darkgreen")
ax.plot(y_pos, y_exponential, label="exponential", color="seagreen")
ax.plot(y_pos, y_logistic, label="logistic", color="limegreen")
plt.ylabel('Number of cases')
plt.xlabel('Dates')
plt.title('Number of cases in Lombardia')
plt.savefig('../Imagens/Prevision_lombardia_numberofcases.png', dpi=300, format='png')
plt.show()
# -
# **Number of deaths**
# +
'''
Linear function: f(x) = a + b*x
'''
def f(x):
return 10 + 1500*x
y_linear = f(x=np.arange(len(values_deaths)))
'''
Exponential function: f(x) = a + b^x
'''
def f(x):
return 10 + 1.18**x
y_exponential = f(x=np.arange(len(values_deaths)))
'''
Logistic function: f(x) = a / (1 + e^(-b*(x-c)))
'''
def f(x):
return 90000 / (1 + np.exp(-0.5*(x-20)))
y_logistic = f(x=np.arange(len(values_deaths)))
# +
fig, ax = plt.subplots(figsize=(13,5))
y_pos = np.arange(len(datas))
ax.scatter(y_pos, values_deaths, alpha=0.5, color='black')
plt.xticks(y_pos, datas, rotation='vertical')
ax.plot(y_pos, y_linear, label="linear", color="peru")
ax.plot(y_pos, y_exponential, label="exponential", color="goldenrod")
ax.plot(y_pos, y_logistic, label="logistic", color="khaki")
plt.ylabel('Number of deaths')
plt.xlabel('Dates')
plt.title('Number of deaths in Lombardia')
plt.savefig('../Imagens/Prevision_lombardia_numberofdeaths.png', dpi=300, format='png')
plt.show()
# -
# **Parametric Fitting**
df=pd.read_csv("../Dataset/lombardia_last.csv")
df.index = df['data']
# +
date = df['data'].tolist()
xvalues=np.arange(len(date))
totalcases = df['total_case'].tolist()
print(date)
print(totalcases)
# +
'''
Logistic function: f(x) = capacity / (1 + e^-k*(x - midpoint) )
'''
def logistic_f(xdata, c, k, m):
ydata = c / (1 + np.exp(-k*(xdata-m)))
return ydata
## optimize from scipy
logistic_model, cov = optimize.curve_fit(logistic_f, xdata=xvalues, ydata=totalcases, maxfev=100000, p0=[np.max(totalcases), 1, 1])
## print the parameters
logistic_model
# -
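# optimize.curve_fit also returns the covariance matrix `cov`; the square roots of its
# diagonal give approximate one-sigma uncertainties for the capacity, growth rate and
# midpoint. A sketch on synthetic data (so it runs without the CSV); the initial guess
# near the data's midpoint is an assumption that helps convergence:

```python
import numpy as np
from scipy import optimize

def logistic_f(xdata, c, k, m):
    # Logistic curve: capacity / (1 + e^(-k*(x - midpoint)))
    return c / (1 + np.exp(-k * (xdata - m)))

# Synthetic epidemic-like curve standing in for the real case counts
rng = np.random.default_rng(0)
xvalues = np.arange(60)
totalcases = logistic_f(xvalues, 90000, 0.3, 30) + rng.normal(0, 500, size=60)

params, cov = optimize.curve_fit(logistic_f, xvalues, totalcases,
                                 p0=[totalcases.max(), 0.1, 30], maxfev=100000)
perr = np.sqrt(np.diag(cov))  # one-sigma error per fitted parameter
print("capacity = %.0f +/- %.0f" % (params[0], perr[0]))
```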
'''
Plot parametric fitting.
'''
def utils_plot_parametric(dtf, zoom=30, figsize=(15,5)):
## interval
dtf["residuals"] = dtf["ts"] - dtf["model"]
dtf["conf_int_low"] = dtf["forecast"] - 1.96*dtf["residuals"].std()
dtf["conf_int_up"] = dtf["forecast"] + 1.96*dtf["residuals"].std()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=figsize)
## entire series
dtf["ts"].plot(marker=".", linestyle='None', ax=ax[0], title="Parametric Fitting", color="black")
dtf["model"].plot(ax=ax[0], color="maroon")
dtf["forecast"].plot(ax=ax[0], grid=True, color="salmon")
ax[0].fill_between(x=dtf.index, y1=dtf['conf_int_low'], y2=dtf['conf_int_up'], color='b', alpha=0.3)
plt.show()
return dtf[["ts","model","residuals","conf_int_low","forecast","conf_int_up"]]
'''
Forecast unknown future.
:parameter
:param ts: pandas series
:param f: function
:param model: list of optim params
:param pred_ahead: number of observations to forecast (ex. pred_ahead=30)
:param freq: None or str - 'B' business day, 'D' daily, 'W' weekly, 'M' monthly, 'A' annual, 'Q' quarterly
:param zoom: for plotting
'''
def forecast_curve(ts, f, model, pred_ahead=None, zoom=30, figsize=(15,5)):
## fit
X = np.arange(len(ts))
fitted = f(X, model[0], model[1], model[2])
dtf = ts.to_frame(name="ts")
dtf["model"] = fitted
## index
start=0
index = pd.date_range(start=start,periods=pred_ahead)
index = index[1:]
## forecast
Xnew = np.arange(len(ts)+1, len(ts)+1+len(index))
preds = f(Xnew, model[0], model[1], model[2])
    dtf = pd.concat([dtf, pd.DataFrame(data=preds, index=index, columns=["forecast"])])  # DataFrame.append was removed in pandas 2.0
## plot
utils_plot_parametric(dtf, zoom=zoom)
return dtf
preds = forecast_curve(df["total_case"], logistic_f, logistic_model, pred_ahead=30, zoom=7)
# # Molise
# **Least affected region of Italy**
# +
values_case = []
values_deaths = []
values_new = []
datas = []
lowerTotal = 0
i = 0
for feature in csv_layer.getFeatures():
if feature['name_region']=="Molise":
if feature['total_positive']>0:
values_case.append(feature['total_case'])
values_deaths.append(feature['deaths'])
values_new.append(feature['new_positive'])
datas.append(feature['data'])
# +
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(13,7))
y_pos = np.arange(len(datas))
ax[0].scatter(datas, values_case, color="darkcyan")
plt.xticks(y_pos, datas, rotation='vertical')
ax[0].set(title="Total cases")
ax[1].bar(datas, values_deaths, color="c")
ax[1].set(title="Deaths")
plt.savefig('../Imagens/Prevision__Molise_totalcase_deaths.png', dpi=300, format='png')
plt.show()
# -
# **Number of cases**
# +
'''
Linear function: f(x) = a + b*x
'''
def f(x):
return 10 + 1500*x
y_linear = f(x=np.arange(len(values_case)))
'''
Exponential function: f(x) = a + b^x
'''
def f(x):
return 10 + 1.18**x
y_exponential = f(x=np.arange(len(values_case)))
'''
Logistic function: f(x) = a / (1 + e^(-b*(x-c)))
'''
def f(x):
return 90000 / (1 + np.exp(-0.5*(x-20)))
y_logistic = f(x=np.arange(len(values_case)))
# +
fig, ax = plt.subplots(figsize=(13,5))
y_pos = np.arange(len(datas))
ax.scatter(y_pos, values_case, alpha=0.5, color='black')
plt.xticks(y_pos, datas, rotation='vertical')
ax.plot(y_pos, y_linear, label="linear", color="red")
ax.plot(y_pos, y_exponential, label="exponential", color="crimson")
ax.plot(y_pos, y_logistic, label="logistic", color="firebrick")
plt.ylabel('Number of cases')
plt.xlabel('Dates')
plt.title('Number of cases in Molise')
plt.savefig('../Imagens/Prevision_molise_numberofcases.png', dpi=300, format='png')
plt.show()
# -
# **Number of deaths**
# +
'''
Linear function: f(x) = a + b*x
'''
def f(x):
return 10 + 1500*x
y_linear = f(x=np.arange(len(values_deaths)))
'''
Exponential function: f(x) = a + b^x
'''
def f(x):
return 10 + 1.18**x
y_exponential = f(x=np.arange(len(values_deaths)))
'''
Logistic function: f(x) = a / (1 + e^(-b*(x-c)))
'''
def f(x):
return 90000 / (1 + np.exp(-0.5*(x-20)))
y_logistic = f(x=np.arange(len(values_deaths)))
# +
fig, ax = plt.subplots(figsize=(13,5))
y_pos = np.arange(len(datas))
ax.scatter(y_pos, values_deaths, alpha=0.5, color='black')
plt.xticks(y_pos, datas, rotation='vertical')
ax.plot(y_pos, y_linear, label="linear", color="navy")
ax.plot(y_pos, y_exponential, label="exponential", color="royalblue")
ax.plot(y_pos, y_logistic, label="logistic", color="blue")
plt.ylabel('Number of deaths')
plt.xlabel('Dates')
plt.title('Number of deaths in Molise')
plt.savefig('../Imagens/Prevision_molise_numberofdeaths.png', dpi=300, format='png')
plt.show()
# -
df=pd.read_csv("../Dataset/molise_last.csv")
df.index = df['data']
# +
date = df['data'].tolist()
xvalues=np.arange(len(date))
totalcases = df['total_case'].tolist()
print(date)
print(totalcases)
# +
'''
Logistic function: f(x) = capacity / (1 + e^-k*(x - midpoint) )
'''
def logistic_f(xdata, c, k, m):
ydata = c / (1 + np.exp(-k*(xdata-m)))
return ydata
## optimize from scipy
logistic_model, cov = optimize.curve_fit(logistic_f, xdata=xvalues, ydata=totalcases, maxfev=100000, p0=[np.max(totalcases), 1, 1])
## print the parameters
logistic_model
# -
'''
Plot parametric fitting.
'''
def utils_plot_parametric(dtf, zoom=30, figsize=(15,5)):
## interval
dtf["residuals"] = dtf["ts"] - dtf["model"]
dtf["conf_int_low"] = dtf["forecast"] - 1.96*dtf["residuals"].std()
dtf["conf_int_up"] = dtf["forecast"] + 1.96*dtf["residuals"].std()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=figsize)
## entire series
dtf["ts"].plot(marker=".", linestyle='None', ax=ax[0], title="Parametric Fitting", color="black")
dtf["model"].plot(ax=ax[0], color="dodgerblue")
dtf["forecast"].plot(ax=ax[0], grid=True, color="fuchsia")
ax[0].fill_between(x=dtf.index, y1=dtf['conf_int_low'], y2=dtf['conf_int_up'], color='b', alpha=0.3)
plt.show()
return dtf[["ts","model","residuals","conf_int_low","forecast","conf_int_up"]]
'''
Forecast unknown future.
:parameter
:param ts: pandas series
:param f: function
:param model: list of optim params
:param pred_ahead: number of observations to forecast (ex. pred_ahead=30)
:param freq: None or str - 'B' business day, 'D' daily, 'W' weekly, 'M' monthly, 'A' annual, 'Q' quarterly
:param zoom: for plotting
'''
def forecast_curve(ts, f, model, pred_ahead=None, zoom=30, figsize=(15,5)):
## fit
X = np.arange(len(ts))
fitted = f(X, model[0], model[1], model[2])
dtf = ts.to_frame(name="ts")
dtf["model"] = fitted
## index
start=0
index = pd.date_range(start=start,periods=pred_ahead)
index = index[1:]
## forecast
Xnew = np.arange(len(ts)+1, len(ts)+1+len(index))
preds = f(Xnew, model[0], model[1], model[2])
    dtf = pd.concat([dtf, pd.DataFrame(data=preds, index=index, columns=["forecast"])])  # DataFrame.append was removed in pandas 2.0
## plot
utils_plot_parametric(dtf, zoom=zoom)
return dtf
preds = forecast_curve(df["total_case"], logistic_f, logistic_model, pred_ahead=30, zoom=7)
| Casos de estudo/Italy_Covid19/Projeto_Italy/Notebooks/Italy_prevision.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/abhianand7/awesome-machine-learning/blob/master/gradient_descent_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="g0FieADaJ0lT"
# # Gradient Descent Algorithm Implementation
#
# * Tutorial: https://towardsai.net/p/data-science/gradient-descent-algorithm-for-machine-learning-python-tutorial-ml-9ded189ec556
# * Github: https://github.com/towardsai/tutorials/tree/master/gradient_descent_tutorial
# + colab={"base_uri": "https://localhost:8080/"} id="QhOHADWFL1pS" outputId="8e33c29a-3a0b-48c1-f97a-39b685b1395a"
#Download the dataset
# !wget https://raw.githubusercontent.com/towardsai/tutorials/master/gradient_descent_tutorial/data.txt
# + id="Y4Oa5v2d-0Mi"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="ARofjam3_mfx" outputId="04a366df-2586-47df-bcae-4998a6bf1581"
column_names = ['Population', 'Profit']
df = pd.read_csv('data.txt', header=None, names=column_names)
df.head()
# + id="dT219HO8_xQJ"
df.insert(0, 'Theta0', 1)
cols = df.shape[1]
X = df.iloc[:,0:cols-1]
Y = df.iloc[:,cols-1:cols]
theta = np.matrix(np.array([0]*X.shape[1]))
X = np.matrix(X.values)
Y = np.matrix(Y.values)
# + id="2nTTovCVAK89"
def calculate_RSS(X, y, theta):
inner = np.power(((X * theta.T) - y), 2)
return np.sum(inner) / (2 * len(X))
# + id="K1qp_QiFAQcM"
def gradientDescent(X, Y, theta, alpha, iters):
t = np.matrix(np.zeros(theta.shape))
parameters = int(theta.ravel().shape[1])
cost = np.zeros(iters)
for i in range(iters):
error = (X * theta.T) - Y
for j in range(parameters):
term = np.multiply(error, X[:,j])
t[0,j] = theta[0,j] - ((alpha / len(X)) * np.sum(term))
theta = t
cost[i] = calculate_RSS(X, Y, theta)
return theta, cost
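# The inner loop over parameters can be collapsed into a single matrix product: the
# update theta <- theta - (alpha/m) * X.T @ (X @ theta - y) is the same rule, vectorized.
# A sketch (not part of the original tutorial), checked on a tiny hypothetical dataset:

```python
import numpy as np

def gradient_descent_vectorized(X, y, theta, alpha, iters):
    """One X.T @ error product replaces the per-parameter loop."""
    m = len(X)
    cost = np.zeros(iters)
    for i in range(iters):
        error = X @ theta - y                          # (m, 1) residuals
        theta = theta - (alpha / m) * (X.T @ error)    # simultaneous update of all params
        cost[i] = float(((X @ theta - y) ** 2).sum() / (2 * m))
    return theta, cost

# Tiny check: fit y = 2*x with an intercept column
X = np.c_[np.ones(5), np.arange(5.0)]
y = (2.0 * np.arange(5.0)).reshape(-1, 1)
theta, cost = gradient_descent_vectorized(X, y, np.zeros((2, 1)), 0.1, 2000)
```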
# + colab={"base_uri": "https://localhost:8080/", "height": 515} id="FHs7bQRVAe6I" outputId="d1fb8250-87bb-4822-993c-8e19f66e67d6"
df.plot(kind='scatter', x='Population', y='Profit', figsize=(12,8))
# + [markdown] id="fWddMVZcJ900"
# **Error before applying Gradient Descent**
# + colab={"base_uri": "https://localhost:8080/"} id="Y4hCXYXhAu7g" outputId="94dae253-71b7-4bc3-93f5-a56ca61bcf61"
error = calculate_RSS(X, Y, theta)
error
# + [markdown] id="515QFSSLKF3q"
# **Apply Gradient Descent**
# + id="DFb0u2MyA9iF" colab={"base_uri": "https://localhost:8080/"} outputId="573417e0-e38f-404f-cf89-900eb9a5f113"
g, cost = gradientDescent(X, Y, theta, 0.01, 1000)
g
# + [markdown] id="8T7lbxg-KJrq"
# **Error after Applying Gradient Descent**
# + colab={"base_uri": "https://localhost:8080/"} id="K7hZq_8DDhSC" outputId="f3bce426-81f9-4c7d-e476-d8372523e0de"
error = calculate_RSS(X, Y, g)
error
# + colab={"base_uri": "https://localhost:8080/", "height": 531} id="U_CwT4rnDvVN" outputId="807a3181-5e77-4c3d-adb9-617b43255b7b"
x = np.linspace(df.Population.min(), df.Population.max(), 100)
f = g[0, 0] + (g[0, 1] * x)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(df.Population, df.Profit, label='Training Data')
ax.legend(loc=2)
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
| gradient_descent_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Models
import numpy as np
import pandas as pd
from patsy import dmatrix,demo_data
import pystan
import matplotlib.pyplot as plt
# ## General Model Specification
# Let $N$ be the number of data points $(\mathbf{X},\mathbf{y})=\{(\mathbf{x}_n, y_n)\}$. A linear model for the data assumes a linear relationship between the inputs $\mathbf{x}\in\{0,1, \cdots, D-1\}$ and the outputs $y\in\mathbb{R}$ and has the following parameters.
#
# 1. $\beta$ model's weights.
# 2. $\sigma_{\beta}^2$ is (known) prior variance.
# 3. $\sigma_y^2$ is (known) likelihood variance.
#
# The joint distribution of the data and parameters is
#
# \begin{align*}
# p(\mathbf{y}, \mathbf{X},\beta,\sigma_y^2,\sigma_{\beta}^2)= \text{Normal}(\beta \mid \mathbf{0}, \sigma_{\beta}^2\mathbf{I})\prod_{n=1}^N \text{Normal}(y_n \mid \mathbf{x}_n^\top\beta , \sigma_y^2)
# \end{align*}
#
# The hierarchical model can be specified as such
#
# \begin{align*}
# \beta
# &\sim
# \text{Normal}( \mathbf{0}, \sigma_{\beta}^2\mathbf{I}),
# \\[1.5ex]
# y_n \mid (\mathbf{x}_n, \beta)
# &\sim
# \text{Normal}(\mathbf{x}_n^\top\beta , \sigma_y^2)
# \end{align*}
#
#
# ## Data
# ### One categorical variable with one level
def build_dataset_linear_model_categorical_1(N, beta, noise_std):
D = len(beta)
X = dmatrix('-1+a', data=demo_data( "a", nlevels=D, min_rows=N), return_type='dataframe')
y = np.dot(X.values, beta) + np.random.normal(0, noise_std, size=(N,1))
return X, y
N = 1000 # number of data points
D=25
noise_beta=np.sqrt(.5)
beta_true= np.random.normal(0, noise_beta, size=(D,1))
noise_std=0.5
X_train, y_train = build_dataset_linear_model_categorical_1(N, beta_true,noise_std)
df_train=pd.DataFrame(y_train,columns=['y']).join(X_train)
df_train.head()
plt.hist(y_train, histtype='step')
plt.show()
plt.hist(beta_true,10, histtype='step')
plt.show()
plt.plot(beta_true)
plt.show()
# ## Inference
x=df_train.iloc[:,1:].apply(lambda x: np.where(x>0)[0][0]+1, axis=1).tolist()
x[:5]
y=list(df_train['y'].values)
y[:5]
# +
code = """
functions {
matrix make_X(int N, int D, int[] J) {
        matrix[N, D] X = rep_matrix(0, N, D); // initialize with zeros
for (i in 1:N){
X[i, J[i]] = 1.0;
}
return X;
}
}
data {
int<lower=0> N;
vector[N] y;
int x[N]; // group membership variable
}
transformed data {
real sigma=0.5;
int<lower=1> D = max(x);
matrix[N, D] X = make_X(N, D, x);
}
parameters {
vector[D] beta;
}
model {
y ~ normal(X * beta, sigma);
}
generated quantities {
real y_sim[N];
for(n in 1:N) {
y_sim[n] = normal_rng(X[n] * beta, sigma);
}
}
"""
dat = {'N': N,
'x': x,
'y': y}
sm = pystan.StanModel(model_code=code)
fit = sm.sampling(data=dat, iter=2000, chains=4)
# -
print(fit)
beta_hat = fit.extract(permuted=True)['beta']
fig, ax = plt.subplots(1, 1)
ax.plot(beta_hat.mean(0))
ax.plot(beta_true)
plt.show()
# +
# if matplotlib is installed (optional, not required), a visual summary and
# traceplot are available
fit.plot()
plt.show()
# -
# ## Posterior Checks
y_sim = fit.extract(permuted=True)['y_sim']
y_sim.shape
plt.hist(y_sim[3999,:], 12, histtype='step')
plt.show()
# # Laplace distribution on beta
# The generalized normal distribution (`gennorm`) with shape $\beta=1$ is identical to a Laplace distribution. For $\beta = 2$, it is identical to a normal distribution with scale $1/\sqrt 2$
# +
from scipy.stats import gennorm
D=25
beta_true= gennorm.rvs(1, size=(D,1))
fig, ax = plt.subplots(1, 1)
ax.hist(beta_true, 10, density=True, histtype='stepfilled', alpha=0.2)  # normed= was removed in Matplotlib 3.4
ax.legend(loc='best', frameon=False)
plt.show()
# -
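# The $\beta=2$ claim can be verified numerically: `gennorm` with $\beta=2$ has density
# proportional to $e^{-|x|^2}$, i.e. a normal with scale $1/\sqrt 2$, so the variances match:

```python
import numpy as np
from scipy.stats import gennorm, norm

# gennorm variance is Gamma(3/beta)/Gamma(1/beta); for beta=2 this equals 1/2
assert np.isclose(gennorm.var(2), 0.5)
assert np.isclose(norm.var(scale=1 / np.sqrt(2)), 0.5)
# beta=1 is the Laplace distribution, whose variance (scale 1) is 2
assert np.isclose(gennorm.var(1), 2.0)
```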
N = 1000 # number of data points
noise_std=0.5
X_train, y_train = build_dataset_linear_model_categorical_1(N, beta_true,noise_std)
df_train=pd.DataFrame(y_train,columns=['y']).join(X_train)
df_train.head()
y=list(df_train['y'].values)
y[:5]
plt.hist(y_train,15, histtype='step')
plt.show()
# # Inference
# +
dat = {'N': N,
'x': x,
'y': y}
sm = pystan.StanModel(model_code=code)
fit2 = sm.sampling(data=dat, iter=2000, chains=4)
# -
print(fit2)
beta_hat = fit2.extract(permuted=True)['beta']
fig, ax = plt.subplots(1, 1)
ax.plot(beta_hat.mean(0))
ax.plot(beta_true)
plt.show()
# +
code = """
functions {
matrix make_X(int N, int D, int[] J) {
        matrix[N, D] X = rep_matrix(0, N, D); // initialize with zeros
for (i in 1:N){
X[i, J[i]] = 1.0;
}
return X;
}
}
data {
int<lower=0> N;
vector[N] y;
int x[N]; // group membership variable
}
transformed data {
real sigma=0.5;
int<lower=1> D = max(x);
matrix[N, D] X = make_X(N, D, x);
}
parameters {
vector[D] beta;
}
model {
beta ~ normal(0, .5);
y ~ normal(X * beta, sigma);
}
generated quantities {
real y_sim[N];
for(n in 1:N) {
y_sim[n] = normal_rng(X[n] * beta, sigma);
}
}
"""
dat = {'N': N,
'x': x,
'y': y}
sm2 = pystan.StanModel(model_code=code)
fit3 = sm2.sampling(data=dat, iter=2000, chains=4)
# -
beta_hat = fit3.extract(permuted=True)['beta']
fig, ax = plt.subplots(1, 1)
ax.plot(beta_hat.mean(0))
ax.plot(beta_true)
plt.show()
print(fit3)
| notebooks/linear_model_one_categorical_predictor_shrinkage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LASI2021 Machine Learning Workshop
# ## Simple Linear Regression - Exploratory Data Analysis
# ## 0. Import Python Libraries
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
# -
# ## 1. Import Data
# import csv file into a dataframe
df = pd.read_csv("data/slrdata.csv")
# display first few lines
df.head()
# basic information about the dataframe
df.info()
# ## 2. Basic Statistics
# summary statistics for numerical columns
df.describe()
df.corr()
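# df.corr() reports pairwise Pearson coefficients; the same number can be reproduced with
# numpy. A sketch with hypothetical values standing in for slrdata.csv:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the workshop data
demo = pd.DataFrame({"Study_Time": [1, 2, 3, 4, 5],
                     "Exam_Score": [52, 61, 70, 78, 91]})

r_pandas = demo["Study_Time"].corr(demo["Exam_Score"])             # Pearson by default
r_numpy = np.corrcoef(demo["Study_Time"], demo["Exam_Score"])[0, 1]
```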
# ## 3. Basic Visualizations
# ### Scatterplot
# scatterplot
plt.figure(figsize=(10,6))
plt.plot(df["Study_Time"], df["Exam_Score"],'o',alpha=.5, color='g')
plt.xlabel("Study Time", fontsize=14)
plt.ylabel("Exam Score",fontsize=14)
plt.title("Study Time vs Exam Score",fontsize=18)
# ### Distributions
sns.pairplot(df,markers="+",height=4)
| MachineLearning/SimpleLinearRegression/Simple Linear Regression-EDS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PCA with Cary5000 data for deep-UV spectra (190-300 nm)
# +
# Import packages
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
plt.style.use('ggplot')
# Set seed
seed = 4
# -
# Import data
data = pd.read_csv('Datasets/urea_saline_cary5000.csv')
# Define features and targets
X = data.drop(data.columns[0:2204], axis=1)
y = data['Urea Concentration (mM)']
# Normalize data
sc = StandardScaler()
X = sc.fit_transform(X)
# +
# Do PCA
pca = PCA(n_components=10, random_state=seed)
X_pca = pca.fit_transform(X)
print("Variance explained by all 10 PC's =", sum(pca.explained_variance_ratio_ *100))
# -
# Elbow Plot
plt.plot(np.cumsum(pca.explained_variance_ratio_), color='blueviolet')
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.savefig('elbow_plot.png', dpi=100)
np.cumsum(pca.explained_variance_ratio_)
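# Instead of reading the elbow plot by eye, sklearn can pick the component count directly:
# passing a float to n_components keeps the smallest number of components whose cumulative
# explained variance exceeds that fraction. A sketch on random stand-in data (not the spectra):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_demo = rng.normal(size=(100, 10))   # hypothetical stand-in for the scaled spectra matrix

pca_95 = PCA(n_components=0.95)       # keep enough components for >= 95% of the variance
pca_95.fit(X_demo)
kept = pca_95.n_components_
```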
# +
# If we apply PCA with n_components=2
pca_2 = PCA(n_components=2, random_state=seed)
X_pca_2 = pca_2.fit_transform(X)
# Plot it
plt.figure(figsize=(10, 7))
sns.scatterplot(x=X_pca_2[:, 0], y=X_pca_2[:, 1], s=70,
hue=y, palette='viridis')
plt.title('2D Scatterplot: 95.5% of the variability captured', pad=15)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
# +
# Plot it in 3D
pca_3 = PCA(n_components=3, random_state=seed)
X_pca_3 = pca_3.fit_transform(X)
fig = plt.figure(figsize = (12, 8))
ax = plt.axes(projection='3d')
sctt = ax.scatter3D(X_pca_3[:, 0], X_pca_3[:, 1], X_pca_3[:, 2],
c = y, s=50, alpha=0.6)
plt.title('3D Scatterplot: 98.0% of the variability captured', pad=15)
ax.set_xlabel('First principal component')
ax.set_ylabel('Second principal component')
ax.set_zlabel('Third principal component')
plt.savefig('3d_scatterplot.png')
# -
# ## Drop outliers - data from 02/11/2022
data = pd.read_csv('Datasets/urea_saline_cary5000.csv')
data = data.drop(data.index[17:29])
data.reset_index(inplace=True)
# +
# Define features and targets
X = data.drop(data.columns[0:2205], axis=1)
y = data['Urea Concentration (mM)']
# Normalize data
sc = StandardScaler()
X = sc.fit_transform(X)
# Do PCA
pca = PCA(n_components=10, random_state=seed)
X_pca = pca.fit_transform(X)
print("Variance explained by all 10 PC's =", sum(pca.explained_variance_ratio_ *100))
# Elbow Plot
plt.plot(np.cumsum(pca.explained_variance_ratio_), color='blueviolet')
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.savefig('elbow_plot.png', dpi=100)
# -
np.cumsum(pca.explained_variance_ratio_)
# +
# If we apply PCA with n_components=2
pca_2 = PCA(n_components=2, random_state=seed)
X_pca_2 = pca_2.fit_transform(X)
# Plot it
plt.figure(figsize=(10, 7))
sns.scatterplot(x=X_pca_2[:, 0], y=X_pca_2[:, 1], s=70,
hue=y, palette='viridis')
plt.title('2D Scatterplot: 96.4% of the variability captured', pad=15)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
# +
# Plot it in 3D
pca_3 = PCA(n_components=3, random_state=seed)
X_pca_3 = pca_3.fit_transform(X)
fig = plt.figure(figsize = (12, 8))
ax = plt.axes(projection='3d')
sctt = ax.scatter3D(X_pca_3[:, 0], X_pca_3[:, 1], X_pca_3[:, 2],
c = y, s=50, alpha=0.6)
plt.title('3D Scatterplot: 98.4% of the variability captured', pad=15)
ax.set_xlabel('First principal component')
ax.set_ylabel('Second principal component')
ax.set_zlabel('Third principal component')
plt.savefig('3d_scatterplot.png')
# -
| PCA_Cary5000_deepUV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.2.0
# language: julia
# name: julia-1.2
# ---
using Revise
# +
push!(LOAD_PATH, "/home/amir/work/mps/src/")
using LinearAlgebra
using SymTensors
using QuantumModels
using MatrixProductStateTools
# -
include("/home/amir/work/mps/test/runtests.jl")
model = triangularhopping((4,6), 1.0, 1.0, 1.0, 1.0);
tikzlattice(model, "triangle.tex")
| testmodel.ipynb |