You can read all about numpy datatypes in the documentation.

Array math

Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)

# Elementwise sum; both produce the array
# [[ 6.  8.]
#  [10. 12.]]
print(x + y)
print(np.add(x, y))

# Elementwise difference; both produce the array
# [[-4. -4.]
#  [-4. -4.]]
print(x - y)
print(np.subtract(x, y))

# Elementwise product; both produce the array
# [[ 5. 12.]
#  [21. 32.]]
print(x * y)
print(np.multiply(x, y))

# Elementwise division; both produce the array
# [[0.2        0.33333333]
#  [0.42857143 0.5       ]]
print(x / y)
print(np.divide(x, y))

# Elementwise square root; produces the array
# [[1.         1.41421356]
#  [1.73205081 2.        ]]
print(np.sqrt(x))
(Source: jupyter-notebook-tutorial.ipynb, cs231n/cs231n.github.io, MIT license)
Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])

v = np.array([9,10])
w = np.array([11, 12])

# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
You can also use the @ operator, which is equivalent to numpy's dot function.
print(v @ w)

# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
print(x @ v)

# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
#  [43 50]]
print(x.dot(y))
print(np.dot(x, y))
print(x @ y)
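All three spellings compute the same product, so which one you use is a matter of style. A quick sanity check (a small sketch using the arrays defined above):

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
v = np.array([9, 10])

# All three spellings agree on the matrix product...
assert np.array_equal(x.dot(y), np.dot(x, y))
assert np.array_equal(x @ y, np.dot(x, y))

# ...and on the inner product of vectors.
assert int(v @ v) == np.dot(v, v) == 181  # 9*9 + 10*10
```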
Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:
x = np.array([[1,2],[3,4]])

print(np.sum(x))          # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0))  # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1))  # Compute sum of each row; prints "[3 7]"
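A detail worth knowing about these reductions: passing `keepdims=True` retains the reduced axis with size 1, which is convenient when the result must broadcast back against the original array. A small sketch:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])

# axis=0 collapses the rows (per-column sums); axis=1 collapses the columns.
col_sums = np.sum(x, axis=0)   # [4 6]
row_sums = np.sum(x, axis=1)   # [3 7]
assert np.array_equal(col_sums, [4, 6])
assert np.array_equal(row_sums, [3, 7])

# keepdims=True keeps the reduced axis as size 1, so the result
# broadcasts cleanly against x (e.g. to normalize each row).
row_sums_kd = np.sum(x, axis=1, keepdims=True)
assert row_sums_kd.shape == (2, 1)
normalized = x / row_sums_kd
assert np.allclose(normalized.sum(axis=1), 1.0)
```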
You can find the full list of mathematical functions provided by numpy in the documentation. Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:
print(x)
print("transpose\n", x.T)

v = np.array([[1,2,3]])
print(v)
print("transpose\n", v.T)
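Note that `v` above is deliberately a rank-2 array (`[[1,2,3]]`): transposing a rank-1 array does nothing, since there is no second axis to swap. A small sketch of the distinction:

```python
import numpy as np

# Transposing a rank-1 array is a no-op.
a = np.array([1, 2, 3])
assert a.T.shape == (3,)

# To get a column vector you need a rank-2 array, e.g. via reshape.
col = a.reshape(3, 1)
assert col.shape == (3, 1)
assert np.array_equal(col.T, [[1, 2, 3]])
```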
Broadcasting

Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array. For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x)   # Create an empty matrix with the same shape as x

# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
    y[i, :] = x[i, :] + v

print(y)
This works; however, when the matrix x is very large, an explicit loop in Python can be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:
vv = np.tile(v, (4, 1))  # Stack 4 copies of v on top of each other
print(vv)                # Prints "[[1 0 1]
                         #          [1 0 1]
                         #          [1 0 1]
                         #          [1 0 1]]"

y = x + vv  # Add x and vv elementwise
print(y)
Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
import numpy as np

# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v  # Add v to each row of x using broadcasting
print(y)
The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.

Broadcasting two arrays together follows these rules:

1. If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
2. The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
3. The arrays can be broadcast together if they are compatible in all dimensions.
4. After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of the shapes of the two input arrays.
5. In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension.

If this explanation does not make sense, try reading the explanation in the documentation. Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.

Here are some applications of broadcasting:
# Compute outer product of vectors
v = np.array([1,2,3])  # v has shape (3,)
w = np.array([4,5])    # w has shape (2,)

# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print(np.reshape(v, (3, 1)) * w)

# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print(x + v)

# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print((x.T + w).T)

# Another solution is to reshape w to be a column vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))

# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print(x * 2)
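The broadcasting rules above can be checked mechanically. The sketch below applies them by hand to the shapes used earlier and confirms the result with np.broadcast:

```python
import numpy as np

# Rules applied by hand to shapes (4, 3) and (3,):
#   1. Pad the smaller shape with leading 1s: (3,) -> (1, 3)
#   2. Check each dimension: 4 vs 1 (one is 1: OK), 3 vs 3 (equal: OK)
#   3. Result shape is the elementwise maximum: (4, 3)
x = np.ones((4, 3))
v = np.ones(3)
assert np.broadcast(x, v).shape == (4, 3)

# Incompatible shapes raise a ValueError, e.g. (4, 3) against (4,):
# (4,) pads to (1, 4), and 3 vs 4 is not compatible.
try:
    _ = np.ones((4, 3)) + np.ones(4)
    raised = False
except ValueError:
    raised = True
assert raised
```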
Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.

This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more.

Matplotlib

Matplotlib is a plotting library. In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
import matplotlib.pyplot as plt
By running this special IPython command, we will display plots inline:
%matplotlib inline
Plotting

The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)

# Plot the points using matplotlib
plt.plot(x, y)
With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
y_sin = np.sin(x)
y_cos = np.cos(x)

# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
Subplots

You can plot different things in the same figure using the subplot function. Here is an example:
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)

# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)

# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')

# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')

# Show the figure.
plt.show()
The Button is not used to represent a data type. Instead, the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register a function to be called when the button is clicked. The docstring of on_click can be seen below.
from IPython.html import widgets

print(widgets.Button.on_click.__doc__)
(Source: GSOC/notebooks/ipython/examples/Interactive Widgets/Widget Events.ipynb, OSGeo-live/CesiumWidget, Apache-2.0 license)
Example

Since button clicks are stateless, they are transmitted from the front end to the back end using custom messages. The example below uses the on_click method to build a button that prints a message when it is clicked.
from IPython.display import display

button = widgets.Button(description="Click Me!")
display(button)

def on_button_clicked(b):
    print("Button clicked.")

button.on_click(on_button_clicked)
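Since the real widget needs a live front end, the dispatch behind on_click can be illustrated with a plain-Python sketch: the widget keeps a list of callbacks and invokes each one, passing itself as the argument, when a click message arrives. The class and method names here are illustrative, not the actual ipywidgets internals:

```python
# A minimal, hypothetical sketch of on_click dispatch (not the real
# ipywidgets implementation).
class FakeButton:
    def __init__(self, description=""):
        self.description = description
        self._click_handlers = []

    def on_click(self, callback, remove=False):
        # Register (or unregister) a callback, mirroring the real API shape.
        if remove:
            self._click_handlers.remove(callback)
        else:
            self._click_handlers.append(callback)

    def click(self):
        # What the front-end click message would trigger on the back end.
        for handler in self._click_handlers:
            handler(self)

clicked = []
button = FakeButton(description="Click Me!")
button.on_click(lambda b: clicked.append(b.description))
button.click()
assert clicked == ["Click Me!"]
```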
on_submit

The Text widget also has a special on_submit event. The on_submit event fires when the user hits return.
text = widgets.Text()
display(text)

def handle_submit(sender):
    print(text.value)

text.on_submit(handle_submit)
Traitlet events

Widget properties are IPython traitlets, and traitlets are eventful. To handle changes, the on_trait_change method of the widget can be used to register a callback. The docstring for on_trait_change can be seen below.
print(widgets.Widget.on_trait_change.__doc__)
Signatures

As mentioned in the docstring, the registered callback can have four possible signatures:

- callback()
- callback(trait_name)
- callback(trait_name, new_value)
- callback(trait_name, old_value, new_value)

Using this method, an example of how to output an IntSlider's value as it is changed can be seen below.
int_range = widgets.IntSlider()
display(int_range)

def on_value_change(name, value):
    print(value)

int_range.on_trait_change(on_value_change, 'value')
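One way a notifier can support all four signatures is to inspect the callback's arity and pass only the arguments it accepts. This is a hedged sketch of that idea (illustrative only, not the actual traitlets implementation):

```python
import inspect

# Hypothetical dispatcher: call the callback with as many of
# (trait_name, old_value, new_value) as its signature accepts.
def notify(callback, name, old, new):
    nargs = len(inspect.signature(callback).parameters)
    if nargs == 0:
        callback()
    elif nargs == 1:
        callback(name)
    elif nargs == 2:
        callback(name, new)
    else:
        callback(name, old, new)

seen = []
notify(lambda: seen.append("bare"), "value", 0, 5)
notify(lambda n, v: seen.append((n, v)), "value", 0, 5)
notify(lambda n, o, v: seen.append((n, o, v)), "value", 0, 5)
assert seen == ["bare", ("value", 5), ("value", 0, 5)]
```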
Linking Widgets

Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events. The first method is to use the link and directional_link functions from the traitlets module.

Linking traitlets attributes from the server side
from IPython.utils import traitlets

caption = widgets.Latex(value='The values of slider1, slider2 and slider3 are synchronized')
sliders1, slider2, slider3 = (widgets.IntSlider(description='Slider 1'),
                              widgets.IntSlider(description='Slider 2'),
                              widgets.IntSlider(description='Slider 3'))
l = traitlets.link((sliders1, 'value'), (slider2, 'value'), (slider3, 'value'))
display(caption, sliders1, slider2, slider3)

caption = widgets.Latex(value='Changes in source values are reflected in target1 and target2')
source, target1, target2 = (widgets.IntSlider(description='Source'),
                            widgets.IntSlider(description='Target 1'),
                            widgets.IntSlider(description='Target 2'))
traitlets.dlink((source, 'value'), (target1, 'value'), (target2, 'value'))
display(caption, source, target1, target2)
Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
# l.unlink()
Linking widget attributes from the client side

When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes, either in a unidirectional or a bidirectional fashion, using the link widgets.
caption = widgets.Latex(value='The values of range1, range2 and range3 are synchronized')
range1, range2, range3 = (widgets.IntSlider(description='Range 1'),
                          widgets.IntSlider(description='Range 2'),
                          widgets.IntSlider(description='Range 3'))
l = widgets.jslink((range1, 'value'), (range2, 'value'), (range3, 'value'))
display(caption, range1, range2, range3)

caption = widgets.Latex(value='Changes in source_range values are reflected in target_range1 and target_range2')
source_range, target_range1, target_range2 = (widgets.IntSlider(description='Source range'),
                                              widgets.IntSlider(description='Target range 1'),
                                              widgets.IntSlider(description='Target range 2'))
widgets.jsdlink((source_range, 'value'), (target_range1, 'value'), (target_range2, 'value'))
display(caption, source_range, target_range1, target_range2)
Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
# l.unlink()
Format: 7zipped

Files:

badges.xml
- UserId, e.g.: "420"
- Name, e.g.: "Teacher"
- Date, e.g.: "2008-09-15T08:55:03.923"

comments.xml
- Id
- PostId
- Score
- Text, e.g.: "@Stu Thompson: Seems possible to me - why not try it?"
- CreationDate, e.g.: "2008-09-06T08:07:10.730"
- UserId

posts.xml
- Id
- PostTypeId (1: Question, 2: Answer)
- ParentID (only present if PostTypeId is 2)
- AcceptedAnswerId (only present if PostTypeId is 1)
- CreationDate
- Score
- ViewCount
- Body
- OwnerUserId
- LastEditorUserId
- LastEditorDisplayName="Jeff Atwood"
- LastEditDate="2009-03-05T22:28:34.823"
- LastActivityDate="2009-03-11T12:51:01.480"
- CommunityOwnedDate="2009-03-11T12:51:01.480"
- ClosedDate="2009-03-11T12:51:01.480"
- Title=
- Tags=
- AnswerCount
- CommentCount
- FavoriteCount

posthistory.xml
- Id
- PostHistoryTypeId
  - 1: Initial Title - The first title a question is asked with.
  - 2: Initial Body - The first raw body text a post is submitted with.
  - 3: Initial Tags - The first tags a question is asked with.
  - 4: Edit Title - A question's title has been changed.
  - 5: Edit Body - A post's body has been changed, the raw text is stored here as markdown.
  - 6: Edit Tags - A question's tags have been changed.
  - 7: Rollback Title - A question's title has reverted to a previous version.
  - 8: Rollback Body - A post's body has reverted to a previous version - the raw text is stored here.
  - 9: Rollback Tags - A question's tags have reverted to a previous version.
  - 10: Post Closed - A post was voted to be closed.
  - 11: Post Reopened - A post was voted to be reopened.
  - 12: Post Deleted - A post was voted to be removed.
  - 13: Post Undeleted - A post was voted to be restored.
  - 14: Post Locked - A post was locked by a moderator.
  - 15: Post Unlocked - A post was unlocked by a moderator.
  - 16: Community Owned - A post has become community owned.
  - 17: Post Migrated - A post was migrated.
  - 18: Question Merged - A question has had another, deleted question merged into itself.
  - 19: Question Protected - A question was protected by a moderator.
  - 20: Question Unprotected - A question was unprotected by a moderator.
  - 21: Post Disassociated - An admin removes the OwnerUserId from a post.
  - 22: Question Unmerged - A previously merged question has had its answers and votes restored.
- PostId
- RevisionGUID: at times more than one type of history record can be recorded by a single action; all of these will be grouped using the same RevisionGUID
- CreationDate: "2009-03-05T22:28:34.823"
- UserId
- UserDisplayName: populated if a user has been removed and is no longer referenced by user Id
- Comment: this field will contain the comment made by the user who edited a post
- Text: a raw version of the new value for a given revision
  - If PostHistoryTypeId = 10, 11, 12, 13, 14, or 15 this column will contain a JSON encoded string with all users who have voted for the PostHistoryTypeId
  - If PostHistoryTypeId = 17 this column will contain migration details of either "from <url>" or "to <url>"
- CloseReasonId
  - 1: Exact Duplicate - This question covers exactly the same ground as earlier questions on this topic; its answers may be merged with another identical question.
  - 2: off-topic
  - 3: subjective
  - 4: not a real question
  - 7: too localized

postlinks.xml
- Id
- CreationDate
- PostId
- RelatedPostId
- PostLinkTypeId (1: Linked, 3: Duplicate)

users.xml
- Id
- Reputation
- CreationDate
- DisplayName
- EmailHash
- LastAccessDate
- WebsiteUrl
- Location
- Age
- AboutMe
- Views
- UpVotes
- DownVotes

votes.xml
- Id
- PostId
- VoteTypeId
  - 1: AcceptedByOriginator
  - 2: UpMod
  - 3: DownMod
  - 4: Offensive
  - 5: Favorite - if VoteTypeId = 5, UserId will be populated
  - 6: Close
  - 7: Reopen
  - 8: BountyStart
  - 9: BountyClose
  - 10: Deletion
  - 11: Undeletion
  - 12: Spam
  - 13: InformModerator
- CreationDate
- UserId (only for VoteTypeId 5)
- BountyAmount (only for VoteTypeId 9)

Download all the CSVs:
import os
import os.path as path
from urllib.request import urlretrieve

def download_csv_upper_dir(baseurl, filename):
    file = path.abspath(path.join(os.getcwd(), os.pardir, filename))
    if not os.path.isfile(file):
        urlretrieve(baseurl + '/' + filename, file)

baseurl = 'http://neuromancer.inf.um.es:8080/es.stackoverflow/'
download_csv_upper_dir(baseurl, 'Posts.csv')
download_csv_upper_dir(baseurl, 'Users.csv')
download_csv_upper_dir(baseurl, 'Tags.csv')
download_csv_upper_dir(baseurl, 'Comments.csv')
download_csv_upper_dir(baseurl, 'Votes.csv')

%%sql
DROP SCHEMA IF EXISTS stackoverflow;
CREATE SCHEMA stackoverflow CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
USE stackoverflow;
(Source: sql/sesion1.ipynb, dsevilla/bdge, MIT license)
This has to be enabled so that importing CSVs is allowed.
%%sql
SET GLOBAL local_infile = true;

%%sql
DROP TABLE IF EXISTS Posts;
CREATE TABLE Posts (
    Id INT,
    AcceptedAnswerId INT NULL DEFAULT NULL,
    AnswerCount INT DEFAULT 0,
    Body TEXT,
    ClosedDate DATETIME(6) NULL DEFAULT NULL,
    CommentCount INT DEFAULT 0,
    CommunityOwnedDate DATETIME(6) NULL DEFAULT NULL,
    CreationDate DATETIME(6) NULL DEFAULT NULL,
    FavoriteCount INT DEFAULT 0,
    LastActivityDate DATETIME(6) NULL DEFAULT NULL,
    LastEditDate DATETIME(6) NULL DEFAULT NULL,
    LastEditorDisplayName TEXT,
    LastEditorUserId INT NULL DEFAULT NULL,
    OwnerDisplayName TEXT,
    OwnerUserId INT NULL DEFAULT NULL,
    ParentId INT NULL DEFAULT NULL,
    PostTypeId INT, -- 1 = Question, 2 = Answer
    Score INT DEFAULT 0,
    Tags TEXT,
    Title TEXT,
    ViewCount INT DEFAULT 0,
    PRIMARY KEY(Id)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
LOAD DATA LOCAL INFILE "../Posts.csv" INTO TABLE Posts
CHARACTER SET utf8mb4
COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Id, @AcceptedAnswerId, @AnswerCount, Body, @ClosedDate, @CommentCount,
 @CommunityOwnedDate, CreationDate, @FavoriteCount, @LastActivityDate,
 @LastEditDate, LastEditorDisplayName, @LastEditorUserId, OwnerDisplayName,
 @OwnerUserId, @ParentId, PostTypeId, Score, Tags, Title, @ViewCount)
SET ParentId = nullif(@ParentId, ''),
    ClosedDate = nullif(@ClosedDate, ''),
    LastActivityDate = nullif(@LastActivityDate, ''),
    LastEditDate = nullif(@LastEditDate, ''),
    AcceptedAnswerId = nullif(@AcceptedAnswerId, ''),
    OwnerUserId = nullif(@OwnerUserId, ''),
    LastEditorUserId = nullif(@LastEditorUserId, ''),
    CommunityOwnedDate = nullif(@CommunityOwnedDate, ''),
    FavoriteCount = if(@FavoriteCount = '', 0, @FavoriteCount),
    CommentCount = if(@CommentCount = '', 0, @CommentCount),
    ViewCount = if(@ViewCount = '', 0, @ViewCount),
    AnswerCount = if(@AnswerCount = '', 0, @AnswerCount);

%%sql
SELECT count(*) FROM Posts;

%%sql
SELECT Id, Title, CreationDate FROM Posts LIMIT 2;
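The SET clauses above map empty CSV fields either to NULL (for foreign keys and dates) or to 0 (for counters), mirroring MySQL's nullif and if. A hypothetical Python equivalent of that cleaning step, useful for checking the intent (function names are illustrative):

```python
# Hypothetical Python equivalents of the LOAD DATA ... SET clauses.
def nullif_empty(value):
    # nullif(@v, ''): empty string becomes NULL (None in Python).
    return None if value == '' else value

def zero_if_empty(value):
    # if(@v = '', 0, @v): empty string becomes 0 for counter columns.
    return 0 if value == '' else int(value)

row = {'ParentId': '', 'AcceptedAnswerId': '42', 'ViewCount': ''}
cleaned = {
    'ParentId': nullif_empty(row['ParentId']),
    'AcceptedAnswerId': nullif_empty(row['AcceptedAnswerId']),
    'ViewCount': zero_if_empty(row['ViewCount']),
}
assert cleaned == {'ParentId': None, 'AcceptedAnswerId': '42', 'ViewCount': 0}
```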
%%sql
DROP TABLE IF EXISTS Users;
CREATE TABLE Users (
    Id INT,
    AboutMe TEXT,
    AccountId INT,
    Age INT NULL DEFAULT NULL,
    CreationDate DATETIME(6) NULL DEFAULT NULL,
    DisplayName TEXT,
    DownVotes INT DEFAULT 0,
    LastAccessDate DATETIME(6) NULL DEFAULT NULL,
    Location TEXT,
    ProfileImageUrl TEXT,
    Reputation INT DEFAULT 0,
    UpVotes INT DEFAULT 0,
    Views INT DEFAULT 0,
    WebsiteUrl TEXT,
    PRIMARY KEY(Id)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
LOAD DATA LOCAL INFILE "../Users.csv" INTO TABLE Users
CHARACTER SET utf8mb4
COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Id, AboutMe, @AccountId, @Age, @CreationDate, DisplayName, DownVotes,
 LastAccessDate, Location, ProfileImageUrl, Reputation, UpVotes, Views, WebsiteUrl)
SET Age = nullif(@Age, ''),
    CreationDate = nullif(@CreationDate, ''),
    AccountId = nullif(@AccountId, '');

%%sql
SELECT count(*) FROM Users;

%%sql
DROP TABLE IF EXISTS Tags;
CREATE TABLE Tags (
    Id INT,
    Count INT DEFAULT 0,
    ExcerptPostId INT NULL DEFAULT NULL,
    TagName TEXT,
    WikiPostId INT NULL DEFAULT NULL,
    PRIMARY KEY(Id)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
LOAD DATA LOCAL INFILE "../Tags.csv" INTO TABLE Tags
CHARACTER SET utf8mb4
COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Id, Count, @ExcerptPostId, TagName, @WikiPostId)
SET WikiPostId = nullif(@WikiPostId, ''),
    ExcerptPostId = nullif(@ExcerptPostId, '');

%%sql
DROP TABLE IF EXISTS Comments;
CREATE TABLE Comments (
    Id INT,
    CreationDate DATETIME(6) NULL DEFAULT NULL,
    PostId INT NULL DEFAULT NULL,
    Score INT DEFAULT 0,
    Text TEXT,
    UserDisplayName TEXT,
    UserId INT NULL DEFAULT NULL,
    PRIMARY KEY(Id)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
LOAD DATA LOCAL INFILE "../Comments.csv" INTO TABLE Comments
CHARACTER SET utf8mb4
COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Id, @CreationDate, @PostId, Score, Text, @UserDisplayName, @UserId)
SET UserId = nullif(@UserId, ''),
    PostId = nullif(@PostId, ''),
    CreationDate = nullif(@CreationDate, ''),
    UserDisplayName = nullif(@UserDisplayName, '');

%%sql
SELECT Count(*) FROM Comments;

%%sql
DROP TABLE IF EXISTS Votes;
CREATE TABLE Votes (
    Id INT,
    BountyAmount INT DEFAULT 0,
    CreationDate DATETIME(6) NULL DEFAULT NULL,
    PostId INT NULL DEFAULT NULL,
    UserId INT NULL DEFAULT NULL,
    VoteTypeId INT,
    PRIMARY KEY(Id)
) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

%%sql
LOAD DATA LOCAL INFILE "../Votes.csv" INTO TABLE Votes
CHARACTER SET utf8mb4
COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Id, @BountyAmount, @CreationDate, @PostId, @UserId, VoteTypeId)
SET UserId = nullif(@UserId, ''),
    PostId = nullif(@PostId, ''),
    BountyAmount = if(@BountyAmount = '', 0, @BountyAmount),
    CreationDate = nullif(@CreationDate, '');
We add the foreign keys so that all the tables are referenced correctly. We will use ALTER TABLE commands.
%%sql
ALTER TABLE Posts ADD FOREIGN KEY (ParentId) REFERENCES Posts(Id);
ALTER TABLE Posts ADD FOREIGN KEY (OwnerUserId) REFERENCES Users(Id);
ALTER TABLE Posts ADD FOREIGN KEY (LastEditorUserId) REFERENCES Users(Id);
ALTER TABLE Posts ADD FOREIGN KEY (AcceptedAnswerId) REFERENCES Posts(Id);

%%sql
ALTER TABLE Tags ADD FOREIGN KEY (WikiPostId) REFERENCES Posts(Id);
ALTER TABLE Tags ADD FOREIGN KEY (ExcerptPostId) REFERENCES Posts(Id);

%%sql
ALTER TABLE Comments ADD FOREIGN KEY (PostId) REFERENCES Posts(Id);
ALTER TABLE Comments ADD FOREIGN KEY (UserId) REFERENCES Users(Id);

%%sql
ALTER TABLE Votes ADD FOREIGN KEY (PostId) REFERENCES Posts(Id);
ALTER TABLE Votes ADD FOREIGN KEY (UserId) REFERENCES Users(Id);

%%sql
EXPLAIN SELECT Y.PostId, Y.Present
FROM (SELECT v.PostId AS PostId,
             COALESCE(p.Id, CONCAT('No: ', v.PostId)) AS Present
      FROM Votes v LEFT JOIN Posts p ON v.PostId = p.Id) AS Y
WHERE Y.Present LIKE 'No%';

%%sql
EXPLAIN SELECT PostId FROM Votes WHERE PostId NOT IN (SELECT Id FROM Posts);

%%sql
SELECT * FROM Votes LIMIT 20;

%%sql
SELECT Y.Id, Y.PostId, Y.Present
FROM (SELECT v.PostId AS PostId, v.Id AS Id, p.Id AS Pid,
             COALESCE(p.Id, CONCAT('No: ', v.PostId)) AS Present
      FROM Votes v LEFT JOIN Posts p ON v.PostId = p.Id) AS Y
WHERE Y.Pid IS NULL
LIMIT 1000;
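The anti-join pattern used above to detect Votes pointing at missing Posts (LEFT JOIN, then keep the rows where the right side is NULL) can be reproduced on a tiny in-memory SQLite database. This is a sketch with made-up rows; the MySQL queries above differ only in minor syntax:

```python
import sqlite3

# Minimal in-memory schema with one orphan vote (PostId 99 has no Post).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Posts (Id INTEGER PRIMARY KEY);
    CREATE TABLE Votes (Id INTEGER PRIMARY KEY, PostId INTEGER);
    INSERT INTO Posts VALUES (1), (2);
    INSERT INTO Votes VALUES (10, 1), (11, 2), (12, 99);
""")

# Anti-join: votes whose PostId matches no row in Posts.
orphans = conn.execute("""
    SELECT v.Id, v.PostId
    FROM Votes v LEFT JOIN Posts p ON v.PostId = p.Id
    WHERE p.Id IS NULL
""").fetchall()
assert orphans == [(12, 99)]
conn.close()
```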
EXERCISE: Delete from Votes the entries that refer to nonexistent Posts.
%%sql
-- DELETE FROM Votes WHERE ...;

%%sql
-- And now it works
ALTER TABLE Votes ADD FOREIGN KEY (PostId) REFERENCES Posts(Id);
ALTER TABLE Votes ADD FOREIGN KEY (UserId) REFERENCES Users(Id);

%sql USE stackoverflow

%%sql
SHOW TABLES;

%%sql
DESCRIBE Posts;

top_tags = %sql SELECT Id, TagName, Count FROM Tags ORDER BY Count DESC LIMIT 40;
The results of %sql can be converted into a DataFrame!
top_tags_df = top_tags.DataFrame()

# invert_yaxis() makes the most used tag appear first. By default it is the other way around.
top_tags_df.plot(kind='barh', x='TagName', y='Count', figsize=(14, 14*2/3)).invert_yaxis()

top_tags

%%sql
SELECT Id, TagName, Count FROM Tags WHERE Count > 5 ORDER BY Count ASC LIMIT 40;
For comparison with HBase

I will run a few queries to compare efficiency against HBase. I will compute the average length of the comment text for one particular post (I chose 7251, which is the post with the most comments, 32). I do the computation locally because, although SQL has the AVG function, the function we actually need might not be available in the database; in that case we would have to fetch all the data and compute it locally. This also gives us an idea of the retrieval efficiency of the database.
%%sql
SELECT p.Id, MAX(p.CommentCount) AS c
FROM Posts p
GROUP BY p.Id
ORDER BY c DESC
LIMIT 1;

%sql SELECT AVG(CHAR_LENGTH(Text)) from Comments WHERE PostId = 7251;

from functools import reduce

def doit():
    q = %sql select Text from Comments WHERE PostId = 7251;
    (s, n) = reduce(lambda res, e: (res[0] + len(e[0]), res[1] + 1), q, (0, 0))
    return s / n

%timeit doit()
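The fold inside doit() carries a running (sum, count) pair across the result rows. The same logic can be checked on a plain list of rows, without a database (the sample rows here are made up):

```python
from functools import reduce

# Rows shaped like the %sql result: 1-tuples holding the comment text.
rows = [("hola",), ("adios",), ("gracias",)]   # lengths 4, 5, 7

# Accumulate (total_length, row_count) in a single pass.
s, n = reduce(lambda res, e: (res[0] + len(e[0]), res[1] + 1),
              rows, (0, 0))
assert (s, n) == (16, 3)
assert abs(s / n - 16 / 3) < 1e-12
```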
EXERCISE: Compute the questions with the most answers

In the next cell:
%%sql
-- Questions with the most answers (first 20)

%%sql
SELECT Title FROM Posts WHERE Id = 5;
Code to sum the posts of each Tag
# Compute the per-tag post counts efficiently
import re
import pandas as pd

# Get the initial Tags data
results = %sql SELECT Id, Tags FROM Posts where Tags IS NOT NULL;

tagcount = {}
for result in results:
    # Extract the tags of each post
    tags = re.findall('<(.*?)>', result[1])
    for tag in tags:
        tagcount[tag] = tagcount.get(tag, 0) + 1

# Check that the counts match
for k in tagcount:
    res = %sql select TagName, SUM(Count) from Tags WHERE TagName = :k GROUP BY TagName;
    if tagcount[k] != res[0][1]:
        print("Tag %s does NOT match (%d)!!" % (k, res[0][1]))

tagcount

df = pd.DataFrame({'count': pd.Series(list(tagcount.values()), index=list(tagcount.keys()))})
df

sort_df = df.sort_values(by='count', ascending=False)
sort_df

sort_df[:100].plot(kind='bar', figsize=(20, 20*2/3))
sort_df[-100:].plot(kind='bar', figsize=(20, 20*2/3))
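The tag-extraction step above relies on the Tags column storing each tag as an angle-bracketed name in one string; the non-greedy pattern pulls each name out. A small sketch of that step in isolation, with made-up tag strings:

```python
import re
from collections import Counter

# Tags are stored like "<python><mysql>"; the non-greedy pattern
# extracts each bracketed name without spanning across tags.
tag_strings = ["<python><mysql>", "<python>", "<sql><mysql><python>"]
counts = Counter(tag for s in tag_strings
                 for tag in re.findall('<(.*?)>', s))
assert counts == {'python': 3, 'mysql': 2, 'sql': 1}
```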
Problem statement

We are interested in solving

$$x^* = \arg \min_x f(x)$$

under the constraints that

- $f$ is a black box for which no closed form is known (nor its gradients);
- $f$ is expensive to evaluate;
- evaluations $y = f(x)$ of $f$ may be noisy.

Disclaimer. If you do not have these constraints, then there is certainly a better optimization algorithm than Bayesian optimization.

Bayesian optimization loop

For $t = 1:T$:

1. Given observations $(x_i, y_i = f(x_i))$ for $i = 1:t$, build a probabilistic model for the objective $f$. Integrate out all possible true functions, using Gaussian process regression.
2. Optimize a cheap acquisition/utility function $u$ based on the posterior distribution for sampling the next point: $$x_{t+1} = \arg \min_x u(x)$$ Exploit uncertainty to balance exploration against exploitation.
3. Sample the next observation $y_{t+1}$ at $x_{t+1}$.

Acquisition functions

Acquisition functions $u(x)$ specify which sample $x$ should be tried next:

- Lower confidence bound: $\text{LCB}(x) = \mu_{GP}(x) + \kappa \sigma_{GP}(x)$;
- Probability of improvement: $-\text{PI}(x) = -P(f(x) \geq f(x_t^+) + \kappa)$;
- Expected improvement: $-\text{EI}(x) = -\mathbb{E} [f(x) - f(x_t^+)]$;

where $x_t^+$ is the best point observed so far.

In most cases, acquisition functions provide knobs (e.g., $\kappa$) for controlling the exploration-exploitation trade-off:

- Search in regions where $\mu_{GP}(x)$ is high (exploitation);
- Probe regions where uncertainty $\sigma_{GP}(x)$ is high (exploration).

Toy example

Let us assume the following noisy function $f$:
noise_level = 0.1

def f(x, noise_level=noise_level):
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
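Two properties of this toy function are worth verifying before optimizing it: with the noise switched off it is deterministic (and vanishes at the origin, since $\sin 0 = 0$), while noisy evaluations at the same point differ between calls. A self-contained sketch:

```python
import numpy as np

def f(x, noise_level=0.1):
    return (np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))
            + np.random.randn() * noise_level)

# Noise-free evaluation is deterministic: f(0) = sin(0) * (1 - tanh(0)) = 0.
assert f([0.0], noise_level=0.0) == 0.0

# Noisy evaluations at the same point differ between calls.
np.random.seed(0)
a, b = f([0.0]), f([0.0])
assert a != b
```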
(Source: examples/bayesian-optimization.ipynb, glouppe/scikit-optimize, BSD-3-Clause license)
Bayesian optimization based on Gaussian process regression is implemented in skopt.gp_minimize and can be carried out as follows:
from skopt import gp_minimize
from skopt.acquisition import gaussian_lcb
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Note that we have fixed the hyperparameters of the kernel, because it is
# sufficient for this easy problem.
gp = GaussianProcessRegressor(kernel=Matern(length_scale_bounds="fixed"),
                              alpha=noise_level**2,
                              random_state=0)

res = gp_minimize(f,                  # the function to minimize
                  [(-2.0, 2.0)],      # the bounds on each dimension of x
                  x0=[0.],            # the starting point
                  acq="LCB",          # the acquisition function (optional)
                  base_estimator=gp,  # a GP estimator (optional)
                  n_calls=15,         # the number of evaluations of f including at x0
                  n_random_starts=0,  # the number of random initialization points
                  random_state=777)
For further inspection of the results, attributes of the res named tuple provide the following information:

- x [float]: location of the minimum.
- fun [float]: function value at the minimum.
- models: surrogate models used for each iteration.
- x_iters [array]: location of function evaluation for each iteration.
- func_vals [array]: function value for each iteration.
- space [Space]: the optimization space.
- specs [dict]: parameters passed to the function.
for key, value in sorted(res.items()):
    print(key, "=", value)
    print()
Together these attributes can be used to visually inspect the results of the minimization, such as the convergence trace or the acquisition function at the last iteration:
from skopt.plots import plot_convergence

plot_convergence(res)
Let us visually examine:

1. The approximation of the fitted GP model to the original function.
2. The acquisition values (the lower confidence bound) that determine the next point to be queried.

At points close to previously evaluated points, the variance dips to zero.

The first column shows the following:
1. The true function.
2. The approximation to the original function by the Gaussian process model.
3. How sure the GP is about the function.

The second column shows the acquisition function values after every surrogate model is fit. It is possible that we do not choose the global minimum but a local minimum, depending on the minimizer used to minimize the acquisition function.
plt.rcParams["figure.figsize"] = (20, 20)

x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = np.array([f(x_i, noise_level=0.0) for x_i in x])

# Plot first five iterations.
for n_iter in range(5):
    gp = res.models[n_iter]
    curr_x_iters = res.x_iters[:n_iter+1]
    curr_func_vals = res.func_vals[:n_iter+1]

    # Plot true function.
    plt.subplot(5, 2, 2*n_iter+1)
    plt.plot(x, fx, "r--", label="True (unknown)")
    plt.fill(np.concatenate([x, x[::-1]]),
             np.concatenate([fx - 1.9600 * noise_level,
                             fx[::-1] + 1.9600 * noise_level]),
             alpha=.2, fc="r", ec="None")

    # Plot GP(x) + contours
    y_pred, sigma = gp.predict(x, return_std=True)
    plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
    plt.fill(np.concatenate([x, x[::-1]]),
             np.concatenate([y_pred - 1.9600 * sigma,
                             (y_pred + 1.9600 * sigma)[::-1]]),
             alpha=.2, fc="g", ec="None")

    # Plot sampled points
    plt.plot(curr_x_iters, curr_func_vals,
             "r.", markersize=15, label="Observations")

    plt.title(r"$x_{%d} = %.4f, f(x_{%d}) = %.4f$" % (
        n_iter, res.x_iters[n_iter][0], n_iter, res.func_vals[n_iter]))
    plt.grid()

    if n_iter == 0:
        plt.legend(loc="best", prop={'size': 8}, numpoints=1)

    plt.subplot(5, 2, 2*n_iter+2)
    acq = gaussian_lcb(x, gp)
    plt.plot(x, acq, "b", label="LCB(x)")
    plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue')

    next_x = np.asarray(res.x_iters[n_iter + 1])
    next_acq = gaussian_lcb(next_x.reshape(-1, 1), gp)
    plt.plot(next_x[0], next_acq, "bo", markersize=10, label="Next query point")

    plt.grid()
    if n_iter == 0:
        plt.legend(loc="best", prop={'size': 12}, numpoints=1)

plt.suptitle("Sequential model-based minimization using gp_minimize.", fontsize=20)
plt.show()
examples/bayesian-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
Finally, as we increase the number of points, the GP model approaches the actual function. The final few points are clustered around the minimum because the GP does not gain anything more by further exploration.
# Plot f(x) + contours
plt.rcParams["figure.figsize"] = (10, 6)

x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
         np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
                         [fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
         alpha=.2, fc="r", ec="None")

# Plot GP(x) + contours
gp = res.models[-1]
y_pred, sigma = gp.predict(x, return_std=True)

plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
         np.concatenate([y_pred - 1.9600 * sigma,
                         (y_pred + 1.9600 * sigma)[::-1]]),
         alpha=.2, fc="g", ec="None")

# Plot sampled points
plt.plot(res.x_iters, res.func_vals,
         "r.", markersize=15, label="Observations")

# Plot LCB(x) + next query point
acq = gaussian_lcb(x, gp)
plt.plot(x, acq, "b", label="LCB(x)")

next_x = np.argmin(acq)
plt.plot([x[next_x]], [acq[next_x]], "b.", markersize=15, label="Next query point")

plt.title(r"$x^* = %.4f, f(x^*) = %.4f$" % (res.x[0], res.fun))
plt.legend(loc="best")
plt.grid()
plt.show()
examples/bayesian-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
An example of converting a DataFrame to an array.
data = {'City': ['Tokyo','Osaka','Nagoya','Okinawa'],
        'Temperature': [25.0,28.2,27.3,30.9],
        'Humidity': [44,42,np.nan,62]}
cities = DataFrame(data)
cities

cities.as_matrix()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
An example of converting a Series to an array.
cities['City'].as_matrix()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
An example that defines a DataFrame containing a deck of playing cards and shuffles the cards.
face = ['king','queen','jack','ten','nine','eight',
        'seven','six','five','four','three','two','ace']
suit = ['spades', 'clubs', 'diamonds', 'hearts']
value = range(13,0,-1)

deck = DataFrame({'face': np.tile(face,4),
                  'suit': np.repeat(suit,13),
                  'value': np.tile(value,4)})
deck.head()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
The permutation function randomly shuffles the order of the index.
np.random.permutation(deck.index)
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
Reorder the rows using the randomly shuffled index.
deck = deck.reindex(np.random.permutation(deck.index))
deck.head()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
The reset_index method renumbers the index sequentially.
deck = deck.reset_index(drop=True)
deck.head()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
An example using the DataFrame's plotting functionality. We create a DataFrame containing data from three random walks.
result = DataFrame()
for c in range(3):
    y = 0
    t = []
    for delta in np.random.normal(loc=0.0, scale=1.0, size=100):
        y += delta
        t.append(y)
    result['Trial %d' % c] = t

result.head()
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
Draw a graph with the DataFrame's plot method.
result.plot(title='Random walk')
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
Exercise: The following function coin_game takes your current money and a bet as arguments, and returns the money increased or decreased by the bet, each with probability 1/2.
from numpy.random import randint

def coin_game(money, bet):
    coin = randint(2)
    if coin == 0:
        money += bet
    else:
        money -= bet
    return money
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
The following shows the result of betting 100 yen with 1000 yen on hand.
money = 1000
money = coin_game(money, 100)
money
07-pandas DataFrame-03.ipynb
enakai00/jupyter_ml4se_commentary
apache-2.0
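One way to explore the exercise further (a sketch of our own, not part of the original notebook) is to play many rounds across several trials, which produces exactly the kind of random-walk trajectories plotted with a DataFrame above:

```python
from numpy.random import randint

def coin_game(money, bet):
    # With probability 1/2 the bet is won, otherwise it is lost
    if randint(2) == 0:
        return money + bet
    return money - bet

# Play 100 rounds in each of three independent trials,
# recording the balance after every round
trials = []
for _ in range(3):
    money = 1000
    history = []
    for _ in range(100):
        money = coin_game(money, 100)
        history.append(money)
    trials.append(history)

# Wrapping `trials` in a DataFrame and calling .plot() would reproduce
# a figure like the random-walk example above.
```

Each trajectory is a simple random walk around the starting balance of 1000 yen.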
Read the hematopoiesis data. This has been simplified to a small subset of 23 genes found to be branching. We have also performed Monocle2 (version 2.1) - DDRTree on this data. The results loaded include the Monocle estimated pseudotime, branching assignment (state) and the DDRTree latent dimensions.
Y = pd.read_csv("singlecelldata/hematoData.csv", index_col=[0])
monocle = pd.read_csv("singlecelldata/hematoMonocle.csv", index_col=[0])

Y.head()

monocle.head()

# Plot Monocle DDRTree space
genelist = ["FLT3", "KLF1", "MPO"]
f, ax = plt.subplots(1, len(genelist), figsize=(10, 5), sharex=True, sharey=True)
for ig, g in enumerate(genelist):
    y = Y[g].values
    yt = np.log(1 + y / y.max())
    yt = yt / yt.max()
    h = ax[ig].scatter(
        monocle["DDRTreeDim1"],
        monocle["DDRTreeDim2"],
        c=yt,
        s=50,
        alpha=1.0,
        vmin=0,
        vmax=1,
    )
    ax[ig].set_title(g)

def PlotGene(label, X, Y, s=3, alpha=1.0, ax=None):
    fig = None
    if ax is None:
        fig, ax = plt.subplots(1, 1, figsize=(5, 5))
    for li in np.unique(label):
        idxN = (label == li).flatten()
        ax.scatter(X[idxN], Y[idxN], s=s, alpha=alpha, label=int(np.round(li)))
    return fig, ax
notebooks/Hematopoiesis.ipynb
ManchesterBioinference/BranchedGP
apache-2.0
Fit BGP model Notice the cell assignment uncertainty is higher for cells close to the branching point.
def FitGene(g, ns=20):  # for quick results subsample data
    t = time.time()
    Bsearch = list(np.linspace(0.05, 0.95, 5)) + [1.1]  # set of candidate branching points
    GPy = (Y[g].iloc[::ns].values - Y[g].iloc[::ns].values.mean())[:, None]  # remove mean from gene expression data
    GPt = monocle["StretchedPseudotime"].values[::ns]
    globalBranching = monocle["State"].values[::ns].astype(int)
    d = BranchedGP.FitBranchingModel.FitModel(Bsearch, GPt, GPy, globalBranching)
    print(g, "BGP inference completed in %.1f seconds." % (time.time() - t))
    # plot BGP
    fig, ax = BranchedGP.VBHelperFunctions.PlotBGPFit(
        GPy, GPt, Bsearch, d, figsize=(10, 10)
    )
    # overplot data
    f, a = PlotGene(
        monocle["State"].values,
        monocle["StretchedPseudotime"].values,
        Y[g].values - Y[g].iloc[::ns].values.mean(),
        ax=ax[0],
        s=10,
        alpha=0.5,
    )
    # Calculate Bayes factor of branching vs non-branching
    bf = BranchedGP.VBHelperFunctions.CalculateBranchingEvidence(d)["logBayesFactor"]
    fig.suptitle("%s log Bayes factor of branching %.1f" % (g, bf))
    return d, fig, ax

d, fig, ax = FitGene("MPO")

d_c, fig_c, ax_c = FitGene("CTSG")
notebooks/Hematopoiesis.ipynb
ManchesterBioinference/BranchedGP
apache-2.0
Then we can construct a DAgger trainer and use it to train the policy on the cartpole environment.
import tempfile

import gym
from stable_baselines3.common.vec_env import DummyVecEnv

from imitation.algorithms import bc
from imitation.algorithms.dagger import SimpleDAggerTrainer

venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])

bc_trainer = bc.BC(
    observation_space=env.observation_space,
    action_space=env.action_space,
)

with tempfile.TemporaryDirectory(prefix="dagger_example_") as tmpdir:
    print(tmpdir)
    dagger_trainer = SimpleDAggerTrainer(
        venv=venv, scratch_dir=tmpdir, expert_policy=expert, bc_trainer=bc_trainer
    )
    dagger_trainer.train(2000)
examples/2_train_dagger.ipynb
HumanCompatibleAI/imitation
mit
Finally, the evaluation shows that we actually trained a policy that solves the environment (500 is the maximum reward).
from stable_baselines3.common.evaluation import evaluate_policy

reward, _ = evaluate_policy(dagger_trainer.policy, env, 10)
print(reward)
examples/2_train_dagger.ipynb
HumanCompatibleAI/imitation
mit
Step 4A: Sample data from null
pow_null = np.array((), dtype=np.dtype('float64'))
# compute this statistic for various sizes of datasets
for s in S:
    s0 = s // 2
    s1 = s - s0

    # compute this many times for each operating point to get average
    pval = np.array((), dtype=np.dtype('float64'))
    for _ in itertools.repeat(None, N):
        g0 = 1 * (np.random.rand(r, r, s0) > 0.5)  # (null), 0.52 (classes)
        g1 = 1 * (np.random.rand(r, r, s1) > 0.5)  # (null), 0.48 (classes)

        # compute feature of data
        pbar0 = 1.0*np.sum(g0, axis=(0, 1))/(r**2 * s0)
        pbar1 = 1.0*np.sum(g1, axis=(0, 1))/(r**2 * s1)

        # compute t-test on feature
        pval = np.append(pval, stats.wilcoxon(pbar0, pbar1)[1])

    # record average p value at operating point
    pow_null = np.append(pow_null, np.sum(1.0*(pval < alpha))/N)
code/inferential_simulation.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
Step 4B: Sample data from alternate
pow_alt = np.array((), dtype=np.dtype('float64'))
# compute this statistic for various sizes of datasets
for s in S:
    s0 = s // 2
    s1 = s - s0

    # compute this many times for each operating point to get average
    pval = np.array((), dtype=np.dtype('float64'))
    for _ in itertools.repeat(None, N):
        g0 = 1 * (np.random.rand(r, r, s0) > 0.52)  # (null), 0.52 (classes)
        g1 = 1 * (np.random.rand(r, r, s1) > 0.48)  # (null), 0.48 (classes)

        # compute feature of data
        pbar0 = 1.0*np.sum(g0, axis=(0, 1))/(r**2 * s0)
        pbar1 = 1.0*np.sum(g1, axis=(0, 1))/(r**2 * s1)

        # compute t-test on feature
        pval = np.append(pval, stats.wilcoxon(pbar0, pbar1)[1])

    # record average p value at operating point
    pow_alt = np.append(pow_alt, np.sum(1.0*(pval < alpha))/N)
code/inferential_simulation.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
Step 5: Plot power vs n on null set
plt.figure(figsize=(8, 5))
plt.scatter(S, pow_null, label='null')
plt.scatter(S, pow_alt, color='green', label='alt')
plt.xscale('log')
plt.xlabel('Number of Samples')
plt.xlim((0, 10000))
plt.ylim((-0.05, 1.05))
plt.ylabel('Power')
plt.title('Strength of Gender Classification Using Wilcoxon Test')
plt.axhline(alpha, color='red', linestyle='--', label='alpha')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig('../figs/wilcoxon_classification.png')
plt.show()
code/inferential_simulation.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
Step 6: Apply the above to data
# Initializing dataset names dnames = list(['../data/desikan/KKI2009']) print "Dataset: " + ", ".join(dnames) # Getting graph names fs = list() for dd in dnames: fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files]) fs = fs[:] def loadGraphs(filenames, rois, printer=False): A = np.zeros((rois, rois, len(filenames))) for idx, files in enumerate(filenames): if printer: print "Loading: " + files g = ig.Graph.Read_GraphML(files) tempg = g.get_adjacency(attribute='weight') A[:,:,idx] = np.asarray(tempg.data) return A # Load X X = loadGraphs(fs, 70) print X.shape # Load Y ys = csv.reader(open('../data/kki42_subjectinformation.csv')) y = [y[5] for y in ys] y = y[1:] g_m = np.zeros((70, 70, sum([1 if x=='M' else 0 for x in y]))) g_f = np.zeros((70, 70, sum([1 if x=='F' else 0 for x in y]))) cf=0 cm=0 for idx, val in enumerate(y): if val == 'M': g_m[:,:,cm] = X[:,:,idx] cm += 1 else: g_f[:,:,cf] = X[:,:,idx] cf +=1 print g_f.shape print g_m.shape # compute feature of data p_f = 1.0*np.sum(1.0*(g_f>0), axis=(0,1))/( 70**2 * 20) p_m = 1.0*np.sum(1.0*(g_m>0), axis=(0,1))/( 70**2 * 22) print "Mean p_f: " + str(np.mean(p_f)) print "Mean p_m: " + str(np.mean(p_m)) # compute t-test on feature pval = stats.wilcoxon(p_m[:20], p_f)[1] print "P-value: " + str(pval)
code/inferential_simulation.ipynb
Upward-Spiral-Science/grelliam
apache-2.0
The function definition takes the following, minimal form:

```python
def NAME_OF_FUNCTION():
    #Code block - there must be at least one line of code
    #That said, we can use a null (do nothing) statement
    pass
```

Set up the notebook to use the simulator and see if you can think of a way to use functions to call the lines of code that control the robot. The function definitions should appear before the loop.
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'

%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX

#Your code - using functions - here
robotVM/notebooks/Demo - Square N - Functions.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
How did you get on? Could you work out how to use the functions? Here's how I used them:
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time

side_speed = 2
side_length_time = 1
turn_speed = 1.8
turn_time = 0.45

number_of_sides = 4

def traverse_side():
    robot.move_forward(side_speed)
    time.sleep(side_length_time)

def turn():
    robot.rotate_left(turn_speed)
    time.sleep(turn_time)

for side in range(number_of_sides):
    #side
    traverse_side()
    #turn
    turn()
robotVM/notebooks/Demo - Square N - Functions.ipynb
psychemedia/ou-robotics-vrep
apache-2.0
McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta =0$) experiments on a $2\pi\times 2\pi$ periodic box.
# create the model object
m = pyqg.BTModel(L=2.*np.pi, nx=256, beta=0., H=1., rek=0., rd=None,
                 tmax=40, dt=0.001, taveint=1, ntd=4)
# in this example we used ntd=4, four threads
# if your machine has more (or fewer) cores available, you could try changing it
docs/examples/barotropic.ipynb
pyqg/pyqg
mit
Initial condition The initial condition is random, with a prescribed spectrum $$ |\hat{\psi}|^2 = A \,\kappa^{-1}\left[1 + \left(\frac{\kappa}{6}\right)^4\right]^{-1}\,, $$ where $\kappa$ is the wavenumber magnitude. The constant A is determined so that the initial energy is $KE = 0.5$.
# generate McWilliams 84 IC condition
fk = m.wv != 0
ckappa = np.zeros_like(m.wv2)
ckappa[fk] = np.sqrt(m.wv2[fk]*(1. + (m.wv2[fk]/36.)**2))**-1

nhx, nhy = m.wv2.shape

Pi_hat = np.random.randn(nhx, nhy)*ckappa + 1j*np.random.randn(nhx, nhy)*ckappa

Pi = m.ifft(Pi_hat[np.newaxis, :, :])
Pi = Pi - Pi.mean()
Pi_hat = m.fft(Pi)
KEaux = m.spec_var(m.wv*Pi_hat)

pih = (Pi_hat/np.sqrt(KEaux))
qih = -m.wv2*pih
qi = m.ifft(qih)

# initialize the model with that initial condition
m.set_q(qi)

# define a quick function for plotting and visualize the initial condition
def plot_q(m, qmax=40):
    fig, ax = plt.subplots()
    pc = ax.pcolormesh(m.x, m.y, m.q.squeeze(), cmap='RdBu_r')
    pc.set_clim([-qmax, qmax])
    ax.set_xlim([0, 2*np.pi])
    ax.set_ylim([0, 2*np.pi])
    ax.set_aspect(1)
    plt.colorbar(pc)
    plt.title('Time = %g' % m.t)
    plt.show()

plot_q(m)
docs/examples/barotropic.ipynb
pyqg/pyqg
mit
Running the model Here we demonstrate how to use the run_with_snapshots feature to periodically stop the model and perform some action (in this case, visualization).
for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10):
    plot_q(m)
docs/examples/barotropic.ipynb
pyqg/pyqg
mit
The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in a significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have the computer resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop! Also, note that the energy is nearly conserved, as it should be, and this is a nice test of the model. Plotting spectra
energy = m.get_diagnostic('KEspec')
enstrophy = m.get_diagnostic('Ensspec')

# this makes it easy to calculate an isotropic spectrum
from pyqg import diagnostic_tools as tools
kr, energy_iso = tools.calc_ispec(m, energy.squeeze())
_, enstrophy_iso = tools.calc_ispec(m, enstrophy.squeeze())

ks = np.array([3., 80])
es = 5*ks**-4
plt.loglog(kr, energy_iso)
plt.loglog(ks, es, 'k--')
plt.text(2.5, .0001, r'$k^{-4}$', fontsize=20)
plt.ylim(1.e-10, 1.e0)
plt.xlabel('wavenumber')
plt.title('Energy Spectrum')

ks = np.array([3., 80])
es = 5*ks**(-5./3)
plt.loglog(kr, enstrophy_iso)
plt.loglog(ks, es, 'k--')
plt.text(5.5, .01, r'$k^{-5/3}$', fontsize=20)
plt.ylim(1.e-3, 1.e0)
plt.xlabel('wavenumber')
plt.title('Enstrophy Spectrum')
docs/examples/barotropic.ipynb
pyqg/pyqg
mit
Checking this answer with Wolfram Alpha, we get approximately the same result: [Image in Blog Post] Let's try this on another, more complicated integral: $\int_{1}^{4} \left(\frac{e^{x}}{x} + e^{1/x}\right)\,dx$ In this case, it's harder to determine a proper f_max. The most straightforward way is to plot the function over the limits of the integral:
import matplotlib.pyplot as plt
%matplotlib inline

x = np.linspace(1, 4, 1000)
y = (np.exp(x)/x) + np.exp(1/x)
plt.plot(x, y)
plt.ylabel('F(x)')
plt.xlabel('x');
20170414_CalculatingAnIntegralWithMonteCarloUsingTheRejectionMethod.ipynb
drericstrong/Blog
agpl-3.0
From the above figure, we can see that the maximum is about 15. But what if we decided not to plot the function? Consider a situation where it's computationally expensive to plot the function over the entire limits of the integral. It's okay for us to choose an f_max that is too large, as long as we are sure that all possible values of F(x) will fall below it. The only downside of this approach is that more histories are required to converge to the correct answer. Let's choose f_max as 100 to see what happens:
import random

import numpy as np
from numba import jit

@jit
def MCHist2(n_hist, a, b, fmax):
    score = (b - a)*fmax
    tot_score = 0
    for n in range(1, n_hist):
        x = random.uniform(a, b)
        f = random.uniform(0, fmax)
        f_x = (np.exp(x)/x) + np.exp(1/x)
        # Check if the point falls inside the integral
        if f < f_x:
            tot_score += score
    return tot_score

# Run the simulation
num_hist2 = int(1e8)
results2 = MCHist2(num_hist2, 1.0, 4.0, 100)
integral_val2 = round(results2 / num_hist2, 6)
print("The calculated integral is {}".format(integral_val2))
20170414_CalculatingAnIntegralWithMonteCarloUsingTheRejectionMethod.ipynb
drericstrong/Blog
agpl-3.0
Again, we check our work with Wolfram Alpha, and we get approximately the same result: [Image in Blog Post] Would we have less variance if we used the same number of histories, but an f_max closer to the true value (15)?
num_hist3 = int(1e8)
results3 = MCHist2(num_hist3, 1.0, 4.0, 15)
integral_val3 = round(results3 / num_hist3, 6)
print("The calculated integral is {}".format(integral_val3))
20170414_CalculatingAnIntegralWithMonteCarloUsingTheRejectionMethod.ipynb
drericstrong/Blog
agpl-3.0
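The benefit of a tighter bound can be made concrete: the fraction of sampled points accepted is roughly (value of the integral) / ((b − a) · f_max), so raising f_max from 15 to 100 means most samples are rejected and contribute nothing to the estimate. The sketch below (our own, with a modest number of histories) compares the acceptance rates directly:

```python
import math
import random

def acceptance_rate(n_hist, a, b, fmax):
    # Fraction of (x, f) samples falling under F(x) = e^x/x + e^(1/x)
    accepted = 0
    for _ in range(n_hist):
        x = random.uniform(a, b)
        f = random.uniform(0, fmax)
        if f < (math.exp(x) / x) + math.exp(1 / x):
            accepted += 1
    return accepted / n_hist

random.seed(0)
rate_tight = acceptance_rate(100_000, 1.0, 4.0, 15)
rate_loose = acceptance_rate(100_000, 1.0, 4.0, 100)
print(rate_tight, rate_loose)  # the tight bound accepts far more samples
```

Both settings converge to the same integral in expectation; the tighter bound simply wastes fewer histories, which is why its estimate has lower variance at a fixed number of samples.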
The storage bucket was created earlier. We'll re-declare it here, so we can use it.
storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'
storage_region = 'us-central1'

workspace_path = os.path.join(storage_bucket, 'census')
training_path = os.path.join(workspace_path, 'training')
samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb
googledatalab/notebooks
apache-2.0
Model Let's take a quick look at the model that was previously produced as a result of the training job.
!gsutil ls -r {training_path}/model
samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb
googledatalab/notebooks
apache-2.0
Batch Prediction We'll submit a batch prediction Dataflow job to use this model by loading it into TensorFlow and running it in evaluation mode (the mode that expects the input data to contain a value for target). The other mode, prediction, is used to predict over data where the target column is missing. NOTE: Batch prediction can take a few minutes to launch while compute resources are provisioned. In the case of large datasets in real-world problems, this overhead is a much smaller part of the overall job lifetime.
eval_data_path = os.path.join(workspace_path, 'data/eval.csv')
evaluation_path = os.path.join(workspace_path, 'evaluation')

regression.batch_predict(training_dir=training_path,
                         prediction_input_file=eval_data_path,
                         output_dir=evaluation_path,
                         mode='evaluation',
                         output_format='csv',
                         cloud=True)
samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb
googledatalab/notebooks
apache-2.0
Once prediction is done, the individual predictions will be written out into Cloud Storage.
!gsutil ls {evaluation_path}

!gsutil cat {evaluation_path}/csv_schema.json

!gsutil -q -m cp -r {evaluation_path}/ /tmp
!head -n 5 /tmp/evaluation/predictions-00000*
samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb
googledatalab/notebooks
apache-2.0
Analysis with BigQuery We're going to use BigQuery to do evaluation. BigQuery can directly work against CSV data in Cloud Storage. However, if you have very large evaluation results, or you're going to be running multiple queries, it is advisable to first load the results into a BigQuery table.
%%bq datasource --name eval_results --paths gs://cloud-ml-users-datalab-workspace/census/evaluation/predictions*
{
  "schema": [
    {"type": "STRING", "mode": "nullable", "name": "SERIALNO"},
    {"type": "FLOAT", "mode": "nullable", "name": "predicted_target"},
    {"type": "FLOAT", "mode": "nullable", "name": "target_from_input"}
  ]
}

%%bq query --datasource eval_results
SELECT SQRT(AVG(error)) AS rmse
FROM (
  SELECT POW(target_from_input - predicted_target, 2) AS error
  FROM eval_results
)

%%bq query --name distribution_query --datasource eval_results
WITH errors AS (
  SELECT predicted_target - target_from_input AS error
  FROM eval_results
),
error_stats AS (
  SELECT MIN(error) AS min_error, MAX(error) - MIN(error) AS error_range
  FROM errors
),
quantized_errors AS (
  SELECT error, FLOOR((error - min_error) * 20 / error_range) AS error_bin
  FROM errors CROSS JOIN error_stats
)
SELECT AVG(error) AS error, COUNT(error_bin) AS instances
FROM quantized_errors
GROUP BY error_bin
ORDER BY error_bin

%chart columns --data distribution_query --fields error,instances
samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb
googledatalab/notebooks
apache-2.0
Scikit-learn does not implement LDA. LDA with Yelp: An Example. This is based on the paper by McAuley & Leskovec (2013). References https://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf LDA Notes Only for text corpora? No, but it was the main thing that it was designed for. Seen as an alternative to tf-idf. Disadvantages of tf-idf are that it provides a relatively small amount of reduction in description length and reveals little in the way of inter- or intra-document statistical structure. What they mean by this is that it tells us nothing about the structure to know that "the" shows up more often in one document than the other. pLSI takes a step forward compared to LSI in that it models each word in a document from a mixture model. The problem is that pLSI provides no probabilistic model at the level of documents, which leads to overfitting. Both p/LSI are based on the bag-of-words approach. Other uses for LDA include collaborative filtering and content-based image retrieval. Dirichlet Distribution The Dirichlet distribution is parameterized with the number of categories, $K$, and a vector of concentration parameters, $\boldsymbol{\alpha}$. It is a distribution over multinomials. The craziness of LDA distributions The Dirichlet distribution is a generalization of the beta distribution. The beta distribution is itself a pretty crazy distribution: To make the Beta distribution extra confusing, the two parameters, $\alpha$ and $\beta$, are both abstract. A 3d distribution will model the distribution for three topics. Fun facts about the beta distribution: it models the behavior of random variables limited to intervals of finite length in a wide variety of disciplines; it is used in Bayesian inference as a conjugate prior probability distribution for various other distributions; it is defined between [0,1]. $\alpha$ and $\beta$ are concentration parameters. The smaller they are, the more sparse the distribution is.
Some people claim that it will be better. The beta distribution is built for proportions, which is good. Assumptions of LDA Dimensionality of the Dirichlet distribution is known and fixed. Word probabilities are parameterized by a k x V matrix $\beta$. LDA Important pictures and formulas Where (1) is the prior probability. Beta Distribution for AB testing? Proportions come out of a finite number of Bernoulli trials, and so they are not continuous. Because of this, the beta distribution is not really appropriate here. The normal distribution works here because of the CLT. That is an interesting point. Intuitive Interpretations of the Dirichlet parameters A common prior is the symmetric Dirichlet distribution, where all parameters are equal. This is the case of no priors.
%pylab inline
import numpy as np

s = np.random.dirichlet((10, 5, 3), 50).transpose()

plt.barh(range(50), s[0])
plt.barh(range(50), s[1], left=s[0], color='g')
plt.barh(range(50), s[2], left=s[0]+s[1], color='r')
_ = plt.title("Lengths of Strings")

[i.mean() for i in s]  # Mean of each of the lengths of the string

[i/18 for i in [10, 5, 3]]  # Predicted values of each of the lengths of the string
How to think about Latent Dirichlet Allocation.ipynb
Will-So/yelp-workbook
mit
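The concentration-parameter intuition above can be checked numerically (a sketch of our own, not part of the original notebook): with small, equal parameters, a symmetric Dirichlet puts most of its mass near the corners of the simplex, so individual draws are sparse; with large, equal parameters, draws concentrate near the uniform point (1/3, 1/3, 1/3):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 draws each from a sparse and a concentrated symmetric Dirichlet
sparse = rng.dirichlet([0.1, 0.1, 0.1], 1000)
dense = rng.dirichlet([10.0, 10.0, 10.0], 1000)

# With alpha = 0.1 the largest component of a draw is usually close to 1;
# with alpha = 10 all components stay close to the uniform value 1/3.
print(sparse.max(axis=1).mean())  # near 1
print(dense.max(axis=1).mean())   # a bit above 1/3
```

This is the same effect the notes describe: the smaller the (equal) concentration parameters, the sparser the sampled multinomials.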
scikit-learn Training on AI Platform This notebook uses the Census Income Data Set to demonstrate how to train a model on AI Platform. How to bring your model to AI Platform Getting your model ready for training can be done in 3 steps: 1. Create your python model file 1. Add code to download your data from Google Cloud Storage so that AI Platform can use it 1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model 1. Prepare a package 1. Submit the training job Prerequisites Before you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform. Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure. AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size. Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving. Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel. Part 0: Setup Create a project on GCP Create a Google Cloud Storage Bucket Enable AI Platform Training and Prediction and Compute Engine APIs Install Cloud SDK Install scikit-learn [Optional: used if running locally] Install pandas [Optional: used if running locally] These variables will be needed for the following steps. * TRAINER_PACKAGE_PATH &lt;./census_training&gt; - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path. * MAIN_TRAINER_MODULE &lt;census_training.train&gt; - Tells AI Platform which file to execute. 
This is formatted as follows <folder_name.python_file_name> * JOB_DIR &lt;gs://$BUCKET_NAME/scikit_learn_job_dir&gt; - The path to a Google Cloud Storage location to use for job output. * RUNTIME_VERSION &lt;1.9&gt; - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information. * PYTHON_VERSION &lt;3.5&gt; - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7. Replace: * PROJECT_ID &lt;YOUR_PROJECT_ID&gt; - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project. * BUCKET_NAME &lt;YOUR_BUCKET_NAME&gt; - with the bucket id you created above. * JOB_DIR &lt;gs://YOUR_BUCKET_NAME/scikit_learn_job_dir&gt; - with the bucket id you created above. * REGION &lt;REGION&gt; - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.
%env PROJECT_ID <PROJECT_ID>
%env BUCKET_NAME <BUCKET_NAME>
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR gs://<BUCKET_NAME>/scikit_learn_job_dir
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5

! mkdir census_training
notebooks/scikit-learn/TrainingWithScikitLearnInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
The data The Census Income Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/sklearn/census_data/. Training file is adult.data Evaluation file is adult.test (not used in this notebook) Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS. Disclaimer This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset. Part 1: Create your python model file First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a scikit-learn model. However, there are two key differences: 1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data. 1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions. The code in this file loads the data into a pandas DataFrame that can be used by scikit-learn. Then the model is fit against the training data. Lastly, sklearn's built in version of joblib is used to save the model to a file that can be uploaded to AI Platform's prediction service. REPLACE Line 18: BUCKET_NAME = '<BUCKET_NAME>' with your GCS BUCKET_NAME Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
%%writefile ./census_training/train.py
# [START setup]
import datetime
import pandas as pd

from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer

# TODO: REPLACE '<BUCKET_NAME>' with your GCS BUCKET_NAME
BUCKET_NAME = '<BUCKET_NAME>'
# [END setup]

# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
#    AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')

# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')
# [END download-data]

# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# Define the format of your input data including unused columns
# (These are the columns from the census data files)
COLUMNS = (
    'age',
    'workclass',
    'fnlwgt',
    'education',
    'education-num',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'capital-gain',
    'capital-loss',
    'hours-per-week',
    'native-country',
    'income-level'
)

# Categorical columns are columns that need to be turned into a
# numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
)

# Load the training census dataset
with open('./adult.data', 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# Remove the column we are trying to predict ('income-level') from our features list
# Convert the DataFrame to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, converting the DataFrame to a list of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# [END define-and-load-data]

# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []

# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
    if col in CATEGORICAL_COLUMNS:
        # Create a scores array to get the individual categorical column.
        # Example:
        #  data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
        #          'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
        #  scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #
        #  Returns: [['State-gov']]
        # Build the scores array
        scores = [0] * len(COLUMNS[:-1])
        # This column is the categorical column we want to extract.
        scores[i] = 1
        skb = SelectKBest(k=1)
        skb.scores_ = scores
        # Convert the categorical column to a numerical value
        lbn = LabelBinarizer()
        r = skb.transform(train_features)
        lbn.fit(r)
        # Create the pipeline to extract the categorical feature
        categorical_pipelines.append(
            ('categorical-{}'.format(i), Pipeline([
                ('SKB-{}'.format(i), skb),
                ('LBN-{}'.format(i), lbn)])))
# [END categorical-feature-conversion]

# [START create-pipeline]
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)

# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))

# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)

# Create the classifier
classifier = RandomForestClassifier()

# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)

# Create the overall model as a single pipeline
pipeline = Pipeline([
    ('union', preprocess),
    ('classifier', classifier)
])
# [END create-pipeline]

# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(pipeline, model)

# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
# [END export-to-gcs]
notebooks/scikit-learn/TrainingWithScikitLearnInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
[Optional] StackDriver Logging You can view the logs for your training job: 1. Go to https://console.cloud.google.com/ 1. Select "Logging" in the left-hand pane 1. Select the "Cloud ML Job" resource from the drop-down 1. In "filter by prefix", use the value of $JOB_NAME to view the logs [Optional] Verify Model File in GCS View the contents of the destination model folder to verify that the model file has indeed been uploaded to GCS. Note: The model can take a few minutes to train and show up in GCS.
! gsutil ls gs://$BUCKET_NAME/census_*
notebooks/scikit-learn/TrainingWithScikitLearnInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
We can see that the minimum temperature is -2.08 and the maximum is 19.02
""" Generemos una grafica de puntos, modificando el ancho y largo de la grafica para poder tener una mejor visualizacion de la informacion """ plt.figure(figsize = (15, 5)) plt.scatter(x = data['LandAverageTemperature'].index, y = data['LandAverageTemperature']) plt.title("Temperatura promedio de la tierra 1750-2015") plt.xlabel("Anio") plt.ylabel("Temperatura promedio de la Tierra") plt.show() # Vamos a probar solo graficar los anios # Verifiquemos el tipo de dato de la columna dt print(type(data['dt'][0])) # Vamos a convertir la columna a un objeto tiempo times = pd.DatetimeIndex(data['dt']) # Agrupamos la informacion por anios grouped = data.groupby([times.year]).mean() # graficamos plt.figure(figsize = (15, 5)) plt.plot(grouped['LandAverageTemperature']) # Change features of the graph plt.title("Temperatura media anual de la tierra 1750-2015") plt.xlabel("Anio") plt.ylabel("Temperatura promedio de la tierra") plt.show() # Que puede ocurrir en los primeros anios? grouped.head() # Vamos a checar que es lo que pasa en 1752 data[times.year == 1752] # Tenemos demasiada informacion nula data[np.isnan(data['LandAverageTemperature'])] # Usaremos el valor previo validado para llenar las observaciones nulas data['LandAverageTemperature'] = data['LandAverageTemperature'].fillna(method='ffill') # Reagrupamos la informacion y graficamos grouped = data.groupby([times.year]).mean() # La grafica se ve mejor, aunque no es perfecta plt.figure(figsize = (15, 5)) plt.plot(grouped['LandAverageTemperature']) plt.show()
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Data modeling
# To model the data, we need the sklearn library
from sklearn.linear_model import LinearRegression as LinReg

x = grouped.index.values.reshape(-1, 1)
y = grouped['LandAverageTemperature'].values

reg = LinReg()
reg.fit(x, y)
y_preds = reg.predict(x)
print("Score: " + str(reg.score(x, y)))

plt.figure(figsize=(15, 5))
plt.title("Linear Regression")
plt.scatter(x=x, y=y_preds)
plt.scatter(x=x, y=y, c="b")

# predict expects a 2-D array with one row per sample
reg.predict([[2050]])
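LinearRegression fits an ordinary least-squares line; the same slope and intercept can be recovered directly from the normal equations. A sketch on synthetic yearly data (the trend and noise level here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic yearly temperatures with a known linear trend of 0.005 deg/year
years = np.arange(1750, 2016, dtype=float)
temps = 8.0 + 0.005 * (years - 1750) + rng.normal(0.0, 0.2, years.size)

# Ordinary least squares: solve X beta ~= y for [intercept, slope]
X = np.column_stack([np.ones_like(years), years])
intercept, slope = np.linalg.lstsq(X, temps, rcond=None)[0]

# The fitted slope should be close to the true trend
print(abs(slope - 0.005) < 1e-3)  # True
```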
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Temperature by city (Aguascalientes)
# Read the data
data = pd.read_csv('../data/GlobalLandTemperaturesByCity.csv')
data.head()

# How many cities are available
data['City'].nunique()

# Verify that our city of interest is available
ciudades = np.array(data['City'])
"Aguascalientes" in ciudades

# Filter the data for the city of interest
data = data.loc[data['City'] == 'Aguascalientes']

# Keep only the columns we need
data = data[['dt','AverageTemperature','City']]
data.head()

# Convert the dt column to a datetime object
times = pd.DatetimeIndex(data['dt'])

# Group the data by year and plot
grouped = data.groupby([times.year]).mean()
grouped.plot.line(figsize=(15, 5))
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Data modeling
# Use the previous valid value to fill the null observations
data['AverageTemperature'] = data['AverageTemperature'].fillna(method='ffill')

# Regroup the data and plot
grouped = data.groupby([times.year]).mean()

# The plot looks better, although it is not perfect
plt.figure(figsize=(15, 5))
plt.plot(grouped['AverageTemperature'])
plt.show()

x = grouped.index.values.reshape(-1, 1)
y = grouped['AverageTemperature'].values

reg = LinReg()
reg.fit(x, y)
y_preds = reg.predict(x)
print("Score: " + str(reg.score(x, y)))

plt.figure(figsize=(15, 5))
plt.title("Linear Regression")
plt.scatter(x=x, y=y_preds)
plt.scatter(x=x, y=y, c="b")
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Let's predict the average temperature for 2018
# predict expects a 2-D array with one row per sample
reg.predict([[2018]])
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Let's predict the average temperature for 2050
reg.predict([[2050]])
algoritmos/data_science_regression.ipynb
jorgemauricio/INIFAP_Course
mit
Display the same with Python (matplotlib)
import m8r
gather = m8r.File('gather.rsf')

%matplotlib inline
import matplotlib.pylab as plt
import numpy as np

plt.imshow(np.transpose(gather[:,888:1280]),aspect='auto')
Swan.ipynb
sfomel/ipython
gpl-2.0
Apply moveout with incorrect velocity
%%file nmo.scons
Flow('nmo','gather','nmostretch half=n v0=1800')
Result('nmo','window f1=888 n1=200 | grey title=NMO')

view('nmo')
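For reference, constant-velocity NMO maps a recorded time $t$ at offset $x$ to the zero-offset time $t_0 = \sqrt{t^2 - x^2/v^2}$. A toy numpy sketch of that mapping (this illustrates the idea only, not the Madagascar nmostretch implementation; the offset and times are made up):

```python
import numpy as np

v = 1800.0                      # trial NMO velocity (m/s)
x = 1000.0                      # source-receiver offset (m)
t = np.linspace(1.0, 2.0, 5)    # recorded travel times (s)

# Zero-offset time after moveout correction; clip at zero for muted samples
t0 = np.sqrt(np.maximum(t**2 - (x / v)**2, 0.0))

# Moveout correction only shifts events to earlier (or equal) times
print(bool(np.all(t0 <= t)))  # True
```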
Swan.ipynb
sfomel/ipython
gpl-2.0
Slope estimation
%%file slope.scons
Flow('slope','nmo','dip rect1=100 rect2=5 order=2')
Result('slope','grey color=linearlfb mean=y scalebar=y title=Slope')

view('slope')
Swan.ipynb
sfomel/ipython
gpl-2.0
Non-physical flattening by predictive painting
%%file flat.scons
Flow('paint','slope','pwpaint order=2')
Result('paint','window f1=888 n1=200 | contour title=Painting')

Flow('flat','nmo paint','iwarp warp=${SOURCES[1]}')
Result('flat','window f1=888 n1=200 | grey title=Flattening')

view('paint')
view('flat')
Swan.ipynb
sfomel/ipython
gpl-2.0
Velocity estimation by time warping Predictive painting produces $t_0(t,x)$. Time warping converts it into $t(t_0,x)$.
%%file twarp.scons
Flow('twarp','paint','math output=x1 | iwarp warp=$SOURCE')
Result('twarp','window j1=20 | transp | graph yreverse=y min2=0.888 max2=1.088 pad=n title="Time Warping" ')

view('twarp')
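Conceptually, the warping step inverts a monotone time map: given $t_0(t)$ sampled on a regular $t$ grid, resample to get $t(t_0)$ on a regular $t_0$ grid. A 1-D numpy sketch of that inversion (the hyperbolic map below is an illustrative assumption, not data from the gather):

```python
import numpy as np

# Forward map t0(t), monotone increasing (illustrative hyperbolic relation)
t = np.linspace(1.0, 2.0, 101)
t0 = np.sqrt(t**2 - 0.5)

# Invert by swapping the roles of abscissa and ordinate in interpolation
t0_grid = np.linspace(t0[0], t0[-1], 101)
t_of_t0 = np.interp(t0_grid, t0, t)

# Round trip: the forward map applied to the inverse recovers the grid
print(bool(np.allclose(np.sqrt(t_of_t0**2 - 0.5), t0_grid, atol=1e-4)))  # True
```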
Swan.ipynb
sfomel/ipython
gpl-2.0
We now want to fit $t^2(t_0,x)-t_0^2 \approx \Delta S\,x^2$, where $\Delta S = \frac{1}{v^2} - \frac{1}{v_0^2}$. The least-squares fit is $\Delta S = \displaystyle \frac{\int x^2\left[t^2(t_0,x)-t_0^2\right]\,dx}{\int x^4\,dx}$. The velocity estimate is $v = \displaystyle \frac{v_0}{\sqrt{\Delta S\,v_0^2 + 1}}$.
%%file lsfit.scons
Flow('num','twarp','math output="(input*input-x1*x1)*x2^2" | stack norm=n')
Flow('den','twarp','math output="x2^4" | stack norm=n')
Flow('vel','num den','div ${SOURCES[1]} | math output="1800/sqrt(1800*1800*input+1)" ')
Result('vel',
       '''
       window f1=888 n1=200 |
       graph yreverse=y transp=y title="Estimated Velocity"
       label2=Velocity unit2=m/s grid2=y pad=n min2=1950 max2=2050
       ''')

view('vel')
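The least-squares formulas above can be sanity-checked in plain numpy on noiseless synthetic moveout (the velocities and offset range here are illustrative choices):

```python
import numpy as np

v0, v_true = 1800.0, 2000.0
dS_true = 1.0 / v_true**2 - 1.0 / v0**2

# Synthetic warped times from the hyperbolic relation t^2 = t0^2 + dS x^2
t0 = 1.0
x = np.linspace(0.0, 1000.0, 101)
t_sq = t0**2 + dS_true * x**2

# Least-squares fit: dS = sum(x^2 (t^2 - t0^2)) / sum(x^4)
dS = np.sum(x**2 * (t_sq - t0**2)) / np.sum(x**4)

# Velocity estimate: v = v0 / sqrt(dS v0^2 + 1)
v_est = v0 / np.sqrt(dS * v0**2 + 1.0)
print(abs(v_est - v_true) < 1e-6)  # True
```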
Swan.ipynb
sfomel/ipython
gpl-2.0
Last step - physical flattening
%%file nmo2.scons
Flow('nmo2','gather vel','nmo half=n velocity=${SOURCES[1]}')
Result('nmo2','window f1=888 n1=200 | grey title="Physical Flattening" ')

view('nmo2')
Swan.ipynb
sfomel/ipython
gpl-2.0
The number of Sobol samples to use at each order is arbitrary, but for comparison we select the same number as there are Gauss nodes:
from monte_carlo_integration import sobol_samples

sobol_nodes = [
    sobol_samples[:, :nodes.shape[1]]
    for nodes in gauss_nodes
]

from matplotlib import pyplot

pyplot.rc("figure", figsize=[12, 4])

pyplot.subplot(121)
pyplot.scatter(*gauss_nodes[4])
pyplot.title("Gauss quadrature nodes")

pyplot.subplot(122)
pyplot.scatter(*sobol_nodes[4])
pyplot.title("Sobol nodes")

pyplot.show()
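Sobol points form a low-discrepancy sequence that fills the unit hypercube more evenly than pseudo-random sampling. If the helper module is unavailable, similar nodes can be generated with scipy (a sketch, not the samples the notebook actually uses):

```python
import numpy as np
from scipy.stats import qmc

# 2-D Sobol sequence; use a power-of-two sample count for balance
sampler = qmc.Sobol(d=2, scramble=False)
pts = sampler.random(16)

print(pts.shape)  # (16, 2)
print(bool(np.all((pts >= 0) & (pts < 1))))  # True
```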
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
Evaluating model solver As with the problem formulation, evaluating the model solver is straightforward:
import numpy

from problem_formulation import model_solver

gauss_evals = [
    numpy.array([model_solver(node) for node in nodes.T])
    for nodes in gauss_nodes
]
sobol_evals = [
    numpy.array([model_solver(node) for node in nodes.T])
    for nodes in sobol_nodes
]

from problem_formulation import coordinates

pyplot.subplot(121)
pyplot.plot(coordinates, gauss_evals[4].T, alpha=0.3)
pyplot.title("Gauss evaluations")

pyplot.subplot(122)
pyplot.plot(coordinates, sobol_evals[4].T, alpha=0.3)
pyplot.title("Sobol evaluations")

pyplot.show()
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
Select polynomial expansion Unlike pseudo-spectral projection, point collocation does not require the polynomials to be orthogonal. For numerical stability, however, orthogonal polynomials have been shown to work well. They can be constructed with the chaospy.generate_expansion() function:
import chaospy

from problem_formulation import joint

expansions = [chaospy.generate_expansion(order, joint) for order in range(1, 10)]
expansions[0].round(10)
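Orthogonality here means the expansion terms have zero inner product under the weight distribution. A quick numerical check using Legendre polynomials and a uniform weight (a stand-in for the problem's actual joint distribution):

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre polynomials are orthogonal on [-1, 1] with a uniform weight
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
P2 = legendre.Legendre.basis(2)(x)
P3 = legendre.Legendre.basis(3)(x)

# Inner products by a simple Riemann sum
cross = np.sum(P2 * P3) * dx   # different degrees: ~0
norm = np.sum(P2 * P2) * dx    # same degree: positive

print(abs(cross) < 1e-6, norm > 0.0)  # True True
```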
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
Solve the linear regression problem With all samples $Q_1, ..., Q_N$, model evaluations $U_1, ..., U_N$ and polynomial expansion $\Phi_1, ..., \Phi_M$, we can put everything together to solve the equations: $$ U_n = \sum_{m=1}^M c_m(t)\ \Phi_m(Q_n) \qquad n = 1, ..., N $$ with respect to the coefficients $c_1, ..., c_M$. This can be done using the helper function chaospy.fit_regression():
gauss_model_approx = [
    chaospy.fit_regression(expansion, samples, evals)
    for expansion, samples, evals in zip(expansions, gauss_nodes, gauss_evals)
]
sobol_model_approx = [
    chaospy.fit_regression(expansion, samples, evals)
    for expansion, samples, evals in zip(expansions, sobol_nodes, sobol_evals)
]

pyplot.subplot(121)
model_approx = gauss_model_approx[1]
evals = model_approx(*gauss_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Gaussian approximation")

pyplot.subplot(122)
model_approx = sobol_model_approx[1]
evals = model_approx(*sobol_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Sobol approximation")

pyplot.show()
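Under the hood, fitting the coefficients $c_m$ is an ordinary least-squares problem with design matrix $\Phi_m(Q_n)$; a minimal 1-D numpy sketch (the quadratic model and sample values are illustrative):

```python
import numpy as np

# Samples Q_n and model evaluations U_n for a known quadratic model
Q = np.linspace(-1.0, 1.0, 9)
U = 1.0 + 2.0 * Q + 3.0 * Q**2

# Design matrix: column m holds Phi_m evaluated at the samples
Phi = np.column_stack([np.ones_like(Q), Q, Q**2])

# Solve U ~= Phi c in the least-squares sense
c, *_ = np.linalg.lstsq(Phi, U, rcond=None)
print(np.round(c, 6))  # [1. 2. 3.]
```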
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
Descriptive statistics The expected value and variance are calculated as follows:
expected = chaospy.E(gauss_model_approx[-2], joint)
std = chaospy.Std(gauss_model_approx[-2], joint)

expected[:4].round(4), std[:4].round(4)

pyplot.rc("figure", figsize=[6, 4])

pyplot.xlabel("coordinates")
pyplot.ylabel("model approximation")
pyplot.fill_between(
    coordinates, expected-2*std, expected+2*std, alpha=0.3)
pyplot.plot(coordinates, expected)

pyplot.show()
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
Error analysis It is hard to assess how well these models are doing from the final estimation alone; they look about the same. To compare the results, we therefore perform an error analysis, using the reference analytical solution and the error function defined in the problem formulation.
from problem_formulation import error_in_mean, error_in_variance

error_in_mean(expected), error_in_variance(std**2)
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
The analysis can be performed as follows:
sizes = [nodes.shape[1] for nodes in gauss_nodes]

eps_gauss_mean = [
    error_in_mean(chaospy.E(model, joint))
    for model in gauss_model_approx
]
eps_gauss_var = [
    error_in_variance(chaospy.Var(model, joint))
    for model in gauss_model_approx
]
eps_sobol_mean = [
    error_in_mean(chaospy.E(model, joint))
    for model in sobol_model_approx
]
eps_sobol_var = [
    error_in_variance(chaospy.Var(model, joint))
    for model in sobol_model_approx
]

pyplot.rc("figure", figsize=[12, 4])

pyplot.subplot(121)
pyplot.title("Error in mean")
pyplot.loglog(sizes, eps_gauss_mean, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_mean, "--", label="Sobol")
pyplot.legend()

pyplot.subplot(122)
pyplot.title("Error in variance")
pyplot.loglog(sizes, eps_gauss_var, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_var, "--", label="Sobol")

pyplot.show()
docs/user_guide/main_usage/point_collocation.ipynb
jonathf/chaospy
mit
<h2>Example: Symmetric IPVP</h2> After a bit of algebra (see <a href="https://github.com/davidrpugh/zice-2014/blob/master/solving-auctions/Hubbard%20and%20Paarsch%20(2013).pdf">Hubbard and Paarsch (2013)</a> for details), all Symmetric Independent Private Values Paradigm (IPVP) models can be reduced to a single non-linear ordinary differential equation (ODE) with an initial condition describing the behavior of the equilibrium bidding function $\sigma(v)$... $$\sigma'(v) = \frac{(N - 1)vf(v)}{F(v)} - \frac{\sigma(v)(N-1)f(v)}{F(v)},\quad \sigma(\underline{v}) = \underline{v} $$ ...where $f$ and $F$ are the probability density function and the cumulative distribution function, respectively, for the valuations and $N$ is the number of bidders.
import functools


class SymmetricIPVPModel(pycollocation.problems.IVP):

    def __init__(self, f, F, params):
        rhs = self._rhs_factory(f, F)
        super(SymmetricIPVPModel, self).__init__(self._initial_condition, 1, 1, params, rhs)

    @staticmethod
    def _initial_condition(v, sigma, v_lower, **params):
        return [sigma - v_lower]

    @staticmethod
    def _symmetric_ipvp_model(v, sigma, f, F, N, **params):
        return [(((N - 1) * f(v, **params)) / F(v, **params)) * (v - sigma)]

    @classmethod
    def _rhs_factory(cls, f, F):
        return functools.partial(cls._symmetric_ipvp_model, f=f, F=F)


def valuation_cdf(v, v_lower, v_upper, **params):
    return stats.uniform.cdf(v, v_lower, v_upper - v_lower)


def valuation_pdf(v, v_lower, v_upper, **params):
    return stats.uniform.pdf(v, v_lower, v_upper - v_lower)


params = {'v_lower': 1.0, 'v_upper': 2.0, 'N': 10}
symmetric_ipvp_ivp = SymmetricIPVPModel(valuation_pdf, valuation_cdf, params)
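Before bringing in pyCollocation, the ODE above can be sanity-checked with a generic initial-value solver. Since $F(\underline{v}) = 0$ makes the right-hand side singular at the lower endpoint, the sketch below (an illustration, not part of the notebook's workflow) starts a small step above it and compares against the known solution for uniform valuations on $[1, 2]$, $\sigma(v) = v - (v - \underline{v})/N$:

```python
import numpy as np
from scipy import stats
from scipy.integrate import solve_ivp

v_lower, v_upper, N = 1.0, 2.0, 10

def f(v):
    return stats.uniform.pdf(v, v_lower, v_upper - v_lower)

def F(v):
    return stats.uniform.cdf(v, v_lower, v_upper - v_lower)

def rhs(v, sigma):
    return [((N - 1) * f(v) / F(v)) * (v - sigma[0])]

# Start slightly above v_lower to avoid the F(v_lower) = 0 singularity;
# LSODA copes with the stiffness near the left endpoint
eps = 1e-6
sol = solve_ivp(rhs, (v_lower + eps, v_upper), [v_lower + eps],
                method='LSODA', dense_output=True, rtol=1e-10, atol=1e-12)

v = 1.5
sigma_exact = v - (v - v_lower) / N
print(abs(sol.sol(v)[0] - sigma_exact) < 1e-4)  # True
```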
examples/auction-models.ipynb
davidrpugh/pyCollocation
mit
<h2>Solving the model with pyCollocation</h2> Finding a good initial guess for $\sigma(v)$ Theory tells us that the bidding function should be monotonically increasing in the valuation: higher valuations lead to higher bids.
def initial_mesh(v_lower, v_upper, num, problem):
    """Guess that all participants bid their true valuations."""
    vs = np.linspace(v_lower, v_upper, num)
    return vs, vs
examples/auction-models.ipynb
davidrpugh/pyCollocation
mit
Solving the model
pycollocation.solvers.LeastSquaresSolver?

polynomial_basis = pycollocation.basis_functions.PolynomialBasis()
solver = pycollocation.solvers.LeastSquaresSolver(polynomial_basis)

# compute the initial mesh
boundary_points = (symmetric_ipvp_ivp.params['v_lower'], symmetric_ipvp_ivp.params['v_upper'])
vs, sigmas = initial_mesh(*boundary_points, num=1000, problem=symmetric_ipvp_ivp)

# compute the initial coefs
basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 2}
sigma_poly = polynomial_basis.fit(vs, sigmas, **basis_kwargs)
initial_coefs = sigma_poly.coef

solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
                        symmetric_ipvp_ivp, full_output=True)
solution.result

sigma_soln, = solution.evaluate_solution(vs)
plt.plot(vs, sigma_soln)
plt.show()

sigma_resids, = solution.evaluate_residual(vs)
plt.plot(vs, sigma_resids)
plt.show()

sigma_normalized_resids, = solution.normalize_residuals(vs)
plt.plot(vs, np.abs(sigma_normalized_resids))
plt.yscale('log')
plt.show()

def analytic_solution(v, N, v_lower, v_upper, **params):
    """
    Solution for a symmetric IPVP auction with uniform valuations.

    Notes
    -----
    There is a generic closed form solution for this class of auctions.
    Annoyingly it involves integrating a function of the cumulative
    distribution function for valuations.

    """
    return v - (1.0 / N) * valuation_cdf(v, v_lower, v_upper)

plt.plot(vs, analytic_solution(vs, **symmetric_ipvp_ivp.params))
plt.plot(vs, sigma_soln)
plt.show()
examples/auction-models.ipynb
davidrpugh/pyCollocation
mit
<h2>Example: Asymmetric IPVP</h2> After a bit of algebra (see <a href="https://github.com/davidrpugh/zice-2014/blob/master/solving-auctions/Hubbard%20and%20Paarsch%20(2013).pdf">Hubbard and Paarsch (2013)</a> for details), all Asymmetric Independent Private Values Paradigm (IPVP) models can be reduced to a system of non-linear ordinary differential equations (ODEs) with associated boundary conditions... \begin{align} \phi_n'(s) =& \frac{F_n(\phi_n(s))}{f_n(\phi_n(s))}\Bigg[\frac{1}{N-1}\sum_{m=1}^N \frac{1}{\phi_m(s) - s} - \frac{1}{\phi_n(s)}\Bigg] \quad \forall n=1,\dots,N \\ \phi_n(\underline{s}) =& \underline{v} \quad \forall n=1,\dots,N \\ \phi_n(\overline{s}) =& \overline{v} \quad \forall n=1,\dots,N \end{align} ...where $f_n$ and $F_n$ are the probability density function and the cumulative distribution function, respectively, for the valuation of bidder $n$ and $N$ is the number of bidders.
def rhs_bidder_n(n, s, phis, f, F, N, **params):
    # phis[n] is bidder n's inverse-bid function, so phis[n](s) is the
    # valuation of the bidder n who bids s
    phi_n = phis[n](s)
    A = F(phi_n, **params) / f(phi_n, **params)
    B = (1 / (N - 1)) * sum(1 / (phi(s) - s) for phi in phis) - (1 / phi_n)
    return A * B


def asymmetric_ipvp_model(s, *phis, fs=None, Fs=None, N=2, **params):
    return [rhs_bidder_n(n, s, phis, f, F, N, **params)
            for n, (f, F) in enumerate(zip(fs, Fs))]
examples/auction-models.ipynb
davidrpugh/pyCollocation
mit
This notebook shows a typical workflow to query a Catalog Service for the Web (CSW) and create a request for data endpoints that are suitable for download. The catalog of choice is the NGDC geoportal (http://www.ngdc.noaa.gov/geoportal/csw) and we want to query it using a geographical bounding box, a time range, and a variable of interest. The example below will fetch Sea Surface Temperature (SST) data from all available observations and models in the Boston Harbor region. The goal is to assess the water temperature for the Boston Light Swim event. We will search for data $\pm$ 4 days centered on the event date.
from datetime import datetime, timedelta

event_date = datetime(2015, 8, 15)

start = event_date - timedelta(days=4)
stop = event_date + timedelta(days=4)
content/downloads/notebooks/2015-10-12-fetching_data.ipynb
ioos/system-test
unlicense
The bounding box is slightly larger than the Boston harbor to assure we get some data.
spacing = 0.25

bbox = [-71.05-spacing, 42.28-spacing, -70.82+spacing, 42.38+spacing]
content/downloads/notebooks/2015-10-12-fetching_data.ipynb
ioos/system-test
unlicense
The CF_names object is just a Python dictionary whose keys are SOS names and whose values contain all possible combinations of temperature variable names in the CF conventions. Note that we also define a units object. We use the units object to coerce all the data to Celsius.
import iris

from utilities import CF_names

sos_name = 'sea_water_temperature'
name_list = CF_names[sos_name]

units = iris.unit.Unit('celsius')
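For context, CF_names has the shape of a plain dictionary mapping each SOS name to a list of candidate CF standard names; a hypothetical subset (these entries are illustrative, not the utilities module's actual contents):

```python
# Hypothetical subset of a CF_names-style mapping (illustrative only)
CF_names = {
    'sea_water_temperature': [
        'sea_water_temperature',
        'sea_surface_temperature',
        'sea_water_potential_temperature',
    ],
}

name_list = CF_names['sea_water_temperature']
print('sea_surface_temperature' in name_list)  # True
```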
content/downloads/notebooks/2015-10-12-fetching_data.ipynb
ioos/system-test
unlicense