Now let's plot the small test graph:
plot(testgraph)
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
Axis Bank Stock Data Analysis Project Blog Post
> Data Analysis of the Axis Bank stock market time-series dataset.
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/stockdataimg.jpg

Axis Bank Stock Data Analysis
The project is based on a dataset I obtained from Kaggle. The analysis I am perfo...
!pip install jovian opendatasets --upgrade --quiet
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let's begin by downloading the data, and listing the files within the dataset.
# Change this
dataset_url = 'https://www.kaggle.com/rohanrao/nifty50-stock-market-data'
import opendatasets as od
od.download(dataset_url)
Skipping, found downloaded files in "./nifty50-stock-market-data" (use force=True to force download)
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
The dataset has been downloaded and extracted.
# Change this
data_dir = './nifty50-stock-market-data'
import os
os.listdir(data_dir)
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let us save and upload our work to Jovian before continuing.
project_name = "nifty50-stockmarket-data"  # change this (use lowercase letters and hyphens only)
!pip install jovian --upgrade -q
import jovian
jovian.commit(project=project_name)
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Data Preparation and Cleaning
Data preparation and cleaning constitutes the first part of any data analysis project. We do this in order to retain the valuable data in the data frame, the data that is relevant to our analysis. The process is also used to remove erroneous values from the dataset...
import pandas as pd
import numpy as np

axis_df = pd.read_csv(data_dir + "/AXISBANK.csv")
axis_df.info()
axis_df.describe()
axis_df
axis_df['Symbol'] = np.where(axis_df['Symbol'] == 'UTIBANK', 'AXISBANK', axis_df['Symbol'])
axis_df
axis_new_df = axis_df.drop(['Last','Series', 'VWAP', 'Trades','Deliverable Volume','%De...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Summary of the operations done till now:
1. We took a CSV file containing stock data of Axis Bank from the NIFTY50 stocks dataset and performed data-cleaning operations on it.
2. Originally, the dataset contains stock price quotations from the year 2001, but for our analysis we have taken...
axis_new_df.reset_index(drop=True, inplace=True)
axis_new_df
axis_new_df['Date'] = pd.to_datetime(axis_new_df['Date'])  # we changed the Dates into Datetime format from the object format
axis_new_df.info()
axis_new_df['Daily Lag'] = axis_new_df['Close'].shift(1)  # Added a new column Daily Lag to calculate daily returns...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
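The truncated cell above shifts the close price into a `Daily Lag` column to derive daily returns. A minimal sketch on toy prices (hypothetical values; the notebook's exact return formula is cut off, so the usual percent-change definition is assumed):

```python
import pandas as pd

# Hypothetical close prices standing in for axis_new_df['Close']
df = pd.DataFrame({'Close': [100.0, 110.0, 99.0]})
df['Daily Lag'] = df['Close'].shift(1)                           # previous day's close
df['Daily Returns'] = (df['Close'] / df['Daily Lag'] - 1) * 100  # percent change

print(df['Daily Returns'].round(2).tolist())  # [nan, 10.0, -10.0]
```

The first row has no previous close, so its return is NaN; downstream aggregations should account for that.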
Exploratory Analysis and Visualization
Here we compute the mean and the max/min stock quotes of the AXISBANK stock. We specifically compute the mean of the Daily Returns column. We do the analysis by first converting the date-wise index to month-wise, to have a consolidated dataframe we can analyze over broad tim...
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 10
matplotlib.rcParams['figure.figsize'] = (15, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Here we explore the Daily Returns column by plotting a line graph of daily returns versus months. We can see that daily returns grew across months over the years 2019-2021.
axis_dailyret_plot = axis_dailyret_df.groupby(axis_dailyret_df['Date'].dt.strftime('%B'))['Daily Returns'].sum().sort_values()
plt.plot(axis_dailyret_plot)
axis_new_df['Year'] = pd.DatetimeIndex(axis_new_df['Date']).year
axis_new_df
axis2019_df = axis_new_df[axis_new_df.Year == 2019]
axis2020_df = axis_new...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
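One caveat about the month-wise grouping above: `dt.strftime('%B')` produces month *names*, so the group keys sort alphabetically rather than in calendar order. A minimal sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical daily-returns frame standing in for axis_dailyret_df
df = pd.DataFrame({
    'Date': pd.to_datetime(['2020-01-05', '2020-01-06', '2020-02-03']),
    'Daily Returns': [1.0, 2.0, 5.0],
})
# Keys are month names ('January', 'February', ...), grouped alphabetically
monthly = df.groupby(df['Date'].dt.strftime('%B'))['Daily Returns'].sum()
print(monthly.to_dict())
```

Using `.sort_values()` afterwards, as the notebook does, orders the line plot by return magnitude rather than chronologically, which is worth keeping in mind when reading the x-axis.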
Summary of the above exploratory analysis:
In the code cells above, we plotted the data by exploring a column from it. We divided the DataFrame into three data frames containing the stock quote data year-wise, i.e., for the years 2019, 2020, and 2021. For dividing the DataFrame year-wise we have ad...
axis_range_df = axis_dailyret_df['Daily Returns'].max() - axis_dailyret_df['Daily Returns'].min()
axis_range_df
axis_mean_df = axis_dailyret_df['Daily Returns'].mean()
axis_mean_df
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
In the above two code cells, we computed the range, i.e. the difference between the maximum and minimum values of the column. We also calculated the mean of the daily returns of the Axis Bank stock. Exploratory analysis of stock quotes year-wise for Axis Bank: In this section we have plotted the Closing values of t...
plt.plot(axis2019_df['Date'], axis2019_df['Close'])
plt.title('Closing Values of stock for the year 2019')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')
plt.plot(axis2020_df['Date'], axis2020_df['Close'])
plt.title('Closing Values of stock for the year 2020')
plt.xlabel(None)
plt.ylabel('Closing price of the...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
**TODO** - Explore one or more columns by plotting a graph below, and add some explanation about it
plt.style.use('fivethirtyeight')
plt.plot(axis2019_df['Date'], axis2019_df['Close'], linewidth=3, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df['Close'], linewidth=3, label='2020')
plt.legend(loc='best')
plt.title('Closing Values of stock for the years 2019 and 2020')
plt.xlabel(None)
plt.ylabel('Closing pr...
['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-...
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let us save and upload our work to Jovian before continuing.
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Asking and Answering Questions
In this section, we are going to answer some questions about the dataset using data analysis libraries such as NumPy, Pandas, Matplotlib, and Seaborn. Using these tools, we can see how handy the libraries are while doing inference on a dataset. > Instructions (de...
plt.plot(axis2019_df['Date'], axis2019_df['Close'], linewidth=3, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df['Close'], linewidth=3, label='2020')
plt.plot(axis2021_df["Date"], axis2021_df['Close'], linewidth=3, label='2021')
plt.legend(loc='best')
plt.title('Closing Price of stock for the years 2019-20...
The Maximum closing price of the stock during 2019-2021 is 822.8 The Minimum closing price of the stock during 2019-2021 is 303.15 The Index for the Maximum closing price in the dataframe is [(105, 'Prev Close'), (104, 'Close'), (105, 'Daily Lag')] The Index for the Minimum closing price in the dataframe is [(304, 'Pre...
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
* As we can see from one of the two plots above, there was a dip in the closing price during the year 2020. The maximum closing price occurred on 2019-06-04 (Close = 822.8). The lowest closing price during these years occurred on 2020-03-24 (Close = 303.15). This suggests that the start of the pandemic caused the ...
plt.plot(axis2019_df["Date"], axis2019_df["Volume"], linewidth=2, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df["Volume"], linewidth=2, label='2020')
plt.plot(axis2021_df["Date"], axis2021_df["Volume"], linewidth=2, label='2021')
plt.legend(loc='best')
plt.title('Volume of stock traded in the years 2019-2021(...
The Maximum volume of the stock traded during 2019-2021 is 96190274 The Minimum volume of the stock traded during 2019-2021 is 965772 The Index for the Maximum volume stock traded in the dataframe is [(357, 'Volume')] The Index for the Minimum volume stock traded in the dataframe is [(200, 'Volume')] Date 2...
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
As we can see from the graph above, a large volume of trade happened during 2020, meaning the stock was transacted heavily that year. The highest volume of stock was traded on 2020-06-16 (Volume = 96190274) and the minimum volume traded during 2019-2021 was on 2019-10-27 (Volume = 965772). Q2: Wha...
# axis_new_df['Daily Returns'].plot(title='Axis Bank Daily Returns')
plt.plot(axis_new_df['Date'], axis_new_df['Daily Returns'], linewidth=2, label='Daily Returns')
plt.legend(loc='best')
plt.title('Daily Returns of stock for the years 2019-2021(Till April)')
plt.xlabel(None)
plt.ylabel('Daily Returns of the stock')
p...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
As we can see from the plot, there were high daily returns for the stock around late March 2020, followed by ups and downs from April to July 2020. Most of the changes in daily returns occurred during April-July 2020; at other times the daily returns were almost flat. The maximum daily return...
Avgdailyret_2019 = axis2019_df['Daily Returns'].sum()/len(axis2019_df['Daily Returns'])
Avgdailyret_2020 = axis2020_df['Daily Returns'].sum()/len(axis2020_df['Daily Returns'])
Avgdailyret_2021 = axis2021_df['Daily Returns'].sum()/len(axis2021_df['Daily Returns'])
# create a dataset
data_dailyret = {'2019': Avgdailyret_20...
/opt/conda/lib/python3.9/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). war...
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
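The `sum()/len()` pattern used above equals `Series.mean()` only when the column has no missing values, since `mean()` skips NaN while `len()` counts every row. A quick check on toy numbers:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 6.0])  # hypothetical daily-returns values
avg_manual = s.sum() / len(s)
avg_builtin = s.mean()
print(avg_manual, avg_builtin)  # 3.0 3.0
```

Because the first row of a lagged returns column is NaN, `.mean()` is usually the safer choice here.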
Q3: What is the Average Trading volume of the stock for past three years?
Avgvol_2019 = axis2019_df['Volume'].sum()/len(axis2019_df['Volume'])
Avgvol_2020 = axis2020_df['Volume'].sum()/len(axis2020_df['Volume'])
Avgvol_2021 = axis2021_df['Volume'].sum()/len(axis2021_df['Volume'])
# create a dataset
data_volume = {'2019': Avgvol_2019, '2020': Avgvol_2020, '2021': Avgvol_2021}
Years = list(data_vol...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
From the above plot we can say that a greater volume of the Axis Bank stock was traded during the year 2020. We can see a significant rise in the trading volume of the stock from 2019 to 2020. Q4: What is the average closing price of the stock for the past three years?
Avgclose_2019 = axis2019_df['Close'].sum()/len(axis2019_df['Close'])
Avgclose_2020 = axis2020_df['Close'].sum()/len(axis2020_df['Close'])
Avgclose_2021 = axis2021_df['Close'].sum()/len(axis2021_df['Close'])
# create a dataset
data_volume = {'2019': Avgclose_2019, '2020': Avgclose_2020, '2021': Avgclose_2021}
Years = list(da...
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
We have seen that the trading volume of the stock was highest during the year 2020. In contrast, the year 2020 has the lowest average closing price of the three. For the years 2019 and 2021 the average closing price is almost the same; there is not much change in the value. Let us save and upload our work to Jovian bef...
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Inferences and Conclusion
Inferences: The above data analysis was done on the dataset of stock quotes for Axis Bank during the years 2019-2021. From the analysis we can say that during the year 2020 there was a lot of unsteady growth and a rise in the volume of stock traded on the exchange, which means t...
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
References and Future Work
Future ideas for the analysis:
* I am planning to go forward with this basic analysis of the AXISBANK stock quotes and build a machine learning model predicting future stock prices.
* I plan to automate the data analysis process for every stock in the NIFTY50 index by defining reusable funct...
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Array Interview Question Anagram Check
An anagram rearranges the letters of a word, in any order, to form another word; any amount of whitespace is allowed. For example, "apple" -> "ap e lp".
def anagram(s1, s2):
    l_bound = ord('0')
    r_bound = ord('z')
    appeared = [0]*(r_bound - l_bound)
    for letter in s1:
        if letter != ' ':
            mapping = ord(letter) - l_bound
            appeared[mapping] += 1
    for letter in s2:
        if letter != ' ':
            mapping = ord(letter)...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
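The anagram cell above is cut off in this copy; a complete dictionary-based variant (a dict also sidesteps the symbol limitation discussed next) could look like:

```python
from collections import Counter

def anagram_dict(s1, s2):
    # Anagrams have identical character counts once spaces are removed
    return Counter(s1.replace(' ', '')) == Counter(s2.replace(' ', ''))

print(anagram_dict('apple', 'ap e lp'))  # True
print(anagram_dict('apple', 'apples'))   # False
```

This is a sketch, not the notebook's original array-indexed solution; `Counter` handles any character, including symbols.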
My solution here may not be complete, because the array mapping only covers digits and letters; if symbols appear, it cannot handle them. Of course a dict would solve this annoying problem, but since this is an array-category question, I deliberately solved it with an array only. Array Pair Sum Given an array of numbers, find all pairs of numbers that add up to a specific value k, e.g.
```python
pair_sum([1,3,2,2], 4)
(1,3)
(2,2)
# We only need to return how many pairs there are, so the answer is the number 2
```
def pair_sum(arr, k):
    res = [False]*len(arr)
    for i in range(len(arr)-1):
        for j in range(i+1, len(arr)):
            if arr[i] + arr[j] == k:
                res[i] = True
                res[j] = True
    pair_count = [1 for ele in res if ele == True]
    return len(p...
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
The above runs in $O(n^2)$. If we can use a dict or set, we can bring it down to $O(n)$, because a lookup like `n in dict` costs only $O(1)$, while searching an array for a value costs $O(n)$. Below we switch to a set/dict implementation.
def pair_sum_set_version(arr, k):
    to_seek = set()
    output = set()
    for num in arr:
        target = k - num
        if target not in to_seek:
            to_seek.add(num)
        else:
            output.add((min(num, target), max(num, target)))
    return len(output)

class...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
finding missing element
In this problem you are given two arrays; the second array is the first with one element randomly removed, then shuffled. Your task is to find the missing element.
def finder(ary, ary2):
    table = {}
    for ele in ary:
        if ele in table:
            table[ele] += 1
        else:
            table[ele] = 1
    for ele in ary2:
        if ele in table:
            table[ele] -= 1
        else:
            return ele
    for k, v in table.items():
        if ...
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
The logic above becomes more concise if we build the table from ary2 first; it also removes the final step:
```python
for ele in ary2:
    table[ele] = table.get(ele, 0) + 1
for ele in ary1:
    if (ele not in table) or (table[ele] == 0):
        return ele
    else:
        table[ele] -= 1
```
This solution is essentially the fastest, since sorting would cost at least $n \log n$ and you would then just loop over the sorted arrays looking for the mismatched element. There is also a devilishly clever solution I never would have thought of: using XOR. Let's look at the code first. xor (exclusive or) is exclusive...
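The XOR code referred to above is cut off in this copy; the idea is that XOR-ing every element of both arrays cancels each paired value, since `x ^ x == 0`, leaving only the missing element. A sketch:

```python
def finder_xor(ary1, ary2):
    # x ^ x == 0 and x ^ 0 == x, so every paired value cancels out
    result = 0
    for num in ary1 + ary2:
        result ^= num
    return result

print(finder_xor([5, 5, 7, 7], [5, 7, 7]))  # 5
```

This runs in $O(n)$ time with $O(1)$ extra space, and it works with duplicates because cancellation happens per occurrence.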
import unittest

class TestFinder(unittest.TestCase):
    def test(self, solve):
        self.assertEqual(solve([5,5,7,7],[5,7,7]), 5)
        self.assertEqual(solve([1,2,3,4,5,6,7],[3,7,2,1,4,6]), 5)
        self.assertEqual(solve([9,8,7,6,5,4,3,2,1],[9,8,7,5,4,3,2,1]), 6)
        print('success')

t = TestFinder()
...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
largest continuous sum
You are given an array, and your task is to find which run of consecutive numbers has the largest sum. It is not necessarily all of the numbers added together, because the array can contain negatives; the maximum may come from a run of X consecutive numbers starting at some position.
def lar_con_sum(ary):
    if len(ary) == 0:
        return 0
    max_sum = cur_sum = ary[0]
    for num in ary[1:]:
        cur_sum = max(cur_sum + num, num)
        max_sum = max(cur_sum, max_sum)
    return max_sum
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
The idea here: the maximum sum over runs of length up to n must build on the maximum over runs of length up to n-1. At index 0 there is only one element, so it is itself the maximum. At index 1, we compare ele[0]+ele[1] against ele[1], take the larger as the current sum, and compare that with the best seen so far. Note that we must keep the current sum (cur_sum) separately, because it handles the case where a negative number is followed by the start of another candidate maximum; that candidate (cur_sum) is still compared against the previous maximum (max_sum).
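Tracing the recurrence makes it concrete; this variant of the function above prints (num, cur_sum, max_sum) at every step:

```python
def lar_con_sum_trace(ary):
    # Same recurrence as lar_con_sum, with the running state printed
    max_sum = cur_sum = ary[0]
    for num in ary[1:]:
        cur_sum = max(cur_sum + num, num)  # extend the current run, or restart at num
        max_sum = max(cur_sum, max_sum)   # best run seen so far
        print(num, cur_sum, max_sum)
    return max_sum

print(lar_con_sum_trace([1, 2, -1, 3, 4, -1]))  # 9
```

On this input, cur_sum dips to 2 at the -1 but max_sum holds the best prefix, then the run 3, 4 pushes the answer to 9.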
import unittest

class TestLargestConSum(unittest.TestCase):
    def test(self, solve):
        self.assertEqual(solve([1,2,-1,3,4,-1]), 9)
        self.assertEqual(solve([1,2,-1,3,4,10,10,-10,-1]), 29)
        self.assertEqual(solve([-1,1]), 1)
        self.assertEqual(solve([1,2,-10,5,6]), 11)
        print('success')
...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
Sentence Reversal
Given a string, reverse the word order. For example: 'here it is' -> 'is it here'.
import unittest

def sentenceReversal(str1):
    str1 = str1.strip()
    words = str1.split()
    result = ''
    for i in range(len(words)):
        result += ' ' + words[len(words)-i-1]
    return result.strip()

class TestSentenceReversal(unittest.TestCase):
    def test(self, solve):
        self.ass...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
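A shorter variant uses the split/join idiom (the next note explains why no-argument `split()` already handles stray whitespace):

```python
def sentence_reversal_short(s):
    # split() with no argument strips and collapses whitespace before splitting
    return ' '.join(reversed(s.split()))

print(sentence_reversal_short('  here it is  '))  # is it here
```

In an interview this one-liner may be considered too much of a Python trick; the loop version above shows the underlying mechanics.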
Worth noting: Python's string split method called with no arguments strips first and then splits, which gives a different result from split(' '). Also, in an interview you may be expected to implement this in a more basic way, i.e. with fewer Python tricks. string compression Given a string, convert it to a count-plus-letter notation. This compression scheme feels a bit odd, though, because it cannot preserve letter order.
def compression(str1):
    mapping = {}
    letter_order = [False]
    result = ''
    for ele in str1:
        if ele != letter_order[-1]:
            letter_order.append(ele)
        if ele not in mapping:
            mapping[ele] = 1
        else:
            mapping[ele] += 1
    for ...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
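The result-building loop above is cut off; a completed sketch of the count-based scheme described (which, as noted, does not preserve letter order, so 'ABA' and 'AAB' compress identically) might be:

```python
def compression_sketch(s):
    # Count each character, remembering first-appearance order
    counts, order = {}, []
    for ch in s:
        if ch not in counts:
            counts[ch] = 0
            order.append(ch)
        counts[ch] += 1
    return ''.join(ch + str(counts[ch]) for ch in order)

print(compression_sketch('AAAABBBB'))  # A4B4
print(compression_sketch('ABA'))       # A2B1
```

This is an illustrative reconstruction, not the notebook's exact output format, which is lost to truncation.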
unique characters in string
Given a string, determine whether all of its characters are distinct.
import unittest

def uni_char(str1):
    mapping = {}
    for letter in str1:
        if letter in mapping:
            return False
        else:
            mapping[letter] = True
    return True

def uni_char2(str1):
    return len(set(str1)) == len(str1)

class TestUniChar(unittest.TestCase):
    def test(self, solve)...
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits. The process will be broken down into the following steps:
> 1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evalu...
# import libraries
import torch
import numpy as np
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
--- Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time. This cell will create DataLoaders for each of our...
from torchvision import datasets
import torchvision.transforms as transforms

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()

# choose the training and test datasets
train_data =...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # built-in next(); the .next() method was removed in newer PyTorch
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = f...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
View an Image in More Detail
img = np.squeeze(images[1])

fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
    for y in range(height):
        val = round(img[x][y], 2) if img[x][y] != 0 else 0
        ax.annotate(str(val), xy=(y,x), ...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture will be responsible for taking as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example u...
import torch.nn as nn
import torch.nn.functional as F

## TODO: Define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # linear layer (784 -> 1 hidden node)
        self.fc1 = nn.Linear(28 * 28, 1)

    def forward(self, x):
        # flatten image input
        ...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
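One possible completion of the TODO above, assuming two ReLU hidden layers of 512 units (illustrative sizes, not the course's reference solution):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        # hidden-layer widths of 512 are illustrative choices
        self.fc1 = nn.Linear(28 * 28, 512)
        self.fc2 = nn.Linear(512, 512)
        self.fc3 = nn.Linear(512, 10)  # one score per digit class

    def forward(self, x):
        x = x.view(-1, 28 * 28)        # flatten image input
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)             # raw class scores (logits)

out = MLP()(torch.zeros(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 10])
```

Returning raw logits pairs naturally with a cross-entropy loss, which applies the softmax internally.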
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a soft...
## TODO: Specify loss and optimization functions

# specify loss function
criterion = None

# specify optimizer
optimizer = None
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
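A possible way to fill in this TODO, following the cross-entropy recommendation above (the SGD learning rate and the stand-in model are illustrative assumptions, not the reference solution):

```python
import torch
import torch.nn as nn

model = nn.Linear(28 * 28, 10)  # hypothetical stand-in for the Net defined earlier
criterion = nn.CrossEntropyLoss()  # applies log-softmax + negative log-likelihood
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

loss = criterion(model(torch.zeros(4, 28 * 28)), torch.tensor([0, 1, 2, 3]))
print(loss.item())  # roughly ln(10) ~ 2.3 for untrained weights
```

Because `CrossEntropyLoss` already includes the softmax, the model should output raw logits, not probabilities.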
--- Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to mode...
# number of epochs to train the model
n_epochs = 30  # suggest training between 20-50 epochs

model.train()  # prep model for training

for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0

    ###################
    # train the model #
    ###################
    for data, target in train...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
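The five steps above can be sketched on a single fake batch (hypothetical shapes and stand-in model; the real cell iterates over `train_loader` and accumulates `train_loss`):

```python
import torch
import torch.nn as nn

model = nn.Linear(28 * 28, 10)  # stand-in for the network defined earlier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

data = torch.randn(20, 28 * 28)       # one fake batch of flattened images
target = torch.randint(0, 10, (20,))  # fake labels

optimizer.zero_grad()                 # 1. clear gradients
output = model(data)                  # 2. forward pass
loss = criterion(output, target)      # 3. calculate the loss
loss.backward()                       # 4. backward pass
optimizer.step()                      # 5. update parameters
print(loss.item())
```

In the actual loop, each batch repeats these five steps, and `train_loss` is typically updated with `loss.item() * data.size(0)` for an epoch-level average.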
--- Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as w...
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))

model.eval()  # prep model for *evaluation*

for data, target in test_loader:
    # forward pass: compute predicted outputs by passing inputs to the model
    output...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # built-in next(); the .next() method was removed in newer PyTorch

# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()

# plot the images in the batch, along with pre...
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func

# create engine to hawaii.sqlite
engine = create_engine('sqlite:///Resources/hawaii.sqlite')

# reflect an existing database in...
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Exploratory Precipitation Analysis
# Find the most recent date in the data set.
max_date = session.query(func.max(func.strftime("%Y-%m-%d", Measurement.date))).limit(5).all()
max_date[0][0]

# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculat...
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Exploratory Station Analysis
# Design a query to calculate the total number of stations in the dataset
stations = session.query(Station.id).distinct().count()
stations

# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
station_counts = (session.query(Measu...
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Close session
# Close Session
session.close()
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introt...
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Laboratory 2: Computer Vision
Part 1: MNIST Digit Classification
In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training image...
# Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf

#!pip install mitdeeplearning
import mitdeeplearning as mdl

import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm

# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type ...
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_lab...
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
plt.figure(figsize=(10, 10))
random_inds = np.random.choice(60000, 36)
for i in range(36):
    plt.subplot(6, 6, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    image_ind = random_inds[i]
    plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
    plt.xlabel(train_labels[image_ind])
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
1.2 Neural Network for Handwritten Digit Classification
We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be bu...
def build_fc_model():
    fc_model = tf.keras.Sequential([
        # First define a Flatten layer
        tf.keras.layers.Flatten(),

        # '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
        tf.keras.layers.Dense(128, activation=tf.nn.relu),

        # '''TODO: Define the second Den...
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The firs...
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
              loss='sparse_categorical_crossentropy...
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Train the model
We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dat...
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5

model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
Epoch 1/5 938/938 [==============================] - 5s 6ms/step - loss: 0.4299 - accuracy: 0.8817 Epoch 2/5 938/938 [==============================] - 5s 5ms/step - loss: 0.2194 - accuracy: 0.9376 Epoch 3/5 938/938 [==============================] - 5s 5ms/step - loss: 0.1639 - accuracy: 0.9537 Epoch 4/5 938/938 [====...
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset
Now that we've trained the model, we can ask it to make predict...
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(
    x=test_images,
    y=test_labels,
    batch_size=BATCH_SIZE)
    # verbose=1,
    # sample_weight=None,
    # steps=None,
    # callbacks=None,
    # max_queue_size=10,
    # workers=1,
    # use_multiprocessing=False,
    # retu...
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694 Test accuracy: 0.9693999886512756
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can...
def build_cnn_model():
    cnn_model = tf.keras.Sequential([
        # TODO: Define the first convolutional layer
        tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),

        # TODO: Define the first max pooling layer
        ##tf.keras.layers.MaxPool2D('''TODO'''),
        tf.keras....
2022-03-28 14:34:43.418149: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8303
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Train and test the CNN model
Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
'''TODO: Define the compile operation with your optimizer and learning rate of choice''' cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.''' cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
Epoch 1/5 938/938 [==============================] - 7s 7ms/step - loss: 0.1806 - accuracy: 0.9467 Epoch 2/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0578 - accuracy: 0.9819 Epoch 3/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0395 - accuracy: 0.9878 Epoch 4/5 938/938 [====...
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
'''TODO: Use the evaluate method to test the model!''' test_loss, test_acc = cnn_model.evaluate( x=test_images, y=test_labels, batch_size=BATCH_SIZE) print('Test accuracy:', test_acc)
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694 Test accuracy: 0.9693999886512756
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained...
predictions = cnn_model.predict(test_images)
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
predictions[0]
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidenc...
'''TODO: identify the digit with the highest confidence prediction for the first image in the test dataset. ''' prediction = np.argmax(predictions[0]) print(prediction)
7
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
print("Label of this digit is:", test_labels[0]) plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
Label of this digit is: 7
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
#@title Change the slider to look at the model's predictions! { run: "auto" } image_index = 79 #@param {type:"slider", min:0, max:100, step:1} plt.subplot(1,2,1) mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images) plt.subplot(1,2,2) mdl.lab2.plot_value_prediction(image_index, predictions...
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
# Plots the first X test images, their predicted label, and the true label # Color correct predictions in blue, incorrect predictions in red num_rows = 5 num_cols = 4 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) md...
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts a...
# Rebuild the CNN model cnn_model = build_cnn_model() batch_size = 12 loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy') optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) ...
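The skeleton of this manual loop — forward pass, loss, gradient computation, parameter update — can be sketched framework-free. This toy NumPy example fits y = 3x with a single weight and a mean-squared-error loss (the data and learning rate are illustrative, not the lab's CNN setup):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x  # ground truth: y = w*x with w = 3

w, lr = 0.0, 0.1
for step in range(100):
    pred = w * x                          # forward pass
    grad = np.mean(2.0 * (pred - y) * x)  # d/dw of mean((pred - y)**2)
    w -= lr * grad                        # gradient-descent update
print(w)  # converges toward 3.0
```

`tf.GradientTape` automates exactly the `grad` step above for arbitrary computation graphs, which is what the full training loop in this section relies on.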
Sketch of UWB pipeline. This notebook contains the original sketch of the uwb implementation which is available in the uwb package. Code in the package is mostly reworked and divided into modules. For usage of the package please check out the other notebook in the directory.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import make_blobs from sklearn.cluster import DBSCAN from itertools import product from scipy.stats import multivariate_normal from functools import reduce def multi_dim_noise(grid_dims, amount, step, std=10, means=(1,5)): ...
MIT
notebooks/noise_map_generator_example.py.ipynb
freiberg-roman/uwb-proto
Multiple-criteria Analysis
from dpd.mca import MultipleCriteriaAnalysis from dpd.d3 import radar_chart from IPython.core.display import HTML attributes = ["Cost", "Time", "Comfort"] alternatives = ["Tram", "Bus"] mca = MultipleCriteriaAnalysis(attributes, alternatives) mca.mca["Tram"]["Cost"] = 200 mca.mca["Bus"]["Cost"] = 100 mca.mca["Tram"]["...
MIT
docs/notebooks/mca.ipynb
davidbailey/dpd
Example of TreeMix NOTE: This page was originally used by HuggingFace to illustrate the summary of various tasks ([original page](https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/pytorch/task_summary.ipynb#scrollTo=XJEVX6F9rQdI)), we use it to show the examples we illustrate...
# Transformers installation ! pip install transformers datasets # To install from source instead of the last release, comment the command above and uncomment the following one. # ! pip install git+https://github.com/huggingface/transformers.git
Collecting transformers Downloading transformers-4.12.2-py3-none-any.whl (3.1 MB) |████████████████████████████████| 3.1 MB 4.9 MB/s Collecting datasets Downloading datasets-1.14.0-py3-none-any.whl (290 kB) |████████████████████████████████| 290 kB 42.2 MB/s Requirement already satisfi...
MIT
transformers_doc/pytorch/task_summary.ipynb
Magiccircuit/TreeMix
Sequence Classification Sequence classification is the task of classifying sequences according to a given number of classes. An example ofsequence classification is the GLUE dataset, which is entirely based on that task. If you would like to fine-tune amodel on a GLUE sequence classification task, you may leverage the...
from transformers import pipeline classifier = pipeline("sentiment-analysis") result = classifier("This film is good and every one loves it")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}") result = classifier("The film is poor and I do not like it")[0] print(f"label: {result['label']}, w...
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
Selecting Data from a Data Frame, Plotting, and Indexes Selecting Data Import Pandas and Load in the Data from **practicedata.csv**. Call the dataframe 'df'. Show the first 5 lines of the table.
import pandas as pd df = pd.read_csv('practicedata.csv') # overwrite this yourself df.head(5)
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Let's walk through how to select data by row and column number using .iloc[row, column]
# Let's select the first row of the table first_row = df.iloc[0,:] first_row #now let's try selecting the first column of the table first_column = df.iloc[:, 0] #let's print the first five rows of the column first_column[:5]
Notice a few things: 1st - we select parts of a dataframe by its numeric position in the table using .iloc followed by two values in square brackets. 2nd - We can use ':' to indicate that we want all of a row or column. 3rd - The values in the square brackets are [row, column]. Our old friend from lists is back: **slic...
upper_corner_of_table = df.iloc[:5,:5] upper_corner_of_table another_slice = df.iloc[:5, 5:14] another_slice
Now let's select a column by its name
oil_prod = df['OIL_PROD'] #simply put the column name as a string in square brackets oil_prod[:8]
Let's select multiple columns
production_streams = df[['OIL_PROD', 'WATER_PROD', 'OWG_PROD']] # notice that we passed a list of columns production_streams.head(5)
Let's select data by **index**
first_rows = df.loc[0:5] # to select by row index, we pass the index(es) we want to select with .loc[row index] first_rows
We can also use loc to select rows and columns at the same time using .loc[row index, column index]
production_streams = df.loc[0:5, ['OIL_PROD', 'WATER_PROD', 'OWG_PROD']] production_streams
Note that you can't mix positional selection and index selection (.iloc vs .loc)
error = df.loc[0:5, 0:5]
When you are selecting data in data frames there is a lot of potential to change the **type** of your data let's see what the output types are of the various selection methods.
print(type(df)) print(type(df.iloc[:,0]))
Notice how the type changes when we select the first column? A Pandas series is similar to a python dictionary, but there are important differences. You can call numerous functions like mean, sum, etc on a pandas series and unlike dictionaries pandas series have an index instead of keys and allows for different values ...
print(type(df.iloc[0, :]))
Rows also become series when a single one is selected. Let's try summing the water production really quick.
print(df['WATER_PROD'].sum())
Try summing the oil and water production together below. Lastly, let's see what type we get when we select multiple rows/columns
print(type(df.iloc[:5,:5]))
If we select multiple rows/columns we keep our dataframe type. This can be important in code as dataframes and series behave differently. This is a particular problem if you have an index with unexpected duplicate values and you are selecting something by index expecting a series, but you get multiple rows and have a d...
#select the first value print(df.iat[0,0]) print(df.at[0,'API_NO14'])
Notice that it works the same as .loc and .iloc, the only difference is that you must select one value.
print(df.iat[0:5, 1]) # gives an error since I tried to select more than one value
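As an aside on the duplicate-index pitfall mentioned above: selecting a label that appears once in the index gives a Series, while a duplicated label gives a DataFrame, which can surprise downstream code. A small sketch with a made-up frame:

```python
import pandas as pd

# 'a' appears twice in the index, 'b' once.
df = pd.DataFrame({'val': [1, 2, 3]}, index=['a', 'a', 'b'])

print(type(df.loc['b']).__name__)  # Series: unique label selects one row
print(type(df.loc['a']).__name__)  # DataFrame: duplicated label selects two rows
```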
Adding columns
# you can add a column by assigning it a starting value df['column of zeros'] = 0 # you can also create a column by adding columns (or doing anything else that results in a column of the same size) df['GROSS_LIQUID'] = df['OIL_PROD'] + df['WATER_PROD'] df.iloc[0:2, 30:]
Removing Columns
# remove columns via the .drop function df = df.drop(['column of zeros'], axis=1) df.iloc[0:2, 30:]
Selecting Data with conditionals (booleans)
# We can use conditional statements (just like with if and while statements to find whether conditions are true/false in our data) # Let's find the months with no oil in wells months_with_zero_oil = df['OIL_PROD'] == 0 print(months_with_zero_oil.sum()) print(len(months_with_zero_oil)) # What does months with zero oil ...
Notice the warning we received about setting data on a 'slice' of a dataframe. This is because when you select a piece of a dataframe, it doesn't (by default at least) create a new dataframe; it shows you a 'view' of the original data. This is true even if we assign that piece to a new variable like we did above. When ...
df[months_with_zero_oil].head(5) temp.head(5)
In this case we were protected from making changes to original dataframe, what if we want to change the original dataframe?
# Let's try this instead df.loc[months_with_zero_oil,'zero_oil'] = 10000.00 df[months_with_zero_oil].head(5)
That got it! We were able to set values in the original dataframe using the 'boolean' series of months with zero oil. Finding, Changing, and Setting Data
# Find a column in a dataframe if 'API_NO14' in df: print('got it') else: print('nope, not in there') # If a column name is in a dataframe, get it for column in df: print(column) # Search through the rows of a table count = 0 for row in df.iterrows(): count += 1 print(row) if count == 1: ...
Notice that 'row' is a **tuple** with the row index at 0 and the row series at 1
# Let's change WATER_INJ to 1 for the first row count = 0 for row in df.iterrows(): df.loc[row[0], 'WATER_INJ'] = 1 count += 1 if count == 1: break df[['WATER_INJ']].head(1)
Exercise: Fix the APIs in the table. All the APIs have been converted to numbers and are missing the leading zero; can you add it back in and convert them to strings in a new column? Plotting Data First we need to import matplotlib and set the Jupyter notebook to display the plots
import matplotlib.pyplot as plt %matplotlib inline
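Before moving on to plotting, here is one possible sketch for the earlier exercise of restoring the dropped leading zero in the API numbers; the column name API_STR14 and the sample values are made up (the real data lives in practicedata.csv):

```python
import pandas as pd

# Stand-in for the practice data: 14-digit APIs stored as integers,
# which silently drops the leading zero.
df = pd.DataFrame({'API_NO14': [4029270170000, 4029270180000]})

# Convert to string and left-pad back out to 14 characters.
df['API_STR14'] = df['API_NO14'].astype(str).str.zfill(14)
print(df['API_STR14'].tolist())  # ['04029270170000', '04029270180000']
```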
Let's select data related to the well: 04029270170000 and plot it
# Let's plot using the original API_NO14 Column for now df.loc[df['API_NO14'] == 4029270170000, 'OIL_PROD'].plot() # Those numbers are not super helpful, lets try a new index # lets copy the dataframe sorted_df = df.copy() # Convert dates to a 'datetime' type instead of string sorted_df['PROD_INJ_DATE'] = pd.to_dateti...
Let's manipulate the plot and try different options
plot_df['OIL_PROD'].plot(logy=True) plot_df[['OIL_PROD', 'WATER_PROD']].plot(sharey=True, logy=True)
Network Science Theory ****** Network Science: Analyzing Complex Networks with NetworkX ****** Wang Chengjun wangchengjun@nju.edu.cn Computational Communication http://computational-communication.com http://networkx.readthedocs.org/en/networkx-1.11/tutorial/
%matplotlib inline import networkx as nx import matplotlib.cm as cm import matplotlib.pyplot as plt G=nx.Graph() # G = nx.DiGraph() # directed network # add an (isolated) node G.add_node("spam") # add nodes and an edge G.add_edge(1,2) print(G.nodes()) print(G.edges()) # draw the network nx.draw(G, with_labels = True)
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
WWW Data download http://www3.nd.edu/~networks/resources.htm World-Wide-Web: [README] [DATA] Réka Albert, Hawoong Jeong and Albert-László Barabási: Diameter of the World Wide Web. Nature 401, 130 (1999) [ PDF ] Assignment: - download the WWW data - build a networkx graph object g (hint: it is a directed network) - add the WWW data to g - count the number of nodes and edges in the network
G = nx.Graph() n = 0 with open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f: for line in f: n += 1 #if n % 10**4 == 0: #flushPrint(n) x, y = line.rstrip().split(' ') G.add_edge(x,y) nx.info(G)
Describing a network: nx.karate_club_graph. We start from karate_club_graph to explore the basic properties of a network.
G = nx.karate_club_graph() clubs = [G.node[i]['club'] for i in G.nodes()] colors = [] for j in clubs: if j == 'Mr. Hi': colors.append('r') else: colors.append('g') nx.draw(G, with_labels = True, node_color = colors) G.node[1] # attributes of node 1 G.edge.keys()[:3] # ids of the first three edges nx.info(G) G.nodes()[:10] G....
Network diameter
nx.diameter(G) # returns the diameter of graph G (the length of the longest shortest path)
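As a sketch of what the diameter measures — the length of the longest shortest path — here is a plain-Python breadth-first-search version on a tiny hand-made graph (networkx's own implementation differs in detail):

```python
from collections import deque

def bfs_distances(adj, src):
    # Shortest-path length (in hops) from src to every reachable node.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    # Longest shortest path over all source nodes (assumes a connected graph).
    return max(max(bfs_distances(adj, s).values()) for s in adj)

path_graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # the path 0-1-2-3
print(diameter(path_graph))  # 3
```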
Density
nx.density(G) nodeNum = len(G.nodes()) edgeNum = len(G.edges()) 2.0*edgeNum/(nodeNum * (nodeNum - 1))
Assignment: - compute the density of the WWW network. Clustering coefficient
cc = nx.clustering(G) cc.items()[:5] plt.hist(cc.values(), bins = 15) plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20) plt.ylabel('$Frequency, \, F$', fontsize = 20) plt.show()
Spacing in Math ModeIn a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode:\; - a thick space\: - a medium s...
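For instance, the axis labels used earlier in this notebook rely on `\,`; the four spacing commands can be compared side by side (a sketch of usage, rendered output depends on the font):

```latex
$Clustering \, Coefficient$  % thin space, as in the plot labels above
$Clustering \: Coefficient$  % medium space
$Clustering \; Coefficient$  % thick space
```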
# M. E. J. Newman, Mixing patterns in networks. Physical Review E, 67 026126, 2003 nx.degree_assortativity_coefficient(G) # compute the degree assortativity of a graph Ge=nx.Graph() Ge.add_nodes_from([0,1],size=2) Ge.add_nodes_from([2,3],size=3) Ge.add_edges_from([(0,1),(2,3)]) print(nx.numeric_assortativity_coefficient(Ge,'size')) # plot degree correl...
Degree centrality measures: * degree_centrality(G) Compute the degree centrality for nodes. * in_degree_centrality(G) Compute the in-degree centrality for nodes. * out_degree_centrality(G) Compute the out-degree centrality for nodes. * closeness_centrality(G[, v, weighted_edges]) Compute closene...
dc = nx.degree_centrality(G) closeness = nx.closeness_centrality(G) betweenness= nx.betweenness_centrality(G) fig = plt.figure(figsize=(15, 4),facecolor='white') ax = plt.subplot(1, 3, 1) plt.hist(dc.values(), bins = 20) plt.xlabel('$Degree \, Centrality$', fontsize = 20) plt.ylabel('$Frequency, \, F$', fontsize = 20)...