1.2 Importing yfinance and overriding the pandas_datareader methods
import yfinance as yf
# yf.pdr_override()
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
1.3 Importing the Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import matplotlib
matplotlib.rcParams['figure.figsize'] = (16, 8)
matplotlib.rcParams.update({'font.size': 22})
import warnings
warnings.filterwarnings('ignore')
# statistics library
from scipy.stats import norm,...
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
2. Statistical Analysis of the Bovespa Index
# downloading the quotes
ibov = yf.download("^BVSP")[["Adj Close"]]
[*********************100%***********************] 1 of 1 completed
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Displaying the data
ibov

# creating a column with the percentage return for each day
ibov['retorno'] = ibov['Adj Close'].pct_change()
ibov.dropna(inplace=True)
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Displaying the data
# daily variation of the index
ibov
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Calculating the Mean Return and the Standard Deviation
# calculate the mean return
media_ibov = ibov['retorno'].mean()
print('Retorno médio = {:.2f}%'.format(media_ibov*100))

# calculate the standard deviation
desvio_padrao_ibov = ibov['retorno'].std()
print('Desvio padrão = {:.2f}%'.format(desvio_padrao_ibov*100))
Desvio padrão = 2.26%
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Displaying the data that answers the study question
# find the days on which the Ibovespa index returned less than -12%
ibov[ibov["retorno"] < -0.12]
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
3. Analysis

**What is the probability of the Ibov falling more than 12%, assuming the returns follow a normal distribution?**
probabilidade_teorica = norm.cdf(-0.12, loc=media_ibov, scale=desvio_padrao_ibov)
print('{:.8f}%'.format(probabilidade_teorica*100))

frequencia_teorica = 1 / probabilidade_teorica
print('Uma vez a cada {} dias'.format(int(round(frequencia_teorica, 5))))
print('Ou uma vez a cada {} anos'.format(int(round(frequencia_teor...
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
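The tail-probability step above can be reproduced with the standard library alone. This sketch uses assumed parameters (a 0.02% mean and the 2.26% daily standard deviation printed earlier), not the exact fitted values:

```python
from statistics import NormalDist

# assumed parameters (illustrative; the notebook fits these from the data)
media = 0.0002   # mean daily return, ~0.02%
desvio = 0.0226  # daily standard deviation, the 2.26% printed earlier

# probability of a daily drop worse than -12% under a normal model
p = NormalDist(mu=media, sigma=desvio).cdf(-0.12)
print('{:.8f}%'.format(p * 100))

# expected frequency of such a drop
dias = 1 / p
print('Once every {:.0f} days, i.e. every {:.0f} years of 252 trading days'.format(dias, dias / 252))
```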
Comparing against a plot of simulated normal draws, generated with the same parameters (mean and standard deviation) defined earlier, to check whether the returns follow a theoretical normal.
ibov['retorno_teorico'] = norm.rvs(size=ibov['retorno'].size, loc=media_ibov, scale=desvio_padrao_ibov)
ax = ibov['retorno_teorico'].plot(title="Retorno Normal Simulado")
ax.set_ylim(-0.2, 0.4)
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
The normally distributed returns are much better behaved: they stay centered around the mean.
sns.distplot(ibov['retorno'], bins=100, kde=False);
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Histogram of the distribution of the returns
sns.distplot(ibov['retorno'], bins=100, kde=False, fit=norm);
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
The data has a high peak, concentrated around the mean. Intermediate values (the gaps) occur at a lower rate, while the tails show more occurrences than a normal distribution would predict.
sns.distplot(ibov['retorno'], bins=100, kde=False, fit=t);
_____no_output_____
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Finding parameters that match the sample.
# get the parameters that were used to fit the curve
(graus_de_liberdade, media_t, desvio_padrao_t) = t.fit(ibov['retorno'])
print('Distribuição T-Student\nGraus de liberdade={:.2f} \nMédia={:.4f} \nDesvio padrão={:.5f}'.format(graus_de_liberdade, media_t, desvio_padrao_t))

# considering the dist...
Para uma distribuição T-Student: Uma vez a cada 795 dias Ou uma vez a cada 3 anos
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
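A self-contained sketch of the t-versus-normal tail comparison on synthetic heavy-tailed data (all parameters here are made up; the notebook fits the actual Ibov returns):

```python
import numpy as np
from scipy.stats import t, norm

# synthetic heavy-tailed "returns": Student-t with 3 degrees of freedom
retornos = t.rvs(df=3, loc=0.0, scale=0.015, size=5000,
                 random_state=np.random.default_rng(42))

# fit a t distribution to the sample, as t.fit does in the notebook
graus, media, escala = t.fit(retornos)

# tail probability of a drop worse than -12% under each model
p_t = t.cdf(-0.12, graus, loc=media, scale=escala)
p_norm = norm.cdf(-0.12, loc=retornos.mean(), scale=retornos.std())

# the fat-tailed t model assigns much more probability to extreme drops
print('t tail: {:.6f}%  normal tail: {:.6f}%'.format(p_t * 100, p_norm * 100))
```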
Comparing the fat-tailed distribution with the normal distribution
frequencia_teorica = 1 / probabilidade_teorica
print('Para uma distribuição Normal: \nUma vez a cada {} dias'.format(int(round(frequencia_teorica, 5))))
print('Ou uma vez a cada {} anos'.format(int(round(frequencia_teorica/252, 5))))

frequencia_observada = ibov['retorno'].size / ibov[ibov["retorno"] < -0.12].shape[0] ...
Na vida real aconteceu: Uma vez a cada 1380 dias
MIT
07_Python_Finance.ipynb
devscie/PythonFinance
Weight Initialization

In this lesson, you'll learn how to find good initial weights for a neural network. Weight initialization happens once, when a model is created and before it trains. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to th...
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 100
# percentage of training set to us...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Visualize Some Training Data
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = f...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Define the Model Architecture

We've defined the MLP that we'll use for classifying the dataset.

Neural Network
* A 3 layer MLP with hidden dimensions of 256 and 128.
* This MLP accepts a flattened image (784-value long vector) as input and produces 10 class scores as output.

---

We'll test the effect of different initial ...
import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class Net(nn.Module):
    def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None):
        super(Net, self).__init__()
        # linear layer (784 -> hidden_1)
        self.fc1 = nn.Linear(28 * 28, hidden_1)
        # linea...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Compare Model Behavior

Below, we are using `helpers.compare_init_weights` to compare the training and validation loss for the two models we defined above, `model_0` and `model_1`. This function takes in a list of models (each with different initial weights), the name of the plot to produce, and the training and valida...
# initialize two NN's with 0 and 1 constant weights
model_0 = Net(constant_weight=0)
model_1 = Net(constant_weight=1)

import helpers

# put them in list form to compare
model_list = [(model_0, 'All Zeros'),
              (model_1, 'All Ones')]

# plot the loss over the first 100 batches
helpers.compare_init_weights(mo...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
As you can see, the accuracy is close to guessing for both zeros and ones: around 10%.

The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly selec...
helpers.hist_dist('Random Uniform (low=-3, high=3)', np.random.uniform(-3, 3, [1000]))
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
The histogram used 500 buckets for the 1000 values. Since the chance of landing in any single bucket is the same, there should be around 2 values per bucket. That's exactly what we see with the histogram: some buckets have more and some have less, but they trend around 2.

Now that you understand the uniform function, let's...
# takes in a module and applies the specified weight initialization
def weights_init_uniform(m):
    classname = m.__class__.__name__
    # for every Linear layer in a model..
    if classname.find('Linear') != -1:
        # apply a uniform distribution to the weights and a bias=0
        m.weight.data.uniform_(0.0, 1....
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
---

The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction!

General rule for setting weights

The general rule for setting the weights in a neural network is to set them to be close to zero without being too small.
>Good practice is to star...
# takes in a module and applies the specified weight initialization
def weights_init_uniform_center(m):
    classname = m.__class__.__name__
    # for every Linear layer in a model..
    if classname.find('Linear') != -1:
        # apply a centered, uniform distribution to the weights
        m.weight.data.uniform_(-0....
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Then let's create a distribution and model that uses the **general rule** for weight initialization, using the range $[-y, y]$, where $y=1/\sqrt{n}$. And finally, we'll compare the two models.
# takes in a module and applies the specified weight initialization
def weights_init_uniform_rule(m):
    classname = m.__class__.__name__
    # for every Linear layer in a model..
    if classname.find('Linear') != -1:
        # get the number of the inputs
        n = m.in_features
        y = 1.0/np.sqrt(n)
        ...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
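The general rule can be sketched without any framework: draw each weight uniformly from [-y, y] with y = 1/sqrt(n), where n is the number of inputs to the layer. A NumPy sketch, using the 784 -> 256 layer sizes from the MLP above:

```python
import numpy as np

def uniform_rule_init(n_in, n_out, rng=None):
    """Weights drawn uniformly from [-y, y], with y = 1/sqrt(n_in)."""
    rng = rng or np.random.default_rng()
    y = 1.0 / np.sqrt(n_in)
    return rng.uniform(-y, y, size=(n_in, n_out))

W = uniform_rule_init(784, 256, rng=np.random.default_rng(0))
y = 1.0 / np.sqrt(784)
print(W.min() >= -y and W.max() <= y)  # every weight lies inside [-y, y]
```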
This behavior is really promising! Not only is the loss decreasing, it does so very quickly for the uniform weights that follow the general rule; after only two epochs we get a fairly high validation accuracy. This should give you some intuition for why starting out with the right initial weights can real...
helpers.hist_dist('Random Normal (mean=0.0, stddev=1.0)', np.random.normal(size=[1000]))
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Let's compare the normal distribution against the previous, rule-based, uniform distribution.

TODO: Define a weight initialization function that gets weights from a normal distribution
> The normal distribution should have a mean of 0 and a standard deviation of $y=1/\sqrt{n}$
## complete this function
def weights_init_normal(m):
    '''Takes in a module and initializes all linear layers with
       weight values taken from a normal distribution.'''
    classname = m.__class__.__name__
    # for every Linear layer in a model
    # m.weight.data should be taken from a normal distribution
    ...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
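The TODO above can equally be sketched in NumPy, independent of PyTorch: a normal distribution with mean 0 and standard deviation 1/sqrt(n):

```python
import numpy as np

def normal_rule_init(n_in, n_out, rng=None):
    """Weights drawn from N(0, 1/sqrt(n_in))."""
    rng = rng or np.random.default_rng()
    std = 1.0 / np.sqrt(n_in)
    return rng.normal(loc=0.0, scale=std, size=(n_in, n_out))

W = normal_rule_init(784, 256, rng=np.random.default_rng(0))
# the sample standard deviation should sit very close to 1/sqrt(784)
print(round(W.std(), 4), round(1.0 / np.sqrt(784), 4))
```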
The normal distribution gives us pretty similar behavior to the uniform distribution in this case. This is likely because our network is so small; a larger neural network will pick more weight values from each of these distributions, magnifying the effect of both initialization styles. In general, a normal di...
## Instantiate a model with _no_ explicit weight initialization
model_no_initialization = Net()

## evaluate the behavior using helpers.compare_init_weights
model_list = [(model_no_initialization, 'No Weights')]
helpers.compare_init_weights(model_list,
                             'No Weight Initialization',
                             ...
_____no_output_____
MIT
weight-initialization/weight_initialization_exercise.ipynb
hfurkanvural/udacity-deep-learning-v2-pytorch
Price changes. The rise in price looks steady until late 2016 or early 2017, when it drops from roughly 105 to 85 and stays in the 85 to 95 range for a few months, until it starts an almost vertical climb; it then holds at a roughly constant price of about 102, and afterwards i...
ret_sum = pd.DataFrame(index=['Rend diario', 'Rend anual', 'Vol diaria', 'Vol anual'],
                       columns=['2016', '2017', '2018', 'Todo'])
list = [a2016, a2017, a2018, ret]  # note: shadows the built-in `list`
for x in range(0, 4):
    ret_sum.loc['Rend diario'][ret_sum.columns[x]] = list[x]['GFNORTEO.MX'].mean()
    ret_sum.loc['Rend anual'][ret_sum.columns[x]] = list[x]['GFNORTEO....
_____no_output_____
MIT
GFNORTEO.MX.ipynb
ramirezdiana/Analisis-de-riesgo-2019
Discussion of the table data. The highest annual return occurs in 2016 and the worst in 2018; as the price chart shows, there is a drop near the end of 2018, so it makes sense that its daily and annual returns are negative. Annual volatility stays between .24 and .34, being...
ret_sum = pd.DataFrame(index=['Mean', 'Volatility'], columns=ticker)
ret_sum.loc['Mean'] = a2018.mean()
ret_sum.loc['Volatility'] = a2018.std()

n = 1000
for x in range(0, 3):
    mu = ret_sum['GFNORTEO.MX']['Mean']
    sigma = ret_sum['GFNORTEO.MX']['Volatility']
    s0 = 107.01
    listaepsilon = [np.random.randn() for _ in range(n...
          st1       st10      st30      st252
Mean      106.993   107.123   107.518   112.653
liminf    100.711   88.3282   76.754    40.8441
limsup    113.925   130.442   150.789   289.134

          st1       st10      st30      st252
Mean      106.999   107.156   107.598   113.224
liminf    98.0485   81.1539   66.2795   26.696
limsup    115.462   1...
MIT
GFNORTEO.MX.ipynb
ramirezdiana/Analisis-de-riesgo-2019
Lists in Python

In most languages a collection of homogeneous (all of the same type) entities is called an array. The size of the array is fixed at the time of creation, however, the contents of the array can be changed during the course of the execution of the program. Higher dimensional arrays are also possible, where e...
# Enumerate the items
a = [1, 2, 3]
a

# Create an empty list and append or insert
a = []
print(a)
a.append(1)      # a = [1]
print(a)
a.append(2)      # a = [1, 2]
print(a)
a.insert(1, 3)   # a = [1, 3, 2]
print(a)

# Create a two dimensional list
b = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
b
_____no_output_____
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
Note that the positions of items in a list start at an index value of 0. You can also create a list by concatenating two or more lists together. You can initialize a list with a predefined value.
a = [1, 2]
b = [4, 5]
c = a + b      # c = [1, 2, 4, 5]
print(c)
d = [0] * 5    # d = [0, 0, 0, 0, 0]
print(d)
[1, 2, 4, 5]
[0, 0, 0, 0, 0]
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
Basic List Manipulations

To obtain the length of a list you can use the len() function.
a = [1, 2, 3]
length = len(a)    # length = 3
length
_____no_output_____
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
Indexing

The items in a list are indexed starting at 0 and ending at index length - 1. You can also use negative indices to access elements in a list. For example a[-1] returns the last item on the list and a[-length] returns the first. Unlike a string, a list is mutable, i.e. its contents can be changed like so:
a = [1, 2, 3]
a[1] = 4    # a = [1, 4, 3]
a
_____no_output_____
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
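The indexing rules above can be exercised directly; a short sketch:

```python
a = [10, 20, 30, 40]
assert a[-1] == 40        # negative indices count from the end
assert a[-len(a)] == 10   # a[-length] is the first item
a[1] = 99                 # lists are mutable, unlike strings
print(a)                  # [10, 99, 30, 40]
```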
To access or change an element in a 2-dimensional list specify the row first and then the column.
b = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(b)
d = b[1][2]            # d = 6
print(d)
b[2][1] = b[1][2] * 2
print(b)
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
6
[[1, 2, 3], [4, 5, 6], [7, 12, 9]]
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
List Traversal

One of the most important operations that you can do with a list is to traverse it, i.e. visit each and every element in the list in order. There are several ways in which to do so:
a = [9, 2, 6, 4, 7]
print(a)
for item in a:
    print(item, end=" ")    # 9 2 6 4 7

# Doubles each item in the list
length = len(a)
for i in range(length):
    a[i] = a[i] * 2
[9, 2, 6, 4, 7]
9 2 6 4 7
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
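Beside the two loops shown above, a list comprehension is the idiomatic way to build a transformed copy; a sketch:

```python
a = [9, 2, 6, 4, 7]

# index-based traversal mutates the list in place
for i in range(len(a)):
    a[i] *= 2
print(a)    # [18, 4, 12, 8, 14]

# a list comprehension builds a new list instead
b = [x // 2 for x in a]
print(b)    # [9, 2, 6, 4, 7]
```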
Other List Functions

| Function | Meaning |
| --- | --- |
| list.sort() | Sorts a list in ascending order |
| list.reverse() | Reverses the elements in a list |
| value in list | Returns True if the value is in the list and False otherwise |
| list.index(x) | Returns the index of the first occurrence of x. Use with the above function to check if x is ... |
a = [9, 2, 6, 4, 7]
a.sort()
a

a = [9, 2, 6, 4, 7]
a.reverse()
a

for value in [9, 2, 6, 4, 7]:
    print(value)

# index
a = [9, 2, 6, 4, 7]
a.index(6)

# count()
a.count(6)

# remove
a = [9, 2, 6, 4, 7]
a.remove(2)
a

# pop
a = [9, 2, 6, 4, 7]
b = a.pop(2)
print(b)
a
6
MIT
Notebooks/Example-004-Python-Lists.ipynb
Sean-hsj/Elements-of-Software-Design
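The functions in the table above can be checked with assertions; a sketch:

```python
a = [9, 2, 6, 4, 7]
a.sort()
assert a == [2, 4, 6, 7, 9]    # in-place ascending sort
a.reverse()
assert a == [9, 7, 6, 4, 2]    # in-place reversal
assert 6 in a                  # membership test
assert a.index(6) == 2         # index of the first occurrence
assert a.count(6) == 1         # number of occurrences
a.remove(4)                    # removes the first occurrence of the value 4
assert a == [9, 7, 6, 2]
b = a.pop(2)                   # removes and returns the item at index 2
assert b == 6 and a == [9, 7, 2]
```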
[matplotlib](https://matplotlib.org/gallery/index.html)
# font_manager: the font manager
# The rc function sets options globally.
# For example, to set the Figure size to (10, 10) globally, write:
# plt.rc('figure', figsize=(10, 10))
from matplotlib import font_manager, rc
import matplotlib.pyplot as plt

# More settings and options are listed in the matplotlibrc file in the matplotlib/mpl-data folder.
# e.g.) Python install path\Lib\site-packages\matpl...
_____no_output_____
MIT
_pages/Language/Python/src/matplotlib.ipynb
shpimit/shpimit.github.io
Exploratory analysis of the US Airport Dataset

This dataset contains data for 25 years [1995-2015] of flights between various US airports and metadata about these routes, taken from the Bureau of Transportation Statistics, United States Department of Transportation. Let's see what we can make of this!
%matplotlib inline
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')

pass_air_data = pd.read_csv('datasets/passengers.csv')
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
In the `pass_air_data` dataframe we have the number of people who fly each year on a particular route.
pass_air_data.head()

# Create a MultiDiGraph from this dataset
passenger_graph = nx.from_pandas_edgelist(pass_air_data, source='ORIGIN', target='DEST',
                                           edge_attr=['YEAR', 'PASSENGERS', 'UNIQUE_CARRIER_NAME'],
                                           create_using=nx.MultiDiGraph())
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Cleveland to Chicago: how many people fly this route?
passenger_graph['CLE']['ORD']
temp = [(i['YEAR'], i['PASSENGERS']) for i in dict(passenger_graph['CLE']['ORD']).values()]
x, y = zip(*temp)
plt.plot(x, y)
plt.show()
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Exercise

Find the busiest route in 1990 and in 2015 according to number of passengers, and plot the time series of number of passengers on these routes. You can use the DataFrame instead of working with the network. It will be faster ;) [5 mins]

So let's have a look at the important nodes in this network, i.e. important ...
nx.pagerank(passenger_graph)

def year_network(G, year):
    temp_g = nx.DiGraph()
    for i in G.edges(data=True):
        if i[2]['YEAR'] == year:
            temp_g.add_edge(i[0], i[1], weight=i[2]['PASSENGERS'])
    return temp_g

pass_2015 = year_network(passenger_graph, 2015)
len(pass_2015)
len(pass_2015.edges())
#...
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Exercise

Using the position dictionary `pos_dict` create a plot of the airports, only the nodes not the edges.
- As we don't have coordinates for all the airports we have to create a subgraph first.
- Use `nx.subgraph(Graph, iterable of nodes)` to create the subgraph
- Use `nx.draw_networkx_nodes(G, pos)` to map the nodes...
plt.hist(list(nx.degree_centrality(pass_2015).values())) plt.show()
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Let's plot a log log plot to get a better overview of this.
d = {}
for i, j in dict(nx.degree(pass_2015)).items():
    if j in d:
        d[j] += 1
    else:
        d[j] = 1

x = np.log2(list(d.keys()))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.4)
plt.show()
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Directed Graphs

![title](images/pagerank.png)
G = nx.DiGraph()
G.add_edge(1, 2, weight=1)
# print(G.edges())
# G[1][2]
# G[2][1]
# G.is_directed()
# type(G)
G.add_edges_from([(1, 2), (3, 2), (4, 2), (5, 2), (6, 2), (7, 2)])
nx.draw_circular(G, with_labels=True)
G.in_degree()
nx.pagerank(G)
G.add_edge(5, 6)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
G.a...
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
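PageRank itself is a short power iteration; a simplified NumPy sketch of what `nx.pagerank` computes (with damping, and dangling nodes jumping uniformly), applied to the star graph built above where every node points at the hub:

```python
import numpy as np

def pagerank(edges, n, d=0.85, iters=100):
    """Power-iteration PageRank on nodes 0..n-1 (simplified sketch)."""
    M = np.zeros((n, n))   # column-stochastic transition matrix
    out = np.zeros(n)
    for u, v in edges:
        M[v, u] += 1.0
        out[u] += 1.0
    for u in range(n):
        # dangling nodes jump to every node with equal probability
        M[:, u] = M[:, u] / out[u] if out[u] else 1.0 / n
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r
    return r

# the star graph: six nodes all pointing at the hub (index 1 here)
edges = [(0, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)]
r = pagerank(edges, 7)
print(r.argmax())  # the hub collects the highest rank
```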
Moving back to Airports
sorted(nx.pagerank(pass_2015, weight=None).items(), key=lambda x: x[1], reverse=True)[:10]
sorted(nx.betweenness_centrality(pass_2015).items(), key=lambda x: x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(pass_2015).items(), key=lambda x: x[1], reverse=True)[0:10]
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
'ANC' is the airport code of Anchorage airport, a place in Alaska, and according to pagerank and betweenness centrality it is the most important airport in this network. Isn't that weird? Thoughts?

Related blog post: https://toreopsahl.com/2011/08/12/why-anchorage-is-not-that-important-binary-ties-and-sample-selection/

Le...
sorted(nx.betweenness_centrality(pass_2015, weight='weight').items(), key=lambda x: x[1], reverse=True)[0:10]
sorted(nx.pagerank(pass_2015, weight='weight').items(), key=lambda x: x[1], reverse=True)[0:10]
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
How reachable is this network?

We calculate the average shortest path length of this network; it gives us an idea of the number of hops we need to make to get from one airport to any other airport in this network.
nx.average_shortest_path_length(pass_2015)
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Wait, What??? This network is not connected. That seems like a really stupid thing to do.
list(nx.weakly_connected_components(pass_2015))
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
SPB, SSB, AIK anyone?
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['ORIGIN'] == 'AIK')]

pass_2015.remove_nodes_from(['SPB', 'SSB', 'AIK'])
nx.is_weakly_connected(pass_2015)
nx.is_strongly_connected(pass_2015)
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Strongly vs weakly connected graphs.
G = nx.DiGraph()
G.add_edge(1, 2)
G.add_edge(2, 3)
G.add_edge(3, 1)
nx.draw(G)
G.add_edge(3, 4)
nx.draw(G)
nx.is_strongly_connected(G)

list(nx.strongly_connected_components(pass_2015))
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['DEST'] == 'TSP')]
pass_2015_strong = pass_2015.subgraph(
    max(nx.str...
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
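Strong connectivity can be decided with two BFS passes, one on the graph and one on its reverse; a pure-Python sketch of the idea behind `nx.is_strongly_connected`:

```python
from collections import deque

def reachable(adj, start):
    """All nodes reachable from start via BFS."""
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def is_strongly_connected(edges, nodes):
    """Every node reaches every other: BFS on the graph and on its reverse."""
    adj, radj = {}, {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        radj.setdefault(v, []).append(u)
    s = next(iter(nodes))
    return reachable(adj, s) == nodes and reachable(radj, s) == nodes

cycle = [(1, 2), (2, 3), (3, 1)]
print(is_strongly_connected(cycle, {1, 2, 3}))                 # True: a directed cycle
print(is_strongly_connected(cycle + [(3, 4)], {1, 2, 3, 4}))  # False: 4 cannot reach back
```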
Exercise! (Actually this is a game :D)

How can we decrease the avg shortest path length of this network? Think of an effective way to add new edges to decrease the avg shortest path length. Let's see if we can come up with a nice way to do this, and the one who gets the highest decrease wins!!!

The rules are simple:
- You ...
# unfreeze the graph
pass_2015_strong = nx.DiGraph(pass_2015_strong)
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
What about airlines? Can we find airline specific reachability?
passenger_graph['CLE']['SFO'][25]

def str_to_list(a):
    return a[1:-1].split(', ')

for i in str_to_list(passenger_graph['JFK']['SFO'][25]['UNIQUE_CARRIER_NAME']):
    print(i)

%%time
for origin, dest in passenger_graph.edges():
    for key in passenger_graph[origin][dest]:
        passenger_graph[origin][dest][key]['...
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
Exercise

Play around with the United Airlines network.
- Extract a network for United Airlines flights from the metagraph `passenger_graph` for the year 2015
- Make sure it's a weighted network, where weight is the number of passengers.
- Find the number of airports and connections in this network
- Find the most important air...
united_network = nx._________
for _______, _______ in passenger_graph.edges():
    if 25 in passenger_graph[______][_______]:  # the key 25 is for the year 2015
        if "'United Air Lines Inc.'" in ____________________:
            united_network.add_edge(_____, ______, weight=__________)

# number of nodes
# number of ...
_____no_output_____
MIT
8-US-airports-case-study-student.ipynb
OSSSP/Network-Analysis-Made-Simple
cGAN: Generate Synthetic Data for the Compas Dataset

The CTGAN model is a GAN-based Deep Learning data synthesizer.
from implementation_functions import *

import pandas as pd
import numpy as np
from prince import FAMD  # Factor analysis of mixed data
from aif360.metrics import BinaryLabelDatasetMetric
from sklearn.model_selection import train_test_split
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotl...
_____no_output_____
MIT
Implementation/Jupyter_Notebooks/Compas_cGAN.ipynb
bendiste/Algorithmic-Fairness
Here we start the GAN work
from sdv.tabular import CTGAN

model = CTGAN()
_____no_output_____
MIT
Implementation/Jupyter_Notebooks/Compas_cGAN.ipynb
bendiste/Algorithmic-Fairness
from sdv.tabular import CTGAN
model = CTGAN()

start_time = time.time()
model.fit(X_train_new)
print("--- %s seconds ---" % (time.time() - start_time))

model.save('my_fariness_Compas_V3.pkl')
loaded = CTGAN.load('my_fariness_Compas_V3.pkl')

# print(X_train.loc[:,"sub_labels"])
available_rows = {}
for row_count in range(8):
    available_rows[row_count] = X_train["sub_labels"].value_counts()[row_count]

target_rows = max(available_rows.values())
max_label = max(available_rows, key=available_rows.get)
...
4477
MIT
Implementation/Jupyter_Notebooks/Compas_cGAN.ipynb
bendiste/Algorithmic-Fairness
Extreme Gradient Boosting Classifier
# Type the desired classifier to train the classification models with (model obj)
xgb = GradientBoostingClassifier()

baseline_stats, cm, ratio_table, preds = baseline_metrics(xgb, X_train, X_test,
                                                           y_train, y_test, sens_attr,
                                                           ...
                                                  AEO Difference
[{'race': 0, 'sex': 0}][{'race': 1, 'sex': 0}]         -0.119925
[{'race': 1, 'sex': 0}][{'race': 0, 'sex': 1}]         -0.034355
[{'race': 0, 'sex': 1}][{'race': 1, 'sex': 1}]          0.007313
[{'race': 0, 'sex': 0}][{'race': 0, 'sex': 1}]         -0....
MIT
Implementation/Jupyter_Notebooks/Compas_cGAN.ipynb
bendiste/Algorithmic-Fairness
Random Forest Classifier
# Type the desired classifier to train the classification models with (model obj)
RF = RandomForestClassifier()

baseline_stats, cm, ratio_table, preds = baseline_metrics(RF, X_train, X_test,
                                                           y_train, y_test, sens_attr,
                                                           ...
                                                  AEO Difference
[{'race': 0, 'sex': 0}][{'race': 1, 'sex': 0}]         -0.119925
[{'race': 1, 'sex': 0}][{'race': 0, 'sex': 1}]         -0.034355
[{'race': 0, 'sex': 1}][{'race': 1, 'sex': 1}]          0.007313
[{'race': 0, 'sex': 0}][{'race': 0, 'sex': 1}]         -0....
MIT
Implementation/Jupyter_Notebooks/Compas_cGAN.ipynb
bendiste/Algorithmic-Fairness
**Data Visualization in Python**

***

**Edited by: Kevin Alexander Gómez**

Contact: kevinalexandr19@gmail.com | [Linkedin](https://www.linkedin.com/in/kevin-alexander-g%C3%B3mez-2b0263111/) | [Github](https://github.com/kevinalexandr19)

***

**Description**

Using this manual, you will develop Python code aimed at ...
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(context="notebook", style="ticks")
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
The information is stored in a CSV file called `rocas.csv`.\
This data has been preprocessed and comes from a publicly available geochemical database called [GEOROC](http://georoc.mpch-mainz.gwdg.de/georoc/Start.asp).\
We will open these files through the `Pandas` library and use the function...
rocas = pd.read_csv("files/rocas.csv", encoding="ISO-8859-1")
rocas
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
We will review the general information of the table using the `info` method:
rocas.info()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
In summary, the table contains a column called `Nombre` holding the petrographic classification, represented by values of type `string` (reported as `object`).\
The columns `SiO2`, `Al2O3`, `FeOT`, `CaO`, `MgO`, `Na2O`, `K2O`, `MnO` and `TiO` represent geochemical concentrations (in wt%) and are...
prd = rocas[rocas["Nombre"] == "Peridotita"].copy()
grn = rocas[rocas["Nombre"] == "Granodiorita"].copy()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
To look at the distribution of the geochemical data across the samples, we will use two kinds of figures:
- `boxplot`: shows the quantitative distribution of the data and its quartiles, and also sets a maximum and minimum based on the interquartile range.\
  Points that fall outside this range are considered *outliers*.
- ...
fig, axs = plt.subplots(1, 2, figsize=(18, 10))

sns.boxplot(ax=axs[0], data=prd[["SiO2", "Al2O3", "FeOT", "CaO", "MgO"]], orient="h",
            flierprops={"marker": "o", "markersize": 4})
axs[0].grid()
axs[0].set_xlabel("%", fontsize=18)

sns.boxplot(ax=axs[1], data=prd[["Na2O", "K2O", "MnO", "TiO"]], orient="h",
            flierprops={"m...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
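The boxplot fences described above are just quartile arithmetic; a NumPy sketch on made-up data (not the `prd` columns):

```python
import numpy as np

# made-up sample with one obvious outlier
x = np.array([2.1, 2.4, 2.5, 2.7, 3.0, 3.1, 3.3, 3.6, 9.0])

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # the whisker limits
outliers = x[(x < lo) | (x > hi)]
print(outliers)  # [9.]
```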
The `boxplot` figures help us visualize the distribution of the data, but we can improve on them using `violinplot`:
fig, axs = plt.subplots(1, 2, figsize=(18, 10))

sns.violinplot(ax=axs[0], data=prd[["SiO2", "Al2O3", "FeOT", "CaO", "MgO"]], orient="h")
axs[0].grid()
axs[0].set_xlabel("%", fontsize=18)

sns.violinplot(ax=axs[1], data=prd[["Na2O", "K2O", "MnO", "TiO"]], orient="h")
axs[1].grid()
axs[1].set_xlabel("%", fontsize=18)

f...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**2.2. Visualizing the correlation matrix with `heatmap`**

Now we will create a correlation matrix for the peridotite samples using the `corr` method:
prd.corr()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
This matrix shows the Pearson correlation for each pair of columns in the table.\
We will use it to build a clean visualization of the different correlations in the table.
# Correlation matrix
corr = prd.corr()

# Generate a triangular mask
mask = np.triu(np.ones_like(corr, dtype=bool))

# Create the figure
fig, ax = plt.subplots(figsize=(10, 8))

# Create a diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)

# Create a heatmap using the mat...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
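What `prd.corr()` computes per pair of columns is the Pearson coefficient; a NumPy sketch on hypothetical columns (names and values are invented, not the `rocas` data):

```python
import numpy as np

rng = np.random.default_rng(0)
si = rng.normal(50, 5, 200)              # hypothetical SiO2-like column
al = 0.4 * si + rng.normal(0, 0.5, 200)  # strongly correlated with si
mg = rng.normal(10, 2, 200)             # independent column

corr = np.corrcoef([si, al, mg])        # 3x3 Pearson correlation matrix
print(np.round(corr, 2))
```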
Let's filter for the correlations greater than 0.7 or less than -0.7:
# Keeping only the pairs with a high correlation
corr = corr.where((corr > 0.7) | (corr < -0.7), 0)

# Triangular mask
mask = np.triu(np.ones_like(corr, dtype=bool))

# Figure
fig, ax = plt.subplots(figsize=(10, 8))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap,
            ...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Now, we will create scatter plots to visualize these 3 pairs. **2.3. Scatter plots with `scatterplot`**We will place these pairs in a list of tuples called `pares`:
pares = [("CaO", "Al2O3"), ("MgO", "Al2O3"), ("MgO", "CaO")]
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
And we will use it with the `scatterplot` function to create a figure with 3 scatter plots:
fig, axs = plt.subplots(1, 3, figsize=(16, 6)) for par, ax in zip(pares, axs): sns.scatterplot(ax=ax, data=prd, x=par[0], y=par[1], edgecolor="black", marker="o", s=12) ax.grid() fig.suptitle("Diagramas de dispersión para pares de elementos con alta correlación (Peridotita)", fontsize=20) plt.tight_layou...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Finally, we will plot the values of these pairs together with the granodiorite samples.
fig, axs = plt.subplots(1, 3, figsize=(16, 6)) for par, ax in zip(pares, axs): sns.scatterplot(ax=ax, data=rocas, x=par[0], y=par[1], marker="o", hue="Nombre", s=12, edgecolor="black", palette=["green", "red"], legend=False) ax.grid() fig.suptitle("Diagramas de dispersión para pares de elementos con alta corr...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**2.4. Histograms and probability distributions with `histplot` and `kdeplot`**We can inspect the univariate distribution of geochemical data using a **histogram** or a **probability distribution**.
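Before plotting, it helps to see what a histogram actually computes: the count of samples falling into each bin. A small NumPy check with made-up values:

```python
import numpy as np

values = np.array([1.0, 1.2, 2.5, 2.7, 2.9, 4.0])
counts, edges = np.histogram(values, bins=3, range=(1.0, 4.0))
# Bins [1, 2), [2, 3) and [3, 4] contain 2, 3 and 1 samples respectively
print(counts, edges)
```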
fig, axs = plt.subplots(1, 2, figsize=(15, 5)) sns.histplot(ax=axs[0], data=rocas, x="CaO", hue="Nombre", bins=20, alpha=0.6, edgecolor="black", linewidth=.5, palette=["green", "red"]) axs[0].set_title("Histograma", fontsize=20) sns.kdeplot(ax=axs[1], data=rocas, x="CaO", hue="Nombre", fill=True, cut=0, palette=["gre...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
It is also possible to inspect the bivariate distribution:
fig, axs = plt.subplots(1, 2, figsize=(15, 5)) sns.histplot(ax=axs[0], data=rocas, x="SiO2", y="FeOT", hue="Nombre", alpha=0.8, palette=["green", "red"]) axs[0].set_title("Histograma", fontsize=20) axs[0].grid() sns.kdeplot(ax=axs[1], data=rocas, x="SiO2", y="FeOT", hue="Nombre", fill=True, cut=0, palette=["green", "...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**3. Pyrolite*******Pyrolite is a library that lets you create ternary diagrams from geochemical data.**We can check that `pyrolite` is installed using the following command:
!pip show pyrolite
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Now, we will import the `pyroplot` function from the `plot` module:
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(context="notebook", style="ticks") from pyrolite.plot import pyroplot rocas = pd.read_csv("files/rocas.csv") prd = rocas[rocas["Nombre"] == "Peridotita"].copy() grn = rocas[rocas["Nombre"] == "Granodiorita"].copy()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
And we will create a ternary diagram; to do this, we use the `pyroplot` accessor on the DataFrame that contains the geochemical data:
fig, ax = plt.subplots(figsize=(6, 6)) ax1 = prd[["SiO2", "Al2O3", "FeOT"]].pyroplot.scatter(ax=ax, c="green", s=5, marker="o") ax1.grid(axis="r", linestyle="--", linewidth=1) plt.suptitle("Diagrama ternario $SiO_{2} - Al_{2}O_{3} - FeOT$", fontsize=18) plt.show()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
We can set limits on the ternary diagram using the `set_ternary_lim` method.\We can also change the label of each corner using `set_tlabel`, `set_llabel`, and `set_rlabel`:
fig, ax = plt.subplots(figsize=(6, 6)) ax1 = prd[["SiO2", "Al2O3", "FeOT"]].pyroplot.scatter(ax=ax, c="green", s=5, marker="o") ax1.set_ternary_lim(tmin=0.5, tmax=1.0, lmin=0.0, lmax=0.5, rmin=0.0, rmax=0.5) ax1.set_tlabel("$SiO_{2}$") ax1.set_llabel("$Al_{2}O_{3}$") ax1.set_...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
We can also plot probability distributions using the `density` method:
fig, axs = plt.subplots(1, 2, figsize=(15, 6)) prd[["Na2O", "CaO", "K2O"]].pyroplot.density(ax=axs[0]) prd[["Na2O", "CaO", "K2O"]].pyroplot.density(ax=axs[1], contours=[0.95, 0.66, 0.33], linewidths=[1, 2, 3], linestyles=["-.", "--", "-"], colors=["purple", "green", "blue"]) plt.suptitle("Diagrama ternario $Na_{2}O ...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Now, we will create a figure showing the `SiO2 - Al2O3 - CaO` relationship for the peridotite and granodiorite samples:
fig, ax = plt.subplots(figsize=(8, 8)) ax1 = prd[["SiO2", "Al2O3", "CaO"]].pyroplot.scatter(c="g", s=5, marker="o", ax=ax, alpha=0.7, label="Peridotita") prd[["SiO2", "Al2O3", "CaO"]].pyroplot.density(ax=ax, contours=[0.95, 0.66, 0.33], colors=["blue"]*3, alpha=0.6) grn[["SiO2", "Al2O3", "CaO"]].pyroplot.scatter(c="r...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Finally, we will create another figure showing the `SiO2 - Al2O3 - (FeOT + MgO)` relationship for the peridotite and granodiorite samples.\To do this, we will create a column called `FeOT + MgO` in both tables:
prd["FeOT + MgO"] = prd["FeOT"] + prd["MgO"] grn["FeOT + MgO"] = grn["FeOT"] + grn["MgO"]
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Now, we can use this new column in the figure:
fig, ax = plt.subplots(figsize=(8, 8)) ax1 = prd[["SiO2", "Al2O3", "FeOT + MgO"]].pyroplot.scatter(c="g", s=5, marker="o", ax=ax, alpha=0.6, label="Peridotita") prd[["SiO2", "Al2O3", "FeOT + MgO"]].pyroplot.density(ax=ax, contours=[0.95, 0.66, 0.33], colors=["blue"]*3, alpha=0.6) grn[["SiO2", "Al2O3", "FeOT + MgO"]]....
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**4. Mplstereonet*******This library lets you create equal-angle (Wulff net) and equal-area (Schmidt net) stereographic figures.**We will start by checking whether `mplstereonet` is installed:
!pip show mplstereonet
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Now, we will import `mplstereonet` and load the `data_estructural.csv` file:
import pandas as pd import numpy as np import matplotlib.pyplot as plt import mplstereonet datos = pd.read_csv("files/data_estructural.csv") datos.head()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**4.1. Great-circle diagram (Beta diagram)**This diagram is used to represent planar elements.\In the following figure, we will use the `plane` function to represent each plane. This function takes a strike (`strike`) and a dip (`dip`).\It is also possible to add the rake...
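The pole plotted later in this section is simply the normal to the plane. Assuming the right-hand-rule convention (the plane dips 90° clockwise from the strike direction), its orientation can be computed by hand:

```python
def pole_to_plane(strike, dip):
    # Dip direction is strike + 90; the pole trend points the opposite way
    trend = (strike + 270.0) % 360.0
    # A horizontal plane (dip 0) has a vertical pole (plunge 90), and vice versa
    plunge = 90.0 - dip
    return trend, plunge

print(pole_to_plane(120, 45))  # -> (30.0, 45.0)
```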
strike = datos.direccion dip = datos.buzamiento rake = datos.cabeceo
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
To create the stereographic figure we will use the `add_subplot` method with the `projection="stereonet"` option.> Note: we use `constrained_layout=True` to keep the angle labels in the correct position.
fig = plt.figure(figsize=(5, 5), constrained_layout=True) ax = fig.add_subplot(111, projection="equal_angle_stereonet") ax.plane(strike, dip, c="black", linewidth=0.5) ax.grid() plt.show()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**4.2. Pole diagram (Pi diagram)**Used when the measurements to represent in the diagram are very numerous.\In the following figure, we will use the `pole` function to represent each pole. This function takes a strike (`strike`) and a dip (`dip`).
fig = plt.figure(figsize=(5, 5), constrained_layout=True) ax = fig.add_subplot(111, projection="equal_angle_stereonet") ax.pole(strike, dip, c="red", markersize=5) ax.grid() plt.show()
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**4.3. Pole density diagram**Using the Schmidt (equal-area) net, we can count the poles directly and compute their statistical density per unit area, determining the dominant strikes and dips.
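The equal-area property is what makes the density count per unit area meaningful: equal solid angles map to equal plot areas. A sketch of the radial formula of the Lambert (Schmidt) projection, assuming a unit-radius net:

```python
import numpy as np

def schmidt_radius(plunge_deg, R=1.0):
    # Lambert equal-area projection: r = R * sqrt(2) * sin((90 - plunge) / 2)
    return R * np.sqrt(2.0) * np.sin(np.radians((90.0 - plunge_deg) / 2.0))

print(schmidt_radius(90))  # a vertical line plots at the center (r = 0)
print(schmidt_radius(0))   # a horizontal line plots on the primitive circle (r = 1)
```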
fig = plt.figure(figsize=(5, 5), constrained_layout=True) ax = fig.add_subplot(111, projection="equal_area_stereonet") cax = ax.density_contourf(strike, dip, measurement="poles", cmap="gist_earth", sigma=1.5) ax.density_contour(strike, dip, measurement="poles", colors="black", sigma=1.5) ax.pole(strike, dip, c="re...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
**4.4. Interactive stereonet**Using an interactive visualization tool, we will create a stereonet in which we can adjust the strike, dip, and rake values of a plane.
import ipywidgets as widgets def stereonet(rotation, strike, dip, rake): fig = plt.figure(figsize=(6, 6), constrained_layout=True) ax = fig.add_subplot(111, projection="equal_angle_stereonet", rotation=rotation) ax.plane(strike, dip, color="green", linewidth=2) ax.pole(strike, dip, color="red"...
_____no_output_____
MIT
notebooks/2c_visualizacion_datos.ipynb
kevinalexandr19/manual-python-geologia
Road Traffic Accident Injury Prediction Importing the required packages
import numpy as np import pandas as pd # from IPython.display import display, display_html , HTML import matplotlib.pyplot as plt # import seaborn as sns # from sklearn.decomposition import PCA # from sklearn.model_selection import train_test_split # from sklearn.metrics import f1_score, accuracy_score, confusion_m...
_____no_output_____
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Constant & global variable definitions
# List of useless feature names, to be dropped during eval and test unusedFeatureList = [] # Most frequent value of each feature, used to fill gaps. Key: feature name, value: the feature's most frequent value featureMostfreqValueDict = {}
_____no_output_____
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Function definitions
def countDFNull(aimDF): nullAmount = aimDF.isnull().sum().sum() print("Null数量 : ", nullAmount) return nullAmount
_____no_output_____
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Reading the datasets
print("读取trainDataset") trainDatasetDF = pd.read_csv('dataset/train.csv', header=0, index_col=None) trainDatasetDF.Name = 'train dataset' # print(trainDatasetDF.head(2)) print("读取evalDataset") evalDatasetDF = pd.read_csv('dataset/val.csv', header=0, index_col=0) evalDatasetDF.Name = 'eval dataset' # print(evalDatasetD...
读取trainDataset 读取evalDataset 读取testDataset DF Name : train dataset DF Shape : (79786, 54) DF Name : eval dataset DF Shape : (2836, 54) DF Name : test dataset DF Shape : (2836, 53)
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Data cleaning
topBadFeatureNumbers = 20 # number of features with the most missing samples to display (showing all is too long) badFeatureMaxMissingSample = 500 # if a feature is missing in more samples than this, treat it as a bad feature badSampleMaxMissingFeature = 1 # if a sample is missing more features than this, treat it as a bad sample
_____no_output_____
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Find features with few (non-null) samples
# Count null values for every feature (each column) trainFeatureNullSeries = trainDatasetDF.isnull().sum().sort_values(ascending=False) # descending order print("type : ", type(trainFeatureNullSeries)) # averageTrainFeatureNull = trainFeatureNullSeries.sum()/len(trainFeatureNullSeries) # print("averageTrainFeatureNull : ", averageTrainFeatureNull) trainFeatur...
type : <class 'pandas.core.series.Series'> {'situ': 302, 'infra': 226, 'surf': 190, 'prof': 185, 'plan': 176, 'atm': 163, 'jour': 163, 'catr': 163, 'long': 163, 'col': 163, 'lat': 163, 'int': 163, 'agg': 163, 'com': 163, 'dep': 163}
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Find samples with few (non-null) features
# Count null values for every sample (each row) trainSampleNullSeries = trainDatasetDF.T.isnull().sum().sort_values(ascending=False) # descending order trainSampleNullDict = trainSampleNullSeries.to_dict() print("type : ", type(trainSampleNullSeries)) badTrainSampleDict = {key:trainSampleNullDict[key] for key in trainSampleNullDict if trainSampleNullDict[...
(79623, 41)
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Find features whose values are uninformative
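This screening boils down to `nunique` plus a band of acceptable distinct-value counts per column. A toy sketch (the thresholds here are illustrative, not the notebook's 300/2):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],            # unique per row: too many values
    "const": [0, 0, 0, 0, 0],         # a single value: too few
    "cat": ["a", "b", "a", "b", "a"], # informative
})
n_unique = df.nunique()
keep = [col for col in df.columns if 2 <= n_unique[col] <= 3]
print(keep)  # only "cat" survives
```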
# Features with too many or too few possible values are uninformative tooMuchValueFeatureThreshold = 300 # if a feature has more possible values than this, treat it as uninformative tooLessValueFeatureThreshold = 2 # if a feature has fewer possible values than this, treat it as uninformative featureValueCountDict = {} # number of distinct values per feature for loopIdx, colName in enumerate(trainDatasetDF): tempSeries = trainDatasetDF[colName] tempSeriesValueCountDict = tempSeries....
jour float64 31 mois float64 12 lum float64 5 dep object 107 agg float64 2 int float64 9 atm float64 10 col float64 8 catr float64 8 prof float64 4 plan float64 4 surf float64 9 infra float64 10 situ float64 7 num_veh object 27 place int64 10 catu int64 3 grav int64 4 sexe int64 2 an_nais int64 103 trajet int64 8 secu1...
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Count the null values remaining after cleaning
nullAfterClean = trainDatasetDF.isnull().sum().sum() print(nullAfterClean)
264
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Missing value imputation
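`SimpleImputer(strategy='most_frequent')` fills each column with its mode. The same effect, kept as a DataFrame, can be sketched in plain pandas with toy data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan, 1.0, 2.0], "y": ["a", "a", None, "b"]})
# fillna with the per-column mode mirrors strategy="most_frequent"
filled = df.fillna(df.mode().iloc[0])
print(filled)
```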
from sklearn.impute import SimpleImputer trainImputer = SimpleImputer(missing_values=np.nan, strategy='most_frequent') trainDatasetDF = pd.DataFrame(trainImputer.fit_transform(trainDatasetDF), columns=trainDatasetDF.columns) nullAfterClean = countDFNull(trainDatasetDF) print(nullAfterClean)
Null数量 : 264 264
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Data resampling Viewing the training-set label distribution
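Random oversampling duplicates minority-class rows until every class matches the majority count. A minimal pandas sketch with toy labels (not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({"grav": [0, 0, 0, 0, 0, 0, 1, 1], "feat": range(8)})
target = df["grav"].value_counts().max()
# Sample each class with replacement up to the majority count
parts = [group.sample(target, replace=True, random_state=0)
         for _, group in df.groupby("grav")]
balanced = pd.concat(parts)
print(balanced["grav"].value_counts())
```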
countTrainDatasetLabel = trainDatasetDF['grav'].value_counts() print(countTrainDatasetLabel) maxTrainLabelAmount = countTrainDatasetLabel.max() print(maxTrainLabelAmount) plt.figure(figsize=(10,5)) plt.title("Training set label distribution") plt.xlabel('Label') plt.ylabel('Count') plt.grid() plt.xticks(labels=['Unharmed','Killed','Hospitaliz...
33205 33205 33205 33205 <class 'pandas.core.frame.DataFrame'> Int64Index: 132820 entries, 1 to 48653 Data columns (total 41 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Num_Acc 132820 non-null int64 1 jour 132820 non-null float64 2 mois ...
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Model training
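The pipelines in the next cell are only sketched (their preprocessing steps are elided). A self-contained illustration of the `make_pipeline` pattern on toy data, with steps chosen for illustration rather than taken from the notebook:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])

# Chain preprocessing and a classifier into a single estimator
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=10, random_state=0))
clf.fit(X, y)
preds = clf.predict(X)
print(preds)
```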
trainXDF = trainDatasetDF.drop(columns=['grav']) trainYDF = trainDatasetDF.loc[:, 'grav'] evalXDF = evalDatasetDF.drop(columns=['grav']) evalYDF = evalDatasetDF.loc[:, 'grav'] preprocessorPipeline = make_pipeline() randomForestPipeline = make_pipeline(preprocessorPipeline, ) adaBoostPipeline = make_pipeline(preprocess...
_____no_output_____
MIT
RoadTrafficInjury/RoadTrafficInjury.ipynb
leizhenyu-lzy/BigHomework
Recurrent Neural Networks (RNN)In this lesson we will learn how to process sequential data (sentences...
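Before the library code, the core recurrence can be written in a few lines of NumPy: each step mixes the current input with the previous hidden state through weight matrices (random here, learned in practice) and a tanh nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W_xh = rng.normal(size=(input_dim, hidden_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_step(x_t, h_prev):
    # h_t = tanh(x_t W_xh + h_{t-1} W_hh + b)
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):  # a sequence of length 5
    h = rnn_step(x_t, h)
print(h.shape)
```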
import numpy as np import pandas as pd import random import torch import torch.nn as nn SEED = 1234 def set_seeds(seed=1234): """Set seeds for reproducibility.""" np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) # multi-...
cuda
MIT
notebooks/13_Recurrent_Neural_Networks.ipynb
udapy/MadeWithML
Load data We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`)
import numpy as np import pandas as pd import re import urllib # Load data url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv" df = pd.read_csv(url, header=0) # load df = df.sample(frac=1).reset_index(drop=True) # shuffle df.head()
_____no_output_____
MIT
notebooks/13_Recurrent_Neural_Networks.ipynb
udapy/MadeWithML