Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars)
Degree distribution
from collections import defaultdict import numpy as np def plotDegreeDistribution(G): degs = defaultdict(int) for i in G.degree().values(): degs[i]+=1 items = sorted ( degs.items () ) x, y = np.array(items).T y_sum = np.sum(y) y = [float(i)/y_sum for i in y] plt.plot(x, y, 'b-o') plt.xs...
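The cell above is truncated in this dump; a minimal, hedged sketch of a complete degree-distribution plot (assuming NetworkX 2.x, where `G.degree()` yields `(node, degree)` pairs, and `matplotlib.pyplot` imported as `plt`) might look like this:

```python
# Hypothetical sketch, not the original cell: plot the degree distribution of a
# NetworkX graph as P(k) versus k on log-log axes.
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

def plot_degree_distribution(G):
    degs = defaultdict(int)
    for _, k in G.degree():          # NetworkX 2.x: (node, degree) pairs
        degs[k] += 1
    x, y = np.array(sorted(degs.items())).T
    y = y / y.sum()                  # normalize counts to probabilities
    plt.loglog(x, y, 'b-o')
    plt.xlabel('$k$')
    plt.ylabel('$P(k)$')
    plt.show()

# plot_degree_distribution(nx.barabasi_albert_graph(200, 2))
```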
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Introduction to Network Science ****** Network Science: Analyzing Network Structure ****** Wang Chengjun (王成军) wangchengjun@nju.edu.cn Computational Communication http://computational-communication.com Regular networks
import networkx as nx import matplotlib.pyplot as plt RG = nx.random_graphs.random_regular_graph(3,200) # generate a regular graph RG with 200 nodes, each node having 3 neighbors pos = nx.spectral_layout(RG) # define a layout; the spectral layout is used here, other layouts are introduced later (note the differences in the resulting figures) nx.draw(RG,pos,with_labels=False,node_size = 30) # draw the regular graph; with_labels controls whether nodes are labelled (numbered), node_size is the node diameter ...
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
ER random network
import networkx as nx import matplotlib.pyplot as plt ER = nx.random_graphs.erdos_renyi_graph(200,0.05) # generate a random graph with 200 nodes, each pair of nodes connected with probability 0.05 pos = nx.shell_layout(ER) # define a layout; the shell layout is used here nx.draw(ER,pos,with_labels=False,node_size = 30) plt.show() plotDegreeDistribution(ER)
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Small-world network
import networkx as nx import matplotlib.pyplot as plt WS = nx.random_graphs.watts_strogatz_graph(200,4,0.3) # generate a small-world network with 200 nodes, 4 nearest neighbors per node, and rewiring probability 0.3 pos = nx.circular_layout(WS) # define a layout; the circular layout is used here nx.draw(WS,pos,with_labels=False,node_size = 30) # draw the graph plt.show() plotDegreeDistribution(WS) nx.diameter(WS...
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
BA network
import networkx as nx import matplotlib.pyplot as plt BA= nx.random_graphs.barabasi_albert_graph(200,2) # generate a BA scale-free network with n=200 and m=2 pos = nx.spring_layout(BA) # define a layout; the spring layout is used here nx.draw(BA,pos,with_labels=False,node_size = 30) # draw the graph plt.show() plotDegreeDistribution(BA) BA= nx.random_graphs.barabasi_albert_grap...
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Homework: - Read Barabási (1999), Internet: Diameter of the World-Wide Web. Nature 401. - Plot the out-degree and in-degree distributions of the WWW network (see the sketch after the next cell). - Use the BA model to generate networks with N nodes and power-law exponent $\gamma$. - Compute the relationship between the average path length d and the number of nodes.
Ns = [i*10 for i in [1, 10, 100, 1000]] ds = [] for N in Ns: print(N) BA= nx.random_graphs.barabasi_albert_graph(N,2) d = nx.average_shortest_path_length(BA) ds.append(d) plt.plot(Ns, ds, 'r-o') plt.xlabel('$N$', fontsize = 20) plt.ylabel('$<d>$', fontsize = 20) plt.xscale('log') plt.show()
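For the homework item on in- and out-degree distributions, a hedged sketch for a directed graph follows (the WWW dataset itself is not loaded here; `nx.scale_free_graph` is used only as a stand-in):

```python
# Illustrative sketch only: in-/out-degree distributions of a directed graph.
from collections import Counter
import networkx as nx
import matplotlib.pyplot as plt

def plot_in_out_degree(G):
    n = G.number_of_nodes()
    for degrees, style, label in [(dict(G.in_degree()), 'r-o', 'in-degree'),
                                  (dict(G.out_degree()), 'b-s', 'out-degree')]:
        counts = Counter(degrees.values())
        ks = sorted(k for k in counts if k > 0)     # drop k=0 for log axes
        plt.loglog(ks, [counts[k] / n for k in ks], style, label=label)
    plt.xlabel('$k$')
    plt.ylabel('$P(k)$')
    plt.legend()
    plt.show()

# plot_in_out_degree(nx.scale_free_graph(1000))
```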
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Now we have to put together the different datasets.
import pandas as pd files_df = pd.read_pickle("data/clean/files_df.pkl")
_____no_output_____
MIT
S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb
jtiagosg/batch3-students
We remove from the dataframe those rows whose origin is the website (these rows have `WEBSITE` in all the values) and those that come from the API (these rows have `API` in all the values).
files_df.shape website_ids = files_df[files_df.tierafterorder.isin(['WEBSITE'])].index api_ids = files_df[files_df.tierafterorder.isin(['API'])].index files_df = files_df[~files_df.tierafterorder.isin(['WEBSITE', 'API'])] files_df.shape files_df.tierafterorder.value_counts() scraped_df = pd.read_pickle('data/clean/scra...
_____no_output_____
MIT
S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb
jtiagosg/batch3-students
We concatenate the dataframes that have different ids.
train_df = pd.concat( [ pd.concat([files_df, api_df], sort=True), scraped_df ], sort=True ) train_df.shape
_____no_output_____
MIT
S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb
jtiagosg/batch3-students
Now we join the files that share an index.
train_df = train_df.drop(columns=['returned', 'storeid']).join(targets_df).join(storeid_df) train_df.shape train_df.to_pickle('data/clean/train_df_merged.pkl')
_____no_output_____
MIT
S02 - Data Wrangling/HCKT02 - Data Wrangling/instructor_solution/5.putting_all_together.ipynb
jtiagosg/batch3-students
Recommendations with IBM. In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform. You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code p...
%autosave 180 import pandas as pd import numpy as np import matplotlib.pyplot as plt import project_tests as t import pickle import seaborn as sns import re import nltk from nltk.corpus import stopwords from nltk.stem.wordnet import WordNetLemmatizer from nltk.tokenize import word_tokenize from sklearn.feature_extract...
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Part I: Exploratory Data Analysis. Use the dictionary and cells below to provide some insight into the descriptive statistics of the data. `1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of ti...
# make a groupby instance and count how many articles were read by each user email_grouped_df = df.groupby('email') num_article_email = email_grouped_df['article_id'].count() print("Mean # article :",num_article_email.mean()) print("Quantile 0.25 , 0.5, 0.75: " , num_article_email.quantile(0.25), num_article_email.quan...
50% of individuals interact with 3 articles or fewer. The maximum number of user-article interactions by any one user is 364.
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
# Find and explore duplicate articles df_content.head() check_dupl_df_1 = df_content[df_content.duplicated(['article_id'])] check_dupl_df_1 # Remove any rows that have the same article_id - only keep the first df_content.drop_duplicates(subset='article_id', keep='first', inplace=True) check_dupl_df_1 = df_content[df_co...
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`3.` Use the cells below to find:**a.** The number of unique articles that have an interaction with a user. **b.** The number of unique articles in the dataset (whether they have any interactions or not).**c.** The number of unique users in the dataset. (excluding null values) **d.** The number of user-article interac...
unique_articles = len(df.article_id.unique()) # The number of unique articles that have at least one interaction total_articles = len(df_content.article_id.unique()) # The number of unique articles on the IBM platform df_email_na_dropped = df.dropna(subset=['email']) unique_users = len(df_email_na_dropped.email.unique(...
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belong...
# df_content.head(3) article_id_grouped_df = df.groupby('article_id') print(article_id_grouped_df['email'].count().sort_values(ascending=False).index[0]) print(article_id_grouped_df['email'].count().sort_values(ascending=False).values[0]) most_viewed_article_id = '1429.0'# The most viewed article in the dataset as a st...
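The `email_mapper` mentioned above is not shown in this excerpt; a minimal sketch of such a mapper (the function body is an assumption, not the notebook's code) could be:

```python
# Hypothetical sketch of an email -> integer user id mapper.
def email_mapper(df):
    coded_dict, email_encoded = {}, []
    for email in df['email']:
        if email not in coded_dict:
            coded_dict[email] = len(coded_dict) + 1   # assign the next id
        email_encoded.append(coded_dict[email])
    return email_encoded

# df['user_id'] = email_mapper(df)
# del df['email']
```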
It looks like you have everything right here! Nice job!
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Part II: Rank-Based Recommendations. Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with. `1.` ...
def get_top_articles(n, df=df): ''' INPUT: n - (int) the number of top articles to return df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: top_articles - (list) A list of the top 'n' article titles ''' article_id_grouped_df = df.groupby(['title']) ...
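A hedged sketch of the companion `get_top_article_ids` helper, assuming `df` has one row per user-article interaction:

```python
# Sketch: ids of the n most-interacted-with articles.
def get_top_article_ids(n, df=df):
    counts = df.groupby('article_id')['title'].count().sort_values(ascending=False)
    return list(counts.index[:n])

# get_top_article_ids(10)
```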
Your top_5 looks like the solution list! Nice job. Your top_10 looks like the solution list! Nice job. Your top_20 looks like the solution list! Nice job.
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Part III: User-User Based Collaborative Filtering. `1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns. * Each **user** should only appear in each **row** once. * Each **article** should only show up in one **column**. * **If a user has interacted...
# create the user-article matrix with 1's and 0's def create_user_item_matrix(df): ''' INPUT: df - pandas dataframe with article_id, title, user_id columns OUTPUT: user_item - user item matrix Description: Return a matrix with user ids as rows and article ids on the columns with ...
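One possible way to build such a binary matrix with pandas (a sketch, assuming `user_id` and `article_id` columns exist after the email mapping):

```python
# Sketch: rows are users, columns are articles, entries are 1/0 interaction flags.
def create_user_item_matrix(df):
    user_item = df.groupby(['user_id', 'article_id'])['title'].count().unstack()
    return user_item.notnull().astype(int)

# user_item = create_user_item_matrix(df)
```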
You have passed our quick tests! Please proceed!
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here ...
def find_similar_users(user_id, user_item=user_item): ''' INPUT: user_id - (int) a user_id user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise OUTPUT: similar_users - (list) an ordered list where the closes...
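Since the rows of `user_item` are binary, the dot product of two rows counts shared articles and can serve as the similarity measure; a minimal sketch:

```python
# Sketch: rank all other users by dot-product similarity with the given user.
def find_similar_users(user_id, user_item=user_item):
    sims = user_item.dot(user_item.loc[user_id])   # shared-article counts
    sims = sims.drop(user_id).sort_values(ascending=False)
    return list(sims.index)

# find_similar_users(1)[:10]
```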
The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 3870, 131, 4201, 46, 3697] The 5 most similar users to user 3933 are: [1, 3782, 23, 203, 4459] The 3 most similar users to user 46 are: [4201, 3782, 23]
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
def get_article_names(article_ids, df=df): ''' INPUT: article_ids - (list) a list of article ids df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: article_names - (list) a list of article names associated with the list of article ids (this is iden...
If this is all you see, you passed all of our tests! Nice job!
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above. * Instead of choosing arbitrarily among users who are equally close to a given user, choose the users that have the most total article interactions before choosing those with fewer article interactions. * ...
def get_top_sorted_users(user_id, df=df, user_item=user_item): ''' INPUT: user_id - (int) df - (pandas dataframe) df as defined at the top of the notebook user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise ...
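A sketch of the tie-breaking described above: neighbors are sorted first by similarity, then by their total number of interactions (column names here are assumptions):

```python
# Sketch: neighbors_df with similarity and interaction counts for tie-breaking.
import pandas as pd

def get_top_sorted_users(user_id, df=df, user_item=user_item):
    sims = user_item.dot(user_item.loc[user_id]).drop(user_id)
    n_inter = df.groupby('user_id')['article_id'].count()
    neighbors_df = pd.DataFrame({'neighbor_id': sims.index,
                                 'similarity': sims.values,
                                 'num_interactions': n_inter.reindex(sims.index).values})
    return neighbors_df.sort_values(['similarity', 'num_interactions'],
                                    ascending=False).reset_index(drop=True)
```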
The top 10 recommendations for user 20 are the following article ids: ['1162.0', '1351.0', '1164.0', '491.0', '1186.0', '14.0', '1429.0', '162.0', '939.0', '813.0'] The top 10 recommendations for user 20 are the following article names: ['analyze energy consumption in buildings', 'model bike sharing data with spss', '...
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each question, following the comments below.
### Tests with a dictionary of results neighbor_df_1 = get_top_sorted_users(1, df=df, user_item=user_item) neighbor_df_131 = get_top_sorted_users(131, df=df, user_item=user_item) user1_most_sim = neighbor_df_1.neighbor_id[0].item()# Find the user that is most similar to user 1 user131_10th_sim = neighbor_df_131.neigh...
This all looks good! Nice job!
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users. Rank based recommendation is suitable for a new user because it only depe...
new_user = '0.0' # top_10 = get_top_articles(10) top_10 = get_top_article_ids(10, df=df) # What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles. # Provide a list of the top 10 article ids you would give to top_10 = list(map(str, top_10)) new_user_recs = top_10# You...
That's right! Nice job!
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED). Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to creat...
def make_content_recs(article_id, df_content, df, m=10): ''' INPUT: article_id = (int) a article id in df_content m - (int) the number of recommendations you want for the user df_content - (pandas dataframe) df_content as defined at the top of the notebook df - (pandas dataframe) df as defined...
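A minimal sketch of one such content-based approach, using TF-IDF over `doc_full_name` and cosine similarity (the notebook's actual recommender may use a different text field or tokenization):

```python
# Sketch: recommend the m articles most similar to a given one (TF-IDF cosine).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def content_recs(article_id, df_content, m=10):
    tfidf = TfidfVectorizer(stop_words='english')
    X = tfidf.fit_transform(df_content['doc_full_name'].fillna(''))
    pos = int(np.flatnonzero((df_content['article_id'] == article_id).values)[0])
    sims = cosine_similarity(X[pos], X).ravel()
    order = sims.argsort()[::-1]
    order = order[order != pos][:m]
    return df_content['article_id'].iloc[order].tolist()
```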
[730, 194, 53, 470, 1005, 980, 423, 266, 681, 670] ************************************************************ ['Developing for the IBM Streaming Analytics service', 'Data science for real-time streaming analytics', 'Introducing Streams Designer', 'What’s new in the Streaming Analytics service on Bluemix', 'Real-time ...
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender? This part is ...
def make_content_recs_2(article_id, df_content, df, m=10): ''' INPUT: article_id = (int) a article id in df_content m - (int) the number of recommendations you want for the user df_content - (pandas dataframe) df_content as defined at the top of the notebook df - (pandas dataframe) df as defin...
input only user_id or article_id [] [] [384, 805, 48, 662, 809, 161, 893, 686, 723, 655] ['Continuous Learning on Watson', 'Machine Learning for everyone', 'Data Science Experience Documentation', 'Build Deep Learning Architectures With Neural Network Modeler', 'Use the Machine Learning Library', 'Use the Machine Learn...
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Part V: Matrix Factorization. In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform. `1.` You should have already created a **user_item** matrix above in **question 1** of **Part III**. This first question here will just...
# Load the matrix here user_item_matrix = pd.read_pickle('user_item_matrix.p') # quick look at the matrix user_item_matrix.head() user_item_matrix.shape user_item_matrix.to_numpy()
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# Perform SVD on the User-Item Matrix Here u, s, vt = np.linalg.svd(user_item_matrix)# use the built in to get the three matrices s.shape, u.shape, vt.shape
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Here, the user-item matrix passed to `linalg.svd` has no missing values: all elements in the matrix are 0 or 1. In the previous lesson, the matrix contained many null cells, so it could not be passed to SVD directly. `3.` Now for the tricky part: how do we choose the number of latent features to use? Running the below cel...
num_latent_feats = np.arange(10,700+10,20) sum_errs = [] for k in num_latent_feats: # restructure with k latent features s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :] # take dot product user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new)) # compute error for each p...
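The truncated cell above appears to sweep the number of latent features k and measure reconstruction accuracy; a hedged sketch of that computation (assuming `u`, `s`, `vt` from the SVD cell and `user_item_matrix` loaded earlier):

```python
# Sketch: accuracy of the rank-k reconstruction as a function of k.
import numpy as np
import matplotlib.pyplot as plt

num_latent_feats = np.arange(10, 710, 20)
sum_errs = []
for k in num_latent_feats:
    s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
    user_item_est = np.around(u_new @ s_new @ vt_new)      # rank-k estimate
    diffs = np.subtract(user_item_matrix.values, user_item_est)
    sum_errs.append(np.sum(np.abs(diffs)))

plt.plot(num_latent_feats, 1 - np.array(sum_errs) / user_item_matrix.size)
plt.xlabel('Number of latent features')
plt.ylabel('Accuracy')
plt.show()
```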
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of whether we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in ...
df_train = df.head(40000) df_test = df.tail(5993) def create_test_and_train_user_item(df_train, df_test): ''' INPUT: df_train - training dataframe df_test - test dataframe OUTPUT: user_item_train - a user-item matrix of the training dataframe (unique users for each r...
Awesome job! That's right! All of the test movies are in the training data, but there are only 20 test users that were also in the training set. All of the other users that are in the test set we have no data on. Therefore, we cannot make predictions for these users using SVD.
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features make sense to keep based on the accurac...
# fit SVD on the user_item_train matrix u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below # Use these cells to see how well you can use the training # decomposition to predict on test data both_rows = user_item_train.index.isin(test_idx) rows_mask = np.inte...
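A hedged sketch of the prediction-on-test step the truncated cell works toward (assuming `user_item_train` and `user_item_test` from the previous cell; the value of k is illustrative only):

```python
# Sketch: keep only test users/articles that also appear in the training matrix,
# then reconstruct their interactions from the training SVD factors.
import numpy as np

u_train, s_train, vt_train = np.linalg.svd(user_item_train)

common_users = user_item_train.index.isin(user_item_test.index)
common_arts = user_item_train.columns.isin(user_item_test.columns)

u_test = u_train[common_users, :]                 # rows for shared users
vt_test = vt_train[:, common_arts]                # columns for shared articles

k = 50                                            # illustrative choice
est = np.around(u_test[:, :k] @ np.diag(s_train[:k]) @ vt_test[:k, :])

# comparison target: the test matrix restricted to the same users/articles
# target = user_item_test.loc[user_item_train.index[common_users],
#                             user_item_train.columns[common_arts]]
```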
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine whether the recommendations you make with any of the above recommendation systems are an improvement over how users currently find articles. 1. Brief summary of...
print("# of row / sum of readings / sum of difference") for i in range(20): print(i, " ",user_item_test.iloc[i].abs().sum(), " ",diffs_test.iloc[i].abs().sum())
# of row / sum of readings / sum of difference 0 2.0 12.0 1 7.0 31.0 2 5.0 16.0 3 5.0 6.0 4 1.0 5.0 5 32.0 48.0 6 3.0 36.0 7 55.0 65.0 8 1.0 2.0 9 26.0 26.0 10 8.0 24.0 11 1.0 2.0 12 1.0 2.0...
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Extras. Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capabl...
from subprocess import call call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
_____no_output_____
MIT
notebook/Recommendations_with_IBM.ipynb
dalpengholic/Udacity_Recommendations_with_IBM
Generalised Regression. In this notebook, we will build a generalised regression model on the **electricity consumption** dataset. The dataset contains two variables: year and electricity consumption.
#importing libraries import pandas as pd import matplotlib.pyplot as plt from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.pipeline import Pipeline from sklearn import metrics #fetching data elec_cons = pd.read_csv("total-electricity-consumption-us.csv",...
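A minimal sketch of the kind of polynomial-regression pipeline the (truncated) cell builds; the degree and the column names `Year` and `Consumption` are assumptions, not values taken from the dataset:

```python
# Sketch: polynomial features + linear regression in one scikit-learn pipeline.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

elec_cons = pd.read_csv("total-electricity-consumption-us.csv")
X = elec_cons[['Year']].values          # assumed column name
y = elec_cons['Consumption'].values     # assumed column name

model = Pipeline([
    ('poly', PolynomialFeatures(degree=3)),
    ('linear', LinearRegression()),
])
model.fit(X, y)
print(model.score(X, y))                # R^2 on the training data
```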
[1, 2, 3] [0.84237474021761372, 0.99088967445535958, 0.9979789881969624] [0.81651704638268097, 0.98760805026754717, 0.99848974839924587]
MIT
Section 16/AdvanceReg/Teclov_generalised_regression.ipynb
ashokjohn/ML_RealWorld
Import library
# !pip install --upgrade tables # !pip install eli5 # !pip install xgboost # !pip install hyperopt import pandas as pd import numpy as np import xgboost as xgb from sklearn.metrics import mean_absolute_error as mae from sklearn.model_selection import cross_val_score from hyperopt import hp, fmin, tpe, STATUS_OK impo...
_____no_output_____
MIT
matrix_two/day5.ipynb
jedrzejd/dw_matrix_car
Feature Engineering
SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: continue df[feat + SUFFIX_CAT] = factorized_values df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 ...
Training with params: {'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001} 8021.26782298684 Training with params: {'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.2, 'm...
MIT
matrix_two/day5.ipynb
jedrzejd/dw_matrix_car
Best Config XGBoost
feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-poja...
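A hedged sketch of fitting XGBoost with one of the parameter sets printed above and scoring it with cross-validation (the target column name `price_value` is an assumption):

```python
# Sketch: fit XGBRegressor with a tuned configuration, score by negative MAE.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

xgb_params = {'max_depth': 13, 'n_estimators': 100, 'learning_rate': 0.15,
              'subsample': 0.7, 'colsample_bytree': 0.6,
              'objective': 'reg:squarederror', 'seed': 0}

model = xgb.XGBRegressor(**xgb_params)
X = df[feats].values
y = df['price_value'].values            # assumed target column
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
print(np.mean(scores), np.std(scores))
```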
_____no_output_____
MIT
matrix_two/day5.ipynb
jedrzejd/dw_matrix_car
Obtaining Statistics of the RoomNav dataset: 1. Average Geodesic Distances 2. Histogram of distances vs episodes 3. Average of top-down maps 4. Length of oracle
import habitat import numpy as np import random %matplotlib inline import matplotlib.pyplot as plt splits = ['train'] data_path = '../data/datasets/roomnav/mp3d/v1/{split}/{split}.json.gz' for split in splits: avg_gd = 0 avg_ed = 0 min_gd = 10000000000 max_gd = 0 min_ed = 10000000000 m...
_____no_output_____
MIT
notebooks/dataset_statistics.ipynb
medhini/habitat-api
Validation set
embs_model = learn.model.eval() embs_model.outputEmbs = True valid_embs, _ = embs_from_model(embs_model, dls.valid) dists, inds = get_nearest(valid_embs, do_chunk(valid_embs)) valid_df=train_df[train_df.is_valid==True].copy().reset_index() valid_df = add_target_groups(valid_df) pairs = sorted_pairs(dists, inds)[:len(va...
_____no_output_____
MIT
SBert.ipynb
slawekslex/shopee
Create a regulus file from a csv. `knn` is the size of the neighborhood; the default is 100, which is usually sufficient.
import regulus gauss4 = regulus.from_csv('gauss4', knn=8) regulus.save(gauss4, filename='gauss4')
_____no_output_____
BSD-3-Clause
examples/0-gauss_to_regulus.ipynb
yarden-livnat/ipyregulus
TSG081 - Get namespaces (Kubernetes) ==================================== Description ----------- Get the Kubernetes namespaces. Steps ----- Common functions: define the helper functions used in this notebook.
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows import sys import os import re import json import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown retry_hints = {} # Output in stderr...
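The full `run` helper is truncated above; a minimal sketch of a helper with the same call shape (without the retry hints, error suggestions, and scrolling updates the real one adds) might be:

```python
# Minimal sketch of a run() helper that executes a command and prints its output.
import shlex
from subprocess import Popen, PIPE

def run(cmd, return_output=False):
    p = Popen(shlex.split(cmd), stdout=PIPE, stderr=PIPE)
    out, err = p.communicate()
    if p.returncode != 0:
        raise SystemExit(f"`{cmd}` failed:\n{err.decode()}")
    if return_output:
        return out.decode()
    print(out.decode())
```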
_____no_output_____
MIT
Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb
gantz-at-incomm/tigertoolbox
Show the Kubernetes namespaces
run('kubectl get namespace')
_____no_output_____
MIT
Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb
gantz-at-incomm/tigertoolbox
Show the Kubernetes namespaces with labels. Kubernetes namespaces containing a SQL Server Big Data Cluster have the label 'MSSQL\_CLUSTER'.
run('kubectl get namespaces -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,LABELS:.metadata.labels') print('Notebook execution complete.')
_____no_output_____
MIT
Big-Data-Clusters/CU3/Public/content/monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb
gantz-at-incomm/tigertoolbox
Introduction to R. This introduction to the R language aims at understanding how to represent and manipulate data objects as commonly found in *data science*, to provide basic summary statistics, and to build relevant graphical representations of the data. **Important notice:** Only base commands are discussed here, not th...
library(ggplot2) theme_set(theme_minimal())
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Note that you need to load the `ggplot2` package only once, at the start of your R session. Getting started: Variables. There are fundamentally two kinds of data structures in statistics-oriented programming languages: numbers and strings. Numbers can be integers or real numbers, and they are used to represent values obser...
x <- c(1, 3, 2, 5, 4)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Note that the symbol `<-` stands for the recommended assignment operator, yet it is possible to use `=` to assign some quantity to a given variable, which appears on the left hand side of the above expression. Also, the series of values is reported between round brackets, and each value is separated by a comma. From n...
length(x) typeof(x)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
It should be noted that `x` contains values stored as real numbers (`double`) while they may just be stored as integers. It is, however, possible to ask R to use true integer values:
x <- c(1L, 3L, 2L, 5L, 4L) typeof(x)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
The distinction between 32-bit integers and reals will not be that important in common analysis tasks, but it is important to keep in mind that it is sometimes useful to check whether data are represented as expected, especially in the case of categorical variables, also called 'factors' in R parlance (more on this lat...
x <- c(c(1, 2, 3), c(4, 5, 6), 7, 8)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
In passing, note that since we use the same name for our newly created variable, `x`, the old content referenced by `x` (1, 3, 2, 5, 4) is permanently lost. Once you have a vector of values, you can access each item by providing the (one-based) index of the item(s), e.g.:
x[1] x[3] x[c(1,3)] x[1:3]
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
A convenient shorthand notation for a regular sequence of integers is `start:end`, where `start` is the starting value and `end` is the last value (both included). Hence, `c(1,2,3,4)` is the same as `1:4`. This is useful when one wants to preview the first 3 or 5 values in a vector, for example. A more general function to...
seq(1, 10) seq(1, 10, by = 2) seq(0, 10, length = 5)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Updating content of a vector can be done directly by assigning a new value to one of the item:
x[3] <- NA
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
In the above statement, the third item has been assigned a missing value, which is coded as `NA` ('not available') in R. Again, there is no way to go back to the previous state of the variable, so be careful when updating the content of a variable. The presence of missing data is important to check before engaging into ...
is.na(x) which(is.na(x))
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Notice that many functions like `is.na`, or `which`, act in a vectorized way, meaning that you don't have to iterate manually over each item in the vector. Moreover, function calls can be nested one into the other. In the latter R expression, `which` is actually processing the values returned by the call to `is.na`. V...
s <- c(1, 4, 2, 3, 8) sample(s) sample(1:10, size = 5) sample(0:1, size = 10, replace = TRUE)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
In summary, `sample(1:n, size = n)` returns a permutation of the `n` elements, while `sample(1:n, size = n, replace = TRUE)` provides a bootstrap sample of the original data. Sorting: Sorting a list of values or finding the index or rank of any value in a vector are common tasks in statistical programming. It is differe...
z <- c(1, 6, 7, 2, 8, 3, 9, 4, 5) sort(z) order(z)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Data frames: Data frames are one of the core data structures to store and represent statistical data. Many routine functions that are used to load data stored in flat files or databases or to preprocess data stored in memory rely on data frames. Likewise, graphical commands such as those found in the `ggplot2` package g...
data(ToothGrowth) head(ToothGrowth) str(ToothGrowth)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
While `head` lets you preview the first 6 lines of a data frame, `str` provides a concise overview of what's available in the data frame, namely the name of each variable (column), its mode of representation, and the first 10 observations (values). The dimensions (number of lines and columns) of a data frame can be ver...
dim(ToothGrowth)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
To access any given cell in this data frame, we will use the indexing trick we used in the case of vectors, but this time we have to indicate the line number as well as the column number, or name: Hence, `ToothGrowth[i,j]` means the value located at line `i` and column `j`, while `ToothGrowth[c(a,b),j]` would mean valu...
ToothGrowth[2,1]
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Since the columns of a data frame have names, it is equivalent to use `ToothGrowth[2,1]` and `ToothGrowth[2,"len"]`. In the latter case, variable names must be quoted. Column names can be displayed using `colnames` or `names` (in the special case of data frames), while row names are available *via* `rownames`. Row name...
ToothGrowth[c(2,4),1]
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
This amounts to 'indexed selection', meaning that we need to provide the row (or column) numbers, while most of the time we are interested in criterion-based indexation, that is: "which observation fulfills a given criterion." We generally call this a 'filter'. Since most R operations are vectorized, this happens to b...
head(ToothGrowth$supp[ToothGrowth$len > 6])
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
![](assets/lang-r-base-004.png) Likewise, it is possible to combine different filters using logical operators: `&` stands for 'and' (logical conjunction) and `|` stands for 'or' (logical disjunction); logical equality is denoted as `==` while its negation reads `!=`. Here is an example where we want to select observati...
ToothGrowth[ToothGrowth$len > 10 & ToothGrowth$dose < 1,]
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
You will soon realize that for complex queries this notation becomes quite cumbersome: every variable must be prefixed by the name of the data frame, which can result in very long statements. While this is recommended practice for programming or when developing a dedicated package, it is easier to rely on `subset` in an inte...
subset(ToothGrowth, len > 10 & dose < 1)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
![](assets/lang-r-base-007.png) It is also possible to use the technique discussed in the case of vectors to sort a data frame in ascending or descending order according to one or more variables. Here is an example using the `len` variable:
head(ToothGrowth) head(ToothGrowth[order(ToothGrowth$len),])
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
The `which` function can also be used to retrieve a specific observation in a data frame, like in the following instruction:
which(ToothGrowth$len < 8)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Statistical summaries: As explained above, the `str` function is useful to check a given data structure, and individual properties of a data frame can be queried using dedicated functions, e.g. `nrow` or `ncol`. Now, to compute statistical quantities on a variable, we can use dedicated functions like `mean` (arithmetica...
mean(ToothGrowth$len) range(ToothGrowth$len) c(min(ToothGrowth$len), max(ToothGrowth$len)) table(ToothGrowth$dose) summary(ToothGrowth)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Of course, the above functions can be applied to a subset of the original data set:
mean(ToothGrowth$len[ToothGrowth$dose == 1]) table(ToothGrowth$dose[ToothGrowth$len < 20])
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Bivariate case: If we want to summarize a numerical variable according to the values that a factor variable takes, we can use `tapply` or `aggregate`. The latter expects a 'formula' describing the relation between the variables we are interested in: the response variable or outcome appears on the left-hand side (LHS), whil...
aggregate(len ~ dose, data = ToothGrowth, mean) aggregate(len ~ supp + dose, data = ToothGrowth, mean)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Note that only one function can be applied to the 'formula'. Even if it is possible to write a custom function that computes the mean and standard deviation of a variable, both results will be returned as a single column in the data frame returned by `aggregate`. There do exist other ways to perform such computations, though...
aggregate(len ~ dose, data = ToothGrowth, summary) f <- function(x) c(mean = mean(x), sd = sd(x)) aggregate(len ~ dose, data = ToothGrowth, f)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
The `table` function also works with two (or even three) variables:
table(ToothGrowth$dose, ToothGrowth$supp)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
If formulas are to be preferred, the `xtabs` function provides a convenient replacement for `table`:
xtabs(~ dose + supp, data = ToothGrowth)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
In either case, frequencies can be computed from the table of counts using `prop.table`, using the desired margin (row=1, column=2) in the bivariate case:
prop.table(table(ToothGrowth$dose)) prop.table(table(ToothGrowth$dose, ToothGrowth$supp), margin = 1)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Practical use case: The ESS survey. The `data` directory includes three [RDS](https://www.rdocumentation.org/packages/base/versions/3.5.3/topics/readRDS) files related to the [European Social Survey](https://www.europeansocialsurvey.org) (ESS). This survey first ran in 2002 (round 1), and it is actually renewed every tw...
d <- readRDS("data/ess-one-round-fr.rds") head(d[1:10]) table(d$yrbrn) summary(d$agea)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Let us focus on the following list of variables, readily available in the file `ess-one-round-29vars-fr.rds`: - `tvtot`: TV watching, total time on average weekday - `rdtot`: Radio listening, total time on average weekday - `nwsptot`: Newspaper reading, total time on average weekday - `polintr`: How interested in politics-...
d <- readRDS("data/ess-one-round-29vars-fr.rds")
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
First, let us look at the distribution of the `gndr` variable, using a simple bar diagram:
summary(d$gndr) p <- ggplot(data = d, aes(x = gndr)) + geom_bar() + labs(x = "Sex of respondant", y = "Counts") p
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Now, let's look at the distribution of age. The `ggplot2` package offers a `geom_density` function, but it is also possible to draw a line using the precomputed empirical density function, or to let `ggplot2` compute the density function itself using the `stat=` option. Here is how it looks:
summary(d$agea) p <- ggplot(data = d, aes(x = agea)) + geom_line(stat = "density", bw = 2) + labs(x = "Age of respondant") p
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
The distribution of age can also be represented as a histogram, and `ggplot2` makes it quite easy to split the display depending on the sex of the respondents, which is called a 'facet' in `ggplot2` parlance:
p <- ggplot(data = d, aes(x = agea)) + geom_histogram(binwidth = 5) + facet_grid(~ gndr) + labs(x = "Age of respondant") p
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Finally, a boxplot might also be an option, especially when we want to compare the distribution of a numerical variable across levels of a categorical variable. The `coord_flip` instruction is used to swap the X and Y axes, but keep in mind that `x=` and `y=` labels still refer to the `x=` and `y=` variable defined in ...
p <- ggplot(data = d, aes(x = gndr, y = agea)) + geom_boxplot() + coord_flip() + labs(x = NULL, y = "Age of respondants") p
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
**Sidenote:** In the above instructions, we used the following convention to build a `ggplot2` object: - we assign to a variable, say `p`, the call to `ggplot` plus any further instructions ('geom', 'scale', 'coord_', etc.) using the `+` operator; - we use only one `aes()` structure, when calling `ggplot`, so that it m...
db <- readRDS("data/ess-one-round.rds") cat("No. observations =", nrow(db)) table(db$cntry)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Since French data are (deliberately) missing from this dataset, we can append them to the above data frame as follows:
db <- rbind.data.frame(db, d) cat("No. observations =", nrow(db)) db$cntry <- factor(db$cntry) table(db$cntry)
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
Remember that it is also possible to use `summary()` with a factor variable to display a table of counts. In this particular case, we are just appending a data frame to another data frame already loaded in memory. This assumes that both share the same columns, of course. Sometimes, another common operation might be perfor...
db$id <- 1:nrow(db) db1 <- db[,c(1:10,ncol(db))] db2 <- db[,c(11:(ncol(db)-1),ncol(db))] all <- merge(db1, db2, by = "id")
_____no_output_____
BSD-3-Clause
lang-r-base.ipynb
duchesnay/dspyr
IMPORT
import os import sys import logging import subprocess import numpy as np from shutil import copy sys.path.insert(0, '/home/yongliang/third_party/merlin/src') from io_funcs.binary_io import BinaryIOCollection %load_ext autoreload %autoreload 2
The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
UTILITY
def get_file_list_of_dir(dir_path): res = [os.path.join(dir_path, f) for f in os.listdir(dir_path)] res.sort() return res def gen_file_list(dir_path, file_id_list, ext): return [os.path.join(dir_path, f + '.' + ext) for f in file_id_list] def get_file_id_list(file_list): return [os.path.splitext(o...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
CONFIGURATION
merlin_dir = '/home/yongliang/third_party/merlin' silence_pattern = ['*-pau+*', '*-sil+*'] curr_dir = os.getcwd() # hardcoded nit_dir = os.path.join(curr_dir, 'nit') wav_dir = os.path.join(nit_dir, 'wav2') exp_dir = os.path.join(curr_dir, 'exp') if not os.path.exists(exp_dir): os.makedirs(exp_dir) lab_dir = os.path...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Prepare label files
hmm_dir = os.path.join(exp_dir, 'hmm') full_dir = os.path.join(nit_dir, 'full') mono_dir = os.path.join(nit_dir, 'mono') phones = os.path.join(nit_dir, 'monophone') from src.forced_alignment import ForcedAlignment aligner = ForcedAlignment(hmm_dir, wav_dir, full_dir, mono_dir, phones, lab_dir) aligner.prepare_training...
---make file_id_list.scp: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/file_id_list.scp ---make copy.scp: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/config/copy.scp ---mfcc extraction at: /home/yongliang/third_party/merlin/egs/singing_synthesis/s3/exp/hmm/mfc ------make c...
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Feature extraction
feat_dir = os.path.join(exp_dir, 'feat') lf0_dir = os.path.join(feat_dir, 'lf0') bap_dir = os.path.join(feat_dir, 'bap') mgc_dir = os.path.join(feat_dir, 'mgc') sample_rate = 16000 from src.feature_extraction import FeatureExtractor feature_extractor = FeatureExtractor(wav_dir, sample_rate, feat_dir) feature_extractor....
nitech_jp_song070_f001_003 Running REAPER f0 extraction...
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Duration model Model configuration
# duration model related # hardcoded dur_lab_dim = 368 dur_cmp_dim = 5 dur_train_file_number = 27 dur_valid_file_number = 1 dur_test_file_number = 1 dur_mdl_dir = os.path.join(exp_dir, 'duration_model') if not os.path.exists(dur_mdl_dir): os.makedirs(dur_mdl_dir) dur_tmp_dir = os.path.join(dur_mdl_dir, 'tmp') if...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Feature extraction from label files
ques_dir = os.path.join(curr_dir, 'ques') question = os.path.join(ques_dir, 'general') from frontend.label_normalisation import HTSLabelNormalisation dur_lab_normaliser = HTSLabelNormalisation(question, add_frame_features=False, subphone_feats='none') dur_lab_normaliser.perform_normalisation(orig_lab_file_list, dur_lab...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Remove silence phone
from frontend.silence_remover import SilenceRemover dur_silence_remover = SilenceRemover(n_cmp=dur_lab_dim, silence_pattern=silence_pattern, remove_frame_features=False, subphone_feats='none') dur_silence_remover.remove_silence(dur_lab_file_list, orig_lab_file_list, dur_lab_no_silence_file_list) _, num_frame = io_funcs...
[[ 0. 0. 0. ... -1. -1. -1.] [ 0. 0. 0. ... -1. -1. -1.] [ 0. 0. 0. ... 192. 0. 100.] ... [ 0. 0. 0. ... 72. 57. 43.] [ 0. 0. 0. ... -1. -1. -1.] [ 0. 0. 0. ... -1. -1. -1.]]
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Input feature normalization
from frontend.min_max_norm import MinMaxNormalisation dur_min_max_normaliser = MinMaxNormalisation(feature_dimension=dur_lab_dim, min_value=0.01, max_value=0.99) dur_min_max_normaliser.find_min_max_values(dur_lab_no_silence_file_list[0: dur_train_file_number]) dur_min_max_normaliser.normalise_data(dur_lab_no_silence_fi...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Compute duration from label files
dur_lab_normaliser.prepare_dur_data(orig_lab_file_list, dur_dur_file_list, feature_type='numerical') feat, num_frame = io_funcs.load_binary_file_frame(dur_dur_file_list[0], 5) print(feat.shape) print(num_frame) print(feat[0:10, :]) dur_dur_file_list[2]
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Make output features for duration model
delta_win = [-0.5, 0.0, 0.5] acc_win = [1.0, -2.0, 1.0] """ "in" & "out" refer to the dimensions before and after feature composition: e.g. if dynamic features are computed, the "out" dimension is 3 times the "in" dimension; they do not mean the input and output of the network """ dur_in_dimension_dict = {'dur': 5} dur_out_dimension_dict = {'dur': 5} dur_in...
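To illustrate why the "out" dimension is three times the "in" dimension, here is a hedged sketch of stacking static, delta, and delta-delta features using the windows defined above (an illustration only, not Merlin's actual implementation):

```python
# Sketch: append delta and delta-delta (acceleration) features computed by
# convolving each static parameter track with the given windows.
import numpy as np

def add_dynamic_features(static, delta_win=(-0.5, 0.0, 0.5), acc_win=(1.0, -2.0, 1.0)):
    # static: (num_frames, dim) array of e.g. lf0 / mgc / bap parameters
    def apply_window(win):
        return np.stack([np.convolve(static[:, d], win[::-1], mode='same')
                         for d in range(static.shape[1])], axis=1)
    return np.hstack([static, apply_window(delta_win), apply_window(acc_win)])

# add_dynamic_features(np.random.randn(100, 5)).shape  # -> (100, 15)
```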
(131, 5) 131 [[ 13. 72. 166. 2. 54.] [ 9. 47. 1. 54. 32.] [ 1. 18. 4. 4. 4.] [ 3. 100. 1. 2. 6.] [ 4. 5. 3. 3. 3.] [ 1. 32. 1. 1. 3.] [ 5. 3. 2. 2. 3.] [ 2. 26. 1. 5. 8.] [ 2. 3. 3. 7. 6.] [ 6. 77. 17. 5. 5.]]
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Remove silence phone
dur_silence_remover = SilenceRemover(n_cmp = dur_cmp_dim, silence_pattern = silence_pattern, remove_frame_features = False, subphone_feats = 'none') dur_silence_remover.remove_silence(dur_cmp_file_list, orig_lab_file_list, dur_cmp_no_silence_file_list) _, num_frame = io_funcs.load_binary_file_frame(dur_cmp_file_list[2...
131 124
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Output feature (duration) normalization
from frontend.mean_variance_norm import MeanVarianceNorm dur_mvn_normaliser = MeanVarianceNorm(feature_dimension=dur_cmp_dim) dur_global_mean_vector = dur_mvn_normaliser.compute_mean(dur_cmp_no_silence_file_list[0: dur_train_file_number], 0, dur_cmp_dim) dur_global_std_vector = dur_mvn_normaliser.compute_std(dur_cmp_no...
[[ 4.77816306 32.451236 30.33884862 25.81974051 6.89332861]] [[ 22.830841 1053.0828 920.4457 666.659 47.51798 ]]
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Model training
import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.data as data from torch.autograd import Variable import math import matplotlib.pyplot as plt class DurationDataset(data.Dataset): def __init__(self, lab_file_list, cmp_file_list, lab_dim=368, cmp_dim=5): assert(len(lab_fi...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Test
print('files for test: ') print(dur_lab_no_silence_norm_file_list[-1]) print(dur_cmp_no_silence_norm_file_list[-1]) print(dur_cmp_no_silence_file_list[-1]) print('*' * 20) input_lab, num_input_frame = io_funcs.load_binary_file_frame(dur_lab_no_silence_norm_file_list[-1], 368) target_cmp, num_target_frame = io_funcs...
files for test:
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Acoustic model Model configuration
# acoustic model related # hardcoded acou_lab_dim = 377 acou_cmp_dim = 187 acou_train_file_number = 27 acou_valid_file_number = 1 acou_test_file_number = 1 acou_mdl_dir = os.path.join(exp_dir, 'acoustic_model') if not os.path.exists(acou_mdl_dir): os.makedirs(acou_mdl_dir) acou_inter_dir = os.path.join(acou_...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Feature extraction from label files
acou_lab_normaliser = HTSLabelNormalisation(question, add_frame_features=True, subphone_feats='full') acou_lab_normaliser.perform_normalisation(orig_lab_file_list, acou_lab_file_list)
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Remove silence phone
acou_silence_remover = SilenceRemover(n_cmp=acou_lab_dim, silence_pattern=silence_pattern, remove_frame_features=True, subphone_feats='full') acou_silence_remover.remove_silence(acou_lab_file_list, orig_lab_file_list, acou_lab_no_silence_file_list) _, num_frame = io_funcs.load_binary_file_frame(acou_lab_file_list[2], 3...
[[0. 0. 0. ... 0.04234528 1. 0.00325733] [0. 0. 0. ... 0.04234528 0.99674267 0.00651466] [0. 0. 0. ... 0.04234528 0.99348533 0.00977199] ... [0. 0. 0. ... 0.2925373 0.00895522 0.9940299 ] [0. 0. ...
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Input feature normalization
acou_min_max_normaliser = MinMaxNormalisation(feature_dimension=acou_lab_dim, min_value=0.01, max_value=0.99) acou_min_max_normaliser.find_min_max_values(acou_lab_no_silence_file_list[0: acou_train_file_number]) acou_min_max_normaliser.normalise_data(acou_lab_no_silence_file_list, acou_lab_no_silence_norm_file_list) ac...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Make output features for acoustic model
""" "in" & "out" just mean before & after feature composition like if we compute dynamic features, dimensions of out will be 3 times of in not really mean in & out of the network """ acou_in_dimension_dict = {'bap': 1, 'mgc': 60, 'lf0': 1} acou_out_dimension_dict = {'bap': 3, 'vuv': 1, 'mgc': 180, 'lf0': 3} # acou_...
(131, 5) [[ 13. 72. 166. 2. 54.] [ 9. 47. 1. 54. 32.] [ 1. 18. 4. 4. 4.] [ 3. 100. 1. 2. 6.] [ 4. 5. 3. 3. 3.] [ 1. 32. 1. 1. 3.] [ 5. 3. 2. 2. 3.] [ 2. 26. 1. 5. 8.] [ 2. 3. 3. 7. 6.] [ 6. 77. 17. 5. 5.]] (8640, 187) [[ 0. 0. ...
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Remove silence phone
acou_silence_remover = SilenceRemover(n_cmp = acou_cmp_dim, silence_pattern = silence_pattern, remove_frame_features = True, subphone_feats = 'full') acou_silence_remover.remove_silence(acou_cmp_file_list, orig_lab_file_list, acou_cmp_no_silence_file_list) _, num_frame = io_funcs.load_binary_file_frame(acou_cmp_file_l...
8640 7201
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Output feature (dim 187) normalization
acou_mvn_normaliser = MeanVarianceNorm(feature_dimension=acou_cmp_dim) acou_global_mean_vector = acou_mvn_normaliser.compute_mean(acou_cmp_no_silence_file_list[0: acou_train_file_number], 0, acou_cmp_dim) acou_global_std_vector = acou_mvn_normaliser.compute_std(acou_cmp_no_silence_file_list[0: acou_train_file_number], ...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis
Model training
# TODO to implement cross-validation, print something here to see whether could do normalisation in Pytorch batch_size = int(acou_train_file_number) # batch_size = 1 print('batch_size: ' + str(batch_size)) acou_train_set = DurationDataset(acou_lab_no_silence_norm_file_list[:10], acou_cm...
_____no_output_____
Apache-2.0
egs/singing_synthesis/s3/run.ipynb
YongliangHe/SingingVoiceSynthesis