Now for some Python.
LINES = 3
COLUMNS = 4

def foo(row=1, column=2):
    buf = [
        ['@' for _ in range(COLUMNS)]
        for _ in range(LINES)
    ]
    blank(buf, row, column)
    for row in buf:
        print(''.join(row).replace(' ', '.'))

def blank(buf, row, column):
    for row in range(row, LINES):
        for column in range(column, COLUMNS):
            buf[row][column] = ' '
        column = 0

foo()

def blank_to_end_of_row(buf, row, column):
    for column in range(column, COLUMNS):
        buf[row][column] = ' '

def blank_row(buf, row):
    blank_to_end_of_row(buf, row, 0)

def blank(buf, row, column):
    blank_to_end_of_row(buf, row, column)
    row += 1
    for row in range(row, LINES):
        blank_row(buf, row)

foo()

def blank_to_end_of_row(buf, row, column):
    for column in range(column, COLUMNS):
        buf[row][column] = ' '

def blank(buf, row, column):
    blank_to_end_of_row(buf, row, column)
    row += 1
    for row in range(row, LINES):
        blank_to_end_of_row(buf, row, 0)

foo()

%%script 20170706_c_foo
/* this is wrong: fails to clear beginning of following lines */
int blank(char buf[LINES][COLUMNS], int row_arg, int column_arg)
{
    int row;
    int column;

    for (row = row_arg; row < LINES; row++)
        for (column = column_arg; column < COLUMNS; column++)
            buf[row][column] = ' ';
}
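For contrast, a corrected C version that mirrors the final Python refactoring might look like this (a sketch, assuming the same LINES and COLUMNS macros as in the %%script cell):

/* clear to the end of the current row, then blank every following row entirely */
void blank(char buf[LINES][COLUMNS], int row, int column)
{
    int r;
    int c;

    for (c = column; c < COLUMNS; c++)
        buf[row][c] = ' ';
    for (r = row + 1; r < LINES; r++)
        for (c = 0; c < COLUMNS; c++)
            buf[r][c] = ' ';
}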
20170706-dojo-clear-to-end-of-table.ipynb
james-prior/cohpy
mit
Let's look at the first 5 rows of the data.
df.head()
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
How do we find this post on Facebook? We can use status_link to locate the original post.
df['status_link'][0]
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Before cleaning there are 5,234 rows in total.
len(df)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Filter these rows out and rebuild the index. The reason is that the built-in filtering keeps the original index unchanged, so we reset it ourselves.
# reset_index(drop=True) actually renumbers the rows; reindex() with no arguments is a no-op
df = df[(df['num_reactions'] != 0) & (df['status_message'].notnull())].reset_index(drop=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
After cleaning, 5,061 rows remain; 173 rows were filtered out in total.
len(df)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Handle dates and add weekday and hour columns. First handle the date: since it is read in as a string, we convert it to a datetime object, from which we can then extract the weekday (day of posting) and the hour (time of day of posting).
df['datetime'] = df['status_published'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
df['weekday'] = df['datetime'].apply(lambda x: x.weekday_name)
df['hour'] = df['datetime'].apply(lambda x: x.hour)
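The same conversion can be done in vectorized form (a sketch; pd.to_datetime parses this format directly and the Series .dt accessor exposes the same fields):

df['datetime'] = pd.to_datetime(df['status_published'], format='%Y-%m-%d %H:%M:%S')
df['weekday'] = df['datetime'].dt.weekday_name
df['hour'] = df['datetime'].dt.hour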
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Trend of reactions over time. Facebook changed in 2016: besides likes there are now loves, wows, hahas, sads, and angrys, and reactions is the sum of all of them, i.e. how many responses a post received in total. The x-axis is time and the y-axis is the count of likes, loves, wows, hahas, sads, and angrys, so each trend over time is visible. 2017 clearly shows effort put into running the page, while 2014 appears to have a quiet stretch.
df.plot(x='datetime', y=['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys'] , figsize=(12,8))
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Trend of likes, comments, and shares over time. The x-axis is time and the y-axis is the count of reactions, comments, and shares, showing how each evolves over time. In 2016 the green line rises above the others several times, meaning comments outnumbered reactions and shares; these were presumably comment-to-win giveaway posts.
df.plot(x='datetime', y=['num_reactions', 'num_comments', 'num_shares'], figsize=(12,8))
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Posting-frequency statistics. By computing the interval between consecutive posts we can see how frequently the page posts. Although the mean interval is about 12 hours, the quartiles tell another story: the first quartile is half an hour and the third quartile is five hours, so posting is very frequent, several posts a day. The large mean and standard deviation are presumably driven by the low posting frequency in the page's early days, i.e. skewed by early outliers.
import datetime

delta_datetime = df['datetime'].shift(1) - df['datetime']
delta_datetime_df = pd.Series(delta_datetime).describe().apply(str)
delta_datetime_df = delta_datetime_df.to_frame(name='frequency of posts')
delta_datetime_df
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Handling weekday and hour. We rebuild the DataFrame for two reasons: first, if a slot has zero posts we want to fill in 0; second, the keys must be in order so the plot runs Monday through Sunday rather than in arbitrary order.
def weekday(d):
    list_key = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    list_value = []
    for one in list_key:
        if one in d.keys():
            list_value.append(d[one])
        else:
            list_value.append(0)
    df = pd.DataFrame(index=list_key, data={'weekday': list_value}).reset_index()
    return df

df_weekday = weekday(dict(df['weekday'].value_counts()))
df_weekday
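The zero-filling and fixed ordering can also be obtained directly from pandas (a sketch; Series.reindex over the ordered key list fills the missing slots with 0):

df_weekday = (df['weekday'].value_counts()
              .reindex(['Monday', 'Tuesday', 'Wednesday', 'Thursday',
                        'Friday', 'Saturday', 'Sunday'], fill_value=0)
              .rename('weekday').to_frame().reset_index())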
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Bar chart of post counts by weekday. Sunday appears to have the fewest posts.
sns.barplot(x='index', y='weekday', data=df_weekday)

def hour(d):
    list_key = list(range(24))  # hours 0-23
    list_value = []
    for one in list_key:
        if one in d.keys():
            list_value.append(d[one])
        else:
            list_value.append(0)
    df = pd.DataFrame(index=list_key, data={'hour': list_value}).reset_index()
    return df

df_hour = hour(dict(df['hour'].value_counts()))
df_hour
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Bar chart of post counts by hour of day. Looking at this posting curve, being the page admin is hard work: posts go out late at night, in the small hours, and first thing in the morning.
ax = sns.barplot(x='index', y='hour', data=df_hour)

df_status_type = df['status_type'].value_counts().to_frame(name='status_type')
df_status_type
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Bar chart of post counts by post type. The favourite format is photos, followed by shared links and videos.
sns.barplot(x='index', y='status_type', data = df_status_type.reset_index())
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Scatter plot by post type. Each point is one post, showing its type and how many reactions it received; photo posts are densest in the 0-50000 reaction range.
sns.stripplot(x="status_type", y="num_reactions", data=df, jitter=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Scatter plot by weekday. Each point is one post, showing which weekday it was posted and how many reactions it received.
sns.stripplot(x="weekday", y="num_reactions", data=df, jitter=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Scatter plot by hour of day. Each point is one post, showing the hour it was posted and how many reactions it received.
sns.stripplot(x="hour", y="num_reactions", data=df, jitter=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Histograms of num_reactions for each post type.
g = sns.FacetGrid(df, col="status_type")
g.map(plt.hist, "num_reactions")
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
To look at the relationships between the different reactions we need the Pearson correlation, which measures the strength of the linear relationship between two variables. From this plot we can see that the correlations between reaction types are all fairly low.
df_reaction = df[['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys']]
colormap = plt.cm.viridis
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(df_reaction.astype(float).corr(), linewidths=0.1, vmax=1.0,
            square=True, cmap=colormap, linecolor='white', annot=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Shares and comments are more strongly correlated: the correlation between the numbers of commenters and sharers is higher than between the numbers of commenters and reactors.
df_tmp = df[['num_reactions', 'num_comments', 'num_shares']]
colormap = plt.cm.viridis
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(df_tmp.astype(float).corr(), linewidths=0.1, vmax=1.0,
            square=True, cmap=colormap, linecolor='white', annot=True)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Analysing the text of posts requires jieba! jieba is a powerful Chinese word-segmentation library that can analyse whatever text you feed it; wordcloud is a package for drawing word clouds.
import jieba
import jieba.analyse
import operator
from wordcloud import WordCloud

# the traditional-Chinese dictionary ships with the jieba package
jieba.set_dictionary('/home/wy/anaconda3/envs/python3/lib/python3.6/site-packages/jieba/extra_dict/dict.txt.big')
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
A quick introduction to jieba. The usual segmentation API is jieba.cut, but here we use jieba.analyse.extract_tags (TF-IDF-based keyword extraction). TF-IDF scores how important a word is to one document within a collection, so unimportant words get filtered out. See the following example:
list(df['status_message'])[99]

for one in jieba.cut(list(df['status_message'])[99]):
    print(one)

jieba.analyse.extract_tags(list(df['status_message'])[99], topK=120)
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
We can therefore run jieba's keyword extraction over every post to pull out the important words, count how often each word appears, and draw the result as a word cloud with WordCloud.
def jieba_extract(message_list):
    word_count = {}
    for message in message_list:
        # keyword extraction can fail on some messages; collect the bad ones to inspect later
        seg_list = jieba.analyse.extract_tags(message, topK=120)
        for seg in seg_list:
            if not seg in word_count:
                word_count[seg] = 1
            else:
                word_count[seg] += 1
    sorted_word_count = sorted(word_count.items(), key=operator.itemgetter(1))
    sorted_word_count.reverse()
    return sorted_word_count

sorted_word_count = jieba_extract(list(df['status_message']))
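The counting loop can also be written with collections.Counter (a sketch; most_common() returns the same (word, count) pairs sorted in descending order):

from collections import Counter

def jieba_extract_counter(message_list):
    word_count = Counter()
    for message in message_list:
        # update() adds one count per extracted keyword
        word_count.update(jieba.analyse.extract_tags(message, topK=120))
    return word_count.most_common()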
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Let's look at the ten words that appear most often in posts.
print (sorted_word_count[:10])
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
The word cloud appears! There is plenty to tweak in a word cloud; you can even fill an image shape with words, see the wordcloud docs. 'http', 'com', and 'www' show up because every post carries a link, so the link URLs got tokenised as well.
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600,
                      font_path=tpath).fit_words(sorted_word_count)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
So we drop the 30 highest-frequency words and draw the word cloud again, using sorted_word_count[30:].
tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600,
                      font_path=tpath).fit_words(sorted_word_count[30:])
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Comments
# read in the comment csv
c_path = path = 'comment/' + page_id + '_comment.csv'
c_df = pd.read_csv(c_path)
c_df.head()

c_df = c_df[c_df['comment_message'].notnull()].reset_index(drop=True)
sorted_comment_message = jieba_extract(list(c_df['comment_message']))
print(sorted_comment_message[:10])

tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600,
                      font_path=tpath).fit_words(sorted_comment_message)
plt.figure()
plt.imshow(wordcloud)
plt.axis("off")
plt.show()

c_df = c_df[c_df['comment_author'].notnull()].reset_index(drop=True)

def word_count(data_list):
    d = {}
    for one in data_list:
        if one not in d:
            d[one] = 1
        else:
            d[one] += 1
    return d

d = word_count(list(c_df['comment_author']))
comment_authors = [(k, d[k]) for k in sorted(d, key=d.get, reverse=True)]
print(comment_authors[:10])

tpath = '/home/wy/font/NotoSansCJKtc-Black.otf'
wordcloud = WordCloud(max_font_size=120, relative_scaling=.1, width=900, height=600,
                      font_path=tpath).fit_words(comment_authors)
plt.figure()
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Exporting an Excel report. Now that we have done all this analysis, can we export it as an Excel report? For writing Excel from Python we rely on xlsxwriter!
import xlsxwriter
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
First we use pandas' describe (generate various summary statistics) to compute each column's mean, median, standard deviation, quartiles, and so on.
df_num_reactions = df['num_reactions'].describe().to_frame(name='reactions')
df_num_reactions
df_num_comments = df['num_comments'].describe().to_frame(name='comments')
df_num_comments
df_num_shares = df['num_shares'].describe().to_frame(name='shares')
df_num_shares
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Then we write these DataFrames into an xlsx file. Official docs: Working with Python Pandas and XlsxWriter. Positioning in xlsxwriter: when writing to Excel, you need to specify where to write.
First, writing a DataFrame into the xlsx:
df_num_reactions.to_excel(writer, sheet_name=page_id, startcol=0, startrow=0)
startcol, startrow: coordinate-like placement; sheet_name: the worksheet.
Second, drawing a built-in chart:
chart1.add_series({
    'categories': '='+page_id+'!$A$13:$A$18',
    'values': '='+page_id+'!$B$13:$B$18',
})
categories: the names come from A13-A18; values: the values come from B13-B18.
Third, inserting an image:
worksheet.insert_image('L12', 'image/image1.png')
'L12' is the insertion position and 'image/image1.png' is the image path.
# set the output path
excel_path = 'excel/' + page_id + '_analysis.xlsx'
writer = pd.ExcelWriter(excel_path, engine='xlsxwriter')

# write the DataFrames into the xlsx
df_num_reactions.to_excel(writer, sheet_name=page_id, startcol=0, startrow=0)
df_num_comments.to_excel(writer, sheet_name=page_id, startcol=3, startrow=0)
df_num_shares.to_excel(writer, sheet_name=page_id, startcol=6, startrow=0)
delta_datetime_df.to_excel(writer, sheet_name=page_id, startcol=9, startrow=0)
df_status_type.to_excel(writer, sheet_name=page_id, startcol=0, startrow=11)
df_weekday.set_index('index').to_excel(writer, sheet_name=page_id, startcol=0, startrow=25)
df_hour.set_index('index').to_excel(writer, sheet_name=page_id, startcol=0, startrow=39)

# draw the built-in column charts
workbook = writer.book

# posts-by-type bar chart
chart1 = workbook.add_chart({'type': 'column'})
chart1.add_series({
    'categories': '=' + page_id + '!$A$13:$A$18',
    'values': '=' + page_id + '!$B$13:$B$18',
})
chart1.set_title({'name': 'Posts by type'})
chart1.set_x_axis({'name': 'status_type'})
chart1.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D12', chart1)

# posts-by-weekday bar chart
chart2 = workbook.add_chart({'type': 'column'})
chart2.add_series({
    'categories': '=' + page_id + '!$A$27:$A$33',
    'values': '=' + page_id + '!$B$27:$B$33',
})
chart2.set_title({'name': 'Posts by weekday'})
chart2.set_x_axis({'name': 'weekday'})
chart2.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D26', chart2)

# posts-by-hour bar chart
chart3 = workbook.add_chart({'type': 'column'})
chart3.add_series({
    'categories': '=' + page_id + '!$A$41:$A$64',
    'values': '=' + page_id + '!$B$41:$B$64',
})
chart3.set_title({'name': 'Posts by hour'})
chart3.set_x_axis({'name': 'hour'})
chart3.set_y_axis({'name': 'count'})
worksheet = writer.sheets[page_id]
worksheet.insert_chart('D40', chart3)

# example of inserting an image: draw the plot, save it, then insert it into the xlsx
df.plot(x='datetime', y=['num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys'])
plt.savefig('image/image1.png')
worksheet.insert_image('L12', 'image/image1.png')

# flush everything to disk so the xlsx is actually written
writer.save()
Facebook粉絲頁分析三部曲-分析和輸出報表篇.ipynb
wutienyang/facebook_fanpage_analysis
mit
Define the output control to save heads and interface every 50 steps, and define the pcg solver with default arguments.
spd = {}
for istp in xrange(49, nstp+1, 50):
    spd[(0, istp)] = ['save head', 'print budget']
    spd[(0, istp+1)] = []
oc = mf.ModflowOc(ml, stress_period_data=spd)
pcg = mf.ModflowPcg(ml)
original_libraries/flopy-master/examples/Notebooks/swiex1.ipynb
mjasher/gac
gpl-2.0
Load the head and zeta data from the file
#--read model heads
hfile = fu.HeadFile(os.path.join(ml.model_ws, modelname+'.hds'))
head = hfile.get_alldata()
#--read model zeta
zfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
    zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
original_libraries/flopy-master/examples/Notebooks/swiex1.ipynb
mjasher/gac
gpl-2.0
Make a graph and add the solution of Wilson and Sa da Costa
plt.figure(figsize=(16,6))
# define x-values of xcells and plot interface
x = np.arange(0, ncol*delr, delr) + delr/2.
label = ['SWI2','_','_','_']  # labels with an underscore are not added to legend
for i in range(4):
    zt = np.ma.masked_outside(zeta[i,0,0,:], -39.99999, -0.00001)
    plt.plot(x, zt, 'r-', lw=1, zorder=10, label=label[i])
# Data for the Wilson - Sa da Costa solution
k = 2.0
n = 0.2
nu = 0.025
H = 40.0
tzero = H * n / (k * nu) / 4.0
Ltoe = np.zeros(4)
v = 0.125
t = np.arange(100, 500, 100)
label = ['W & SD','_','_','_']  # labels with an underscore are not added to legend
for i in range(4):
    Ltoe[i] = H * np.sqrt(k * nu * (t[i] + tzero) / n / H)
    plt.plot([100 - Ltoe[i] + v * t[i], 100 + Ltoe[i] + v * t[i]], [0, -40],
             '0.75', lw=8, zorder=0, label=label[i])
# Scale figure and add legend
plt.axis('scaled')
plt.xlim(0, 250)
plt.ylim(-40, 0)
plt.legend(loc='best');
original_libraries/flopy-master/examples/Notebooks/swiex1.ipynb
mjasher/gac
gpl-2.0
Convert zeta surfaces to relative seawater concentrations
X, Y = np.meshgrid(x, [0, -40])
zc = fp.SwiConcentration(model=ml)
conc = zc.calc_conc(zeta={0: zeta[3,:,:,:]}) / 0.025
print conc[0, 0, :]
v = np.vstack((conc[0, 0, :], conc[0, 0, :]))
plt.imshow(v, extent=[0, 250, -40, 0], cmap='Reds')
cb = plt.colorbar(orientation='horizontal')
cb.set_label('percent seawater');
plt.contour(X, Y, v, [0.75, 0.5, 0.25], linewidths=[2, 1.5, 1], colors='black');
original_libraries/flopy-master/examples/Notebooks/swiex1.ipynb
mjasher/gac
gpl-2.0
In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represents the microstructure of injected plates quite well. The next step is to discretize the ODF into phases. The file containing the initial 2-phase microstructure contains the following information:
NPhases_file = dir + '/data/Nellipsoids0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]

umat_name = 'MIMTN'  # 5-character code for Mori-Tanaka homogenization of composites with a matrix and ellipsoidal reinforcements
nstatev = 0
rho = 1.12  # overall density of the material
c_p = 1.64  # overall specific heat capacity
nphases = 2  # number of phases
num_file = 0  # number of the file that contains the subphases
int1 = 20
int2 = 20
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
props = np.array([nphases, num_file, int1, int2, 0])

path_data = 'data'
path_results = 'results'
Nfile_init = 'Nellipsoids0.dat'
Nfile_disc = 'Nellipsoids1.dat'
nphases_rve = 36
num_phase_disc = 1
sim.ODF_discretization(nphases_rve, num_phase_disc, 0., 180., umat_name, props,
                       path_data, peak_file, Nfile_init, Nfile_disc, 1)

NPhases_file = dir + '/data/Nellipsoids1.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
# We plot here the five first phases
NPhases[:5]

# Plot the concentration and the angle
c, angle = np.loadtxt(NPhases_file, usecols=(4,5), skiprows=2, unpack=True)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# the histogram of the data
xs = np.arange(0, 180, 5)
rects1 = ax1.bar(xs, c, width=5, color='r', align='center')
ax2.plot(x, y, 'b-')  # x and y are assumed to come from the earlier ODF plot cell
ax1.set_xlabel('X data')
ax1.set_ylabel('Y1 data', color='g')
ax2.set_ylabel('Y2 data', color='b')
ax1.set_ylim([0, 0.025])
ax2.set_ylim([0, 0.25])
plt.show()
#plt.grid(True)
#plt.plot(angle,c, c='black')
plt.show()

# Run the simulation
pathfile = 'path.txt'
nphases = 37  # number of phases
num_file = 1  # number of the file that contains the subphases
props = np.array([nphases, num_file, int1, int2])
outputfile = 'results_MTN.txt'
sim.solver(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve,
           path_data, path_results, pathfile, outputfile)

fig = plt.figure()
outputfile_macro = dir + '/' + path_results + '/results_MTN_global-0.txt'
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(
    outputfile_macro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11, s11, c='black')
for i in range(8, 12):
    outputfile_micro = dir + '/' + path_results + '/results_MTN_global-0-' + str(i) + '.txt'
    e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(
        outputfile_micro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
    plt.grid(True)
    plt.plot(e11, s11, c='red')
plt.xlabel('Strain')
plt.ylabel('Stress (MPa)')
plt.show()
Examples/ODF/.ipynb_checkpoints/ODF-checkpoint.ipynb
chemiskyy/simmit
gpl-3.0
There are two main ways that outliers can affect Prophet forecasts. Here we make a forecast on the logged Wikipedia visits to the R page from before, but with a block of bad data:
%%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_wp_log_R_outliers1.csv')
m <- prophet(df)
future <- make_future_dataframe(m, periods = 1096)
forecast <- predict(m, future)
plot(m, forecast)

df = pd.read_csv('../examples/example_wp_log_R_outliers1.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=1096)
forecast = m.predict(future)
fig = m.plot(forecast)
notebooks/outliers.ipynb
facebook/prophet
mit
The trend forecast seems reasonable, but the uncertainty intervals seem way too wide. Prophet is able to handle the outliers in the history, but only by fitting them with trend changes. The uncertainty model then expects future trend changes of similar magnitude. The best way to handle outliers is to remove them - Prophet has no problem with missing data. If you set their values to NA in the history but leave the dates in future, then Prophet will give you a prediction for their values.
%%R -w 10 -h 6 -u in
outliers <- (as.Date(df$ds) > as.Date('2010-01-01')
             & as.Date(df$ds) < as.Date('2011-01-01'))
df$y[outliers] = NA
m <- prophet(df)
forecast <- predict(m, future)
plot(m, forecast)

df.loc[(df['ds'] > '2010-01-01') & (df['ds'] < '2011-01-01'), 'y'] = None
model = Prophet().fit(df)
fig = model.plot(model.predict(future))
notebooks/outliers.ipynb
facebook/prophet
mit
In the above example the outliers messed up the uncertainty estimation but did not impact the main forecast yhat. This isn't always the case, as in this example with added outliers:
%%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_wp_log_R_outliers2.csv')
m <- prophet(df)
future <- make_future_dataframe(m, periods = 1096)
forecast <- predict(m, future)
plot(m, forecast)

df = pd.read_csv('../examples/example_wp_log_R_outliers2.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=1096)
forecast = m.predict(future)
fig = m.plot(forecast)
notebooks/outliers.ipynb
facebook/prophet
mit
Here a group of extreme outliers in June 2015 mess up the seasonality estimate, so their effect reverberates into the future forever. Again the right approach is to remove them:
%%R -w 10 -h 6 -u in
outliers <- (as.Date(df$ds) > as.Date('2015-06-01')
             & as.Date(df$ds) < as.Date('2015-06-30'))
df$y[outliers] = NA
m <- prophet(df)
forecast <- predict(m, future)
plot(m, forecast)

df.loc[(df['ds'] > '2015-06-01') & (df['ds'] < '2015-06-30'), 'y'] = None
m = Prophet().fit(df)
fig = m.plot(m.predict(future))
notebooks/outliers.ipynb
facebook/prophet
mit
See the Auspex example notebooks on how to configure a channel library. For the examples in this notebook, we will use the channel library generated by running ex1_QGL_basics.ipynb in this same directory.
cl = ChannelLibrary("example")
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
q1 was already defined in the example channel library, and we load it here
q = cl["q1"]
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
Keep in mind that recreating q1 at this stage would give a warning and would try to update the existing qubit to reflect the newly requested parameters, i.e.
q = cl.new_qubit("q1")
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
Sequences: pulsed spectroscopy sequence. A single sequence with a long saturating pulse to find qubit transitions. The specOn option turns the saturation pulse on/off, as this sequence is also useful with just a readout pulse for cavity spectroscopy.
plot_pulse_files(PulsedSpec(q, specOn=True)) #with a Pi/saturation pulse if specOn = True
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
Rabi nutation sequences. For spectroscopy or calibration purposes we can perform a variable nutation-angle experiment by varying either the amplitude or the width of the excitation pulse.
plot_pulse_files(RabiAmp(q, np.linspace(0, 1, 101)))
plot_pulse_files(RabiWidth(q, np.arange(40e-9, 1e-6, 10e-9)))
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
Coherence time measurements: T$_1$. T$_1$ can be measured with an inversion-recovery variable-delay experiment. The sequence helper function tacks on calibration experiments that can be controlled with the calRepeats keyword argument.
plot_pulse_files(InversionRecovery(q,np.arange(100e-9,10e-6,100e-9), calRepeats=2))
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
T$_2$. T$_2^*$ is usually characterized with a 90-delay-90 experiment, whereas the Hahn echo removes the low-frequency noise that causes incoherent loss and so recovers something closer to the true T$_2$. The delay parameter is the pulse spacing, so the total effective delay in the Hahn echo will be twice this value plus the 180 pulse length.
plot_pulse_files(Ramsey(q, np.arange(100e-9, 10e-6, 100e-9)))
plot_pulse_files(HahnEcho(q, np.arange(100e-9, 10e-6, 100e-9)))
doc/ex2_single-qubit_sequences.ipynb
BBN-Q/QGL
apache-2.0
Create a regex for removing punctuation from the text file
regex = re.compile('[%s]' % re.escape(string.punctuation))
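A quick check of what the pattern does (a hypothetical string; regex.sub deletes every punctuation character):

sample = "Hello, world! (test)"
print(regex.sub('', sample))  # -> Hello world test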
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Create a PySpark context
sc = pyspark.SparkContext()
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Generate a SQL Context for our files
sqlContext = SQLContext(sc)
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
A function to read the string and generate the lemma dictionary
def buildLemmaDict(x):
    lemmas = x.split(",")
    lemmaWords = []
    for lemma in lemmas:
        if lemma != "":
            lemmaWords.append(lemma)
    lemmaDic = [(lemmas[0], list(set(lemmaWords)))]
    return lemmaDic
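For instance, a hypothetical lemmatizer row maps a form to its lemma candidates (note that list(set(...)) does not guarantee the order of the value list):

buildLemmaDict("dixit,dico")  # -> [('dixit', ['dixit', 'dico'])], value order may vary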
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Read the lemmatizer CSV file to build the lemma dictionary
lemma_rdd = sc.textFile("./new_lemmatizer.csv")
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Create an RDD from the above file and apply the common function "buildLemmaDict"
lemmaDictionary_rdd = (lemma_rdd.flatMap(lambda x : buildLemmaDict(x)))
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Collect the RDD as a map to get the dictionary
lemmaDictionary = lemmaDictionary_rdd.collectAsMap()
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
A function to provide the lemmas by lookup from the lemma dictionary
def getLemmasList(word):
    # look up the word's lemmas; unknown words map to themselves
    if word in lemmaDictionary:
        wordLemmaList = lemmaDictionary.get(word)
    else:
        wordLemmaList = [word]
    return wordLemmaList
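A quick sanity check (hypothetical token; anything absent from the dictionary comes back unchanged):

print(getLemmasList('notaword'))  # -> ['notaword']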
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Provide the location of the input directory and enumerate its numbered subdirectories
path = "./input/" subDirList = next(os.walk(path))[1] print(subDirList) subDirList = [int(x) for x in subDirList] subDirList.sort() subDirList = [path+str(x) for x in subDirList] print(subDirList)
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
2-Gram Generator. A for loop measures the running time and sets up the output path. We can specify a folder structure holding different numbers of files, and the output folder is derived from the input folder. The data is also shown in tabular format. The following is just a test run on a small amount of data; running the same code with more RAM gives much better performance.
for dirPath in subDirList:
    outputPath = dirPath.replace("input", "output2")
    start_time = time.time()
    data_rdd = sc.textFile(dirPath)
    test = data_rdd.filter(lambda y: y.strip() != "")\
        .map(lambda x: x.replace('\t', '').lower().split(">"))\
        .map(lambda (x, y): (x, regex.sub('', y).strip().replace("j", "i").replace("v", "u").split(" ")))\
        .flatMap(lambda (x, y): [(pair, x[1:] + " |" + str(1 + y.index(pair[0])) + "." + str(1 + y.index(pair[1])) + "| ") for pair in combinations(y, 2)])\
        .filter(lambda (x, y): x[0] != "" and x[1] != "")\
        .flatMap(lambda (x, y): [(lemma, y) for lemma in product(getLemmasList(x[0]), getLemmasList(x[1]))])\
        .reduceByKey(lambda x, y: x + ", " + y).sortByKey(True)
    print("Input Directory Path :" + dirPath)
    print("Output Directory Path :" + outputPath)
    print("Time taken for " + dirPath[-1:] + " files %s" % (time.time() - start_time))
    test = test.map(lambda (x, y): ("{" + x[0] + "," + x[1] + "}", y))
    test.take(5)
    df = sqlContext.createDataFrame(test, ['n-gram (n =2)', 'Location'])
    df.show()
    df.coalesce(1).write.option("header", "true").csv(outputPath + "/result.csv")
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
3-Gram Generator. A for loop measures the running time and sets up the output path. We can specify a folder structure holding different numbers of files, and the output folder is derived from the input folder. The data is also shown in tabular format. The following is just a test run on a small amount of data; running the same code with more RAM gives much better performance.
for dirPath in subDirList:
    outputPath = dirPath.replace("input", "output3")
    start_time = time.time()
    data_rdd = sc.textFile(dirPath)
    test = data_rdd.filter(lambda y: y.strip() != "")\
        .map(lambda x: x.replace('\t', '').lower().split(">"))\
        .map(lambda (x, y): (x, regex.sub('', y).strip().replace("j", "i").replace("v", "u").split(" ")))\
        .flatMap(lambda (x, y): [(pair, x[1:] + " |" + str(1 + y.index(pair[0])) + "." + str(1 + y.index(pair[1])) + "." + str(1 + y.index(pair[2])) + "| ") for pair in combinations(y, 3)])\
        .filter(lambda (x, y): x[0] != "" and x[1] != "" and x[2] != "")\
        .flatMap(lambda (x, y): [(lemma, y) for lemma in product(getLemmasList(x[0]), getLemmasList(x[1]), getLemmasList(x[2]))])\
        .reduceByKey(lambda x, y: x + ", " + y).sortByKey(True)
    print("Input Directory Path :" + dirPath)
    print("Output Directory Path :" + outputPath)
    print("Time taken for " + dirPath[-1:] + " files %s" % (time.time() - start_time))
    test = test.map(lambda (x, y): ("{" + x[0] + "," + x[1] + "," + x[2] + "}", y))
    test.take(5)
    df = sqlContext.createDataFrame(test, ['n-gram (n =3)', 'Location'])
    df.show()
    df.coalesce(1).write.option("header", "true").csv(outputPath + "/result.csv")
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Scale-up graph results for 2-Gram and 3-Gram. The values used are the actual timings we collected from various numbers of files.
num_of_files = [1, 2, 5, 10, 20, 40, 80, 90]
linear_line = [1, 2, 5, 10, 20, 40, 80, 90]
execution_time_2_Gram = [14.7669999599, 16.5650000572, 19.5239999294, 29.0959999561,
                         47.1339998245, 85.7990000248, 167.852999926, 174.265000105]
execution_time_3_Gram = [15.2109999657, 17.493999958, 23.0810000896, 32.367000103,
                         54.4680001736, 101.955999851, 184.548999786, 189.816999912]
plot(num_of_files, execution_time_2_Gram, 'o-',
     num_of_files, execution_time_3_Gram, 'o-',
     num_of_files, linear_line, 'o-')
xlabel('Number of files')
ylabel('Execution time (secs)')
title('Performance Analysis of 2-Gram and 3-Gram Generator with different number of files')
legend(('2Gram', '3Gram', 'Linear Line(y=x)'), loc='upper left')
grid(True)
savefig("performance.png")
show()
Lab5/Lab5-Task1.ipynb
vk3105/Data-Intensive-Programming-CSE-587
apache-2.0
Interpolated SiGe band structure. The band structure for Si(1-x)Ge(x) is supposedly well approximated by a linear interpolation of the Si and Ge band structures.
def SiGe_band(x=0.2):
    Si_data = TB.bandpts(TB.Si)
    Ge_data = TB.bandpts(TB.Ge)
    data = (1-x)*Si_data + x*Ge_data
    TB.bandplt("SiGe, %%Ge=%.2f" % x, data)
    return

SiGe_band(0)
SiGe_band(0.1)
SiGe_band(0.25)
SiGe_band(0.37)
Harry Tight Binding.ipynb
rpmuller/TightBinding
bsd-2-clause
Plotting misc parts of Brillouin zones. Compare the Si and Ge conduction bands (CBs).
Ge_CB = TB.bandpts(TB.Ge)[:,4]
Si_CB = TB.bandpts(TB.Si)[:,4]
nk = len(Si_CB)
n = (nk-2)//3
plt.plot(Si_CB)
plt.plot(Ge_CB)
TB.band_labels(n)
plt.axis(xmax=3*n+1)

plt.plot(Si_CB, label='Si')
plt.plot(Ge_CB, label='Ge')
plt.plot(0.9*Si_CB + 0.1*Ge_CB, label='Si_0.9 Ge_0.1')
TB.band_labels(n)
plt.axis(xmax=3*n+1)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)

min_Si = min(Si_CB)
min_Ge = min(Ge_CB)
print min_Si, min_Ge
# min_Si - min_Ge = 0.12
Si_CB_shifted = Si_CB - min_Si + min_Ge + 0.12
Harry Tight Binding.ipynb
rpmuller/TightBinding
bsd-2-clause
Start by finding structures using online databases (or cached local results). This uses an InChI for water.
mol = oc.find_structure('InChI=1S/H2O/h1H2')
mol.structure.show()
girder/notebooks/notebooks/notebooks/PSI4.ipynb
OpenChemistry/mongochemserver
bsd-3-clause
Set up the calculation by specifying the name of the Docker image that will be used, and by providing input parameters that are known to the specific image.
image_name = 'openchemistry/psi4:1.2.1'
input_parameters = {
    'theory': 'dft',
    'functional': 'b3lyp',
    'basis': '6-31g'
}
girder/notebooks/notebooks/notebooks/PSI4.ipynb
OpenChemistry/mongochemserver
bsd-3-clause
Geometry Optimization Calculation The mol.optimize() method is a specialized helper function that adds 'task': 'optimize' to the input_parameters dictionary, and then calls the generic mol.calculate() method internally.
result = mol.optimize(image_name, input_parameters)
result.orbitals.show(mo='homo', iso=0.005)
girder/notebooks/notebooks/notebooks/PSI4.ipynb
OpenChemistry/mongochemserver
bsd-3-clause
Single Point Energy Calculation The mol.energy() method is a specialized helper function that adds 'task': 'energy' to the input_parameters dictionary, and then calls the generic mol.calculate() method internally.
result = mol.energy(image_name, input_parameters)
result.orbitals.show(mo='lumo', iso=0.005)
girder/notebooks/notebooks/notebooks/PSI4.ipynb
OpenChemistry/mongochemserver
bsd-3-clause
Normal Modes Calculation The mol.frequencies() method is a specialized helper function that adds 'task': 'frequency' to the input_parameters dictionary, and then calls the generic mol.calculate() method internally.
result = mol.frequencies(image_name, input_parameters)
result.vibrations.show(mode=1)
girder/notebooks/notebooks/notebooks/PSI4.ipynb
OpenChemistry/mongochemserver
bsd-3-clause
Find what features had non-zero weight.
model_all.coefficients.sort('value', ascending = False).print_rows(num_rows = 50)
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
Next, we write a loop that does the following: * For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).) * Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. * Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty * Report which l1_penalty produced the lowest RSS on validation data. When you call linear_regression.create() make sure you set validation_set = None. Note: you can turn off the print out of linear_regression.create() with verbose = False
import numpy as np
import sys

def choose_l1(l1s, nnz_threshold=-1):
    best_model = None
    best_l1 = None
    best_rss = sys.maxint
    min_nnz = sys.maxint
    for l1 in l1s:
        model = graphlab.linear_regression.create(training, target='price',
                                                  features=all_features,
                                                  validation_set=None,
                                                  l2_penalty=0., l1_penalty=l1,
                                                  verbose=False)
        validation['pred'] = model.predict(validation)
        testing['pred'] = model.predict(testing)
        this_rss = validation.apply(lambda x: (x['price'] - x['pred']) ** 2).sum()
        test_rss = testing.apply(lambda x: (x['price'] - x['pred']) ** 2).sum()
        nnz_num = model['coefficients']['value'].nnz()
        # keep the lowest validation RSS, optionally restricted to a given sparsity
        if this_rss < best_rss and (nnz_threshold <= 0 or nnz_num == nnz_threshold):
            best_rss = this_rss
            best_model = model
            best_l1 = l1
            print best_l1, test_rss, best_rss
            model.coefficients.sort('value', ascending=False).print_rows(num_rows=50)
        if nnz_num < min_nnz:
            min_nnz = nnz_num
    return best_model, best_l1, min_nnz

model, l1, mnn = choose_l1(np.logspace(1, 7, num=13))
mnn
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
Now, implement a loop that search through this space of possible l1_penalty values: For l1_penalty in np.logspace(8, 10, num=20): Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list. Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
# from the assignment text: search l1_penalty over np.logspace(8, 10, num=20)
l1_penalty_values = np.logspace(8, 10, num=20)
this_model, this_l1, this_mnn = choose_l1(l1_penalty_values)
this_mnn
this_model['coefficients']['value'].nnz()
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros. More formally, find: * The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights) * Store this value in the variable l1_penalty_min (we will use it later) * The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights) * Store this value in the variable l1_penalty_max (we will use it later) Hint: there are many ways to do this, e.g.: * Programmatically within the loop above * Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
def choose_largest_l1(l1s):
    rss = sys.maxint
    l1_penalty_min = -sys.maxint
    l1_penalty_max = sys.maxint
    for l1 in l1s:
        model = graphlab.linear_regression.create(training, target='price',
                                                  features=all_features,
                                                  validation_set=None,
                                                  l2_penalty=0., l1_penalty=l1,
                                                  verbose=False)
        validation['pred'] = model.predict(validation)
        testing['pred'] = model.predict(testing)
        this_rss = validation.apply(lambda x: (x['price'] - x['pred']) ** 2).sum()
        test_rss = testing.apply(lambda x: (x['price'] - x['pred']) ** 2).sum()
        nnz_num = model['coefficients']['value'].nnz()
        if this_rss < rss:
            rss = this_rss
            print rss, test_rss, l1
            model.coefficients.sort('value', ascending=False).print_rows(num_rows=50)
        # largest penalty that still has too many non-zeros
        if nnz_num > max_nonzeros and l1 > l1_penalty_min:
            l1_penalty_min = l1
        # smallest penalty that already has too few non-zeros
        if nnz_num < max_nonzeros and l1 < l1_penalty_max:
            l1_penalty_max = l1
    return l1_penalty_min, l1_penalty_max

l1_penalty_min, l1_penalty_max = choose_largest_l1(l1_penalty_values)
l1_penalty_min, l1_penalty_max
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
For l1_penalty in np.linspace(l1_penalty_min, l1_penalty_max, 20): fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None. Measure the RSS of the learned model on the VALIDATION set. Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzero.
# the narrow search range from the markdown above
l1_penalty_values = np.linspace(l1_penalty_min, l1_penalty_max, 20)
model_new, l1_new, nnz_new = choose_l1(l1_penalty_values, max_nonzeros)
l1_new, nnz_new
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
QUIZ QUESTIONS 1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros? 2. What features in this model have non-zero coefficients?
l1_penalty_values
coursera/ml-regression/assignments/week-5-lasso-assignment-1-blank.ipynb
jinntrance/MOOC
cc0-1.0
Safe default: Unlisted & owner-editable. When creating a plot, Graphistry creates a dedicated URL with the following rules:
- Viewing: Unlisted - only those given the link can access it
- Editing: Owner-only
The URL is unguessable, and the only webpage it is listed at is the creator's private gallery: https://hub.graphistry.com/datasets/ . That means it is as private as whomever the owner shares the URL with.
public_url = g.plot(render=False)
demos/more_examples/graphistry_features/sharing_tutorial.ipynb
graphistry/pygraphistry
bsd-3-clause
Switching to fully private by default. Call graphistry.privacy() to default to stronger privacy. It sets:
- mode='private' - viewing only by owners and invitees
- invited_users=[] - no invitees by default
- notify=False - no email notifications during invitations
- message=''
By default, this means an explicit personal invitation is necessary for viewing. Subsequent plots in the session will default to this setting. You can also explicitly set or override those as optional parameters.
graphistry.privacy()
# or equivalently: graphistry.privacy(mode='private', invited_users=[], notify=False, message='')

owner_only_url = g.plot(render=False)
demos/more_examples/graphistry_features/sharing_tutorial.ipynb
graphistry/pygraphistry
bsd-3-clause
Local overrides. We can locally override settings, such as opting back in to public sharing for some visualizations:
public_g = g.privacy(mode='public')
public_url1 = public_g.plot(render=False)

# Ex: inheriting public_g's mode='public'
public_g2 = public_g.name('viz2')
public_url2 = public_g2.plot(render=False)

# Ex: the global default set via .privacy() still applies
still_private_url = g.plot(render=False)
demos/more_examples/graphistry_features/sharing_tutorial.ipynb
graphistry/pygraphistry
bsd-3-clause
Invitations and notifications. As part of the settings, we can permit specific individuals as viewers or editors and, optionally, send them an email notification.
VIEW = '10'
EDIT = '20'
shared_g = g.privacy(
    mode='private',
    notify=True,
    invited_users=[{'email': 'partner1@site1.com', 'action': VIEW},
                   {'email': 'partner2@site2.org', 'action': EDIT}],
    message='Check out this graph!')
shared_url = shared_g.plot(render=False)
demos/more_examples/graphistry_features/sharing_tutorial.ipynb
graphistry/pygraphistry
bsd-3-clause
The options can be configured globally or locally, just as we did with mode. For example, we might not want to send emails by default, just on specific plots:
graphistry.privacy(
    mode='private',
    notify=False,
    invited_users=[{'email': 'partner1@site1.com', 'action': VIEW},
                   {'email': 'partner2@site2.org', 'action': EDIT}])

shared_url = g.plot(render=False)
notified_and_shared_url = g.privacy(notify=True).plot(render=False)
demos/more_examples/graphistry_features/sharing_tutorial.ipynb
graphistry/pygraphistry
bsd-3-clause
compute the vector encoding of each instance in a sparse data matrix
%%time
from eden.graph import vectorize
X = vectorize(list(graphs), complexity=3, nbits=18, n_jobs=1)
print 'Instances: %d ; Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
sequence_example.ipynb
fabriziocosta/EDeN_examples
gpl-2.0
compute the pairwise similarity as the dot product between the vector representations of each sequence
from sklearn import metrics
K = metrics.pairwise.pairwise_kernels(X, metric='linear')
sequence_example.ipynb
fabriziocosta/EDeN_examples
gpl-2.0
visualize it, since a picture is worth a thousand words...
%matplotlib inline
import pylab as plt

plt.figure(figsize=(6,6))
img = plt.imshow(K, interpolation='none', cmap=plt.get_cmap('YlOrRd'))
plt.show()
sequence_example.ipynb
fabriziocosta/EDeN_examples
gpl-2.0
Let's load the ipython-cypher extension so we can send Cypher queries to Neo4j directly from this notebook. Every cell that starts with %%cypher and every Python statement that starts with %cypher will be sent to Neo4j for execution.
!pip install ipython-cypher
%load_ext cypher
%config CypherMagic.uri='http://neo4j:7474/db/data'
%config CypherMagic.auto_html=False
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The next cell runs a Cypher query that returns the first 10 nodes. The database is empty at the start, but you can rerun it later to see output. There are plugins that render the output graphically as a graph, but for that we will use Neo4j's own web interface.
%%cypher
match (n) return n limit 10;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
CSV data could not originally be loaded from the CSV files in this notebook, because the CSV dialect that Neo4j accepts is not standard. I filed an issue to get it fixed, and as of version 3.3 it appears to work if you add a configuration parameter: https://github.com/neo4j/neo4j/issues/8472

dbms.import.csv.legacy_quote_escaping = false

I have added this option to the Neo4j startup in the course container. Note that if you use a different setup, you will have to add it yourself. First we create an index on the Id attribute of User, which will be used later to create users and relate them to the question or answer just read. Without it, loading the CSV is very slow.
%%cypher
CREATE INDEX ON :User(Id);
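To confirm that the index was created, Neo4j 3.x ships a procedure for listing indexes (a sketch; the exact procedure name varies across Neo4j versions):

%%cypher
CALL db.indexes();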
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The following code loads the CSV of questions and answers. It first creates all the nodes with the Post label, and afterwards adds the Question or Answer label depending on the value of the PostTypeId attribute.
%%cypher
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS
FROM "http://neuromancer.inf.um.es:8080/es.stackoverflow/Posts.csv" AS row
CREATE (n)
SET n=row
SET n :Post
;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
All questions are labelled Question.
%%cypher
MATCH (n:Post {PostTypeId : "1"}) SET n:Question;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
All answers are labelled Answer.
%%cypher
MATCH (n:Post {PostTypeId : "2"}) SET n:Answer;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
A User node is created (or reused if it already exists) from the OwnerUserId field, whenever it is not empty. Note that CREATE can be used because this particular user-post relationship does not exist yet. Careful: running this twice will create twice as many relationships.
%%cypher
MATCH (n:Post)
WHERE n.OwnerUserId <> ""
MERGE (u:User {Id: n.OwnerUserId})
CREATE (u)-[:WROTE]->(n);
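If the cell might be re-run, the relationship itself can be MERGEd instead of CREATEd so that the load stays idempotent (a sketch):

%%cypher
MATCH (n:Post)
WHERE n.OwnerUserId <> ""
MERGE (u:User {Id: n.OwnerUserId})
MERGE (u)-[:WROTE]->(n);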
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The Cypher language. Cypher has a query-by-example syntax. It supports functions and allows creating and searching for nodes and relationships. It has some peculiarities that we will see below; for now, a summary of its features can be found in the Cypher Reference Card. The earlier query used the LOAD CSV construct to read CSV data into nodes. The CREATE clause creates new nodes. SET assigns values to node properties. In the query above, every node read gets a copy of the row's data (first SET). Then, depending on the value of PostTypeId, nodes are labelled :Question or :Answer. If a post has a user assigned through OwnerUserId, a user is added if it does not exist and the :WROTE relationship is created. There were also other special posts that were neither questions nor answers; these get no second label:
%%cypher
match (n:Post)
WHERE size(labels(n)) = 1
RETURN n;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
We create an index on Id to speed up the next lookups:
%%cypher
CREATE INDEX ON :Post(Id);
neo4j/sesion7.ipynb
dsevilla/bdge
mit
We add a relationship between questions and answers:
%%cypher
MATCH (a:Answer), (q:Question {Id: a.ParentId})
CREATE (a)-[:ANSWERS]->(q)
;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The %cypher constructs return results from which a pandas dataframe can be obtained:
#%%cypher
res = %cypher MATCH q=(r)-[:ANSWERS]->(p) RETURN p.Id,r.Id;
df = res.get_dataframe()
df['r.Id'] = pd.to_numeric(df['r.Id'], downcast='unsigned')
df['p.Id'] = pd.to_numeric(df['p.Id'], downcast='unsigned')
df.plot(kind='scatter', x='p.Id', y='r.Id', figsize=(15,15))
neo4j/sesion7.ipynb
dsevilla/bdge
mit
Query RQ4 can be solved very easily. This first version returns the nodes:
%%cypher
// RQ4
MATCH (u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
      (u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
RETURN DISTINCT u1,u2
;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
Or return each user's Id instead:
%%cypher
MATCH (u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
      (u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND toInt(u1.Id) < toInt(u2.Id)
RETURN DISTINCT u1.Id,u2.Id ORDER BY toInt(u1.Id)
;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
And finally, the creation of :RECIPROCATE relationships between users. This also introduces the WITH construct. WITH introduces "namespaces": it imports names from previous rows, aliases them with AS, and introduces new values via Cypher functions. The next query is the same RQ4 as above, but it creates :RECIPROCATE relationships between every pair of users who help each other reciprocally.
%%cypher
// RQ4, creating reciprocity relationships
MATCH (u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
      (u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
WITH u1 AS user1, u2 AS user2
MERGE (user1)-[:RECIPROCATE]->(user2)
MERGE (user2)-[:RECIPROCATE]->(user1)
;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
We can also search for the shortest path between any two users. If there is a path through some question or answer, it will be found. An example where there is direct communication:
%%cypher
MATCH p=shortestPath(
  (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'})
)
RETURN p
neo4j/sesion7.ipynb
dsevilla/bdge
mit
Whereas with another user the chain is longer:
%%cypher
MATCH p=shortestPath(
  (u1:User {Id: '324'})-[*]-(u2:User {Id:'25'})
)
RETURN p
neo4j/sesion7.ipynb
dsevilla/bdge
mit
Finally, all shortest paths can be found, showing that there must be at least one question/answer pair between users who are reciprocal:
%%cypher
MATCH p=allShortestPaths(
  (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'})
)
RETURN p
neo4j/sesion7.ipynb
dsevilla/bdge
mit
EXERCISE: Build the :Tag nodes for each of the tags that appear in the questions. Build the post-[:TAGGED_BY]->tag relationships for each tag, and also tag-[:TAGS]->post. To do so, look up the WITH and UNWIND constructs and the replace() and split() functions in the Cypher documentation; a possible sketch follows the verification query below. The following query must return 5703 results:
%%cypher
MATCH p=(t:Tag)-[:TAGS]->(:Question)
WHERE t.name =~ "^java$|^c\\+\\+$"
RETURN count(p);
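One possible approach to the exercise (a sketch, assuming the Tags property uses the Stack Exchange dump format '<tag1><tag2>...'):

%%cypher
MATCH (q:Question) WHERE q.Tags <> ""
WITH q, split(replace(replace(q.Tags, "<", ""), ">", ","), ",") AS tags
UNWIND tags AS tagname
WITH q, tagname WHERE tagname <> ""
MERGE (t:Tag {name: tagname})
MERGE (t)-[:TAGS]->(q)
MERGE (q)-[:TAGGED_BY]->(t);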
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The following query shows the users who ask about each Tag:
%%cypher
MATCH (t:Tag)-->(:Question)<--(u:User)
RETURN t.name, collect(distinct u.Id) ORDER BY t.name;
neo4j/sesion7.ipynb
dsevilla/bdge
mit
The same MATCH can be used to find which set of tags each user has used, just by changing what we return:
%%cypher
MATCH (t:Tag)-->(:Question)<--(u:User)
RETURN u.Id, collect(distinct t.name) ORDER BY toInt(u.Id);
neo4j/sesion7.ipynb
dsevilla/bdge
mit
2. Cleaning up and summarizing the data. Lookin' good! Let's convert the data into a nicer format: we rearrange some columns and check out what the columns are.
# Reorder the data columns and drop email_id
cols = dataset.columns.tolist()
cols = cols[2:] + [cols[1]]
dataset = dataset[cols]

# Examine shape of dataset and some column names
print dataset.shape
print dataset.columns.values[0:10]

# Summarise feature values
dataset.describe()

# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
ensembling/ensembling.ipynb
nslatysheva/data_science_blogging
gpl-3.0