Check results on some example sentence pairs
```python
sentence1 = "A soccer game with multiple males playing"
sentence2 = "Some men are playing a sport"
check_similarity(sentence1, sentence2)
```
examples/nlp/ipynb/semantic_similarity_with_bert.ipynb
keras-team/keras-io
apache-2.0
I-TASSER
See: http://ssbio.readthedocs.io/en/latest/instructions/itasser.html for instructions on how to obtain the program, and follow it up to step 3. Then:
- Copy and paste the download link below into `itasser_download_link`
- Make sure the version number is correct in `itasser_version_number`
- Extract the archive
- Download the I-TASSER library (this will take some time)

Installation will be complete after this. Your paths to execute I-TASSER using ssbio are: `~/software/itasser/I-TASSER5.1/I-TASSERmod/runI-TASSER.pl` with a library directory of `~/software/itasser/ITLIB`
```python
import os
import os.path as op  # assumed imports; used throughout this install guide

# Step 2
itasser_download_link = 'my_download_link'
# Step 3
itasser_version_number = '5.1'
# Step 4
itasser_archive = itasser_download_link.split('/')[-1]
os.mkdir(op.expanduser('~/software/itasser/'))
os.chdir(op.expanduser('~/software/itasser/'))
!wget $itasser_download_link
!tar -jxf $itasser_archive
# Step 5
os.mkdir(op.expanduser('~/software/itasser/ITLIB'))
!./I-TASSER5.1/download_lib.pl -libdir ITLIB
```
docs/notebooks/I-TASSER and TMHMM Install Guide.ipynb
SBRG/ssbio
mit
TMHMM
See: http://ssbio.readthedocs.io/en/latest/instructions/tmhmm.html for instructions on how to obtain the program, and follow it up to step 2. Then:
- Copy and paste the download link below into `tmhmm_download_link`
- Run the code below to install tmhmm.
```python
import os
import os.path as op  # assumed imports; used throughout this install guide

# Step 2
tmhmm_download_link = 'my_download_link'
# Step 3
os.mkdir(op.expanduser('~/software/tmhmm/'))
os.chdir(op.expanduser('~/software/tmhmm/'))
!wget $tmhmm_download_link
!tar -zxf tmhmm-2.0c.Linux.tar.gz

# Replace perl path
os.chdir(op.expanduser('~/software/tmhmm/tmhmm-2.0c/bin'))
!perl -i -pe 's{^#!/usr/local/bin/perl}{#!/usr/bin/perl}' tmhmm
!perl -i -pe 's{^#!/usr/local/bin/perl -w}{#!/usr/bin/perl -w}' tmhmmformat.pl

# Create symbolic links
!ln -s $HOME/software/tmhmm/tmhmm-2.0c/bin/* /srv/venv/bin/
```
docs/notebooks/I-TASSER and TMHMM Install Guide.ipynb
SBRG/ssbio
mit
2. Recursive functions
EXERCISE: Write the following functions using recursive definitions.
- Fattoriale(n) (factorial)
- Fibonacci(n)
- IsPalindrome(stringa)
```python
# Fattoriale (factorial)
def Fattoriale(n):
    if n == 0:
        return 1
    else:
        return n * Fattoriale(n - 1)

print(Fattoriale(4))

# Fibonacci
def Fib(n):
    if n == 0 or n == 1:
        return 1
    else:
        return Fib(n - 1) + Fib(n - 2)

print(Fib(6))

# Test if a list (e.g. a string) is a palindrome
def IsPalindrome(Ls):
    if len(Ls) <= 1:
        return True
    else:
        return (Ls[0] == Ls[-1]) and IsPalindrome(Ls[1:-1])

print(IsPalindrome("abcdedcba"))
print(IsPalindrome("abcdeecba"))
```
Lab 2 - Cenni di Programmazione Funzionale - Parte prima.ipynb
mathcoding/Programmazione2
mit
3. List processing
Examples to work through:
- Sum of the elements of a list: $$\mbox{sum}(Ls) = \sum_{e \in Ls} e$$
- Product of the elements of a list: $$\mbox{prod}(Ls) = \prod_{e \in Ls} e$$
- Generalize the two previous functions into a single Fold function: $$\mbox{fold}(f,v_0,[v_1,v_2,v_3,...,v_n]) = f(v_1, f(v_2, f(v_3, ... f(v_n,v_0))))$$
```python
def Sum(Ls):
    if Ls == []:
        return 0
    else:
        return Ls[0] + Sum(Ls[1:])  # infix notation for addition

# Better to use operator.add from the operator library
# https://docs.python.org/3.6/library/operator.html
def Add(x, y):
    return x + y

def Sum2(Ls):
    if Ls == []:
        return 0
    else:
        return Add(Ls[0], Sum2(Ls[1:]))  # prefix notation for addition

def Mul(x, y):
    return x * y

def Prod(Ls):
    if Ls == []:
        return 1
    else:
        return Mul(Ls[0], Prod(Ls[1:]))

def Fold(F, v, Ls):
    if Ls == []:
        return v
    else:
        return F(Ls[0], Fold(F, v, Ls[1:]))

def FoldSum(Ls):
    return Fold(Add, 0, Ls)

def FoldProd(Ls):
    return Fold(Mul, 1, Ls)
```
Lab 2 - Cenni di Programmazione Funzionale - Parte prima.ipynb
mathcoding/Programmazione2
mit
Expressiveness of the fold function
With the fold function just seen, several classic list-processing functions can be written. As an exercise, develop the following functions in terms of fold:
- And(Ls): evaluates to True if all the elements of the list are True
- Or(Ls): evaluates to True if at least one element of the list is True
- Length(Ls): computes the length of a list
- Reverse(Ls): reverses the contents of a list (example: [1,2,3,4] becomes [4,3,2,1])
- FoldFactorial(n): a function that computes the factorial of n
- SumLength(Ls): returns a pair of values, given by the sum of the elements in the list and the length of the list (example: SumLength([1,2,3,4]) = (10,4))
- Map(F, Ls): a map function equivalent to the Python builtin
- Filter(P, Ls): a filter function equivalent to the Python builtin
```python
# TODO: write the 8 functions requested above
```
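As a hint (and spoiler for the exercise), two of the eight functions could be expressed in terms of Fold roughly like this; the Fold from the earlier cell is repeated so the sketch is self-contained:

```python
# Right fold, as defined earlier in the notebook
def Fold(F, v, Ls):
    if Ls == []:
        return v
    return F(Ls[0], Fold(F, v, Ls[1:]))

# Length: ignore each element and add 1 to the accumulated count
def Length(Ls):
    return Fold(lambda x, acc: 1 + acc, 0, Ls)

# Map: prepend F(x) to the already-mapped tail
def Map(F, Ls):
    return Fold(lambda x, acc: [F(x)] + acc, [], Ls)

print(Length([1, 2, 3, 4]))             # 4
print(Map(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```

The remaining functions follow the same pattern: choose an initial value and a two-argument combiner.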
Lab 2 - Cenni di Programmazione Funzionale - Parte prima.ipynb
mathcoding/Programmazione2
mit
FoldRight and FoldLeft
The fold function written above is generally called foldRight, since it applies the given function to the elements of the list using the convention that the given function is right-associative. For example, the FoldSum function as implemented above, applied to the list $[1,2,3,4,5]$, computes $(1+(2+(3+(4+(5+0)))))$.
EXERCISE: Write a fold function that assumes the given function is left-associative, i.e. that, given the list $[1,2,3,4,5]$, computes $(((((0+1)+2)+3)+4)+5)$.
```python
def FoldLeft(F, v, Ls):
    # TODO: complete
    pass
```
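One possible way to complete the exercise (a sketch, not the only solution): carry the accumulator forward as the list is traversed left to right.

```python
# Left fold: combine the accumulator with the head, then recurse on the tail
def FoldLeft(F, v, Ls):
    if Ls == []:
        return v
    return FoldLeft(F, F(v, Ls[0]), Ls[1:])

print(FoldLeft(lambda a, b: a + b, 0, [1, 2, 3, 4, 5]))  # 15
# A non-commutative F shows the left-associative grouping:
print(FoldLeft(lambda a, b: a - b, 0, [1, 2, 3]))  # ((0-1)-2)-3 = -6
```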
Lab 2 - Cenni di Programmazione Funzionale - Parte prima.ipynb
mathcoding/Programmazione2
mit
EXERCISE: Write the function reverse(Ls) using FoldLeft instead of Fold (unless otherwise specified, "fold" means FoldRight).
```python
def ReverseFoldL(Ls):
    # TODO: complete
    pass
```
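One possible solution sketch: with a left fold, prepending each element to the accumulator naturally reverses the list (the FoldLeft from the previous exercise is repeated here so the snippet is self-contained).

```python
# Left fold, as sketched in the previous exercise
def FoldLeft(F, v, Ls):
    if Ls == []:
        return v
    return FoldLeft(F, F(v, Ls[0]), Ls[1:])

# Reverse: prepend each element to the accumulator while walking left-to-right
def ReverseFoldL(Ls):
    return FoldLeft(lambda acc, x: [x] + acc, [], Ls)

print(ReverseFoldL([1, 2, 3, 4]))  # [4, 3, 2, 1]
```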
Lab 2 - Cenni di Programmazione Funzionale - Parte prima.ipynb
mathcoding/Programmazione2
mit
Stationarity check and STL decomposition of the series:
```python
sm.tsa.seasonal_decompose(deaths['num_deaths']).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(deaths['num_deaths'])[1])
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Stationarity
The Dickey-Fuller test does not reject the hypothesis of non-stationarity, and a small trend remains. Let's try seasonal differencing; we will run an STL decomposition on the differenced series and check stationarity:
```python
deaths['num_deaths_diff'] = deaths['num_deaths'] - deaths['num_deaths'].shift(12)
sm.tsa.seasonal_decompose(deaths['num_deaths_diff'][12:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(deaths['num_deaths_diff'][12:])[1])
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
The Dickey-Fuller test rejects the hypothesis of non-stationarity, but the trend could not be removed completely. Let's try adding ordinary first-order differencing as well:
```python
deaths['num_deaths_diff2'] = deaths['num_deaths_diff'] - deaths['num_deaths_diff'].shift(1)
sm.tsa.seasonal_decompose(deaths['num_deaths_diff2'][13:]).plot()
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(deaths['num_deaths_diff2'][13:])[1])
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
The hypothesis of non-stationarity is confidently rejected, and visually the series looks better: the trend is gone.
Model selection
Let's look at the ACF and PACF of the resulting series:
```python
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(deaths['num_deaths_diff2'][13:].values.squeeze(), lags=58, ax=ax)
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(deaths['num_deaths_diff2'][13:].values.squeeze(), lags=58, ax=ax);
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Initial approximations: Q=2, q=1, P=2, p=2
```python
ps = range(0, 3)
d = 1
qs = range(0, 1)
Ps = range(0, 3)
D = 1
Qs = range(0, 3)

parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)

%%time
results = []
best_aic = float("inf")

for param in parameters_list:
    # try/except is needed because the model fails to fit on some parameter sets
    try:
        model = sm.tsa.statespace.SARIMAX(deaths['num_deaths'],
                                          order=(param[0], d, param[1]),
                                          seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
    # report the parameters on which the model fails to fit and move on to the next set
    except ValueError:
        print('wrong parameters:', param)
        continue
    aic = model.aic
    # keep the best model, its AIC and its parameters
    if aic < best_aic:
        best_model = model
        best_aic = aic
        best_param = param
    results.append([param, model.aic])

warnings.filterwarnings('default')
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Its residuals:
```python
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')

ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)

print("Student's t-test: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and not autocorrelated (confirmed by the Ljung-Box test and the correlogram). Let's see how well the model describes the data:
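The notebook cites a Ljung-Box test on the residuals without showing the computation; statsmodels provides `acorr_ljungbox` for this, but as a self-contained illustration of what the test computes, here is the Q statistic and p-value by hand (a sketch on synthetic residuals; in the notebook one would pass `best_model.resid[13:]` instead):

```python
import numpy as np
from scipy import stats

def ljung_box_pvalue(resid, lags=10):
    """Ljung-Box p-value: Q = n(n+2) * sum_k rho_k^2 / (n-k), chi^2 with `lags` dof."""
    resid = np.asarray(resid, dtype=float)
    n = len(resid)
    resid = resid - resid.mean()
    denom = np.sum(resid ** 2)
    # sample autocorrelations at lags 1..lags
    acf = np.array([np.sum(resid[k:] * resid[:-k]) / denom
                    for k in range(1, lags + 1)])
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
    return stats.chi2.sf(q, df=lags)

rng = np.random.RandomState(0)
white_noise = rng.normal(size=200)  # uncorrelated residuals: expect a large p-value
print("Ljung-Box test: p=%f" % ljung_box_pvalue(white_noise))
```

A small p-value would reject the null of no residual autocorrelation, which is what the diagnostics above are checking for.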
```python
deaths['model'] = best_model.fittedvalues
deaths['num_deaths'].plot()
deaths['model'][13:].plot(color='r')
plt.ylabel('Accidental deaths');
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Forecast
```python
from datetime import datetime
from dateutil.relativedelta import relativedelta

deaths2 = deaths[['num_deaths']]
# pd.datetime was removed in recent pandas versions; use datetime.strptime directly
date_list = [datetime.strptime("1979-01-01", "%Y-%m-%d") + relativedelta(months=x)
             for x in range(0, 24)]
future = pd.DataFrame(index=date_list, columns=deaths2.columns)
deaths2 = pd.concat([deaths2, future])
deaths2['forecast'] = best_model.predict(start=72, end=100)

deaths2['num_deaths'].plot(color='b')
deaths2['forecast'].plot(color='r')
plt.ylabel('Accidental deaths');
```
time series regression/ARIMA/arima_time_series_deaths.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Description The magnetization curve for a separately excited dc generator is shown in Figure P8-7. The generator is rated at 6 kW, 120 V, 50 A, and 1800 r/min and is shown in Figure P8-8. Its field circuit is rated at 5A. <img src="figs/FigC_P8-7.jpg" width="70%"> <hr> Note An electronic version of this magnetization curve can be found in file p87_mag.dat , which can be used with Python programs. Column 1 contains field current in amps, and column 2 contains the internal generated voltage $E_A$ in volts. <hr> <img src="figs/FigC_P8-8.jpg" width="70%"> The following data are known about the machine: $$R_A = 0.18\,\Omega \qquad \quad V_F = 120\,V$$ $$R_\text{adj} = 0\text{ to }40\,\Omega \qquad R_F = 20\, \Omega$$ $$N_F = 1000 \text{ turns per pole}$$
```python
n0 = 1800      # [r/min]
Ra = 0.18      # [Ohm]
Vf = 120       # [V]
Radj_min = 0   # [Ohm]
Radj_max = 40  # [Ohm]
Rf = 20        # [Ohm]
Nf = 1000      # [turns per pole]
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
(a) If this generator is operating at no load, what is the range of voltage adjustments that can be achieved by changing $R_\text{adj}$?
(b) If the field rheostat is allowed to vary from 0 to 30 $\Omega$ and the generator's speed is allowed to vary from 1500 to 2000 r/min, what are the maximum and minimum no-load voltages in the generator?
SOLUTION
(a) If the generator is operating with no load at 1800 r/min, then the terminal voltage will equal the internal generated voltage $E_A$. The maximum possible field current occurs when $R_\text{adj} = 0\,\Omega$. The current is:
$$I_F = \frac{V_F}{R_F + R_\text{adj}}$$
```python
If_max = Vf / (Rf + Radj_min)
If_max
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 135 V. Since the actual speed is 1800 r/min, the maximum no-load voltage is 135 V. The minimum possible field current occurs when $R_\text{adj} = 40\,\Omega$. The current is:
```python
If_min = Vf / (Rf + Radj_max)
If_min
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 79.5 V. Since the actual speed is 1800 r/min, the minimum no-load voltage is 79.5 V.
(b) The maximum voltage will occur at the highest current and speed, and the minimum voltage will occur at the lowest current and speed. The maximum possible field current occurs when $R_\text{adj} = 0\,\Omega$. The current is
```python
If_max = Vf / (Rf + Radj_min)
If_max
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 135 V. Since the actual speed is 2000 r/min, the maximum no-load voltage is: $$\frac{E_A}{E_{A0}} = \frac{n}{n_0}$$
```python
n_max = 2000   # [r/min]
Ea0_max = 135  # [V]
Ea_max = Ea0_max * n_max / n0
print('''
Ea_max = {:.0f} V
=============='''.format(Ea_max))
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
The minimum possible no-load voltage occurs at the minimum speed and minimum field current. The maximum adjustable resistance is $R_\text{adj} = 30\,\Omega$. The current is
```python
Radj_max_b = 30.0  # [Ohm]
If_min = Vf / (Rf + Radj_max_b)
If_min
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 93.1 V. Since the actual speed is 1500 r/min, the minimum no-load voltage is
```python
n_min = 1500   # [r/min]
Ea0_min = 93.1 # [V]
Ea_min = Ea0_min * n_min / n0
print('''
Ea_min = {:.1f} V
==============='''.format(Ea_min))
```
Chapman/Ch8-Problem_8-22.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Compose list of options and then construct widget to present them, along with a default option. Display the widget using the IPython display call.
```python
coordinates = [coord.name() for coord in cube.coords()]
dim_x = ipywidgets.RadioButtons(
    description='Dimension:',
    options=coordinates,
    value='time')
IPython.display.display(dim_x)
```
doc/write_your_own/components/DimPicker2.ipynb
SciTools/cube_browser
bsd-3-clause
Method01: double normalization
For the same batch of data, first normalize by sample, then normalize by amplicon. Normalization method: Min-Max Normalization. Whether the second step is reasonable is unclear.
```python
py.iplot(fig_3d(norm_data(data, 'by_s'), 'BRCA161116_norm_sample'), filename='BRCA161116_norm_sample')
py.iplot(fig_3d(norm_data(data, 'double'), 'BRCA161116_norm_double'), filename='BRCA161116_norm_double')
norm_data(data, 'double')['NGS161111-6-2'].plot()
norm_data(data, 'double')['NGS161111-7-2'].plot()
```
ipynb/BRCA_LargeDel_Analysis.ipynb
codeunsolved/NGS-Dashboard
mit
Method01 conclusions
Normalizing by sample first gave good results; see the figure BRCA161116_norm_sample. For the same amplicon, the data across different samples were brought to a fairly uniform level. However, the subsequent normalization by amplicon did not achieve the expected effect, even after calibrating the data with different statistical methods. What was previously thought to be a large difference between the A and B pools is actually variation between amplicons; it is not the case that the A or B pool is internally uniform while the two pools differ. Future test data should come with a uniformity assessment.
Method02: compare relative proportions after per-sample normalization
1. Normalize by sample, as in Method01
2. Pick one sample's amplicon read counts as the baseline (1) and look at the ratios of the other samples to it
3. Inspect each sample for runs of consecutive amplicons below 0.5; a single one alone would give a high false-positive rate
Of course, this method still requires the data to have good uniformity.
```python
def plot_brca_largeindel(d, dir_pic):
    def choose_one(d):
        cu_sort = coverage_uniforminity(d).sort_values(by='0.2x')
        one = cu_sort.iloc[-1]
        if one.values[0] < 98:
            print("[WARNING] Max Coverage Uniformity 0.2x < 98%%: %s" % one)
        return one

    def get_plot_data():
        p_data = (d.T / d[choose_one(data).name]).T
        for s in p_data:
            if p_data[s].max() < 0.8:
                print("[WARNING] Sample ID: %s's data quality is low, poped!" % s)
                p_data.pop(s)
            elif p_data[s].min() > 1:
                print("[WARNING] Sample ID: %s's data is over amplified, poped!" % s)
        return p_data

    dir_pic = os.path.join('.', 'pic/%s' % dir_pic)
    if not os.path.exists(dir_pic):
        print("[WARNING] %s doesn't exist, create it!" % dir_pic)
        os.makedirs(dir_pic)

    plot_data = get_plot_data()
    plt.violinplot([plot_data.T[a] for a in plot_data.T])
    plt.show()
    plot_data.plot()

plot_brca_largeindel(data, '161116')
```
ipynb/BRCA_LargeDel_Analysis.ipynb
codeunsolved/NGS-Dashboard
mit
Likelihood of a coin being fair
$$ P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)} $$
Here, $P(X|\theta)$ is the likelihood, $P(\theta)$ is the prior on theta, $P(X)$ is the evidence, while $P(\theta|X)$ is the posterior. Now the probability of observing $k$ heads out of $n$ trials, given that the probability of a fair coin is $\theta$, is given as follows:
$P(N_{heads}=k|n,\theta) = \frac{n!}{k!(n-k)!}\theta^{k}(1-\theta)^{(n-k)}$
Consider $n=10$ and $k=9$. Now we can plot our likelihood as follows:
```python
def get_likelihood(theta, n, k, normed=False):
    ll = (theta**k) * ((1 - theta)**(n - k))
    if normed:
        num_combs = comb(n, k)
        ll = num_combs * ll
    return ll

get_likelihood(0.5, 2, 2, normed=True)
get_likelihood(0.5, 10, np.arange(10), normed=True)

N = 100
plt.plot(
    np.arange(N),
    get_likelihood(0.5, N, np.arange(N), normed=True),
    color='k', markeredgecolor='none', marker='o',
    linestyle="--", ms=3, markerfacecolor='r', lw=0.5
)

n, k = 10, 6
theta = np.arange(0, 1, 0.01)
ll = get_likelihood(theta, n, k, normed=True)

source = ColumnDataSource(data=dict(
    theta=theta,
    ll=ll,
))
hover = HoverTool(tooltips=[
    ("index", "$index"),
    ("theta", "$x"),
    ("ll", "$y"),
])
p1 = figure(plot_width=600, plot_height=400, tools=[hover],
            title="Likelihood of fair coin")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood'
p1.line('theta', 'll', color='#A6CEE3', source=source)

# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
```
Likelihood+ratio.ipynb
napsternxg/ipython-notebooks
apache-2.0
Plot for multiple data
```python
hover = HoverTool(tooltips=[
    ("index", "$index"),
    ("theta", "$x"),
    ("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
            #y_axis_type="log",
            tools=[hover],
            title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'

theta = np.arange(0, 1, 0.01)
for n, k, color in zip(
    [10, 100, 500],
    [6, 60, 300],
    ["red", "blue", "black"]
):
    ll_unbiased = get_likelihood(0.5, n, k, normed=False)
    ll = get_likelihood(theta, n, k, normed=False)
    ll_ratio = ll / ll_unbiased
    source = ColumnDataSource(data=dict(
        theta=theta,
        ll_ratio=ll_ratio,
    ))
    p1.line('theta', 'll_ratio', color=color, source=source,
            legend="n={}, k={}".format(n, k))

# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle

hover = HoverTool(tooltips=[
    ("index", "$index"),
    ("theta", "$x"),
    ("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400, y_axis_type="log",
            tools=[hover],
            title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'n'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'

n = 10**np.arange(0, 6)
k = (n * 0.6).astype(int)
theta = 0.6
ll_unbiased = get_likelihood(0.5, n, k, normed=False)
ll = get_likelihood(theta, n, k, normed=False)
ll_ratio = ll / ll_unbiased
source = ColumnDataSource(data=dict(
    n=n,
    ll_ratio=ll_ratio,
))
p1.line('n', 'll_ratio', color=color, source=source,
        legend="theta={:.2f}".format(theta))

# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
```
Likelihood+ratio.ipynb
napsternxg/ipython-notebooks
apache-2.0
Create Query URL from MPC formatted file In the notebook orbit_fitting_demo.ipynb we show how to use the ephem_utils.py code in KBMOD to create a file with the observations for an identified object in KBMOD and turn it into an MPC formatted file. The file created in that demo is saved here as kbmod_mpc.dat. We will use that file to show how the precovery interface works.
```python
ssois_query = ssoisPrecovery()
query_url = ssois_query.format_search_by_arc_url('kbmod_mpc.dat')
print(query_url)
```
notebooks/precovery_demo.ipynb
DiracInstitute/kbmod
bsd-2-clause
Query service via URL The formatted URL above will work in a browser to return results. But we have the query_ssois function that will pull down the results and provide them in a pandas dataframe all in one go.
```python
results_df = ssois_query.query_ssois(query_url)
results_df.head()
```
notebooks/precovery_demo.ipynb
DiracInstitute/kbmod
bsd-2-clause
Create direct data download link It's possible to take the URLs for the data provided in results_df['MetaData'] and turn them directly into a download link clickable from here in the notebook.
```python
from IPython.display import HTML

image_data_link = results_df["MetaData"].iloc[-1]
HTML('<a href="{}">{}</a>'.format(image_data_link, image_data_link))
```
notebooks/precovery_demo.ipynb
DiracInstitute/kbmod
bsd-2-clause
Compare KBMOD data to available data
```python
%pylab inline
from ephem_utils import mpc_reader

kbmod_observations = mpc_reader('kbmod_mpc.dat')
scatter(kbmod_observations.ra.deg, kbmod_observations.dec.deg,
        marker='x', s=200, c='r', label='KBMOD Observations', zorder=10)
plt.legend()
scatter(results_df['Object_RA'], results_df['Object_Dec'], c=results_df['MJD'])
cbar = plt.colorbar()
plt.xlabel('RA')
plt.ylabel('Dec')
cbar.set_label('MJD')
```
notebooks/precovery_demo.ipynb
DiracInstitute/kbmod
bsd-2-clause
Layers Layers are one of the most powerful aspects of ggplot. The idea is to think of your plot as containing different components, or layers, which when combined together make up the entire visual. Take the following example.
```python
ggplot(diamonds, aes(x='carat', y='price')) + geom_point() + ggtitle("Carat vs. Price")
```
docs/how-to/Layering Plots.ipynb
yhat/ggplot
bsd-2-clause
The plot above shows a scatterplot comparing a diamond's carat and the price of the diamond. The plot is composed of 3 layers:
Base layer ggplot(diamonds, aes(x='carat', y='price')) -- This defines the dataset that's going to be plotted and the aesthetics (or instructions) to be used for defining the x and y axes.
Geom layer geom_point() -- This layer tells ggplot to render a scatter plot using the aesthetics and data defined in the base layer.
Labels layer ggtitle("Carat vs. Price") -- This layer applies a title to the plot. There are lots of other labels and customizations you can do to your plots (xlab, ylab, etc.).
You can continue to add more layers to your plot as there are more things you'd like to see. For instance, if I wanted to customize the x and y axis labels, I could do so by adding 2 additional layers using xlab and ylab.
```python
ggplot(diamonds, aes(x='carat', y='price')) + \
    geom_point() + \
    ggtitle("Carat vs. Price") + \
    xlab(" Carat\n(1 carat = 200 mg)") + \
    ylab(" Price\n(2008 USD)")
```
docs/how-to/Layering Plots.ipynb
yhat/ggplot
bsd-2-clause
In addition to adding labels you can also add additional "geoms", or plot types. For instance, let's add a linear trend-line to our plot using stat_smooth.
```python
ggplot(diamonds, aes(x='carat', y='price')) + \
    geom_point() + \
    stat_smooth(method='lm') + \
    ggtitle("Carat vs. Price") + \
    xlab(" Carat\n(1 carat = 200 mg)") + \
    ylab(" Price\n(2008 USD)")
```
docs/how-to/Layering Plots.ipynb
yhat/ggplot
bsd-2-clause
It looks like there are some outlying points in our plot. Let's filter out some of those rows in our dataset by using xlim and ylim. By adding these layers, it'll cap the x and y axes with whatever values we tell it to.
```python
ggplot(diamonds, aes(x='carat', y='price')) + \
    geom_point() + \
    stat_smooth(method='lm') + \
    ggtitle("Carat vs. Price") + \
    xlab(" Carat\n(1 carat = 200 mg)") + \
    ylab(" Price\n(2008 USD)") + \
    xlim(0, 3) + \
    ylim(0, 20000)
```
docs/how-to/Layering Plots.ipynb
yhat/ggplot
bsd-2-clause
Instead of building your ggplots with one big line of code, you can break them up into individual lines of code. To do this, use the + or += operators to gradually tack on layers to your plot.
```python
p = ggplot(aes(x='mpg'), data=mtcars)
p += geom_histogram()
p += xlab("Miles per Gallon")
p += ylab("# of Cars")
p
```
docs/how-to/Layering Plots.ipynb
yhat/ggplot
bsd-2-clause
1. Implement the K-means algorithm
In this step you will implement the functions that make up the K-means algorithm, one by one. It is important to understand and read the documentation of each function, especially the expected dimensions of the output data.
1.1 Initialize the centroids
The first step of the algorithm is to initialize the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can considerably reduce the convergence time. To initialize the centroids you can use prior knowledge about the data, even without knowing the number of groups or their distribution.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
```python
import random

def calculate_initial_centers(dataset, k):
    """
    Initializes the starting centroids arbitrarily

    Arguments:
    dataset -- data set - [m,n]
    k -- desired number of centroids

    Returns:
    centroids -- list of computed centroids - [k,n]
    """
    #### CODE HERE ####
    # number of columns (dimensions)
    dimensoes = len(dataset[0])

    # collect the min/max of each dimension
    extremos_dimensoes = []
    for i in range(0, dimensoes):
        maximo = max(dataset[:, i])
        minimo = min(dataset[:, i])
        extremos_dimensoes.append({'max': maximo, 'min': minimo})

    centroids = []
    # build the centroids
    for centro in range(0, k):
        posicao = []
        for n in range(0, dimensoes):
            posicao.append(random.uniform(extremos_dimensoes[n]['min'],
                                          extremos_dimensoes[n]['max']))
        centroids.append(posicao)
    ### END OF CODE ###
    return np.array(centroids)

calculate_initial_centers(dataset, 4)
```
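An alternative initialization, not asked for by the exercise but common in practice, is to pick k distinct points of the dataset itself as the starting centroids; this guarantees every centroid starts in a populated region of the space. A sketch (the function name is my own, not part of the exercise):

```python
import numpy as np

def initial_centers_from_data(dataset, k, seed=None):
    """Pick k distinct rows of the dataset as the initial centroids - [k,n]."""
    rng = np.random.RandomState(seed)
    indices = rng.choice(len(dataset), size=k, replace=False)
    return dataset[indices].copy()

data = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
centers = initial_centers_from_data(data, 2, seed=0)
print(centers.shape)  # (2, 2)
```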
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
1.2 Define the clusters
In the second step of the algorithm, the group of each data point is determined according to the computed centroids.
1.2.1 Distance function
Code the Euclidean distance function between two points (a, b), defined by the equation:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
```python
def euclidean_distance(a, b):
    """
    Computes the Euclidean distance between points a and b

    Arguments:
    a -- a point in space - [1,n]
    b -- a point in space - [1,n]

    Returns:
    distance -- Euclidean distance between the points
    """
    #### CODE HERE ####
    quadraticos = 0
    for i in range(0, len(a)):
        potencia = (a[i] - b[i]) ** 2
        quadraticos = quadraticos + potencia
    ### END OF CODE ###
    return quadraticos ** 0.5
```
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
1.2.2 Find the nearest centroid
Using the distance function coded above, complete the function below to find the centroid closest to an arbitrary point.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
```python
import math

def nearest_centroid(a, centroids):
    """
    Computes the index of the centroid closest to point a

    Arguments:
    a -- a point in space - [1,n]
    centroids -- list of centroids - [k,n]

    Returns:
    nearest_index -- index of the nearest centroid
    """
    #### CODE HERE ####
    index_centroide = 0
    distancia_minima = math.inf
    for i in range(0, len(centroids)):
        distancia_centroid = euclidean_distance(a, centroids[i])
        if distancia_centroid <= distancia_minima:
            distancia_minima = distancia_centroid
            index_centroide = i
    ### END OF CODE ###
    return index_centroide
```
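The hint mentions np.argmin, which the loop above does not use; a vectorized alternative along those lines could look like this (a sketch, not the exercise's required form):

```python
import numpy as np

def nearest_centroid_np(a, centroids):
    """Index of the closest centroid, via broadcasting and np.argmin."""
    diffs = np.asarray(centroids) - np.asarray(a)   # [k,n] - [n] broadcasts to [k,n]
    dists = np.sqrt((diffs ** 2).sum(axis=1))       # one distance per centroid
    return int(np.argmin(dists))

cs = np.array([[0.0, 0.0], [10.0, 10.0]])
print(nearest_centroid_np([9.0, 9.0], cs))  # 1
```

One small behavioral difference: on exact ties, np.argmin returns the first minimum, while the `<=` in the loop above keeps the last one.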
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
1.2.3 Find the nearest centroid for every point of the dataset
Using the previous function, which returns the index of the nearest centroid, find the nearest centroid for every data point in the dataset.
```python
def all_nearest_centroids(dataset, centroids):
    """
    Computes the index of the nearest centroid for every point of the dataset

    Arguments:
    dataset -- data set - [m,n]
    centroids -- list of centroids - [k,n]

    Returns:
    nearest_indexes -- indices of the nearest centroids - [m,1]
    """
    #### CODE HERE ####
    nearest_indexes = []
    for ponto in dataset:
        nearest_indexes.append(nearest_centroid(ponto, centroids))
    ### END OF CODE ###
    return np.array(nearest_indexes)
```
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
1.3 Evaluation metric
After forming the clusters, how do we know whether the result is good? For that we need to define an evaluation metric. The K-means algorithm aims to choose centroids that minimize the sum of the squared distances between the data of a cluster and its centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks:
- Inertia assumes that clusters are convex and isotropic, which is not always the case. It may therefore respond poorly to elongated clusters or manifolds with irregular shapes.
- Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. But in very high-dimensional spaces, Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality-reduction algorithm such as PCA can alleviate this problem and speed up the computations.
Source: https://scikit-learn.org/stable/modules/clustering.html
To evaluate our clusters, code the inertia metric below; you may use the Euclidean distance function built earlier.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
```python
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of the squared distances from each sample to its nearest cluster center.

    Arguments:
    dataset -- data set - [m,n]
    centroids -- list of centroids - [k,n]
    nearest_indexes -- indices of the nearest centroids - [m,1]

    Returns:
    inertia -- total sum of the squared distances between the data of a
               cluster and its centroid
    """
    #### CODE HERE ####
    inertia = 0
    for i in range(0, len(dataset)):
        distancia = euclidean_distance(dataset[i], centroids[nearest_indexes[i]]) ** 2
        inertia = inertia + distancia
    ### END OF CODE ###
    return inertia
```
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
Test the coded function by running the code below.
```python
tmp_data = np.array([[1, 2, 3], [3, 6, 5], [4, 5, 6]])
tmp_centroide = np.array([[2, 3, 4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)

if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
    print("Inertia computed correctly!")
else:
    print("Inertia function is incorrect!")

# Use the function to check the inertia of your clusters
inertia(dataset, centroids, nearest_indexes)

def media(lista_de_listas, dimensao):
    # component-wise mean of a list of points
    media = []
    for i in range(0, dimensao):
        media.append(0)
    for ponto in lista_de_listas:
        for i in range(0, dimensao):
            media[i] = media[i] + ponto[i]
    for i in range(0, dimensao):
        media[i] = media[i] / len(lista_de_listas)
    return media
```
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
1.4 Update the clusters
In this step the centroids are recomputed. The new value of each centroid is the mean of all the data points assigned to its cluster.
```python
def update_centroids(dataset, centroids, nearest_indexes):
    """
    Updates the centroids

    Arguments:
    dataset -- data set - [m,n]
    centroids -- list of centroids - [k,n]
    nearest_indexes -- indices of the nearest centroids - [m,1]

    Returns:
    centroids -- list of updated centroids - [k,n]
    """
    #### CODE HERE ####
    pontos_relacionados = []
    for i in range(0, len(centroids)):
        candidatos = []
        for index in range(0, len(nearest_indexes)):
            if nearest_indexes[index] == i:
                candidatos.append(dataset[index])
        pontos_relacionados.append(candidatos)

    for i in range(0, len(centroids)):
        if len(pontos_relacionados[i]) != 0:
            centroids[i] = media(pontos_relacionados[i], len(dataset[0]))
    ### END OF CODE ###
    return centroids
```
2019/09-clustering/cl_JoãoCastelo.ipynb
InsightLab/data-science-cookbook
mit
2. K-means
2.1 Full algorithm
Using the functions coded above, complete the K-means class!
class KMeans():

    def __init__(self, n_clusters=8, max_iter=300):
        self.n_clusters = n_clusters
        self.max_iter = max_iter
        self.inertia_ = 0

    def fit(self, X):
        # Initialize the centroids
        self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)

        # Compute the cluster of each sample
        self.labels_ = all_nearest_centroids(X, self.cluster_centers_)

        # Compute the initial inertia
        self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)

        for index in range(self.max_iter):
            #### CODE HERE ####
            self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
            self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
            self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
            ### END OF CODE ###

        return self

    def predict(self, X):
        # Index into the centroid array (the original code mistakenly called it)
        return self.cluster_centers_[nearest_centroid(X, self.cluster_centers_)]
3. Elbow method
Implement the elbow method and show the best K for the data set.
iner = []
for i in range(1, 20):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(dataset)
    iner.append(kmeans.inertia_)

print(iner)

iner = np.array(iner)
plt.plot(range(1, 20), iner, '--')
plt.show()
# with 3 clusters we already have the elbow!
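A lightweight way to pick the elbow automatically is to find the k where the decrease in inertia slows down the most, i.e. the largest second difference of the curve (the inertia values below are made up for illustration):

```python
# Hypothetical inertia values for k = 1..6
inertias = [900, 500, 120, 100, 90, 85]

# Second difference: how sharply the curve bends at each interior k
second_diff = [inertias[i - 1] - 2 * inertias[i] + inertias[i + 1]
               for i in range(1, len(inertias) - 1)]

best_k = second_diff.index(max(second_diff)) + 2  # +2: second_diff[0] corresponds to k = 2
print(best_k)  # 3
```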
MPA and MPS basics A convenient example to deal with is a random MPA. First, we create a fixed seed, then a random MPA:
rng = np.random.RandomState(seed=42)
mpa = mp.random_mpa(sites=4, ldim=2, rank=3, randstate=rng, normalized=True)
examples/mpnum_intro.ipynb
dseuss/mpnum
bsd-3-clause
The MPA is an instance of the MPArray class:
mpa
Number of sites:
len(mpa)
Number of physical legs at each site (=number of array indices at each site):
mpa.ndims
Because the MPA has one physical leg per site, we have created a matrix product state (i.e. a tensor train). In the graphical notation, this MPS looks like this Note that mpnum internally stores the local tensors of the matrix product representation on the right hand side. We see below how to obtain the "dense" tensor from an MPArray. Dimension of each physical leg:
mpa.shape
Note that the number and dimension of the physical legs at each site can differ (although this is rarely used in practice). Representation ranks (aka compression ranks) between each pair of sites:
mpa.ranks
In physics, the representation ranks are usually called the bond dimensions of the representation. Dummy bonds before and after the chain are omitted in mpa.ranks. (Currently, mpnum only implements open boundary conditions.) Above, we have specified normalized=True. Therefore, we have created an MPA with $\ell_2$-norm 1. In case the MPA does not represent a vector but has more physical legs, it is nonetheless treated as a vector. Hence, for operators <code>mp.norm</code> implements the Frobenius norm.
mp.norm(mpa)
Convert to a dense array, which should be used with care because the memory required increases exponentially with the number of sites:
arr = mpa.to_array()
arr.shape
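To see why, compare the number of entries of the dense tensor with a rough entry count of the MPS representation (about rank² · dim per site; boundary sites are smaller, which we ignore here):

```python
dim, rank = 2, 3
for sites in (4, 10, 30):
    dense_entries = dim ** sites            # full tensor: exponential in sites
    mps_entries = sites * dim * rank ** 2   # MPS tensors: linear in sites
    print(sites, dense_entries, mps_entries)
```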
The resulting full array has one index for each physical leg. Now convert the full array back to an MPA:
mpa2 = mp.MPArray.from_array(arr)
len(mpa2)
We have obtained an MPA with length 1. This is not what we expected. The reason is that by default, all legs are placed on a single site (also notice the difference between mpa2.shape here and mpa.shape from above):
mpa2.shape
mpa.shape
We obtain the desired result by specifying the number of legs per site we want:
mpa2 = mp.MPArray.from_array(arr, ndims=1)
len(mpa2)
Finally, we can compute the norm distance between the two MPAs. (Again, the Frobenius norm is used.)
mp.norm(mpa - mpa2)
Since this is an often used operation and allows for additional optimization (not implemented currently), it is advisable to use the specific <code>mp.normdist</code> for this:
mp.normdist(mpa, mpa2)
Sums, differences and scalar multiplication of MPAs is done with the normal operators:
mp.norm(3 * mpa)
mp.norm(mpa + 0.5 * mpa)
mp.norm(mpa - 1.5 * mpa)
Multiplication with a scalar leaves the bond dimension unchanged:
mpa.ranks
(3 * mpa).ranks
The bond dimensions of a sum (or difference) are given by the sums of the bond dimensions:
mpa2 = mp.random_mpa(sites=4, ldim=2, rank=2, randstate=rng)
mpa2.ranks
(mpa + mpa2).ranks
MPO basics First, we create a random MPA with two physical legs per site:
mpo = mp.random_mpa(sites=4, ldim=(3, 2), rank=3, randstate=rng, normalized=True)
In graphical notation, mpo looks like this Its basic properties are:
[len(mpo), mpo.ndims, mpo.ranks]
Each site has two physical legs, one with dimension 3 and one with dimension 2. This corresponds to a non-square full array.
mpo.shape
Now convert the mpo to a full array:
mpo_arr = mpo.to_array()
mpo_arr.shape
We refer to this arangement of axes as local form, since indices which correspond to the same site are neighboring. This is a natural form for the MPO representation. However, for some operations it is necessary to have row and column indices grouped together -- we refer to this as global form:
from mpnum.utils.array_transforms import local_to_global

mpo_arr = mpo.to_array()
mpo_arr = local_to_global(mpo_arr, sites=len(mpo))
mpo_arr.shape
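For a two-site operator with local dimension 2, this conversion amounts to a single axis transpose — a plain-numpy sketch of the idea (not the library code itself):

```python
import numpy as np

# Local form: axes ordered (row1, col1, row2, col2); global form: (row1, row2, col1, col2)
local = np.arange(16).reshape(2, 2, 2, 2)
global_form = local.transpose(0, 2, 1, 3)
matrix = global_form.reshape(4, 4)  # rows and columns now group into a 4x4 matrix
print(matrix.shape)  # (4, 4)
```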
This gives the expected result. Note that it is crucial to specify the correct number of sites, otherwise we do not get what we want:
mpo_arr = mpo.to_array()
mpo_arr = local_to_global(mpo_arr, sites=2)
mpo_arr.shape
As an alternative, there is the following shorthand:
mpo_arr = mpo.to_array_global()
mpo_arr.shape
An array in global form can be converted into matrix-product form with the following API:
mpo2 = mp.MPArray.from_array_global(mpo_arr, ndims=2)
mp.normdist(mpo, mpo2)
MPO-MPS product and arbitrary MPA-MPA products We can now compute the matrix-vector product of mpa from above (which is an MPS) and mpo.
mpa.shape
mpo.shape
prod = mp.dot(mpo, mpa, axes=(-1, 0))
prod.shape
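At each site, mp.dot contracts one physical leg of each factor; ignoring the bond indices, this is an ordinary tensordot (the shapes below are hypothetical small examples):

```python
import numpy as np

op = np.random.rand(3, 2)   # one MPO site: output dim 3, input dim 2 (bond legs omitted)
vec = np.random.rand(2)     # one MPS site: physical dim 2
out = np.tensordot(op, vec, axes=([-1], [0]))  # last leg of op against first leg of vec
print(out.shape)  # (3,)
```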
The result is a new MPS, with local dimension changed by mpo, and looks like this: The axes argument is optional and defaults to axes=(-1, 0) -- i.e. contracting, at each site, the last physical index of the first factor with the first physical index of the second factor. More specifically, the axes argument specifies which physical legs should be contracted: axes[0] specifies the physical leg in the first argument, and axes[1] specifies the physical leg in the second argument. This means that the same product can be achieved with
prod2 = mp.dot(mpa, mpo, axes=(0, 1))
mp.normdist(prod, prod2)
Note that in any case, the ranks of the output of mp.dot are the products of the original ranks:
mpo.ranks, mpa.ranks, prod.ranks
Now we compute the same product using the full arrays arr and mpo_arr:
arr_vec = arr.ravel()
mpo_arr = mpo.to_array_global()
mpo_arr_matrix = mpo_arr.reshape((81, 16))
prod3_vec = np.dot(mpo_arr_matrix, arr_vec)
prod3_vec.shape
As you can see, we need to reshape the result prod3_vec before we can convert it back to an MPA:
prod3_arr = prod3_vec.reshape((3, 3, 3, 3))
prod3 = mp.MPArray.from_array(prod3_arr, ndims=1)
prod3.shape
Now we can compare the two results:
mp.normdist(prod, prod3)
We can also compare by converting prod to a full array:
prod_arr = prod.to_array()
la.norm((prod3_arr - prod_arr).reshape(81))
Converting full operators to MPOs While MPO algorithms avoid using full operators in general, we will need to convert a term acting on only two sites to an MPO in order to continue with MPO operations; i.e. we will need to convert a full array to an MPO. First, we define a full operator:
CZ = np.array([[ 1.,  0.,  0.,  0.],
               [ 0.,  1.,  0.,  0.],
               [ 0.,  0.,  1.,  0.],
               [ 0.,  0.,  0., -1.]])
This operator is the so-called controlled Z gate: Apply Z on the second qubit if the first qubit is in state e2. To convert it to an MPO, we have to reshape:
CZ_arr = CZ.reshape((2, 2, 2, 2))
Now we can create an MPO, being careful to specify the correct number of legs per site:
CZ_mpo = mp.MPArray.from_array_global(CZ_arr, ndims=2)
To test it, we apply the operator to the state which has both qubits in state e2:
vec = np.kron([0, 1], [0, 1])
vec
Reshape and convert to an MPS:
vec_arr = vec.reshape([2, 2])
mps = mp.MPArray.from_array(vec_arr, ndims=1)
Now we can compute the matrix-vector product:
out = mp.dot(CZ_mpo, mps)
out.to_array().ravel()
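The same check can be done in plain numpy, without any MPO machinery:

```python
import numpy as np

CZ = np.diag([1.0, 1.0, 1.0, -1.0])
vec = np.kron([0.0, 1.0], [0.0, 1.0])  # both qubits in state e2
print(CZ @ vec)  # [ 0.  0.  0. -1.] -- the last amplitude picks up a minus sign
```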
The output is as expected: We have acquired a minus sign. We have to be careful to use from_array_global and not from_array for CZ_mpo, because the CZ_arr is in global form. Here, all physical legs have the same dimension, so we can use from_array without error:
CZ_mpo2 = mp.MPArray.from_array(CZ_arr, ndims=2)
However, the result is not what we want:
out2 = mp.dot(CZ_mpo2, mps)
out2.to_array().ravel()
The reason is easy to see: We have applied the following matrix to our state:
CZ_mpo2.to_array_global().reshape(4, 4)
Keep in mind that we have to use to_array_global before the reshape. Using to_array would not provide us the matrix which we have applied to the state with mp.dot. Instead, it will exactly return the input:
CZ_mpo2.to_array().reshape(4, 4)
Again, from_array_global is just the shorthand for the following:
from mpnum.utils.array_transforms import global_to_local

CZ_mpo3 = mp.MPArray.from_array(global_to_local(CZ_arr, sites=2), ndims=2)
mp.normdist(CZ_mpo, CZ_mpo3)
As you can see, in the explicit version you must supply both the correct number of sites and the correct number of physical legs per site. Therefore, the function MPArray.from_array_global simplifies the conversion. Creating MPAs from Kronecker products It is a frequent task to create an MPS which represents the product state of $\vert 0 \rangle$ on each qubit. If the chain is very long, we cannot create the full array with np.kron and use MPArray.from_array afterwards, because the array would be too large. In the following, we describe how to efficiently construct an MPA representation of a Kronecker product of vectors. The same methods can be used to efficiently construct MPA representations of Kronecker products of operators or tensors with three or more indices. First, we need the state on a single site:
e1 = np.array([1, 0])
e1
Then we can use from_kron to directly create an MPS representation of the Kronecker product:
mps = mp.MPArray.from_kron([e1, e1, e1])
mps.to_array().ravel()
This works well for large numbers of sites because the needed memory scales linearly with the number of sites:
mps = mp.MPArray.from_kron([e1] * 2000)
len(mps)
An even more pythonic solution is the use of iterators in this example:
from itertools import repeat

mps = mp.MPArray.from_kron(repeat(e1, 2000))
len(mps)
Do not call .to_array() on this state! The bond dimension of the state is 1, because it is a product state:
np.array(mps.ranks) # Convert to an array for nicer display
We can also create a single-site MPS:
mps1 = mp.MPArray.from_array(e1, ndims=1)
len(mps1)
After that, we can use mp.chain to create Kronecker products of the MPS directly:
mps = mp.chain([mps1, mps1, mps1])
len(mps)
It returns the same result as before:
mps.to_array().ravel()
We can also use mp.chain on the three-site MPS:
mps = mp.chain([mps] * 100)
len(mps)
Note that mp.chain interprets the factors in the tensor product as distinct sites. Hence, the factors do not need to be of the same length or even have the same number of indices. In contrast, there is also mp.localouter, which computes the tensor product of MPArrays with the same number of sites:
mps = mp.chain([mps1] * 4)
len(mps), mps.shape

rho = mp.localouter(mps.conj(), mps)
len(rho), rho.shape
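Dense analogues of the two operations on small vectors (hypothetical, for intuition only): mp.chain behaves like a Kronecker product of the factors, while mp.localouter takes the outer product site by site:

```python
import numpy as np

a = np.array([1.0, 0.0])
chain_like = np.kron(a, a)           # two sites glued together: shape (4,)
outer_like = np.outer(a.conj(), a)   # |a><a| on a single site: shape (2, 2)
print(chain_like.shape, outer_like.shape)
```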
Compression A typical matrix product based numerical algorithm performs many additions or multiplications of MPAs. As mentioned above, both operations increase the rank. If we let the bond dimension grow, the amount of memory we need grows with the number of operations we perform. To avoid this problem, we have to find an MPA with a smaller rank which is a good approximation to the original MPA. We start by creating an MPO representation of the identity matrix on 6 sites with local dimension 3:
op = mp.eye(sites=6, ldim=3)
op.shape
As it is a tensor product operator, it has rank 1:
op.ranks