Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](toc)\]$$\label{taustildesourceterms}$$Recall from above that\begin{align}\partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} = s \\\partial_t \tilde{S}_i &+ \partia...
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU, T4UU, rho_star):
    global tau_tilde_fluxU
    tau_tilde_fluxU = ixp.zerorank1(DIM=3)
    for j in range(3):
        tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]

# Step 4.d: S_tilde flux
def compute_S_tilde_flux...
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](toc)\]$$\label{grhdsourceterms}$$ Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](toc)\]$$\label{ssourceterm}$$Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equa...
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
    global s_source_term
    s_source_term = sp.sympify(0)
    # Term 1:
    for i in range(3):
        for j in range(3):
            s_source_term += (T4UU[0][0]*betaU[i]*betaU[j]
                              + 2*T4UU[0][i+1]*betaU[j]
                              + T4UU[i+1][j+1])*KDD[i][j]
    # Term ...
Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](toc)\]$$\label{stildeisourceterm}$$Recall from above$$\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.$$Our goal here will be to compute$$\frac{1}{2} \a...
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
    global g4DD_zerotimederiv_dD
    # Eq. 2.121 in B&S
    betaD = ixp.zerorank1(DIM=3)
    for i in range(3):
        for j in range(3):
            betaD[i] += gammaDD[i][j]*betaU[j]

    betaDdD = ixp.zerorank2(DIM=3)
    for i in...
Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](toc)\]$$\label{stildeisource}$$Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed.
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
    global S_tilde_source_termD
    S_tilde_source_termD = ixp.zerorank1(DIM=3)
    for i in range(3):
        for mu in range(4):
            for nu in range(4):
                S_tilde_source_...
Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](toc)\]$$\label{convertvtou}$$According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via\begin{align}\alpha v^i_{(n)} &= \frac{u^i...
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
    # Inputs:  Metric lapse alpha, shift betaU...
Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](toc)\]$$\label{declarevarsconstructgrhdeqs}$$
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)

# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD     = ixp.declarerank2("KDD"    ,"sym01",DIM=3)
betaU   = ixp.declarerank1("betaU", DIM=3)
alpha   = sp.s...
Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in 1. this tutorial versus 2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module.
import GRHD.equations as Ge

# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)

# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)

# Compute conservative variables in terms of primitive variables
Ge.compute_rho_s...
ALL TESTS PASSED!
Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial direct...
import cmdline_helper as cmd  # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")
Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to PDF file Tutorial-GRHD_Equations-Cartesian.pdf
K-Means
class Kmeans:
    """K-Means Clustering Algorithm"""
    def __init__(self, k, centers=None, cost=None, iter=None, labels=None, max_iter=1000):
        """Initialize Parameters"""
        self.max_iter = max_iter
        self.k = k
        self.centers = np.empty(1)
        self.cost = []
        self.it...
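The `Kmeans` class above is truncated; the core assignment/update ("Lloyd") iteration such a class implements can be sketched in a few lines of NumPy. This is a standalone illustration under that assumption, not the class's actual code:

```python
import numpy as np

def lloyd_step(X, centers):
    """One k-means (Lloyd) iteration: assign each point to its nearest
    center, then move every center to the mean of its assigned points."""
    # squared distances from every point to every center, shape (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    new_centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, new_centers

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
labels, centers = lloyd_step(X, centers)
# labels -> [0, 0, 1, 1]; centers -> [[0, 0.5], [10, 10.5]]
```

Iterating this step until the labels stop changing (or `max_iter` is reached) is the whole algorithm.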
MIT
Algorithms.ipynb
BeanHam/STA-663-Project
K-Means++
class Kmeanspp:
    """K-Means++ Clustering Algorithm"""
    def __init__(self, k, centers=None, cost=None, iter=None, labels=None, max_iter=1000):
        """Initialize Parameters"""
        self.max_iter = max_iter
        self.k = k
        self.centers = np.empty(1)
        self.cost = []
        sel...
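The only difference from plain k-means is the seeding step. A minimal standalone sketch of k-means++ initialization (the function name and signature are illustrative, not this class's API):

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    """k-means++ seeding: after a uniformly random first center, each new
    center is drawn with probability proportional to the squared distance
    to the nearest already-chosen center."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        C = np.array(centers)
        # squared distance of every point to its nearest chosen center
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
print(kmeanspp_init(X, 2))  # two rows of X, far apart with high probability
```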
K-Means||
class Kmeansll:
    """K-Means|| Clustering Algorithm"""
    def __init__(self, k, omega, centers=None, cost=None, iter=None, labels=None, max_iter=1000):
        """Initialize Parameters"""
        self.max_iter = max_iter
        self.k = k
        self.omega = omega
        self.centers = np.empty(1)
        ...
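k-means|| (the "scalable k-means++" algorithm) replaces the sequential k-means++ seeding with a few rounds of parallel-friendly oversampling. A deliberately simplified standalone sketch, not this class's code — in particular the final reduction step here just keeps the heaviest candidates, whereas the original algorithm re-clusters the weighted candidates with k-means++:

```python
import numpy as np

def kmeansll_init(X, k, omega=2, rounds=5, seed=0):
    """Simplified k-means|| seeding: oversample candidate centers for a
    few rounds, then reduce the candidate set to k centers."""
    rng = np.random.default_rng(seed)
    C = X[rng.integers(len(X))][None, :]
    for _ in range(rounds):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        cost = d2.sum()
        if cost == 0:
            break
        # each point joins the candidate set independently; omega controls
        # the expected number (omega * k) of new candidates per round
        picked = rng.random(len(X)) < np.minimum(1.0, omega * k * d2 / cost)
        C = np.vstack([C, X[picked]])
    if len(C) < k:  # degenerate case: not enough candidates sampled
        C = np.vstack([C, X[rng.choice(len(X), k - len(C))]])
    # weight each candidate by the number of points closest to it,
    # and keep the k heaviest (a simplification of the paper's reduction)
    d2_all = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    weights = np.bincount(d2_all.argmin(axis=1), minlength=len(C))
    return C[np.argsort(weights)[-k:]]

X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.], [5., 5.]])
print(kmeansll_init(X, 2))
```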
1. a)
def simetrica(A):
    "Check whether matrix A is symmetric"
    return np.all(A == A.T)

def pozitiv_definita(A):
    "Check whether matrix A is positive definite (Sylvester's criterion)"
    for i in range(1, len(A) + 1):
        d_minor = np.linalg.det(A[:i, :i])
        if d_minor <= 0:  # leading minors must be strictly positive
            return False
    return True

def ...
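The checks above feed a Cholesky factorization; the factor can be cross-checked against NumPy's built-in routine, using the matrix A shown in the verification output:

```python
import numpy as np

A = np.array([[25., 15., -5.],
              [15., 18.,  0.],
              [-5.,  0., 11.]])

L = np.linalg.cholesky(A)   # lower-triangular factor with L @ L.T == A
print(L)
# [[ 5.  0.  0.]
#  [ 3.  3.  0.]
#  [-1.  1.  3.]]
```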
L este: [[ 5. 0. 0.] [ 3. 3. 0.] [-1. 1. 3.]] Verificare: [[25. 15. -5.] [15. 18. 0.] [-5. 0. 11.]]
MIT
cn/examen/Examen - Proba practică.ipynb
GabrielMajeri/teme-fmi
b)
b = np.array([1, 2, 3], dtype=np.float64)
y = np.zeros(3)
x = np.zeros(3)

# Forward substitution (solve L y = b)
for i in range(0, 3):
    coefs = L[i, :i + 1]
    values = y[:i + 1]
    y[i] = (b[i] - coefs @ values) / L[i, i]

L_t = L.T

# Backward substitution (solve L^T x = y)
for i in range(2, -1, -1):
    coefs = L_t[i, i + 1:]
    valu...
x = [0.06814815 0.05432099 0.3037037 ] Verificare: A @ x = [1. 2. 3.]
2.
def step(x, f, df):
    "Compute one step of the Newton-Raphson method."
    return x - f(x) / df(x)

def newton_rhapson(f, df, x0, eps):
    "Find a solution of f(x) = 0 starting from x_0"
    # The first point is the one received as a parameter
    prev_x = x0
    # Perform one iteration
    x = step(x0, f, df)
    N = 1
    ...
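The cell above is truncated; a compact, self-contained version of the same iteration (the stopping rule |x_new − x| < eps is an assumption about the cut-off code):

```python
def newton_raphson(f, df, x0, eps, max_iter=100):
    """Find a root of f starting from x0 via x <- x - f(x)/df(x)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

root, steps = newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0, 1e-10)
# root ≈ 1.41421356 (sqrt(2)), reached in a handful of steps
```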
The given function is $$f(x) = x^3 + 3 x^2 - 18 x - 40$$ and its derivatives are $$f'(x) = 3x^2 + 6 x - 18$$ $$f''(x) = 6x + 6$$
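These derivative formulas can be sanity-checked numerically with central differences (a standalone check, independent of the notebook's plotting code):

```python
f   = lambda x: x**3 + 3*x**2 - 18*x - 40
df  = lambda x: 3*x**2 + 6*x - 18
ddf = lambda x: 6*x + 6

h = 1e-6
for x in (-3.0, 0.0, 2.5):
    # central difference of f matches f', and of f' matches f''
    assert abs((f(x + h) - f(x - h)) / (2*h) - df(x)) < 1e-4
    assert abs((df(x + h) - df(x - h)) / (2*h) - ddf(x)) < 1e-4
print("central differences match the analytic derivatives")
```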
f = lambda x: (x ** 3) + 3 * (x ** 2) - 18 * x - 40
df = lambda x: 3 * (x ** 2) + 6 * x - 18
ddf = lambda x: 6 * x + 6

left = -8
right = +8
x_grafic = np.linspace(left, right, 500)

def set_spines(ax):
    # Move the coordinate axes so they pass through the origin
    ax.spines['bottom'].set_position('zero')
    ax.spines['top'].set_color('none')
    ...
We choose subintervals such that $f(a) f(b) < 0$:
- $[-8, -4]$
- $[-4, 0]$
- $[2, 6]$
For each of these, we look for a point $x_0$ such that $f(x_0) f''(x_0) > 0$:
- $-6$
- $-1$
- $5$
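The bracketing condition can be verified directly by evaluating f at the endpoints:

```python
f = lambda x: x**3 + 3*x**2 - 18*x - 40

for a, b in [(-8, -4), (-4, 0), (2, 6)]:
    print(a, b, f(a) * f(b) < 0)  # True for each: the interval brackets a root
# f(-8) = -216, f(-4) = 16, f(0) = -40, f(2) = -56, f(6) = 176
```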
eps = 1e-3

x1, _ = newton_rhapson(f, df, -6, eps)
x2, _ = newton_rhapson(f, df, -1, eps)
x3, _ = newton_rhapson(f, df, 5, eps)

fig, ax = plt.subplots(dpi=120)
plt.suptitle('The solutions of $f(x) = 0$')
set_spines(ax)

plt.plot(x_grafic, f(x_grafic))
plt.scatter(x1, 0)
plt.scatter(x2, 0)
plt.scatter(x3, 0)
plt.show()
Input data representation as 2D array of 3D blocks
> An easy way to represent input data to neural networks or any other machine learning algorithm in the form of a 2D array of 3D blocks
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [machine learning, jupyter, graphviz]
- image: images/array_visua...
import graphviz as G  # to create the required graphs
import random         # to generate random hex codes for colors

FORWARDS = True    # to visualise array from left to right
BACKWARDS = False  # to visualise array from right to left
Apache-2.0
_notebooks/2020-12-26-Array-Visualiser.ipynb
logicatcore/scratchpad
Properties of 2D representation of 3D array blocks
The main features/properties of the array visualisation are defined here before actually creating the graph/picture.
1) Number of Rows: similar to rows in a matrix, where each row corresponds to one particular data type, with data across different time instants ar...
ROW_NUMS = [1, 2]  # Layer numbers corresponding to the number of rows of array data (must be contiguous)
BLOCKS = [3, 3]    # number of data fields in each row i.e., columns in each row

diff = [x - ROW_NUMS[i] for i, x in enumerate(ROW_NUMS[1:])]
assert diff == [1]*(len(ROW_NUMS) - 1), '"layer_num" should contain contigu...
Render
dot
Save/Export
# dot.format = 'jpeg'  # or PDF, SVG, JPEG, PNG, etc.

# to save the file; pdf is the default format
dot.render('./lstm_input')
Additional script to just show the breakdown of train-test data of the dataset being used
import random r = lambda: random.randint(0,255) # to generate random colors for each row folders = G.Digraph(node_attr={'style':'filled'}, graph_attr={'style':'invis', 'rankdir':'LR'},edge_attr={'color':'black', 'arrowsize':'.2'}) color = '#{:02x}{:02x}{:02x}'.format(r(),r(),r()) with folders.subgraph(name='cluster0'...
Jupman Tests
Tests and cornercases. The page Title has one sharp; the Sections always have two sharps.
Section 1
bla bla
Section 2
Subsections always have three sharps
Subsection 1
bla bla
Subsection 2
bla bla
Quotes
> I'm quoted with **greater than** symbol
> on multiple lines
> Am I readable?
I'm quoted with **space...
<details> <summary>Click here to see the code</summary> <code> <pre> question = raw_input("What?") answers = random.randint(1,8) if question == "": sys.exit() </pre> </code> </details>
Apache-2.0
jupman-tests.ipynb
DavidLeoni/iep
Some other Markdown cell afterwards .... Files in templatesSince Dec 2019 they are not accessible [see issue 10](https://github.com/DavidLeoni/jupman/issues/10), but it is not a great problem, you can always put a link to Github, see for example [exam-yyyy-mm-dd.ipynb](https://github.com/DavidLeoni/jupman/tree/master/...
x = [5,8,4,10,30,20,40,50,60,70,20,30] y= {3:9} z = [x] jupman.pytut()
**jupman.pytut scope**: BEWARE of variables which were initialized in previous cells, they WILL NOT be available in Python Tutor:
w = 8 x = w + 5 jupman.pytut()
Traceback (most recent call last): File "/home/da/Da/prj/jupman/prj/jupman.py", line 2305, in _runscript self.run(script_str, user_globals, user_globals) File "/usr/lib/python3.5/bdb.py", line 431, in run exec(cmd, globals, locals) File "<string>", line 2, in <module> NameError: name 'w' is not defined
**jupman.pytut window overflow**: When too much right space is taken, it might be difficult to scroll:
x = [3,2,5,2,42,34,2,4,34,2,3,4,23,4,23,4,2,34,23,4,23,4,23,4,234,34,23,4,23,4,23,4,2]
jupman.pytut()

x = w + 5
jupman.pytut()
Traceback (most recent call last): File "/home/da/Da/prj/jupman/prj/jupman.py", line 2305, in _runscript self.run(script_str, user_globals, user_globals) File "/usr/lib/python3.5/bdb.py", line 431, in run exec(cmd, globals, locals) File "<string>", line 2, in <module> NameError: name 'w' is not defined
**jupman.pytut execution:** Some cells might execute in Jupyter but not so well in Python Tutor, due to [its inherent limitations](https://github.com/pgbovine/OnlinePythonTutor/blob/master/unsupported-features.md):
x = 0
for i in range(10000):
    x += 1
print(x)

jupman.pytut()
10000
**jupman.pytut infinite loops**: Since execution occurs first in Jupyter and then in Python Tutor, if you have an infinite loop no Python Tutor instance will be spawned:

```python
while True:
    pass
jupman.pytut()
```

**jupman.pytut() resizability:** long vertical and horizontal expansion should work:
x = {0:'a'}
for i in range(1,30):
    x[i] = x[i-1]+str(i*10000)

jupman.pytut()
**jupman.pytut cross arrows**: With multiple visualizations, arrows shouldn't cross from one to the other even if underlying script is loaded multiple times (relates to visualizerIdOverride)
x = [1,2,3] jupman.pytut()
**jupman.pytut print output**: With only one line of print, Print output panel shouldn't be too short:
print("hello") jupman.pytut() y = [1,2,3,4] jupman.pytut()
HTML magics
Another option is to directly paste the Python Tutor iframe into the cells, using the Jupyter `%%HTML` magic command. HTML should be available both in the notebook and on the website - of course, it requires an internet connection. Beware: you need the HTTP**S**!
%%HTML <iframe width="800" height="300" frameborder="0" src="https://pythontutor.com/iframe-embed.html#code=x+%3D+5%0Ay+%3D+10%0Az+%3D+x+%2B+y&cumulative=false&py=2&curInstr=3"> </iframe>
NBTutor
To show Python Tutor in notebooks, there is already a Jupyter extension called [NBTutor](https://github.com/lgpage/nbtutor); afterwards you can use the magic `%%nbtutor` to show the interpreter. Unfortunately, it doesn't show in the generated HTML :-/
%reload_ext nbtutor

%%nbtutor
for x in range(1,4):
    print("ciao")
x = 5
y = 7
x + y
ciao ciao ciao
Stripping answers
For stripping answers examples, see [jupyter-example/jupyter-example-sol](jupyter-example/jupyter-example-sol.ipynb). For explanation, see [usage](usage.ipynb#Tags-to-strip).
Metadata to HTML classes
Formatting problems
Characters per line
The Python standard for code has a limit of 79 characters; many styles have 80 ...
len('---------------------------------------------------------------------------')
On the website this **may** display a scroll bar, because it will actually print the `'` quote characters plus the dashes
'-'*80
This should **not** display a scrollbar:
'-'*78
This should **not** display a scrollbar:
print('-'*80)
--------------------------------------------------------------------------------
Very large input
In Jupyter: default behaviour, show a scrollbar.
On the website: it should expand horizontally as much as it wants; the rationale is that, since input code may be printed to PDF, you should always manually put line breaks.
# line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment # line with an an out-of-this-world long comment line with an an out-of-this-world long c...
**Very long HTML** (and long code line)
Should expand vertically as much as it wants.
%%HTML <iframe width="100%" height="1300px" frameBorder="0" src="https://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055?scaleControl=false&miniMap=false&scrollWheelZoom=false&zoomControl=true&allowEdit=false&moreControl=true&searchControl=null&tilelayersControl=null&embedControl=null&datalayersControl=true&onLo...
Very long output
In Jupyter: by clicking, you can collapse it.
On the website: a scrollbar should appear.
for x in range(150):
    print('long output ...', x)
long output ... 0 long output ... 1 long output ... 2 long output ... 3 long output ... 4 long output ... 5 long output ... 6 long output ... 7 long output ... 8 long output ... 9 long output ... 10 long output ... 11 long output ... 12 long output ... 13 long output ... 14 long output ... 15 long output ... 16 long ou...
Load Dataset
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

# To draw plots inside the notebook
%matplotlib inline

# Work around minus signs rendering incorrectly in plot fonts
mpl.rcParams['axes.unicode_minus'] = False

import warnings
warnings.filterwarnings('ignore')

train = pd.read_csv("data/train.csv...
MIT
bike-sharing-demand/bike-sharing-demand-rf.ipynb
jaepil-choi/Kaggle_bikeshare
Feature Engineering
train["year"] = train["datetime"].dt.year
train["month"] = train["datetime"].dt.month
train["day"] = train["datetime"].dt.day
train["hour"] = train["datetime"].dt.hour
train["minute"] = train["datetime"].dt.minute
train["second"] = train["datetime"].dt.second
train["dayofweek"] = train["datetime"].dt.dayofweek
train.sh...
Feature Selection
* Distinguish the signal from the noise.
* More features do not automatically mean better performance.
* Add and change features one at a time, and remove features that do not improve performance.
# Continuous features and categorical features
# continuous features = ["temp","humidity","windspeed","atemp"]

# Change the dtype of the categorical features to category.
categorical_feature_names = ["season","holiday","workingday","weather",
                             "dayofweek","month","year","hour"]

for var in categorical_feature_names:
    train[var] = trai...
(10886,)
Score: RMSLE
Penalizes under-predicted items more heavily than over-predicted ones. It is the square Root of the Mean of the Squared logarithmic Errors; the smaller the value, the higher the precision - a value close to 0 indicates high precision. Submissions are evaluated on the Root Mean Squared Logarithmic Error (RMSLE) $$ \sqrt{\frac{1}{n} \sum_{i=1}^n (\log(p_i + 1) - \log(a_i+1))^2 } $$ * \\({n}\\) is the number of ho...
from sklearn.metrics import make_scorer

def rmsle(predicted_values, actual_values):
    # Convert to numpy arrays.
    predicted_values = np.array(predicted_values)
    actual_values = np.array(actual_values)

    # Add 1 to the predicted and actual values, then take the log.
    log_predict = np.log(predicted_values + 1)
    log_actual = np.log(actual_...
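A small standalone worked check of the metric (plain NumPy, restating the formula above): identical predictions score 0, and an under-prediction is penalized more than an over-prediction of the same absolute size:

```python
import numpy as np

def rmsle(pred, actual):
    pred, actual = np.array(pred, dtype=float), np.array(actual, dtype=float)
    return np.sqrt(np.mean((np.log(pred + 1) - np.log(actual + 1)) ** 2))

print(rmsle([100], [100]))  # 0.0  - perfect prediction
print(rmsle([50], [100]))   # ≈ 0.68 - under-prediction by 50
print(rmsle([150], [100]))  # ≈ 0.40 - over-prediction by 50
```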
Cross Validation
* To measure generalization performance, the data is split repeatedly and several models are trained.
![image.png](https://www.researchgate.net/profile/Halil_Bisgin/publication/228403467/figure/fig2/AS:302039595798534@1449023259454/Figure-4-k-fold-cross-validation-scheme-example.png)
Image source: https://www.researchgate.net/figure/228403467_fig2_Figure...
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
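With 10 folds, each split holds out about a tenth of the data and every sample is tested exactly once. A toy illustration of how `KFold` partitions a dataset (100 synthetic rows, not the competition data):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(100).reshape(-1, 1)
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)

# each element of `sizes` is the number of held-out samples in one fold
sizes = [len(test_idx) for _, test_idx in k_fold.split(X)]
print(sizes)  # ten folds of 10 samples each
```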
RandomForest
from sklearn.ensemble import RandomForestRegressor

max_depth_list = []

model = RandomForestRegressor(n_estimators=100,  # higher is better, but slower
                              n_jobs=-1,
                              random_state=0)
model

%time score = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer)
...
Wall time: 19.9 s Score= 0.33110
Train
# Train the model - "fitting", as in fitting clothes: given the features
# and labels, it learns on its own
model.fit(X_train, y_train)

# Predict
predictions = model.predict(X_test)
print(predictions.shape)
predictions[0:10]

# Visualize the predicted data.
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.set_size_inches(12, 5)
sns.distplot(y_train, ax=ax1, bins=50)
ax1.set(title="train")
sns....
Submit
submission = pd.read_csv("data/sampleSubmission.csv")
submission

submission["count"] = predictions
print(submission.shape)
submission.head()

submission.to_csv("data/Score_{0:.5f}_submission.csv".format(score), index=False)
After separating tweets into male and female tweets, try to find topics
with open('her_list.txt', 'r') as filename: her_list=json.load(filename) with open('his_list.txt','r') as filename: his_list=json.load(filename) cv_tfidf = TfidfVectorizer(stop_words='english') X_tfidf = cv_tfidf.fit_transform(her_list) nmf_model = NMF(3) topic_matrix = nmf_model.fit_transform(X_tfidf) def di...
Topic 0 hes, man, trump, good, men, said, oh, great, got, work Topic 1 man, oh, boy, thanks, old, life, thankyou, work, sorry, young Topic 2 hes, man, andrew___baker, oh, talking, boy, gelbach, modeledbehavior, wearing, saying Topic 3 men, women, white, work, good, economics, macro, great, read, labor
MIT
code/model_his_her_tfidf_nmf.ipynb
my321/project4_econtwitter
MIT
src/02_loops_condicionais_metodos_funcoes/06_funcao_range.ipynb
ralsouza/python_fundamentos
Range
Print the even numbers between 50 and 101, using the range function's step to jump between numbers.
for i in range(50,101,2):
    print(i)
50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100
Print from 3 to 6, remembering that the last number is exclusive.
for i in range(3,6):
    print(i)
3 4 5
Generate a negative list, starting from 0 down to -20, jumping two numbers at a time. Remember again that the end value is exclusive.
for i in range(0,-20,-2):
    print(i)
0 -2 -4 -6 -8 -10 -12 -14 -16 -18
Set the maximum value of the range according to the length of an object in a for loop.
lista = ['morango','abacaxi','banana','melão']

for i in range(0, len(lista)):
    print(lista[i])

# Check the type of the range object
type(range(0,5))
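`range(0, len(lista))` works, but iterating directly over the list (or with `enumerate`, when the index is also needed) is the more idiomatic Python:

```python
lista = ['morango', 'abacaxi', 'banana', 'melão']

# direct iteration over the elements
for fruta in lista:
    print(fruta)

# enumerate when both the index and the element are needed
for i, fruta in enumerate(lista):
    print(i, fruta)
```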
From Variables to Classes
A short Introduction
Python - as any programming language - has many extensions and libraries at its disposal. Basically, there exist libraries for everything. But what are **libraries**? Basically, **libraries** are a collection of methods (_small pieces of code where you put something in and...
x = 4.24725723
print(type(x))

y = 'Hello World! Hello universe'
print(y)

z = True
print(type(z))
<class 'float'> Hello World! Hello universe <class 'bool'>
MIT
00_Variables_to_Classes.ipynb
Zqs0527/geothermics
We can use normal arithmetic operations on variables to get the results we want. With numbers, you can add, subtract, multiply, and divide - basically taking the values from the memory assigned to the variable name and performing calculations. Let's have a look at operations with numbers and strings. We...
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'

first_sum = n1 + n2
print(first_sum)

first_conc = s1 + s2
print(first_conc)
49 Looking good, you are.
Variables can be more than just a number. If you think of an Excel-Spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined in one variable (e.g. one column of an Excel table). So let's create a list -_a collection of variables_ - from `x`, `n1`, and `n2`. Lists in python are creat...
first_list = [x, n1, n2]
second_sum = first_list[0] + first_list[1] + first_list[2]
print('manual sum {}'.format(second_sum))

# This can also be done with a function
print('sum function {}'.format(sum(first_list)))
manual sum 53.2 sum function 53.2
Functions
The `sum()` method we used above is a **function**. Functions (later we will call them methods) are pieces of code which take an input, perform some kind of operation, and (_optionally_) return an output. In Python, functions are written like:
```python
def func(input):
    """ Description of the funct...
```
def sumup(inp):
    """
    input:  inp  - list/array with floating point or integer numbers
    return: sumd - scalar value of the summed up list
    """
    val = 0
    for i in inp:
        val = val + i
    return val

# let's compare the implemented standard sum function with the new sumup function
sum1 = sum(first_...
the sum of the array is: 5050
As we see above, functions are quite practical and save a lot of time. Further, they help structure your code. Some functions are directly available in Python without any libraries or other external software. In the example above, however, you might have noticed that we `import`ed a library called `numpy`. In those ...
# here we just create the data for clustering
from sklearn.datasets import make_blobs  # sklearn.datasets.samples_generator was removed in newer scikit-learn
import matplotlib.pyplot as plt
%matplotlib inline

X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.5, random_state=0)
plt.scatter(X[:,0], X[:,1], s=70)

# now we create an instance of the...
Visualize Counts for the three classes
The number of volume-wise predictions for each of the three classes can be visualized in a 2D space (with two classes as the axes and the remainder, or class1-class2, as the value of the third class). Also, the percentage of volume-wise predictions can be shown in a modified pie ch...
import os
import pickle

import numpy as np
import pandas as pd

from sklearn import preprocessing
from sklearn import svm

import scipy.misc
from scipy import ndimage
from scipy.stats import beta

from PIL import Image

import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
...
MIT
notebooks/14-mw-prediction-space.ipynb
mwegrzyn/volume-wise-language
Outline the WTA prediction model: make all possible values
def make_all_dummy():
    my_max = 100
    d = {}
    count = 0
    for bi in np.arange(0, my_max+(10**-10), 0.5):
        left_and_right = my_max - bi
        for left in np.arange(0, left_and_right+(10**-10), 0.5):
            right = left_and_right - left
            d[count] = {'left':left,'bilateral':bi,'right':righ...
transform labels into numbers
my_labeler = preprocessing.LabelEncoder()
my_labeler.fit(['left','bilateral','right','inconclusive'])
my_labeler.classes_
2D space where the highest number indicates class membership (WTA)
def make_dummy_space(dummy_df):
    space_df = dummy_df.copy()
    space_df['pred'] = my_labeler.transform(dummy_df['pred'])
    space_df.index = [space_df.left, space_df.right]
    space_df = space_df[['pred']]
    space_df = space_df.unstack(1)['pred']
    return space_df

dummy_space_df = make_dummy_sp...
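The winner-take-all rule itself is just an argmax over the three class proportions; a minimal standalone illustration (hypothetical percentages, class names as used above):

```python
import numpy as np

classes = np.array(['left', 'bilateral', 'right'])

def wta(left, bilateral, right):
    """Winner-take-all: the class with the largest share wins."""
    return classes[np.argmax([left, bilateral, right])]

print(wta(70, 20, 10))  # left
print(wta(10, 30, 60))  # right
```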
define color map
colors_file = os.path.join(supDir,'models','colors.p')
with open(colors_file, 'rb') as f:
    color_dict = pickle.load(f)

my_cols = {}
for i, j in zip(['red','yellow','blue','trans'], ['left','bilateral','right','inconclusive']):
    my_cols[j] = color_dict[i]

my_col_order = np.array([my_cols[g] for g in my_labeler.c...
plot WTA predictions
plt.figure(figsize=(6,6)) plt.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) plt.xlabel('right',fontsize=32) plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.yt...
load data
groupdata_filename = '../data/processed/csv/withinconclusive_prediction_df.csv'
prediction_df = pd.read_csv(groupdata_filename, index_col=[0,1], header=0)
toolbox use
#groupdata_filename = os.path.join(supDir,'models','withinconclusive_prediction_df.csv')
#prediction_df = pd.read_csv(groupdata_filename,index_col=[0,1],header=0)

prediction_df.tail()
show data and WTA space
plt.figure(figsize=(6,6)) plt.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) for c in ['left','right','bilateral']: a_df = prediction_df.loc[c,['left','right']] * 100 ...
show one patient's data doughnut plot
p_name = 'pat###' p_count_df = pd.read_csv('../data/processed/csv/%s_counts_df.csv'%p_name,index_col=[0,1],header=0) p_count_df def make_donut(p_count_df, ax, my_cols=my_cols): """show proportion of the number of volumes correlating highest with one of the three groups""" percentages = p_count_df/p_count_df.su...
prediction space
def make_pred_space(p_count_df, prediction_df, ax, dummy_space_df=dummy_space_df): ax.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) ax.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) for c in ['left','right','bilat...
toolbox use
#def make_p(pFolder,pName,prediction_df=prediction_df): # # count_filename = os.path.join(pFolder,''.join([pName,'_counts_df.csv'])) # p_count_df = pd.read_csv(count_filename,index_col=[0,1],header=0) # # fig = plt.figure(figsize=(8,8)) # ax = plt.subplot(111) # ax = make_donut(p_count_df,ax) # ...
Torch Core
> Basic pytorch functions used in the fastai library
Arrays and show
#export @delegates(plt.subplots, keep=True) def subplots(nrows=1, ncols=1, figsize=None, imsize=3, add_vert=0, **kwargs): if figsize is None: figsize=(ncols*imsize, nrows*imsize+add_vert) fig,ax = plt.subplots(nrows, ncols, figsize=figsize, **kwargs) if nrows*ncols==1: ax = array([ax]) return fig,ax #hi...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
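The `subplots` wrapper above sizes the figure from the grid shape and a per-image size, and always returns an array of axes. A self-contained sketch of that sizing rule (using plain matplotlib/numpy, without the `@delegates` decorator):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

def subplots_sketch(nrows=1, ncols=1, figsize=None, imsize=3, add_vert=0, **kwargs):
    """Grid of axes sized per image, mirroring the sizing rule above (a sketch)."""
    if figsize is None:
        figsize = (ncols * imsize, nrows * imsize + add_vert)
    fig, ax = plt.subplots(nrows, ncols, figsize=figsize, **kwargs)
    if nrows * ncols == 1:
        ax = np.array([ax])  # always hand back an array, so callers can index uniformly
    return fig, ax

fig, ax = subplots_sketch(2, 3)  # 2x3 grid -> 9in x 6in figure
```

Wrapping the single-axes case in an array is the design point: callers can write `ax[i]` regardless of grid size.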
`show_image` can show PIL images...
im = Image.open(TEST_IMAGE_BW) ax = show_image(im, cmap="Greys")
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
...and color images with standard `CHW` dim order...
im2 = np.array(Image.open(TEST_IMAGE)) ax = show_image(im2, figsize=(2,2))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
...and color images with `HWC` dim order...
im3 = torch.as_tensor(im2).permute(2,0,1) ax = show_image(im3, figsize=(2,2)) #export def show_titled_image(o, **kwargs): "Call `show_image` destructuring `o` to `(img,title)`" show_image(o[0], title=str(o[1]), **kwargs) show_titled_image((im3,'A puppy'), figsize=(2,2)) #export @delegates(subplots) def show_ima...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`ArrayImage`, `ArrayImageBW` and `ArrayMask` are subclasses of `ndarray` that know how to show themselves.
#export class ArrayBase(ndarray): @classmethod def _before_cast(cls, x): return x if isinstance(x,ndarray) else array(x) #export class ArrayImageBase(ArrayBase): _show_args = {'cmap':'viridis'} def show(self, ctx=None, **kwargs): return show_image(self, ctx=ctx, **{**self._show_args, **kwargs}) ...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Basics
#export @patch def __array_eq__(self:Tensor,b): return torch.equal(self,b) if self.dim() else self==b #export def _array2tensor(x): if x.dtype==np.uint16: x = x.astype(np.float32) return torch.from_numpy(x) #export def tensor(x, *rest, **kwargs): "Like `torch.as_tensor`, but handle lists too, and can pa...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`gather` only applies during distributed training: if `gather=True`, the resulting tensor is the one gathered across processes (as a result, the batch size is multiplied by the number of processes).
#export def to_half(b): "Recursively map lists of tensors in `b ` to FP16." return apply(lambda x: x.half() if torch.is_floating_point(x) else x, b) #export def to_float(b): "Recursively map lists of int tensors in `b ` to float." return apply(lambda x: x.float() if torch.is_floating_point(x) else x, b)...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Tensor subtypes
#export @patch def set_meta(self:Tensor, x): "Set all metadata in `__dict__`" if hasattr(x,'__dict__'): self.__dict__ = x.__dict__ #export @patch def get_meta(self:Tensor, n, d=None): "Get `n` from `self._meta` if it exists, returning default `d` otherwise" return getattr(self, '_meta', {}).get(n, d) #...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`Tensor.set_meta` and `Tensor.as_subclass` work together to maintain `_meta` after casting.
class _T(Tensor): pass t = tensor(1) t._meta = {'img_size': 1} t2 = t.as_subclass(_T) test_eq(t._meta, t2._meta) test_eq(t2.get_meta('img_size'), 1) #export class TensorBase(Tensor): def __new__(cls, x, **kwargs): res = cast(tensor(x), cls) res._meta = kwargs return res @classmethod ...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
L -
#export @patch def tensored(self:L): "`mapped(tensor)`" return self.map(tensor) @patch def stack(self:L, dim=0): "Same as `torch.stack`" return torch.stack(list(self.tensored()), dim=dim) @patch def cat (self:L, dim=0): "Same as `torch.cat`" return torch.cat (list(self.tensored()), dim=dim) sh...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`.
t = L(([1,2],[3,4])) test_eq(t.tensored(), [tensor(1,2),tensor(3,4)]) show_doc(L.stack) test_eq(t.stack(), tensor([[1,2],[3,4]])) show_doc(L.cat) test_eq(t.cat(), tensor([1,2,3,4]))
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Chunks
#export def concat(*ls): "Concatenate tensors, arrays, lists, or tuples" if not len(ls): return [] it = ls[0] if isinstance(it,torch.Tensor): res = torch.cat(ls) elif isinstance(it,ndarray): res = np.concatenate(ls) else: res = itertools.chain.from_iterable(map(L,ls)) if isinstan...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
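`concat` dispatches on the type of its first argument: tensors go through `torch.cat`, arrays through `np.concatenate`, and everything else is chained into one collection. A sketch of the dispatch idea, minus the torch branch (this is an illustration, not the library function):

```python
import itertools
import numpy as np

def concat_sketch(*ls):
    """Type-dispatching concat: numpy branch plus a generic-iterable fallback."""
    if not len(ls):
        return []
    it = ls[0]
    if isinstance(it, np.ndarray):
        return np.concatenate(ls)
    # fall back to chaining plain iterables (lists, tuples) into one list
    return list(itertools.chain.from_iterable(ls))

combined = concat_sketch([1, 2], [3], (4,))            # -> [1, 2, 3, 4]
arrays = concat_sketch(np.array([1, 2]), np.array([3]))
```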
Simple types
#export def show_title(o, ax=None, ctx=None, label=None, color='black', **kwargs): "Set title of `ax` to `o`, or print `o` if `ax` is `None`" ax = ifnone(ax,ctx) if ax is None: print(o) elif hasattr(ax, 'set_title'): t = ax.title.get_text() if len(t) > 0: o = t+'\n'+str(o) ax.set...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Other functions
#export if not hasattr(pd.DataFrame,'_old_init'): pd.DataFrame._old_init = pd.DataFrame.__init__ #export @patch def __init__(self:pd.DataFrame, data=None, index=None, columns=None, dtype=None, copy=False): if data is not None and isinstance(data, Tensor): data = to_np(data) self._old_init(data, index=index, col...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
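The cell above monkey-patches `pd.DataFrame.__init__` so tensors are converted to NumPy before pandas sees them, keeping a handle on the original `__init__` exactly once. A sketch of the same pattern with a hypothetical `TensorLike` class standing in for `torch.Tensor`:

```python
import numpy as np
import pandas as pd

class TensorLike:
    """Hypothetical stand-in for torch.Tensor, exposing only .numpy()."""
    def __init__(self, data): self._data = np.asarray(data)
    def numpy(self): return self._data

# stash the original __init__ exactly once, guarding against double-patching
if not hasattr(pd.DataFrame, "_old_init"):
    pd.DataFrame._old_init = pd.DataFrame.__init__

def _df_init(self, data=None, index=None, columns=None, dtype=None, copy=False):
    # convert tensor-like inputs to numpy before delegating to pandas
    if isinstance(data, TensorLike):
        data = data.numpy()
    self._old_init(data, index=index, columns=columns, dtype=dtype, copy=copy)

pd.DataFrame.__init__ = _df_init
df = pd.DataFrame(TensorLike([[1, 2], [3, 4]]))
```

The `hasattr` guard is the important detail: without it, re-running the cell would wrap the wrapper and recurse.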
This decorator is particularly useful for using numpy functions as fastai metrics, for instance:
from sklearn.metrics import f1_score @np_func def f1(inp,targ): return f1_score(targ, inp) a1,a2 = array([0,1,1]),array([1,0,1]) t = f1(tensor(a1),tensor(a2)) test_eq(f1_score(a1,a2), t) assert isinstance(t,Tensor) #export class Module(nn.Module, metaclass=PrePostInitMeta): "Same as `nn.Module`, but no need for s...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Compression lib can be any of: blosclz, lz4, lz4hc, snappy, zlib or zstd.
#export @patch def load_array(p:Path): "Load numpy array from a `pytables` file" with tables.open_file(p, 'r') as f: return f.root.data.read() inspect.getdoc(load_array) str(inspect.signature(load_array)) #export def base_doc(elt): "Print a base documentation of `elt`" name = getattr(elt, '__qualname__', ...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Image helpers
#export def to_image(x): if isinstance(x,Image.Image): return x if isinstance(x,Tensor): x = to_np(x.permute((1,2,0))) if x.dtype==np.float32: x = (x*255).astype(np.uint8) return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4]) #export def make_cross_image(bw=True): "Create a tensor containing...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Model init
#export def requires_grad(m): "Check if the first parameter of `m` requires grad or not" ps = list(m.parameters()) return ps[0].requires_grad if len(ps)>0 else False tst = nn.Linear(4,5) assert requires_grad(tst) for p in tst.parameters(): p.requires_grad_(False) assert not requires_grad(tst) #export def in...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Multiprocessing
#export from multiprocessing import Process, Queue #export def set_num_threads(nt): "Get numpy (and others) to use `nt` threads" try: import mkl; mkl.set_num_threads(nt) except: pass torch.set_num_threads(1) os.environ['IPC_ENABLE']='1' for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
`cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a list of all the results, matching the order of `...
class SleepyBatchFunc: def __init__(self): self.a=1 def __call__(self, batch): for k in batch: time.sleep(random.random()/4) yield k+self.a x = np.linspace(0,0.99,20) res = L(parallel_gen(SleepyBatchFunc, x, n_workers=2)) test_eq(res.sorted().itemgot(1), x+1)
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
autograd jit functions
#export def script_use_ctx(f): "Decorator: create jit script and pass everything in `ctx.saved_variables` to `f`, after `*args`" sf = torch.jit.script(f) def _f(ctx, *args, **kwargs): return sf(*args, *ctx.saved_variables, **kwargs) return update_wrapper(_f,f) #export def script_save_ctx(static, *argidx)...
_____no_output_____
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Export -
#hide from nbdev.export import notebook2script notebook2script()
Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Co...
Apache-2.0
nbs/00_torch_core.ipynb
nigh8w0lf/fastai2
Soft Computing Exercise 1 - Digital images, computer vision, OpenCV. OpenCV: an open-source library for the field of computer vision; documentation available here. matplotlib: a plotting library for the Python programming language and its numerical package NumPy; documentation available here. Loading an image...
import numpy as np import cv2 # OpenCV library import matplotlib import matplotlib.pyplot as plt # draw images and plots inside the browser itself %matplotlib inline # display larger images matplotlib.rcParams['figure.figsize'] = 16,12 img = cv2.imread('images/girl.jpg') # load the image from disk img = cv2.cvt...
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Displaying the image dimensions
print(img.shape) # shape is a property of a NumPy array giving its dimensions
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Note that a color image has 3 components for every pixel in the image - R (red), G (green) and B (blue).![images/cat_rgb.png](images/cat_rgb.png)
img
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
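Because the third axis holds the R, G, B components, individual channels can be separated with plain NumPy slicing. A minimal sketch on a tiny synthetic image (so it runs without `cv2` or a file on disk):

```python
import numpy as np

# a tiny synthetic 2x2 RGB image standing in for one loaded with cv2.imread
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 255          # set the R channel: pure red everywhere

# slicing the last axis separates the three channels
red, green, blue = img[..., 0], img[..., 1], img[..., 2]
```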
Note that every element of the matrix is **uint8** (unsigned 8-bit integer), i.e. an integer value in the interval [0, 255].
img.dtype
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
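The uint8 range matters in practice: arithmetic on uint8 arrays wraps around modulo 256, a common pitfall when adding or scaling pixel values. A short illustration, with the usual fix of widening the dtype first:

```python
import numpy as np

a = np.array([250], dtype=np.uint8)
wrapped = a + np.uint8(10)        # uint8 arithmetic wraps: 250 + 10 -> 4 (mod 256)
safe = a.astype(np.uint16) + 10   # widen first to get the true sum, 260
```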
Basic operations with NumPy. Representing an image as a NumPy array is very useful, because it allows simple manipulation and basic operations on the image. Cropping (crop)
img_crop = img[100:200, 300:600] # the first coordinate is the height (formally the row), the second the width (formally the column) plt.imshow(img_crop)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Flipping (flip)
img_flip_h = img[:, ::-1] # the first coordinate stays the same, the columns are taken in reverse plt.imshow(img_flip_h) img_flip_v = img[::-1, :] # the second coordinate stays the same, the rows are taken in reverse plt.imshow(img_flip_v) img_flip_c = img[:, :, ::-1] # we can also change the order of the colors (RGB->BGR), the only question is how much that ...
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Inverting
img_inv = 255 - img # if the pixels are in the interval [0,255] this is fine; if they are in [0.,1.] it would be 1. - img plt.imshow(img_inv)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Converting from RGB to grayscale. Converting from the RGB model to shades of gray (grayscale) loses the information about each pixel's color, but the image itself becomes much easier to process further. This can be done in several ways: 1. **Average** of the RGB components - the simplest variant $$ G = \frac{R+G+B}{3} $$ 2. *...
# implementation of the perceptual luminosity method def my_rgb2gray(img_rgb): img_gray = np.ndarray((img_rgb.shape[0], img_rgb.shape[1])) # allocate memory for the image (no third dimension) img_gray = 0.21*img_rgb[:, :, 0] + 0.72*img_rgb[:, :, 1] + 0.07*img_rgb[:, :, 2] # the standard luminosity weights sum to 1 img_gray = img_gray.astype('uint8') ...
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
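The averaging method (variant 1 above) can be sketched in a few lines of NumPy; a tiny synthetic image is used here so the sketch runs without any input file:

```python
import numpy as np

def rgb2gray_avg(img_rgb):
    """Grayscale via the plain average G = (R+G+B)/3; widen first to avoid uint8 overflow."""
    return (img_rgb.astype(np.float32).sum(axis=2) / 3).astype(np.uint8)

# synthetic 2x2 image where every pixel is (R,G,B) = (30,60,90)
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., :] = (30, 60, 90)
gray = rgb2gray_avg(img)   # every pixel averages to 60
```

The cast to float32 before summing matters: summing three uint8 channels directly would wrap around for bright pixels.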
Still, it is best to stick with the implementation in the **OpenCV** library :).
img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) img_gray.shape plt.imshow(img_gray, 'gray') img_gray
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit