Problem Statement 2
```python
# Initialization of Variables
n, answ = 4, "Y"

# Boolean Calculations
print("a.", 2 < n and n < 6)       # T and T is T
print("b.", 2 < n or n == 6)       # T or F is T
print("c.", not(2 < n) or n == 6)  # not T or F is F or F is F
print("d.", not(n < 6)) ...
```
a. True
b. True
c. False
d. False
e. True
f. False
g. True
h. False
i. True
j. False
Apache-2.0
Midterm_Exam.ipynb
milayacharlieCvSU/CPEN-21A-CPE-1-1
Problem Statement 3
```python
# Initialization of Variables
x, y, z, w = 2, -3, 7, -10

# Numerical Calculations
print("a.", x/y)
print("b.", w/y/x)
print("c.", z/y % x)
print("d.", x % -y * w)
print("e.", x % y)
print("f.", z % w - y/x*5 + 5)
print("g.", 9 - x % (2 + y))
print("h.", z//w)
print("i.", (2 + y)**2)
print("j.", w/x * 2)
```
a. -0.6666666666666666
b. 1.6666666666666667
c. 1.6666666666666665
d. -20
e. -1
f. 9.5
g. 9
h. -1
i. 1
j. -10.0
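The signs in items d, e, and h all follow from one convention worth calling out: Python's `%` takes the sign of the divisor, and `//` floors toward negative infinity rather than truncating toward zero. A quick check, separate from the exam code:

```python
# Python's % result takes the sign of the divisor:
print(2 % -3)    # -1, since 2 == (-3) * (-1) + (-1)

# // floors toward negative infinity, not toward zero:
print(7 // -10)  # -1, since floor(-0.7) == -1
```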
Introduction
2+2
MIT
intro_book.ipynb
RoetGer/applied-causality-booklet
Cross Entropy

> Implementing Cross Entropy with PyTorch and NumPy

- toc: true
- badges: true
- comments: true
- categories: [Implementation, AI-math]
- image: images/cross_entropy_fig1.png

Introduction

In this post we take a look at Cross Entropy:
- The concept of Cross Entropy
- The Cross Entropy formula
- Drawing a figure with Matplotlib
- Walking through a NumPy implementation
- Implementing forward, ... in PyTorch
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.01, 1, 100)
y = -np.log(x)

plt.axvline(0, color="k", alpha=0.7)
plt.axhline(0, color="k", alpha=0.7)
plt.plot(x, y, lw=3)
plt.xlabel("probability")
plt.ylabel("information")
plt.grid()
_ = plt.plot()
```
Apache-2.0
_notebooks/2022-02-07-cross-entropy.ipynb
jinmang2/blog
The graph above shows the following:
- When $P(x)$ is close to `0.0` (the event is unlikely to occur), the information content of that event is high.
- When $P(x)$ is close to `1.0` (the event is likely to occur), the information content of that event is low.

This is why information theory describes the quantity as `surprise`. The daily routine we face every day holds little that truly surprises us. But when a rare life event arrives, such as landing a job or starting a relationship, we are delighted and surprised. In other words, events that occur rarely...
```python
p_y = [0.8, 0.2]
sum([p * -np.log2(p) for p in p_y])
```
Entropy of a continuous variable

For a continuous variable, by contrast, the expectation can be written as an integral:

$$\begin{aligned}\\H(X)&=\mathbb{E}\big[-\log{P(X)}\big]\\&=\int_{-\infty}^{\infty}{P(X)\cdot(-\log{P(X)})}\,dx\end{aligned}$$

Let's compute this integral numerically using the trapezoidal rule.
```python
def normal(x, mu, sigma):
    var = sigma ** 2
    x = x - mu
    return (1 / np.sqrt(2 * np.pi * var)) * np.exp(-x**2 / (2 * var))

entropy = lambda p: -p * np.log(p)

def trapezoidal_rule(dt, p, f):
    return np.sum((f(p[:-1]) + f(p[1:])) * dt) / 2

xlim, ylim, n_sample, n_bin = 10, 0.5, 50000, 1000
yticks = [0.1, 0...
```
We can approximate the integral directly as above, but the entropy of a normal distribution can also be computed analytically. The density of the normal distribution is

$$P(X)=\cfrac{1}{\sqrt{2\pi\sigma^2}}\exp{\bigg(\cfrac{-(X-\mu)^2}{2\sigma^2}\bigg)}$$

Now let's expand the entropy formula:

$$\begin{aligned}\\H(X)&=\mathbb{E}\big[-\log{P(X)}\big]\\&=\int_{-\infty}^{\infty}{P(X)\cdot(-\log{P(X)})}dX\end{aligned}$$

Here $-\log...
```python
def normal_entropy(sigma):
    # return 0.5 * np.log(np.e * 2 * np.pi * sigma**2)
    return 0.5 * (1 + np.log(2 * np.pi * sigma**2))

normal_entropy(sigma1)
```
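As a sanity check (my own sketch, not part of the original notebook), the closed-form value can be compared against a numerical trapezoidal integral of $-P(x)\log P(x)$ on a wide grid:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    var = sigma ** 2
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def normal_entropy(sigma):
    # closed-form differential entropy of N(mu, sigma^2), as derived above
    return 0.5 * (1 + np.log(2 * np.pi * sigma ** 2))

# trapezoidal rule on [-10, 10]; the tails beyond that are negligible
x = np.linspace(-10, 10, 100001)
f = -normal_pdf(x) * np.log(normal_pdf(x))
numeric = np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2

print(numeric, normal_entropy(1.0))  # both ~1.4189
```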
Cross Entropy

Machine Learning: A Probabilistic Perspective describes cross entropy as follows:

```
The cross entropy is the average number of bits needed to encode data coming from a source with distribution p when we use model q ...
```

It is easiest to think of $P$ as the distribution we want to model and $Q$ as the distribution we use to model it. Cross entropy is then the average number of bits needed to encode events using $Q$ in place of $P$...
```python
import math
import numbers
from typing import Optional, Tuple, Sequence, Union, Any

import numpy as np
import torch
import torch.nn as nn

_TensorOrTensors = Union[torch.Tensor, Sequence[torch.Tensor]]
```
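Before building the full forward pass, the definition itself, $H(P,Q)=-\sum_x P(x)\log Q(x)$, can be checked with a tiny hand-rolled example (the distributions below are made up for illustration):

```python
import numpy as np

def cross_entropy(p, q):
    """H(P, Q) = -sum_x P(x) * log Q(x), in nats."""
    p, q = np.asarray(p), np.asarray(q)
    return float(-(p * np.log(q)).sum())

p = np.array([0.8, 0.2])  # the distribution we want to model (P)
q = np.array([0.5, 0.5])  # the distribution we model it with (Q)

print(cross_entropy(p, p))  # H(P) itself: ~0.5004 nats
print(cross_entropy(p, q))  # H(P, Q) = log 2; never below H(P)
```

Encoding with the wrong distribution $Q$ always costs at least the entropy of $P$; the gap is exactly the KL divergence.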
Implementing forward with NumPy

Cross Entropy can be computed from `LogSoftmax` and `NegativeLogLikelihood`. As we saw above, maximizing the likelihood and minimizing the cross entropy are the same problem, so we may just as well treat this as minimizing the negative log likelihood. Let's use that to compute the cross entropy. One implementation trick is worth introducing: the softmax function has the property

$$\mathrm{softmax}(x)=\mathrm{softmax...
```python
def log_softmax_numpy(arr):
    c = np.amax(arr, axis=-1, keepdims=True)
    s = arr - c
    nominator = np.exp(s)
    denominator = nominator.sum(axis=-1, keepdims=True)
    probs = nominator / denominator
    return np.log(probs)
```
The likelihood can be picked out simply with fancy indexing. Negating the likelihood we obtained gives the negative log likelihood, which we then reduce to its mean.
```python
def negative_log_likelihood_numpy(y_pred, y):
    log_likelihood = y_pred[np.arange(y_pred.shape[0]), y]
    nll = -log_likelihood
    return nll.mean()
```
Cross entropy is now simple: apply log softmax to the predictions $Q$ and compute the negative log likelihood.
```python
def cross_entropy_numpy(y_pred, y):
    log_probs = log_softmax_numpy(y_pred)
    ce_loss = negative_log_likelihood_numpy(log_probs, y)
    return ce_loss
```
Implementing forward with PyTorch

Let's implement this in PyTorch as well. The implementation is identical down to the method names. The one difference is that numpy takes the dimension axis through an `axis` parameter, while torch takes it through a `dim` parameter.
```python
def log_softmax_torch(tensor):
    c = torch.amax(tensor, dim=-1, keepdims=True)
    s = tensor - c
    nominator = torch.exp(s)
    denominator = nominator.sum(dim=-1, keepdims=True)
    probs = nominator / denominator
    return torch.log(probs)

def negative_log_likelihood_torch(y_pred, y):
    log_likelihood = y_p...
```
Comparing forward outputs
```python
import random
from functools import partial

batch_size = 8
vocab_size = 3000
rtol = 1e-4
atol = 1e-6
isclose = partial(torch.isclose, rtol=rtol, atol=atol)

y_pred = [
    [random.normalvariate(mu=0., sigma=1.) for _ in range(vocab_size)]
    for _ in range(batch_size)
]
y_pred_torch = torch.FloatTensor(y_pred)
y_pred...
```
Do both output the same tensors? 🔥
Implementing backward with PyTorch

Log Softmax

$$o(1-o)$$

- the 1 here is a Kronecker delta
- log softmax has a sizable computational advantage
- it also makes the loss larger
- feeding $1/x$, the derivative of $\log$, into the grad_outputs of softmax's backward yields the backward form of log_softmax
```python
def _softmax(tensor: torch.Tensor, dim: int = -1) -> torch.Tensor:
    c = torch.amax(tensor, dim=dim, keepdims=True)
    s = tensor - c
    nominator = torch.exp(s)
    denominator = nominator.sum(dim=dim, keepdims=True)
    probs = nominator / denominator
    return probs

class Softmax(torch.autograd.Function):
    ...
```
🔥
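The Kronecker-delta form above says the softmax Jacobian is $\partial s_i/\partial x_j = s_i(\delta_{ij} - s_j)$. A small numpy sketch (mine, not the notebook's autograd code) checks that formula against finite differences:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def softmax_jacobian(x):
    # J[i, j] = s_i * (delta_ij - s_j), the Kronecker-delta form
    s = softmax(x)
    return np.diag(s) - np.outer(s, s)

x = np.array([0.5, -1.0, 2.0])
J = softmax_jacobian(x)

# central finite-difference approximation of the same Jacobian
eps = 1e-6
J_num = np.zeros_like(J)
for j in range(x.size):
    d = np.zeros_like(x)
    d[j] = eps
    J_num[:, j] = (softmax(x + d) - softmax(x - d)) / (2 * eps)

print(np.abs(J - J_num).max())  # max |J - J_num| should be tiny (~1e-10)
```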
Negative Log Likelihood
```python
class NegativeLogLikelihoodLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx: Any, y_pred: Any, y: Any) -> Any:
        bsz, n_classes = torch.tensor(y_pred.size())
        ctx.save_for_backward(bsz, n_classes, y)
        log_likelihood = y_pred[torch.arange(bsz), y]
        nll = -log_likelihoo...
```
Cross Entropy
```python
class CrossEntropyLoss(nn.Module):
    def forward(
        self,
        y_pred: _TensorOrTensors,
        y: _TensorOrTensors
    ) -> _TensorOrTensors:
        log_probs = log_softmax(y_pred)
        ce_loss = nll_loss(log_probs, y)
        probs = torch.exp(log_probs) / log_probs.size(0)
        self.save_fo...
```
Put everything together

- Combine the three modules above into a single implementation
- Additionally implement `ignore_index` and `reduction`
- Explanations will be added in a later update of this post, time permitting
```python
class LogSoftmax(torch.autograd.Function):
    @staticmethod
    def forward(ctx: Any, tensor: Any, dim: int = -1) -> Any:
        # softmax(x) = softmax(x+c)
        c = torch.amax(tensor, dim=dim, keepdims=True)
        s = tensor - c
        # Calculate softmax
        nominator = torch.exp(s)
        denominator =...
```
Modeling and Simulation in Python

Chapter 11

Copyright 2017 Allen Downey

License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```python
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim.py module
from modsim import *
```
MIT
notebooks/chap11.ipynb
maciejkos/ModSimPy
SIR implementation

We'll use a `State` object to represent the number (or fraction) of people in each compartment.
init = State(S=89, I=1, R=0)
To convert from number of people to fractions, we divide through by the total.
init /= sum(init)
`make_system` creates a `System` object with the given parameters.
```python
def make_system(beta, gamma):
    """Make a system object for the SIR model.

    beta: contact rate per day
    gamma: recovery rate per day

    returns: System object
    """
    init = State(S=89, I=1, R=0)
    init /= sum(init)

    t0 = 0
    t_end = 7 * 14

    return System(init=init, t0=t0, t_end=t_end...
```
Here's an example with hypothetical values for `beta` and `gamma`.
```python
tc = 3        # time between contacts in days
tr = 4        # recovery time in days

beta = 1 / tc   # contact rate in per day
gamma = 1 / tr  # recovery rate in per day

system = make_system(beta, gamma)
```
The update function takes the state during the current time step and returns the state during the next time step.
```python
def update_func(state, t, system):
    """Update the SIR model.

    state: State with variables S, I, R
    t: time step
    system: System with beta and gamma

    returns: State object
    """
    s, i, r = state

    infected = system.beta * i * s
    recovered = system.gamma * i

    s -= infected
    ...
```
To run a single time step, we call it like this:
state = update_func(init, 0, system)
Now we can run a simulation by calling the update function for each time step.
```python
def run_simulation(system, update_func):
    """Runs a simulation of the system.

    system: System object
    update_func: function that updates state

    returns: State object for final state
    """
    state = system.init

    for t in linrange(system.t0, system.t_end):
        state = update_func(sta...
```
The result is the state of the system at `t_end`
run_simulation(system, update_func)
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?

Hint: what is the change in `S` between the beginning and the end of the simulation?
# Solution goes here
Using TimeSeries objects

If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
```python
def run_simulation(system, update_func):
    """Runs a simulation of the system.

    Add three Series objects to the System: S, I, R

    system: System object
    update_func: function that updates state
    """
    S = TimeSeries()
    I = TimeSeries()
    R = TimeSeries()

    state = system.init
    t0 = s...
```
Here's how we call it.
```python
tc = 3        # time between contacts in days
tr = 4        # recovery time in days

beta = 1 / tc   # contact rate in per day
gamma = 1 / tr  # recovery rate in per day

system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
```
And then we can plot the results.
```python
def plot_results(S, I, R):
    """Plot the results of a SIR model.

    S: TimeSeries
    I: TimeSeries
    R: TimeSeries
    """
    plot(S, '--', label='Susceptible')
    plot(I, '-', label='Infected')
    plot(R, ':', label='Recovered')
    decorate(xlabel='Time (days)',
             ylabel='Fraction of populati...
```
Here's what they look like.
```python
plot_results(S, I, R)
savefig('figs/chap11-fig01.pdf')
```
Using a DataFrame

Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `row` to select rows rather than columns, but then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
```python
def run_simulation(system, update_func):
    """Runs a simulation of the system.

    system: System object
    update_func: function that updates state

    returns: TimeFrame
    """
    frame = TimeFrame(columns=system.init.index)
    frame.row[system.t0] = system.init

    for t in linrange(system.t...
```
Here's how we run it, and what the result looks like.
```python
tc = 3        # time between contacts in days
tr = 4        # recovery time in days

beta = 1 / tc   # contact rate in per day
gamma = 1 / tr  # recovery rate in per day

system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
```
We can extract the results and plot them.
plot_results(results.S, results.I, results.R)
Exercises

**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
# Solution goes here
Analysis of the collected data

Using IPython to analyze and display the data collected during production. An expert controller is implemented. The data analyzed are from August 13, 2015.

Experiment data:
* Start time: 12:06
* End time: 12:26
* Filament extruded: 314Ccm
* $T: 150ºC$
* ...
```python
# Import the libraries used
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # added: needed for the plt.* calls below
import seaborn as sns

# Print the version of each library used
print("Numpy v{}".format(np.__version__))
print("Pandas v{}".format(pd.__version__))
print("Seaborn v{}".format(sns.__version__))

# Open the csv file with the d...
```
CC0-1.0
ipython_notebooks/06_regulador_experto/.ipynb_checkpoints/ensayo5-checkpoint.ipynb
darkomen/TFG
We plot both diameters and the puller speed on the same graph.
```python
# .ix was removed from pandas; .loc does the same label-based selection.
# Splitting the chained call also fixes a bug: hlines() returns a
# LineCollection, not the Axes, so axhspan() must be called on the Axes.
graf = datos.loc[:, "Diametro X"].plot(figsize=(16, 10), ylim=(0.5, 3))
graf.hlines([1.85, 1.65], 0, 3500, colors='r')
graf.axhspan(1.65, 1.85, alpha=0.2)
graf.set_xlabel('Tiempo (s)')
graf.set_ylabel('Diámetro (mm)')

# datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
```
Comparing Diametro X against Diametro Y to see the filament's ratio.
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Data filtering

Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the measurements.
```python
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]

# datos_filtrados.loc[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
```
Plot of X against Y.
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Analyzing the ratio data.
```python
ratio = datos_filtrados['Diametro X'] / datos_filtrados['Diametro Y']
ratio.describe()

# pd.rolling_mean / pd.rolling_std were removed from pandas; use .rolling()
rolling_mean = ratio.rolling(50).mean()
rolling_std = ratio.rolling(50).std()

rolling_mean.plot(figsize=(12, 6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12, 6...
```
Quality limits

We count the number of times the quality limits are crossed: $Th^+ = 1.85$ and $Th^- = 1.65$.
```python
Th_u = 1.85
Th_d = 1.65

data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
                       (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]

data_violations.describe()
data_violations.plot(subplots=True, figsize=(12, 12))
```
Spotify: Visualizing the top 100 songs from 2018

Grace Thompson
```python
# copied the first cell from the data_visualization notebook with import statements

# visualization tools
import matplotlib.pyplot as plt  # matplotlib is a basic plotting library
import seaborn as sns; sns.set()  # seaborn is a library that uses matplotlib to make styled plots
from scipy.stats import pearsonr
import plotly...
```
MIT
data-stories/spotify/spotify-data-thompson.ipynb
BohanMeng/storytelling-with-data
Introduction

Throughout history, "popular music" has evolved to align with the social environment in which it was created. In the 1920s, Blues and Jazz were considered most popular, whereas now (2019) R&B and Hip Hop are more commonly listened to. The purpose of this analysis was to look for common audio features in to...
```python
df = pd.read_csv('top2018.csv').fillna(0)
df.head()  # df = dataframe of top 2018 songs
df.columns
df.shape

# Renaming columns in DataFrame to be more easily readable
names = {'name': 'Name', 'artists': 'Artists'}
df.rename(names, inplace=True, axis=1)
df.drop(columns=['id', 'duration_ms', 'tempo'], inplace=True)
...
```
Number of hit songs per artist
```python
df['Artists'].value_counts().head(20)  # top 20 artists with the most hit songs in top 100

plt.figure(figsize=(18, 12))
df['Artists'].value_counts().plot.bar()
```
Post Malone and XXXTENTACION each have 6 tracks in the top 100 hits of the year, but it is important to note that the number of hits in the top 100 does not equate to being critically acclaimed: for example, Childish Gambino isn't listed (but won 4 Grammys for "This is America").
```python
df.mean().plot.bar()
plt.title('Mean Values of Audio Features')
plt.show()
# was unable to successfully drop columns for this specific barplot. In future,
# I would drop key, loudness and time signature for scale consistency

sns.heatmap(df.corr(), cmap="YlGnBu")
```
There is little correlation between the different song characteristics. That being said, loudness and energy predictably correlate positively, while acousticness and energy have negative correlation.
```python
sns.distplot(df['energy'])
sns.distplot(df['danceability'], hist=True, kde=True)

Correlation = df[['danceability', 'energy', 'valence', 'loudness']]

# Set conditions
Vd = df['danceability'] >= 0.75
Ld = (df['danceability'] >= 0.5) & (df['danceability'] < 0.75)
Nd = df['danceability'] < 0.5
data = [Vd.sum(), Ld.sum(), Nd.sum()]
Danceability = p...
```
Few of the songs in the top 100 were instrumental.
```python
sns.heatmap(Correlation.corr(), annot=True, cmap="YlGnBu")  # a closer look at specific song feature correlation
sns.jointplot(data=Correlation, y='energy', x='loudness', kind='reg', stat_func=pearsonr)
```
MAZ diploma thesis: Biodiversity in Switzerland
```python
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

df = pd.read_csv("01_source/tabula_ch/tabula_ameisen.csv")
df.head()
len(df)
df.shape
df["CH"].value_counts()
df.pop("N")
df.pop("S")
df.head(1)
df.rename(columns={"ORDER": "order", "FAMILY": "family", "Artname": "sc name", "CH": "cat ch", "Bemerkungen...
```
MIT
Eigene Projekte/Diplomarbeit_Biodiversitaet/002_einlesen_tabulacsv_ameisen.ipynb
Priskawa/kurstag2
Lambda School Data Science Module 132

Sampling, Confidence Intervals, and Hypothesis Testing

Prepare - examine other available hypothesis tests

If you had to pick a single hypothesis test for your toolbox, the t-test would probably be the best choice, but the good news is you don't have to pick just one! Here are some of t...
```python
import numpy as np
from scipy.stats import chisquare  # One-way chi square test

# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Ch...
```
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
MIT
module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb
Bhavani-Rajan/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments
And there are many more! `scipy.stats` is fairly comprehensive, though even more are available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, knowing them by heart matters less, but being able to look them up and figure them out wh...
```python
mean = 20
n = 7
[5, 9, 10, 20, 15, 12, 69]
# the first 6 days added up to 71
# The mean has to be 20
# I need the sum of all the values in the list to be 140
# The last value in the list *HAS* to be 140 - 71 = 69
```
T-test Assumptions

- Independence of means

Are the means of our voting data independent (do they not affect one another's outcome)? The best way to increase the likelihood of our means being independent is to sample randomly (which we did not do).
```python
from scipy.stats import ttest_ind

?ttest_ind
```
- "Homogeneity" of Variance? Is the magnitude of the variance between the two roughly the same?I think we're OK on this one for the voting data, although it probably could be better, one party was larger than the other.If we suspect this to be a problem then we can use Welch's T-test
?ttest_ind
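Welch's t-test mentioned above is available directly in SciPy as the `equal_var=False` flag of `ttest_ind`; a quick sketch with made-up samples:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=50)  # made-up sample 1
b = rng.normal(0.5, 3.0, size=50)  # made-up sample 2, much larger variance

t_student, p_student = ttest_ind(a, b)               # classic t-test: assumes equal variances
t_welch, p_welch = ttest_ind(a, b, equal_var=False)  # Welch's t-test: does not
print(p_student, p_welch)
```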
- "Dependent Variable" (sample means) are Distributed NormallyLots of statistical tests depend on normal distributions. We can test for normality using Scipy as was shown above.This assumption is often assumed even if the assumption is a weak one. If you strongly suspect that things are not normally distributed, you ca...
```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

sample_means = []
for x in range(0, 3000):
    coinflips = np.random.binomial(n=1, p=.5, size=250)
    one_sample = coinflips
    sample_means.append(coinflips.mean())

print(len(sample_means))
print(sample_means)

df = pd.DataFrame({'single_sample': one_s...
```
What does the Central Limit Theorem state? That no matter the initial distribution of the population, the distribution of sample means will approximate a normal distribution as $n \rightarrow \infty$.

This has very important implications for hypothesis testing and is precisely the reason why the t-distribution beg...
```python
sample_means_small = []
sample_means_large = []

for x in range(0, 3000):
    coinflips_small = np.random.binomial(n=1, p=.5, size=20)
    coinflips_large = np.random.binomial(n=1, p=.5, size=100)
    one_small_sample = coinflips_small
    one_small_large = coinflips_large
    sample_means_small.append(coinflips_small.mean())
    sa...
```
Standard Error of the Mean

What does it mean to "estimate" the population mean?
```python
# Calculate the sample mean for a single sample
df.single_sample.mean()
```
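The standard error of the mean, $\sigma/\sqrt{n}$, quantifies how far a single sample mean typically sits from the population mean; a quick sketch with coin flips (my own, not the lesson's code):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 250
flips = rng.binomial(n=1, p=0.5, size=n)

# estimated standard error of the mean: sample std / sqrt(n)
sem = flips.std(ddof=1) / np.sqrt(n)
print(flips.mean(), sem)  # sample mean should be ~0.5, give or take a few sem
```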
Build and Interpret a Confidence Interval
```python
import numpy as np

coinflips = np.random.binomial(n=1, p=.5, size=42)

# ddof modifies the divisor of the sum of the squares of the samples-minus-mean.
# The divisor is N - ddof, where the default ddof is 0 as you can see from your result.
print(np.std(coinflips, ddof=1))
print(coinflips)
print(np.std(coinflips))

imp...
```
0.04396401634104949
Looking at stats.t.ppf
```python
# stats.t.ppf(# probability cutoff, # degrees of freedom)
# 95% confidence level -> .025
# 1 - confidence_level == .05 / 2 -> .025 is our upper/lower bound cutoff
confidence_level = .95

# dof is degrees of freedom, n-1
dof = 42 - 1

# .ppf is the percent point function that gives the upper/lower bounds of the confidence...
```
Graphically Represent a Confidence Interval
```python
import seaborn as sns

coinflips_42 = np.random.binomial(n=1, p=.5, size=42)

sns.kdeplot(coinflips_42)
CI = confidence_interval(coinflips_42)
plt.axvline(x=CI[1], color='red')
plt.axvline(x=CI[2], color='red')
plt.axvline(x=CI[0], color='k');
```
0.1575207555477215
Relationship between Confidence Intervals and T-tests

Confidence Interval == Bounds of statistical significance for our t-test

A sample mean that falls inside of our confidence interval will "FAIL TO REJECT" our null hypothesis. A sample mean that falls outside of our confidence interval will "REJECT" our null hypothesis.
```python
from scipy.stats import t, ttest_1samp
import numpy as np

coinflip_means = []
for x in range(0, 100):
    coinflips = np.random.binomial(n=1, p=.5, size=30)
    coinflip_means.append(coinflips.mean())
print(coinflip_means)

# Sample Size
n = len(coinflip_means)
# Degrees of Freedom
dof = n - 1
# The Mean of Means:
mean = np....
```
t Statistic: 1.9842169515086827
Confidence Interval (0.48087691780652664, 0.5184564155268068)
A null hypothesis that's just inside of our confidence interval == fail to reject
ttest_1samp(coinflip_means, .5)
A null hypothesis that's just outside of our confidence interval == reject
ttest_1samp(coinflip_means, .53)
Run a $\chi^{2}$ Test "by hand" (Using Numpy)\begin{align}\chi^2 = \sum \frac{(observed_{ij}-expected_{ij})^2}{(expected_{ij})}\end{align}
```python
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv', na_values=" ?")
print(df.shape)
df.head()
df.describe()
df.describe(exclude='number')

cut_points = [0, 9, 19, 29, 39, 49, 1000]
label_names = ['0-9', '10-19', '20-29', '30-39', '40-49', '50+']
df['hours_per_week_categories'] = ...
```
```python
import matplotlib.pyplot as plt
import seaborn as sns

# Plots the bar chart
fig = plt.figure(figsize=(10, 5))
sns.set(font_scale=1.8)
categories = ["0-9", "10-19", "20-29", "30-39", "40-49", "50+"]
# plt.bar, not plt.plot: the 0.55 here is a bar width, which plt.plot rejects
p1 = plt.bar(categories, malecount, 0.55, color='blue')
p2 = plt.bar(categories, femalecount, 0.55, color='red')
plt.legend((...
```
Expected Value Calculation\begin{align}expected_{i,j} =\frac{(row_{i} \text{total})(column_{j} \text{total}) }{(\text{total observations})} \end{align}
```python
row_sums = contingency_table.iloc[0:2, 6].values
col_sums = contingency_table.iloc[2, 0:6].values
print(row_sums)
print(col_sums)

total = contingency_table.loc['All', 'All']
total

# same thing as the previous one: number of rows in the data set
df.shape[0]

expected = []
for i in range(len(row_sums)):
    expected_row = []
    for...
```
(2, 6)
Chi-Squared Statistic with Numpy

\begin{align}\chi^2 = \sum \frac{(observed_{i}-expected_{i})^2}{(expected_{i})}\end{align}

For the $observed$ values we will just use a version of our contingency table without the margins as a numpy array. In this way, if our observed values array and our expected values array are the s...
```python
# Array broadcasting will work with numpy arrays but not python lists
chi_squared = ((observed - expected)**2 / (expected)).sum()
print(f"Chi-Squared: {chi_squared}")

# Degrees of Freedom of a Chi-squared test
# range between 3 to 40
# degrees_of_freedom = (num_rows - 1)(num_columns - 1)

# Calculate Degrees of Freedom...
```
Degrees of Freedom: 5
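The same by-hand recipe can be checked on a self-contained toy table (the counts below are made up) against `scipy.stats.chi2_contingency`:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[10, 20, 30],
                     [20, 20, 20]])     # made-up 2x3 contingency table

row_sums = observed.sum(axis=1, keepdims=True)
col_sums = observed.sum(axis=0, keepdims=True)
total = observed.sum()
expected = row_sums * col_sums / total  # expected_ij = row_i * col_j / total

chi_squared = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

chi2_scipy, p, dof_scipy, _ = chi2_contingency(observed)
print(chi_squared, chi2_scipy, dof, dof_scipy)
```

For tables larger than 2x2, `chi2_contingency` applies no continuity correction, so the two statistics should agree exactly.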
Run a $\chi^{2}$ Test using Scipy
```python
chi_squared, p_value, dof, expected = stats.chi2_contingency(observed)

print(f"Chi-Squared: {chi_squared}")
print(f"P-value: {p_value}")
print(f"Degrees of Freedom: {dof}")
print("Expected: \n", np.array(expected))
```
Chi-Squared: 2287.190943926107
P-value: 0.0
Degrees of Freedom: 5
Expected:
[[  151.50388502   412.16995793   791.26046497  1213.02346365  6065.44811277  2137.59411566]
 [  306.49611498   833.83004207  1600.73953503  2453.97653635 12270.55188723  4324.40588434]]
MIT
module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb
Bhavani-Rajan/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments
Functions vs. Methods

You might have noticed in one of the previous units that sometimes the term *function* and sometimes the term *method* was used to refer to some functionality in the Python 🐍 standard library. This was not by mistake but a conscious usage of the terms to refer to different concepts. After learning ...
song = "Blue Train"
print("Listening to", song)
_____no_output_____
CC0-1.0
week_5/week_5_unit_6_methodsfunct_notebook.ipynb
ceedee666/opensap_python_intro
The `print()` function is invoked by using its name followed by parentheses. Inside the parentheses *all* the data required for the execution of the function is provided as parameters. In the example above two parameters are provided:- The string `"Listening to"`- The variable `song` containing the value `"Blue Train"`....
song = "Ace of Spades"
turned_up_song = song.upper()
print("Listening to", turned_up_song)
_____no_output_____
CC0-1.0
week_5/week_5_unit_6_methodsfunct_notebook.ipynb
ceedee666/opensap_python_intro
In the example a variable `song` of type `string` is defined. Note that in Python 🐍 there are actually no primitive data types. Instead everything is an object in the sense of the object oriented programming paradigm. Using the `song` object, the method `upper()` is invoked. This is done by adding a `.` to the object fo...
songs = "Ace of Spades, Blitzkrieg Bop, Blue Train"
song_list = songs.split(", ")
for song in song_list:
    print("Listening to", song)
_____no_output_____
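Because each method returns an ordinary object, method calls can also be chained: every call operates on the result of the previous one. A small sketch combining `strip()` and `split()`, both standard `str` methods:

```python
songs = "  Ace of Spades, Blitzkrieg Bop  "

# strip() returns a new string; split() is then invoked on that result
song_list = songs.strip().split(", ")
for song in song_list:
    print("Listening to", song)
```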
CC0-1.0
week_5/week_5_unit_6_methodsfunct_notebook.ipynb
ceedee666/opensap_python_intro
Analysis of saltswap results

Applied chemical potential: $\Delta\mu = 750$

Defining functions to read the simulation data, and generating pretty colours for plotting.
def read_data(filename):
    """
    Read the number of salt molecules added, acceptance rates, and
    simulation run times for iterations of saltswap

    Parameters
    ----------
    filename: str
        the name of the file that contains the simulation data

    Returns
    -------
    data: numpy.ndarray
        ...
_____no_output_____
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
NCMC parameter sweep at a glance

Plotting colored matrices to summarise the main results.
# Results from the initial set of parameters:
nperturbations = [1024, 2048, 4096]
npropogations = [1, 2, 4]

MeanSalt = np.zeros((len(nperturbations), len(npropogations)))
AccProb = np.zeros((len(nperturbations), len(npropogations)))
MeanTime = np.zeros((len(nperturbations), len(npropogations)))

for i in range(len(nperturbat...
/Users/rossg/miniconda2/lib/python2.7/site-packages/matplotlib/tight_layout.py:222: UserWarning: tight_layout : falling back to Agg renderer warnings.warn("tight_layout : falling back to Agg renderer")
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
The data does not seem equilibrated.

Time series plots

Viewing to what extent the data can be considered to be 'in equilibrium'.
params = [(1024, 1), (1024, 2), (1024, 4), (2048, 1), (2048, 2), (2048, 4), (4096, 1), (4096, 2), (4096, 4)]
coords = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]

f, axarr = plt.subplots(3, 3)
xlims = (0, 400)   # x limits
ylims = (0, 125)   # y limits
xstep = 100
for p, c in zip(params, coords):
    # Reading in data
    ...
_____no_output_____
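Eyeballing the traces can be complemented numerically by comparing the means of the first and second halves of each series: for an equilibrated series the two halves should agree. A minimal sketch on synthetic data (the drifting random walk below is a hypothetical stand-in for the salt counts, not the simulation output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a salt-count trace: a drifting random walk
series = np.cumsum(rng.normal(0.1, 1.0, size=400))

half = len(series) // 2
first_mean = series[:half].mean()
second_mean = series[half:].mean()

# A large gap between the half-means flags a lack of equilibration
drift = abs(second_mean - first_mean)
print(first_mean, second_mean, drift)
```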
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
None of the simulations reach equilibrium, and it's worrying that the steady state appears to be the formation of a salt crystal.

Work distributions

perturbations = 4096, propagations = 4

Looking at perturbations = 4096, propagations = 4 as it's the most computationally expensive protocol.
kT = 2.479
print 'Chemical potential in units of kT =', 750/kT

work_add = read_work('Titration750/prt4096_prp4/work_add_data.txt')
work_rm = read_work('Titration750/prt4096_prp4/work_rm_data.txt')

plt.clf()
plt.plot(-work_add, color=tableau4[0])
plt.plot(work_rm, color=tableau4[3])
plt.axhline(750/kT, ls='--', color=...
_____no_output_____
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
The work to remove salt increases as more salt is added to the system. This is the opposite of what we want, as it encourages more salt to enter the system.

How work decreases with longer protocols

Given the increase in salt pairs over time, I'll only look at the work distributions for the first 500 insertion/deletio...
params = [(1024, 1), (2048, 1), (4096, 1)]
N = 500
work_add = np.zeros((3, N))
work_rm = np.zeros((3, N))
for i in range(len(params)):
    filename = 'Titration750/prt{0}_prp{1}/work_add_data.txt'.format(params[i][0], params[i][1])
    work_add[i, :] = read_work(filename)[0:N]
    filename = 'Titration750/prt{0}_prp{1}/work_r...
_____no_output_____
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
Histogram of work to add salt
# Automatically calculate the histogram of all the data, and save the edges and midpoints
counts, edges = np.histogram(-work_add, 30)
midpoints = edges[0:-1] + np.diff(edges)/2.0

colours = (tableau4_light[0], tableau4_light[1], tableau4_light[3])
plt.clf()
lines = []
for i in range(3):
    cnts, junk = np.histogram(a ...
_____no_output_____
MIT
development/acceptance-study/NCMC_Analysis_mu750.ipynb
choderalab/saltswap
Variable Types

This chapter introduces Python's built-in variable types. In my view, the following built-in types are either used frequently in Python or are essential to know about:

| Type | Keyword | Description | Example |
| :---: | :--- | :--- | :--- |
| **Numbers** | | | |
| Integer | `int` | Whole numbers | `1`, `-1` |
| Float | `float` | Floating-point numbers | `1.0` |
| Complex | `complex` | Complex numbers | `complex(1,2)` |
| **Sequences** | | | |
| List | `list` | An ordered, mutable sequence of data; each item can be of any type. | `[1, 2]` |
| Tuple | `tuple` | An ordered, immutable sequence of data; at creation...
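The built-in `type()` function reports which of these types any value has:

```python
print(type(1), type(1.0), type(complex(1, 2)))  # number types
print(type([1, 2]), type((1, 2)), type("ab"))   # sequence types
print(type(True), type(None))                   # bool and the null object
```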
x = True
y = False
print(x and y, x or y, not x, x ^ y)
False True False True
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
The None Object

None is Python's null object. It may not be used often, but readers should know about it. The truth value of None is false:
x = None
print(bool(x))
False
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Number Types: int, float, complex

There is not much to introduce about the number types.

- Arithmetic: `+`, `-`, `*`, `/`
- Floor division and remainder: `c = a // b`, `d = a % b`; or `c, d = divmod(a,b)`.
  - Floor division rounds toward negative infinity; for example, `-5//2` gives `-3`.
  - Complex numbers cannot take part in floor division or remainder operations.
- Exponentiation: `a ** b`, or `pow(a, b)`
- Modulus: `abs(a)`. If `a` is complex, this computes its modulus; for an integer or float, it is simply the absolute value.
- Augmented assignment: `a += 1` increments `a` by 1; likewise `-=`, `*=`, `/=`

Points worth noting:

- **...
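The operations in the list above can be checked interactively; note in particular that `//` rounds toward negative infinity, so the remainder takes the sign of the divisor:

```python
print(-5 // 2, -5 % 2)       # floor toward -inf: -3, remainder 1
print(divmod(-5, 2))         # quotient and remainder in one call
print(2 ** 10, pow(2, 10))   # two spellings of exponentiation
print(abs(-3.5), abs(complex(3, 4)))  # absolute value vs. complex modulus
```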
x, y, z = 'nan', 'inf', '-inf'
print(float(x), float(y), float(z))
nan inf -inf
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Complex numbers are rarely used; here is a quick example anyway:
x = complex(1, 5)
y = complex(2, -1)
z = x + y
print(z, abs(z))
(3+4j) 5.0
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Type Conversion and Rounding

In Python, forcing a float to an integer truncates everything after the decimal point:
a, b, c, d = 1.2, 1.6, -1.2, -1.6
print(int(a), int(b), int(c), int(d))
1 1 -1 -1
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
For finer control over rounding, use Python's built-in `math` module:

- floor: round toward negative infinity.
- ceil: round toward positive infinity.
import math  # import the math module
print(math.floor(a), math.ceil(b), math.floor(c), math.ceil(d))
1 2 -2 -1
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
In my own practice, though, rounding tasks are usually delegated to the `numpy` library; interested readers can look at the related NumPy functions:

- [numpy.round](https://numpy.org/doc/stable/reference/generated/numpy.round_.html)
- [numpy.floor](https://numpy.org/doc/stable/reference/generated/numpy.floor.html)
- [numpy.ceil](https://numpy.org/doc/stable/reference/generated/numpy.ceil.html)
- numpy.tru...
x = 3
y = 4
print(x != y)
True
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
In particular, Python also supports chained comparisons:
print(3 < 4 == 4, 3 > 2 < 4, 1 < 3 <= 5)
True True True
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Important: do not try to compare two floating-point values for equality with `==`! Floating-point arithmetic has limited precision, so directly testing equality is unwise:
x, y = 0.1, 0.2
z = 0.3
print(x+y, z, x+y==z)
0.30000000000000004 0.3 False
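The standard-library remedy is `math.isclose()`, which compares within a tolerance instead of exactly:

```python
import math

x, y = 0.1, 0.2
z = 0.3
print(x + y == z)               # False: exact comparison fails
print(math.isclose(x + y, z))   # True: tolerance-based comparison
```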
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
For high-precision mathematical computation, the scientific computing library NumPy is recommended.

Lists: list

Of Python's three common sequence types — list, tuple, and str — we start with list, probably the sequence closest to what other programming languages offer.

- List indices start at 0.
- A Python list is similar to an array in other languages, except that its length can change and its elements need not share a type.

*Although a list can hold elements of different types, as a matter of programming style I personally do not recommend it.*
x = [1, 2, 'a', 4]
y = []  # empty list
print(x)
[1, 2, 'a', 4]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Python lists support all the common sequence operations by default:

- Indexing: select a single element with `x[index]`, or a slice with `x[start:end:step]`
- Number of elements: `len(x)`
- Appending: a single element with `x.append(item)`, or an entire list with `x.extend(y)`
- Insertion: `x.insert(index, item)`
- Sorting:
  - By value: `x.sort()`, or `sorted(x)` which returns a new list
  - Reversal: `x.reverse()`, or `reversed(x)` which returns a value
- Querying:
  - Membership test: `item in x`
  - Occurrence count: `x.count(item)`
  - Return the index...
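The query operations in the list above work like this:

```python
x = [1, 2, 3, 2, 2]
print(2 in x)       # membership test
print(x.count(2))   # number of occurrences
print(x.index(2))   # index of the first occurrence
```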
x = [1, 2, 3, 4, 5]
print(x[0], x[4])
1 5
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Python supports negative indices; for example, `x[-1]` refers to the last element of the list:
x = [1, 2, 3, 4, 5]
print(x[-2])
4
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Python supports a slicing syntax that selects multiple elements by specifying a start index, an end index, and a step.

- `x[start:end]`: selects from `x[start]` up to `x[end-1]`
- `x[start:end:step]`: selects every `step`-th element starting from `x[start]`, up to `x[end-1]` (or the last element before it that can still be reached). The step can be negative, in which case `start >= end` is required
- `x[start:]` or `x[:end]`: selects from `x[start]` to the end, or from the beginning up to `x[end-1]`. This simply omits one side of the colon; the empty value `None` can also be used as a placeholder

Important: slicing stops at element `end-1`...
# Select indices 0 through 3; note the endpoint is excluded
x = [1, 2, 3, 4, 5]
print(x[0:4])

# Select indices 0 through 3 (or 4), taking every 2nd element
print(x[0:4:2], x[0:5:2])

# Negative step
print(x[::-1])

# From index 1 to the end, or from the start to the second-to-last item
print(x[1:], x[:-1])

# A part may also be omitted, which is equivalent to None
print(x[::2], x[None:None:2])
[1, 3, 5] [1, 3, 5]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Assignment can be done directly on a slice:
x = [1, 2, 3, 4, 5]
x[:2] = [6, 7]  # essentially unpacks the right-hand side into the two element slots
print(x)
[6, 7, 3, 4, 5]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Number of Elements

Python's `len()` is a standalone function; it is not invoked as `x.len()`. The design difference behind this is worth pondering. The function works not only on lists but on other sequence types as well.
x = [1, 2, 3, 4, 5]
print(len(x))
5
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Appending

Python appends an element with `x.append()`:
x = [1, 2, 3, 4, 5]
x.append(-2)
print(x)
[1, 2, 3, 4, 5, -2]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
To append an entire list, use `x.extend()`:
x = [1, 2, 3, 4, 5]
x.extend([-2, -1])
print(x)
[1, 2, 3, 4, 5, -2, -1]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Note that built-in mutating methods like `x.append()` do not return a value, so you cannot assign the result to another variable:
y = [1, 2, 3, 4, 5].append(-2)
print(y)  # an ineffective assignment, because append() returns no value
None
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
To get this "return value" effect, consider one of the following two approaches:

- Plus sign: Python can concatenate two lists with `+`, "merging" them into one
- Star: Python can unpack a list with a leading `*`
x = [1, 2, 3]
y1 = x + [4, 5]
y2 = [*x, 4, 5]
print(y1, y2)
[1, 2, 3, 4, 5] [1, 2, 3, 4, 5]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Insertion

`x.insert(index, item)` inserts an element at position `index`; the elements from position `index` onward shift back by one:
x = [1, 2, 3]
x.insert(1, -1)  # insert at position x[1]
print(x, x[1])
[1, -1, 2, 3] -1
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
To get a "return value" effect like in the previous subsection, the plus-sign and star approaches both work again:
x = [1, 2, 3]
item = -1
y1 = x[:1] + [item] + x[1:]
y2 = [*x[:1], item, *x[1:]]
print(y1, y2)
[1, -1, 2, 3] [1, -1, 2, 3]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Sorting

By value

`x.sort()` sorts a list, in ascending order by default. Python's sort is a stable sort.
x = [1, 4, 3, 2]
x.sort()
print(x)
[1, 2, 3, 4]
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
Passing the `reverse=True` option sorts in descending order.
x = [1, 4, 3, 2]
x.sort(reverse=True)
print(x)
[4, 3, 2, 1]
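The built-in `sorted()` mentioned earlier returns a new sorted list and leaves the original untouched; it also accepts a `key` function for custom orderings (the `key` parameter is standard Python, though not covered above):

```python
x = [1, 4, 3, 2]
y = sorted(x)   # returns a new sorted list; x is unchanged
print(x, y)

words = ["pear", "fig", "banana"]
print(sorted(words, key=len))   # custom ordering: by string length
```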
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog
If the elements are not numbers but lists or other sequences, sorting uses `<` comparison. Here is a sort over list elements:
x = [[1, 2], [4, 3, 2], [3, 4], [3, 2]]
x.sort()
print(x)
[[1, 2], [3, 2], [3, 4], [4, 3, 2]]
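The `<` comparison used here is lexicographic: elements are compared pairwise until a difference is found, and a shorter sequence that matches a prefix of a longer one sorts first:

```python
print([1, 2] < [3, 2])      # decided by the first elements
print([3, 2] < [3, 4])      # first elements tie; the second decides
print([1, 2] < [1, 2, 0])   # a matching prefix sorts before the longer list
```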
MIT
docsrc/Python/VariableTypes.ipynb
wklchris/blog