This tff.federated_computation accepts an argument of federated type <float>@CLIENTS and returns a value of federated type <float>@SERVER. Federated computations may also go from server to clients, from clients to clients, or from server to server. Federated computations can also be composed like ordinary functions, as long as their type signatures match. To support development, TFF lets you invoke a tff.federated_computation as a Python function. For example, we can call:
get_average_temperature([68.5, 70.3, 69.8])
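Since TFF computations can be invoked like Python functions during development, the semantics of the call above can be sketched in plain Python. This is only an illustration of what a `<float>@CLIENTS -> <float>@SERVER` averaging computation computes (assuming, as the name suggests, that get_average_temperature averages client values); the helper name below is invented for the sketch and is not part of the TFF API:

```python
# Plain-Python sketch of a <float>@CLIENTS -> <float>@SERVER computation.
# The name is illustrative only, not TFF code.
def federated_mean_sketch(client_values):
    # Each element of client_values is one client's local float; the
    # result is a single value conceptually placed at the server.
    return sum(client_values) / len(client_values)

print(federated_mean_sketch([68.5, 70.3, 69.8]))  # prints approximately 69.53
```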
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
tensorflow/docs-l10n
apache-2.0
Non-eager computations and TensorFlow

There are two key restrictions to be aware of. First, when the Python interpreter encounters a tff.federated_computation decorator, the function is traced once and serialized for future use. Due to the decentralized nature of federated learning, this future use may occur elsewhere, such as in a remote execution environment. Therefore, TFF computations are fundamentally non-eager. This behavior is somewhat analogous to that of TensorFlow's tf.function decorator. Second, a federated computation can only consist of federated operators (such as tff.federated_mean); it cannot contain TensorFlow operations. TensorFlow code must be confined to blocks decorated with tff.tf_computation. Most ordinary TensorFlow code can be decorated directly, such as the following function, which takes a number and adds 0.5 to it.
@tff.tf_computation(tf.float32)
def add_half(x):
  return tf.add(x, 0.5)
Note that this also has a type signature, but no placements. For example, we can call:
str(add_half.type_signature)
Here we can see an important difference between tff.federated_computation and tff.tf_computation: the former has explicit placements, while the latter does not. We can use tff.tf_computation blocks in federated computations by specifying placements. Let's create a function that adds half, but only to federated floats at the clients. We can do this by using tff.federated_map, which applies a given tff.tf_computation while preserving the placement.
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def add_half_on_clients(x):
  return tff.federated_map(add_half, x)
This function is almost identical to add_half, except that it only accepts values placed at tff.CLIENTS and returns values with the same placement. We can see this in its type signature:
str(add_half_on_clients.type_signature)
To summarize: TFF operates on federated values. Each federated value has a <em>federated type</em>, consisting of a type (such as tf.float32) and a placement (such as tff.CLIENTS). Federated values can be transformed using federated computations, which must be decorated with tff.federated_computation and a federated type signature. TensorFlow code must be contained in blocks with tff.tf_computation decorators. These blocks can then be incorporated into federated computations.

Building your own Federated Learning algorithm, revisited

Now that we have had a glimpse of the Federated Core, we can build our own federated learning algorithm. Recall that above we defined an initialize_fn and a next_fn for our algorithm. The next_fn makes use of the client_update and server_update that we defined using pure TensorFlow code. However, to make our algorithm a federated computation, both next_fn and initialize_fn must each be a tff.federated_computation.

TensorFlow Federated blocks: creating the initialization computation

The initialize function is quite simple: we create a model using model_fn. However, we must separate out the TensorFlow code using tff.tf_computation.
@tff.tf_computation
def server_init():
  model = model_fn()
  return model.trainable_variables
We can then pass this directly into a federated computation using tff.federated_value.
@tff.federated_computation
def initialize_fn():
  return tff.federated_value(server_init(), tff.SERVER)
Creating next_fn

We now use our client and server update code to write the actual algorithm. First, we turn client_update into a tff.tf_computation that accepts a client dataset and server weights, and outputs an updated client weights tensor. We need the corresponding types to properly decorate our function. Luckily, the type of the server weights can be extracted directly from our model.
whimsy_model = model_fn()
tf_dataset_type = tff.SequenceType(whimsy_model.input_spec)
Let's look at the dataset type signature. Recall that we took 28 x 28 images (with integer labels) and flattened them.
str(tf_dataset_type)
We can also extract the model weights type by using our server_init function above.
model_weights_type = server_init.type_signature.result
Examining this type signature, we can see the architecture of our model!
str(model_weights_type)
We can now create the tff.tf_computation for the client update.
@tff.tf_computation(tf_dataset_type, model_weights_type)
def client_update_fn(tf_dataset, server_weights):
  model = model_fn()
  client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
  return client_update(model, tf_dataset, server_weights, client_optimizer)
The tff.tf_computation version of the server update can be defined in a similar way, using the types we have already extracted.
@tff.tf_computation(model_weights_type)
def server_update_fn(mean_client_weights):
  model = model_fn()
  return server_update(model, mean_client_weights)
Last but not least, we need to create the tff.federated_computation that brings this all together. This function accepts two federated values: one corresponding to the server weights (with placement tff.SERVER), and the other corresponding to the client datasets (with placement tff.CLIENTS). Both of these types were defined above! We simply need to give them the proper placement using tff.FederatedType.
federated_server_type = tff.FederatedType(model_weights_type, tff.SERVER)
federated_dataset_type = tff.FederatedType(tf_dataset_type, tff.CLIENTS)
Remember the four elements of an FL algorithm?

1. A server-to-client broadcast step
2. A local client update step
3. A client-to-server upload step
4. A server update step

Now that we have built up the pieces above, each part can be compactly represented by a single line of TFF code. This simplicity is why we had to take extra care to specify things such as federated types!
@tff.federated_computation(federated_server_type, federated_dataset_type)
def next_fn(server_weights, federated_dataset):
  # Broadcast the server weights to the clients.
  server_weights_at_client = tff.federated_broadcast(server_weights)

  # Each client computes their updated weights.
  client_weights = tff.federated_map(
      client_update_fn, (federated_dataset, server_weights_at_client))

  # The server averages these updates.
  mean_client_weights = tff.federated_mean(client_weights)

  # The server updates its model.
  server_weights = tff.federated_map(server_update_fn, mean_client_weights)

  return server_weights
We now have a tff.federated_computation both for initializing the algorithm and for running one step of it. To finish the algorithm, we pass these into tff.templates.IterativeProcess.
federated_algorithm = tff.templates.IterativeProcess(
    initialize_fn=initialize_fn,
    next_fn=next_fn
)
Let's look at the <em>type signatures</em> of the <code>initialize</code> and next functions of our iterative process.
str(federated_algorithm.initialize.type_signature)
This reflects the fact that federated_algorithm.initialize is a no-argument function that returns a single-layer model (with a 784-by-10 weight matrix and 10 bias units).
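As a quick sanity check, a single-layer model with a 784-by-10 weight matrix and 10 bias units has the following parameter count (a back-of-the-envelope NumPy illustration, independent of TFF):

```python
import numpy as np

# A single dense layer mapping 784 flattened pixels to 10 classes:
weights = np.zeros((784, 10))  # weight matrix
bias = np.zeros(10)            # bias units
n_params = weights.size + bias.size
print(n_params)  # 7850 trainable parameters
```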
str(federated_algorithm.next.type_signature)
Here we see that federated_algorithm.next accepts a server model and client data, and returns an updated server model.

Evaluating the algorithm

Let's run a few rounds and see how the loss changes. First, we define an evaluation function using the centralized approach discussed in the second tutorial. We first create a centralized evaluation dataset, and then apply the same preprocessing we used on the training data.
central_emnist_test = emnist_test.create_tf_dataset_from_all_clients()
central_emnist_test = preprocess(central_emnist_test)
Next, we write a function that accepts a server state and uses Keras to evaluate on the test dataset. If you are familiar with tf.Keras, this will all look familiar, but note the use of set_weights!
def evaluate(server_state):
  keras_model = create_keras_model()
  keras_model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
  )
  keras_model.set_weights(server_state)
  keras_model.evaluate(central_emnist_test)
Now let's initialize our algorithm and evaluate it on the test set.
server_state = federated_algorithm.initialize()
evaluate(server_state)
Let's train for a few rounds and see if anything changes.
for round in range(15):
  server_state = federated_algorithm.next(server_state, federated_train_data)

evaluate(server_state)
The read function has automatically identified the header and the format of each column. The result is a Table object, which brings some additional properties.
# Show the info of the data read
data.info
# Get the name of the columns
data.colnames
# Get just the values of a particular column
data['obsid']
# get the first element
data['obsid', 'redshift'][0]
07-Tables/reading_tables_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Astropy can read a variety of formats easily. The following example uses a quite
# Read the data from the source
table = ascii.read("ftp://cdsarc.u-strasbg.fr/pub/cats/VII/253/snrs.dat",
                   readme="ftp://cdsarc.u-strasbg.fr/pub/cats/VII/253/ReadMe")
# See the stats of the table
table.info('stats')
# If we want to see the first 10 entries
table[0:10]
# the units are also stored, we can extract them too
table['MajDiam'].quantity.to('rad')[0:3]
# Adding values of different columns
(table['RAh'] + table['RAm'] + table['RAs'])[0:3]
# adding values of different columns but being aware of column units
(table['RAh'].quantity + table['RAm'].quantity + table['RAs'].quantity)[0:3]
# Create a new column in the table
table['RA'] = table['RAh'].quantity + table['RAm'].quantity + table['RAs'].quantity
# Show table's new column
table['RA'][0:3]
# add a description to the new column
table['RA'].description = table['RAh'].description
# Now it does show the values
table['RA'][0:3]
# Using numpy to calculate the sin of the RA
import numpy as np
np.sin(table['RA'].quantity)
# Let's change the units...
import astropy.units as u
table['RA'].unit = u.hourangle
# does the sin now work?
np.sin(table['RA'].quantity)
Properties when reading. Reading a table offers many options; let's imagine the following easy example:
weather_data = """
# Country = Finland
# City = Helsinki
# Longitud = 24.9375
# Latitud = 60.170833
# Week = 32
# Year = 2015
day, precip, type
Mon,1.5,rain
Tues,,
Wed,1.1,snow
Thur,2.3,rain
Fri,0.2,
Sat,1.1,snow
Sun,5.4,snow
"""
# Read the table
weather = ascii.read(weather_data)
# Blank values are interpreted by default as bad/missing values
weather.info('stats')
# Let's define missing values for the columns we want:
weather['type'].fill_value = 'N/A'
weather['precip'].fill_value = -999
# Use filled to show the value filled.
weather.filled()
# We can see the meta as a dictionary, but not as key, value pairs
weather.meta
# To get the header as a table
header = ascii.read(weather.meta['comments'], delimiter='=',
                    format='no_header', names=['key', 'val'])
print(header)
When the values are not empty, the keyword fill_values of read has to be used.

Reading VOTables

VOTables are a special type of table which should be self-consistent and can be tied to a particular schema. This means the file will contain information about where the data comes from (and which query produced it) and the properties of each field, making it easier for a machine to ingest.
from astropy.io.votable import parse_single_table

# Read the example table from HELIO (hfc_ar.xml)
table = parse_single_table("hfc_ar.xml")
# See the fields of the table
table.fields
# extract one (NOAA_NUMBER) or all of the columns
NOAA = table.array['NOAA_NUMBER']
# Show the data
NOAA.data
# See the mask
NOAA.mask
# See the whole array.
NOAA
# Convert the table to an astropy table
asttable = table.to_table()
# See the table
asttable
# Different results because quantities are not used in the first call
print(np.sin(asttable['FEAT_HG_LAT_DEG'][0:5]))
print(np.sin(asttable['FEAT_HG_LAT_DEG'][0:5].quantity))
# And it can also be converted to other units
print(asttable[0:5]['FEAT_AREA_DEG2'].quantity.to('arcmin2'))
Below, we write another function to quickly obtain a training example (that is, a name and the language it belongs to). Here, line_index holds the indices of the letters in the selected surname; this is the part you need to implement.
import random

def random_training_pair():
    # Randomly pick a language
    category = random.choice(all_categories)
    # Randomly pick a surname from that language
    line = random.choice(category_lines[category])
    # Convert both the surname and the language to indices
    category_index = all_categories.index(category)
    line_index = []
    # You need to append the indices of the letters in `line` to line_index
    # Todo:
    return category, line, category_index, line_index

# Test the function above
for i in range(5):
    category, line, category_index, line_index = random_training_pair()
    print('category =', category, '/ line =', line)
    print('category =', category_index, '/ line =', line_index)
jizhi-pytorch-2/02_sentiment_analysis/homework-Copy1.ipynb
liufuyang/deep_learning_tutorial
mit
Writing the LSTM model. Now it's time to build the LSTM model. I have left some blanks in the model; you need to fill in the code at those blanks. If you run into trouble, refer to the code walkthrough in the course!
class LSTMNetwork(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super(LSTMNetwork, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        # The LSTM is built as follows:
        # an embedding layer that maps any input word (list) to a vector
        # (the vector dimension is related to the hidden layer)
        self.embedding =
        # then an LSTM hidden layer with hidden_size LSTM units, whose
        # number of layers is set by n_layers
        self.lstm =
        # followed by a fully connected layer with a softmax output
        self.fc =
        self.logsoftmax = nn.LogSoftmax()

    def forward(self, input, hidden=None):
        # First embed the input into word vectors
        embedded =
        # Note! An awkward aspect of PyTorch's LSTM layer is that the first
        # dimension of the input tensor must be the time step and the second
        # must be batch_size, so we need to reshape `embedded`.
        # Since we are not batching here, batch_size is 1.
        # The target shape is (input_list_size, batch_size, hidden_size)
        embedded = embedded.view(, , )
        # Call PyTorch's built-in LSTM layer. Note it takes two inputs: the
        # layer input and the hidden state itself.
        # `output` holds the hidden unit outputs for all steps; `hidden` is
        # the hidden layer state at the last time step.
        # Note that `hidden` is a tuple containing the hidden outputs at the
        # last time step as well as the cell state of each hidden unit.
        output, hidden = self.lstm(embedded, hidden)
        # Take the hidden unit outputs of the last time step and feed them
        # to the fully connected layer
        output = output[-1, ...]
        # Fully connected layer
        out = self.fc(output)
        # softmax
        out = self.logsoftmax(out)
        return out

    def initHidden(self):
        # Initialize the hidden units
        # Initialize the hidden unit outputs to all zeros.
        # Note that both hidden and cell have shape (layers, batch_size, hidden_size)
        hidden = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
        # Initialize the internal cell state to all zeros
        cell = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
        return (hidden, cell)
Training the network. Every time I train a model, I get a little excited! I have likewise left some blanks in the training code; you need to fill them in before training can run properly.
import time
import math

# Start training the LSTM network
n_epochs = 100000

# Build an instance of the LSTM network
lstm = LSTMNetwork(n_letters, 10, n_categories, 2)

# Define the loss function
cost = torch.nn.NLLLoss()

# Define the optimizer
optimizer = torch.optim.Adam(lstm.parameters(), lr=0.001)
records = []

# Helper to compute the elapsed training time
def time_since(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

start = time.time()
# Start training: 5 epochs in total, otherwise it easily overfits
for epoch in range(5):
    losses = []
    # Pick training data at random; each epoch trains "number of names" times.
    for i in range(all_line_num):
        category, line, y, x = random_training_pair()
        x = Variable(torch.LongTensor(x))
        y = Variable(torch.LongTensor(np.array([y])))
        optimizer.zero_grad()

        # Step 1: initialize the LSTM hidden unit state
        hidden =
        # Step 2: run the LSTM. Note that you do not need to loop over the
        # time steps by hand; hand it to PyTorch's LSTM layer directly, which
        # infers the number of time steps from the data dimensions.
        output =
        # Step 3: compute the loss
        loss =
        losses.append(loss.data.numpy()[0])

        # Backpropagation
        loss.backward()
        optimizer.step()

        # Every 3000 steps, run the validation set and print the results
        if i % 3000 == 0:
            # Check whether the model's prediction is correct
            guess, guess_i = category_from_output(output)
            correct = '✓' if guess == category else '✗ (%s)' % category
            # Compute the training progress
            training_process = (all_line_num * epoch + i) / (all_line_num * 5) * 100
            training_process = '%.2f' % training_process
            print('Epoch {}, train loss: {:.2f}, progress: {}%, ({}), name: {}, predicted country: {}, correct? {}'\
                  .format(epoch, np.mean(losses), float(training_process),
                          time_since(start), line, guess, correct))
            records.append([np.mean(losses)])

a = [i[0] for i in records]
plt.plot(a, label='Train Loss')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.legend()
Analyzing language similarity through surnames. The exciting moment has arrived! Below, we evaluate the trained model on 10,000 examples and plot the results, from which we can discover which languages' surnames are similar! You need to write the body of the evaluate function yourself.
import matplotlib.pyplot as plt

# Build an (18 x 18) square tensor
# to store the network's predictions
confusion = torch.zeros(n_categories, n_categories)

# Number of evaluation runs for the model
n_confusion = 10000

# Evaluation method: pass in a name, get the prediction.
# Its implementation resembles the first half of the train method --
# it is essentially the train method without backpropagation.
def evaluate(line_list):
    # Initialize the model's hidden state before calling the model
    hidden =
    # Don't forget to convert the input list to a torch.Variable
    line_variable =
    # Call the model
    output = lstm(line_variable, hidden)
    return output

# Loop ten thousand times
for i in range(n_confusion):
    # Randomly pick test data: a surname and its language
    category, line, category_index, line_list = random_training_pair()
    # Get the prediction
    output = evaluate(line_list)
    # Get the predicted language and its index
    guess, guess_i = category_from_output(output)
    # Row: the surname's actual language
    # Column: the language predicted by the model
    # Add 1 at that position in the matrix
    confusion[category_index][guess_i] += 1

# Normalize the data
for i in range(n_categories):
    confusion[i] = confusion[i] / confusion[i].sum()

# Set up the plot
fig = plt.figure()
ax = fig.add_subplot(111)
# Pass in the confusion matrix data
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)

# Set the language labels on both axes of the plot
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
The Bernoulli distribution we studied earlier answers the question of which of two outcomes ($Y \in \lbrace 0,1 \rbrace$) would be selected with probability, $p$. $$ \mathbb{P}(Y) = p^Y (1-p)^{ 1-Y } $$ We also know how to solve the corresponding likelihood function for the maximum likelihood estimate of $p$ given observations of the output, $\lbrace Y_i \rbrace_{i=1}^n$. However, now we want to include other factors in our estimate of $p$. For example, suppose we observe not just the outcomes, but a corresponding continuous variable, $x$. That is, the observed data is now $\lbrace (x_i,Y_i) \rbrace_{i=1}^n$ How can we incorporate $x$ into our estimation of $p$? The most straightforward idea is to model $p= a x + b$ where $a,b$ are parameters of a fitted line. However, because $p$ is a probability with value bounded between zero and one, we need to wrap this estimate in another function that can map the entire real line into the $[0,1]$ interval. The logistic (a.k.a. sigmoid) function has this property, $$ \theta(s) = \frac{e^s}{1+e^s} $$ Thus, the new parameterized estimate for $p$ is the following, <!-- Equation labels as ordinary links --> <div id="eq:prob"></div> $$ \begin{equation} \hat{p} = \theta(a x+b)= \frac{e^{a x + b}}{1+e^{a x + b}} \label{eq:prob} \tag{1} \end{equation} $$ This is usually expressed using the logit function, $$ \texttt{logit}(t)= \log \frac{t}{1-t} $$ as, $$ \texttt{logit}(p) = b + a x $$ More continuous variables can be accommodated easily as $$ \texttt{logit}(p) = b + \sum_k a_k x_k $$ This can be further extended beyond the binary case to multiple target labels. The maximum likelihood estimate of this uses numerical optimization methods that are implemented in Scikit-learn. Let's construct some data to see how this works. In the following, we assign class labels to a set of randomly scattered points in the two-dimensional plane,
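The relationship between the sigmoid and the logit can be checked numerically: they are inverses of each other, so applying the logit to the parameterized estimate $\hat{p}=\theta(ax+b)$ recovers the linear predictor $ax+b$. A small NumPy verification sketch (the particular values of $a$, $b$, $x$ are arbitrary):

```python
import numpy as np

def sigmoid(s):
    # theta(s) = e^s / (1 + e^s)
    return np.exp(s) / (1 + np.exp(s))

def logit(t):
    # logit(t) = log(t / (1 - t))
    return np.log(t / (1 - t))

a, b, x = 2.0, -1.0, 0.75
p_hat = sigmoid(a * x + b)
# logit(p_hat) recovers the linear predictor a*x + b
print(np.isclose(logit(p_hat), a * x + b))  # True
```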
%matplotlib inline
import numpy as np
from matplotlib.pylab import subplots

v = 0.9

@np.vectorize
def gen_y(x):
    if x < 5:
        return np.random.choice([0, 1], p=[v, 1-v])
    else:
        return np.random.choice([0, 1], p=[1-v, v])

xi = np.sort(np.random.rand(500)*10)
yi = gen_y(xi)
chapters/machine_learning/notebooks/logreg.ipynb
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
mit
Programming Tip. The np.vectorize decorator used in the code above makes it easy to avoid looping in code that uses Numpy arrays by embedding the looping semantics inside of the so-decorated function. Note, however, that this does not necessarily accelerate the wrapped function. It's mainly for convenience. Figure shows a scatter plot of the data we constructed in the above code, $\lbrace (x_i,Y_i) \rbrace$. As constructed, it is more likely that large values of $x$ correspond to $Y=1$. On the other hand, values of $x \in [4,6]$ of either category are heavily overlapped. This means that $x$ is not a particularly strong indicator of $Y$ in this region. Figure shows the fitted logistic regression curve against the same data. The points along the curve are the probabilities that each point lies in either of the two categories. For large values of $x$ the curve is near one, meaning that the probability that the associated $Y$ value is equal to one is high. On the other extreme, small values of $x$ mean that this probability is close to zero. Because there are only two possible categories, this means that the probability of $Y=0$ is thereby higher. The region in the middle corresponding to the middle probabilities reflects the ambiguity between the two categories because of the overlap in the data for this region. Thus, logistic regression cannot make a strong case for one category here. The following code fits the logistic regression model,
fig, ax = subplots()
_ = ax.plot(xi, yi, 'o', color='gray', alpha=.3)
_ = ax.axis(ymax=1.1, ymin=-0.1)
_ = ax.set_xlabel(r'$X$', fontsize=22)
_ = ax.set_ylabel(r'$Y$', fontsize=22)

from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(np.c_[xi], yi)

fig, ax = subplots()
xii = np.linspace(0, 10, 20)
_ = ax.plot(xii, lr.predict_proba(np.c_[xii])[:, 1], 'k-', lw=3)
_ = ax.plot(xi, yi, 'o', color='gray', alpha=.3)
_ = ax.axis(ymax=1.1, ymin=-0.1)
_ = ax.set_xlabel(r'$x$', fontsize=20)
_ = ax.set_ylabel(r'$\mathbb{P}(Y)$', fontsize=20)
<!-- dom:FIGURE: [fig-machine_learning/logreg_001.png, width=500 frac=0.75] This scatterplot shows the binary $Y$ variables and the corresponding $x$ data for each category. <div id="fig:logreg_001"></div> --> <!-- begin figure --> <div id="fig:logreg_001"></div> <p>This scatterplot shows the binary $Y$ variables and the corresponding $x$ data for each category.</p> <img src="fig-machine_learning/logreg_001.png" width=500> <!-- end figure --> <!-- dom:FIGURE: [fig-machine_learning/logreg_002.png, width=500 frac=0.75] This shows the fitted logistic regression on the data shown in [Figure](#fig:logreg_001). The points along the curve are the probabilities that each point lies in either of the two categories. <div id="fig:logreg_002"></div> --> <!-- begin figure --> <div id="fig:logreg_002"></div> <p>This shows the fitted logistic regression on the data shown in [Figure](#fig:logreg_001). The points along the curve are the probabilities that each point lies in either of the two categories.</p> <img src="fig-machine_learning/logreg_002.png" width=500> <!-- end figure --> For a deeper understanding of logistic regression, we need to alter our notation slightly and once again use our projection methods. More generally we can rewrite Equation eq:prob as the following, <!-- Equation labels as ordinary links --> <div id="eq:probbeta"></div> $$ \begin{equation} p(\mathbf{x}) = \frac{1}{1+\exp(-\boldsymbol{\beta}^T \mathbf{x})} \label{eq:probbeta} \tag{2} \end{equation} $$ where $\boldsymbol{\beta}, \mathbf{x}\in \mathbb{R}^n$. From our prior work on projection we know that the signed perpendicular distance between $\mathbf{x}$ and the linear boundary described by $\boldsymbol{\beta}$ is $\boldsymbol{\beta}^T \mathbf{x}/\Vert\boldsymbol{\beta}\Vert$. 
This means that the probability that is assigned to any point in $\mathbb{R}^n$ is a function of how close that point is to the linear boundary described by the following equation, $$ \boldsymbol{\beta}^T \mathbf{x} = 0 $$ But there is something subtle hiding here. Note that for any $\alpha\in\mathbb{R}$, $$ \alpha\boldsymbol{\beta}^T \mathbf{x} = 0 $$ describes the same hyperplane. This means that we can multiply $\boldsymbol{\beta}$ by an arbitrary scalar and still get the same geometry. However, because of $\exp(-\alpha\boldsymbol{\beta}^T \mathbf{x})$ in Equation eq:probbeta, this scaling determines the intensity of the probability attributed to $\mathbf{x}$. This is illustrated in Figure. The panel on the left shows two categories (squares/circles) split by the dotted line that is determined by $\boldsymbol{\beta}^T\mathbf{x}=0$. The background colors shows the probabilities assigned to points in the plane. The right panel shows that by scaling with $\alpha$, we can increase the probabilities of class membership for the given points, given the exact same geometry. The points near the boundary have lower probabilities because they could easily be on the opposite side. However, by scaling by $\alpha$, we can raise those probabilities to any desired level at the cost of driving the points further from the boundary closer to one. Why is this a problem? By driving the probabilities arbitrarily using $\alpha$, we can overemphasize the training set at the cost of out-of-sample data. That is, we may wind up insisting on emphatic class membership of yet unseen points that are close to the boundary that otherwise would have more equivocal probabilities (say, near $1/2$). Once again, this is another manifestation of bias/variance trade-off. <!-- dom:FIGURE: [fig-machine_learning/logreg_003.png, width=500 frac=1.25] Scaling can arbitrarily increase the probabilities of points near the decision boundary. 
<div id="fig:logreg_003"></div> --> <!-- begin figure --> <div id="fig:logreg_003"></div> <p>Scaling can arbitrarily increase the probabilities of points near the decision boundary.</p> <img src="fig-machine_learning/logreg_003.png" width=500> <!-- end figure --> Regularization is a method that controls this effect by penalizing the size of $\beta$ as part of its solution. Algorithmically, logistic regression works by iteratively solving a sequence of weighted least squares problems. Regularization adds a $\Vert\boldsymbol{\beta}\Vert/C$ term to the least squares error. To see this in action, let's create some data from a logistic regression and see if we can recover it using Scikit-learn. Let's start with a scatter of points in the two-dimensional plane,
x0, x1 = np.random.rand(2, 20)*6 - 3
X = np.c_[x0, x1, x1*0 + 1]  # stack as columns
Note that X has a third column of all ones. This is a trick to allow the corresponding line to be offset from the origin in the two-dimensional plane. Next, we create a linear boundary and assign the class probabilities according to proximity to the boundary.
beta = np.array([1, -1, 1])  # last coordinate for affine offset
prd = X.dot(beta)
probs = 1/(1 + np.exp(-prd/np.linalg.norm(beta)))
c = (prd > 0)  # boolean array class labels
This establishes the training data. The next block creates the logistic regression object and fits the data.
lr = LogisticRegression()
_ = lr.fit(X[:, :-1], c)
Note that we have to omit the third dimension because of how Scikit-learn internally breaks down the components of the boundary. The resulting code extracts the corresponding $\boldsymbol{\beta}$ from the LogisticRegression object.
betah = np.r_[lr.coef_.flat,lr.intercept_]
Programming Tip. The Numpy np.r_ object provides a quick way to stack Numpy arrays horizontally instead of using np.hstack. The resulting boundary is shown in the left panel in Figure. The crosses and triangles represent the two classes we created above, along with the separating gray line. The logistic regression fit produces the dotted black line. The dark circle is the point that logistic regression categorizes incorrectly. The regularization parameter is $C=1$ by default. Next, we can change the strength of the regularization parameter as in the following,
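For one-dimensional inputs, np.r_ concatenates along the first axis just as np.hstack does; a quick illustration (the array values are arbitrary):

```python
import numpy as np

coef = np.array([1.5, -0.5])
intercept = np.array([0.25])

# np.r_ concatenates along the first axis, like np.hstack for 1-D input
stacked = np.r_[coef, intercept]
print(stacked)  # [ 1.5  -0.5   0.25]
print(np.array_equal(stacked, np.hstack([coef, intercept])))  # True
```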
lr = LogisticRegression(C=1000)
Initialize
folder = '../sumSines/'
metric = get_metric()
MES/integrals/volumeIntegral/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Define the function to volume integrate

NOTE: These do not need to be fulfilled in order to get convergence:
- z must be periodic
- The field $f(\rho, \theta)$ must be of class infinity in $z=0$ and $z=2\pi$
- The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$

But these need to be fulfilled:
1. The field $f(\rho, \theta)$ must be single valued when $\rho\to0$
2. Eventual BCs in $\rho$ must be satisfied
# We need Lx and Ly
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
Ly = eval(myOpts.geom['Ly'])

# z sin function
# NOTE: The function is not continuous over the origin
s = 2
c = pi
w = pi/2
the_vars['f'] = sin(z + 2.0) + sin(2.0*z + 2.1) + sin(3.0*z)
Calculating the solution
rhoInt = integrate(the_vars['f']*x, (x, 0, Lx))
rhoYInt = integrate(rhoInt, (y, 0, Ly))
the_vars['S'] = (integrate(rhoYInt, (z, 0, 2*np.pi))).evalf()
Very simple baseline

Now let's write what in NLP jargon is called a baseline, that is, a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.

Baseline method:
- cycle through each token of the text
- if the token starts with a capital letter, it's a named entity (only one type, i.e. Entity)
"T".istitle() "t".istitle() # we need a list to store the tagged tokens tagged_tokens = [] # tokenisation is done by using the string method `split(" ")` # that splits a string upon white spaces for n, token in enumerate(de_bello_gallico_book1.split(" ")): if(token.istitle()): tagged_tokens.append((token, "Entity")) #else: #tagged_tokens.append((token, "O"))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-GBclass.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token
        and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if(token.istitle()):
            tagged_tokens.append((token, "Entity"))
        #else:
            #tagged_tokens.append((token, "O"))
    return tagged_tokens
We can slightly modify our function so that it prints the snippet of text where an entity is found:
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token
        and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if(token.istitle()):
            tagged_tokens.append((token, "Entity"))
            context = input_text.split(" ")[n-5:n+5]
            print("Found entity \"%s\" in context \"%s\""%(token, " ".join(context)))
        #else:
            #tagged_tokens.append((token, "O"))
    return tagged_tokens

tagged_text_baseline = extract_baseline(de_bello_gallico_book1)
tagged_text_baseline[:150]
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-GBclass.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
The output looks slightly different from that of our baseline function (the size of the tuples in the list varies). But we can write a function to fix this; let's call it reshape_cltk_output:
def reshape_cltk_output(tagged_tokens):
    reshaped_output = []
    for tagged_token in tagged_tokens:
        if(len(tagged_token)==1):
            continue
            #reshaped_output.append((tagged_token[0], "O"))
        else:
            reshaped_output.append((tagged_token[0], tagged_token[1]))
    return reshaped_output
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-GBclass.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Let's have a look at the output
tagged_text_nltk[0:150]
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-GBclass.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Wrap up At this point we can "compare" the output of the three different methods we used, again by using the zip function.
list(zip(tagged_text_baseline[:50], tagged_text_cltk[:50], tagged_text_nltk[:50]))

for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:150], tagged_text_cltk[:150], tagged_text_nltk[:150]):
    print("Baseline: %s\nCLTK: %s\nNLTK: %s\n" % (baseline_out, cltk_out, nltk_out))
participants_notebooks/Sunoikisis - Named Entity Extraction 1b-GBclass.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Convolutional Autoencoders Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification. In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using NeuralConvolutionalLayer objects:
from shogun import DynamicObjectArray, NeuralInputLayer, NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR

conv_layers = DynamicObjectArray()
# 16x16 single channel images
conv_layers.append_element(NeuralInputLayer(16,16,1))
# the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 5 8x8 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2, 2, 2))
# the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# the first decoding layer: same structure as the first encoding layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2))
# the second decoding layer: same structure as the input layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 1, 2, 2))

conv_ae = DeepAutoencoder(conv_layers)
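The 2x2 max-pooling described above (dividing a feature map into non-overlapping regions and taking each region's maximum) can be sketched in plain NumPy. This is only an illustration of the operation itself, not of Shogun's implementation:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Divide the map into non-overlapping 2x2 regions and take each region's maximum."""
    h, w = feature_map.shape
    # reshape so that axis 1 and axis 3 index positions inside each 2x2 region
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1, 2, 5, 6],
               [3, 4, 7, 8],
               [9, 1, 2, 3],
               [5, 6, 4, 0]])
pooled = max_pool_2x2(fm)
print(pooled)  # a 2x2 map: [[4, 8], [9, 4]]
```

Note how a 4x4 map shrinks to 2x2, which is exactly why the 16x16 inputs above become 8x8 and then 4x4 feature maps.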
doc/ipython-notebooks/neuralnets/autoencoders.ipynb
lambday/shogun
bsd-3-clause
Data augmentation <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random (but realistic) transformations, such as image rotation. You will learn how to apply data augmentation in two ways: Use the Keras preprocessing layers, such as tf.keras.layers.Resizing, tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation. Use the tf.image methods, such as tf.image.flip_left_right, tf.image.rgb_to_grayscale, tf.image.adjust_brightness, tf.image.central_crop, and tf.image.stateless_random*. Setup
import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras import layers
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Download a dataset This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. If you would like to learn about other ways of importing data, check out the load images tutorial.
(train_ds, val_ds, test_ds), metadata = tfds.load( 'tf_flowers', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, )
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
The flowers dataset has five classes.
num_classes = metadata.features['label'].num_classes
print(num_classes)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
get_label_name = metadata.features['label'].int2str

image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Use Keras preprocessing layers Resizing and rescaling You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing), and to rescale pixel values (with tf.keras.layers.Rescaling).
IMG_SIZE = 180

resize_and_rescale = tf.keras.Sequential([
  layers.Resizing(IMG_SIZE, IMG_SIZE),
  layers.Rescaling(1./255)
])
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Note: The rescaling layer above standardizes pixel values to the [0, 1] range. If instead you wanted it to be [-1, 1], you would write tf.keras.layers.Rescaling(1./127.5, offset=-1). You can visualize the result of applying these layers to an image.
result = resize_and_rescale(image)
_ = plt.imshow(result)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Verify that the pixels are in the [0, 1] range:
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Data augmentation You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation. Let's create a few preprocessing layers and apply them repeatedly to the same image.
data_augmentation = tf.keras.Sequential([
  layers.RandomFlip("horizontal_and_vertical"),
  layers.RandomRotation(0.2),
])

# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)

plt.figure(figsize=(10, 10))
for i in range(9):
  augmented_image = data_augmentation(image)
  ax = plt.subplot(3, 3, i + 1)
  plt.imshow(augmented_image[0])
  plt.axis("off")
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
There are a variety of preprocessing layers you can use for data augmentation including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others. Two options to use the Keras preprocessing layers There are two ways you can use these preprocessing layers, with important trade-offs. Option 1: Make the preprocessing layers part of your model
model = tf.keras.Sequential([ # Add the preprocessing layers you created earlier. resize_and_rescale, data_augmentation, layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), # Rest of your model. ])
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
There are two important points to be aware of in this case: Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration. When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This can save you from the effort of having to reimplement that logic server-side. Note: Data augmentation is inactive at test time so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict). Option 2: Apply the preprocessing layers to your dataset
aug_ds = train_ds.map( lambda x, y: (resize_and_rescale(x, training=True), y))
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case: Data augmentation will happen asynchronously on the CPU, and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing, using Dataset.prefetch, shown below. In this case the preprocessing layers will not be exported with the model when you call Model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export. You can find an example of the first option in the Image classification tutorial. Let's demonstrate the second option here. Apply the preprocessing layers to the datasets Configure the training, validation, and test datasets with the Keras preprocessing layers you created earlier. You will also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O becoming blocking. (Learn more about dataset performance in the Better performance with the tf.data API guide.) Note: Data augmentation should only be applied to the training set.
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE

def prepare(ds, shuffle=False, augment=False):
  # Resize and rescale all datasets.
  ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
              num_parallel_calls=AUTOTUNE)

  if shuffle:
    ds = ds.shuffle(1000)

  # Batch all datasets.
  ds = ds.batch(batch_size)

  # Use data augmentation only on the training set.
  if augment:
    ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
                num_parallel_calls=AUTOTUNE)

  # Use buffered prefetching on all datasets.
  return ds.prefetch(buffer_size=AUTOTUNE)

train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Train a model For completeness, you will now train a model using the datasets you have just prepared. The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal is to show you the mechanics).
model = tf.keras.Sequential([ layers.Conv2D(16, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(32, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Conv2D(64, 3, padding='same', activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ])
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Train for a few epochs:
epochs = 5
history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Custom data augmentation You can also create custom data augmentation layers. This section of the tutorial shows two ways of doing so: First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code. Next, you will write a new layer via subclassing, which gives you more control. Both layers will randomly invert the colors in an image, according to some probability.
def random_invert_img(x, p=0.5):
  # invert the colors with probability p, otherwise leave the image unchanged
  if tf.random.uniform([]) < p:
    x = (255 - x)
  return x

def random_invert(factor=0.5):
  return layers.Lambda(lambda x: random_invert_img(x, factor))

random_invert = random_invert()

plt.figure(figsize=(10, 10))
for i in range(9):
  augmented_image = random_invert(image)
  ax = plt.subplot(3, 3, i + 1)
  plt.imshow(augmented_image[0].numpy().astype("uint8"))
  plt.axis("off")
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Next, implement a custom layer by subclassing:
class RandomInvert(layers.Layer):
  def __init__(self, factor=0.5, **kwargs):
    super().__init__(**kwargs)
    self.factor = factor

  def call(self, x):
    # pass the layer's factor through to the inversion probability
    return random_invert_img(x, self.factor)

_ = plt.imshow(RandomInvert()(image)[0])
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Both of these layers can be used as described in options 1 and 2 above. Using tf.image The above Keras preprocessing utilities are convenient. But, for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out TensorFlow Addons Image: Operations and TensorFlow I/O: Color Space Conversions.) Since the flowers dataset was previously configured with data augmentation, let's reimport it to start fresh:
(train_ds, val_ds, test_ds), metadata = tfds.load( 'tf_flowers', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, )
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Retrieve an image to work with:
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Let's use the following function to visualize and compare the original and augmented images side-by-side:
def visualize(original, augmented):
  fig = plt.figure()
  plt.subplot(1, 2, 1)
  plt.title('Original image')
  plt.imshow(original)

  plt.subplot(1, 2, 2)
  plt.title('Augmented image')
  plt.imshow(augmented)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Data augmentation Flip an image Flip an image horizontally with tf.image.flip_left_right:
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Grayscale an image You can grayscale an image with tf.image.rgb_to_grayscale:
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Saturate an image Saturate an image with tf.image.adjust_saturation by providing a saturation factor:
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Change image brightness Change the brightness of image with tf.image.adjust_brightness by providing a brightness factor:
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Center crop an image Crop the image from center up to the image part you desire using tf.image.central_crop:
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Rotate an image Rotate an image by 90 degrees with tf.image.rot90:
rotated = tf.image.rot90(image)
visualize(image, rotated)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Random transformations Warning: There are two sets of random image operations: tf.image.random* and tf.image.stateless_random*. Using tf.image.random* operations is strongly discouraged as they use the old RNGs from TF 1.x. Instead, please use the random image operations introduced in this tutorial. For more information, refer to Random number generation. Applying random transformations to the images can further help generalize and expand the dataset. The current tf.image API provides eight such random image operations (ops): tf.image.stateless_random_brightness tf.image.stateless_random_contrast tf.image.stateless_random_crop tf.image.stateless_random_flip_left_right tf.image.stateless_random_flip_up_down tf.image.stateless_random_hue tf.image.stateless_random_jpeg_quality tf.image.stateless_random_saturation These random image ops are purely functional: the output only depends on the input. This makes them simple to use in high performance, deterministic input pipelines. They require a seed value be input each step. Given the same seed, they return the same results independent of how many times they are called. Note: seed is a Tensor of shape (2,) whose values are any integers. In the following sections, you will: 1. Go over examples of using random image operations to transform an image. 2. Demonstrate how to apply random transformations to a training dataset. Randomly change image brightness Randomly change the brightness of image using tf.image.stateless_random_brightness by providing a brightness factor and seed. The brightness factor is chosen randomly in the range [-max_delta, max_delta) and is associated with the given seed.
for i in range(3):
  seed = (i, 0)  # tuple of size (2,)
  stateless_random_brightness = tf.image.stateless_random_brightness(
      image, max_delta=0.95, seed=seed)
  visualize(image, stateless_random_brightness)
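To see what "purely functional" means in practice, here is a plain-Python sketch of the stateless contract: the result is a deterministic function of the seed alone. This uses Python's random module for illustration only, not TF's Philox-based implementation:

```python
import random

def stateless_brightness_delta(seed, max_delta):
    # the delta is a pure function of the seed: no hidden RNG state is advanced
    rng = random.Random(seed)
    return rng.uniform(-max_delta, max_delta)

# same (2,)-shaped seed tuple, same delta, no matter how often it is called
d1 = stateless_brightness_delta((1, 0), 0.95)
d2 = stateless_brightness_delta((1, 0), 0.95)
assert d1 == d2
```

This determinism is what makes the stateless ops safe to use inside parallel, repeatable input pipelines.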
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Randomly change image contrast Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
for i in range(3):
  seed = (i, 0)  # tuple of size (2,)
  stateless_random_contrast = tf.image.stateless_random_contrast(
      image, lower=0.1, upper=0.9, seed=seed)
  visualize(image, stateless_random_contrast)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Randomly crop an image Randomly crop image using tf.image.stateless_random_crop by providing target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
for i in range(3):
  seed = (i, 0)  # tuple of size (2,)
  stateless_random_crop = tf.image.stateless_random_crop(
      image, size=[210, 300, 3], seed=seed)
  visualize(image, stateless_random_crop)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Apply augmentation to a dataset Let's first download the image dataset again, in case it was modified in the previous sections.
(train_datasets, val_ds, test_ds), metadata = tfds.load( 'tf_flowers', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, )
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Next, define a utility function for resizing and rescaling the images. This function will be used in unifying the size and scale of images in the dataset:
def resize_and_rescale(image, label):
  image = tf.cast(image, tf.float32)
  image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
  image = (image / 255.0)
  return image, label
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Let's also define the augment function that can apply the random transformations to the images. This function will be used on the dataset in the next step.
def augment(image_label, seed):
  image, label = image_label
  image, label = resize_and_rescale(image, label)
  image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
  # Make a new seed.
  new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
  # Random crop back to the original size.
  image = tf.image.stateless_random_crop(
      image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
  # Random brightness.
  image = tf.image.stateless_random_brightness(
      image, max_delta=0.5, seed=new_seed)
  image = tf.clip_by_value(image, 0, 1)
  return image, label
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Option 1: Using tf.data.experimental.Counter Create a tf.data.experimental.Counter object (let's call it counter) and Dataset.zip the dataset with (counter, counter). This will ensure that each image in the dataset gets associated with a unique value (of shape (2,)) based on counter which later can get passed into the augment function as the seed value for random transformations.
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Map the augment function to the training dataset:
train_ds = (
    train_ds
    .shuffle(1000)
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

val_ds = (
    val_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

test_ds = (
    test_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Option 2: Using tf.random.Generator Create a tf.random.Generator object with an initial seed value. Calling the make_seeds function on the same generator object always returns a new, unique seed value. Define a wrapper function that: 1) calls the make_seeds function; and 2) passes the newly generated seed value into the augment function for random transformations. Note: tf.random.Generator objects store RNG state in a tf.Variable, which means it can be saved as a checkpoint or in a SavedModel. For more details, please refer to Random number generation.
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')

# Create a wrapper function for updating seeds.
def f(x, y):
  seed = rng.make_seeds(2)[0]
  image, label = augment((x, y), seed)
  return image, label
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Map the wrapper function f to the training dataset, and the resize_and_rescale function to the validation and test sets:
train_ds = (
    train_datasets
    .shuffle(1000)
    .map(f, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

val_ds = (
    val_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

test_ds = (
    test_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)
site/en/tutorials/images/data_augmentation.ipynb
tensorflow/docs
apache-2.0
Fitting multiple linear regression to the training set
from sklearn.linear_model import LinearRegression

regressor = LinearRegression()
regressor.fit(X_train, Y_train)

# Predicting the test set results
Y_pred = regressor.predict(X_test)
Part-2-Regression/P2-Multiple_Linear_Regression.ipynb
snbhanja/A-Z-Machine-Learning---Udemy-
mit
Building the optimal model using Backward elimination Previously, to build the multiple regression model, we used all the independent variables. Out of these, some independent variables are highly statistically significant and some are not.
import statsmodels.api as sm

# The multiple linear regression equation is
# y = b0 + b1.x1 + b2.x2 + ... + bn.xn
# statsmodels requires that b0 is actually b0.x0,
# where x0 is a column vector of 1's,
# so we add this vector of 1's to X
X = np.append(arr=np.ones((50, 1)).astype(int), values=X, axis=1)
# np.append adds a new row or column (a column if axis=1);
# we want to keep the column of 1's as the first column, so np.ones is
# the first argument and X is appended to it (X has 50 rows);
# astype(int) is required to convert the 1's to int, otherwise
# you will get a type error
X[1]

# Backward elimination steps
# Step 1: Select a significance level to stay in the model (e.g. SL = 0.05)
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
# Step 2: Fit the full model with all possible predictors
regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
# Step 3: Consider the predictor with the highest p-value.
# If P > SL, go to Step 4, otherwise finish (the model is final)
regressor_OLS.summary()

# P-value of x2 is highest, so we remove x2
# Step 4: Remove the predictor whose p-value is highest and more than SL = 0.05
# (here we remove the column at index 2)
# Step 5: Fit the model without this variable
X_opt = X[:, [0, 1, 3, 4, 5]]
regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
regressor_OLS.summary()

# here x1 has the highest p-value, so we remove the column at index 1
X_opt = X[:, [0, 3, 4, 5]]
regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
regressor_OLS.summary()

# here x2 has the highest p-value, so we remove the column at index 4
X_opt = X[:, [0, 3, 5]]
regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
regressor_OLS.summary()

# here x2 has the highest p-value, so we remove the column at index 5
X_opt = X[:, [0, 3]]
regressor_OLS = sm.OLS(endog=Y, exog=X_opt).fit()
regressor_OLS.summary()

X[1]

# So the R&D Spend column has the largest impact on profit
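The manual, step-by-step elimination above can also be written as a loop. In the sketch below, get_pvalues is a hypothetical helper standing in for refitting sm.OLS and reading regressor_OLS.pvalues; only the elimination logic is shown:

```python
def backward_elimination(columns, get_pvalues, sl=0.05):
    """Repeatedly drop the predictor with the highest p-value until all are below sl."""
    columns = list(columns)
    while columns:
        # hypothetical helper: refit the model on `columns`, return {column: p-value}
        pvalues = get_pvalues(columns)
        worst = max(pvalues, key=pvalues.get)
        if pvalues[worst] <= sl:
            break  # every remaining predictor is significant
        columns.remove(worst)
    return columns

# toy p-values standing in for a fitted OLS summary (not real regression output)
fake_p = {"const": 0.0, "x1": 0.60, "x2": 0.99, "x3": 0.01, "x4": 0.55, "x5": 0.06}
selected = backward_elimination(fake_p.keys(),
                                lambda cols: {c: fake_p[c] for c in cols})
print(selected)  # ['const', 'x3']
```

In the real workflow, get_pvalues would fit sm.OLS on the selected columns of X and read the p-values from the fitted results.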
Part-2-Regression/P2-Multiple_Linear_Regression.ipynb
snbhanja/A-Z-Machine-Learning---Udemy-
mit
Load and Prepare the Data Check out the first example notebook to learn more about the data and format.
from tsfresh.examples.robot_execution_failures import download_robot_execution_failures, load_robot_execution_failures

download_robot_execution_failures()
df_ts, y = load_robot_execution_failures()
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
We want to use the extracted features to predict for each of the robot executions, if it was a failure or not. Therefore our basic "entity" is a single robot execution given by a distinct id. A dataframe with these identifiers as index needs to be prepared for the pipeline.
X = pd.DataFrame(index=y.index)

# Split data into train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y)
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
Build the pipeline We build a sklearn pipeline that consists of a feature extraction step (RelevantFeatureAugmenter) with a subsequent RandomForestClassifier. The RelevantFeatureAugmenter takes roughly the same arguments as extract_features and select_features do.
ppl = Pipeline([ ('augmenter', RelevantFeatureAugmenter(column_id='id', column_sort='time')), ('classifier', RandomForestClassifier()) ])
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
<div class="alert alert-warning"> Here comes the tricky part! The input to the pipeline will be our dataframe `X`, which has one row per identifier. It is currently empty. But from which time series data should the `RelevantFeatureAugmenter` actually extract the features? We need to pass the time series data (stored in `df_ts`) to the transformer. </div> In this case, df_ts contains the time series of both the train and test set. If you have different dataframes for the train and test set, you have to call set_params two times (see further below on how to deal with two independent data sets).
ppl.set_params(augmenter__timeseries_container=df_ts);
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
We are now ready to fit the pipeline
ppl.fit(X_train, y_train)
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
The augmenter has used the input time series data to extract time series features for each of the identifiers in X_train, and selected only the relevant ones using the passed y_train as target. These features have been added to X_train as new columns. The classifier can now use these features during training. Prediction During inference, the augmenter extracts only the relevant features it identified in the training phase, and the classifier predicts the target using these features.
y_pred = ppl.predict(X_test)
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
So, finally we inspect the performance:
print(classification_report(y_test, y_pred))
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
You can also find out, which columns the augmenter has selected
ppl.named_steps["augmenter"].feature_selector.relevant_features
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
<div class="alert alert-info"> In this example we passed an empty (except for the index) `X_train` or `X_test` into the pipeline. However, you can also fill the input with other features you have (e.g. features extracted from the metadata) or even use other pipeline components before. </div> Separating the time series data containers In the example above we passed a single df_ts into the RelevantFeatureAugmenter, which was used both for training and predicting. During training, only the data with the ids from X_train were extracted, and during prediction the rest. However, it is perfectly fine to call set_params twice: once before training and once before prediction. This can be handy if you, for example, dump the trained pipeline to disk and re-use it only later for prediction. You only need to make sure that the ids of the entities you use during training/prediction are actually present in the passed time series data.
df_ts_train = df_ts[df_ts["id"].isin(y_train.index)]
df_ts_test = df_ts[df_ts["id"].isin(y_test.index)]

ppl.set_params(augmenter__timeseries_container=df_ts_train)
ppl.fit(X_train, y_train)

import pickle
with open("pipeline.pkl", "wb") as f:
    pickle.dump(ppl, f)
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
Later: load the fitted model and do predictions on new, unseen data
import pickle
with open("pipeline.pkl", "rb") as f:
    ppl = pickle.load(f)

ppl.set_params(augmenter__timeseries_container=df_ts_test)

y_pred = ppl.predict(X_test)
print(classification_report(y_test, y_pred))
notebooks/examples/02 sklearn Pipeline.ipynb
blue-yonder/tsfresh
mit
Get Classifier Results
# Assumed from earlier notebook cells: `combined` (the matched COSMOS/HSC catalog)
# and `HSC` (the HSC catalog object). Standard imports, repeated here for clarity:
import pathlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.special import expit

plots_dir = pathlib.Path("./plots/")
if not plots_dir.is_dir():
    plots_dir.mkdir()

features = combined.loc[:, ["g_minus_r", "r_minus_i", "i_minus_z", "z_minus_y",
                            "icmodel_mag", "photoz_best",
                            # The risk of photoz_best being outside of the range
                            # z_true +- 0.15(1+z_true). It ranges from 0 (safe) to 1 (risky)
                            "photoz_risk_best",
                            ]]
target = combined.loc[:, ["low_z_low_mass"]]
target.mean()

COSMOS_field_area = 2  # sq. degree
N_COSMOS_total = HSC.df.shape[0]
N_COSMOS_good = combined.shape[0]
true_dwarf_density = target.sum().values[0] / COSMOS_field_area
print("true dwarf density: {:.2f} / sq. deg.".format(true_dwarf_density))

filename = "../data/galaxy_images_training/2017_09_26-dwarf_galaxy_scores.csv"
df_dwarf_prob = pd.read_csv(filename,
                            # index_label="COSMOS_id",
                            )
df_dwarf_prob.head()

from sklearn.metrics import precision_recall_curve, average_precision_score
from sklearn.metrics import roc_curve, roc_auc_score

pr_auc = average_precision_score(df_dwarf_prob.low_z_low_mass, df_dwarf_prob.dwarf_prob)
roc_auc = roc_auc_score(df_dwarf_prob.low_z_low_mass, df_dwarf_prob.dwarf_prob)
print("pr_auc: ", pr_auc)
print("roc_auc: ", roc_auc)
print("prior probability: ", df_dwarf_prob.low_z_low_mass.mean())

threshold_probs = expit(np.linspace(-9, 6, num=100))

completeness = np.zeros_like(threshold_probs)
purity = np.zeros_like(threshold_probs)
true_positive_rate = np.zeros_like(threshold_probs)
false_positive_rate = np.zeros_like(threshold_probs)
sample_size_reduction = np.zeros_like(threshold_probs)
objects_per_sq_deg = np.zeros_like(threshold_probs)

true_dwarf = df_dwarf_prob.low_z_low_mass
true_non_dwarf = ~df_dwarf_prob.low_z_low_mass

for i, threshold_prob in enumerate(threshold_probs):
    target_prediction = (df_dwarf_prob.dwarf_prob > threshold_prob)
    prediction_dwarf = target_prediction
    prediction_non_dwarf = ~target_prediction

    completeness[i] = (true_dwarf & prediction_dwarf).sum() / true_dwarf.sum()
    purity[i] = (true_dwarf & prediction_dwarf).sum() / prediction_dwarf.sum()
    sample_size_reduction[i] = prediction_dwarf.size / prediction_dwarf.sum()
    objects_per_sq_deg[i] = prediction_dwarf.sum() / COSMOS_field_area

    true_positives = np.sum(true_dwarf & prediction_dwarf)
    false_positives = np.sum(true_non_dwarf & prediction_dwarf)
    true_negatives = np.sum(true_non_dwarf & prediction_non_dwarf)
    false_negatives = np.sum(true_dwarf & prediction_non_dwarf)

    true_positive_rate[i] = true_positives / true_dwarf.sum()
    false_positive_rate[i] = false_positives / true_non_dwarf.sum()

i_threshold = np.argmin(objects_per_sq_deg > 1e3)
print("objects_per_sq_deg: ", objects_per_sq_deg[i_threshold])
print("threshold prob: ", threshold_probs[i_threshold])
print("sample_size_reduction: ", sample_size_reduction[i_threshold])
print("completeness: ", completeness[i_threshold])
print("purity: ", purity[i_threshold])

color_RF = "g"

# ROC curve
plt.plot([1, *false_positive_rate, 0], [1, *true_positive_rate, 0],
         label="Random Forest\n(AUC={:.3f})".format(roc_auc),
         color=color_RF,
         )
plt.plot([0, 1], [0, 1],
         linestyle="dashed",
         color="black",
         )
# plt.xlim(left=0, right=1)
# plt.ylim(bottom=0, top=1)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend(loc="best")
filename = pathlib.Path(plots_dir) / "curve_ROC"
plt.tight_layout()
plt.savefig(filename.with_suffix(".pdf"))
plt.savefig(filename.with_suffix(".png"))

# Precision-recall curve
plt.plot(completeness, purity,
         label="Random Forest\n(AUC={:.3f})".format(pr_auc),
         color=color_RF,
         )
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend(loc="best")
filename = pathlib.Path(plots_dir) / "curve_PR"
plt.tight_layout()
plt.savefig(filename.with_suffix(".pdf"))
plt.savefig(filename.with_suffix(".png"))

# Completeness vs. sample density
plt.plot(objects_per_sq_deg, completeness, color=color_RF)
plt.xlabel("Number of objects / deg$^2$")
plt.ylabel("completeness")
plt.axvline(1e3, linestyle="dashed", color="black")
plt.ylim(bottom=0, top=1)
plt.xscale("log")
filename = pathlib.Path(plots_dir) / "curve_completeness"
plt.tight_layout()
plt.savefig(filename.with_suffix(".pdf"))
plt.savefig(filename.with_suffix(".png"))

# Purity vs. sample density
plt.plot(objects_per_sq_deg, purity, color=color_RF)
plt.xlabel("Number of objects / deg$^2$")
plt.ylabel("purity")
plt.axvline(1e3, linestyle="dashed", color="black")
plt.ylim(bottom=0, top=1.1)
plt.xscale("log")
filename = pathlib.Path(plots_dir) / "curve_purity"
plt.tight_layout()
plt.savefig(filename.with_suffix(".pdf"))
plt.savefig(filename.with_suffix(".png"))
dwarfz/catalog_only_classifier/just plotting RF.ipynb
egentry/dwarf_photo-z
mit
Exercise 1: Define variables str1 and str2 containing strings "The greatest glory in living lies not in never falling, " and "but in rising every time we fall", respectively. Concatenate both strings to compose a sentence, transform it to capital letters and print the result.
# Write your solution here
# <SOL>
# </SOL>
P1.Python_intro/P1_Starting_with_Python_student.ipynb
ML4DS/ML4all
mit
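For Exercise 1, one possible solution sketch (not the only valid answer) uses `+` for concatenation and the `upper()` string method:

```python
str1 = "The greatest glory in living lies not in never falling, "
str2 = "but in rising every time we fall"
sentence = str1 + str2    # concatenate the two strings
print(sentence.upper())   # upper() transforms the sentence to capital letters
```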
Exercise 2: Define variable odds containing all odd numbers between 1 and 9. Also, define variable prime containing all prime numbers between 1 and 9. Now, define variable odd_or_prime as the concatenation of odds and prime. Some numbers are repeated. Use method remove to suppress elements appearing twice. Print the result.
# Write your solution here
# <SOL>
# </SOL>
P1.Python_intro/P1_Starting_with_Python_student.ipynb
ML4DS/ML4all
mit
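For Exercise 2, a possible sketch: `+` concatenates the two lists, and `remove()` deletes the first occurrence of a value, so calling it once per repeated number leaves a single copy of each:

```python
odds = [1, 3, 5, 7, 9]
prime = [2, 3, 5, 7]
odd_or_prime = odds + prime   # [1, 3, 5, 7, 9, 2, 3, 5, 7]
for n in [3, 5, 7]:           # these numbers appear twice
    odd_or_prime.remove(n)    # remove() drops only the first occurrence
print(odd_or_prime)           # [1, 9, 2, 3, 5, 7]
```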
Exercise 3: After the exam review, your teacher has updated your mark in Statistics to 7.0. Also, the scores of "communication theory" have just been published, and you've got 10.0. How would you modify your dictionary with the new marks?
# Write your solution here
# <SOL>
# </SOL>

print(my_marks)
P1.Python_intro/P1_Starting_with_Python_student.ipynb
ML4DS/ML4all
mit
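For Exercise 3, a possible sketch. The initial contents of `my_marks` below are hypothetical (the real dictionary was built in an earlier cell of the notebook); the point is that assigning to an existing key updates it, while assigning to a new key adds it:

```python
# Hypothetical starting dictionary, standing in for the one built earlier:
my_marks = {'Statistics': 6.0, 'Calculus': 8.5}
my_marks['Statistics'] = 7.0              # update the existing key
my_marks['Communication theory'] = 10.0   # add a new subject
print(my_marks)
```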
Exercise 4: Create a list powers2 of all powers of 2 between $2^0$ (2**0) and $2^{16}$ (2**16). Loop over the list to create a new list y with all powers of 2 that are greater than 100 and lower than 1000.
# Write your solution here
# <SOL>
# </SOL>

print(y)
P1.Python_intro/P1_Starting_with_Python_student.ipynb
ML4DS/ML4all
mit
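For Exercise 4, a possible sketch using a list comprehension to build `powers2` and a plain loop to filter it, as the statement asks:

```python
powers2 = [2**n for n in range(17)]  # 2**0 up to and including 2**16
y = []
for p in powers2:
    if 100 < p < 1000:
        y.append(p)
print(y)  # [128, 256, 512]
```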
Exercise 5: Now try it yourself! Write a function called my_factorial that computes the factorial of a given number. If you're feeling brave today, you can try writing it recursively, since Python obviously supports recursion. Remember that the factorial of 0 is 1 and the factorial of a negative number is undefined.
# Write your solution here
# <SOL>
# </SOL>

# Let's try it out!
from math import factorial
x = int(input('Please enter an integer: '))
if my_factorial(x) == factorial(x):
    print(x, '! = ', my_factorial(x))
    print('It works!')
else:
    print("Something's not working...")
P1.Python_intro/P1_Starting_with_Python_student.ipynb
ML4DS/ML4all
mit
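For Exercise 5, a possible recursive sketch. It follows the base cases given in the statement: 0! is 1, and a negative argument is undefined (here reported by raising an error):

```python
def my_factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with 0! = 1."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    if n == 0:
        return 1
    return n * my_factorial(n - 1)

# Cross-check against the standard library:
from math import factorial
print(my_factorial(5), factorial(5))  # both give 120
```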