In many cases it is useful to have both a high quality movie and a lower resolution gif of the same animation. If that is desired, just deactivate the `remove_movie` option and give a filename with `.gif`. xmovie will first render a high quality movie and then convert it to a gif, without removing the movie afterwards....
mov.save('movie_combo.gif', remove_movie=False, progress=True)
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Modify the framerate of the output with the keyword arguments `framerate` (for movies) and `gif_framerate` (for gifs).
mov.save('movie_fast.gif', remove_movie=False, progress=True, framerate=20, gif_framerate=20) mov.save('movie_slow.gif', remove_movie=False, progress=True, framerate=5, gif_framerate=5)
![movie_fast.gif](movie_fast.gif) ![movie_slow.gif](movie_slow.gif) ![](movie_combo.gif) Frame dimension selection. By default, the movie passes through the `'time'` dimension of the DataArray, but this can be easily changed with the `framedim` argument:
mov = Movie(ds.air, framedim='lon') mov.save('lon_movie.gif')
Movie created at lon_movie.mp4 GIF created at lon_movie.gif
![](lon_movie.gif) Modifying plots Rotating globe (preset)
from xmovie.presets import rotating_globe mov = Movie(ds.air, plotfunc=rotating_globe) mov.save('movie_rotating.gif', progress=True)
![movie_rotating.gif](movie_rotating.gif)
mov = Movie(ds.air, plotfunc=rotating_globe, style='dark') mov.save('movie_rotating_dark.gif', progress=True)
![](movie_rotating_dark.gif) Specifying the xarray plot method to be used. Change the plotting method with the `plotmethod` parameter.
mov = Movie(ds.air, rotating_globe, plotmethod='contour') mov.save('movie_cont.gif') mov = Movie(ds.air, rotating_globe, plotmethod='contourf') mov.save('movie_contf.gif')
Movie created at movie_cont.mp4 GIF created at movie_cont.gif Movie created at movie_contf.mp4 GIF created at movie_contf.gif
![](movie_cont.gif)![](movie_contf.gif) Changing preset settings
import numpy as np ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair # 36 times in total # Interpolate time for smoother animation ds['time'].values[:] = np.arange(len(ds['time'])) ds = ds.interp(time=np.linspace(0, 10, 60)) # `Movie` accepts keywords for the xarray plotting interface and provides a set...
![](movie_rasm.gif) User-provided Besides the presets, xmovie is designed to animate any custom plot which can be wrapped in a function acting on a matplotlib figure. This can contain xarray plotting commands, 'pure' matplotlib or a combination of both. This can come in handy when you want to animate a complex static ...
ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair fig = plt.figure(figsize=[10,5]) tt = 30 station = dict(x=100, y=150) ds_station = ds.sel(**station) (ax1, ax2) = fig.subplots(ncols=2) ds.isel(time=tt).plot(ax=ax1) ax1.plot(station['x'], station['y'], marker='*', color='k' ,markersize=15) ax1.text(stati...
All you need to do is wrap your plotting calls in a function `func(ds, fig, tt)`, where `ds` is the xarray dataset you pass to `Movie`, `fig` is a matplotlib figure handle, and `tt` is the movie frame index.
def custom_plotfunc(ds, fig, tt, *args, **kwargs): # Define station location for timeseries station = dict(x=100, y=150) ds_station = ds.sel(**station) (ax1, ax2) = fig.subplots(ncols=2) # Map axis # Colorlimits need to be fixed or your video is going to cause seizures. # This is the o...
Math symbol practice. Let's practice inserting math symbols. $\theta = 1$ $1 \le 5 $ $\sum_{i=1}^{n} i^2 $ $$\sum_{i=1}^{n} \frac{1}{i} $$
1+1
MIT
math_symbol_prac.ipynb
Sumi-Lee/testrepository
Summarize titers and sequences by date. Create a single histogram on the same scale for the number of titer measurements and the number of genomic sequences per year, to show the relative contribution of each data source.
import Bio import Bio.SeqIO import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd %matplotlib inline # Configure matplotlib theme. fontsize = 14 matplotlib_params = { 'axes.labelsize': fontsize, 'font.size': fontsize, 'legend.fontsize': 12, 'xtick.labelsize': fontsize,...
MIT
analyses/2018-11-07-summarize-titers-and-sequences-by-date.ipynb
blab/flu-forecasting
Load sequences
ls ../../seasonal-flu/data/*.fasta # Open FASTA of HA sequences for H3N2. sequences = Bio.SeqIO.parse("../../seasonal-flu/data/h3n2_ha.fasta", "fasta") # Get strain names from sequences. distinct_strains_with_sequences = pd.Series([sequence.name.split("|")[0].replace("-egg", "") ...
Load titers
# Read titers into a data frame. titers = pd.read_table( "../../seasonal-flu/data/cdc_h3n2_egg_hi_titers.tsv", header=None, index_col=False, names=["test", "reference", "serum", "source", "titer", "assay"] ) titers.head() titers["test_year"] = titers["test"].apply(lambda strain: int(strain.replace("-egg...
Plot sequence and titer strains by year
sequence_years.min() sequence_years.max() [sequence_years, titer_years] sequence fig, ax = plt.subplots(1, 1) bins = np.arange(1968, 2019) ax.hist([sequence_years, titer_years], bins, histtype="bar", label=["HA sequence", "HI titer"]) legend = ax.legend( loc="upper left", ncol=1, frameon=False, handlel...
HSMfile examples The [hsmfile module](https://github.com/hadfieldnz/hsmfile) is modelled on my IDL mgh_san routines and provides user-customisable access to remote (slow-access) and local (fast-access) files.This notebook exercises various aspects of the hsmfile module.Change history:MGH 2019-08-15 - afile is now ...
import os import hsmfile
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
The following cell should be executed whenever the hsmfile module code has been changed.
from importlib import reload reload(hsmfile);
Print the volumes supported by the hsmfile module on this platform
print(hsmfile.volume.keys())
Specify the files for which we will search (Cook Strait Narrows 1 km run). Normally
vol = '/nesi/nobackup/niwa00020/hadfield' sub = 'work/cook/roms/sim34/run' pattern = 'bran-2009-2012-nzlam-1.20-detide/roms_avg_????.nc'
The hsmfile.path function returns a pathlib Path object. Here we construct the path names for the base directory on the remote, or master, volume (mirror = False) and the local, or mirror, volume (mirror = True)
hsmfile.path(sub=sub,vol=vol,mirror=False) if 'mirror' in hsmfile.volume[vol]: print(repr(hsmfile.path(sub=sub,vol=vol,mirror=True))) else: print('Volume has no mirror')
The hsmfile.search function uses the Path's glob function to create a generator object and from that generates and returns a sorted list of Path objects relative to the base.
match = hsmfile.search(pattern,sub=sub,vol=vol); match
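Under assumed behavior (hsmfile itself is not needed to see the pattern), the sorted-glob-relative-to-base idea can be sketched with the standard library's pathlib alone:

```python
import tempfile
from pathlib import Path

def search(pattern, base):
    """Return a sorted list of paths matching pattern, relative to base.

    A minimal sketch of hsmfile.search's sorted-glob idea; the real
    function also handles remote/mirror volumes, which this does not.
    """
    base = Path(base)
    return sorted(p.relative_to(base) for p in base.glob(pattern))

# Demonstrate on a throwaway directory with two files (names invented).
base = tempfile.mkdtemp()
for name in ["roms_avg_0002.nc", "roms_avg_0001.nc"]:
    Path(base, name).touch()

match = search("roms_avg_????.nc", base)
print(match)  # sorted, relative to base
```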
The hsmfile.file function constructs and returns a list of path objects representing actual files. It checks for existence and copies from master to mirror as necessary.
file = [hsmfile.file(m,sub=sub,vol=vol) for m in match]; file
Parameters

| Parameter | Description |
| --------- | ----------- |
| `pattern` | The regular expression to match ...
import re

# re.sub(pattern, repl, string, count=0, flags=0)
# pattern: the regex pattern string.
# repl: the replacement string; may also be a function.
# string: the original string to search and replace in.
# count: maximum number of replacements; the default 0 replaces all matches.
phone = "123-456-789 # this is a phone number"
print(re.sub(r'#.*$', "", phone)) ...
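As the comment notes, `repl` may also be a function; a small illustration (the example strings are made up):

```python
import re

# When repl is a function, it is called with each match object and
# returns the replacement text.
def double(match):
    return str(int(match.group(0)) * 2)

print(re.sub(r'\d+', double, "3 apples and 7 pears"))  # 6 apples and 14 pears
```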
MIT
_note_/内置包/re_正则处理.ipynb
By2048/_python_
Other
```
re.RegexObject: re.compile() returns a RegexObject.
re.MatchObject: group() returns the string matched by the RE.
```
import re

dytt_title = r".*\[(.*)\].*"
name_0 = r"罗拉快跑BD国德双语中字[电影天堂www.dy2018.com].mkv"
name_1 = r"[电影天堂www.dy2018.com]罗拉快跑BD国德双语中字.mkv"
print(1, re.findall(dytt_title, name_0))
print(2, re.findall(dytt_title, name_1))

data = "xxxxxxxxxxxentry某某内容for-----------"
result = re.findall(".*entry(.*)for.*", data)
print(3, result)
Data visualization
df.price_value.hist(bins=100); df.price_value.max() df.price_value.describe() df.groupby(['param_marka-pojazdu'])['price_value'].mean() ( df .groupby(['param_marka-pojazdu'])['price_value'] .agg(np.mean) .sort_values(ascending=False) .head(50) ).plot(kind='bar', figsize=(20,5)) ( df .groupby(['param_marka-poja...
MIT
day2_visualization.ipynb
wudzitsu/dw_matrix_car
1- Class Activation Map with convolutions. In this first part, we will code class activation maps as described in the paper [Learning Deep Features for Discriminative Localization](http://cnnlocalization.csail.mit.edu/). There is a GitHub repo associated with the paper: https://github.com/zhoubolei/CAM. And even a demo in PyTo...
import io import requests from PIL import Image import torch import torch.nn as nn from torchvision import models, transforms from torch.nn import functional as F import torch.optim as optim import numpy as np import cv2 import pdb from matplotlib.pyplot import imshow # input image LABELS_URL = 'https://s3.amazonaws....
Apache-2.0
HW2/HW2_CAM_Adversarial.ipynb
Hmkhalla/notebooks
As in the demo, we will use the Resnet18 architecture. In order to get CAM, we need to transform this network in a fully convolutional network: at all layers, we need to deal with images, i.e. with a shape $\text{Number of channels} \times W\times H$ . In particular, we are interested in the last images as shown here:!...
net = models.resnet18(pretrained=True) net.eval() x = torch.randn(5, 3, 224, 224) y = net(x) y.shape n_mean = [0.485, 0.456, 0.406] n_std = [0.229, 0.224, 0.225] normalize = transforms.Normalize( mean=n_mean, std=n_std ) preprocess = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(),...
2- Adversarial examples In this second part, we will look at [adversarial examples](https://arxiv.org/abs/1607.02533): "An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modific...
# Image under attack! url_car = 'https://cdn130.picsart.com/263132982003202.jpg?type=webp&to=min&r=640' response = requests.get(url_car) img_pil = Image.open(io.BytesIO(response.content)) imshow(img_pil); # same as above preprocess = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), nor...
3- Transforming a car into a cat. We now implement the *Iterative Target Class Method (ITCM)* as defined by equation (4) in [Adversarial Attacks and Defences Competition](https://arxiv.org/abs/1804.00097). To test it, we will transform the car (labeled minivan by our `resnet18`) into a [Tabby cat](https://en.wikipedia.org...
x = preprocess(img_pil).clone() xd = preprocess(img_pil).clone() xd.requires_grad = True idx = 281 #tabby optimizer = optim.SGD([xd], lr=0.01) for i in range(200): #TODO: your code here optimizer.zero_grad() loss.backward() optimizer.step() print(loss.item()) _ = print_preds(output) ...
Introduction to Deep Learning with PyTorch. In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors ...
# First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal...
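As a quick sanity check of the sigmoid definition above, sketched with the standard library rather than torch:

```python
import math

def sigmoid(x):
    # Same formula as activation() above: 1 / (1 + e^-x)
    return 1 / (1 + math.exp(-x))

print(sigmoid(0))    # 0.5: the sigmoid is centered at zero
print(sigmoid(10))   # close to 1 for large positive inputs
```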
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distrib...
## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights*features) + bias)
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the...
## Calculate the output of this network using matrix multiplication torch.matmul(features,weights.reshape(5,1))+bias
Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input un...
### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 ...
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
## Your solution here activation(torch.matmul(activation(torch.matmul(features,W1) + B1),W2) + B2)
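The same two-layer forward pass can be written out in plain Python to make the shapes explicit. The weights and biases below are made-up numbers, not the torch-generated ones, so the output differs from `tensor([[0.3171]])`:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # weights[j][i] is the weight from input i to unit j.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

features = [1.0, -1.0, 0.5]                 # 3 input features (invented)
W1 = [[0.1, 0.2, 0.3], [-0.1, 0.0, 0.4]]    # 2 hidden units
B1 = [0.0, 0.1]
W2 = [[0.5, -0.5]]                          # 1 output unit
B2 = [0.2]

hidden = layer(features, W1, B1)
output = layer(hidden, W2, B2)
print(output)
```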
If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weight and bias parameters. As you'll see later when we discuss training a neural network, the more hidden units a network h...
import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy()
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
# Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a
Passive Membrane Tutorial This is a tutorial which is designed to allow users to explore the passive responses of neuron membrane potentials and how it changes under various conditions such as current injection, ion concentration (both inside and outside the cell), change in membrane capacitance and passive conductanc...
dt = 1e-4 #Integration time step. Reduce if you encounter NaN errors. t_sim = 0.5 #Total time plotted. Increase as desired. Na_in = 13 #Sodium ion concentration inside the cell. Default = 13 (in mM) Na_out = 120 #Sodium ion concentration outside the cell. Default = 120 (in mM) K_in = 140 #Potassium i...
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Nernst Potential Equations:
import math as ma Ena = -0.058*ma.log10(Na_in/Na_out); Ek = -0.058*ma.log10(K_in/K_out);
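With the default concentrations from the setup cell (Na_in = 13 mM, Na_out = 120 mM), the sodium Nernst potential from the formula above works out to roughly +56 mV:

```python
import math

Na_in, Na_out = 13, 120                    # mM, defaults from the setup cell
Ena = -0.058 * math.log10(Na_in / Na_out)  # same formula as above
print(round(Ena * 1000, 1))                # approximately 56.0 mV
```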
If you wish to use pre-determined ENa and EK values, set them here and convert this cell from Markdown into code: Ena = ??; Ek = ??;
import numpy as np niter = int(t_sim//dt) #Total number of integration steps (constant). #Output variables: Vm = np.zeros(niter) Ie = np.zeros(niter) #Starting values: You can change the initial conditions of each simulation here: Vm[0] = -0.070;
Current Injection
I_inj = -5e-8   #Current amplitude. Default = -50 nA.
t_start = 0.150 #Start time of current injection.
t_end = 0.350   #End time of current injection.
Ie[int(t_start//dt):int(t_end//dt)] = I_inj
Calculation - do the actual computation here:
#Integration steps - do not change: for i in np.arange(niter-1): Vm[i+1] = Vm[i] + dt/Cm*(Ie[i] - gNa*(Vm[i] - Ena) - gK*(Vm[i] - Ek));
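Stripped of the notebook's setup, the forward-Euler update above drives Vm toward the conductance-weighted average of the reversal potentials. A self-contained sketch with assumed values for Cm, gNa, and gK (illustrative only, not the notebook's defaults):

```python
# Assumed parameters for illustration only.
Cm  = 1e-10               # membrane capacitance (F)
gNa = 1e-9                # sodium conductance (S)
gK  = 1e-8                # potassium conductance (S)
Ena, Ek = 0.056, -0.090   # reversal potentials (V)
dt, niter = 1e-5, 20000   # dt well below the time constant Cm/(gNa+gK)

Vm = -0.070
for _ in range(niter):
    # Same update rule as the loop above, with Ie = 0.
    Vm += dt / Cm * (-gNa * (Vm - Ena) - gK * (Vm - Ek))

# Analytic steady state: conductance-weighted average of Ena and Ek.
V_rest = (gNa * Ena + gK * Ek) / (gNa + gK)
print(Vm, V_rest)
```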
Plot results
import matplotlib.pyplot as plt %matplotlib notebook plt.figure() t = np.arange(niter)*dt; plt.plot(t,Vm); plt.xlabel('Time in s') plt.ylabel('Membrane Voltage in V')
Circuit visualize. This document visualizes the quantum circuits provided in scikit-qulacs. scikit-qulacs currently provides the following quantum circuits: - create_qcl_ansatz(n_qubit: int, c_depth: int, time_step: float, seed=None): [arXiv:1803.00745](https://arxiv.org/abs/1803.00745)- create_farhi_neven_ansatz(n_qubit: int, c_depth: int, seed: Optional[int] = None): [arXiv...
from skqulacs.circuit.pre_defined import create_qcl_ansatz from qulacsvis import circuit_drawer n_qubit = 4 c_depth = 2 time_step = 1. ansatz = create_qcl_ansatz(n_qubit, c_depth, time_step) circuit_drawer(ansatz._circuit,"latex")
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
farhi_neven_ansatz: create_farhi_neven_ansatz(n_qubit: int, c_depth: int, seed: Optional[int] = None), [arXiv:1802.06002](https://arxiv.org/abs/1802.06002)
from skqulacs.circuit.pre_defined import create_farhi_neven_ansatz n_qubit = 4 c_depth = 2 ansatz = create_farhi_neven_ansatz(n_qubit, c_depth) circuit_drawer(ansatz._circuit,"latex")
farhi_neven_watle_ansatz: a variant of farhi_neven_ansatz improved by @WATLE. create_farhi_neven_watle_ansatz(n_qubit: int, c_depth: int, seed: Optional[int] = None)
from skqulacs.circuit.pre_defined import create_farhi_neven_watle_ansatz n_qubit = 4 c_depth = 2 ansatz = create_farhi_neven_watle_ansatz(n_qubit, c_depth) circuit_drawer(ansatz._circuit,"latex")
ibm_embedding_circuit: create_ibm_embedding_circuit(n_qubit: int), [arXiv:1802.06002](https://arxiv.org/abs/1802.06002)
from skqulacs.circuit.pre_defined import create_ibm_embedding_circuit n_qubit = 4 circuit = create_ibm_embedding_circuit(n_qubit) circuit_drawer(circuit._circuit,"latex")
shirai_ansatz: create_shirai_ansatz(n_qubit: int, c_depth: int = 5, seed: int = 0), [arXiv:2111.02951](https://arxiv.org/abs/2111.02951)
from skqulacs.circuit.pre_defined import create_shirai_ansatz n_qubit = 4 c_depth = 2 ansatz = create_shirai_ansatz(n_qubit, c_depth) circuit_drawer(ansatz._circuit,"latex")
npqc_ansatz: create_npqc_ansatz(n_qubit: int, c_depth: int, c: float = 0.1), [arXiv:2108.01039](https://arxiv.org/abs/2108.01039)
from skqulacs.circuit.pre_defined import create_npqc_ansatz n_qubit = 4 c_depth = 2 ansatz = create_npqc_ansatz(n_qubit, c_depth) circuit_drawer(ansatz._circuit,"latex")
yzcx_ansatz: create_yzcx_ansatz(n_qubit: int, c_depth: int = 4, c: float = 0.1, seed: int = 9), [arXiv:2108.01039](https://arxiv.org/abs/2108.01039)
from skqulacs.circuit.pre_defined import create_yzcx_ansatz n_qubit = 4 c_depth = 2 ansatz = create_yzcx_ansatz(n_qubit, c_depth) circuit_drawer(ansatz._circuit,"latex")
qcnn_ansatz: create_qcnn_ansatz(n_qubit: int, seed: int = 0). Creates the circuit used in https://www.tensorflow.org/quantum/tutorials/qcnn?hl=en, Section 1.
from skqulacs.circuit.pre_defined import create_qcnn_ansatz n_qubit = 8 ansatz = create_qcnn_ansatz(n_qubit) circuit_drawer(ansatz._circuit,"latex")
**Downloading data from Google Drive**
!pip install -U -q PyDrive import os from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials import zipfile from google.colab import drive # 1. Authenticate and create the PyDrive client. auth.authenticate_user() gauth = ...
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Define working directories
working_dir = os.path.join(local_download_path, "extracted") # defining working folders and labels train_images_folder = os.path.join(working_dir, "train_images") train_labels_file = os.path.join(working_dir, "train.csv") test_images_folder = os.path.join(working_dir, "test_images") test_labels_file = os.path.join(wo...
**Data preprocessing** Drop duplicates
train_labels.drop_duplicates("ImageId", keep="last", inplace=True)
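On a toy frame (made-up rows, not the competition data), `keep="last"` retains only the final row per `ImageId`:

```python
import pandas as pd

df = pd.DataFrame({
    "ImageId": ["a.jpg", "a.jpg", "b.jpg"],
    "ClassId": [1, 3, 2],
})
df.drop_duplicates("ImageId", keep="last", inplace=True)
print(df)  # keeps the ClassId 3 row for a.jpg and the single b.jpg row
```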
Add all non-defective images to the train dataframe, setting `None` as the value of the `EncodedPixels` column
images = os.listdir(train_images_folder) present_rows = train_labels.ImageId.tolist() for img in images: if img not in present_rows: train_labels = train_labels.append({"ImageId" : img, "ClassId" : 1, "EncodedPixels" : None}, ignore_index=True)
Recode the `EncodedPixels` column, setting 1 if the image is defective and 0 otherwise
for index, row in train_labels.iterrows(): train_labels.at[index, "EncodedPixels"] = int(train_labels.at[index, "EncodedPixels"] is not None)
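The same recoding can be done without `iterrows`, which is usually faster on twelve thousand rows; a sketch on a toy frame (column values made up):

```python
import pandas as pd

train_labels = pd.DataFrame({"EncodedPixels": ["1 3 10 5", None, "7 2"]})
# notnull() marks defective rows; astype(int) turns True/False into 1/0.
train_labels["EncodedPixels"] = train_labels["EncodedPixels"].notnull().astype(int)
print(train_labels["EncodedPixels"].tolist())  # [1, 0, 1]
```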
In total we got 12,568 training samples
train_labels
Create data flow using ImageDataGenerator, see example here: https://medium.com/@vijayabhaskar96/tutorial-on-keras-flow-from-dataframe-1fd4493d237c
from keras_preprocessing.image import ImageDataGenerator def create_datagen(): return ImageDataGenerator( fill_mode='constant', cval=0., rotation_range=10, height_shift_range=0.1, width_shift_range=0.1, vertical_flip=True, rescale=1./255, zoom_range=0...
Found 10683 validated image filenames. Found 1885 validated image filenames. Found 5506 validated image filenames.
**Building and fitting the model**
from keras.applications import InceptionResNetV2 from keras.models import Model from keras.layers.core import Dense from keras.layers.pooling import GlobalAveragePooling2D from keras import optimizers model = InceptionResNetV2(weights='imagenet', input_shape=(256,512,3), include_top=False) #model.load_weights('/kaggle...
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3657: The name tf.log is deprecated. Ple...
Fitting the data
STEP_SIZE_TRAIN=train_gen.n//train_gen.batch_size STEP_SIZE_VALID=val_gen.n//val_gen.batch_size STEP_SIZE_TEST=test_gen.n//test_gen.batch_size model_binary.fit_generator(generator=train_gen, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=val_gen, validation...
Epoch 1/15 333/333 [==============================] - 637s 2s/step - loss: 0.5724 - acc: 0.7208 - val_loss: 1.1674 - val_acc: 0.3987 Epoch 2/15 333/333 [==============================] - 632s 2s/step - loss: 0.3274 - acc: 0.8580 - val_loss: 0.6656 - val_acc: 0.7275 Epoch 3/15 333/333 [==============================] - ...
Predicting test labels
test_gen.reset() pred=model_binary.predict_generator(test_gen, steps=STEP_SIZE_TEST, verbose=1)
5506/5506 [==============================] - 211s 38ms/step
**Saving results** Create a dataframe with the probability of having a defect for each image
ids = np.array(test_labels.ImageId) pred = np.array([p[0] for p in pred]) probabilities_df = pd.DataFrame({'ImageId': ids, 'Probability': pred}, columns=['ImageId', 'Probability']) probabilities_df from google.colab import files df.to_csv('filename.csv') files.download('filename.csv') drive.mount('/content/gdrive') ...
For Loop
week = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday","Friday", "Saturday"] for x in week: print (x)
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The Break Statement
for x in week: print (x) if x == "Thursday": break for x in week: if x == "Thursday": break print (x)
Sunday Monday Tuesday Wednesday
Looping through string
for x in "Programmming with python": print (x)
P r o g r a m m m i n g w i t h p y t h o n
The range function
for x in range(10): print (x)
0 1 2 3 4 5 6 7 8 9
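The `range` function can also take start, stop, and step arguments, so the same kind of loop can count from 2 to 10 by twos:

```python
for x in range(2, 11, 2):
    print(x)
# prints 2 4 6 8 10, one per line
```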
Nested Loops
adjective = ["red", "big", "tasty"] fruits = ["apple","banana", "cherry"] for x in adjective: for y in fruits: print (x, y)
red apple red banana red cherry big apple big banana big cherry tasty apple tasty banana tasty cherry
While loop
i = 10 while i > 6: print(i) i -= 1 #Assignment operator for subtraction i = 1 - i
10 9 8 7
The break statement
i = 10 while i > 6: print (i) if i == 8: break i-=1
10 9 8
The continue statement
i = 10 while i>6: i = i - 1 if i == 8: continue print (i)
9 7 6
The else statement
i = 10 while i>6: i = i - 1 print (i) else: print ("i is no longer greater than 6")
9 8 7 6 i is no longer greater than 6
Application 1
#WHILE LOOP x = 0 while x <= 10: print ("Value", x) x+=1 #FOR LOOPS value = ["Value 1", "Value 2", "Value 3", "Value 4", "Value 5","Value 6", "Value 7", "Value 8", "Value 9", "Value 10"] for x in value: print (x)
Value 1 Value 2 Value 3 Value 4 Value 5 Value 6 Value 7 Value 8 Value 9 Value 10
Application 2
i = 20 while i>4: i -= 1 print (i) else: print ('i is no longer greater than 3')
19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 i is no longer greater than 3
PLOT FOR FOLLOWER
# THE FOLLOWERS'S VALUE AND NAME plt.plot(markFollower[:3], [1, 1, 0]) plt.suptitle("FOLLOWER - NANO") plt.show() plt.plot(markFollower[1:5], [0, 1, 1,0]) plt.suptitle("FOLLOWER - MICRO") plt.show() plt.plot(markFollower[3:], [0, 1, 1]) plt.suptitle("FOLLOWER - MEDIUM") plt.show() plt.plot(markFollower[:3], [1, 1,...
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
PLOT FOR LINGUISTIC
# THE LINGUISTIC'S VALUE AND NAME markEngagement = [0, 0.6, 1.7, 4.7, 6.9, 8, 10] plt.plot(markEngagement[:3], [1, 1, 0]) plt.suptitle("ENGAGEMENT - NANO") plt.show() plt.plot(markEngagement[1:4], [0, 1, 0]) plt.suptitle("ENGAGEMENT - MICRO") plt.show() plt.plot(markEngagement[2:6], [0, 1, 1, 0]) plt.suptitle("ENGAG...
Fuzzification
# FOLLOWER========================================= # membership function def fuzzyFollower(countFol): follower = [] # STABLE GRAPH if (markFollower[0] <= countFol and countFol < markFollower[1]): scoreFuzzy = 1 follower.append(Datafuzzy(scoreFuzzy, lingFollower[0])) # GRAPH DOWN ...
Inference
def cekDecission(follower, engagement): temp_yes = [] temp_no = [] if (follower.decission == "NANO"): # Get minimal score fuzzy every decision NO or YES temp_yes.append(min(follower.score,engagement[0].score)) # if get 2 data fuzzy Engagement if (len(engagement) > 1): ...
Result
# Result def getResult(resultYes, resultNo): yes = 0 no = 0 if(resultNo): no = max(resultNo) if(resultYes): yes = max(resultYes) return yes, no
Defuzzification
def finalDecission(yes, no): mamdani = (((10 + 20 + 30 + 40 + 50 + 60 + 70) * no) + ((80 + 90 + 100) * yes)) / ((7 * no) + (yes * 3)) return mamdani
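To see the weighted-average defuzzification at work: with equal strengths yes = no = 0.5, the `no` side contributes the centroids 10 through 70 and the `yes` side 80 through 100, giving a final score of 55. A hand-worked check of the function above:

```python
def finalDecission(yes, no):
    # Same weighted average as above: 'no' covers output centroids 10..70,
    # 'yes' covers output centroids 80..100.
    return (((10 + 20 + 30 + 40 + 50 + 60 + 70) * no)
            + ((80 + 90 + 100) * yes)) / ((7 * no) + (yes * 3))

print(finalDecission(0.5, 0.5))  # (140 + 135) / 5 = 55.0
print(finalDecission(1, 0))      # 270 / 3 = 90.0
```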
Main Function
def mainFunction(followerCount, engagementRate): follower = fuzzyFollower(followerCount) engagement = fuzzyEngagement(engagementRate) resultYes, resultNo = fuzzyRules(follower, engagement) yes, no = getResult(resultYes, resultNo) return finalDecission(yes, no) data = pd.read_csv('influencers.csv') ...
Road Following - Live demo (TensorRT) with collision avoidance. Collision avoidance was added with a ResNet18 TRT model; its threshold between free and blocked acts as the controller - action: just a pause for as long as the object is in front, or for a set time. An increase in speed_gain requires some small increase in steer_gain (once a slider is blue (mouse cl...
import torch device = torch.device('cuda')
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Load the TRT optimized models by executing the cell below
import torch from torch2trt import TRTModule model_trt = TRTModule() model_trt.load_state_dict(torch.load('best_steering_model_xy_trt.pth')) # well trained road following model model_trt_collision = TRTModule() model_trt_collision.load_state_dict(torch.load('best_model_trt.pth')) # anti collision model trained for o...
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Creating the Pre-Processing Function We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera, so we need to do some preprocessing. This involves the following steps: 1. Convert from HWC layout to CHW layout. 2. Normalize using s...
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np

mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()

def preprocess(image):
    image = PIL.Image.fromarray(image)
    image = transfor...
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
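The two preprocessing steps listed above (HWC to CHW, per-channel normalisation with the ImageNet mean/std used in the cell) can be sketched CPU-only in NumPy; the real cell runs on GPU torch tensors in half precision, so this is just an illustration of the maths:

```python
import numpy as np

# ImageNet statistics, same values as the torch tensors above
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess_np(image_hwc):
    """uint8 HxWx3 camera frame -> 1x3xHxW normalised float array."""
    x = image_hwc.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                       # per-channel normalisation
    x = np.transpose(x, (2, 0, 1))             # HWC -> CHW
    return x[np.newaxis, ...]                  # add batch dimension

frame = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess_np(frame).shape)  # (1, 3, 224, 224)
```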
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.Now, let's start and display our camera. You should be pretty familiar with this by now.
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg

camera = Camera()

import IPython

image_widget = ipywidgets.Image()
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
We'll also create our robot instance which we'll need to drive the motors.
from jetbot import Robot

robot = Robot()
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Now, we will define sliders to control the JetBot. > Note: We have initialized the slider values to the best known configuration; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment. 1. Speed Control (speed_gain_slider): To start your JetBot incr...
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.10, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.23, description='stee...
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps: 1. Pre-process the camera image. 2. Execute the neural network. 3. Compute the approximate steering value. 4. Control the motors using proportional/derivative (PD) control.
import time
import os
import math

angle = 0.0
angle_last = 0.0
angle_last_block = 0
count_stops = 0
go_on = 1
stop_time = 20  # number of frames to remain stopped
x = 0.0
y = 0.0
speed_value = speed_gain_slider.value
t1 = 0
road_following = 1
speed_value_block = 0

def execute(change):
    global angle, angle_last, angle_last_block, bloc...
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
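Steps 3 and 4 above (steering value plus PD motor control) follow the usual JetBot road-following pattern; a standalone sketch, with placeholder gain values standing in for the sliders, looks roughly like this:

```python
import math

# Placeholder values standing in for the slider values in the notebook
speed_gain = 0.3
steering_gain = 0.10
steering_dgain = 0.23
steering_bias = 0.0

def steering(x, y, angle_last):
    """x, y: target point from the network; returns (left, right, angle)."""
    angle = math.atan2(x, y)  # approximate steering angle toward the target
    # proportional term plus derivative term on the change in angle
    pid = angle * steering_gain + (angle - angle_last) * steering_dgain
    steer = pid + steering_bias
    left = max(min(speed_gain + steer, 1.0), 0.0)
    right = max(min(speed_gain - steer, 1.0), 0.0)
    return left, right, angle

# Target straight ahead -> both motors run at the base speed.
print(steering(0.0, 1.0, 0.0))  # (0.3, 0.3, 0.0)
```

A target to the right (positive x) makes the left motor run faster than the right, turning the robot toward the target.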
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing. We accomplish that with the observe function. > WARNING: This code will move the robot!! Please make sure your robot has clearance and is on the Lego or track you have collected data on. The road follower ...
camera.observe(execute, names='value')
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Awesome! If your robot is plugged in it should now be generating new commands with each new camera frame. You can now place the JetBot on the Lego or track you have collected data on and see whether it can follow the track. If you want to stop this behavior, you can detach this callback by executing the code below.
import time

camera.unobserve(execute, names='value')
time.sleep(0.1)  # add a small sleep to make sure frames have finished processing
robot.stop()
camera.stop()
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Elasticsearch in Colab. Had to install Elasticsearch in Colab for 'reasons' and this is the way it worked for me. Might be useful for someone else as well. Works with 7.9.2. It could probably also run with 7.14.0, but I didn't have time to debug the issues. If you want, you can try and just run the instance under the 'ela...
# 7.9.1 works with ES 7.9.2
!pip install -Iv elasticsearch==7.9.1

# download ES 7.9.2 and extract
%%bash
wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz
wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz.sha512
...
/usr/local/lib/python3.7/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning)
MIT
elasticsearch_install.ipynb
xSakix/AI_colan_notebooks
Building a Bayesian Network---In this tutorial, we introduce how to build a **Bayesian (belief) network** based on domain knowledge of the problem.If we build the Bayesian network in different ways, the built network can have different graphs and sizes, which can greatly affect the memory requirement and inference eff...
pip install pgmpy
Requirement already satisfied: pgmpy in /Users/yimei/miniforge3/lib/python3.9/site-packages (0.1.17) Requirement already satisfied: torch in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.11.0) Requirement already satisfied: statsmodels in /Users/yimei/miniforge3/lib/python3.9/site-packages (from p...
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
Then, we import the necessary modules for the Bayesian network as follows.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
Now, we build the alarm Bayesian network as follows.1. We define the network structure by specifying the four links.2. We define (estimate) the discrete conditional probability tables, represented as the `TabularCPD` class.
# Define the network structure
alarm_model = BayesianNetwork(
    [
        ("Burglary", "Alarm"),
        ("Earthquake", "Alarm"),
        ("Alarm", "JohnCall"),
        ("Alarm", "MaryCall"),
    ]
)

# Define the probability tables by TabularCPD
cpd_burglary = TabularCPD(
    variable="Burglary", variable_card=2, va...
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can view the nodes of the alarm network.
# Viewing nodes of the model
alarm_model.nodes()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can also view the edges of the alarm network.
# Viewing edges of the model
alarm_model.edges()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can show the probability tables using the `print()` method. > **NOTE**: the `pgmpy` library stores ALL the probabilities (including the last probability). This requires a bit more memory, but saves the time of calculating the last probability via the normalisation rule. Let's print the probability tables for **Alarm** and...
# Print the probability table of the Alarm node
print(cpd_alarm)

# Print the probability table of the MaryCall node
print(cpd_marycall)
+------------+---------------+---------------+---------------+---------------+ | Burglary | Burglary(0) | Burglary(0) | Burglary(1) | Burglary(1) | +------------+---------------+---------------+---------------+---------------+ | Earthquake | Earthquake(0) | Earthquake(1) | Earthquake(0) | Earthquake(1) | +---...
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
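The note above refers to the fact that each column of a conditional probability table is a distribution, so its entries sum to 1 and the last row is, strictly speaking, redundant. A small NumPy sketch (with illustrative values for P(MaryCall | Alarm), not the ones from this notebook) makes the trade-off concrete:

```python
import numpy as np

# Hypothetical CPD for P(MaryCall | Alarm): columns are Alarm=0, Alarm=1;
# rows are MaryCall=0, MaryCall=1 (values illustrative only).
cpd = np.array([[0.99, 0.30],
                [0.01, 0.70]])

# Every column is a probability distribution, so it sums to 1 ...
print(cpd.sum(axis=0))  # [1. 1.]

# ... which means the last row could be recovered by the normalisation
# rule instead of being stored, at the cost of an extra subtraction.
last_row = 1.0 - cpd[:-1].sum(axis=0)
print(last_row)  # matches cpd[-1]
```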
We can find all the **(conditional) independencies** between the nodes in the network.
alarm_model.get_independencies()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can also find the **local (conditional) independencies of a specific node** in the network as follows.
# Checking independencies of a node
alarm_model.local_independencies("JohnCall")
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
1. Introduction
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

from prml.linear import (
    LinearRegression,
    RidgeRegression,
    BayesianRegression
)
from prml.preprocess...
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
1.1. Example: Polynomial Curve Fitting
def create_toy_data(func, sample_size, std):
    x = np.linspace(0, 1, sample_size)
    t = func(x) + np.random.normal(scale=std, size=x.shape)
    return x, t

def func(x):
    return np.sin(2 * np.pi * x)

x_train, y_train = create_toy_data(func, 10, 0.25)
x_test = np.linspace(0, 1, 100)
y_test = func(x_test)

plt.sc...
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
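The following cells use `PolynomialFeature(9)` from the `prml` package. For 1-D input, the design matrix such a transform produces can be sketched in plain NumPy (assuming the `prml` class builds the standard monomial basis, which is not shown here):

```python
import numpy as np

def polynomial_features(x, degree):
    """Columns x**0, x**1, ..., x**degree for a 1-D input vector."""
    return np.vander(x, degree + 1, increasing=True)

x = np.array([0.0, 0.5, 1.0])
print(polynomial_features(x, 2))
# [[1.   0.   0.  ]
#  [1.   0.5  0.25]
#  [1.   1.   1.  ]]
```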
Regularization
feature = PolynomialFeature(9)
X_train = feature.transform(x_train)
X_test = feature.transform(x_test)

model = RidgeRegression(alpha=1e-3)
model.fit(X_train, y_train)
y = model.predict(X_test)

plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plo...
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
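`RidgeRegression` minimises the squared error plus an L2 penalty on the weights; its closed-form solution can be sketched with the regularised normal equations (assuming the `prml` class implements the standard formulation):

```python
import numpy as np

def ridge_fit(X, t, alpha):
    """w = (alpha*I + X^T X)^{-1} X^T t  -- regularised normal equations."""
    d = X.shape[1]
    return np.linalg.solve(alpha * np.eye(d) + X.T @ X, X.T @ t)

# With alpha -> 0 and noise-free targets, ridge recovers the true weights.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
w_true = np.array([0.5, 2.0])
w = ridge_fit(X, X @ w_true, alpha=1e-8)
print(w)  # approximately [0.5, 2.0]
```

Larger `alpha` shrinks the weights toward zero, which is what tames the wild oscillations of the unregularised degree-9 fit.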
1.2.6 Bayesian curve fitting
model = BayesianRegression(alpha=2e-3, beta=2)
model.fit(X_train, y_train)
y, y_err = model.predict(X_test, return_std=True)

plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="mean")
plt.f...
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
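The predictive mean and standard deviation returned by `BayesianRegression` follow the standard Bayesian linear regression equations (PRML eqs. 1.70-1.72). A NumPy sketch, under the assumption that the `prml` class uses the same Gaussian posterior:

```python
import numpy as np

def bayesian_fit_predict(Phi_train, t, Phi_test, alpha, beta):
    """Posterior and predictive distribution for Bayesian linear regression.

    S_N^{-1} = alpha*I + beta * Phi^T Phi
    m_N      = beta * S_N Phi^T t
    var(x)   = 1/beta + phi(x)^T S_N phi(x)
    """
    d = Phi_train.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi_train.T @ Phi_train
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi_train.T @ t
    mean = Phi_test @ m
    # diagonal of Phi_test S Phi_test^T, one variance per test point
    var = 1.0 / beta + np.einsum('ij,jk,ik->i', Phi_test, S, Phi_test)
    return mean, np.sqrt(var)

Phi = np.array([[1.0, 0.0], [1.0, 1.0]])
mean, std = bayesian_fit_predict(Phi, np.array([0.0, 1.0]), Phi, alpha=2e-3, beta=2.0)
print(std)  # always at least the noise floor 1/sqrt(beta)
```

The shaded band in the plot above is exactly this per-point standard deviation: it never drops below the observation noise 1/sqrt(beta) and widens away from the training data.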