We can draw the object by using the method drawCircle(): | # Call the method drawCircle
RedCircle.drawCircle() | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can increase the radius of the circle by applying the method add_radius(). Let's increase the radius by 2 and then by 5: | # Use the method to change the object attribute radius
print('Radius of object:',RedCircle.radius)
RedCircle.add_radius(2)
print('Radius of object after applying the method add_radius(2):',RedCircle.radius)
RedCircle.add_radius(5)
print('Radius of object after applying the method add_radius(5):',RedCircle.radius) | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
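The Circle class itself was defined earlier in the notebook and is not shown in this excerpt. A minimal sketch consistent with the calls above (the attribute defaults and the drawing code here are assumptions, not the notebook's exact definition):

```python
# Hedged sketch of a Circle class matching the calls in this section.
class Circle(object):

    # Constructor with a default blue color, as the text above suggests
    def __init__(self, radius=3, color='blue'):
        self.radius = radius
        self.color = color

    # Method that increases the radius attribute
    def add_radius(self, r):
        self.radius = self.radius + r
        return self.radius

    # Method that draws the circle (import kept local so the class
    # can be used without matplotlib installed)
    def drawCircle(self):
        import matplotlib.pyplot as plt
        plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))
        plt.axis('scaled')
        plt.show()

RedCircle = Circle(10, 'red')
RedCircle.add_radius(2)
print(RedCircle.radius)  # 12
```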
Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is: | # Create a blue circle with a given radius
BlueCircle = Circle(radius=100) | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
As before we can access the attributes of the instance of the class by using the dot notation: | # Print the object attribute radius
BlueCircle.radius
# Print the object attribute color
BlueCircle.color | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can draw the object by using the method drawCircle(): | # Call the method drawCircle
BlueCircle.drawCircle() | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
Compare the x and y axes of this figure to those of the RedCircle figure; they are different. The Rectangle Class Let's create a class Rectangle with the attributes height, width and color. We will only add the method to draw the rectangle object: | # Create a new Rectangle class for creating a rectangle object
class Rectangle(object):
# Constructor
def __init__(self, width=2, height=3, color='r'):
self.height = height
self.width = width
self.color = color
# Method
def drawRectangle(self):
plt.gca().add_p... | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2, its height will be 10, and the color will be blue: | # Create a new object rectangle
SkinnyBlueRectangle = Rectangle(2, 10, 'blue') | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
As before we can access the attributes of the instance of the class by using the dot notation: | # Print the object attribute height
SkinnyBlueRectangle.height
# Print the object attribute width
SkinnyBlueRectangle.width
# Print the object attribute color
SkinnyBlueRectangle.color | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can draw the object: | # Use the drawRectangle method to draw the shape
SkinnyBlueRectangle.drawRectangle() | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
Let’s create the object FatYellowRectangle of type Rectangle : | # Create a new object rectangle
FatYellowRectangle = Rectangle(20, 5, 'yellow') | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can access the attributes of the instance of the class by using the dot notation: | # Print the object attribute height
FatYellowRectangle.height
# Print the object attribute width
FatYellowRectangle.width
# Print the object attribute color
FatYellowRectangle.color | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
We can draw the object: | # Use the drawRectangle method to draw the shape
FatYellowRectangle.drawRectangle() | _____no_output_____ | MIT | Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb | amitkrishna/IBM-DataScience |
Expenses - Payment Authorizations of the Government of the State of Paraíba, from January 2021 to June 2021 | # Install packages
!pip install pandas
!pip install PyMySQL
!pip install SQLAlchemy
import pandas as pd
# Load the CSVs into pandas data frames
df1 = pd.read_csv('../data/pagamento_exercicio_2021_mes_1.csv', encoding='ISO-8859-1',sep=';')
df2 = pd.read_csv('../data/pagamento_exercicio_2021_mes_2.csv', encoding='ISO... | _____no_output_____ | MIT | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados |
Performing analyses and transformations | # Display the columns
df.columns
# Display the number of rows and columns
df.shape
# Display the column types
df.dtypes
# Convert the column (DATA_PAGAMENTO) to datetime
# Convert the columns (EXERCICIO, CODIGO_UNIDADE_GESTORA, NUMERO_EMPENHO, NUMERO_AUTORIZACAO_PAGAMENTO) to object
df["DATA_PAGAMENTO"] = pd.to_datetime(d... | _____no_output_____ | MIT | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados |
Charts for exploratory analysis and/or decision-making | import matplotlib.pyplot as plt
plt.style.use("seaborn")
# Chart with the total paid to creditors per month (January to June)
df.groupby(df['MES_PAGAMENTO'])['VALOR_PAGAMENTO'].sum().plot.bar(title = 'Total Pago por Mês', color = 'blue')
plt.xlabel('MÊS')
plt.ylabel('RECEITA');
# Chart with the maximum amount paid to a cred... | _____no_output_____ | MIT | notebooks/01-exploracao-dados.ipynb | andersonnrc/projeto-bootcamp-carrefour-analise-dados |
**Load the libraries:** | import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD, Adadelta, Adam, RMSprop, Adagrad, Nadam, Adamax
SEED = 2... | Using TensorFlow backend.
| MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Import the dataset and extract the target variable:** | data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv',
sep = ';')
y = data['quality']
X = data.drop(['quality'], axis = 1) | _____no_output_____ | MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Split the dataset for training, validation and testing:** | X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size = 0.2,
random_state = SEED)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,
... | _____no_output_____ | MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Define a function that creates the model:** | def create_model(opt):
model = Sequential()
model.add(Dense(100, input_dim = X_train.shape[1],
activation = 'relu'))
model.add(Dense(50, activation = 'relu'))
model.add(Dense(25, activation = 'relu'))
model.add(Dense(10, activation = 'relu'))
model.add(Dense(1, activation = 'linear'))
re... | _____no_output_____ | MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Create a function that defines callbacks we will be using during training:** | def create_callbacks(opt):
callbacks = [
EarlyStopping(monitor = 'val_acc', patience = 200,
verbose = 2),
ModelCheckpoint('optimizers_best_' + opt + '.h5',
monitor = 'val_acc',
save_best_only = True,
verbose = 0)
]
r... | _____no_output_____ | MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Create a dict of the optimizers we want to try:** | opts = dict({
'sgd': SGD(),
'sgd-0001': SGD(lr = 0.0001, decay = 0.00001),
'adam': Adam(),
'adadelta': Adadelta(),
'rmsprop': RMSprop(),
'rmsprop-0001': RMSprop(lr = 0.0001),
'nadam': Nadam(),
'adamax': Adamax()
}) | WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
| MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Train our networks and store results:** | batch_size = 128
n_epochs = 1000
results = []
# Loop through the optimizers
for opt in opts:
model = create_model(opt)
callbacks = create_callbacks(opt)
model.compile(loss = 'mse',
optimizer = opts[opt],
metrics = ['accuracy'])
hist = model.fit(X_train.values, y_train,
... | Epoch 00201: early stopping
Epoch 00414: early stopping
Epoch 00625: early stopping
Epoch 00373: early stopping
Epoch 00413: early stopping
Epoch 00230: early stopping
Epoch 00269: early stopping
Epoch 00424: early stopping
| MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
**Compare the results:** | res = pd.DataFrame(results)
res.columns = ['optimizer', 'epochs', 'val_accuracy', 'test_accuracy']
res | _____no_output_____ | MIT | Chapter 2/8_Experimenting with different optimizers.ipynb | Anacoder1/Python_DeepLearning_Cookbook |
Jupyter (IPython) Advanced Features--- Outline- Keyboard shortcuts- Magic- Accessing the underlying operating system- Using different languages inside single notebook- File magic- Using Jupyter more efficiently- Profiling- Output- Automation- Extensions- 'Big Data' Analysis Sources: [IPython Tutorial](https://githu... | %magic | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
List available python magics | %lsmagic | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
%env. You can manage environment variables of your notebook without restarting the jupyter server process. Some libraries (like theano) use environment variables to control behavior, and %env is the most convenient way. | # %env - without arguments, lists environment variables
%env OMP_NUM_THREADS=4 | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Accessing the underlying operating system--- Executing shell commands. You can call any shell command. This is particularly useful for managing your virtual environment. | !pip install numpy
!pip list | grep Theano | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Adding packages can also be done using `%conda install numpy` or `%pip install numpy`, which will attempt to install packages in the current environment. | !pwd
%pwd
pwd
files = !ls .
print("files in notebooks directory:")
print(files)
!echo $files
!echo {files[0].upper()} | 2-1-JUPYTER-ECOSYSTEM.IPYNB
| MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Note that all this is available even in multiline blocks: | import os
for i,f in enumerate(files):
if f.endswith('ipynb'):
!echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
else:
print('--') | 00 - 2-1-Jupyter-ecosystem
--
02 - 2-10-jupyter-code-script-of-scripts
03 - 2-11-Advanced-jupyter
04 - 2-2-jupyter-get-in-and-out
05 - 2-3-jupyter-notebook-basics
06 - 2-4-jupyter-markdown
07 - 2-5-jupyter-code-python
08 - 2-6-jupyter-code-r
09 - 2-7-jupyter-command-line
10 - 2-8-jupyter-magics
--
12 - 2-Jupyter-help
1... | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
I could get the same list with a bash command, because magics and bash calls return python variables: | names = !ls ../images/ml_demonstrations/*.png
names[:5] | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Suppress output of last line. Sometimes output isn't needed, so we can either use the `pass` instruction on a new line or a semicolon at the end. %conda install matplotlib | %matplotlib inline
from matplotlib import pyplot as plt
import numpy
# if you don't put semicolon at the end, you'll have output of function printed
plt.hist(numpy.linspace(0, 1, 1000)**1.5); | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Using different languages inside single notebook---If you miss those languages, using other computational kernels:- %%python2- %%python3- %%ruby- %%perl- %%bash- %%R is possible, but obviously you'll need to set up the corresponding kernel first. | # %%ruby
# puts 'Hi, this is ruby.'
%%bash
echo 'Hi, this is bash.' | Hi, this is bash.
| MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Running R code in Jupyter notebook. Installing the R kernel. Easy option: installing the R kernel using Anaconda. If you used Anaconda to set up your environment, getting R working is extremely easy. Just run the below in your terminal: | # %conda install -c r r-essentials
Running R and Python in the same notebook. The best solution to this is to install rpy2 (it requires a working version of R as well), which can be easily done with pip: | %pip install rpy2 | Collecting rpy2
Downloading rpy2-3.3.6.tar.gz (179 kB)
[K |████████████████████████████████| 179 kB 465 kB/s eta 0:00:01
[31m ERROR: Command errored out with exit status 1:
command: /Users/squiresrb/opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/d7... | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
You can then use the two languages together, and even pass variables in between: | %load_ext rpy2.ipython
%R require(ggplot2)
import pandas as pd
df = pd.DataFrame({
'Letter': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],
'X': [4, 3, 5, 2, 1, 7, 7, 5, 9],
'Y': [0, 4, 3, 6, 7, 10, 11, 9, 13],
'Z': [1, 2, 3, 1, 2, 3, 1, 2, 3]
})
%%R -i df
ggplot(data = df) + geom_po... | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Writing functions in cython (or fortran). Sometimes the speed of numpy is not enough and I need to write some fast code. In principle, you can compile a function into a dynamic library and write python wrappers... But it is much better when this boring part is done for you, right? You can write functions in cython or fortra... | %pip install cython
%load_ext Cython
%%cython
def multiply_by_2(float x):
    return 2.0 * x
multiply_by_2(23.) | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
I should also mention that there are different JIT systems which can speed up your python code. More examples in [my notebook](http://arogozhnikov.github.io/2015/09/08/SpeedBenchmarks.html). For more information see the IPython help at: [Cython](https://github.com/ipython/ipython-in-depth/blob/pycon-2019/6%20-%20Cro... | %%writefile?
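As a concrete illustration of such JIT systems, numba can compile a plain Python function on first call. Numba is assumed here and is not used elsewhere in this notebook; the sketch below falls back to the undecorated function when numba is not installed:

```python
# Hedged sketch: accelerate a numeric loop with numba's JIT, if available.
try:
    from numba import njit
except ImportError:
    # Fallback so the example still runs without numba installed
    def njit(func):
        return func

@njit
def sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(10))  # 285
```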
`%pycat` will output in the pop-up window:```Show a syntax-highlighted file through a pager.This magic is similar to the cat utility, but it will assume the file to be Python source and will show it with syntax highlighting.This magic command can either take a local filename, an url, an history range (see %history) or a m... | # %load https://matplotlib.org/_downloads/f7171577b84787f4b4d987b663486a94/anatomy.py
%run to execute python code. %run can execute python code from .py files — this is a well-documented behavior. But it can also execute other jupyter notebooks! Sometimes it is quite useful. NB. %run is not the same as importing a python module. | # this will execute all the code cells from different notebooks
%run ./matplotlib-anatomy.ipynb | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Using Jupyter more efficiently--- Store Magic - %store: lazy passing data between notebooks. %store lets you store a variable and use it across all of your Jupyter notebooks. | data = 'this is the string I want to pass to different notebook'
%store data
del data # deleted variable
# in second notebook I will use:
%store -r data
print(data) | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
%who: analyze variables of global scope | %whos
# pring names of string variables
%who str | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Multiple cursors. Jupyter recently added support for multiple cursors (in a single cell), just like Sublime or IntelliJ! __Alt + mouse selection__ for multiline selection and __Ctrl + mouse clicks__ for multicursors.Gif taken from http://swanintelligence.com/multi-cursor-in-jupyter.html Timing When you need to measure ti... | # measure small code snippets with timeit !
import numpy
%timeit numpy.random.normal(size=100)
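Outside IPython, the standard-library `timeit` module offers the same measurement; a minimal sketch (timings will vary by machine, so none are shown):

```python
import timeit

# Time a small snippet a fixed number of times; timeit.timeit returns
# the total elapsed time in seconds as a float
elapsed = timeit.timeit("sum(range(100))", number=10000)
print(f"10000 runs took {elapsed:.4f} s")
```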
%%writefile pythoncode.py
import numpy
def append_if_not_exists(arr, x):
if x not in arr:
arr.append(x)
def some_useless_slow_function():
arr = list()
for i in range(10000):
x = numpy.ran... | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Hiding code or output - Click on the blue vertical bar or line to the left to collapse code or output Commenting and uncommenting a block of codeYou might want to add new lines of code and comment out the old lines while you’re working. This is great if you’re improving the performance of your code or trying to debug... | from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all" | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Profiling: %prun, %lprun, %mprun--- See a much longer explanation of profiling and timing in Jake Vanderplas' Python Data Science Handbook: https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html | # shows how much time the program spent in each function
%prun some_useless_slow_function() | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Example of output:```26338 function calls in 0.713 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 10000 0.684 0.000 0.685 0.000 pythoncode.py:3(append_if_not_exists) 10000 0.014 0.000 0.014 0.000 {method 'randint' of 'mtrand.Rando... | # %load_ext memory_profiler ???
# To profile memory, you can install memory_profiler and run %mprun
# %pip install memory_profiler
# %pip install line_profiler
# tracking memory consumption (show in the pop-up)
# %mprun -f append_if_not_exists some_useless_slow_function() | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Example of output:```Line Mem usage Increment Line Contents================================================ 3 20.6 MiB 0.0 MiB def append_if_not_exists(arr, x): 4 20.6 MiB 0.0 MiB if x not in arr: 5 20.6 MiB 0.0 MiB arr.append(x)``` **%lprun** is line pr... | #%%debug filename:line_number_for_breakpoint
# Here some code that fails. This will activate interactive context for debugging | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
A bit easier option is `%pdb`, which activates debugger when exception is raised: | # %pdb
# def pick_and_take():
# picked = numpy.random.randint(0, 1000)
# raise NotImplementedError()
# pick_and_take() | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
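Back to profiling for a moment: outside IPython, the per-function breakdown that `%prun` prints can be reproduced with the standard-library `cProfile` and `pstats` modules. A minimal sketch:

```python
import cProfile
import io
import pstats

def slow_append(n):
    arr = []
    for i in range(n):
        if i not in arr:  # O(n) membership test makes this quadratic
            arr.append(i)
    return len(arr)

# Profile a single call and print the top functions by cumulative time
profiler = cProfile.Profile()
profiler.enable()
result = slow_append(2000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```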
Output--- [RISE](https://github.com/damianavila/RISE): presentations with notebooks. This extension by Damian Avila makes it possible to show notebooks as demonstrations. Example of such a presentation: http://bollwyvl.github.io/live_reveal//7 It is very useful when you teach others, e.g. to use some library. Jupyter output s... | import os
from IPython.display import display, Image
names = [f for f in os.listdir('../images/') if f.endswith('.png')]
for name in names[:5]:
display(Image('../images/' + name, width=300)) | _____no_output_____ | MIT | notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb | burkesquires/jupyter_training_2020 |
Boolean Operators | a = 10
b = 9
c = 8
print (10 > 9)
print (10 == 9)
print (10 < 9)
print (a)
print (a > b)
c = print (a > b)
c
##true
print(bool("Hello"))
print(bool(15))
print(bool(True))
print(bool(1))
##false
print(bool(False))
print(bool(0))
print(bool(None))
print(bool([]))
def myFunction():
return True
print(myFunction())... | True
False
True
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
Python Operators | print(10 + 5)
print(10 - 5)
print(10 * 5)
print(10 / 5)
print(10 % 5)
print(10 // 3)
print(10 ** 2) | 15
5
50
2.0
0
3
100
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
Bitwise Operators | a = 60  # 0011 1100
b = 13  # 0000 1101
print (a^b)
print (~a)
print (a<<2)
print (a>>2) #0000 1111 | 49
-61
240
15
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
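Two more operators of the same family, AND (`&`) and OR (`|`), are not shown above; a small sketch:

```python
a = 60  # 0011 1100
b = 13  # 0000 1101

# Bitwise AND keeps bits set in both operands; OR keeps bits set in either
print(a & b)                 # 12  (0000 1100)
print(a | b)                 # 61  (0011 1101)
print(format(a & b, '08b'))  # 00001100
```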
Assignment Operator | x = 2
x += 3 #Same As x = x+3
print(x)
x | 5
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
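The other arithmetic operators have the same shorthand forms, for example:

```python
x = 10
x -= 2   # x = x - 2   -> 8
x *= 3   # x = x * 3   -> 24
x //= 4  # x = x // 4  -> 6
x **= 2  # x = x ** 2  -> 36
print(x)  # 36
```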
Logical Operators | a = 5
b = 6
print(a>b and a==a)
print(a<b or b==a) | False
True
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
Identity Operator | print(a is b)
print(a is not b) | False
True
| Apache-2.0 | Operations_and_Expressions_in_Python.ipynb | michaelll22/CPEN-21A-ECE-2-2 |
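Note that `is` checks object identity while `==` checks value equality; a small sketch of the difference (lists are used to avoid Python's small-object caching):

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True  -> same value
print(a is b)  # False -> two distinct list objects
print(a is c)  # True  -> both names point to the same object
```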
I estimate 10s per game. 100 starting positions, 100 secondary starting positions, then 10000 openings. 4 threads, and symmetries that produce x4 data. If I want 12 points per opening, then that would be: | estimated_seconds = 10000 * 12 * 10/ (4 * 4)
estimated_hours = estimated_seconds / 3600
print(estimated_hours) | 20.833333333333332
| MIT | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence |
The plan is as follows: - Create a book (or load previously saved) - for each starting action for player 1 (100) and each starting action for player 2 (100) run 3 experiments (DETERMINISTIC BOOK FILLING). - Run epsilon-greedy algorithm to make a STOCHASTIC BOOK FILLING (using the opening book up to its depth [1-epsilo... | book = b.load_latest_book(depth=4)
type(book)
sum(abs(value) for value in book.values())
#book # book -> {(state, action): counts}
agent_names = ('CustomPlayer1', 'CustomPlayer2')
agent1 = isolation.Agent(custom.CustomPlayer, agent_names[0])
agent2 = isolation.Agent(custom.CustomPlayer, agent_names[1])
agents = (agent... | _____no_output_____ | MIT | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence |
Let's generate the corresponding matches | # Constant parameters
time_limit = 150
depth = 4
full_search_depth = 2
matches_per_opening = 3
# Create the agents that will play
agent_names = ('CustomPlayer1', 'CustomPlayer2')
agent1 = isolation.Agent(custom.CustomPlayer, agent_names[0])
agent2 = isolation.Agent(custom.CustomPlayer, agent_names[1])
agents = (agent... | _____no_output_____ | MIT | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence |
Let's add the symmetry conditions to the game processing | s_a = list(book.keys())[0]
s_a
W, H = 11, 9
def h_symmetry(loc):
if loc is None:
return None
row = loc // (W + 2)
center = W + (row - 1) * (W + 2) + (W + 2) // 2 + 1 if row != 0 else W // 2
return 2 * center - loc
h_symmetry(28)
h_symmetry(1)
center = (H // 2) * (W + 2) + W // 2
center
def c_sy... | _____no_output_____ | MIT | Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb | mtasende/artificial-intelligence |
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simul... | from __future__ import annotations
import cftime
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
%matplotlib inline
plt.style.use("seaborn")
plt.rcParams["figure.figsize"] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range("2000-01-01",... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the ... | QM_mo = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time.month", kind="+"
)
scen = QM_mo.adjust(sim, extrapolation="constant", interp="linear")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=sli... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object. | QM_mo.ds
QM_mo.ds.af.plot() | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimen... | group = sdba.Grouper("time.dayofyear", window=31)
QM_doy = sdba.Scaling.train(ref, hist, group=group, kind="+")
scen = QM_doy.adjust(sim)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.da... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly. A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating and training the a... | vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6
vals_sim = (
(1 + 0.1 * np.random.random_sample((t.size,)))
* (4 ** np.where(vals < 9.5, vals / 100, vals))
/ 3e6
)
pr_ref = xr.DataArray(
vals_ref, coords={"time": t}, dims=("time",), a... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
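The train - adjust scheme laid out above can be sketched, outside xclim's actual API, as a toy additive scaling class; all names below are illustrative, not xclim's:

```python
# Illustrative train/adjust pattern: additive mean correction.
# This is NOT xclim's implementation, only a sketch of the scheme:
# train() learns a correction from ref and hist, adjust() applies it to sim.
class ToyScaling:
    def __init__(self, af):
        self.af = af  # adjustment factor learned during training

    @classmethod
    def train(cls, ref, hist):
        def mean(xs):
            return sum(xs) / len(xs)
        # Learn a single additive factor from the mean bias
        return cls(af=mean(ref) - mean(hist))

    def adjust(self, sim):
        return [x + self.af for x in sim]

# hist is biased 2 degrees low relative to ref
scen = ToyScaling.train([10, 12, 14], [8, 10, 12]).adjust([9, 11, 13])
print(scen)  # [11.0, 13.0, 15.0]
```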
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10... | # 2nd try with adapt_freq
sim_ad, pth, dP0 = sdba.processing.adapt_freq(
pr_ref, pr_sim, thresh="0.05 mm d-1", group="time"
)
QM_ad = sdba.EmpiricalQuantileMapping.train(
pr_ref, sim_ad, nquantiles=15, kind="*", group="time"
)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Refere... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Second example: tas and detrending. The next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, i... | doy_win31 = sdba.Grouper("time.dayofyear", window=15)
Sca = sdba.Scaling.train(ref, hist, group=doy_win31, kind="+")
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group="time.dayofyear", kind="+")
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n, _ = sdba.p... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Third example: Multi-method protocol - Hnilica et al. 2017. In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple: use principal components to define coordinates on the reference a... | # We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exagerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=1... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Fourth example: Multivariate bias-adjustment with multiple steps - Cannon 2018. This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on some univariate algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexp... | from xclim.core.units import convert_units_to
from xclim.testing import open_dataset
dref = open_dataset(
"sdba/ahccd_1950-2013.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
).sel(time=slice("1981", "2010"))
dref = dref.assign(
tasmax=convert_units_to(dref.tasmax, "K"),
pr=convert_units_to(dre... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Perform an initial univariate adjustment. | # additive for tasmax
QDMtx = sdba.QuantileDeltaMapping.train(
dref.tasmax, dhist.tasmax, nquantiles=20, kind="+", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_tx = QDMtx.adjust(dhist.tasmax)
scens_tx = QDMtx.adjust(dsim.tasmax)
# remove == 0 values in pr:
dref["pr"] = sd... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Stack the variables to multivariate arrays and standardize them. The standardization process ensures the mean and standard deviation of each column (variable) is 0 and 1 respectively.`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we bui... | # Stack the variables (tasmax and pr)
ref = sdba.processing.stack_variables(dref)
scenh = sdba.processing.stack_variables(scenh)
scens = sdba.processing.stack_variables(scens)
# Standardize
ref, _, _ = sdba.processing.standardize(ref)
allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), "time"))... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
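What standardization does numerically can be shown in plain Python; the real `sdba.processing.standardize` operates on xarray objects and works per variable, so this toy version is only for intuition:

```python
import statistics

def standardize(values):
    """Shift a sequence to mean 0 and scale it to standard deviation 1,
    returning the standardized values plus the mean and std so the
    transform can be undone later (as the text above describes)."""
    avg = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - avg) / std for v in values], avg, std

data = [2.0, 4.0, 6.0, 8.0]
z, avg, std = standardize(data)
print(avg, std)  # mean 5.0, std = sqrt(5)
```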
Perform the N-dimensional probability density function transform. The NpdfTransform will iteratively randomly rotate our arrays in the "variables" space and apply the univariate adjustment before rotating it back. In Cannon (2018) and Pitié et al. (2005), it can be seen that the source array's joint distribution converg... | from xclim import set_options
# See the advanced notebook for details on how this option works
with set_options(sdba_extra_output=True):
out = sdba.adjustment.NpdfTransform.adjust(
ref,
hist,
sim,
base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.
base_k... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
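The random rotations at the heart of the N-pdf transform are orthogonal matrices. One standard way to draw them — a sketch only; xclim has its own helper for this — is the QR decomposition of a Gaussian matrix:

```python
import numpy as np

def random_rotation(n, seed=None):
    # QR of a Gaussian matrix gives an orthogonal Q; fixing the signs of
    # R's diagonal makes the draw uniform over rotations.
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))

R = random_rotation(2, seed=0)
# R @ R.T is the identity, so rotating and rotating back is lossless
```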
Restoring the trend. The NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series. `sdba.processing.reordering` : 'ref... | scenh = sdba.processing.reordering(hist, scenh, group="time")
scens = sdba.processing.reordering(sim, scens, group="time")
scenh = sdba.processing.unstack_variables(scenh)
scens = sdba.processing.unstack_variables(scens) | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
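Rank-based reordering can be sketched with NumPy: one series' values are rearranged so their ranks line up with a second series. This illustrates the idea behind `sdba.processing.reordering`, not its actual code:

```python
import numpy as np

def reorder(ranker, values):
    # Place the sorted values at the positions given by the ranker's ranks,
    # so `out` carries the values of `values` but the rank structure of `ranker`.
    out = np.empty_like(values)
    out[np.argsort(ranker)] = np.sort(values)
    return out

reorder(np.array([0.2, 0.9, 0.5]), np.array([10.0, 30.0, 20.0]))
# smallest value lands where ranker is smallest, and so on
```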
There we are! Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call. | from dask import compute
from dask.diagnostics import ProgressBar
tasks = [
scenh.isel(location=2).to_netcdf("mbcn_scen_hist_loc2.nc", compute=False),
scens.isel(location=2).to_netcdf("mbcn_scen_sim_loc2.nc", compute=False),
extra.escores.isel(location=2)
.to_dataset()
.to_netcdf("mbcn_escores_loc2... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged. | scenh = xr.open_dataset("mbcn_scen_hist_loc2.nc")
fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label="Reference")
scenh.tasmax.plot(ax=ax, label="Adjusted", alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label="Simulated")
ax.legend()
escores = xr.open_dataarray("mbcn_escores_loc2.nc")
di... | _____no_output_____ | Apache-2.0 | docs/notebooks/sdba.ipynb | Ouranosinc/dcvar |
Working with PDBsum in Jupyter & Demonstration of PDBsum protein interface data to dataframe script. Usually you'll want to get some data from PDBsum and analyze it. For the current example in this series of notebooks, I'll cover how to bring in a file of protein-protein interactions and then progress through using tha... | !curl -L -o data.txt --data "pdb=6ah3&chain1=B&chain2=G" http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl | % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7063 0 7037 100 26 9033 33 --:--:-- --:--:-- --:--:-- 9055
| MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
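The same POST request can be made from Python with only the standard library. A sketch mirroring the curl call above; the fetch itself is commented out so nothing hits the network unintentionally:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

params = {"pdb": "6ah3", "chain1": "B", "chain2": "G"}
req = Request(
    "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl",
    data=urlencode(params).encode(),   # supplying data makes it a POST
)
# Uncomment to actually fetch (requires network access):
# with urlopen(req) as resp, open("data.txt", "wb") as fh:
#     fh.write(resp.read())
```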
To prove that the data file has been retrieved, we'll show the first 16 lines of it by running the next cell: | !head -16 data.txt | <PRE>
List of atom-atom interactions across protein-protein interface
---------------------------------------------------------------
<P>
PDB code: 6ah3 Chains B }{ G
------------------------------
<P>
Hydrogen bonds
--------------
<----- A T O M 1 -----> ... | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
Later in this series of notebooks, I'll demonstrate how to make this step even easier with just the PDB entry id and the chains you are interested in, and later how to loop over this process to get multiple data files for interactions from different structures. Making a Pandas dataframe from the interactions file. To c... | !curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/pdbsum_prot_interactions_list_to_df.py
Dload Upload Total Spent Left Speed
100 23915 100 23915 0 0 35272 0 --:--:-- --:--:-- --:--:-- 35220
| MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
We have the script now. And we already have a data file for it to process. To process the data file, run the next command where we use Python to run the script and direct it at the results file, `data.txt`, we made just a few cells ago. | %run pdbsum_prot_interactions_list_to_df.py data.txt | Provided interactions data read and converted to a dataframe...
A dataframe of the data has been saved as a file
in a manner where other Python programs can access it (pickled form).
RESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl' | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
As of writing this, the script we are using outputs a file that is a binary, compact form of the dataframe. (That means it is tiny and not human readable. It is called 'pickled'. Saving in that form may seem odd, but as illustrated [here](#Output-to-more-universal,-table-like-formats) below, this is a very malleable f... | import pandas as pd
df = pd.read_pickle("prot_int_pickled_df.pkl") | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
When that last cell ran, you won't notice any output, but something happened. We can look at that dataframe by calling it in a cell. | df | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
You'll notice that if the list of data is large, the Jupyter environment displays just the head and tail to keep it manageable. There are ways you can have Jupyter display it all, which we won't go into here. Instead we'll start to show some methods of dataframes that make them convenient. For example, you c... | df.head()
Now what types of interactions are observed for this pair of interacting protein chains? To help answer that, we can group the results by the type column. | grouped = df.groupby('type')
for type, grouped_df in grouped:
print(type)
display(grouped_df) | Hydrogen bonds
| MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
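A quicker way to just tally the interaction types is `value_counts()`. A sketch using a toy stand-in for the interactions dataframe (the real one comes from the pickled file above):

```python
import pandas as pd

# Toy stand-in for the PDBsum interactions dataframe
toy = pd.DataFrame({"type": ["Hydrogen bonds", "Non-bonded contacts",
                             "Non-bonded contacts", "Salt bridges"]})
toy["type"].value_counts()
```

On the real dataframe, `df["type"].value_counts()` gives the same per-type counts the groupby loop shows, without displaying every row.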
Same data as earlier, but we can clearly see we have Hydrogen bonds, Non-bonded contacts (a.k.a. van der Waals contacts), and salt bridges, and we immediately get a sense of which types of interactions are more abundant. You may want to get a sense of what else you can do by examining the first two notebooks that come up w... | #Save / write a TSV-formatted (tab-separated values/ tab-delimited) file
df.to_csv('pdbsum_data.tsv', sep='\t',index = False) #add `,header=False` to leave off header, too | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
Because `df.to_csv()` defaults to dealing with csv, you can simply use `df.to_csv('example.csv',index = False)` for comma-delimited (comma-separated) files. You can see that worked by looking at the first few lines with the next command. (Feel free to make the number higher or delete the number altogether. I restricte... | !head -5 pdbsum_data.tsv | Atom1 no. Atom1 name Atom1 Res name Atom1 Res no. Atom1 Chain Atom2 no. Atom2 name Atom2 Res name Atom2 Res no. Atom2 Chain Distance type
9937 NZ LYS 326 B 20598 O LYS 122 G 2.47 Hydrogen bonds
9591 O CYS 280 B 19928 CG1 ILE 29 G 3.77 Non-bonded contacts
9591 O CYS 280 B 19930 CD1 ILE 29 G 3.42 Non-bonded contacts
... | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
If you need to go back from a tab-separated table to a dataframe, you can run something like the following cell. | reverted_df = pd.read_csv('pdbsum_data.tsv', sep='\t')
reverted_df.to_pickle('reverted_df.pkl') # OPTIONAL: pickle that data too | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
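The TSV round trip can also be verified entirely in memory with an `io.StringIO` buffer, no files needed — a small sketch with a toy frame:

```python
import io
import pandas as pd

toy = pd.DataFrame({"Atom1 no.": [9937, 9591], "Distance": [2.47, 3.77]})
buf = io.StringIO()
toy.to_csv(buf, sep="\t", index=False)   # write the TSV into the buffer
buf.seek(0)
back = pd.read_csv(buf, sep="\t")        # and read it straight back
# back.equals(toy) is True: nothing was lost in the round trip
```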
For a comma-delimited (CSV) file you'd use `df = pd.read_csv('example.csv')` because the `pd.read_csv()` method defaults to comma as the separator (`sep` parameter). You can verify the read from the text-based table by viewing it with the next line. | reverted_df.head()
**Generating an Excel spreadsheet from a dataframe.** Because this is a specialized need, it relies on a module that isn't installed by default, so it needs to be installed before generating the Excel file. Running the next cell will do both. | %pip install openpyxl
# save to excel (KEEPS multiINDEX, and makes sparse to look good in Excel straight out of Python)
df.to_excel('pdbsum_data.xlsx') # after openpyxl installed | Requirement already satisfied: openpyxl in /srv/conda/envs/notebook/lib/python3.7/site-packages (3.0.6)
Requirement already satisfied: et-xmlfile in /srv/conda/envs/notebook/lib/python3.7/site-packages (from openpyxl) (1.0.1)
Requirement already satisfied: jdcal in /srv/conda/envs/notebook/lib/python3.7/site-packages (... | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
You'll need to download the file first to your computer and then view it locally as there is no viewer in the Jupyter environment. Additionally, it is possible to add styles to dataframes, and styles such as cell shading and text coloring will carry over to the generated Excel document as well. Excel files can be... | # read Excel
df_from_excel = pd.read_excel('pdbsum_data.xlsx',engine='openpyxl') # see https://stackoverflow.com/a/65266270/8508004 which notes xlrd no longer supports xlsx | Collecting xlrd
Downloading xlrd-2.0.1-py2.py3-none-any.whl (96 kB)
[K |████████████████████████████████| 96 kB 2.8 MB/s eta 0:00:011
[?25hInstalling collected packages: xlrd
Successfully installed xlrd-2.0.1
Note: you may need to restart the kernel to use updated packages.
| MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
That can be viewed to convince yourself it worked by running the next command. | df_from_excel.head() | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
Next, we'll cover how to bring the dataframe we just made into the notebook without dealing with a file intermediate.---- Making a Pandas dataframe from the interactions file directly in Jupyter. First we'll check for the script we'll use and get it if we don't already have it. (The thinking is once you know what you are... | # Get a file if not yet retrieved / check if file exists
import os
file_needed = "pdbsum_prot_interactions_list_to_df.py"
if not os.path.isfile(file_needed):
!curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/{file_needed} | _____no_output_____ | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
This is going to rely on approaches very similar to those illustrated [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/PatMatch%20with%20more%20Python.ipynb#Passing-results-data-into-active-memory-without-a-file-intermediate) and [here](https://github.com/fomigh... | from pdbsum_prot_interactions_list_to_df import pdbsum_prot_interactions_list_to_df
We can demonstrate that it worked by calling the function. | pdbsum_prot_interactions_list_to_df()
If the module was not imported, you'd see `ModuleNotFoundError: No module named 'pdbsum_prot_interactions_list_to_df'`, but instead you should see it saying it is missing `data_file` to act on because you passed it nothing. After importing the main function of that script into this running notebook, you are ready to dem... | direct_df = pdbsum_prot_interactions_list_to_df("data.txt")
direct_df.head() | Provided interactions data read and converted to a dataframe...
A dataframe of the data has been saved as a file
in a manner where other Python programs can access it (pickled form).
RESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl'
Returning a dataframe with the information as well. | MIT | notebooks/Working with PDBsum in Jupyter Basics.ipynb | fomightez/pdbsum-binder |
survived, pclass, sibsp, parch, fare | X = df[['pclass', 'sibsp', 'parch', 'fare']]
Y = df[['survived']]
X.shape, Y.shape
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
from sklearn.linear_model import LogisticRegression
logR = LogisticRe... | _____no_output_____ | Apache-2.0 | titanic_classfication.ipynb | jhee-yun/test_machinelearning1 |
Working with SeqFish data | import stlearn as st | _____no_output_____ | BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
The data is downloaded from https://www.spatialomics.org/SpatialDB/download.php| Technique | PMID | Title | Expression | SV genes|| ----------- | ----------- | ----------- | ----------- | ----------- ||seqFISH|30911168|Transcriptome-scale super-resolved imaging in tissues by RNA seqFISH+ seqfish_30911168.tar.gz|seqfish... | data = st.ReadSeqFish(count_matrix_file="../Downloads/seqfish_30911168/cortex_svz_counts.matrix",
spatial_file="../Downloads/seqfish_30911168/cortex_svz_cellcentroids.csv",
field=5) | D:\Anaconda3\envs\test2\lib\site-packages\anndata-0.7.3-py3.8.egg\anndata\_core\anndata.py:119: ImplicitModificationWarning: Transforming to str index.
warnings.warn("Transforming to str index.", ImplicitModificationWarning)
| BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
Quality checking for the data | st.pl.QC_plot(data) | _____no_output_____ | BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
Plot gene Nr4a1 | st.pl.gene_plot(data,genes="Nr4a1") | _____no_output_____ | BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
Running preprocessing for SeqFish data | st.pp.filter_genes(data,min_cells=3)
st.pp.normalize_total(data)
st.pp.log1p(data)
st.pp.scale(data) | Normalization step is finished in adata.X
Log transformation step is finished in adata.X
Scale step is finished in adata.X
| BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
Running PCA to reduce the dimensions to 50 | st.em.run_pca(data,n_comps=50,random_state=0) | PCA is done! Generated in adata.obsm['X_pca'], adata.uns['pca'] and adata.varm['PCs']
| BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |
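Under the hood, PCA on a centered matrix reduces to a singular value decomposition. A minimal NumPy sketch of the idea (not stlearn's actual code path; the function name is illustrative):

```python
import numpy as np

def pca_scores(x, n_comps):
    # Center, take the SVD, and project onto the leading right singular
    # vectors to get the scores in the first n_comps components.
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_comps].T

scores = pca_scores(np.random.default_rng(0).normal(size=(100, 10)), n_comps=5)
# scores has shape (100, 5); components come out ordered by decreasing variance
```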
Perform Louvain clustering | st.pp.neighbors(data,n_neighbors=25)
st.tl.clustering.louvain(data)
st.pl.cluster_plot(data,use_label="louvain",spot_size=10) | _____no_output_____ | BSD-3-Clause | docs/tutorials/Read_seqfish.ipynb | duypham2108/dev_st |